
How-To Tutorials - Web Development

Building a To-do List with Ajax

Packt
08 Nov 2013
8 min read
Creating and migrating our to-do list's database

As you know, migrations are very helpful for controlling development steps. We'll use migrations in this article. To create our first migration, type the following command:

```
php artisan migrate:make create_todos_table --table=todos --create
```

When you run this command, Artisan will generate a migration for creating a database table named todos. Now we should edit the migration file to define the necessary table columns. When you open the app/database/migrations/ folder with a file manager, you will see the migration file under it. Let's open and edit the file as follows:

```php
<?php

use Illuminate\Database\Migrations\Migration;
use Illuminate\Database\Schema\Blueprint;

class CreateTodosTable extends Migration {

    /**
     * Run the migrations.
     *
     * @return void
     */
    public function up()
    {
        Schema::create('todos', function(Blueprint $table)
        {
            $table->increments('id');
            $table->string('title', 255);
            $table->enum('status', array('0', '1'))->default('0');
            $table->timestamps();
        });
    }

    /**
     * Reverse the migrations.
     *
     * @return void
     */
    public function down()
    {
        Schema::drop('todos');
    }

}
```

To build a simple to-do list, we need five columns:

- The id column will store the ID numbers of the to-do tasks
- The title column will store a to-do task's title
- The status column will store the status of each task
- The created_at and updated_at columns will store the created and updated dates of tasks

If you write $table->timestamps() in the migration file, Laravel's migration class automatically creates the created_at and updated_at columns. As you know, to apply migrations, we should run the following command:

```
php artisan migrate
```

After the command is run, if you check your database, you will see that our todos table and columns have been created. Now we need to write our model.

Creating a todos model

To create a model, you should open the app/models/ directory with your file manager.
Create a file named Todo.php under the directory and write the following code:

```php
<?php

class Todo extends Eloquent {

    protected $table = 'todos';

}
```

Let's examine the Todo.php file. As you can see, our Todo class extends Eloquent, the ORM (Object Relational Mapper) database class of Laravel. The protected $table = 'todos'; line tells Eloquent our model's table name. If we don't set the table variable, Eloquent uses the plural of the lowercase model name as the table name, so this isn't technically required. Now our application needs a template file, so let's create it.

Creating the template

Laravel uses a template engine called Blade for static and application template files. Laravel loads template files from the app/views/ directory, so we need to create our first template under this directory. Create a file named index.blade.php containing the following code:

```html
<html>
<head>
    <title>To-do List Application</title>
    <link rel="stylesheet" href="assets/css/style.css">
    <!--[if lt IE 9]><script src="//html5shim.googlecode.com/svn/trunk/html5.js"></script><![endif]-->
</head>
<body>
    <div class="container">
        <section id="data_section" class="todo">
            <ul class="todo-controls">
                <li><img src="/assets/img/add.png" width="14px" onClick="show_form('add_task');" /></li>
            </ul>
            <ul id="task_list" class="todo-list">
            @foreach($todos as $todo)
                @if($todo->status)
                    <li id="{{$todo->id}}" class="done">
                        <a href="#" class="toggle"></a>
                        <span id="span_{{$todo->id}}">{{$todo->title}}</span>
                        <a href="#" onClick="delete_task('{{$todo->id}}');" class="icon-delete">Delete</a>
                        <a href="#" onClick="edit_task('{{$todo->id}}','{{$todo->title}}');" class="icon-edit">Edit</a>
                    </li>
                @else
                    <li id="{{$todo->id}}">
                        <a href="#" onClick="task_done('{{$todo->id}}');" class="toggle"></a>
                        <span id="span_{{$todo->id}}">{{$todo->title}}</span>
                        <a href="#" onClick="delete_task('{{$todo->id}}');" class="icon-delete">Delete</a>
                        <a href="#" onClick="edit_task('{{$todo->id}}','{{$todo->title}}');" class="icon-edit">Edit</a>
                    </li>
                @endif
            @endforeach
            </ul>
        </section>
        <section id="form_section">
            <form id="add_task" class="todo" style="display:none">
                <input id="task_title" type="text" name="title" placeholder="Enter a task name" value=""/>
                <button name="submit">Add Task</button>
            </form>
            <form id="edit_task" class="todo" style="display:none">
                <input id="edit_task_id" type="hidden" value="" />
                <input id="edit_task_title" type="text" name="title" value="" />
                <button name="submit">Edit Task</button>
            </form>
        </section>
    </div>
    <script src="http://code.jquery.com/jquery-latest.min.js" type="text/javascript"></script>
    <script src="assets/js/todo.js" type="text/javascript"></script>
</body>
</html>
```

The preceding code may be difficult to understand if you're writing a Blade template for the first time, so let's examine it. You see a foreach loop in the file; this statement loops over our todo records. The if and else statements separate finished and waiting tasks so we can style them differently. We will provide more detail about this when we create our controller later in this article.

We need one more template file for appending new records to the task list on the fly. Create a file named ajaxData.blade.php under the app/views/ folder, containing the following code:

```html
@foreach($todos as $todo)
    <li id="{{$todo->id}}">
        <a href="#" onClick="task_done('{{$todo->id}}');" class="toggle"></a>
        <span id="span_{{$todo->id}}">{{$todo->title}}</span>
        <a href="#" onClick="delete_task('{{$todo->id}}');" class="icon-delete">Delete</a>
        <a href="#" onClick="edit_task('{{$todo->id}}','{{$todo->title}}');" class="icon-edit">Edit</a>
    </li>
@endforeach
```

Also, you see the /assets/ directory in the source path of the static files. When you look at the app/views directory, there is no directory named assets. Laravel separates the system and public files.
Publicly accessible files stay under the public folder in the project root, so you should create a directory under your public folder for asset files. We recommend working with organized folders like these for developing tidy and easy-to-read code. Finally, you see that we are calling jQuery from its main website; we also recommend this way of getting the latest stable jQuery into your application. You can style your application as you wish, hence we'll not examine the styling code here. We are putting our style.css file under /public/assets/css/.

For performing Ajax requests, we need some JavaScript code. This code posts our add_task and edit_task forms, and updates tasks when they are completed. Let's create a JavaScript file named todo.js in /public/assets/js/. The file contains the following code:

```javascript
function task_done(id){
    $.get("/done/"+id, function(data) {
        if(data=="OK"){
            $("#"+id).addClass("done");
        }
    });
}

function delete_task(id){
    $.get("/delete/"+id, function(data) {
        if(data=="OK"){
            var target = $("#"+id);
            target.hide('slow', function(){ target.remove(); });
        }
    });
}

function show_form(form_id){
    $("form").hide();
    $('#'+form_id).show("slow");
}

function edit_task(id,title){
    $("#edit_task_id").val(id);
    $("#edit_task_title").val(title);
    show_form('edit_task');
}

$('#add_task').submit(function(event) {
    /* stop form from submitting normally */
    event.preventDefault();
    var title = $('#task_title').val();
    if(title){
        //ajax post the form
        $.post("/add", {title: title}).done(function(data) {
            $('#add_task').hide("slow");
            $("#task_list").append(data);
        });
    }
    else{
        alert("Please give a title to task");
    }
});

$('#edit_task').submit(function(event) {
    /* stop form from submitting normally */
    event.preventDefault();
    var task_id = $('#edit_task_id').val();
    var title = $('#edit_task_title').val();
    var current_title = $("#span_"+task_id).text();
    var new_title = current_title.replace(current_title, title);
    if(title){
        //ajax post the form
        $.post("/update/"+task_id, {title: title}).done(function(data) {
            $('#edit_task').hide("slow");
            $("#span_"+task_id).text(new_title);
        });
    }
    else{
        alert("Please give a title to task");
    }
});
```

Let's examine the JavaScript file.
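One small detail worth noting in the edit handler: `new_title = current_title.replace(current_title, title)` uses the entire current string as the pattern, so the whole string is matched and swapped, and the result is always just the new title. A standalone snippet (plain JavaScript, runnable outside the browser) makes this visible:

```javascript
// replace() with the whole current string as the pattern swaps
// everything, so the result equals the new title directly.
var current_title = "Write article";
var title = "Write better article";
var new_title = current_title.replace(current_title, title);
console.log(new_title === title); // true
```

In other words, `.text(title)` would behave identically in the handler above.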

Dynamic POM

Packt
06 Nov 2013
9 min read
Case study

Our project meets the following requirements:

- It depends on org.codehaus.jedi:jedi-XXX:3.0.5, where XXX is related to the JDK version, that is, either jdk5 or jdk6
- The project is built and run on three different environments: PRODuction, UAT, and DEVelopment
- The underlying database differs depending on the environment: PostGre in PROD, MySQL in UAT, and HSQLDB in DEV
- The connection is set in a Spring file, which can be spring-PROD.xml, spring-UAT.xml, or spring-DEV.xml, all located in the same src/main/resource folder

The first bullet point can be easily answered using a jdk.version property. The dependency is then declared as follows:

```xml
<dependency>
    <groupId>org.codehaus.jedi</groupId>
    <!-- For this dependency two artifacts are available,
         one for jdk5 and a second for jdk6 -->
    <artifactId>jedi-${jdk.version}</artifactId>
    <version>${jedi.version}</version>
</dependency>
```

The fourth bullet point is resolved by specifying a resource folder:

```xml
<resources>
    <resource>
        <directory>src/main/resource</directory>
        <!-- Include the XML files corresponding to the environment:
             PROD, UAT, DEV. Here, the only XML file is a Spring
             configuration one. There is one file per environment. -->
        <includes>
            <include>**/*-${environment}.xml</include>
        </includes>
    </resource>
</resources>
```

Then, we will have to run Maven, adding the property values using one of the following commands:

```
mvn clean install -Denvironment=PROD -Djdk.version=jdk6
mvn clean install -Denvironment=DEV -Djdk.version=jdk5
```

By the way, we could have merged the three XML files into a single one, setting the content dynamically thanks to Maven's filter tag and mechanism. The next point to solve is the dependency on the actual JDBC drivers.
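For reference, the filtering alternative mentioned above could be sketched roughly as follows (our sketch, not from the original text; a property name such as jdbc.driver is a placeholder). With `<filtering>true</filtering>`, Maven substitutes `${...}` expressions inside the copied resources, so a single spring.xml could reference per-environment properties instead of existing in three copies:

```xml
<resources>
    <resource>
        <directory>src/main/resource</directory>
        <!-- Maven replaces ${...} placeholders inside these files at
             build time, e.g. ${jdbc.driver} in a single spring.xml -->
        <filtering>true</filtering>
    </resource>
</resources>
```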
A quick and dirty solution

A quick and dirty solution is to mention the three dependencies:

```xml
<!-- PROD -->
<dependency>
    <groupId>postgresql</groupId>
    <artifactId>postgresql</artifactId>
    <version>9.1-901.jdbc4</version>
    <scope>runtime</scope>
</dependency>
<!-- UAT -->
<dependency>
    <groupId>mysql</groupId>
    <artifactId>mysql-connector-java</artifactId>
    <version>5.1.25</version>
    <scope>runtime</scope>
</dependency>
<!-- DEV -->
<dependency>
    <groupId>org.hsqldb</groupId>
    <artifactId>hsqldb</artifactId>
    <version>2.3.0</version>
    <scope>runtime</scope>
</dependency>
```

However, this idea has drawbacks. Even though only the actual driver (org.postgresql.Driver, com.mysql.jdbc.Driver, or org.hsqldb.jdbcDriver, as described in the Spring files) will be instantiated at runtime, all three JARs will be transmitted transitively, and possibly packaged, in any further distribution. You may argue that we can work around this problem in most situations by confining the scope to provided and embedding the actual dependency by some other means (such as relying on an artifact embarked in an application server); however, even then you should concede the dirtiness of the process.

A clean solution

Better solutions consist in using a dynamic POM. Here too, there is a gradient of more or less clean solutions. Once more, as a disclaimer, beware of dynamic POMs!

Dynamic POMs are a powerful but tricky feature of Maven. Modern IDEs manage dynamic POMs better than they did a few years ago; yet their use may be dangerous for newcomers. As with generated code and AOP, for instance, what you write is not what you execute, which may result in strange or unexpected behaviors that need long hours of debugging and an aspirin tablet for the headache. This is why you have to carefully weigh their interest for your project before introducing them.
With properties in command lines

As a first step, let's define the dependency as follows:

```xml
<!-- The dependency on the effective JDBC driver:
     PostGre, MySQL, or HSQLDB -->
<dependency>
    <groupId>${effective.groupId}</groupId>
    <artifactId>${effective.artifactId}</artifactId>
    <version>${effective.version}</version>
</dependency>
```

As you can see, the dependency is parameterized thanks to three properties: effective.groupId, effective.artifactId, and effective.version. Then, in the same way we added the -Djdk.version property earlier, we will have to add those properties on the command line, for example:

```
mvn clean install -Denvironment=PROD -Djdk.version=jdk6 -Deffective.groupId=postgresql -Deffective.artifactId=postgresql -Deffective.version=9.1-901.jdbc4
```

Or:

```
mvn clean install -Denvironment=DEV -Djdk.version=jdk5 -Deffective.groupId=org.hsqldb -Deffective.artifactId=hsqldb -Deffective.version=2.3.0
```

Then, the effective POM will be reconstructed by Maven and will include the right dependencies:

```xml
<dependencies>
    <dependency>
        <groupId>org.springframework</groupId>
        <artifactId>spring-core</artifactId>
        <version>3.2.3.RELEASE</version>
        <scope>compile</scope>
    </dependency>
    <dependency>
        <groupId>org.codehaus.jedi</groupId>
        <artifactId>jedi-jdk6</artifactId>
        <version>3.0.5</version>
        <scope>compile</scope>
    </dependency>
    <dependency>
        <groupId>postgresql</groupId>
        <artifactId>postgresql</artifactId>
        <version>9.1-901.jdbc4</version>
        <scope>compile</scope>
    </dependency>
</dependencies>
```

Yet, as you can imagine, writing long command lines like the preceding ones increases the risk of human error, all the more so given that such lines are "write-only". These pitfalls are solved by profiles.

Profiles and settings

As an easy improvement, you can define profiles within the POM itself.
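Before moving on to profiles, note a lighter mitigation (our suggestion, not part of the original text): default values for these properties can be declared in the POM's `<properties>` block, and the -D flags then only need to be passed when overriding them, since user properties set on the command line take precedence over POM properties:

```xml
<properties>
    <!-- defaults; override with -Denvironment=..., -Djdk.version=..., etc. -->
    <environment>DEV</environment>
    <jdk.version>jdk5</jdk.version>
    <effective.groupId>org.hsqldb</effective.groupId>
    <effective.artifactId>hsqldb</effective.artifactId>
    <effective.version>2.3.0</effective.version>
</properties>
```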
The profiles gather the information you previously wrote on the command line, for example:

```xml
<profile>
    <!-- The profile PROD gathers the properties related
         to the environment PROD -->
    <id>PROD</id>
    <properties>
        <environment>PROD</environment>
        <effective.groupId>postgresql</effective.groupId>
        <effective.artifactId>postgresql</effective.artifactId>
        <effective.version>9.1-901.jdbc4</effective.version>
        <jdk.version>jdk6</jdk.version>
    </properties>
    <activation>
        <!-- This profile is activated by default: in other words,
             if no other profile is activated, then PROD will be -->
        <activeByDefault>true</activeByDefault>
    </activation>
</profile>
```

Or:

```xml
<profile>
    <!-- The profile DEV gathers the properties related
         to the environment DEV -->
    <id>DEV</id>
    <properties>
        <environment>DEV</environment>
        <effective.groupId>org.hsqldb</effective.groupId>
        <effective.artifactId>hsqldb</effective.artifactId>
        <effective.version>2.3.0</effective.version>
        <jdk.version>jdk5</jdk.version>
    </properties>
    <activation>
        <!-- The profile DEV will be activated if, and only if,
             it is explicitly called -->
        <activeByDefault>false</activeByDefault>
    </activation>
</profile>
```

The corresponding command lines become shorter:

```
mvn clean install
```

(equivalent to mvn clean install -PPROD), or:

```
mvn clean install -PDEV
```

You can list several profiles in the same POM, and one, many, or all of them may be enabled or disabled. Nonetheless, multiplying profiles and properties hurts readability. Moreover, if your team has 20 developers, then each developer will have to deal with 20 blocks of profiles, 19 of which are completely irrelevant to him or her.
So, in order to make things smoother, a best practice is to extract the profiles and insert them into each developer's personal settings.xml file, with the same information:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0
                              http://maven.apache.org/xsd/settings-1.0.0.xsd">
    <profiles>
        <profile>
            <id>PROD</id>
            <properties>
                <environment>PROD</environment>
                <effective.groupId>postgresql</effective.groupId>
                <effective.artifactId>postgresql</effective.artifactId>
                <effective.version>9.1-901.jdbc4</effective.version>
                <jdk.version>jdk6</jdk.version>
            </properties>
            <activation>
                <activeByDefault>true</activeByDefault>
            </activation>
        </profile>
    </profiles>
</settings>
```

Dynamic POMs – conclusion

In conclusion, the best practice concerning dynamic POMs is to parameterize the needed fields within the POM. Then, in order of priority:

1. Set an enabled profile and the corresponding properties within settings.xml:

```
mvn <goals> [-f <pom_Without_Profiles.xml>] [-s <settings_With_Enabled_Profile.xml>]
```

2. Otherwise, include profiles and properties within the POM:

```
mvn <goals> [-f <pom_With_Profiles.xml>] [-P<actual_Profile>] [-s <settings_Without_Profile.xml>]
```

3. Otherwise, launch Maven with the properties on the command line:

```
mvn <goals> [-f <pom_Without_Profiles.xml>] [-s <settings_Without_Profile.xml>] -D<property_1>=<value_1> -D<property_2>=<value_2> (...) -D<property_n>=<value_n>
```

Summary

In this article we learned about dynamic POMs. We walked through a case study and examined both quick-and-dirty and clean solutions.

Resources for Article:

Further resources on this subject:

- Integrating Scala, Groovy, and Flex Development with Apache Maven [Article]
- Creating a Camel project (Simple) [Article]
- Using Hive non-interactively (Simple) [Article]

Downloading PyroCMS and its prerequisites

Packt
31 Oct 2013
6 min read
Getting started

PyroCMS, like many other content management systems including WordPress, Typo3, and Drupal, comes with a pre-developed installation process. For PyroCMS, this installation process is easy to use and comes with a number of helpful hints in case you hit a snag while installing the system. If, for example, your system files don't have the correct permissions profile (writable versus write-protected), the PyroCMS installer will help you, along with all the other installation details, such as checking for required software and taking care of file permissions.

Before you can install PyroCMS (the version used for examples in this article is 2.2) on a server, there are a number of server requirements that need to be met. If you aren't sure whether these requirements have been met, the PyroCMS installer will check that they are available before installation is complete. The following are the software requirements for a server before PyroCMS can be installed:

- HTTP web server
- MySQL 5.x or higher
- PHP 5.2.x or higher
- GD2
- cURL

Among these requirements, web developers interested in PyroCMS will be glad to know that it is built on CodeIgniter, a popular MVC-patterned PHP framework. I recommend that developers looking to use PyroCMS also have working knowledge of CodeIgniter and the MVC programming pattern. Learn more about CodeIgniter and see its excellent system documentation online at http://ellislab.com/codeigniter.

CodeIgniter

If you haven't explored the Model-View-Controller (MVC) programming pattern, you'll want to brush up before you start developing for PyroCMS. The primary reason that CodeIgniter is a good framework for a CMS is that it is a well-documented framework that, when leveraged in the way PyroCMS has done, gives developers control over how long a project will take to build and the quality with which it is built.
Add-on modules for PyroCMS, for example, follow the MVC method, a programming pattern that saves developers time and keeps their code DRY and portable. DRY and portable programming are two different concepts: DRY is an acronym for "don't repeat yourself", while portable code is "plug-and-play" code, written once so that it can be shared with other projects and reused quickly.

HTTP web server

Out of the PyroCMS software requirements, it is obvious that a good HTTP web server platform will be needed. Luckily, PyroCMS can run on a variety of web server platforms, including the following:

- Abyss Web Server
- Apache 2.x
- Nginx
- Uniform Server
- Zend Community Server

If you are new to web hosting and haven't worked with web hosting software before, or this is your first time installing PyroCMS, I suggest that you use Apache as your HTTP web server. It is the system for which you will find the most documentation and support online. If you'd prefer to avoid Apache, there is also good support for running PyroCMS on Nginx, another fairly well-documented web server platform.

MySQL

Version 5 is the latest major release of MySQL, and it has been in use for quite some time. It is the primary database choice for PyroCMS and is thoroughly supported. You don't need expert-level experience with MySQL to run PyroCMS, but you'll need to be familiar with writing SQL queries and building relational databases if you plan to create add-ons for the system. You can learn more about MySQL at http://www.mysql.com.

PHP

Version 5.2 of PHP is no longer the officially supported release of PHP, which is, at the time of writing, Version 5.4. Version 5.2, which has been criticized as a low server requirement for any CMS, is allowed with PyroCMS because it is the minimum version requirement for CodeIgniter, the framework upon which PyroCMS is built.
While future versions of PyroCMS may raise this minimum requirement to PHP 5.3 or higher, you can safely use PyroCMS with PHP 5.2. Also, many server operating systems, like SUSE and Ubuntu, install PHP 5.2 by default. You can, of course, upgrade PHP to the latest version without causing harm to your instance of PyroCMS. To help future-proof your installation of PyroCMS, it may be wise to install PHP 5.3 or above, to maximize your readiness for when PyroCMS more strictly adopts features found in PHP 5.3 and 5.4, such as namespacing.

GD2

GD2, a library used in the manipulation and creation of images, is used by PyroCMS to dynamically generate images (where needed) and to crop and resize images used in many PyroCMS modules and add-ons. The image-based support offered by this library is invaluable.

cURL

As described on the cURL project website, cURL is "a command line tool for transferring data with URL syntax" using a large number of methods, including HTTP(S) GET, POST, PUT, and so on. You can learn more about the project and how to use cURL on its website, http://curl.haxx.se. If you've never used cURL with PHP, I recommend taking time to learn how to use it, especially if you are thinking about building a web-based API using PyroCMS. Most popular web hosting companies meet the basic server requirements for PyroCMS.

Downloading PyroCMS

Getting your hands on a copy of PyroCMS is very simple. You can download the system files from one of two locations: the PyroCMS project website and GitHub. To download PyroCMS from the project website, visit http://www.pyrocms.com and click on the green button labeled Get PyroCMS! This will take you to a download page that gives you the choice between downloading the Community version of PyroCMS and buying the Professional version. If you are new to PyroCMS, you can start with the Community version, currently at Version 2.2.3.
The following screenshot shows the download screen.

To download PyroCMS from GitHub, visit https://github.com/pyrocms/pyrocms and click on the button labeled Download ZIP to get the latest Community version of PyroCMS, as shown in the following screenshot.

If you know how to use Git, you can also clone a fresh copy of PyroCMS using the following command. A word of warning: cloning PyroCMS from GitHub will usually give you the latest stable release of the system, but it could include changes not described in this article, so make sure you check out a stable release from PyroCMS's repository.

```
git clone https://github.com/pyrocms/pyrocms.git
```

As a side note, if you've never used Git, I recommend taking some time to get started with it. PyroCMS is an open source project hosted in a Git repository on GitHub, which means that the system is open to being improved by any developer looking to contribute to the well-being of the project. It is also very common for PyroCMS developers to host their own add-on projects on GitHub and other online Git repository services.

Summary

In this article, we covered the prerequisites for using PyroCMS, as well as how to download it.

Resources for Article:

Further resources on this subject:

- Kentico CMS 5 Website Development: Managing Site Structure [Article]
- Kentico CMS 5 Website Development: Workflow Management [Article]
- Web CMS [Article]

Creating an image gallery

Packt
30 Oct 2013
5 min read
Getting ready

Before we get started, we need to find a handful of images that we can use for the gallery. Find four or five images and put them in the images folder.

How to do it...

Add the following links to the images to the index.html file:

```html
<a class="fancybox" href="images/waterfall.png">Waterfall</a>
<a class="fancybox" href="images/frozen-lake.png">Frozen Lake</a>
<a class="fancybox" href="images/road-in-forest.png">Road in Forest</a>
<a class="fancybox" href="images/boston.png">Boston</a>
```

The anchor tags no longer have an ID, but a class. It is important that they all have the same class so that Fancybox knows about them. Change our call to the Fancybox plugin in the scripts.js file to use the class that all of the links share instead of the show-fancybox ID:

```javascript
$(function() {
    // Using the fancybox class instead of the show-fancybox ID
    $('.fancybox').fancybox();
});
```

Fancybox will now work on all of the images, but they will not yet be part of the same gallery. To make images part of a gallery, we use the rel attribute of the anchor tags. Add rel="gallery" to all of the anchor tags, as shown:

```html
<a class="fancybox" rel="gallery" href="images/waterfall.png">Waterfall</a>
<a class="fancybox" rel="gallery" href="images/frozen-lake.png">Frozen Lake</a>
<a class="fancybox" rel="gallery" href="images/road-in-forest.png">Road in Forest</a>
<a class="fancybox" rel="gallery" href="images/boston.png">Boston</a>
```

Now that we have added rel="gallery" to each of our anchor tags, you should see left and right arrows when you hover over the left-hand or right-hand side of Fancybox. These arrows allow you to navigate between images, as shown in the following screenshot.

How it works...

Fancybox determines that an image is part of a gallery using the rel attribute of the anchor tags. The order of the images is based on the order of the anchor tags on the page.
This is important so that the slideshow order is exactly the same as a gallery of thumbnails, without any additional work on our end. We changed the ID of our single image to a class for the gallery because we wanted to call Fancybox on all of the links instead of just one. If we wanted to add more image links to the page, it would just be a matter of adding more anchor tags with the proper href values and the same class.

There's more...

So, what else can we do with the gallery functionality of Fancybox? Let's take a look at some of the other things that we could do with the gallery that we currently have.

Captions and thumbnails

All of the functionality that we discussed for single images applies to galleries as well. So, if we wanted to add a thumbnail, it would just be a matter of adding an img tag inside the anchor tag instead of the text. If we wanted to add a caption, we could do so by adding the title attribute to our anchor tags.

Showing the slideshow from one link

Let's say that we wanted to have just one link that opens our gallery slideshow. This can be easily achieved by hiding the other links via CSS, with the help of the following steps:

1. We start by adding this style tag to the <head> tag, just under the <script> tag for our scripts.js file:

```html
<style type="text/css">
    .hidden {
        display: none;
    }
</style>
```

2. Now, we update the HTML file so that all but one of our anchor tags have the hidden class:

```html
<a class="fancybox" rel="gallery" href="images/waterfall.png">Image Gallery</a>
<div class="hidden">
    <a class="fancybox" rel="gallery" href="images/frozen-lake.png">Frozen Lake</a>
    <a class="fancybox" rel="gallery" href="images/road-in-forest.png">Road in Forest</a>
    <a class="fancybox" rel="gallery" href="images/boston.png">Boston</a>
</div>
```

3. When we reload the page, we will see only one link. When you click on the link, you should still be able to navigate through the gallery just as if all of the links were on the page.

Summary

In this article we saw that Fancybox provides very strong image-handling functionality. We saw how an image gallery is created with Fancybox, how to display thumbnails and captions, and how to show the gallery as a slideshow from just one link.

Resources for Article:

Further resources on this subject:

- Getting started with your first jQuery plugin [Article]
- OpenCart Themes: Styling Effects of jQuery Plugins [Article]
- The Basics of WordPress and jQuery Plugin [Article]
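As a closing aside on this recipe (our sketch, not part of the original recipe): once a gallery grows beyond a few images, the anchor markup can be generated from a plain list instead of hand-writing each tag, with the title attribute doubling as the Fancybox caption discussed earlier. The file names here are placeholders, and the string-building part is plain JavaScript:

```javascript
// Generate Fancybox gallery anchors from an array of image entries.
// File names and captions below are placeholders.
var images = [
    { file: "waterfall.png", caption: "Waterfall" },
    { file: "frozen-lake.png", caption: "Frozen Lake" }
];
var markup = images.map(function (img) {
    return '<a class="fancybox" rel="gallery" title="' + img.caption +
           '" href="images/' + img.file + '">' + img.caption + '</a>';
}).join("\n");
console.log(markup);
```

In the page itself, the result could be appended with something like $('#gallery').append(markup) before calling $('.fancybox').fancybox().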

The DHTMLX Grid

Packt
30 Oct 2013
7 min read
The DHTMLX grid component is one of the more widely used components of the library. It has a vast number of settings and abilities, so robust that we could probably write an entire book on them. But since we have an application to build, we will touch on some of the main methods and get on with utilizing it. Some of the cool features that the grid supports are filtering, spanning rows and columns, multiple headers, dynamic scroll loading, paging, inline editing, cookie state, dragging/ordering columns, images, multi-selection, and events. By the end of this article, we will have a functional grid with which we control the editing, viewing, adding, and removing of users.

The grid methods and events

When creating a DHTMLX grid, we first create the object, then add all the settings, and then call a method to initialize it. After the grid is initialized, data can be added. The order of steps to create a grid is as follows:

1. Create the grid object
2. Apply settings
3. Initialize
4. Add data

Now we will go over initializing a grid.

Initialization choices

We can initialize a DHTMLX grid in two ways, similar to the other DHTMLX objects. The first way is to attach it to a DOM element, and the second way is to attach it to an existing DHTMLX layout cell or layout. A grid can be constructed either by passing in a JavaScript object with all the settings or by calling individual methods.

Initialization on a DOM element

Let's attach the grid to a DOM element. First we must clear the page and add a div element using JavaScript. Type and run the following code line in the developer tools console:

```javascript
document.body.innerHTML = "<div id='myGridCont'></div>";
```

We just cleared all of the body tag's content and replaced it with a div tag having the id attribute value of myGridCont. Now, create a grid object on the div tag, add some settings, and initialize it.
Type and run the following code in the developer tools console:

```javascript
var myGrid = new dhtmlXGridObject("myGridCont");
myGrid.setImagePath(config.imagePath);
myGrid.setHeader(["Column1", "Column2", "Column3"]);
myGrid.init();
```

You should see the page showing just the grid header with three columns. Next, we will create a grid on an existing cell object.

Initialization on a cell object

Refresh the page and add a grid to the appLayout cell. Type and run the following code in the developer tools console:

```javascript
var myGrid = appLayout.cells("a").attachGrid();
myGrid.setImagePath(config.imagePath);
myGrid.setHeader(["Column1", "Column2", "Column3"]);
myGrid.init();
```

You will now see the grid columns just below the toolbar.

Grid methods

Now let's go over some of the available grid methods; then we can add rows and call events on this grid. For these exercises we will be using the global appLayout variable. Refresh the page.

attachGrid

We will begin by creating a grid in a cell. The attachGrid method creates and attaches a grid object to a cell. This is the first step in creating a grid. Type and run the following code line in the console:

```javascript
var myGrid = appLayout.cells("a").attachGrid();
```

setImagePath

The setImagePath method lets the grid know where we have placed the images referenced by the design. We have the application image path set in the config object. Type and run the following code line in the console:

```javascript
myGrid.setImagePath(config.imagePath);
```

setHeader

The setHeader method sets the column headers and determines how many columns we will have. The argument is a JavaScript array. Type and run the following code line in the console:

```javascript
myGrid.setHeader(["Column1", "Column2", "Column3"]);
```

setInitWidths

The setInitWidths method sets the initial width of each of the columns. The asterisk (*) is used to set the width automatically.
Type and run the following code line in the console: myGrid.setInitWidths("125,95,*"); setColAlign The setColAlign method allows us to align each column's content. Type and run the following code line in the console: myGrid.setColAlign("right,center,left"); init Up until this point, we haven't seen much going on. It was all happening behind the scenes. To see these changes, the grid must be initialized. Type and run the following code line in the console: myGrid.init(); Now you see the columns that we provided. addRow Now that we have a grid created, let's add a couple of rows and start interacting. The addRow method adds a row to the grid. The parameters are the ID and the column values. Type and run the following code in the console: myGrid.addRow(1,["test1","test2","test3"]); myGrid.addRow(2,["test1","test2","test3"]); We just created two rows inside the grid. setColTypes The setColTypes method sets what type of data each column will contain. The available type options are: ro (read-only), ed (editor), txt (textarea), ch (checkbox), ra (radio button), and co (combobox). Currently, the grid allows inline editing if you double-click on a grid cell. We do not want this for the application, so we will set the column types to read-only. Type and run the following code in the console: myGrid.setColTypes("ro,ro,ro"); Now the cells are no longer editable inside the grid. getSelectedRowId The getSelectedRowId method returns the ID of the selected row. If nothing is selected, it returns null. Type and run the following code line in the console: myGrid.getSelectedRowId(); clearSelection The clearSelection method clears all selections in the grid. Type and run the following code line in the console: myGrid.clearSelection(); Now any previous selections are cleared. clearAll The clearAll method removes all the grid rows. Prior to adding more data to the grid, we first must clear it; otherwise the data will be duplicated. 
Type and run the following code line in the console: myGrid.clearAll(); Now the grid is empty. parse The parse method loads data into a grid in the format of an XML string, CSV string, XML island, XML object, JSON object, or JavaScript array. We will use the parse method with a JSON object while creating a grid for the application. Here is what the parse method syntax looks like (do not run this in the console): myGrid.parse(data, "json"); Grid events The DHTMLX grid component has a vast number of events. You can view them in their entirety in the documentation. We will cover the onRowDblClicked and onRowSelect events. onRowDblClicked The onRowDblClicked event is triggered when a grid row is double-clicked. The handler receives the ID of the row that was double-clicked as an argument. Type and run the following code in the console: myGrid.attachEvent("onRowDblClicked", function(rowId){ console.log(rowId); }); Double-click one of the rows and the console will log the ID of that row. onRowSelect The onRowSelect event will trigger upon selection of a row. Type and run the following code in the console: myGrid.attachEvent("onRowSelect", function(rowId){ console.log(rowId); }); Now, when you select a row, the console will log the ID of that row. In effect, this fires on a single click. Summary In this article, we learned about the DHTMLX grid component. We also added the user grid to the application and tested it with the storage and callback methods. Resources for Article: Further resources on this subject: HTML5 Presentations - creating our initial presentation [Article] HTML5: Generic Containers [Article] HTML5 Canvas [Article]
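To round off the grid workflow, the JSON that parse consumes can be produced from plain record objects. This sketch assumes the standard DHTMLX JSON shape (a rows array of {id, data} entries); the user records below are invented sample data:

```javascript
// Convert plain user records into the JSON shape expected by
// myGrid.parse(data, "json"): { rows: [ { id, data: [...] } ] }.
function toGridJson(records, columns) {
  return {
    rows: records.map(function (rec) {
      return { id: rec.id, data: columns.map(function (col) { return rec[col]; }) };
    })
  };
}

// Invented sample records for illustration.
var users = [
  { id: 1, name: "Ann", email: "ann@example.com", role: "admin" },
  { id: 2, name: "Bob", email: "bob@example.com", role: "user" }
];
var data = toGridJson(users, ["name", "email", "role"]);
console.log(JSON.stringify(data));
// In the browser: myGrid.clearAll(); myGrid.parse(data, "json");
```

Clearing the grid before parsing, as described above, avoids duplicated rows when reloading.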
The Dialog Widget
Packt
30 Oct 2013
14 min read
(For more resources related to this topic, see here.) Wijmo additions to the dialog widget at a glance By default, the dialog window includes the pin, toggle, minimize, maximize, and close buttons. Pinning the dialog to a location on the screen disables the dragging feature on the title bar. The dialog can still be resized. Maximizing the dialog makes it take up the area inside the browser window. Toggling it expands or collapses it so that the dialog contents are shown or hidden with the title bar remaining visible. If these buttons cramp your style, they can be turned off with the captionButtons option. You can see how the dialog is presented in the browser from the following screenshot: Wijmo features an additional API compared to jQuery UI for changing the behavior of the dialog. The new API is mostly for the buttons in the title bar and for managing window stacking. Window stacking determines which windows are drawn on top of other ones. Clicking on a dialog raises it above other dialogs and changes their window stacking settings. The following table shows what is added in Wijmo. Options: captionButtons, contentUrl, disabled, expandingAnimation, stack, zIndex. Events: blur, buttonCreating, stateChanged. Methods: disable, enable, getState, maximize, minimize, pin, refresh, reset, restore, toggle, widget. The contentUrl option allows you to specify a URL to load within the window. The expandingAnimation option is applied when the dialog is toggled from a collapsed state to an expanded state. The stack and zIndex options determine whether the dialog sits on top of other dialogs. Similar to the blur event on input elements, the blur event for the dialog is fired when the dialog loses focus. The buttonCreating event is fired when buttons are created and can modify the buttons on the title bar. The disable method disables the event handlers for the dialog. It prevents the default button actions and disables dragging and resizing. The widget method returns the dialog HTML element. 
The methods maximize, minimize, pin, refresh, reset, restore, and toggle, are available as buttons on the title bar. The best way to see what they do is play around with them. In addition, the getState method is used to find the dialog state and returns either maximized, minimized, or normal. Similarly, the stateChanged event is fired when the state of the dialog changes. The methods are called as a parameter to the wijdialog method. To disable button interactions, pass the string disable: $("#dialog").wijdialog ("disable"); Many of the methods come as pairs, and enable and disable are one of them. Calling enable enables the buttons again. Another pair is restore/minimize. minimize hides the dialog in a tray on the left bottom of the screen. restore sets the dialog back to its normal size and displays it again. The most important option for usability is the captionButtons option. Although users are likely familiar with the minimize, resize, and close buttons; the pin and toggle buttons are not featured in common desktop environments. Therefore, you will want to choose the buttons that are visible depending on your use of the dialog box in your project. To turn off a button on the title bar, set the visible option to false. A default jQuery UI dialog window with only the close button can be created with: $("#dialog").wijdialog({captionButtons: { pin: { visible: false }, refresh: { visible: false }, toggle: { visible: false }, minimize: { visible: false }, maximize: { visible: false } } }); The other options for each button are click, iconClassOff, and iconClassOn. The click option specifies an event handler for the button. Nevertheless, the buttons come with default actions and you will want to use different icons for custom actions. That's where iconClass comes in. iconClassOn defines the CSS class for the button when it is loaded. iconClassOff is the class for the button icon after clicking. 
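Writing out the full captionButtons object by hand is verbose. A small helper, hypothetical and not part of Wijmo, can generate it from the list of buttons you want to keep visible:

```javascript
// Build a Wijmo captionButtons setting with every standard title bar
// button hidden except the ones listed in `visible`.
// The button names match the ones used in the examples above.
var ALL_BUTTONS = ["pin", "refresh", "toggle", "minimize", "maximize", "close"];

function captionButtonsFor(visible) {
  var settings = {};
  ALL_BUTTONS.forEach(function (name) {
    settings[name] = { visible: visible.indexOf(name) !== -1 };
  });
  return settings;
}

var closeOnly = captionButtonsFor(["close"]);
console.log(closeOnly.close.visible);   // true
console.log(closeOnly.pin.visible);     // false
// In the browser: $("#dialog").wijdialog({ captionButtons: closeOnly });
```

Per-button extras such as click, iconClassOn, and iconClassOff can still be merged into the returned object afterwards.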
For a list of available jQuery UI icons and their classes, see http://jquery-ui.googlecode.com/svn/tags/1.6rc5/tests/static/icons.html. Our next example uses ui-icon-zoomin, ui-icon-zoomout, and ui-icon-lightbulb. They can be found by toggling the text for the icons on the web page as shown in the preceding screenshot. Adding custom buttons jQuery UI's dialog API lacks an option for configuring the buttons shown on the title bar. Wijmo not only comes with useful default buttons, but also lets you override them easily. <!DOCTYPE HTML> <html> <head> ... <style> .plus { font-size: 150%; } </style> <script id="scriptInit" type="text/javascript"> $(document).ready(function () { $('#dialog').wijdialog({ autoOpen: true, captionButtons: { pin: { visible: false }, refresh: { visible: false }, toggle: {visible: true, click: function () { $('#dialog').toggleClass('plus') }, iconClassOn: 'ui-icon-zoomin', iconClassOff: 'ui-icon-zoomout'} , minimize: { visible: false }, maximize: {visible: true, click: function () { alert('To enlarge text, click the zoom icon.') }, iconClassOn: 'ui-icon-lightbulb' }, close: {visible: true, click: self.close, iconClassOn:'ui-icon-close'} } }); }); </script> </head> <body> <div id="dialog" title="Basic dialog"> <p>Lorem ipsum dolor sit amet, consectetuer adipiscing elit. Aenean commodo ligula eget dolor. Aenean massa. Cum sociis natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Donec quam felis, ultricies nec, pellentesque eu, pretium quis, sem. Nulla consequat massa quis enim. Donec pede justo, fringilla vel, aliquet nec, vulputate</p> </div> </body> </html> We create a dialog window passing in the captionButtons option. The pin, refresh, and minimize buttons have visible set to false so that the title bar is initialized without them. The final output looks as shown in the following screenshot: In addition, the toggle and maximize buttons are modified and given custom behaviors. 
The toggle button toggles the font size of the text by applying or removing a CSS class. Its default icon, set with iconClassOn, indicates that clicking on it will zoom in on the text. Once clicked, the icon changes to a zoom out icon. Likewise, the behavior and appearance of the maximize button have been changed. In the position where the maximize icon was displayed in the title bar previously, there is now a lightbulb icon with a tip. Although this method of adding new buttons to the title bar seems clumsy, it is the only option that Wijmo currently offers. Adding buttons in the content area is much simpler. The buttons option specifies the buttons to be displayed in the dialog window content area below the title bar. For example, to display a simple confirmation button: $('#dialog').wijdialog({buttons: {ok: function () { $(this).wijdialog('close') }}}); The text displayed on the button is ok and clicking on the button hides the dialog. Calling $('#dialog').wijdialog('open') will show the dialog again. Configuring the dialog widget's appearance Wijmo offers several options that change the dialog's appearance including title, height, width, and position. The title of the dialog can be changed either by setting the title attribute of the div element of the dialog, or by using the title option. To change the dialog's theme, you can use CSS styling on the wijmo-wijdialog and wijmo-wijdialog-captionbutton classes: <!DOCTYPE HTML> <html> <head> ... 
<style> .wijmo-wijdialog { /*rounded corners*/ -webkit-border-radius: 12px; border-radius: 12px; background-clip: padding-box; /*shadow behind dialog window*/ -moz-box-shadow: 3px 3px 5px 6px #ccc; -webkit-box-shadow: 3px 3px 5px 6px #ccc; box-shadow: 3px 3px 5px 6px #ccc; /*fade contents from dark gray to gray*/ background-image: -webkit-gradient(linear, left top, left bottom, from(#444444), to(#999999)); background-image: -webkit-linear-gradient(top, #444444, #999999); background-image: -moz-linear-gradient(top, #444444, #999999); background-image: -o-linear-gradient(top, #444444, #999999); background-image: linear-gradient(to bottom, #444444, #999999); background-color: transparent; text-shadow: 1px 1px 3px #888; } </style> <script id="scriptInit" type="text/javascript"> $(document).ready(function () { $('#dialog').wijdialog({width: 350}); }); </script> </head> <body> <div id="dialog" title="Subtle gradients"> <p>Lorem ipsum dolor sit amet, consectetuer adipiscing elit. Aenean commodo ligula eget dolor. Aenean massa. Cum sociis natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Donec quam felis, ultricies nec, pellentesque eu, pretium quis, sem. Nulla consequat massa quis enim. Donec pede justo, fringilla vel, aliquet nec, vulputate </p> </div> </body> </html> We now add rounded corners, a box shadow, and a text shadow to the dialog box. This is done with the .wijmo-wijdialog class. Since many of the CSS3 properties have different names on different browsers, the browser-specific properties are used. For example, -webkit-box-shadow is necessary on WebKit-based browsers. The dialog width is set to 350 px when initialized so that the title text and buttons all fit on one line. Loading external content Wijmo makes it easy to load content in an iFrame. 
Simply pass a URL with the contentUrl option: $(document).ready(function () { $("#dialog").wijdialog({captionButtons: { pin: { visible: false }, refresh: { visible: true }, toggle: { visible: false }, minimize: { visible: false }, maximize: { visible: true }, close: { visible: false } }, contentUrl: "http://wijmo.com/demo/themes/" }); }); This will load the Wijmo theme explorer in a dialog window with refresh and maximize/restore buttons. This output can be seen in the following screenshot: The refresh button reloads the content in the iFrame, which is useful for dynamic content. The maximize button resizes the dialog window. Form Components Wijmo form decorator widgets for radio button, checkbox, dropdown, and textbox elements give forms a consistent visual style across all platforms. There are separate libraries for decorating the dropdown and other form elements, but Wijmo gives them a consistent theme. jQuery UI lacks form decorators, leaving the styling of form components to the designer. Using Wijmo form components saves time during development and presents a consistent interface across all browsers. Checkbox The checkbox widget is an excellent example of the style enhancements that Wijmo provides over default form controls. The checkbox is used if multiple choices are allowed. The following screenshot shows the different checkbox states: Wijmo adds rounded corners, gradients, and hover highlighting to the checkbox. Also, the increased size makes it more usable. Wijmo checkboxes can be initialized to be checked. The code for this purpose is as follows: <!DOCTYPE HTML> <html> <head> ... 
<script id="scriptInit" type="text/javascript"> $(document).ready(function () { $("#checkbox3").wijcheckbox({checked: true}); $(":input[type='checkbox']:not(:checked)").wijcheckbox(); }); </script> <style> div { display: block; margin-top: 2em; } </style> </head> <body> <div><input type='checkbox' id='checkbox1' /><label for='checkbox1'>Unchecked</label></div> <div><input type='checkbox' id='checkbox2' /><label for='checkbox2'>Hover</label></div> <div><input type='checkbox' id='checkbox3' /><label for='checkbox3'>Checked</label></div> </body> </html> In this instance, checkbox3 is set to checked as it is initialized. You will not get the same result if one of the checkboxes is initialized twice. Here, we avoid that by selecting only the checkboxes that are not checked after checkbox3 is set to checked. Radio buttons Radio buttons, in contrast with checkboxes, allow only one of several options to be selected. In addition, they are customized through the HTML markup rather than a JavaScript API. To illustrate, the checked option is set by the checked attribute: <input type="radio" checked /> jQuery UI offers a button widget for radio buttons, as shown in the following screenshot, which in my experience causes confusion as users think that they can select multiple options: The Wijmo radio buttons are closer in appearance to regular radio buttons so that users would expect the same behavior, as shown in the following screenshot: Wijmo radio buttons are initialized by calling the wijradio method on radio button elements: <!DOCTYPE html> <html> <head> ... 
<script id="scriptInit" type="text/javascript">$(document).ready(function () { $(":input[type='radio']").wijradio({ changed: function (e, data) { if (data.checked) { alert($(this).attr('id') + ' is checked') } } }); }); </script> </head> <body> <div id="radio"> <input type="radio" id="radio1" name="radio"/><label for="radio1">Choice 1</label> <input type="radio" id="radio2" name="radio" checked="checked"/><label for="radio2">Choice 2</label> <input type="radio" id="radio3" name="radio"/><label for="radio3">Choice 3</label> </div> </body> </html> In this example, the changed option, which is also available for checkboxes, is set to a handler. The handler is passed a jQuery.Event object as the first argument. It is just a JavaScript event object normalized for consistency across browsers. The second argument exposes the state of the widget. For both checkboxes and radio buttons, it is an object with only the checked property. Dropdown Styling a dropdown to be consistent across all browsers is notoriously difficult. Wijmo offers two options for styling the HTML select and option elements. When there are no option groups, the ComboBox is the better widget to use. For a dropdown with nested options under option groups, only the wijdropdown widget will work. As an example, consider a country selector categorized by continent: <!DOCTYPE HTML> <html> <head> ... 
<script id="scriptInit" type="text/javascript"> $(document).ready(function () { $('select[name=country]').wijdropdown(); $('#reset').button().click(function(){ $('select[name=country]').wijdropdown('destroy') }); $('#refresh').button().click(function(){ $('select[name=country]').wijdropdown('refresh') }) }); </script> </head> <body> <button id="reset"> Reset </button> <button id="refresh"> Refresh </button> <select name="country" style="width:170px"> <optgroup label="Africa"> <option value="gam">Gambia</option> <option value="mad">Madagascar</option> <option value="nam">Namibia</option> </optgroup> <optgroup label="Europe"> <option value="fra">France</option> <option value="rus">Russia</option> </optgroup> <optgroup label="North America"> <option value="can">Canada</option> <option value="mex">Mexico</option> <option selected="selected" value="usa">United States</option> </optgroup> </select> </body> </html> The select element's width is set to 170 pixels so that when the dropdown is initialized, both the dropdown menu and items have a width of 170 pixels. This allows the North America option category to be displayed on a single line, as shown in the following screenshot. Although the dropdown widget lacks a width option, it takes the select element's width when it is initialized. To initialize the dropdown, call the wijdropdown method on the select element: $('select[name=country]').wijdropdown(); The dropdown element uses the blind animation to show the items when the menu is toggled. Also, it applies the same click animation as on buttons to the slider and menu: To reset the dropdown to a select box, I've added a reset button that calls the destroy method. If you have JavaScript code that dynamically changes the styling of the dropdown, the refresh method applies the Wijmo styles again. Summary The Wijmo dialog widget is an extension of the jQuery UI dialog. In this article, the features unique to Wijmo's dialog widget are explored and given emphasis. 
I showed you how to add custom buttons, how to change the dialog appearance, and how to load content from other URLs in the dialog. We also learned about Wijmo's form components. A checkbox is used when multiple items can be selected. Wijmo's checkbox widget has style enhancements over the default checkboxes. Radio buttons are used when only one item is to be selected. While jQuery UI only supports button sets on radio buttons, Wijmo's radio buttons are much more intuitive. Wijmo's dropdown widget should only be used when there are nested or categorized <select> options. The ComboBox comes with more features when the structure of the options is flat. Resources for Article: Further resources on this subject: Wijmo Widgets [Article] jQuery Animation: Tips and Tricks [Article] Building a Custom Version of jQuery [Article]
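To round off, the content-area buttons option shown earlier simply maps button labels to click handlers. A tiny sketch of assembling such a map programmatically (the labels and handlers are made up for illustration):

```javascript
// Build a `buttons` option object for wijdialog from label/handler pairs.
// Keys become the button labels shown in the dialog's content area.
function dialogButtons(pairs) {
  var buttons = {};
  pairs.forEach(function (pair) {
    buttons[pair.label] = pair.onClick;
  });
  return buttons;
}

var clicked = [];
var buttons = dialogButtons([
  { label: "ok",     onClick: function () { clicked.push("ok"); } },
  { label: "cancel", onClick: function () { clicked.push("cancel"); } }
]);
buttons.ok(); // simulate a click on the "ok" button
console.log(clicked.length); // 1
// In the browser: $('#dialog').wijdialog({ buttons: buttons });
```

Inside a real handler, this refers to the dialog element, so $(this).wijdialog('close') hides the dialog as shown earlier.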
Working with Different Types of Interactive Charts
Packt
30 Oct 2013
7 min read
(For more resources related to this topic, see here.) This article explains how to create and embed 2D and 3D charts. They can also be interactive or static and we will insert them into our Moodle courses. We will mainly work with several spreadsheets in order to include diverse tools and techniques that are also present. The main idea is to display data in charts and provide students with the necessary information for their activities. We will also work with a variety of charts and deal with statistics as a baseline topic in this article. We can either develop a chart or work with ready-to-use data. You can design these types of activities in your Moodle course, together with a math teacher. When thinking of statistics, we generally have in mind a picture of a chart and some percentages representing the data of the chart. We can change that paradigm and create a different way to draw and read statistics in our Moodle course. We design charts with drawings, map charts, links to websites, and other interesting items. We can also redesign the charts, comprising numbers, with different assets because we want not only to enrich, but also strengthen the diversity of the material for our Moodle course since some students are not keen on numbers and dislike activities with them. So, let's give another chance to statistics! There are different types of graphics to show statistics. Therefore, we show a variety of tools available to display different results. No matter what our subject is, we can include these types of graphics in our Moodle course. You can use these graphics to help your students give weight to their arguments and express themselves using key points clearly. We teach students to include graphics, read them, and use them as a tool of communication. We can also work with puzzles related to statistics. That is to say, we can invent a graph and give tips or clues to our students so that they can sort out which percentages belong to the chart. 
In other words, we can create a listening comprehension activity, a reading comprehension activity, or a math problem. We can just upload or embed the chart, create an appealing activity, and give clues to our students so that they can think of the items belonging to the chart. Inserting column charts In this activity, we work with the website http://populationaction.org/. We work with statistics about different topics that are related to each other. We can explore different countries and use several charts in order to draw conclusions. We can also embed the charts in our Moodle course. Getting ready We need to think of a country to work with. We can compare statistics of population, water, croplands, and forests of different countries in order to draw conclusions about their futures. How to do it... We go to the website mentioned earlier and follow some steps in order to get the HTML code to embed it in our Moodle course. In this case, we choose Canada. These are the steps to follow: Enter http://populationaction.org/ in the browser window. Navigate to Publications | Data & Maps. Click on People in the Balance. Click on the down arrow next to the Country or Region Name search block and choose Canada, as shown in the following screenshot: Go to the bottom of the page and click on Share. Copy the HTML code, as shown in the following screenshot: Click on Done. How it works... It is time to embed the charts in our Moodle course. Another option is to draw the charts using a spreadsheet. So, we choose the weekly outline section where we want to add this activity and perform the following steps: Click on Add an activity or resource. Click on Forum | Add. Complete the Forum name block. Click on the down arrow in Forum type and choose Q and A forum. Complete the Description block. Click on the Edit HTML source icon. Paste the HTML code that was copied. Click on Update. Click on the down arrow next to Subscription mode and choose Forced subscription. 
Click on Save and display. The activity looks as shown in the following screenshot: Embedding a line chart In this recipe, we will present the estimated number of people (in millions) using a particular language over the Internet. To do this, we may include images in our spreadsheet in accordance with the method being used to design the activity. Instead of writing the names of the languages, we insert the flags that represent the languages used. We design the line chart taking into account the statistical operations carried out at http://www.internetworldstats.com/stats7.htm. Getting ready We carry out the activity using Google Docs. We have to sign in and follow the steps required to design a spreadsheet file. We have several options for working with the document. After you have an account to work with Google Drive, let's see how to make our line chart! How to do it... We work with a spreadsheet because we need to make calculations and create a chart. First, we need to create a document in the spreadsheet. Therefore, we need to perform the following steps: Click on Create | Spreadsheet, as shown in the following screenshot: Write the names of the languages spoken in the A column. Write the figures in the B column (from the http://www.internetworldstats.com/stats7.htm website). Select the data from cell A1 up to cell B11. Click on Insert | Chart. Edit your chart using the Chart Editor, as shown in the following screenshot: Click on Insert. Add the images of the flags corresponding to the languages spoken. Position the cursor over C1 and click on Insert | Image.... Another pop-up window will appear. You have several ways to upload images, as shown in the following screenshot: Click on Choose an image to upload and insert the image from your computer. Click on Select. Repeat the same process for all the languages. Steps 7 to 11 are optional. Click on the chart. 
Click on the down arrow in Share | Publish chart..., as shown in the following screenshot: Click on the down arrow next to Select a public format and choose Image, as shown in the following screenshot: Copy the HTML code that appears, as shown in the previous screenshot. Click on Done. How it works... We have just designed the chart that we want our students to work with. We are going to embed the chart in our Moodle course; another option is to share the spreadsheet and allow students to draw the chart. If you want to design a warm-up activity for students to guess or find out which the top languages used over the Internet are, you could add a chat, forum, or a question in the course. In this recipe, we are going to create a wiki so that students can work together. So, select the weekly outline section where you want to add the activity and perform the following steps: Click on Add an activity or resource. Click on Wiki | Add. Complete the Wiki name and Description blocks. Click on the Edit HTML source icon and paste the HTML code that we have previously copied. Then click on Update. Complete the First page name block. Click on Save and return to course. The activity looks as shown in the following screenshot:
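Behind such a chart, the language figures are easier to discuss as shares of the total, which is the same calculation the spreadsheet performs. A short sketch (the numbers here are invented placeholders, not the actual internetworldstats.com data):

```javascript
// Compute each language's share (percent) of total Internet users.
// Figures are placeholder values in millions, not real statistics.
var usersByLanguage = { English: 565, Chinese: 510, Spanish: 165, Japanese: 99 };

function sharePercentages(data) {
  var total = Object.keys(data).reduce(function (sum, k) { return sum + data[k]; }, 0);
  var shares = {};
  Object.keys(data).forEach(function (k) {
    shares[k] = Math.round((data[k] / total) * 1000) / 10; // one decimal place
  });
  return shares;
}

var shares = sharePercentages(usersByLanguage);
console.log(shares);
```

Students can use a calculation like this to check that the percentages they read off the chart add up to roughly 100.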
What is Drupal?
Packt
30 Oct 2013
3 min read
(For more resources related to this topic, see here.) Currently, Drupal is being used as a CMS in the following domains: Arts, Banking and Financial, Beauty and Fashion, Blogging, Community, E-Commerce, Education, Entertainment, Government, Health Care, Legal Industry, Manufacturing and Energy, Media, Music, Non-Profit, Publishing, Social Networking, and Small Business. The diversity offered by Drupal is the reason for its growing popularity. Drupal is written in PHP. PHP is an open source server-side scripting language that has changed the technological landscape to a great extent. The Economist, Examiner.com, and The White House websites have been developed in Drupal. System requirements Disk space A minimum installation requires 15 megabytes. 60 MB is needed for a website with many contributed modules and themes installed. Keep in mind that you need much more for the database, files uploaded by the users, media, backups, and other files. Web server Apache, Nginx, or Microsoft IIS. Database Drupal 6: MySQL 4.1 or higher, PostgreSQL 7.1. Drupal 7: MySQL 5.0.15 or higher with PDO, PostgreSQL 8.3 or higher with PDO, SQLite 3.3.7 or higher. Microsoft SQL Server and Oracle are supported by additional modules. PHP Drupal 6: PHP 4.4.0 or higher (5.2 recommended). Drupal 7: PHP 5.2.5 or higher (5.3 recommended). Drupal 8: PHP 5.3.10 or higher. How to create multiple websites using Drupal One of the greatest features of Drupal is the multi-site feature. Multi-site allows you to share a single Drupal installation (including core code, contributed modules, and themes) among several sites. Using this feature, a single Drupal installation can be used for various websites, which is helpful in managing code during upgrades. Each site will have its own content, settings, enabled modules, and enabled theme. When to use the multi-site feature? If the sites are similar in functionality (they use the same modules or the same Drupal distribution), you should use the multi-site feature. 
If the functionality is different, don't use multi-site. To create a new site using a shared Drupal code base, you must complete the following steps: Create a new database for the site (if there is already an existing database, you can also use it by defining a prefix in the installation procedure). Create a new subdirectory of the 'sites' directory with the name of your new site (see below for information on how to name the subdirectory). Copy the file sites/default/default.settings.php into the subdirectory you created in the previous step. Rename the new file to settings.php. Adjust the permissions of the new site directory. Make symbolic links if you are using a subdirectory such as packtpub.com/subdir and not a subdomain such as subd.example.com. In a Web browser, navigate to the URL of the new site and continue with the standard Drupal installation procedure. Summary This article briefly discusses the Drupal platform and the requirements for installing it. Resources for Article: Further resources on this subject: Drupal Web Services: Twitter and Drupal [Article] Drupal and Ubercart 2.x: Install a Ready-made Drupal Theme [Article] Drupal 7 Module Development: Drupal's Theme Layer [Article]
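The subdirectory name in the steps above has to match the site's URL. As a rough sketch of how a name can be derived (simplified from Drupal's real lookup logic, which tries a whole series of progressively less specific candidates before falling back to sites/default), the most specific candidate joins the port, hostname, and path segments with dots:

```javascript
// Simplified sketch: derive a multisite settings directory name from a
// URL as port.host.path-segments joined by dots. Real Drupal tries many
// progressively less specific variants of this name.
function siteDirCandidate(urlString) {
  var m = urlString.match(/^https?:\/\/([^\/:]+)(?::(\d+))?(\/[^?#]*)?/);
  if (!m) throw new Error("unsupported URL: " + urlString);
  var host = m[1];
  var port = m[2];
  var path = (m[3] || "").split("/").filter(Boolean);
  var parts = [];
  if (port) parts.push(port);
  parts.push(host);
  return parts.concat(path).join(".");
}

console.log(siteDirCandidate("http://subd.example.com/"));   // "subd.example.com"
console.log(siteDirCandidate("http://packtpub.com/subdir")); // "packtpub.com.subdir"
```

Consult the comments in default.settings.php for the authoritative naming rules before relying on this.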
APEX Plug-ins
Packt
30 Oct 2013
17 min read
(For more resources related to this topic, see here.) In APEX 4.0, Oracle introduced the plug-in feature. A plug-in is an extension to the existing functionality of APEX. The idea behind plug-ins is to make life easier for developers. Plug-ins are reusable, and can be exported and imported. In this way it is possible to create functionality which is available to all APEX developers, who can install and use it without needing to know what's inside the plug-in. APEX translates settings from the APEX builder to HTML and JavaScript. For example, if you created a text item in the APEX builder, APEX converts this to the following code (simplified): <input type="text" id="P12_NAME" name="P12_NAME" value="your name"> When you create an item type plug-in, you actually take over this conversion task from APEX, and you generate the HTML and JavaScript code yourself using PL/SQL procedures. That offers a lot of flexibility, because now you can make this code generic so that it can be used for more items. The same goes for region type plug-ins. A region is a container for forms, reports, and so on. The region can be a div or an HTML table. By creating a region type plug-in, you create a region yourself with the possibility to add more functionality to the region. Plug-ins are very useful because they are reusable in every application. To make a plug-in available, go to Shared Components | Plug-ins, and click on the Export Plug-in link on the right-hand side of the page. Select the desired plug-in and file format and click on the Export Plug-in button. The plug-in can then be imported into another application. Following are the six types of plug-in: Item type plug-ins Region type plug-ins Dynamic action plug-ins Process type plug-ins Authorization scheme type plug-ins Authentication scheme type plug-ins In this article we will discuss the first five types of plug-ins. 
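To get a feel for what an item type plug-in must produce, the conversion described above can be mimicked outside APEX. This JavaScript sketch (the real work happens in PL/SQL, as shown later) builds the same simplified markup for a given item name and value:

```javascript
// Mimic the simplified HTML that APEX emits for a text item.
// For illustration only; inside APEX this string is produced by PL/SQL.
function renderTextItem(name, value) {
  var escaped = String(value).replace(/&/g, "&amp;").replace(/"/g, "&quot;")
                             .replace(/</g, "&lt;").replace(/>/g, "&gt;");
  return '<input type="text" id="' + name + '" name="' + name + '" value="' + escaped + '">';
}

console.log(renderTextItem("P12_NAME", "your name"));
// <input type="text" id="P12_NAME" name="P12_NAME" value="your name">
```

A plug-in generalizes exactly this step: one piece of code emits the markup for every item that uses it.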
Creating an item type plug-in

In an item type plug-in you create an item with the possibility to extend its functionality. To demonstrate this, we will make a text field with a tooltip. This functionality is already available in APEX 4.0 by adding the following code to the HTML form element attributes text field in the Element section of the text field:

onmouseover="toolTip_enable(event,this,'A tooltip')"

But you have to do this for every item that should contain a tooltip. This can be done more easily by creating an item type plug-in with a built-in tooltip. When you then create an item of this plug-in type, you will be asked to enter some text for the tooltip.

Getting ready

For this recipe you can use an existing page, with a region in which you can put some text items.

How to do it...

Follow these steps:

- Go to Shared Components | User Interface | Plug-ins.
- Click on the Create button.
- In the Name section, enter a name in the Name text field. In this case we enter tooltip.
- In the Internal Name text field, enter an internal name. It is advised to use the company's domain address reversed to ensure the name is unique when you decide to share this plug-in. So, for example, you can use com.packtpub.apex.tooltip.
- In the Source section, enter the following code in the PL/SQL Code text area:

function render_simple_tooltip (
    p_item                in apex_plugin.t_page_item,
    p_plugin              in apex_plugin.t_plugin,
    p_value               in varchar2,
    p_is_readonly         in boolean,
    p_is_printer_friendly in boolean )
return apex_plugin.t_page_item_render_result
is
    l_result apex_plugin.t_page_item_render_result;
begin
    if apex_application.g_debug then
        apex_plugin_util.debug_page_item (
            p_plugin              => p_plugin,
            p_page_item           => p_item,
            p_value               => p_value,
            p_is_readonly         => p_is_readonly,
            p_is_printer_friendly => p_is_printer_friendly);
    end if;
    --
    sys.htp.p('<input type="text" id="'||p_item.name||'" name="'||p_item.name||'" class="text_field" onmouseover="toolTip_enable(event,this,'||''''||p_item.attribute_01||''''||')">');
    --
    return l_result;
end render_simple_tooltip;

This function uses the sys.htp.p function to put a text item (<input type="text">) on the screen. On the text item, the onmouseover event calls the function toolTip_enable(). This function is an APEX function, and can be used to put a tooltip on an item. The arguments of the function are mandatory. The function starts with the option to show debug information. This can be very useful when you create a plug-in and it doesn't work. After the debug information, the htp.p function puts the text item on the screen, including the call to toolTip_enable. You can also see that the call to toolTip_enable uses p_item.attribute_01. This is a parameter that you can use to pass a value to the plug-in; we will set it in the following steps of this recipe. The function ends with the return of l_result. This variable is of the type apex_plugin.t_page_item_render_result. For the other types of plug-in there are dedicated return types also, for example t_region_render_result.

- Click on the Create Plug-in button.

The next step is to define the parameter (attribute) for this plug-in.

- In the Custom Attributes section, click on the Add Attribute button.
- In the Name section, enter a name in the Label text field, for example tooltip. Ensure that the Attribute text field contains the value 1.
- In the Settings section, set the Type field to Text.
- Click on the Create button.
- In the Callbacks section, enter render_simple_tooltip into the Render Function Name text field.
- In the Standard Attributes section, check the Is Visible Widget checkbox.
- Click on the Apply Changes button.

The plug-in is now ready. The next step is to create an item that uses the tooltip plug-in.

- Go to a page with a region where you want to use an item with a tooltip.
- In the Items section, click on the add icon to create a new item.
- Select Plug-ins. Now you will get a list of the available plug-ins. Select the one we just created, that is, tooltip. Click on Next.
- In the Item Name text field, enter a name for the item, for example, tt_item. In the Region drop-down list, select the region you want to put the item in. Click on Next.
- In the next step you will get a new option: the attribute you created with the plug-in. Enter the tooltip text here, for example, This is tooltip text. Click on Next.
- In the last step, leave everything as it is and click on the Create Item button.

You are now ready. Run the page. When you move your mouse pointer over the new item, you will see the tooltip.

How it works...

As stated before, this plug-in actually uses the function htp.p to put an item on the screen. Together with the call to the JavaScript function toolTip_enable on the onmouseover event, this makes it a text item with a tooltip, replacing the normal text item.

There's more...

The tooltips shown in this recipe are rather simple. You could make them look better, for example, by using the Beautytips tooltips. Beautytips is an extension to jQuery and can show configurable help balloons. Visit http://plugins.jquery.com to download Beautytips. We downloaded Version 0.9.5-rc1 to use in this recipe.

- Go to Shared Components and click on the Plug-ins link.
- Click on the tooltip plug-in you just created.
- In the Source section, replace the code with the following code:

function render_simple_tooltip (
    p_item                in apex_plugin.t_page_item,
    p_plugin              in apex_plugin.t_plugin,
    p_value               in varchar2,
    p_is_readonly         in boolean,
    p_is_printer_friendly in boolean )
return apex_plugin.t_page_item_render_result
is
    l_result apex_plugin.t_page_item_render_result;
begin
    if apex_application.g_debug then
        apex_plugin_util.debug_page_item (
            p_plugin              => p_plugin,
            p_page_item           => p_item,
            p_value               => p_value,
            p_is_readonly         => p_is_readonly,
            p_is_printer_friendly => p_is_printer_friendly);
    end if;

The function also starts with the debug option to see what happens when something goes wrong.

    -- Register the JavaScript and CSS libraries the plug-in uses.
    apex_javascript.add_library (
        p_name      => 'jquery.bgiframe.min',
        p_directory => p_plugin.file_prefix,
        p_version   => null );
    apex_javascript.add_library (
        p_name      => 'jquery.bt.min',
        p_directory => p_plugin.file_prefix,
        p_version   => null );
    apex_javascript.add_library (
        p_name      => 'jquery.easing.1.3',
        p_directory => p_plugin.file_prefix,
        p_version   => null );
    apex_javascript.add_library (
        p_name      => 'jquery.hoverintent.minified',
        p_directory => p_plugin.file_prefix,
        p_version   => null );
    apex_javascript.add_library (
        p_name      => 'excanvas',
        p_directory => p_plugin.file_prefix,
        p_version   => null );

After that you see a number of calls to the function apex_javascript.add_library. These libraries are necessary to enable these nice tooltips. Using apex_javascript.add_library ensures that a JavaScript library is included in the final HTML of a page only once, regardless of how many plug-in items appear on that page.
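As a side note, the once-only behavior of apex_javascript.add_library can be pictured with a small JavaScript sketch. The addLibrary function, its registry, and the /plugin_files/ prefix below are illustrative assumptions for this example, not APEX internals:

```javascript
// Illustrative sketch of once-only script inclusion, mirroring what
// apex_javascript.add_library guarantees on the server side.
const addedLibraries = new Set();

function addLibrary(name, directory) {
  // Skip libraries that have already been emitted for this page.
  if (addedLibraries.has(name)) {
    return null;
  }
  addedLibraries.add(name);
  return '<script src="' + directory + name + '.js"></script>';
}

// Two plug-in items on the same page request the same library;
// only the first request produces a script tag.
const first = addLibrary('jquery.bt.min', '/plugin_files/');
const second = addLibrary('jquery.bt.min', '/plugin_files/');
```

This is why a page with ten tooltip items still loads each Beautytips library only once.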
    sys.htp.p('<input type="text" id="'||p_item.name||'" class="text_field" title="'||p_item.attribute_01||'">');
    --
    apex_javascript.add_onload_code (p_code =>
        '$("#'||p_item.name||'").bt({
              padding: 20
            , width: 100
            , spikeLength: 40
            , spikeGirth: 40
            , cornerRadius: 50
            , fill: '||''''||'rgba(200, 50, 50, .8)'||''''||'
            , strokeWidth: 4
            , strokeStyle: '||''''||'#E30'||''''||'
            , cssStyles: {color: '||''''||'#FFF'||''''||', fontWeight: '||''''||'bold'||''''||'}
        });');
    --
    return l_result;
end render_simple_tooltip;

Another difference from the first code is the call to the Beautytips library. In this call you can customize the text balloon with colors and other options. The onmouseover event is not necessary anymore, as the call to $().bt in apex_javascript.add_onload_code takes over this task. The $().bt function is a jQuery JavaScript function which references the generated HTML of the plug-in item by ID, and converts it dynamically to show a tooltip using the Beautytips plug-in. You can of course always create extra plug-in item type parameters to support different colors and so on per item.

To add the other libraries, do the following:

- In the Files section, click on the Upload new file button.
- Enter the path and the name of the library. You can use the file button to locate the libraries on your filesystem.
- Once you have selected the file, click on the Upload button.

The files and their locations can be found in the following table:

Library                          Location
jquery.bgiframe.min.js           bt-0.9.5-rc1\other_libs\bgiframe_2.1.1
jquery.bt.min.js                 bt-0.9.5-rc1
jquery.easing.1.3.js             bt-0.9.5-rc1\other_libs
jquery.hoverintent.minified.js   bt-0.9.5-rc1\other_libs
excanvas.js                      bt-0.9.5-rc1\other_libs\excanvas_r3

If all libraries have been uploaded, the plug-in is ready. The tooltip now looks quite different, as shown in the following screenshot:

In the plug-in settings, you can enable some item-specific settings.
For example, if you want to put a label in front of the text item, check the Is Visible Widget checkbox in the Standard Attributes section. For more information on this tooltip, go to http://plugins.jquery.com/project/bt.

Creating a region type plug-in

As you may know, a region is actually a div. With the region type plug-in you can customize this div. And because it is a plug-in, you can reuse it in other pages. You also have the possibility to make the div look better by using JavaScript libraries. In this recipe we will make a carousel with switching panels. The panels can contain images, but they can also contain data from a table. We will make use of another jQuery extension, Step Carousel.

Getting ready

You can download stepcarousel.js from http://www.dynamicdrive.com/dynamicindex4/stepcarousel.htm. However, in order to get this recipe to work in APEX, we needed to make a slight modification to it. So, stepcarousel.js, arrowl.gif, and arrowr.gif are included in this book.

How to do it...

Follow the given steps to create the plug-in:

- Go to Shared Components and click on the Plug-ins link.
- Click on the Create button.
- In the Name section, enter a name for the plug-in in the Name field. We will use Carousel.
- In the Internal Name text field, enter a unique internal name. It is advised to use your domain reversed, for example com.packtpub.carousel.
- In the Type listbox, select Region.
- In the Source section, enter the following code in the PL/SQL Code text area:

function render_stepcarousel (
    p_region              in apex_plugin.t_region,
    p_plugin              in apex_plugin.t_plugin,
    p_is_printer_friendly in boolean )
return apex_plugin.t_region_render_result
is
    cursor c_crl is
        select id
        ,      panel_title
        ,      panel_text
        ,      panel_text_date
        from   app_carousel
        order  by id;
    --
    l_code varchar2(32767);
begin

The function starts with a number of arguments. These arguments are mandatory, but have a default value. In the declare section there is a cursor with a query on the table APP_CAROUSEL.
This table contains the data that will appear in the panels in the carousel.

    --
    -- Add the libraries and stylesheets
    --
    apex_javascript.add_library (
        p_name      => 'stepcarousel',
        p_directory => p_plugin.file_prefix,
        p_version   => null );
    --
    -- Output the placeholder for the region which is used by
    -- the JavaScript code

The actual code starts with the declaration of stepcarousel.js. There is a function, APEX_JAVASCRIPT.ADD_LIBRARY, to load this library. This declaration is necessary, but the file also needs to be uploaded in a later step. You don't have to use the extension .js here in the code.

    --
    sys.htp.p('<style type="text/css">');
    --
    sys.htp.p('.stepcarousel{');
    sys.htp.p('position: relative;');
    sys.htp.p('border: 10px solid black;');
    sys.htp.p('overflow: scroll;');
    sys.htp.p('width: '||p_region.attribute_01||'px;');
    sys.htp.p('height: '||p_region.attribute_02||'px;');
    sys.htp.p('}');
    --
    sys.htp.p('.stepcarousel .belt{');
    sys.htp.p('position: absolute;');
    sys.htp.p('left: 0;');
    sys.htp.p('top: 0;');
    sys.htp.p('}');
    --
    sys.htp.p('.stepcarousel .panel{');
    sys.htp.p('float: left;');
    sys.htp.p('overflow: hidden;');
    sys.htp.p('margin: 10px;');
    sys.htp.p('width: 250px;');
    sys.htp.p('}');
    --
    sys.htp.p('</style>');

After the loading of the JavaScript library, some style elements are put on the screen. The style elements could have been put in a Cascading Style Sheet (CSS), but since we want to be able to adjust the size of the carousel, we use two parameters to set the height and width. And the height and the width are part of the style elements.
    --
    sys.htp.p('<div id="mygallery" class="stepcarousel" style="overflow:hidden"><div class="belt">');
    --
    for r_crl in c_crl loop
        sys.htp.p('<div class="panel">');
        sys.htp.p('<b>'||to_char(r_crl.panel_text_date,'DD-MON-YYYY')||'</b>');
        sys.htp.p('<br>');
        sys.htp.p('<b>'||r_crl.panel_title||'</b>');
        sys.htp.p('<hr>');
        sys.htp.p(r_crl.panel_text);
        sys.htp.p('</div>');
    end loop;
    --
    sys.htp.p('</div></div>');

The next command in the script is the actual creation of a div. Important here is the name of the div and the class. The Step Carousel searches for these identifiers and replaces the div with the stepcarousel. The next step in the function is the fetching of the rows from the query in the cursor. For every row found, the formatted text is placed between the div tags. This is done so that Step Carousel recognizes that the text should be placed on the panels.

    --
    -- Add the onload code to show the carousel
    --
    l_code := 'stepcarousel.setup({
        galleryid: "mygallery"
        ,beltclass: "belt"
        ,panelclass: "panel"
        ,autostep: {enable:true, moveby:1, pause:3000}
        ,panelbehavior: {speed:500, wraparound:true, persist:true}
        ,defaultbuttons: {enable: true, moveby: 1, leftnav:["'||p_plugin.file_prefix||'arrowl.gif", -5, 80], rightnav:["'||p_plugin.file_prefix||'arrowr.gif", -20, 80]}
        ,statusvars: ["statusA", "statusB", "statusC"]
        ,contenttype: ["inline"]})';
    --
    apex_javascript.add_onload_code (p_code => l_code);
    --
    return null;
end render_stepcarousel;

The function ends with the call to apex_javascript.add_onload_code. Here the actual code for the stepcarousel starts, and you can customize the carousel: the size, rotation speed, and so on.

- In the Callbacks section, enter the name of the function in the Return Function Name text field. In this case it is render_stepcarousel.
- Click on the Create Plug-in button.
- In the Files section, upload the stepcarousel.js, arrowl.gif, and arrowr.gif files.

For this purpose, the file stepcarousel.js has a little modification in it.
In the last section (setup:function), document.write is used to add some style to the div tag. Unfortunately, this will not work in APEX, as document.write destroys the rest of the output. So, after the call, APEX has nothing left to show, resulting in an empty page. document.write needs to be removed, and the following style attribute needs to be added in the code of the plug-in:

sys.htp.p('<div id="mygallery" class="stepcarousel" style="overflow: hidden;"><div class="belt">');

In this line of code you see style="overflow: hidden;". That is what actually had to be taken over from stepcarousel.js. This setting hides the scrollbars.

- After you have uploaded the files, click on the Apply Changes button.

The plug-in is ready and can now be used in a page.

- Go to the page where you want this stepcarousel to be shown.
- In the Regions section, click on the add icon.
- In the next step, select Plug-ins.
- Select Carousel. Click on Next.
- Enter a title for this region, for example Newscarousel. Click on Next.
- In the next step, enter the height and the width of the carousel. To show a carousel with three panels, enter 800 in the Width text field. Enter 100 in the Height text field. Click on Next.
- Click on the Create Region button.

The plug-in is ready. Run the page to see the result.

How it works...

The stepcarousel is actually a div. The region type plug-in uses the function sys.htp.p to put this div on the screen. In this example, a div is used for the region, but an HTML table can be used as well. An APEX region can contain any HTML output, but for positioning, mostly an HTML table or a div is used, especially when layout is important within the region. The call to apex_javascript.add_onload_code starts the animation of the carousel. The carousel switches panels every 3 seconds. This can be adjusted (pause: 3000).

See also

For more information on this jQuery extension, go to http://www.dynamicdrive.com/dynamicindex4/stepcarousel.htm.
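The autostep and wraparound options configured above boil down to stepping a panel index that wraps around at both ends. A minimal JavaScript sketch of that behavior (nextPanel is an illustrative name, not part of Step Carousel):

```javascript
// Illustrative sketch of the stepping configured by
// autostep: {moveby: 1} and panelbehavior: {wraparound: true}.
function nextPanel(current, moveBy, panelCount) {
  // Double modulo keeps the result in range for negative steps too.
  return ((current + moveBy) % panelCount + panelCount) % panelCount;
}

// With three panels, stepping forward from the last panel wraps
// around to the first, and stepping back from the first wraps to the last.
const forward = nextPanel(2, 1, 3);   // 0
const backward = nextPanel(0, -1, 3); // 2
```

The pause of 3000 milliseconds then simply means this step function is applied on a 3-second timer.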
Packt
29 Oct 2013
7 min read
Save for later

Creating and Using Composer Packages

(For more resources related to this topic, see here.)

Using Bundles

One of the great features of Laravel is the ease with which we can include the class libraries that others have made using bundles. On the Laravel site, there are already many useful bundles, some of which automate certain tasks while others easily integrate with third-party APIs. A recent addition to the PHP world is Composer, which allows us to use libraries (or packages) that aren't specific to Laravel. In this article, we'll get up and running with using bundles, and we'll even create our own bundle that others can download. We'll also see how to incorporate Composer into our Laravel installation to open up a wide range of PHP libraries that we can use in our application.

Downloading and installing packages

One of the best features of Laravel is how modular it is. Most of the framework is built using libraries, or packages, that are well tested and widely used in other projects. By using Composer for dependency management, we can easily include other packages and seamlessly integrate them into our Laravel app. For this recipe, we'll be installing two popular packages into our app: Jeffrey Way's Laravel 4 Generators and the Imagine image processing package.

Getting ready

For this recipe, we need a standard installation of Laravel using Composer.

How to do it...

For this recipe, we will follow these steps:

- Go to https://packagist.org/.
- In the search box, search for way generator as shown in the following screenshot:
- Click on the link for way/generators:
- View the details at https://packagist.org/packages/way/generators and take notice of the require line to get the package's version. For our purposes, we'll use "way/generators": "1.0.*".
- In our application's root directory, open up the composer.json file and add in the package to the require section so it looks like this:

"require": {
    "laravel/framework": "4.0.*",
    "way/generators": "1.0.*"
},

- Go back to http://packagist.org and perform a search for imagine as shown in the following screenshot:
- Click on the link to imagine/imagine and copy the require code for dev-master:
- Go back to our composer.json file and update the require section to include the imagine package. It should now look similar to the following code:

"require": {
    "laravel/framework": "4.0.*",
    "way/generators": "1.0.*",
    "imagine/imagine": "dev-master"
},

- Open the command line, and in the root of our application, run the Composer update as follows:

php composer.phar update

- Finally, we'll add the Generator Service Provider, so open the app/config/app.php file and in the providers array, add the following line:

'Way\Generators\GeneratorsServiceProvider'

How it works...

To get our package, we first go to packagist.org and search for the package we want. We could also click on the Browse packages link. It will display a list of the most recent packages as well as the most popular. After clicking on the package we want, we'll be taken to the detail page, which lists various links including the package's repository and home page. We could also click on the package's maintainer link to see other packages they have released. Underneath, we'll see the various versions of the package. If we open that version's detail page, we'll find the code we need to use for our composer.json file. We could either choose to use a strict version number, add a wildcard to the version, or use dev-master, which will install whatever is updated on the package's master branch. For the Generators package, we'll only use Version 1.0, but allow any minor fixes to that version.
For the imagine package, we'll use dev-master, so whatever is in their repository's master branch will be downloaded, regardless of version number. We then run update on Composer and it will automatically download and install all of the packages we chose. Finally, to use Generators in our app, we need to register the service provider in our app's config file.

Using the Generators package to set up an app

Generators is a popular Laravel package that automates quite a bit of file creation. In addition to controllers and models, it can also generate views, migrations, seeds, and more, all through a command-line interface.

Getting ready

For this recipe, we'll be using the Laravel 4 Generators package maintained by Jeffrey Way that was installed in the Downloading and installing packages recipe. We'll also need a properly configured MySQL database.

How to do it…

Follow these steps for this recipe:

- Open the command line in the root of our app and, using the generator, create a scaffold for our cities as follows:

php artisan generate:scaffold cities --fields="city:string"

- In the command line, create a scaffold for our superheroes as follows:

php artisan generate:scaffold superheroes --fields="name:string, city_id:integer:unsigned"

- In our project, look in the app/database/seeds directory and find a file named CitiesTableSeeder.php.
- Open it and add some data to the $cities array as follows:

<?php

class CitiesTableSeeder extends Seeder {

    public function run()
    {
        DB::table('cities')->delete();

        $cities = array(
            array(
                'id'         => 1,
                'city'       => 'New York',
                'created_at' => date('Y-m-d g:i:s', time())
            ),
            array(
                'id'         => 2,
                'city'       => 'Metropolis',
                'created_at' => date('Y-m-d g:i:s', time())
            ),
            array(
                'id'         => 3,
                'city'       => 'Gotham',
                'created_at' => date('Y-m-d g:i:s', time())
            )
        );

        DB::table('cities')->insert($cities);
    }
}

- In the app/database/seeds directory, open SuperheroesTableSeeder.php and add some data to it:

<?php

class SuperheroesTableSeeder extends Seeder {

    public function run()
    {
        DB::table('superheroes')->delete();

        $superheroes = array(
            array(
                'name'       => 'Spiderman',
                'city_id'    => 1,
                'created_at' => date('Y-m-d g:i:s', time())
            ),
            array(
                'name'       => 'Superman',
                'city_id'    => 2,
                'created_at' => date('Y-m-d g:i:s', time())
            ),
            array(
                'name'       => 'Batman',
                'city_id'    => 3,
                'created_at' => date('Y-m-d g:i:s', time())
            ),
            array(
                'name'       => 'The Thing',
                'city_id'    => 1,
                'created_at' => date('Y-m-d g:i:s', time())
            )
        );

        DB::table('superheroes')->insert($superheroes);
    }
}

- In the command line, run the migration, then seed the database as follows:

php artisan migrate
php artisan db:seed

- Open up a web browser and go to http://{your-server}/cities. We will see our data as shown in the following screenshot:
- Now, navigate to http://{your-server}/superheroes and we will see our data as shown in the following screenshot:

How it works...

We begin by running the scaffold generator for our cities and superheroes tables. Using the --fields tag, we can determine which columns we want in our table and also set options such as data type. For our cities table, we'll only need the name of the city. For our superheroes table, we'll want the name of the hero as well as the ID of the city where they live. When we run the generator, many files will automatically be created for us.
For example, with cities, we'll get City.php in our models, CitiesController.php in controllers, and a cities directory in our views with the index, show, create, and edit views. We then get a migration named Create_cities_table.php, a CitiesTableSeeder.php seed file, and CitiesTest.php in our tests directory. We'll also have our DatabaseSeeder.php file and our routes.php file updated to include everything we need. To add some data to our tables, we opened the CitiesTableSeeder.php file and updated our $cities array with arrays that represent each row we want to add. We did the same thing for our SuperheroesTableSeeder.php file. Finally, we run the migrations and seeder and our database will be created and all the data will be inserted. The Generators package has already created the views and controllers we need to manipulate the data, so we can easily go to our browser and see all of our data. We can also create new rows, update existing rows, and delete rows.
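The constraint styles used in this recipe (an exact version, a wildcard such as 1.0.*, and dev-master) can be sketched as a toy matcher. This is a deliberate simplification for illustration; Composer's real resolver supports far richer constraints:

```javascript
// Simplified sketch of how the composer.json constraints in this
// recipe behave. Only the three styles used above are covered.
function satisfies(version, constraint) {
  if (constraint === 'dev-master') {
    // dev-master tracks the repository's master branch, not a tagged release.
    return version === 'dev-master';
  }
  if (constraint.endsWith('.*')) {
    // "1.0.*" accepts any patch release on the 1.0 line.
    return version.startsWith(constraint.slice(0, -1));
  }
  // A strict constraint matches only that exact version.
  return version === constraint;
}

const patchOk = satisfies('1.0.4', '1.0.*'); // true: minor fix on the 1.0 line
const minorNo = satisfies('1.1.0', '1.0.*'); // false: outside the 1.0 line
```

This is why "way/generators": "1.0.*" picks up bug-fix releases automatically while never jumping to a 1.1 release.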
Packt
29 Oct 2013
10 min read
Save for later

RESS – The idea and the Controversies

(For more resources related to this topic, see here.)

The RWD concept appeared first in 2010 in an article by Ethan Marcotte (available at http://alistapart.com/article/responsive-web-design). He presented an approach that allows us to progressively enhance page design within different viewing contexts with the help of fluid grids, flexible images, and media queries. This approach was opposed to the one that separates websites geared toward specific devices. Instead of two or more websites (desktop and mobile), we could have one that adapts to all devices. The technical foundation of RWD (as proposed in Marcotte's article) consists of three things: fluid grids, flexible images, and media queries.

Illustration: Fluid (and responsive) grid adapts to device using both column width and column count

A fluid grid is basically nothing more than a concept of dividing the monitor width into modular columns, often accompanied by some kind of a CSS framework (some of the best-known examples were the 960 grid system, blueprint, pure, 1140px grid, and elastic), that is, a base stylesheet that simplifies and standardizes writing website-specific CSS. What makes it fluid is the use of relative measurements like %, em, or rem. As the screen (or the window) changes, the number of these columns changes (thanks to CSS statements enclosed in media queries). This allows us to adjust the design layout to device capabilities (screen width and pixel density in particular). Images in such a layout become fluid by using a simple technique of setting width: x% or max-width: 100% in CSS, which causes the image to scale proportionally. With those two methods and a little help from media queries, one can radically change the page layout and handle this enormous, up to 800 percent, difference between the thinnest and the widest screen (WQXGA's 2560px/iPhone's 320px).
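The way media queries switch the column count at breakpoints can be pictured as a pure function of viewport width. The breakpoints and column counts below are illustrative assumptions for this sketch, not values from any particular grid framework:

```javascript
// Illustrative mapping from viewport width (in CSS pixels) to grid
// column count, mirroring what a set of min-width media queries does.
function columnsFor(viewportWidth) {
  if (viewportWidth >= 960) return 12; // wide desktop layout
  if (viewportWidth >= 768) return 8;  // tablet layout
  if (viewportWidth >= 480) return 4;  // large phone layout
  return 1;                            // narrow phone layout
}

const phone = columnsFor(320);    // 1
const desktop = columnsFor(2560); // 12
```

In real CSS the same decision is expressed declaratively with @media blocks; the point is that one stylesheet serves every width.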
This is a big step forward and a good base to start creating One Web, that is, to use one URL to deliver content to all the devices. Unfortunately, that is not enough to achieve results that would provide an equally great experience and fast-loading websites for everybody.

The RESS idea

Besides screen width, we may need to take into account other things such as bandwidth and pay-per-bandwidth plans, processor speed, available memory, level of HTML/CSS compatibility, monitor color depth, and possible navigation methods (touch screen, buttons, and keyboard). On a practical level, it means we may have to optimize images and navigation patterns, and reduce page complexity for some devices. To make this possible, some Server Side solutions need to be engaged. We may use Server Side just for optimizing images. Server Side optimization lets us send pages with just some elements adjusted, or a completely changed page; we can rethink the application structure to build a RESTful web interface and turn our Server Side application into a web service. The more we need to place responsibility for device optimization on the Server Side, the closer we get to the old way of disparate desktop and mobile webs' separate mobile domains, or iPhone, Android, or Windows applications. There are many ways to build responsive websites, but there is no golden rule to tell you which way is the best. It depends on the target audience, technical contexts, money, and time. Ultimately, the way to be chosen depends on the business decisions of the website owner. When we decide to employ Server Side logic to optimize components of a web page designed in a responsive way, we are going the RESS (Responsive Web Design with Server Side components) way. RESS was proposed by Luke Wroblewski on his blog as a result of his experiences on extending RWD with Server Side components.
Essentially, the idea was based on storing IDs of resources (such as images) and serving different versions of the same resource, optimized for some defined classes of devices. Device detection and assignment of devices to their respective classes can be based on libraries such as WURFL or YABFDL.

Controversies

It is worth noting that both of these approaches raised many controversies. Introducing RWD has broken some long-established rules or habits, such as standard screen width (the famous 960px maximum page width limit). It has put in question the long-practiced ways of dealing with the mobile web (such as separate desktop and mobile websites). It is no surprise that it raises both delight and rage. One can easily find people calling this fool's gold, useless, too difficult, a fad, amazing, future proof, and so on. Each of those opinions has a reason behind it, for better or worse. A glimpse of the following opinions may help us understand some of the key benefits and issues related to RWD.

"Separate mobile websites are a good thing"
At the same time, making a separate mobile website introduces its own problems and requires significant additional investment that can easily get to tens or hundreds of times more than the RWD solution (detecting devices, changing application logic, writing separate templates, integrating, and testing the whole thing). Also, at the end of the day, your visitors may prefer the mobile version, but this doesn't have to be the case. Users are often accessing the same content via various devices and providing consistent experience across all of them becomes more and more important. The preceding controversy is just a part of a wider discussion on channels to provide content on the Internet. RWD and RESS are relatively new kids on the block. For years, technologies to provide content for mobile devices were being built and used, from device-detection libraries to platform-specific applications (such as iStore, Google Play, and MS). When, in 2010, US smartphone users started to spend more time using their mobile apps than browsers (Mobile App Usage Further Dominates Web, Spurred by Facebook, at http://blog.flurry.com/bid/80241/Mobile-App-Usage-Further-Dominates-Web-Spurred-by-Facebook), some hailed it as dangerous for the Web (Apps: The Web Is The Platform, available at http://blog.mozilla.org/webdev/2012/09/14/apps-the-web-is-the-platform/). A closer look at stats reveals though, that most of this time was spent on playing games. No matter how much time kids can spend playing Angry Birds now, after more than two years from then, people still prefer to read the news via a browser rather than via native mobile applications. The Future of Mobile News report from October 2012 reveals that for accessing news, 61 percent mobile users prefer a browser while 28 percent would rather use apps (Future of Mobile News, http://www.journalism.org/analysis_report/future_mobile_news). 
The British government is not keen on apps either, as they say, "Our position is that native apps are rarely justified" (UK Digital Cabinet Office blog, at http://digital.cabinetoffice.gov.uk/2013/03/12/were-not-appy-not-appy-at-all/). Recently, Tim Berners-Lee, the inventor of the Web, criticized closed-world apps such as those released by Apple for threatening the openness and universality that the architects of the Internet saw as central to its design. He explains it the following way, "When you make a link, you can link to anything. That means people must be able to put anything on the Web, no matter what computer they have, what software they use, or which human language they speak and regardless of whether they have a wired or a wireless Internet connection." This kind of thinking goes in line with the RWD/RESS philosophy to have one URL for the same content, no matter what way you'd like to access it. Nonetheless, it is just one of the reasons why RWD became so popular during the last year.

"RWD is too difficult"

CSS coupled with JS can get really complex (some would say messy) and requires a lot of testing on all target browsers/platforms. That is, or was, true. Building RWD websites requires good CSS knowledge and some battlefield experience in this field. But hey, learning is the most important skill in this industry. It actually gets easier and easier, with new tools released nearly every week.

"RWD means degrading design"

Fluid layouts break the composition of the page; Mobile First and Progressive Enhancement mean, in fact, reducing design to a few simplistic and naive patterns. Actually, the Mobile First concept contains two concepts. One is design direction and the second is the structure of CSS stylesheets, in particular the order of media queries. With regard to design direction, the Mobile First concept is meant to describe the sequence of designs. First the design for a mobile should be created, and then for a desktop.
While there are several good reasons for using this approach, one should never forget the basic truth that, at the end of the day, only the quality of the designs matters, not the order they were created in.

With regard to the stylesheet structure, Mobile First means that we first write statements for small screens and then add statements for wider screens, such as @media screen and (min-width: 480px). It is a design principle meant to simplify the whole thing. It is assumed here that the CSS for small screens is the simplest version, which will be progressively enhanced for larger screens. The idea is smart and helps to maintain well-structured CSS, but sometimes the opposite, the Desktop First approach, seems natural. Typical examples are tables with many columns. The Mobile First principle is not a religious dogma and should not be treated as such. As a side note, it remains an open question why this is still named Mobile First when the statements for the new iPad would come at the end (min-width: 2000px).

There are some examples of rather poor designs made by RWD celebrities. But there are also examples of great designs that happened thanks to the freedom that RWD gave to the web design world.

The rapid increase in Internet access via mobile devices during 2012 made RWD one of the hottest topics in web design. The numbers vary across countries and websites, but no matter what numbers you look at, one thing is certain: mobile is already big and will soon get even bigger (valuable stats on mobile use are available at http://www.thinkwithgoogle.com/mobileplanet/en/). Statistics are not the only reason why Responsive Web Design became popular. Equally important are the benefits for web designers, users, website owners, and developers.

Summary

This article covered the RESS idea, as well as the controversies associated with it.
Understanding WebSockets and Server-sent Events in Detail

Packt
29 Oct 2013
10 min read
Encoders and decoders in Java API for WebSockets

As seen in the previous chapter, the class-level annotation @ServerEndpoint indicates that a Java class is a WebSocket endpoint at runtime. The value attribute is used to specify a URI mapping for the endpoint. Additionally, the user can add encoder and decoder attributes to encode application objects into WebSocket messages and WebSocket messages into application objects. The @ServerEndpoint annotation supports the following attributes:

- value: the URI, with a leading '/'.
- encoders: a list of Java classes that act as encoders for the endpoint. The classes must implement the Encoder interface.
- decoders: a list of Java classes that act as decoders for the endpoint. The classes must implement the Decoder interface.
- configurator: allows the developer to plug in their own implementation of ServerEndpoint.Configurator, which is used when configuring the server endpoint.
- subprotocols: a list of sub-protocols that the endpoint can support.

In this section we shall look at providing encoder and decoder implementations for our WebSockets endpoint. Encoders take an application object and convert it into a WebSockets message, while decoders take a WebSockets message and convert it into an application object.

Here is a simple example where a client sends a WebSockets message to a WebSockets Java endpoint that is annotated with @ServerEndpoint and decorated with encoder and decoder classes. The decoder will decode the WebSockets message and send the same message back to the client. The encoder will convert the message to a WebSockets message.
This sample is also included in the code bundle for the book. Here is the code to define the server endpoint with values for its encoders and decoders:

```java
@ServerEndpoint(value = "/book",
                encoders = {MyEncoder.class},
                decoders = {MyDecoder.class})
public class BookCollection {

    @OnMessage
    public void onMessage(Book book, Session session) {
        try {
            session.getBasicRemote().sendObject(book);
        } catch (Exception ex) {
            ex.printStackTrace();
        }
    }

    @OnOpen
    public void onOpen(Session session) {
        System.out.println("Opening socket " + session.getBasicRemote());
    }

    @OnClose
    public void onClose(Session session) {
        System.out.println("Closing socket " + session.getBasicRemote());
    }
}
```

In the preceding code snippet, you can see that the class BookCollection is annotated with @ServerEndpoint. The value="/book" attribute provides the URI mapping for the endpoint. The @ServerEndpoint annotation also takes the encoders and decoders to be used during the WebSocket transmission. Once a WebSocket connection has been established, a session is created and the method annotated with @OnOpen is called. When the WebSocket endpoint receives a message, the method annotated with @OnMessage is called. In our sample, the method simply sends the book object back using Session.getBasicRemote(), which gets a reference to the RemoteEndpoint and sends the message synchronously.

Encoders can be used to convert a custom user-defined object into a text message, TextStream, BinaryStream, or BinaryMessage format. An implementation of an encoder class for text messages is as follows:

```java
public class MyEncoder implements Encoder.Text<Book> {

    @Override
    public String encode(Book book) throws EncodeException {
        return book.getJson().toString();
    }
}
```

As shown in the preceding code, the encoder class implements Encoder.Text<Book>. The overridden encode method converts a Book and returns it as a JSON string.
(More on JSON APIs is covered in detail in the next chapter.)

Decoders can be used to decode WebSockets messages into custom user-defined objects. They can decode text, TextStream, binary, or BinaryStream formats. Here is the code for a decoder class:

```java
public class MyDecoder implements Decoder.Text<Book> {

    @Override
    public Book decode(String string) throws DecodeException {
        javax.json.JsonObject jsonObject =
                javax.json.Json.createReader(new StringReader(string)).readObject();
        return new Book(jsonObject);
    }

    @Override
    public boolean willDecode(String string) {
        try {
            javax.json.Json.createReader(new StringReader(string)).readObject();
            return true;
        } catch (Exception ex) {
            return false;
        }
    }
}
```

In the preceding code snippet, Decoder.Text requires two methods to be overridden. The willDecode() method checks whether the decoder can handle a given message. The decode() method decodes the string into an object of type Book by using the JSON-P API javax.json.Json.createReader().

The following code snippet shows the user-defined class Book:

```java
public class Book {

    JsonObject jsonObject;

    public Book() {}

    public Book(JsonObject json) {
        this.jsonObject = json;
    }

    public Book(String message) {
        jsonObject = Json.createReader(new StringReader(message)).readObject();
    }

    public JsonObject getJson() {
        return jsonObject;
    }

    public void setJson(JsonObject json) {
        this.jsonObject = json;
    }

    public String toString() {
        StringWriter writer = new StringWriter();
        Json.createWriter(writer).write(jsonObject);
        return writer.toString();
    }
}
```

The Book class is a user-defined class that wraps the JSON object sent by the client. Here is an example of how the JSON details are sent to the WebSockets endpoint from JavaScript:

```javascript
var json = JSON.stringify({
    "name": "Java 7 JAX-WS Web Services",
    "author": "Deepak Vohra",
    "isbn": "123456789"
});

function addBook() {
    websocket.send(json);
}
```

The client sends the message using websocket.send(), which causes the onMessage() method of BookCollection.java to be invoked.
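The two-phase willDecode()/decode() pattern can be sketched independently of the container. The following is a hypothetical, stdlib-only illustration (the class name and the "title=" wire format are invented for the example); a real endpoint would implement Decoder.Text<Book> as shown above.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical two-phase decoder mirroring willDecode()/decode():
// first cheaply validate the raw message, then convert it.
public class TitleMessageDecoder {

    // Analogue of willDecode(): can we handle this payload at all?
    public static boolean willDecode(String raw) {
        return raw != null && raw.startsWith("title=");
    }

    // Analogue of decode(): convert the payload into application data.
    public static Map<String, String> decode(String raw) {
        if (!willDecode(raw)) {
            throw new IllegalArgumentException("undecodable message: " + raw);
        }
        Map<String, String> book = new HashMap<>();
        book.put("title", raw.substring("title=".length()));
        return book;
    }

    public static void main(String[] args) {
        System.out.println(willDecode("title=Java 7 JAX-WS Web Services"));
        System.out.println(decode("title=Java 7 JAX-WS Web Services").get("title"));
    }
}
```

The point of the split is that the runtime can ask willDecode() cheaply before committing to a full decode, which is why willDecode() above swallows nothing and only inspects the prefix.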
BookCollection.java returns the same book to the client. In the process, the decoder decodes the WebSockets message when it is received. To send the same Book object back, the encoder first encodes the Book object into a WebSockets message, which is then sent to the client.

The Java WebSocket Client API

WebSockets and Server-sent Events covered the Java WebSockets client API. Any POJO can be transformed into a WebSockets client by annotating it with @ClientEndpoint. Additionally, the user can add encoders and decoders attributes to the @ClientEndpoint annotation to encode application objects into WebSockets messages and WebSockets messages into application objects. The @ClientEndpoint annotation supports the following attributes:

- value: the URI, with a leading '/'.
- encoders: a list of Java classes that act as encoders for the endpoint. The classes must implement the Encoder interface.
- decoders: a list of Java classes that act as decoders for the endpoint. The classes must implement the Decoder interface.
- configurator: allows the developer to plug in their own implementation of ClientEndpoint.Configurator, which is used when configuring the client endpoint.
- subprotocols: a list of sub-protocols that the endpoint can support.

Sending different kinds of message data: blob/binary

Using JavaScript we can traditionally send JSON or XML as strings. However, HTML5 allows applications to work with binary data to improve performance. WebSockets supports two kinds of binary data:

- Binary Large Objects (blob)
- arraybuffer

A WebSocket can work with only one of the formats at any given time.
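On the Java side, a whole binary WebSocket message is typically handed to the application as a java.nio.ByteBuffer. The round trip between a byte array and a ByteBuffer can be sketched with the standard library alone; the class and method names here are invented for illustration.

```java
import java.nio.ByteBuffer;
import java.util.Arrays;

// Sketch: converting between byte[] and ByteBuffer, the shape a Java
// @OnMessage method deals with for whole binary WebSocket messages.
public class BinaryPayload {

    // Wrap raw bytes for sending as a binary message.
    public static ByteBuffer toBuffer(byte[] data) {
        return ByteBuffer.wrap(data);
    }

    // Copy a received buffer's remaining bytes back into an array.
    public static byte[] toBytes(ByteBuffer buffer) {
        byte[] out = new byte[buffer.remaining()];
        buffer.get(out);
        return out;
    }

    public static void main(String[] args) {
        byte[] original = {1, 2, 3, 4};
        byte[] roundTrip = toBytes(toBuffer(original));
        System.out.println(Arrays.equals(original, roundTrip)); // true
    }
}
```

Note that toBytes() copies only the buffer's remaining bytes, so it works whether the buffer was freshly wrapped or partially read.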
Using the binaryType property of a WebSocket, you can switch between using blob or arraybuffer:

```javascript
websocket.binaryType = "blob";
// receive some blob data

websocket.binaryType = "arraybuffer";
// now receive ArrayBuffer data
```

The following code snippets show how to display images sent by a server using WebSockets.

```javascript
websocket.binaryType = 'arraybuffer';
```

The preceding code snippet sets the binaryType property of the websocket to arraybuffer.

```javascript
websocket.onmessage = function(msg) {
    var arrayBuffer = msg.data;
    var bytes = new Uint8Array(arrayBuffer);
    var image = document.getElementById('image');
    image.src = 'data:image/png;base64,' + encode(bytes);
};
```

When onmessage is called, arrayBuffer is initialized from msg.data. The Uint8Array type represents an array of 8-bit unsigned integers. The image.src value is set inline using the data URI scheme.

Security and WebSockets

WebSockets are secured using the web container security model. A WebSockets developer can declare whether access to the WebSocket server endpoint needs to be authenticated, who can access it, or whether it needs an encrypted connection. A WebSockets endpoint that is mapped to a ws:// URI is protected in the deployment descriptor under the http:// URI with the same hostname, port, and path, since the initial handshake comes in over the HTTP connection. So, WebSockets developers can assign an authentication scheme, user roles, and a transport guarantee to any WebSockets endpoints. We will take the same sample as we saw in WebSockets and Server-sent Events and make it a secure WebSockets application.
Here is the web.xml for a secure WebSocket endpoint:

```xml
<web-app version="3.0"
         xmlns="http://java.sun.com/xml/ns/javaee"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://java.sun.com/xml/ns/javaee
                             http://java.sun.com/xml/ns/javaee/web-app_3_0.xsd">
  <security-constraint>
    <web-resource-collection>
      <web-resource-name>BookCollection</web-resource-name>
      <url-pattern>/index.jsp</url-pattern>
      <http-method>PUT</http-method>
      <http-method>POST</http-method>
      <http-method>DELETE</http-method>
      <http-method>GET</http-method>
    </web-resource-collection>
    <user-data-constraint>
      <description>SSL</description>
      <transport-guarantee>CONFIDENTIAL</transport-guarantee>
    </user-data-constraint>
  </security-constraint>
</web-app>
```

As you can see in the preceding snippet, we used <transport-guarantee>CONFIDENTIAL</transport-guarantee>. The Java EE specification, followed by application servers, provides different levels of transport guarantee on the communication between clients and the application server. The three levels are:

- Data confidentiality (CONFIDENTIAL): We use this level to guarantee that all communication between client and server goes through the SSL layer, and connections won't be accepted over a non-secure channel.
- Data integrity (INTEGRAL): We can use this level when full encryption is not required but we want our data to be transmitted to and from a client in such a way that, if anyone changed the data, we could detect the change.
- Any type of connection (NONE): We can use this level to force the container to accept connections over both HTTP and HTTPS.

The following steps should be followed to set up SSL and run our sample as a secure WebSockets application deployed in GlassFish.
1. Generate the server certificate:

   ```shell
   keytool -genkey -alias server-alias -keyalg RSA -keypass changeit -storepass changeit -keystore keystore.jks
   ```

2. Export the generated server certificate in keystore.jks into the file server.cer:

   ```shell
   keytool -export -alias server-alias -storepass changeit -file server.cer -keystore keystore.jks
   ```

3. Create the trust-store file cacerts.jks and add the server certificate to the trust store:

   ```shell
   keytool -import -v -trustcacerts -alias server-alias -file server.cer -keystore cacerts.jks -keypass changeit -storepass changeit
   ```

4. Change the following JVM options so that they point to the location and name of the new keystore. Add this in domain.xml under java-config:

   ```xml
   <jvm-options>-Djavax.net.ssl.keyStore=${com.sun.aas.instanceRoot}/config/keystore.jks</jvm-options>
   <jvm-options>-Djavax.net.ssl.trustStore=${com.sun.aas.instanceRoot}/config/cacerts.jks</jvm-options>
   ```

5. Restart GlassFish.

If you go to https://localhost:8181/helloworld-ws/, you can see the secure WebSocket application. Here is how to inspect the headers under Chrome Developer Tools:

1. Open the Chrome browser, click on View, and then on Developer Tools.
2. Click on Network.
3. Select book under the element name and click on Frames.

As the screenshot shows, since the application is secured using SSL, the WebSockets URI contains wss://, which means WebSockets over SSL.

So far we have seen the encoders and decoders for WebSockets messages. We also covered how to send binary data using WebSockets. Additionally, we demonstrated a sample of how to secure a WebSockets-based application. We shall now cover the best practices for WebSocket-based applications.
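As noted above, securing the application switches the endpoint scheme from ws:// to wss://. A small sketch of deriving the client's endpoint URI from the page URI (the host, port, and path values are taken from the sample above; the helper name is an assumption for illustration):

```java
import java.net.URI;

// Sketch: deriving the WebSocket endpoint URI from the page URI,
// picking wss:// when the page was served over https://.
public class EndpointUri {

    public static URI forPage(URI page, String endpointPath) {
        String scheme = "https".equals(page.getScheme()) ? "wss" : "ws";
        return URI.create(scheme + "://" + page.getHost()
                + (page.getPort() == -1 ? "" : ":" + page.getPort())
                + endpointPath);
    }

    public static void main(String[] args) {
        URI page = URI.create("https://localhost:8181/helloworld-ws/");
        System.out.println(forPage(page, "/helloworld-ws/book"));
        // wss://localhost:8181/helloworld-ws/book
    }
}
```

Keeping the scheme choice in one place like this avoids accidentally opening an unencrypted ws:// connection from a page served over HTTPS.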
Getting into the Store

Packt
28 Oct 2013
21 min read
This all starts by visiting https://appdev.microsoft.com/StorePortals, which will take you to the store dashboard that you use to submit and manage your applications. If you already have an account, you'll just log in here and proceed. If not, we'll take a look at ways of getting one set up.

There are a couple of ways to get a store account, which you will need before you can submit any game or application to the store. There are also two different types of accounts:

- Individual accounts
- Company accounts

In most cases you will only need the first option. It's cheaper and easier to get, and you won't require the enterprise features provided by the company account for a game. For this reason we'll focus on the individual account. To register you'll need a credit card for verification, even if you gain a free account another way. Just follow the registration instructions, pay the fee, and complete verification, after which you'll be ready to go.

Free accounts

Students and developers with MSDN subscriptions can access registration codes that waive the fee for a minimum of one year. If you meet either of these requirements, you can obtain a code using the respective method, and use that code during the registration process to set the fee to zero.

Students can access their free accounts using the DreamSpark service that Microsoft runs. To do this, create an account on www.dreamspark.com, follow the steps to verify your student status, and then visit https://www.dreamspark.com/Student/Windows-Store-Access.aspx to get your registration code.

If you have access to an MSDN subscription, you can use this to gain a store account for free. Just log in to your account, and in your account benefits overview you should be able to generate your registration code.

Submitting your game

So your game is polished and ready to go. What do you need to do to get it in the store?
First log in to the dashboard and select Submit an App from the menu on the left. Here you can see the steps required to submit the app. This may look like a lot to do, but don't worry; most of these steps are very simple to resolve and can be done before you even start working on the game.

The first step is to choose a name for your game, and this can be done whenever you want. By reserving a name and creating the application entry, you have a year to submit your application, giving you plenty of time to complete it. This is why it's a good idea to jump in and register your application once you have a name for it. If you change your mind later and want a different name, you can always change it.

The next step is to choose how and where you will sell your game. The other thing you need to choose here is the markets you want to sell your game in. This is an area you need to be careful with, because the markets you choose here define the localization or content you need to watch for in your game. Certain markets are restrictive, and including content that isn't appropriate for a market you say you want to sell in can cause you to fail the certification process.

Once that is done, you need to choose when you want to release your game. You can choose to release as soon as certification finishes, or on a specific date. Then you choose the app category, which in this case will be Games. Don't forget to specify the genre of your game as the sub-category so players can find it.

The final option on the Selling Details page that applies to us is the Hardware requirements section. Here we define the DirectX feature level required for the game, and the minimum RAM required to run it. This is important because the store can help ensure that players don't try to play your game on systems that cannot run it.

The next section allows you to define the in-app offers that will be made available to players.
The Age rating and rating certificates section allows you to define the minimum age required to play the game, as well as submit official rating certificates from ratings boards so that they may be displayed in the store to meet legal requirements. The latter part is optional in some cases, and may affect where you can submit your game depending on local laws. Aside from official ratings, all applications and games submitted to the store require a voluntary rating, chosen from one of the following age options:

- 3+
- 7+
- 12+
- 16+
- 18+

While all content is checked, the 7+ and 3+ ratings both undergo extra checks because of the extra requirements for those age ranges. The 3+ rating is especially restrictive, as apps submitted with that age limit may not contain features that could connect to online services, collect personal information, or use the webcam or microphone. To play it safe, it's recommended that the 12+ rating is chosen, and if you're still uncertain, higher is safer.

GDF Certificates

The other entry required here, if you have official rating certificates, is a GDF file. This is a Game Definition File, which defines the different ratings in a single location and provides the necessary information to display the rating and inform any parental settings. To do this you need to use the GDFMAKER.exe utility that ships with the Windows 8 SDK and generate a GDF file that you can submit to the store. Alongside that, you need to create a DLL containing that file (as a resource) without any entry point, to include in the application package. For full details on how to create the GDF as well as the DLL, view the following MSDN article: http://msdn.microsoft.com/en-us/library/windows/apps/hh465153.aspx

The final section before you need to submit your compiled application package is the cryptography declaration. For most games you should be able to declare that you aren't using any cryptography within the game and quickly move through this step.
If you are using cryptography, including encrypting game saves or data files, you will need to declare that here and follow the instructions to either complete the step or provide an Export Control Classification Number (ECCN). Now you need to upload the compiled app package before you can continue, so we'll take a look at what it takes to do that.

App packages

To submit your game to the store, you need to package it up in a format that makes it easy to upload, and easy for the store to distribute. This is done by compiling the application as an .appx file. But before that happens we need to ensure we have defined all of the required metadata and fulfilled the certification requirements; otherwise we'll be uploading a package only to fail soon after.

Part of this is done through the application manifest editor, which is accessible in Visual Studio by double-clicking on the Package.appxmanifest file in Solution Explorer. This editor is where you specify the name that will be seen in the start menu, as well as the icons used by the application. To pass certification, all icons have to be provided at 100 percent DPI, which is referred to as Scale 100 in the editor:

- Standard: 150 x 150 px (required)
- Wide: 310 x 150 px (required if the wide tile is enabled)
- Small: 30 x 30 px (required)
- Store: 50 x 50 px (required)
- Badge: 24 x 24 px (required if toast notifications are enabled)
- Splash: 620 x 300 px (required)

If you wish to provide higher-quality images for people running on high-DPI setups, you can do so with a simple filename change. If you add scale-XXX to your filename, just before the extension, and replace XXX with one of the following values, Windows will automatically make use of it at the appropriate DPI:

- scale-100
- scale-140
- scale-180

In the following image you can see the options available for editing the visual assets in the application. These all apply to the start menu and application start-up experience, including the splash screen and toast notifications.
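The scale-XXX naming convention can be illustrated with a short sketch (the base filename is hypothetical, and Windows itself performs this file selection automatically at runtime; the helper only shows how the names are derived):

```java
// Sketch of the scale-XXX asset naming convention described above:
// the scale qualifier is inserted just before the file extension.
public class ScaledAsset {

    public static String nameFor(String baseName, String extension, int scale) {
        return baseName + ".scale-" + scale + "." + extension;
    }

    public static void main(String[] args) {
        for (int scale : new int[]{100, 140, 180}) {
            System.out.println(nameFor("Logo", "png", scale));
        }
        // Logo.scale-100.png
        // Logo.scale-140.png
        // Logo.scale-180.png
    }
}
```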
Toast Notifications in Windows 8 are pop-up notifications that slide in from the edge of the screen and show the user some information for a short period of time. The user can click on the toast to open the application. Alongside Live Tiles, Toast Notifications allow you to give the user information when the application is not running (although they also work while it is running).

The image requirements listed previously indicate which assets are mandatory and which are only required in certain situations. Note that this does not include the imagery required for the store, which includes some screenshots of the application and optional promotional art in case you want your application to be featured.

You must replace all of the required icons with your own. Automated checks during certification will detect the use of the default "box" icon and automatically fail the submission.

Capabilities

Once you have the visual aspects in place, you need to declare the capabilities that the application will receive. Your game may not need any; however, you should still only specify what you need to run, as some of these capabilities come with extra implications and non-obvious requirements.

Adding a privacy policy

One of those requirements is the privacy policy. Even if you are creating a game, there may be situations where you are collecting private information, which requires you to have a privacy policy. The biggest issue here is connecting to the Internet. If your game marks any of the Internet capabilities in the manifest, you automatically trigger a check for a privacy policy, as private information (in this case, an IP address) is being shared. To avoid failing certification for this, you need to put together a privacy policy if you collect private information, or if you use any of the capabilities that would indicate you collect information.
These include the Internet capabilities as well as the location, webcam, and microphone capabilities. The privacy policy just needs to describe what you will do with the information, and directly mention your game and publisher name.

Once you have the policy written, it needs to be posted in two locations. The first is a publicly accessible website, which you will provide a link to when filling out the description after uploading your game. The second is within the game itself. It is recommended you place this policy in the Windows 8-provided settings menu, which you can build using XAML or your own code. If you're going with a completely native Windows 8 application, you may want to display the policy in your own way and link to it from options within your game.

Declarations

Once you've indicated the capabilities you want, you need to declare any operating system integration you've done. Most games won't use this; however, if you're taking advantage of Windows 8 features such as share targets (the destination for data shared using the Share charm), or you have a Game Definition File, you will need to declare it here and provide the required information for the operating system. In the case of the GDF, you need to provide the file so that the parental controls system can make use of the ratings to appropriately control access.

Certification kit

The next step is to make sure you aren't going to fail the automated tests during certification. Microsoft provides the same automated tests used when you submit your app in the Windows Application Certification Kit (WACK). WACK is installed by default with Visual Studio 2012 or higher. There are two ways to run the test: after you build your application package, or by running the kit directly against an installed app. We'll look at the latter first, as you might want to run the test on your deployed test game well before you build anything for the store.
This is also the only way to run the WACK on a WinRT device, if you want to cover all bases. If you haven't already deployed or tested your app, deploy it using the Build menu in Visual Studio and then search for the Windows App Cert Kit using the start menu (just start typing).

When you run this, you will be given an option to choose which type of application you want to validate. In this case we want to select the Windows Store App option, which will then give you access to the list of apps installed on your machine. From there it's just a matter of selecting the app you want and starting the test. At this point you will want to leave your machine alone until the automated tests are complete; any interference could lead to an incorrect failure of the certification tests.

The results will indicate ways you can fix any issues; however, you should be fine for most of the tests. The biggest issues will arise from third-party libraries that haven't been developed for or ported to Windows 8. In this case the only option is to fix them yourself (if they're open source) or find an alternative.

Once you have the test passing, or you feel confident that it won't be an issue, you need to create app packages that are compatible with the store. At this point your game will be associated with the submission you have created in the Windows Store dashboard so that it is prepared for upload.

Creating your app packages

To do this, right-click on your game project in Visual Studio and click on Create App Packages inside the Store menu. Once you do that, you'll be asked if you want to create a package for the store. The difference between the two options comes down to how the package is signed. If you choose No here, you can create a package with your test certificate, which can be distributed for testing. These packages must be manually installed and cannot be submitted to the store.
You can, however, use this type of package on other machines to install your game for testers to try out. Choosing No will give you a folder with a .ps1 file (PowerShell), which you can run to execute the install script.

Choosing Yes at this option will take you to a login screen where you can enter your Windows Store developer account details. Once you've logged in, you will be presented with a list of applications that you have registered with the store. If you haven't yet reserved the name of your application, you can click on the Reserve Name link, which will take you directly to the appropriate page in the store dashboard. Otherwise, select the name of the game you're trying to build and click on Next.

The next screen will allow you to specify which architectures to build for, and the version number of the built package. As this is a C++ game, we need to provide separate packages for the ARM, x86, and x64 builds, depending on what you want to support. Simply providing x86 and ARM builds will cover the entire market; a 64-bit build can be nice to have if you need a lot of memory, but ultimately it is optional, and some users may not even be able to run x64 code.

When you're ready, click on Create, and Visual Studio will proceed to build your game and compile the requested packages, placing them in the directory specified. If you've built for the store, you will need the .appxupload files from this directory when you proceed to upload your game.

Once the build has completed, you will be asked if you want to launch the Windows Application Certification Kit. As mentioned previously, this will test your game for certification failures, and if you're submitting to the store it's strongly recommended you run this. Doing so at this screen will automatically deploy the built package and run the test, so ensure you have a little bit of time to let it run.

Uploading and submitting

Now that you have a built app package, you can return to the store dashboard to submit your game.
Just edit the submission you made previously and enter the Packages section, which will take you to the page where you can upload the .appxupload file. Once you have successfully uploaded your game, you will gain access to the next section, the Description. This is where you define the details that will be displayed in the store. This is also where your marketing skills come into play as you prepare the content that will hopefully get players to buy your game.

You start with the description of your game, and any big feature bullet points you want to emphasize. This is the best place to mention any reviews or praise, as well as give a quick description that will help players decide if they want to try your game. You can have a number of app features listed; however, like any "back of the box" bullet points, keep them short and exciting.

Along with the description, the store requires at least one screenshot to display to the potential player. These screenshots need to be of the entire screen, which means they need to be at least 1366 x 768, the minimum resolution of Windows 8. They are also one of the best ways to promote your game, so ensure you take some great screenshots that show off the fun and appeal of your game.

There are a few ways to take a screenshot of your game. If you're testing in the simulator, you can use the screenshot icon on the right toolbar of the simulator. If not, you can use Windows Key + Prt Scr SysRq to take a screenshot of your entire screen, and then use that (or edit it if you have multiple monitors). Screenshots taken with either of these tools can be found in the Screenshots folder within your Pictures library.

There are two other small pieces of information required during this stage: Copyright info and Support contact info. For the support info, an e-mail address will usually suffice.
At this point you can also include your website and, if applicable to your game, a link to the privacy policy included in your game. Note that if you require a privacy policy, it must be included in two places: your game, and the privacy policy field on this form.

The last items you may want to add here are promotional images. These images are intended for use in store promotions and allow Microsoft to easily feature your game with larger promotional imagery in prominent locations within the store. If you are serious about maximizing the reach of your game, you will want to include these images; if you don't, the number of places your game can be featured will be reduced. At a minimum, the 414x180 px image should be included if you want some form of promotion.

Now you're almost done! The next section allows you to leave notes for the testing team. This is where you would leave test account details for any features in your game that require an account, so that the testers can exercise those features. This is also the place to point out anything that might not be obvious during testing. In certain situations you may have an exemption from Microsoft for a certification requirement; this is where you would include that exemption.

When every step has been completed and you have tick marks in all of the stages, the Submit for Certification button will unlock, allowing you to complete your submission and send it off for certification. At this stage a number of automated tests will run before human testers try your game on a variety of devices to ensure it fits the requirements for the store. If all goes well, you will receive an email notifying you of your successful certification and, if you set the release date as ASAP, you will find your game in the store a few hours later (it may take a few hours for the game to appear in the store after you receive the email).
Certification tips

Your first stop should be the certification requirements page, which lists all of the current requirements your game will be tested against: http://msdn.microsoft.com/en-us/library/windows/apps/hh694083.aspx. Some requirements deserve particular attention, and in this section we'll look at ways to help ensure you pass them.

Privacy

The first, of course, is the privacy policy. As mentioned before, if your game collects any sort of personal information, you will need that policy in two places:

- In full text within the game
- Accessible through an Internet link

The default app template generated by Visual Studio automatically enables the Internet capability, and simply having that capability enabled means you require a privacy policy. If you aren't connecting to the Internet at all in your game, you should always ensure that none of the Internet options are enabled before you package your game. If you share any personal information, then you need to provide players with a method of opting in to the sharing. This could be done by gating the functionality behind a login screen. Note that this functionality can be locked away; the requirement doesn't demand that you find a way to remain fully functional if the user opts out.

Features

One requirement is that your game support both touch input and keyboard/mouse input. You can easily support this by using an input system like the one described in this article; however, by supporting touch input you get mouse input for free and technically fulfill this requirement. It's all about how much effort you want to put into the experience your player will have, which is why including gamepad input is also recommended: some players may want to use a connected Xbox 360 gamepad as their input device.

Legacy APIs

Although your game might run while using legacy APIs, it won't pass certification.
This is checked through an automated test that also runs during the WACK testing process, so you can easily check whether you have used any illegal APIs. The issue often arises in third-party libraries that make use of parts of the standard IO library, such as the console, or insecure versions of functions such as strcpy or fopen. Some of these APIs don't exist in WinRT for good reason; the console, for example, just doesn't exist, so calling APIs that work directly with the console makes no sense and isn't allowed.

Debug

Another issue that may arise through the use of third-party libraries is that some of them may be compiled in debug mode. This can cause problems at runtime for your app, and the packaging system will happily include these debug binaries when compiling your game, unless it has to compile the libraries itself. This is detected by the WACK and can be resolved by finding a release mode version of the library, or recompiling it yourself.

WACK

The final tip is: run the WACK. This kit quickly and easily finds most of the issues you may encounter during certification, and lets you see them immediately rather than waiting for them to fail your game during the certification process. Your final step before submitting to the store should be to run the WACK, and even while developing it's a good idea to compile in release mode and run the tests just to make sure nothing is broken.

Summary

By now you should know how to submit your game to the store and get through certification with little to no issues. We've looked at what the store requires, including imagery and metadata, as well as how to make use of the Windows Application Certification Kit to find problems early on and fix them without waiting hours or days for certification to fail your game. One area unique to games that we have covered in this article is game ratings.
If you're developing your game for certain markets where ratings are required, or if you are developing children's games, you may need to get a rating certificate, and hopefully you now have an idea of where to look to do this.

Resources for Article:

Further resources on this subject:
- Introduction to Game Development Using Unity 3D [Article]
- HTML5 Games Development: Using Local Storage to Store Game Data [Article]
- Unity Game Development: Interactions (Part 1) [Article]
Packt
28 Oct 2013
6 min read

Getting Started with JSON

(For more resources related to this topic, see here.)

JSON was developed by Douglas Crockford. It is a text-based, lightweight, human-readable format for data exchange between clients and servers. JSON is derived from JavaScript and bears a close resemblance to JavaScript objects, but it is not dependent on JavaScript. JSON is language-independent, and support for the JSON data format is available in all the popular languages, some of which are C#, PHP, Java, C++, Python, and Ruby. JSON is a format, not a language.

Prior to JSON, XML was considered the data interchange format of choice. XML parsing required an XML DOM implementation on the client side that would ingest the XML response, and then XPath was used to query the response in order to access and retrieve the data. That made life tedious, as data had to be queried at two levels: first on the server side, where the data was queried from a database, and a second time on the client side using XPath. JSON does not need any specific implementations; the JavaScript engine in the browser handles JSON parsing.

XML messages often tend to be heavy and verbose, and take up a lot of bandwidth when sending data over a network connection. Once an XML message is retrieved, it has to be loaded into memory to parse it. Let us take a look at a students data feed in XML and JSON. The following is an example in XML:

Let us take a look at the example in JSON:

As we notice, the size of the XML message is bigger than its JSON counterpart, and this is just for two records. A real-time feed will begin with a few thousand records and go upwards. Another point to note is that the amount of data that has to be generated by the server and then transmitted over the Internet is already big, and XML, being verbose, makes it bigger.
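Since the original XML and JSON feeds appeared as screenshots, here is a hypothetical reconstruction of what such a two-record students feed might look like; the field names (name, major) are illustrative assumptions, not the original data:

```javascript
// Hypothetical students feed; the record fields are assumptions made
// for illustration, since the original example was a screenshot.
var studentsXml =
  '<students>' +
  '<student><name>John Doe</name><major>Physics</major></student>' +
  '<student><name>Jane Doe</name><major>Chemistry</major></student>' +
  '</students>';

var studentsJson = JSON.stringify({
  students: [
    { name: 'John Doe', major: 'Physics' },
    { name: 'Jane Doe', major: 'Chemistry' }
  ]
});

// The closing tags make the XML payload noticeably larger than the
// JSON equivalent, even for just two records.
console.log(studentsXml.length > studentsJson.length); // true
```

Even at this tiny size the XML string is roughly half again as long as the JSON one, and the gap only grows with the record count.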
Given that we are in the age of mobile devices, where smartphones and tablets are getting more popular by the day, transmitting large volumes of data over a slow network causes slow page loads, hang-ups, and a poor user experience, driving users away from the site. JSON has come to be the preferred Internet data interchange format, avoiding the issues mentioned earlier.

Since JSON is used to transmit serialized data over the Internet, we will need to make a note of its MIME type. A MIME (Multipurpose Internet Mail Extensions) type is an Internet media type, a two-part identifier for content that is being transferred over the Internet. MIME types are passed through the HTTP headers of an HTTP request and an HTTP response. The MIME type is how content type is communicated between the server and the browser. In general, a MIME type will have two or more parts that give the browser information about the type of data being sent either in the HTTP request or in the HTTP response. The MIME type for JSON data is application/json. If the MIME type header is not sent to the browser, it treats the incoming JSON as plain text.

The Hello World program with JSON

Now that we have a basic understanding of JSON, let us work on our Hello World program. This is shown in the screenshot that follows:

The preceding program will alert World onto the screen when it is invoked from a browser. Let us pay close attention to the script between the <script> tags. In the first step, we are creating a JavaScript variable and initializing it with a JavaScript object. Similar to how we retrieve data from a JavaScript object, we use the key-value pair to retrieve the value. Simply put, JSON is a collection of key-value pairs, where every key is a reference to the memory location where the value is stored on the computer.
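The Hello World program itself was shown as a screenshot, so the following is a minimal sketch of what it might contain; the variable name hello_world is an assumption. In a browser the value would be passed to alert(), but console.log retrieves it the same way:

```javascript
// Minimal sketch of the Hello World example; the original appeared in
// a screenshot, so the exact variable name is assumed.
var hello_world = { "Hello": "World" };

// Retrieve the value through its key, just as with a JavaScript
// object; in a browser this value could be passed to alert().
console.log(hello_world["Hello"]); // World
```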
Now let us take a step back and analyze why we need JSON, if all we are doing is assigning JavaScript objects that are readily available. The answer is that JSON is a different format altogether, unlike JavaScript, which is a language.

JSON keys and values have to be enclosed in double quotes; if either is enclosed in single quotes, we will receive an error. Now, let us take a quick look at the similarities and differences between JSON and a normal JavaScript object. If we were to create a JavaScript object similar to our hello_world JSON variable from the earlier example, it would look like the JavaScript object that follows:

The big difference here is that the key is not wrapped in double quotes. Since a JSON key is a string, we can use any valid string for a key. We can use spaces, special characters, and hyphens in our keys, which is not valid in a normal JavaScript object. When we use special characters, hyphens, or spaces in our keys, we have to be careful while accessing them.

The reason the preceding JavaScript statement doesn't work is that JavaScript doesn't accept keys with special characters, hyphens, or spaces. So we have to retrieve the data using a method where we handle the JSON object as an associative array with string keys. This is shown in the screenshot that follows:

Another difference between the two is that a JavaScript object can carry functions within it, while a JSON object cannot. The example that follows has the property getName, which holds a function that alerts the name John Doe when it is invoked:

Finally, the biggest difference is that a JavaScript object was never intended to be a data interchange format, while the sole purpose of JSON is to serve as one.
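The examples above were also shown as screenshots; the sketch below reconstructs the points being made. The getName property and the name John Doe come from the text, while the student keys are illustrative assumptions:

```javascript
// A JSON key may contain spaces or hyphens, which an unquoted
// JavaScript object literal key cannot; such keys must be read with
// bracket notation, treating the object as an associative array.
var student = JSON.parse('{"first-name": "John", "last name": "Doe"}');
console.log(student["first-name"]); // John
console.log(student["last name"]); // Doe

// A JavaScript object can carry a function, while JSON cannot; here
// getName returns (in a browser, it would alert) the name John Doe.
var person = {
  name: "John Doe",
  getName: function () { return this.name; }
};
console.log(person.getName()); // John Doe

// Serializing the object drops the function, leaving only the data.
console.log(JSON.stringify(person)); // {"name":"John Doe"}
```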
It focused on how JSON can be used in web applications for data transfer.

Resources for Article:

Further resources on this subject:
- Syntax Validation in JavaScript Testing [Article]
- Enhancing Page Elements with Moodle and JavaScript [Article]
- Making a Better Form using JavaScript [Article]
Packt
25 Oct 2013
11 min read

Advanced System Management

(For more resources related to this topic, see here.)

Beyond backups

Of course, backups are not the only issue with managing multiple, remote systems. In particular, managing such multiple configurations using a centralized application is often desirable.

Configuration management

One of the issues frequently faced by administrators is that of having multiple, remote systems all with similar software for the most part, but with minor differences in what is installed or running. Debian provides several packages that can help manage such an environment in a unified manner. Two of the more popular packages, both available in Debian, are FAI and Puppet. While we don't have the space to go into detail, both applications are described briefly here.

Fully Automated Installation

Fully Automated Installation (FAI) focuses on managing Linux installations, and is developed using Debian, although it works with many different distributions, not just Debian. FAI uses a class concept for categorizing similar systems, and provides a good deal of flexibility and customization via hooks. FAI provides for unattended, automatic installation, as well as tools for monitoring and updating groups of systems. FAI is frequently used for creating and maintaining clusters. More information is available at http://fai-project.org/.

Puppet

Probably the best-known application for distributed management is Puppet, developed by Puppet Labs. Unlike FAI, Puppet is not entirely free: the Open Source edition is, but the Enterprise edition, which has many additional features, is not. Puppet does include support for environments other than Linux. The desired configuration is described in a custom, high-level definition language, and distributed to systems with installed clients. Unlike FAI, Puppet does not provide its own bare-metal remote installation method, but does use existing methods (such as kickstart) to provide this function.
A number of companies that make heavy use of distributed and clustered systems use Puppet to manage their environments. More information is available at http://puppetlabs.com/.

Other packages

There are other packages that can be used to manage a distributed environment, such as Chef and BCFG2. While simpler than Puppet or FAI, they support similar functions and have been used in some distributed and clustered environments. The use of FAI, Puppet, and others in cluster management warrants a brief look at clustering next, and at which packages in Debian support clustering.

Clusters

A cluster is a group of systems that work together in such a way that the whole functions as a single unit. Such clusters can be loosely coupled or tightly coupled. In a loosely coupled environment, each system is complete in itself and can handle all of the tasks any of the other systems can handle. The environment provides mechanisms for redundancy, load sharing, and fail-over between systems, and is often called a High Availability (HA) cluster. In a tightly coupled environment, the systems involved are highly dependent on one another, often sharing memory and disk storage, and all work on the same task together. The environment provides mechanisms for data sharing, avoiding storage conflicts, keeping the systems in synchronization, and splitting up tasks appropriately. This design is often used in super-computing environments.

Clustering is an advanced technique that involves more than just installing and configuring software. It also involves hardware integration, and systems and network design and implementation. Along with the URLs mentioned below, a good text on the subject is Building Clustered Linux Systems, by Robert W. Lucke, Prentice Hall. Here we will only touch on the very basics, along with the tools Debian provides. Let's take a brief look at each environment, and some of the tools used to create them.
High Availability clusters

Two primary functions are required to implement a high availability cluster:

- A way to handle load balancing and individual host fail-over
- A way to synchronize storage so that all servers provide the same view of the data they serve

Debian includes meta packages that bring together software from the Linux High Availability project, including cluster-agents and resource-agents, two of the higher-level meta packages. These packages install various agents that are useful in coordinating and managing load balancing and fail-over. In some cases, a master server is designated to distribute the processing load among other servers. Data synchronization is handled by using shared storage and any of the filesystems that provide for multiple accesses and shared files, such as NFS or AFS. High Availability clusters generally use standard software, along with software that is readily available to manage the dynamics of such environments.

Beowulf clusters

In addition to the considerations for High Availability clusters, more tightly coupled environments such as Beowulf clusters also require an infrastructure to manage and distribute computing tasks. There are several web pages devoted to creating a Beowulf cluster using Debian, as well as packages that aid in creating such a cluster. One such page is https://wiki.debian.org/StartaBeowulf, a Debian Wiki page on Beowulf basics. The manual for FAI also has a section on creating a Beowulf cluster. Books are available as well. Debian provides several packages that are helpful in building such a cluster, such as the OpenMPI libraries for message passing, and various utilities that run commands on multiple systems, such as those in the kadif package. There are even projects that have released scripts and live CDs that allow you to set up a cluster quickly (one such project is the PelicanHPC project, developed for Debian Lenny, hosted at http://www.pelicanhpc.org/).
This type of cluster is not something that you can simply set up and go. Beowulf and other tightly coupled clusters are intended for highly parallel computing, and the programs that do the actual computing must be designed specifically for such an environment. That said, some packages for specific parallel computations do exist in Debian, such as nwchem, which provides several applications for computational chemistry that take advantage of parallelism.

Common tools

Some common components of clusters have already been mentioned, such as the OpenMPI libraries. Aside from the meta packages already mentioned, the redhat-cluster suite of tools is available in Debian, as well as many useful libraries, scheduling tools, and fail-over tools such as booth. All of these can be found using apt-cache or Synaptic by searching for "cluster".

Webmin

Many administrators will never have to administer a cluster, and many won't be responsible for a large number of systems requiring central backup solutions. However, even administering a single system using command-line tools and text editors can be a chore. Even clusters sometimes require administrative tasks on individual systems. Fortunately, there is an application that can ease many administrative tasks, is easy to use, and can handle many aspects of Linux administration. It is called Webmin.

Up until Debian Sarge, Webmin was a part of Debian distributions. However, the Debian developer in charge of packaging it had difficulty keeping up with the frequent releases, and it was eventually dropped from Debian. However, the upstream Webmin developers maintain current packages that install cleanly. Some users have reported issues because Webmin does not always handle configuration files exactly as Debian intends, but it most certainly attempts to handle them in a compatible manner, and while some users have experienced problems with upgrades, many administrators are quite happy with Webmin.
As long as you are willing to deal with conflicts during upgrades, or restrict use of modules that have major configuration impacts, you will find Webmin quite useful.

Installing Webmin

Webmin may be installed by adding the following lines to your apt sources file:

deb http://download.webmin.com/download/repository sarge contrib
deb http://webmin.mirror.somersettechsolutions.co.uk/repository sarge contrib

Usually, this is added to a separate webmin.list file in /etc/apt/sources.list.d. The use of 'sarge' for the release name in the configuration is not a mistake. Since Webmin was dropped after the Sarge release (Debian 3.1), the developers update the repository as it is and haven't bothered changing the name to keep up with the Debian code names. However, the versions available in the repository are compatible with any Debian release since 3.1. After updating your cache file, Webmin can be installed and maintained using apt-get, aptitude, or Synaptic. Also, if you request a Webmin upgrade from within Webmin itself on a Debian system, it will use the proper Debian package to upgrade.

Using Webmin

Webmin runs in the background, and provides an HTTP or HTTPS server on localhost port 10000. You can use any web browser to connect to http://localhost:10000/ to access Webmin. Upon first installation, only the root user, or those in a group allowed to use sudo to access the root account, may log in, but Webmin users can be managed separately or in conjunction with local users. Webmin provides extensive and easy-to-understand menus and icons for various configuration tasks. Webmin is also highly modular and extensible, and an extensive list of standard modules is included with the base package.
It is not possible to cover Webmin here as fully as it deserves, but a short list of some of its capabilities includes:

- Configuration of Webmin itself (the server, users, modules, and security)
- Local system user and password management
- Filesystem management
- Bootup and service management
- CRON job management
- Software updates
- Basic filesystem backups
- Authentication and security configuration
- Apache, DNS, SSH, and FTP (if you're using ProFTP) configuration
- User mail management
- Qmail or sendmail configuration
- Network and firewall configuration and management
- Bandwidth monitoring
- Printer management

There are even modules that apply to clusters. Also, Webmin can search for and allow access to other Webmin servers on the local network, or you can define remote servers manually. This allows a central Webmin server, installed on a particular system, to be the gateway to all of the other servers in your environment, essentially providing a single point of access to manage all Webmin-enabled servers.

Webmin and Debian

Webmin understands the configuration file layout of many distributions. The main problem is when a particular module does not handle certain types of configuration in the way the Debian developers prefer, which can make package upgrades somewhat difficult. This can be handled in a couple of ways. Most modules provide a means to edit configuration files directly, so if you have read the Debian documentation you can modify the configuration appropriately to use Debian-specific configuration techniques. Or, you may choose to allow Webmin to modify files as it sees fit, and handle any conflicts manually when you upgrade the software involved. Finally, you can avoid those modules involved with specific software that are more likely to cause problems. One such module is Apache, which doesn't use links from sites-enabled to sites-available; rather, it configures directly in the sites-enabled directory.
Some administrators create the configuration in Webmin, and then move and link the files. Others prefer to configure Apache manually, outside of Webmin. Webmin modules are constantly changing, and some actually recognize the Debian file layouts well, so it is not possible to give a comprehensive list of modules to avoid at this time. Best practice when using Webmin is to read the documentation and check the configuration files for specific software prior to using Webmin. Then, after configuring with Webmin, check the files again to determine whether changes may be required to work within the particular package's Debian configuration framework. Based upon this, you can decide whether to continue to configure using Webmin or switch back to manual configuration of that particular software.

Webmin security

Security is always a concern when remote access to a system is involved. Webmin handles this by requiring authentication and providing detailed access restrictions that add a layer of control beyond the firewall. Webmin users can be defined separately, or certain local users can be designated. Access to the various modules in Webmin can be restricted to certain users or groups of users, and detailed logs of Webmin actions are kept.

Usermin

In addition to Webmin, there is a server called Usermin, which may be installed from the same repository as Webmin. It allows individual users to perform a number of functions more easily, such as changing their password, accessing their files, reading and managing their email, and managing some aspects of their user profile. It is also modular and has the same security features as Webmin.

Summary

Several powerful and flexible central backup solutions exist that help manage backups for multiple remote servers and sites. Debian provides packages that assist in building High Availability and Beowulf-style multiprocessing clusters as well.
And, whether you are managing clusters, many systems, or even a single system, Webmin can ease an administrator's tasks.

Resources for Article:

Further resources on this subject:
- Customizing a Linux kernel [Article]
- Microsoft SharePoint 2010 Administration: Farm Governance [Article]
- Testing Workflows for Microsoft Dynamics AX 2009 Administration [Article]