
How-To Tutorials - Web Development

1802 Articles

Load, Validate, and Submit Forms using Ext JS 3.0: Part 2

Packt
19 Nov 2009
4 min read
Creating validation functions for URLs, email addresses, and other types of data

Ext JS has an extensive library of validation functions that can be used to validate URLs, email addresses, and other types of data. The following screenshot shows email address validation in action, and the next one shows URL validation in action.

How to do it...

1. Initialize the QuickTips singleton:

    Ext.QuickTips.init();

2. Create a form with fields that accept specific data formats:

    Ext.onReady(function() {
        var commentForm = new Ext.FormPanel({
            frame: true,
            title: 'Send your comments',
            bodyStyle: 'padding:5px',
            width: 550,
            layout: 'form',
            defaults: { msgTarget: 'side' },
            items: [{
                xtype: 'textfield',
                fieldLabel: 'Name',
                name: 'name',
                anchor: '95%',
                allowBlank: false
            }, {
                xtype: 'textfield',
                fieldLabel: 'Email',
                name: 'email',
                anchor: '95%',
                vtype: 'email'
            }, {
                xtype: 'textfield',
                fieldLabel: 'Web page',
                name: 'webPage',
                vtype: 'url',
                anchor: '95%'
            }, {
                xtype: 'textarea',
                fieldLabel: 'Comments',
                name: 'comments',
                anchor: '95%',
                height: 150,
                allowBlank: false
            }],
            buttons: [{ text: 'Send' }, { text: 'Cancel' }]
        });
        commentForm.render(document.body);
    });

How it works...

The vtype configuration option specifies which validation function will be applied to the field.

There's more...

Validation types in Ext JS include alphanumeric, numeric, URL, and email formats. You can extend this feature with custom validation functions, and virtually any format can be validated.
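Under the hood, the built-in email and url vtypes come down to regular-expression tests. The following is a minimal plain-JavaScript sketch of that idea, outside Ext JS; the patterns here are simplified stand-ins, not the ones the library actually ships.

```javascript
// Simplified stand-ins for Ext JS vtype checks; the real library
// uses more elaborate patterns. For illustration only.
var emailRe = /^[\w.+-]+@[\w-]+(\.[\w-]+)+$/;
var urlRe = /^https?:\/\/[\w-]+(\.[\w-]+)+(\/\S*)?$/i;

function isValidEmail(v) { return emailRe.test(v); }
function isValidUrl(v) { return urlRe.test(v); }

console.log(isValidEmail('user@example.com')); // true
console.log(isValidEmail('not-an-email'));     // false
console.log(isValidUrl('http://example.com')); // true
console.log(isValidUrl('example'));            // false
```

A field configured with `vtype: 'email'` simply runs a test like `isValidEmail` on every change and shows the vtype's error text when it returns false.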
For example, the following code shows how you can add a validation type for JPG and PNG files:

    Ext.apply(Ext.form.VTypes, {
        Picture: function(v) {
            return /^.*\.(jpg|JPG|png|PNG)$/.test(v);
        },
        PictureText: 'Must be a JPG or PNG file'
    });

If you need to replace the default error text provided by the validation type, you can do so by using the vtypeText configuration option:

    {
        xtype: 'textfield',
        fieldLabel: 'Web page',
        name: 'webPage',
        vtype: 'url',
        vtypeText: 'I am afraid that you did not enter a URL',
        anchor: '95%'
    }

See also...

- The Specifying the required fields in a form recipe, covered earlier in this article, explains how to make some form fields required.
- The Setting the minimum and maximum length allowed for a field's value recipe, covered earlier in this article, explains how to restrict the number of characters entered in a field.
- The Changing the location where validation errors are displayed recipe, covered earlier in this article, shows how to relocate a field's error icon.
- Refer to the previous recipe, Deferring field validation until form submission, to learn how to validate all fields at once upon form submission, instead of using the default automatic field validation.
- The next recipe, Confirming passwords and validating dates using relational field validation, explains how to perform validation when the value of one field depends on the value of another field.
- The Rounding up your validation strategy with server-side validation of form fields recipe (covered later in this article) explains how to perform server-side validation.

Confirming passwords and validating dates using relational field validation

Frequently, you face scenarios where the values of two fields need to match, or the value of one field depends on the value of another field. Let's examine how to build a registration form that requires the user to confirm his or her password when signing up.
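The file-extension check behind the custom vtype can be exercised on its own in plain JavaScript (note the escaped dot, which ensures a real extension separator is present):

```javascript
// The file-extension check used by the custom vtype: the escaped dot
// means "photojpg" (no separator before the extension) is rejected.
var pictureRe = /^.*\.(jpg|JPG|png|PNG)$/;

console.log(pictureRe.test('holiday.jpg')); // true
console.log(pictureRe.test('logo.PNG'));    // true
console.log(pictureRe.test('photojpg'));    // false
console.log(pictureRe.test('notes.gif'));   // false
```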
How to do it...

1. Initialize the QuickTips singleton:

    Ext.QuickTips.init();

2. Create a custom vtype to handle the relational validation of the password:

    Ext.apply(Ext.form.VTypes, {
        password: function(val, field) {
            if (field.initialPassField) {
                var pwd = Ext.getCmp(field.initialPassField);
                return (val == pwd.getValue());
            }
            return true;
        },
        passwordText: 'What are you doing?<br/>The passwords entered do not match!'
    });

3. Create the signup form:

    var signupForm = {
        xtype: 'form',
        id: 'register-form',
        labelWidth: 125,
        bodyStyle: 'padding:15px;background:transparent',
        border: false,
        url: 'signup.php',
        items: [{
            xtype: 'box',
            autoEl: {
                tag: 'div',
                html: '<div class="app-msg"><img src="img/businessman add.png" class="app-img" /> Register for The Magic Forum</div>'
            }
        }, {
            xtype: 'textfield',
            id: 'email',
            fieldLabel: 'Email',
            allowBlank: false,
            minLength: 3,
            maxLength: 64,
            anchor: '90%',
            vtype: 'email'
        }, {
            xtype: 'textfield',
            id: 'pwd',
            fieldLabel: 'Password',
            inputType: 'password',
            allowBlank: false,
            minLength: 6,
            maxLength: 32,
            anchor: '90%',
            minLengthText: 'Password must be at least 6 characters long.'
        }, {
            xtype: 'textfield',
            id: 'pwd-confirm',
            fieldLabel: 'Confirm Password',
            inputType: 'password',
            allowBlank: false,
            minLength: 6,
            maxLength: 32,
            anchor: '90%',
            minLengthText: 'Password must be at least 6 characters long.',
            vtype: 'password',
            initialPassField: 'pwd'
        }],
        buttons: [{
            text: 'Register',
            handler: function() {
                Ext.getCmp('register-form').getForm().submit();
            }
        }, {
            text: 'Cancel',
            handler: function() { win.hide(); }
        }]
    };

4. Create the window that will host the signup form:

    Ext.onReady(function() {
        win = new Ext.Window({
            layout: 'form',
            width: 340,
            autoHeight: true,
            closeAction: 'hide',
            items: [signupForm]
        });
        win.show();
    });
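Stripped of the Ext JS plumbing, the relational check simply compares the confirmation value against the value of the field named by initialPassField. A plain-JavaScript sketch of that logic, with Ext.getCmp replaced by a simple lookup map (the names here are illustrative):

```javascript
// Hypothetical stand-in for Ext.getCmp: a plain map of field objects.
var fields = { pwd: { getValue: function() { return 'secret1'; } } };

// Mirrors the custom 'password' vtype: valid only when the value
// matches the field referenced by initialPassField.
function passwordVtype(val, field) {
    if (field.initialPassField) {
        var pwd = fields[field.initialPassField];
        return val === pwd.getValue();
    }
    return true;
}

var confirmField = { initialPassField: 'pwd' };
console.log(passwordVtype('secret1', confirmField)); // true
console.log(passwordVtype('oops', confirmField));    // false
```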

User Extensions and Add-ons in Selenium 1.0 Testing Tools

Packt
26 Nov 2010
6 min read
Important preliminary points

If you are creating an extension that can be used by all, make sure that it is stored in a central place, such as a version control system. This will prevent potential issues in the future when others on your team start to use it.

User extensions

Imagine that you wanted to use a snippet of code in a number of different tests. You could use:

    type | locator | javascript{ .... }

However, if you had a bug in the JavaScript, you would need to go through all the tests that reused this snippet of code. This, as we know from software development, is not good practice and is normally corrected by refactoring the code. In Selenium, we can create our own function that can then be used throughout the tests.

User extensions are stored in a separate file that we tell Selenium IDE or Selenium RC to use. Inside it, the new functions are written in JavaScript. Because Selenium's core is developed in JavaScript, creating an extension follows the standard rules for prototypal languages. To create an extension, we create a function using the following design pattern:

    Selenium.prototype.doFunctionName = function(){
        . . .
    }

The "do" in front of the function name tells Selenium that this function can be called as a command for a step, instead of as an internal or private function. Now that we understand this, let's see it in action.

Time for action – installing a user extension

Now that you have a need for a user extension, let's have a look at installing an extension into Selenium IDE. This will make sure that we can use these functions in future Time for action sections throughout this article.

1. Open your favorite text editor.
2. Create an empty method with the following text, and save it as user-extension.js:

    Selenium.prototype.doNothing = function(){
        . . .
    }

3. Start Selenium IDE.
4. Click on the Options menu and then click on Options.
5. Place the path of the user-extension.js file in the textbox labeled Selenium IDE extensions.
6. Click on OK.
7. Restart Selenium IDE.
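The "do" prefix convention is what lets Selenium turn prototype functions into commands. A rough plain-JavaScript sketch of that discovery step follows; the function names are illustrative, not Selenium's actual internals.

```javascript
// Minimal stand-in for the Selenium object.
function Selenium() {}

// A user extension, following the doCommandName pattern.
Selenium.prototype.doNothing = function() {};

// Sketch of startup discovery: every do* function becomes a command
// whose name drops the "do" prefix and lower-cases the next letter.
function discoverCommands(proto) {
    var commands = [];
    for (var name in proto) {
        if (name.indexOf('do') === 0 && typeof proto[name] === 'function') {
            commands.push(name.charAt(2).toLowerCase() + name.slice(3));
        }
    }
    return commands;
}

console.log(discoverCommands(Selenium.prototype)); // [ 'nothing' ]
```

This is why the IDE must be restarted after editing the extensions file: the command list is built once, at startup.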
Start typing in the Command textbox and your new command will be available, as seen in the next screenshot.

What just happened?

We have just seen how to create our first basic extension command and how to get it going in Selenium IDE. You will notice that you had to restart Selenium IDE for the changes to take effect. When Selenium starts up, it has a process that finds all the command functions available to it, and it does a few things to them to make sure that it can use them without any issues. Now that we understand how to create and install an extension command, let's see what else we can do with it. In the next Time for action, we are going to create a randomizer command that stores its result in a variable that we can use later in the test.

Time for action – using Selenium variables in extensions

Imagine that you are testing something that requires a random number to be entered into a textbox, and that a number of tests need this. You decide to create a user extension that generates the random number and stores the result in a variable. To do this, we need to pass arguments into our function, as we saw earlier: the value in the Target box is passed in as the first argument, and the value in the Value textbox as the second argument. We will use this in a number of different examples throughout this article. Let's now create this extension.

1. Open your favorite text editor and open the user-extension.js file you created earlier.
2. We are going to create a function called storeRandom. The function will look like the following:

    Selenium.prototype.doStoreRandom = function(variableName){
        var random = Math.floor(Math.random() * 10000000);
        storedVars[variableName] = random;
    }

3. Save the file.
4. Restart Selenium IDE.
5. Create a new step with storeRandom; the variable that will hold the value will be called random.
6. Create a step to echo the value in the random variable.

What just happened?
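Outside the IDE, the storeRandom logic is easy to exercise, because storedVars is just a dictionary keyed by the name passed in. A self-contained sketch:

```javascript
// The IDE normally provides storedVars; here we declare it ourselves.
var storedVars = {};

// Mirrors the doStoreRandom extension: generate a value in [0, 10000000)
// and store it under the supplied variable name.
function doStoreRandom(variableName) {
    var random = Math.floor(Math.random() * 10000000);
    storedVars[variableName] = random;
}

doStoreRandom('random');
console.log(storedVars['random'] >= 0 && storedVars['random'] < 10000000); // true

// Running the command again overwrites the stored value, as in the IDE.
doStoreRandom('random');
console.log(typeof storedVars['random']); // 'number'
```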
In the previous example, we saw how we can create an extension function that allows us to use variables throughout the rest of the test. It uses the storedVars dictionary that we saw in the previous chapter. As everything that comes from the Selenium IDE is interpreted as a string, we just needed to put the variable name as the key in storedVars. It is then translated and will look like storedVars['random'], so that we can use it later. As with normal Selenium commands, if you run the command a number of times, it will overwrite the value stored within that variable, as we can see in the previous screenshot.

Now that we know how to create an extension command that computes something and then stores the result in a variable, let's have a look at using that information with a locator.

Time for action – using locators in extensions

Imagine that you need to calculate today's date and then type it into a textbox. You could use the type | locator | javascript{...} format, but sometimes it's just neater to have a command that looks like typeTodaysDate | locator. We do this by creating an extension and then calling the relevant Selenium command in the same way that we create our own functions. To tell it to type in a locator, use:

    this.doType(locator, text);

The this in front of the command is there to make sure that it uses the doType function inside the Selenium object, and not one that may be in scope from the user extensions. Let's see this in action. Use your favorite text editor to edit the user extensions that you were using in the previous examples.
Create a new function called doTypeTodaysDate with the following snippet:

    Selenium.prototype.doTypeTodaysDate = function(locator){
        var dates = new Date();
        var day = dates.getDate();
        if (day < 10){
            day = '0' + day;
        }
        var month = dates.getMonth() + 1;
        if (month < 10){
            month = '0' + month;
        }
        var year = dates.getFullYear();
        var prettyDay = day + '/' + month + '/' + year;
        this.doType(locator, prettyDay);
    }

Save the file and restart Selenium IDE. Create a step in a test to type this in a textbox, and run your script. It should look similar to the next screenshot.

What just happened?

We have just seen that we can create extension commands that use locators. This means that we can create commands to simplify tests, as in the previous example where we created our own type command that always types today's date in the dd/mm/yyyy format. We also saw that we can call other commands from within our new command by calling their original functions in Selenium; the original function has do in front of it. Now that we have seen how we can use basic locators and variables, let's have a look at how we can access the page using browserbot from within an extension method.
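The date-formatting part of doTypeTodaysDate can be pulled out and tested on its own; a fixed date is used here instead of new Date() so the output is predictable.

```javascript
// The dd/mm/yyyy formatting logic from doTypeTodaysDate, extracted
// into a pure function so it can be tested without the IDE.
function formatDate(dates) {
    var day = dates.getDate();
    if (day < 10) { day = '0' + day; }
    var month = dates.getMonth() + 1;
    if (month < 10) { month = '0' + month; }
    var year = dates.getFullYear();
    return day + '/' + month + '/' + year;
}

// Months are zero-based in JavaScript, so 10 means November.
console.log(formatDate(new Date(2010, 10, 26))); // "26/11/2010"
console.log(formatDate(new Date(2010, 0, 5)));   // "05/01/2010"
```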


Working with Rails – ActiveRecord, Migrations, Models, Scaffolding, and Database Completion

Packt
22 Oct 2009
8 min read
ActiveRecord, Migrations, and Models

ActiveRecord is the ORM layer (see the section Connecting Rails to a Database in the previous article) used in Rails. It is used by controllers as a proxy to the database tables. What's really great about this is that it protects you against having to code SQL. Writing SQL is one of the least desirable aspects of developing with other web-centric languages (like PHP): having to manually build SQL statements, remembering to correctly escape quotes, and creating labyrinthine join statements to pull data from multiple tables. ActiveRecord does away with all of that (most of the time), instead presenting database tables through classes (a class which wraps around a database table is called a model) and instances of those classes (model instances). The best way to illustrate the beauty of ActiveRecord is to start using it.

Model == Table

The base concept in ActiveRecord is the model. Each model class is stored in the app/models directory inside your application, in its own file. So, if you have a model called Person, the file holding that model is app/models/person.rb, and the class for that model, defined in that file, is called Person. Each model will usually correspond to a table in the database. The name of the database table is, by convention, the pluralized (in the English language), lower-case form of the model's class name. In the case of our Intranet application, the models are organized as follows:

    Table        Model class    File containing class definition (in app/models)
    people       Person         person.rb
    companies    Company        company.rb
    addresses    Address        address.rb

We haven't built any of these yet, but we will shortly.

Which Comes First: The Model or The Table?

To get going with our application, we need to generate the tables to store data into, as shown in the previous section. It used to be at this point where we would reach for a MySQL client and create the database tables using a SQL script.
(This is typically how you would code a database for a PHP application.) However, things have moved on in the Rails world. The Rails developers came up with a pretty good (not perfect, but pretty good) mechanism for generating databases without the need for SQL: it's called migrations, and it is a part of ActiveRecord. Migrations enable a developer to generate a database structure using a series of Ruby script files (each of which is an individual migration) to define database operations. The "operations" part of that last sentence is important: migrations are not just for creating tables, but also for dropping tables, altering them, and even adding data to them. It is this multi-faceted aspect of migrations which makes them useful, as they can effectively be used to version a database (in much the same way as Subversion can be used to version code). A team of developers can use migrations to keep their databases in sync: when a change to the database is made by one of the team and coded into a migration, the other developers can apply the same migration to their database, so they are all working with a consistent structure.

When you run a migration, the Ruby script is converted into the SQL code appropriate to your database server and executed over the database connection. However, migrations don't work with every database adapter in Rails: check the Database Support section of the ActiveRecord::Migration documentation to find out whether your adapter is supported. At the time of writing, MySQL, PostgreSQL, SQLite, SQL Server, Sybase, and Oracle were all supported by migrations. Another way to check whether your database supports migrations is to run the following command in the console (the output shown below is the result of running this using the MySQL adapter):

    >> ActiveRecord::Base.connection.supports_migrations?
    => true

We're going to use migrations to develop our database, so we'll be building the model first.
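The versioning idea behind migrations is language-agnostic: an ordered list of up/down operations moves a schema between version numbers. The following toy JavaScript sketch illustrates the concept only; it is not how ActiveRecord is implemented.

```javascript
// Toy model of migration versioning: each migration has an up and a
// down step; migrating applies them in order to reach a target version.
var schema = { tables: [] };
var migrations = [
    { up: function(s) { s.tables.push('people'); },
      down: function(s) { s.tables.pop(); } },
    { up: function(s) { s.tables.push('companies'); },
      down: function(s) { s.tables.pop(); } }
];

function migrate(targetVersion, current) {
    while (current < targetVersion) { migrations[current].up(schema); current++; }
    while (current > targetVersion) { current--; migrations[current].down(schema); }
    return current;
}

var v = migrate(2, 0);      // bring the schema up to version 2
console.log(schema.tables); // [ 'people', 'companies' ]
v = migrate(1, v);          // roll back to version 1
console.log(schema.tables); // [ 'people' ]
```

Every developer on a team who applies the same ordered migrations ends up with the same schema, which is exactly the synchronization property described above.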
The actual database table will be generated from a migration attached to the model.

Building a Model with Migrations

In this section, we're going to develop a series of migrations to recreate the database structure outlined in Chapter 2 of the book Ruby on Rails Enterprise Application Development: Plan, Program, Extend. First, we'll work on a model and migration for the people table. Rails has a generate script for generating a model and its migration. (This script is in the script directory, along with the other Rails built-in scripts.) The script builds the model, a base migration for the table, plus scripts for testing the model. Run it like this:

    $ ruby script/generate model Person
          exists  app/models/
          exists  test/unit/
          exists  test/fixtures/
          create  app/models/person.rb
          create  test/unit/person_test.rb
          create  test/fixtures/people.yml
          exists  db/migrate
          create  db/migrate/001_create_people.rb

Note that we passed the singular, uppercase version of the table name ("people" becomes "Person") to the generate script. This generates a Person model in the file app/models/person.rb, and a corresponding migration for a people table (db/migrate/001_create_people.rb). As you can see, the script enforces the naming conventions, which connect the table to the model. The migration name is important, as it contains sequencing information: the "001" part of the name indicates that running this migration will bring the database schema up to version 1; subsequent migrations will be numbered "002...", "003...", and so on, each specifying the actions required to bring the database schema up to that version from the previous one.

The next step is to edit the migration so that it will create the people table structure. At this point, we can return to Eclipse to do our editing. (Remember that you need to refresh the file list in Eclipse to see the files you just generated.) Once you have started Eclipse, open the file db/migrate/001_create_people.rb.
It should look like this:

    class CreatePeople < ActiveRecord::Migration
        def self.up
            create_table :people do |t|
                # t.column :name, :string
            end
        end

        def self.down
            drop_table :people
        end
    end

This is a migration class with two class methods, self.up and self.down. The self.up method is applied when migrating up one database version number: in this case, from version 0 to version 1. The self.down method is applied when moving down a version number (from version 1 to 0). You can leave self.down as it is, as it simply drops the database table. This migration's self.up method is going to add our new table using the create_table method, so this is the method we're going to edit in the next section.

Ruby syntax

Explaining the full Ruby syntax is outside the scope of this book. For our purposes, it suffices to understand the most unusual parts. For example, in the create_table method call shown above:

    create_table :people do |t|
        t.column :title, :string
        ...
    end

The first unusual part of this is the block construct, a powerful technique for creating nameless functions. In the example code above, the block is initialized by the do keyword; this is followed by a list of parameters to the block (in this case, just t), and closed by the end keyword. The statements in-between the do and end keywords are run within the context of the block. Blocks are similar to lambda functions in Lisp or Python, providing a mechanism for passing a function as an argument to another function. In the case of the example, the block, which accepts a single argument t, is passed to the create_table :people method call; t has methods called on it within the body of the block. When create_table is called, the resulting table object is "yielded" to the block; effectively, the object is passed into the block as the argument t, and has its column method called multiple times.
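The block-yielding pattern maps directly onto callbacks in other languages: create_table builds a table object and hands it to the function you supply. A JavaScript analogue of the Ruby snippet above (the names here are illustrative, not a real API):

```javascript
// JavaScript analogue of Ruby's `create_table :people do |t| ... end`:
// the callback receives the table object, just as the block receives t.
function createTable(name, block) {
    var table = {
        name: name,
        columns: [],
        column: function(colName, type) {
            this.columns.push({ name: colName, type: type });
        }
    };
    block(table); // "yield" the table to the block
    return table;
}

var people = createTable('people', function(t) {
    t.column('title', 'string');
    t.column('age', 'integer');
});
console.log(people.columns.length); // 2
```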
One other oddity is the symbol: that's what the words prefixed with a colon are. A symbol is a lightweight, immutable identifier. In much of Rails, it is used in contexts where it is functionally equivalent to a string, to make the code look more elegant. In fact, in migrations, strings can be used interchangeably with symbols.


Less with External Applications and Frameworks

Packt
30 Apr 2015
11 min read
In this article by Bass Jobsen, author of the book Less Web Development Essentials - Second Edition, we will cover the following topics:

- WordPress and Less
- Using Less with the Play framework, AngularJS, Meteor, and Rails

(For more resources related to this topic, see here.)

WordPress and Less

Nowadays, WordPress is not only used for weblogs; it can also be used as a content management system for building a website. The WordPress system, written in PHP, has been split into the core system, plugins, and themes. The plugins add additional functionality to the system, and the themes handle the look and feel of a website built with WordPress. Plugins work independently of each other and of the theme, and the theme does not depend on plugins. WordPress themes define the global CSS for a website, but every plugin can also add its own CSS code. WordPress theme developers can use Less to compile the CSS code of the themes and the plugins.

Using the Sage theme by Roots with Less

Sage is a WordPress starter theme that you can use to build your own theme. It is based on HTML5 Boilerplate (http://html5boilerplate.com/) and Bootstrap. Visit the Sage theme website at https://roots.io/sage/. Sage can also be completely built using Gulp. More information about how to use Gulp and Bower for WordPress development can be found at https://roots.io/sage/docs/theme-development/. After downloading Sage, the Less files can be found at assets/styles/. These files include Bootstrap's Less files. The assets/styles/main.less file imports the main Bootstrap Less file, bootstrap.less. Now, you can edit main.less to customize your theme; you will have to rebuild the Sage theme after making changes. You can use all of Bootstrap's variables to customize your build.

JBST with a built-in Less compiler

JBST is also a WordPress starter theme. JBST is intended to be used with so-called child themes.
More information about the WordPress child themes can be found at https://codex.wordpress.org/Child_Themes. After installing JBST, you will find a Less compiler under Appearance in your Dashboard pane, as shown in the following screenshot: JBST's built-in Less compiler in the WordPress Dashboard

The built-in Less compiler can be used to fully customize your website using Less. Bootstrap also forms the skeleton of JBST, and the default settings are gathered by the a11y bootstrap theme mentioned earlier. JBST's Less compiler can be used in the following different ways:

- First, the compiler accepts any custom-written Less (and CSS) code. For instance, to change the color of the h1 elements, you should simply edit and recompile the code as follows:

    h1 { color: red; }

- Secondly, you can edit Bootstrap's variables and (re)use Bootstrap's mixins. To set the background color of the navbar component and add a custom button, you can use the code block mentioned here in the Less compiler:

    @navbar-default-bg: blue;
    .btn-colored {
        .button-variant(blue;red;green);
    }

- Thirdly, you can set JBST's built-in Less variables as follows:

    @footer_bg_color: black;

- Lastly, JBST has its own set of mixins. To set a custom font, you can edit the code as shown here:

    .include-custom-font(@family: arial, @font-path, @path: @custom-font-dir, @weight: normal, @style: normal);

In the preceding code, the parameters set the font name (@family) and the path to the font files (@path/@font-path); the @weight and @style parameters set the font's properties. For more information, visit https://github.com/bassjobsen/Boilerplate-JBST-Child-Theme. More Less code blocks can also be added to a special file (wpless2css/wpless2css.less or less/custom.less); these files give you the option to add, for example, a library of prebuilt mixins. After adding the library using this file, the mixins can also be used with the built-in compiler.
The Semantic UI WordPress theme

The Semantic UI, as discussed earlier, offers its own WordPress plugin. The plugin can be downloaded from https://github.com/ProjectCleverWeb/Semantic-UI-WordPress. After installing and activating this theme, you can use your website directly with the Semantic UI. With the default settings, your website will look like the following screenshot: Website built with the Semantic UI WordPress theme

WordPress plugins and Less

As discussed earlier, WordPress plugins have their own CSS. This CSS will be added to the page like a normal style sheet, as shown here:

    <link rel='stylesheet' id='plugin-name'
        href='//domain/wp-content/plugin-name/plugin-name.css?ver=2.1.2'
        type='text/css' media='all' />

Unless a plugin provides the Less files for its CSS code, it will not be easy to manage its styles with Less.

The WP Less to CSS plugin

The WP Less to CSS plugin, which can be found at http://wordpress.org/plugins/wp-less-to-css/, offers the possibility of styling your WordPress website with Less. As with the built-in compiler of JBST seen earlier, you can enter Less code, which will then be compiled into the website's CSS. This plugin compiles Less with the PHP Less compiler, Less.php.

Using Less with the Play framework

The Play framework helps you build lightweight and scalable web applications by using Java or Scala. It will be interesting to learn how to integrate Less with the workflow of the Play framework. You can install the Play framework from https://www.playframework.com/. To learn more about the Play framework, you can also read Learning Play! Framework 2, Andy Petrella, Packt Publishing; to read Petrella's book, visit https://www.packtpub.com/web-development/learning-play-framework-2. To run the Play framework, you need JDK 6 or later. The easiest way to install the Play framework is by using the Typesafe activator tool.
After installing the activator tool, you can run the following command:

    > activator new my-first-app play-scala

The preceding command will install a new app in the my-first-app directory. Using the play-java option instead of the play-scala option in the preceding command will lead to the installation of a Java-based app. Later on, you can add Scala code to a Java app, or Java code to a Scala app. After installing a new app with the activator command, you can run it by using the following commands:

    cd my-first-app
    activator run

Now, you can find your app at http://localhost:9000. To enable Less compilation, you should simply add the sbt-less plugin to your plugins.sbt file as follows:

    addSbtPlugin("com.typesafe.sbt" % "sbt-less" % "1.0.6")

After enabling the plugin, you can edit the build.sbt file to configure Less. You should save the Less files into app/assets/stylesheets/. Note that each file in app/assets/stylesheets/ will compile into a separate CSS file. The CSS files will be saved in public/stylesheets/ and should be called in your templates with the HTML code shown here:

    <link rel="stylesheet" href="@routes.Assets.at("stylesheets/main.css")">

In case you are using a library with more files imported into the main file, you can define filters in the build.sbt file. The filters for these so-called partial source files can look like the following code:

    includeFilter in (Assets, LessKeys.less) := "*.less"
    excludeFilter in (Assets, LessKeys.less) := "_*.less"

The preceding filters ensure that files starting with an underscore are not compiled into CSS.

Using Bootstrap with the Play framework

Bootstrap is a CSS framework. Bootstrap's Less code includes many files, so keeping your code up-to-date by using partials, as described in the preceding section, will not work well. Alternatively, you can use WebJars with Play for this purpose.
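The include/exclude filters amount to a simple selection rule: compile every .less file except those whose names start with an underscore. The same selection, expressed in plain JavaScript for illustration:

```javascript
// Mirrors includeFilter "*.less" / excludeFilter "_*.less":
// only non-underscore .less files become standalone CSS files.
function filesToCompile(files) {
    return files.filter(function(f) {
        return f.slice(-5) === '.less' && f.charAt(0) !== '_';
    });
}

var files = ['main.less', '_variables.less', '_mixins.less', 'readme.md'];
console.log(filesToCompile(files)); // [ 'main.less' ]
```

Partials such as _variables.less are still pulled in through @import statements in main.less; they just never produce CSS files of their own.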
To enable the Bootstrap WebJar, you should add the code shown here to your build.sbt file:

    libraryDependencies += "org.webjars" % "bootstrap" % "3.3.2"

When using the Bootstrap WebJar, you can import Bootstrap into your project as follows:

    @import "lib/bootstrap/less/bootstrap.less";

AngularJS and Less

AngularJS is a structural framework for dynamic web apps. It extends the HTML syntax, and this enables you to create dynamic web views. Of course, you can use AngularJS with Less. You can read more about AngularJS at https://angularjs.org/. The HTML code shown here will give you an example of what repeating HTML elements with AngularJS looks like:

    <!doctype html>
    <html ng-app>
    <head>
        <title>My Angular App</title>
    </head>
    <body>
        <ul>
            <li ng-repeat="item in [1,2,3]">{{ item }}</li>
        </ul>
        <script src="https://ajax.googleapis.com/ajax/libs/angularjs/1.3.12/angular.min.js"></script>
    </body>
    </html>

This code should make your page look like the following screenshot: Repeating the HTML elements with AngularJS

The ngBoilerplate system

The ngBoilerplate system is an easy way to start a project with AngularJS. The project comes with a directory structure for your application and a Grunt build process, including a Less task and other useful libraries. To start your project, you should simply run the following commands on your console:

    > git clone git://github.com/ngbp/ngbp
    > cd ngbp
    > sudo npm -g install grunt-cli karma bower
    > npm install
    > bower install
    > grunt watch

Then, open ///path/to/ngbp/build/index.html in your browser. After installing ngBoilerplate, you can write your Less code in src/less/main.less. By default, only src/less/main.less will be compiled into CSS; other libraries and other code should be imported into this file.

Meteor and Less

Meteor is a complete open-source platform for building web and mobile apps in pure JavaScript. Meteor focuses on fast development.
You can publish your apps for free on Meteor's servers. Meteor is available for Linux and OS X; you can also install it on Windows. Installing Meteor is as simple as running the following command on your console:

    > curl https://install.meteor.com | /bin/sh

You should install the Less package for compiling the CSS code of the app with Less. You can install the Less package by running the command shown here:

    > meteor add less

Note that the Less package compiles every file with the .less extension into CSS; for each file with the .less extension, a separate CSS file is created. When you use partial Less files that should only be imported (with the @import directive) and not compiled into CSS themselves, you should give these partials the .import.less extension. When using CSS frameworks or libraries with many partials, renaming the files by adding the .import.less extension will hinder you in updating your code. Also, running postprocess tasks for the CSS code is not always possible. Many packages for Meteor are available at https://atmospherejs.com/, and some of these packages can help you solve the issue with partials mentioned earlier. To use Bootstrap, you can use the meteor-bootstrap package, which can be found at https://github.com/Nemo64/meteor-bootstrap. The meteor-bootstrap package requires the installation of the Less package. Other packages provide postprocess tasks, such as autoprefixing your code.

Ruby on Rails and Less

Ruby on Rails, or Rails for short, is a web application development framework written in the Ruby language. Those who want to start developing with Ruby on Rails can read the Getting Started with Rails guide, which can be found at http://guides.rubyonrails.org/getting_started.html. In this section, you can read how to integrate Less into a Ruby on Rails app.
After installing the tools and components required for starting with Rails, you can launch a new application by running the following command on your console:

> rails new blog

Now, you should integrate Less with Rails. You can use less-rails (https://github.com/metaskills/less-rails) to bring Less to Rails. Open the Gemfile file, comment out the sass-rails gem, and add the less-rails gem, as shown here:

#gem 'sass-rails', '~> 5.0'
gem 'less-rails' # Less
gem 'therubyracer' # Ruby

Then, create a controller called welcome with an action called index by running the following command:

> bin/rails generate controller welcome index

The preceding command will generate app/views/welcome/index.html.erb. Open app/views/welcome/index.html.erb and make sure that it contains the HTML code shown here:

<h1>Welcome#index</h1>
<p>Find me in app/views/welcome/index.html.erb</p>

The next step is to create a file, app/assets/stylesheets/welcome.css.less, with the Less code. The Less code in app/assets/stylesheets/welcome.css.less looks as follows:

@color: red;
h1 { color: @color; }

Now, start a web server with the following command:

> bin/rails server

Finally, you can visit the application at http://localhost:3000/. The application should look like the example shown here:

The Rails app

Summary

In this article, you learned how to use Less with WordPress, Play, Meteor, AngularJS, and Ruby on Rails.

Resources for Article: Further resources on this subject: Media Queries with Less [article] Bootstrap 3 and other applications [article] Getting Started with Bootstrap [article]
Packt
16 Oct 2009
7 min read

Feeds in Facebook Applications

What Are Feeds?

Feeds are the way to publish news in Facebook. As mentioned before, there are two types of feeds in Facebook: the news feed and the mini feed. The news feed instantly tracks the activities of a user's online friends, ranging from changes in relationship status to added photos to wall comments. The mini feed appears on individuals' profiles and highlights recent social activity. You can see your news feed right after you log in by pointing your browser to http://www.facebook.com/home.php. It looks like the following, which is, in fact, my news feed. Mini feeds are seen on your profile page, displaying your recent activities, and look like the following one:

Only the last 10 entries are displayed in the mini feed section of the profile page, but you can always see the complete list of mini feeds by going to http://www.facebook.com/minifeed.php. Also, the mini feed of any user can be accessed from http://www.facebook.com/minifeed.php?id=userid. There is another close relation between the news feed and the mini feed: when an application publishes a mini feed in your profile, it will also appear on your friends' news feed pages.

How to publish Feeds

Facebook provides three APIs to publish mini feeds and news feeds, but these are restricted to no more than 10 calls for a particular user in a 48-hour cycle. This means you can publish a maximum of 10 feeds in a specific user's profile within 48 hours. The following three APIs help to publish feeds:

feed_publishStoryToUser—this function publishes the story to the news feed of any user (limited to one call every 12 hours).
feed_publishActionOfUser—this one publishes the story to a user's mini feed, and to his or her friends' news feeds (limited to 10 calls in a rolling 48-hour slot).
feed_publishTemplatizedAction—this one also publishes mini feeds and news feeds, but in an easier way (limited to 10 calls in a rolling 48-hour slot).
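To make the quota concrete, here is a small, hypothetical Python sketch — not part of the Facebook client library — that models a "maximum of N calls per rolling window" restriction like the 10-calls-per-48-hours limit described above:

```python
from collections import deque

class RollingWindowLimit:
    """Models a 'max N calls per rolling window' quota, such as the
    10 feed publications per rolling 48-hour slot described above.
    Illustration only -- not part of the Facebook API client."""

    def __init__(self, max_calls, window_seconds):
        self.max_calls = max_calls
        self.window = window_seconds
        self.calls = deque()  # timestamps of calls that counted against the quota

    def allow(self, now):
        # Discard timestamps that have slid out of the rolling window
        while self.calls and now - self.calls[0] >= self.window:
            self.calls.popleft()
        if len(self.calls) < self.max_calls:
            self.calls.append(now)
            return True
        return False

# Like feed_publishActionOfUser: at most 10 calls in a rolling 48-hour slot
limiter = RollingWindowLimit(max_calls=10, window_seconds=48 * 3600)
results = [limiter.allow(now=minute * 60) for minute in range(12)]
print(results)  # the first 10 rapid attempts pass, the last 2 are rejected
```

Once 48 hours have passed since the earliest counted call, that slot frees up again, which is why the limit is described as "rolling" rather than resetting at fixed times.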
You can test this API also from http://developers.facebook.com/tools.php?api by choosing Feed Preview Console, which will give you the following interface:

And once you execute the sample, like the previous one, it will preview the sample of your feed.

Sample application to play with Feeds

Let's publish some news to our profile, and test how the functions actually work. In this section, we will develop a small application (RateBuddies) by which we will be able to send messages to our friends, and then publish our activities as a mini feed. The purpose of this application is to display a friends list and rate the friends in different categories (Awesome, All Square, Loser, and so on). Here is the code of our application:

index.php

<?
include_once("prepend.php"); // the Lib and key container
?>
<div style="padding:20px;">
<?
if (!empty($_POST['friend_sel'])) {
    $friend = $_POST['friend_sel'];
    $rating = $_POST['rate'];
    $title = "<fb:name uid='{$fbuser}' useyou='false' /> just <a href='http://apps.facebook.com/ratebuddies/'>Rated</a> <fb:name uid='{$friend}' useyou='false' /> as a '{$rating}' ";
    $body = "Why not you also <a href='http://apps.facebook.com/ratebuddies/'>rate your friends</a>?";
    try {
        // now publish the story to the user's mini feed and to his friends' news feeds
        $facebook->api_client->feed_publishActionOfUser($title, $body, null, null, null, null, null, null, null, null, 1);
    } catch (Exception $e) {
        // echo "Error when publishing feeds: ";
        echo $e->getMessage();
    }
}
?>
<h1>Welcome to RateBuddies, your gateway to rate your friends</h1>
<div style="padding-top:10px;">
  <form method="POST">
    Select a friend: <br/><br/>
    <fb:friend-selector uid="<?=$fbuser;?>" name="friendid" idname="friend_sel" />
    <br/><br/><br/>
    And your friend is: <br/>
    <table>
      <tr>
        <td valign="middle"><input name="rate" type="radio" value="funny" /></td>
        <td valign="middle">Funny</td>
      </tr>
      <tr>
        <td valign="middle"><input name="rate" type="radio" value="hot tempered" /></td>
        <td valign="middle">Hot Tempered</td>
      </tr>
      <tr>
        <td valign="middle"><input name="rate" type="radio" value="awesome" /></td>
        <td valign="middle">Awesome</td>
      </tr>
      <tr>
        <td valign="middle"><input name="rate" type="radio" value="naughty professor" /></td>
        <td valign="middle">Naughty Professor</td>
      </tr>
      <tr>
        <td valign="middle"><input name="rate" type="radio" value="loser" /></td>
        <td valign="middle">Loser</td>
      </tr>
      <tr>
        <td valign="middle"><input name="rate" type="radio" value="empty vessel" /></td>
        <td valign="middle">Empty Vessel</td>
      </tr>
      <tr>
        <td valign="middle"><input name="rate" type="radio" value="foxy" /></td>
        <td valign="middle">Foxy</td>
      </tr>
      <tr>
        <td valign="middle"><input name="rate" type="radio" value="childish" /></td>
        <td valign="middle">Childish</td>
      </tr>
    </table>
    &nbsp;
    <input type="submit" value="Rate Buddy"/>
  </form>
</div>
</div>

index.php includes another file called prepend.php. In that file, we initialized the Facebook API client using the API key and secret key of the current application. It is a good practice to keep them in a separate file because we need to use them throughout our application, in as many pages as we have. Here is the code of that file:

prepend.php

<?php
// this defines some of your basic setup
include 'client/facebook.php'; // the facebook API library
// Get these from http://www.facebook.com/developers/apps.php
$api_key = 'your api key'; // the api key of this application
$secret = 'your secret key'; // the secret key
$facebook = new Facebook($api_key, $secret);
// catch the exception that gets thrown if the cookie has an invalid session_key in it
try {
    if (!$facebook->api_client->users_isAppAdded()) {
        $facebook->redirect($facebook->get_add_url());
    }
} catch (Exception $ex) {
    // this will clear cookies for your application and redirect them to a login prompt
    $facebook->set_user(null, null);
    $facebook->redirect($appcallbackurl);
}
?>

The client is a standard Facebook REST API client, which is available directly from Facebook.
If you are not sure about these API keys, then point your browser to http://www.facebook.com/developers/apps.php and collect the API key and secret key from there. Here is a screenshot of that page:

Just collect your API key and Secret Key from this page when you develop your own application. Now, when you point your browser to http://apps.facebook.com/ratebuddies and successfully add that application, it will look like this:

To see how this app works, type a friend's name in the Select a friend box and click on any rating such as Funny or Foxy. Then click on the Rate Buddy button. As soon as the page submits, open your profile page and you will see that it has published a mini feed in your profile.
Packt
05 Jun 2015
9 min read

edX E-Learning Course Marketing

In this article by Matthew A. Gilbert, the author of edX E-Learning Course Development, we are going to learn various ways of marketing. (For more resources related to this topic, see here.) edX's marketing options If you don't market your course, you might not get any new students to teach. Fortunately, edX provides you with an array of tools for this purpose, as follows: Creative Submission Tool: Submit the assets required for creating a page in your edX course using the Creative Submission Tool. You can also use those very materials in promoting the course. Access the Creative Submission Tool at https://edx.projectrequest.net/index.php/request. Logo and the Media Kit: Although these are intended for members of the media, you can also use the edX Media Kit for your promotional purposes: you can download high-resolution photos, edX logo visual guidelines (in Adobe Illustrator and EPS versions), key facts about edX, and answers to frequently asked questions. You can also contact the press office for additional information. You can find the edX Media Kit online at https://www.edx.org/media-kit. edX Learner Stories: Using stories of students who have succeeded with other edX courses is a compelling way to market the potential of your course. Using Tumblr, edX Learner Stories offers more than a dozen student profiles. You might want to use their stories directly or use them as a template for marketing materials of your own. Read edX Learner Stories at http://edxstories.tumblr.com. Social media marketing Traditional marketing tools and the options available in the edX Marketing Portal are a fitting first step in promoting your course. However, social media gives you a tremendously enhanced toolkit you can use to attract, convert, and transform spectators into students. When marketing your course with social media, you will also simultaneously create a digital footprint for yourself. This in turn helps establish your subject matter expertise far beyond one edX course. 
What's more, you won't be alone; there exists a large community of edX instructors and students, including those from other MOOC platforms already online. Take, for example, the following screenshot from edX's Twitter account (@edxonline). edX has embraced social media as a means of marketing and to create a practicing virtual community for those creating and taking their courses. Likewise, edX also actively maintains a page on Facebook, as follows: You can also see how active edX's YouTube channel is in the following screenshot. Note that there are both educational and promotional videos. To get you started in social media—if you're not already there—take a look at the list of 12 social media tools, as follows. Not all of these tools might be relevant to your needs, but consider the suggestions to decide how you might best use them, and give them a try: Facebook (https://www.facebook.com): Create a fan page for your edX course; you can re-use content from your course's About page such as your course intro video, course description, course image, and any other relevant materials. Be sure to include a link from the Facebook page for your course to its About page. Look for ways to share other content from your course (or related to your course) in a way that engages members of your fan page. Use your Facebook page to generate interest and answer questions from potential students. You might also consider creating a Facebook group. This can be more useful for current students to share knowledge during the class and to network once it's complete. Visit edX on Facebook at https://www.facebook.com/edX. Google+ (https://plus.google.com): Take the same approach as you did with your Facebook fan page. While this is not as engaging as Facebook, you might find that posting content on Google+ increases traffic to your course's About page due to the increased referrals you are likely to experience via Google search results. 
Add edX to your circles on Google+ at https://plus.google.com/+edXOnline/posts. Instagram (https://instagram.com): Share behind-the-scenes pictures of you and your staff for your course. Show your students what a day in your life is like, making sure to use a unique hashtag for your course. Picture the possibilities with edX on Instagram at https://instagram.com/edxonline/. LinkedIn (https://www.linkedin.com): Share information about your course in relevant LinkedIn groups, and post public updates about it in your personal account. Again, make sure you include a unique hashtag for your course and a link to the About page. Connect with edX on LinkedIn at https://www.linkedin.com/company/edx. Pinterest (https://www.pinterest.com): Share photos as with Instagram, but also consider sharing infographics about your course's subject matter or share infographics or imagers you use in your actual course as well. You might consider creating pin boards for each course, or one per pin board per module in a course. Pin edX onto your Pinterest pin board at https://www.pinterest.com/edxonline/. Slideshare (http://www.slideshare.net): If you want to share your subject matter expertise and thought leadership with a wider audience, Slideshare is a great platform to use. You can easily post your PowerPoint presentations, class documents or scholarly papers, infographics, and videos from your course or another topic. All of these can then be shared across other social media platforms. Review presentations from or about edX courses on Slideshare at http://www.slideshare.net/search/slideshow?searchfrom=header&q=edx. SoundCloud (https://soundcloud.com): With SoundCloud, you can share MP3 files of your course lectures or create podcasts related to your areas of expertise. Your work can be shared on Twitter, Tumblr, Facebook, and Foursquare, expanding your influence and audience exponentially. Listen to some audio content from Harvard University at https://soundcloud.com/harvard. 
Tumblr (https://www.tumblr.com): Resembling what the child of WordPress and Twitter might be like, Tumblr provides a platform to share behind-the-scenes text, photos, quotes, links, chat, audios, and videos of your edX course and the people who make it possible. Share a "day in the life" or document in real time, an interactive history of each edX course you teach. Read edX's learner stories at http://edxstories.tumblr.com. Twitter (https://twitter.com): Although messages on Twitter are limited to 140 characters, one tweet can have a big impact. For a faculty wanting to promote its edX course, it is an efficient and cost-effective option. Tweet course videos, samples of content, links to other curriculum, or promotional material. Engage with other educators who teach courses and retweet posts from academic institutions. Follow edX on Twitter at https://twitter.com/edxonline. You might also consider subscribing to edX's Twitter list of edX instructors at https://twitter.com/edXOnline/lists/edx-professors-teachers, and explore the Twitter accounts of edX courses by subscribing to that list at https://twitter.com/edXOnline/lists/edx-course-handles. Vine (https://vine.co): A short-format video service owned by Twitter, Vine provides you with 6 seconds to share your creativity, either in a continuous stream or smaller segments linked together like stop motion. You might create a vine showing the inner working of the course faculty and staff, or maybe even ask short questions related to the course content and invite people to reply with answers. Watch vines about MOOCs at https://vine.co. WordPress: WordPress gives you two options to manage and share content with students. With WordPress.com (https://wordpress.com), you're given a selection of standardized templates to use on a hosted platform. You have limited control but reasonable flexibility and limited, if any, expenses. 
With Wordpress.org (https://wordpress.org), you have more control but you need to host it on your own web server, which requires some technical know-how. The choice is yours. Read posts on edX on the MIT Open Matters blog on Wordpress.com at https://mitopencourseware.wordpress.com/category/edx/. YouTube (https://www.youtube.com): YouTube is the heart of your edX course. It's the core of your curriculum and the anchor of engagement for your students. When promoting your course, use existing videos from your curriculum in your social media campaigns, but identify opportunities to record short videos specifically for promoting your course. Watch course videos and promotional content on the edX YouTube channel at https://www.youtube.com/user/EdXOnline. Personal branding basics Additionally, whether the impact of your effort is immediately evident or not, your social media presence powers your personal brand as a professor. Why is that important? Read on to know. With the possible exception of marketing professors, most educators likely tend to think more about creating and teaching their course than promoting it—or themselves. Traditionally, that made sense, but it isn't practical in today's digitally connected world. Social media opens an area of influence where all educators—especially those teaching an edX course—should be participating. Unfortunately, many professors don't know where or how to start with social media. If you're teaching a course on edX, or even edX Edge, you will likely have some kind of marketing support from your university or edX. But if you are just in an organization using edX Code, or simply want to promote yourself and your edX course, you might be on your own. One option to get you started with social media is the Babb Group, a provider of resources and consulting for online professors, business owners, and real-estate investors. 
Its founder and CEO, Dani Babb (PhD), says this: "Social media helps you show that you are an expert in a given field. It is an important tool today to help you get hired, earn promotions, and increase your visibility." The Babb Group offers five packages focused on different social media platforms: Twitter, LinkedIn, Facebook, Twitter and Facebook, or Twitter with Facebook and LinkedIn. You can view the Babb Group's social media marketing packages at http://www.thebabbgroup.com/social-media-profiles-for-professors.html. Connect with Dani Babb on LinkedIn at https://www.linkedin.com/in/drdanibabb or on Twitter at https://twitter.com/danibabb.

Summary

In this article, we tackled traditional marketing tools, identified options available from edX, discussed social media marketing, and explored personal branding basics. Resources for Article: Further resources on this subject: Constructing Common UI Widgets [article] Getting Started with Odoo Development [article] MODx Web Development: Creating Lists [article]
Packt
07 Jan 2011
7 min read

Moodle CIMS: Installing and Using the Bulk Course Upload Tool

Moodle as a Curriculum and Information Management System Use Moodle to manage and organize your administrative duties; monitor attendance records, manage student enrolment, record exam results, and much more Transform your Moodle site into a system that will allow you to manage information such as monitoring attendance records, managing the number of students enrolled in a particular course, and inter-department communication Create courses for all subjects in no time with the Bulk Course Creation tool Create accounts for hundreds of users swiftly and enroll them in courses at the same time using a CSV file. Part of Packt's Beginner's Guide series: Readers are walked through each task as they read the book with the end result being a sample CIMS Moodle site Using the Bulk Course Upload tool Rather than creating course categories and then courses one at a time and assigning teachers to each course after the course is created, we can streamline the process through the use of the Bulk Course Upload tool. This tool allows you to organize all the information required to create your courses in a CSV (Comma Separated Values) file that is then uploaded into the creation tool and used to create all of your courses at once. Due to its design, the Bulk Course Upload tool only works with MySQL databases. Our MAMP package uses a MySQL database as do the LAMP packages. If your Moodle site is running on a database of a different variety you will not be able to use this tool. Time for action – installing the Bulk Course Upload tool Now that we have our teacher's accounts created, we are ready to use the Bulk Course Creation tool to create all of our courses. First we need to install the tool as an add-on admin report into our Moodle site. To install this tool, do the following: Go to the Modules and plugins area of www.moodle.org. Search for Bulk Course Upload tool. Click on Download latest version to download the tool to your computer. 
If this does not download the package to your hard drive and instead takes you to a forum in the Using Moodle course on Moodle.org, download the package that was posted in that forum on Sunday, 11 May 2008. Expand the package, contained within, and find the uploadcourse.php file. Place the uploadcourse.php file in your admin directory located inside your main Moodle directory. When logged in as admin, enter the following address in your browser address bar: http://localhost:8888/moodle19/admin/uploadcourse.php. (If you are not using a MAMP package, the first part of the address will of course be different.) You will then see the Upload Course tool explanation screen that looks like the following screenshot: The screen, shown in the previous screenshot, lists the thirty-nine different fields that can be included in a CSV file when creating courses in bulk via this tool. Most of the fields here control settings that are modified in individual courses by clicking on the Settings link found in the Administration block of each course. The following is an explanation of the fields with notes about which ones are especially useful when setting up Moodle as a CIMS: category: You will definitely want to specify categories in order to organize your courses. The best way to organize courses and categories here is such that the organization coincides with the organization of your curriculum as displayed in school documentation and student handbooks. If you already have categories in your Moodle site, make sure that you spell the categories exactly as they appear on your site, including capitalization. A mistake will result in the creation of a new category. This field should start with a forward slash followed by the category name with each subcategory also being followed by a forward slash (for example, /Listening/Advanced). cost: If students must pay to enroll in your courses, via the PayPal plugin, you may enter the cost here. 
You must have the PayPal plugin activated on your site, which can be done by accessing it via the Site Administration block by clicking on Courses and then Enrolments. Additionally, as this book goes to print, the ability to enter a field in the file used by the Bulk Course tool that allows you to set the enrolment plugin, is not yet available. Therefore, if you enter a cost value for a course, it will not be shown until the enrolment plugin for the course is changed manually by navigating to the course and editing the course through the Settings link found in the course Administration block. Check Moodle.org frequently for updates to the Bulk Course Upload tool as the feature should be added soon. enrolperiod: This controls the amount of time a student is enrolled in a course. The value must be entered in seconds so, for example, if you had a course that ran for one month and students were to be unenrolled after that period, you would set this value to 2,592,000 (60 seconds X 60 minutes per hour X 24 hours per day X 30 = 2,592,000). enrollable: This simply controls whether the course is enrollable or not. Entering a 0 will render the course unenrollable and a 1 will set the course to allow enrollments. enrolstartdate and enrolenddate: If you wish to set an enrollment period, you should enter the dates (start and end dates) in these two fields. The dates can be entered in the month/day/year format (for example, 8/1/10). expirynotify: Enter a 1 here to have e-mails sent to the teacher when a student is going to be unenrolled from a course. Enter a 0 to prevent e-mails from being sent when a student is going to be unenrolled. This setting is only functional when the enrolperiod value is set. expirythreshold: Enter the number of days in advance you want e-mails notifying of student unenrollment sent. The explanation file included calls for a value between 10 and 30 days but this value can actually be set to between 1 and 30 days. 
This setting is only functional when the enrolperiod value and expirynotify and/or notifystudents (see below) is/are set. format: This field controls the format of the course. As of Moodle 1.9.8+ there are six format options included in the standard package. The options are lams, scorm, social, topics, weeks, and weeks CSS, and any of these values can be entered in this field. fullname: This is the full name of the course you are creating (for example, History 101). groupmode: Set this to 0 for no groups, 1 for separate groups, and 2 for visible groups. groupmodeforce: Set this to 1 to force group mode at the course level and 0 to allow group mode to be set in each individual activity. guest: Use a 0 to prevent guests from accessing this course, a 1 to allow guests in the course, and a 2 to allow only guests who have the key into the course. idnumber: You can enter a course ID number using this field. This number is only used for administrative purposes and is not visible to students. This is a very useful field for institutions that use identification numbers for courses and can provide a link for connecting the courses within Moodle to other systems. If your institution uses any such numbering system it is recommended that you enter the appropriate numbers here. lang: This is the language setting for the course. Leaving this field blank will result in the Do not force language setting, which can be seen from the Settings menu accessed from within each individual course. Doing so will allow users to toggle between languages that have been installed in the site. To specify a language, and thus force the display of the course using this language, enter the language as it is displayed within the Moodle lang directory (for example, English = en_utf8). maxbytes: This field allows you to set the maximum size of individual files that are uploaded to the course. Leaving this blank will result in the course being created with the site-wide maximum file upload size setting.
Values must be entered in bytes (for example, 1 MB = 1,048,576 bytes). Refer to an online conversion site such as www.onlineconversion.com to help you determine the value you want to enter here. metacourse: If the course you are creating is a meta course, enter a 1, otherwise enter a 0 or leave the field blank.
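As a sketch of how such a CSV might be assembled programmatically, the following Python snippet writes a one-course file using field names taken from the list above. The values and the exact set of columns are illustrative only — check them against the Bulk Course Upload tool's own documentation before uploading:

```python
import csv

# Field names come from the list above; which columns your file actually
# needs is determined by the Bulk Course Upload tool, so verify first.
fields = ["category", "fullname", "idnumber", "format",
          "enrolperiod", "maxbytes", "guest", "groupmode"]

one_month = 60 * 60 * 24 * 30   # enrolperiod is given in seconds: 2592000
one_megabyte = 1024 * 1024      # maxbytes is given in bytes: 1048576

courses = [{
    "category": "/Listening/Advanced",  # starts with a forward slash, as described above
    "fullname": "History 101",
    "idnumber": "HIST-101",             # illustrative administrative ID, invisible to students
    "format": "topics",                 # one of: lams, scorm, social, topics, weeks, weeks CSS
    "enrolperiod": one_month,
    "maxbytes": one_megabyte,
    "guest": 0,                         # 0 = no guest access
    "groupmode": 0,                     # 0 = no groups
}]

with open("courses.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=fields)
    writer.writeheader()
    writer.writerows(courses)

print(one_month, one_megabyte)  # 2592000 1048576
```

Generating the file from a script like this makes it easy to keep the second-based and byte-based fields honest: the arithmetic for one month of enrolment and a 1 MB upload limit matches the figures quoted in the field descriptions above.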
Packt
12 Jul 2010
8 min read

Teaching Special Kids How to Write Simple Sentences and Paragraphs using Moodle 1.9

Creating a sentence using certain words

Last Saturday, Alice went to the circus with her mother. Today is Priscilla's birthday, and Alice cannot wait to tell her friends about the funny and dangerous things she saw at the circus. She was really scared when she saw the lions jumping through the flaming hoops. She enjoyed the little dogs jumping and twirling, and the big seals spinning balls. Now, she has to remember some of the shows. Shall we help her?

Time for action – choosing and preparing the words to be used in a sentence

We are first going to choose the words to be used in a sentence and then add a new advanced uploading of files activity to an existing Moodle course:

1. Log in to your Moodle server.
2. Click on the desired course name (Circus).
3. As previously learned, follow the necessary steps to edit the summary for the desired week. Enter Exercise 1 in the Summary textbox and save the changes.
4. Click on the Add an activity combo box for the selected week and choose Advanced uploading of files.
5. Enter Creating a sentence using certain words in Assignment name.
6. Select Verdana in font and 5 (18) in size (the first two combo boxes below Description).
7. Click on the Font Color button (a T with six color boxes) and select your desired color for the text.
8. Click on the big text box below Description and enter the description of the student's goal for this exercise. You can use the enlarged editor window. Use a different font color for each of the three words: Lion, Hoops, and Flaming.
9. Close the enlarged editor's window.
10. Select 10MB in Maximum size. This is the maximum size of the file that each student will be able to upload as a result of this activity. However, it is very important to check the possibilities offered by your Moodle server with its Moodle administrator.
11. Select 1 in Maximum number of uploaded files.
12. Select Yes in Allow notes. This way, the student will be able to add notes with the sentence.
13. Scroll down and click on the Save and display button. The web browser will show the description for the advanced uploading of files activity.

What just happened?

We added an advanced uploading of files activity to a Moodle course that asks a student to write a sentence including the three words specified in the notes section. The students can now read the goals for this activity by clicking on its hyperlink in the corresponding week. They will then write the sentence and upload a recording of their voice describing the situation. We added the description of the goal and the three words to use in the sentence, with customized fonts and colors, using the online text activity editor features.

Time for action – writing and recording the sentence

We must first download and install Audacity 1.2. We will then help Alice to write a sentence, read it, and record her voice using Audacity's features:

1. If you do not have it yet, download and install Audacity 1.2 (http://audacity.sourceforge.net/download/). This software allows the student to record his/her voice and save the recording as an MP3 file compatible with the previously explained Moodle multimedia plugins. Here, we cover only a basic installation and usage of Audacity 1.2; the integration of sound and music elements in Moodle, including advanced usage of Audacity, is described in depth in Moodle 1.9 Multimedia by João Pedro Soares Fernandes, Packt Publishing.
2. Start Audacity.
3. Next, download the LAME MP3 encoder, which makes it possible for Audacity to export the recorded audio in the MP3 file format. Open your default web browser and go to the Audacity web page that displays the instructions to install the correct version of the LAME MP3 encoder: http://audacity.sourceforge.net/help/faq?s=install&item=lame-mp3.
4. Click on the LAME download page hyperlink, and then click on the hyperlink under For Audacity on Windows, in this case, Lame_v3.98.2_for_Audacity_on_Windows.exe. Run the application, read the license carefully, and follow the necessary steps to finish the installation. The default folder for the LAME MP3 encoder is C:\Program Files\Lame for Audacity.
5. Minimize Audacity. Log in to your Moodle server using the student role.
6. Click on the course name (Circus). Click on the Creating a sentence using certain words link in the corresponding week. The web browser will show the description for the activity and the three words to be used in the sentence.
7. Click on the Edit button below Notes. Moodle will display a big text area with an HTML editor. Select Verdana in font and 5 (18) in size. Write a sentence: The lion jumps through the flaming hoops.
8. Go back to Audacity. Resize and move its window in order to be able to see the sentence you have recently written.
9. Click on the Record button (the red circle) and start reading the sentence. Audacity will display the waveform of the audio track being recorded. You need a microphone connected to the computer in order to record your voice with Audacity.
10. Once you finish reading the sentence, click on the Stop button (the yellow square). Audacity will stop recording your voice.
11. Select File | Export As MP3 from Audacity's main menu. Save the MP3 audio file as mysentence.mp3 in your documents folder. Audacity will display a message indicating that it uses the freely available LAME library to handle MP3 file encoding.
12. Click on Yes and browse to the folder where you installed the LAME MP3 encoder, by default, C:\Program Files\Lame for Audacity. Click on Open, and Audacity will display a dialog box to edit some properties for the MP3 file.
13. Click on OK and it will save the MP3 file, mysentence.mp3, in your documents folder.
14. Next, go back to your web browser with the Moodle activity, scroll down, and click on the Save changes button.
15. Click on the Browse button below Submission draft. Browse to the folder that holds your MP3 audio file with the recorded sentence (your documents folder), select the file to upload, mysentence.mp3, and click on Open. Then, click on Upload this file to upload the MP3 audio file to the Moodle server.
16. The file name, mysentence.mp3, will appear below Submission draft if the MP3 file finished the upload process without problems. Next, click on Continue.
17. Click on Send for marking and then on Yes. A new message, Assignment was already submitted for marking and cannot be updated, will appear below the Notes section with the sentence.
18. Log out and log in with your normal user and role. You can check the submitted assignments by clicking on the Creating a sentence using certain words link in the corresponding week and then on View x submitted assignments. Moodle will display the links to the notes and the uploaded file for each student that submitted this assignment. You will be able to read the notes and listen to the recorded sentence by clicking on the corresponding links.
19. Once you have checked the results, click on Grade in the corresponding row of the grid. A feedback window will appear with a text editor and a drop-down list with the possible grades. Select the grade in the Grade drop-down list and write any feedback in the text editor. Then click on Save changes. The final grade will appear in the corresponding cell of the grid.

What just happened?

In this activity, we defined a simple list of words and asked the student to write a simple sentence. In this case, there is no image or multimedia resource, and therefore, the students have to use their imagination. The child has to read and understand the three words. He/she has to associate them, imagine a situation, and say and/or write a sentence. Sometimes, it is going to be too difficult for the child to write the sentence. In this case, he/she can work with the help of a therapist or a family member to run the previously explained software and record the sentence. This way, it is going to be possible to evaluate the results of this exercise even if the student cannot write a complete sentence with the words.

Have a go hero – discussing the results in Moodle forums

The usage of additional software to record the voice in order to solve the exercises can be challenging for the students and their parents. Prepare answers to frequently asked questions in the forums offered by Moodle. This way, you can interact with the students and their parents through other channels in Moodle, with different feedback possibilities. You can access the forums for each Moodle course by clicking on Forums in the Activities panel.

Adding a Random Background Image to your Joomla! Template

Packt
02 Jul 2010
2 min read
(Read more interesting articles on Joomla! 1.5 here.)

Adding a random background image to your Joomla! template

To distinguish your Joomla! template from others, there are a number of extensions for Joomla! to help you, including one that allows you to display a random image as your template's background image for the <body> element.

Getting ready

You need to install the extension called Random Background. You can find the file's download link on the Joomla! website at http://extensions.Joomla.org/extensions/style-a-design/templating/6054. Once you have saved the extension files somewhere on your computer, log in to your website's Joomla! administration panel (if Joomla! is installed at example.com, the administration panel is typically accessible at example.com/administrator), and select the Install/Uninstall option from the Extensions option in the primary navigation.

You will then be presented with a form, from where you can upload the extension's .zip file. Select the file from your computer, and then click on the Upload file & install button. Once complete, you should receive a confirmation message.

Setting relevant permissions for installing the module

If you have problems installing the module, you may receive an error message. The error is most likely because two directories on your server do not have sufficient permissions:

- /tmp
- /modules

Use Joomla!'s FTP layer to manage the necessary file permissions for you. You can edit Joomla!'s configuration file, which is called configuration.php, in the root of your Joomla! website. Simply add these variables into the file if they don't exist already:

var $ftp_host = '';    // your FTP host, e.g. ftp.example.com or just example.com, depending on your host
var $ftp_port = '';    // usually 21
var $ftp_user = '';    // your FTP username
var $ftp_pass = '';    // your FTP password
var $ftp_root = '';    // usually / or the directory of your Joomla! install
var $ftp_enable = '1'; // 1 = enabled
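If you have shell (SSH) access to the server, a common alternative to the FTP layer is to adjust the directory permissions directly. The snippet below is only a sketch: the Joomla! root path and the 755 mode are assumptions for a typical Linux host (the sketch even creates a scratch directory so it can run anywhere); substitute your real Joomla! path and confirm the appropriate modes with your host before changing anything.

```shell
# Illustrative only: we mimic a Joomla! root in a scratch directory.
# On a real server, set JOOMLA_ROOT to your actual install path
# (for example /var/www/joomla) and skip the mkdir lines.
JOOMLA_ROOT=$(mktemp -d)
mkdir -p "$JOOMLA_ROOT/tmp" "$JOOMLA_ROOT/modules"

# Make the two directories the installer writes to accessible (assumed mode):
chmod 755 "$JOOMLA_ROOT/tmp" "$JOOMLA_ROOT/modules"

# Verify the resulting permissions:
ls -ld "$JOOMLA_ROOT/tmp" "$JOOMLA_ROOT/modules"
```

Some hosts require the web server user to own these directories as well, which is a separate `chown` step that depends entirely on your hosting setup.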

A Test-Driven Data Model

Packt
13 Jan 2016
17 min read
In this article by Dr. Dominik Hauser, author of Test-driven Development with Swift, we will cover the following topics:

- Implementing a To-Do item
- Implementing the location

iOS apps are often developed using a design pattern called Model-View-Controller (MVC). In this pattern, each class (also, a struct or enum) is either a model object, a view, or a controller. Model objects are responsible for storing data. They should be independent of the kind of presentation. For example, it should be possible to use the same model object for an iOS app and a command-line tool on Mac.

View objects are the presenters of data. They are responsible for making the objects visible (or, in the case of a VoiceOver-enabled app, hearable) for users. Views are specific to the device that the app is executed on. In the case of a cross-platform application, view objects cannot be shared. Each platform needs its own implementation of the view layer.

Controller objects communicate between the model and view objects. They are responsible for making the model objects presentable.

We will use MVC for our to-do app because it is one of the easiest design patterns, and it is commonly used by Apple in their sample code. This article starts with the test-driven development of the model layer of our application.

(For more resources related to this topic, see here.)

Implementing the To-Do item

A to-do app needs a model class/struct to store information for to-do items. We start by adding a new test case to the test target:

1. Open the To-Do project and select the ToDoTests group.
2. Navigate to File | New | File, go to iOS | Source | Unit Test Case Class, and click on Next.
3. Put in the name ToDoItemTests, make it a subclass of XCTestCase, select Swift as the language, and click on Next.
4. In the next window, create a new folder called Model, and click on Create.
5. Now, delete the ToDoTests.swift template test case.
At the time of writing this article, if you delete ToDoTests.swift before you add the first test case in a test target, you will see a pop up from Xcode, telling you that adding the Swift file will create a mixed Swift and Objective-C target. This is a bug in Xcode 7.0. It seems that when adding the first Swift file to a target, Xcode assumes that there have to be Objective-C files already. Click on Don't Create if this happens to you, because we will not use Objective-C in our tests.

Adding a title property

Open ToDoItemTests.swift, and add the following import expression right below import XCTest:

@testable import ToDo

This is needed to be able to test the ToDo module. The @testable keyword makes internal methods of the ToDo module accessible by the test case. Remove the two testExample() and testPerformanceExample() template test methods.

The title of a to-do item is required. Let's write a test to ensure that an initializer that takes a title string exists. Add the following test method at the end of the test case (but within the ToDoItemTests class):

func testInit_ShouldTakeTitle() {
    ToDoItem(title: "Test title")
}

The static analyzer built into Xcode will complain about the use of unresolved identifier 'ToDoItem'. We cannot compile this code because Xcode cannot find the ToDoItem identifier. Remember that not compiling a test is equivalent to a failing test, and as soon as we have a failing test, we need to write implementation code to make the test pass.

To add a file to the implementation code, first click on the ToDo group in Project navigator; otherwise, the added file will be put into the test group. Go to File | New | File, navigate to the iOS | Source | Swift File template, and click on Next. Create a new folder called Model. In the Save As field, put in the name ToDoItem.swift, make sure that the file is added to the ToDo target and not to the ToDoTests target, and click on Create.
Open ToDoItem.swift in the editor, and add the following code:

struct ToDoItem {
}

This code is a complete implementation of a struct named ToDoItem. So, Xcode should now be able to find the ToDoItem identifier. Run the test by either going to Product | Test or using the ⌘U shortcut. The code does not compile because there is Extra argument 'title' in call. This means that at this stage, we could initialize an instance of ToDoItem like this:

let item = ToDoItem()

But we want to have an initializer that takes a title. We need to add a property, named title, of the String type to store the title:

struct ToDoItem {
    let title: String
}

Run the test again. It should pass. We have implemented the first micro feature of our to-do app using TDD, and it wasn't even hard. But first, we need to check whether there is anything to refactor in the existing test and implementation code. The tests and code are clean and simple. There is nothing to refactor as yet. Always remember to check whether refactoring is needed after you have made the tests green.

But there are a few things to note about the test. First, Xcode shows a warning that Result of initializer is unused. To make this warning go away, assign the result of the initializer to an underscore: _ = ToDoItem(title: "Test title"). This tells Xcode that we know what we are doing. We want to call the initializer of ToDoItem, but we do not care about its return value.

Secondly, there is no XCTAssert function call in the test. To add an assert, we could rewrite the test as follows:

func testInit_ShouldTakeTitle() {
    let item = ToDoItem(title: "Test title")
    XCTAssertNotNil(item, "item should not be nil")
}

But in Swift, a non-failable initializer cannot return nil. It always returns a valid instance. This means that the XCTAssertNotNil() method is useless. We do not need it to ensure that we have written enough code to implement the tested micro feature.
It is not needed to drive the development, and it does not make the code better. In the following tests, we will omit the XCTAssert functions when they are not needed to make a test fail.

Before we proceed to the next tests, let's set up the editor in a way that makes the TDD workflow easier and faster. Open ToDoItemTests.swift in the editor. Open Project navigator, and hold down the Option key while clicking on ToDoItem.swift in the navigator to open it in the assistant editor. Depending on the size of your screen and your preferences, you might prefer to hide the navigator again. With this setup, you have the tests and code side by side, and switching from a test to code and vice versa takes no time. In addition to this, as the relevant test is visible while you write the code, it can guide the implementation.

Adding an item description property

A to-do item can have a description. We would like to have an initializer that also takes a description string. To drive the implementation, we need a failing test for the existence of that initializer:

func testInit_ShouldTakeTitleAndDescription() {
    _ = ToDoItem(title: "Test title",
        itemDescription: "Test description")
}

Again, this code does not compile because there is Extra argument 'itemDescription' in call. To make this test pass, we add an itemDescription property of type String? to ToDoItem:

struct ToDoItem {
    let title: String
    let itemDescription: String?
}

Run the tests. The testInit_ShouldTakeTitleAndDescription() test fails (that is, it does not compile) because there is Missing argument for parameter 'itemDescription' in call. The reason for this is that we are using a feature of Swift where structs have an automatic initializer with arguments setting their properties. The initializer in the first test only has one argument, and, therefore, the test fails.
To make the two tests pass again, replace the initializer call in testInit_ShouldTakeTitle() with this:

_ = ToDoItem(title: "Test title", itemDescription: nil)

Run the tests to check whether all the tests pass again. But now the initializer in the first test looks bad. We would like to be able to have a short initializer with only one argument in case the to-do item only has a title. So, the code needs refactoring. To have more control over the initialization, we have to implement it ourselves. Add the following code to ToDoItem:

init(title: String, itemDescription: String? = nil) {
    self.title = title
    self.itemDescription = itemDescription
}

This initializer has two arguments. The second argument has a default value, so we do not need to provide both arguments. When the second argument is omitted, the default value is used. Before we refactor the tests, run the tests to make sure that they still pass. Then, remove the second argument from the initializer call in testInit_ShouldTakeTitle():

func testInit_ShouldTakeTitle() {
    _ = ToDoItem(title: "Test title")
}

Run the tests again to make sure that everything still works.

Removing a hidden source for bugs

To be able to use a short initializer, we need to define it ourselves. But this also introduces a new source for potential bugs. We can remove the two micro features we have implemented and still have both tests pass. To see how this works, open ToDoItem.swift, and comment out the properties and the assignments in the initializer:

struct ToDoItem {
    //let title: String
    //let itemDescription: String?

    init(title: String, itemDescription: String? = nil) {
        //self.title = title
        //self.itemDescription = itemDescription
    }
}

Run the tests. Both tests still pass. The reason for this is that they do not check whether the values of the initializer arguments are actually set to any of the ToDoItem properties. We can easily extend the tests to make sure that the values are set.
First, let's change the name of the first test to testInit_ShouldSetTitle(), and replace its contents with the following code:

let item = ToDoItem(title: "Test title")
XCTAssertEqual(item.title, "Test title",
    "Initializer should set the item title")

This test does not compile because ToDoItem does not have a property title (it is commented out). This shows us that the test is now testing our intention. Remove the comment signs for the title property and the assignment of the title in the initializer, and run the tests again. All the tests pass. Now, replace the second test with the following code:

func testInit_ShouldSetTitleAndDescription() {
    let item = ToDoItem(title: "Test title",
        itemDescription: "Test description")

    XCTAssertEqual(item.itemDescription, "Test description",
        "Initializer should set the item description")
}

Remove the remaining comment signs in ToDoItem, and run the tests again. Both tests pass again, and they now test whether the initializer works.

Adding a timestamp property

A to-do item can also have a due date, which is represented by a timestamp. Add the following test to make sure that we can initialize a to-do item with a title, a description, and a timestamp:

func testInit_ShouldSetTitleAndDescriptionAndTimestamp() {
    let item = ToDoItem(title: "Test title",
        itemDescription: "Test description",
        timestamp: 0.0)

    XCTAssertEqual(0.0, item.timestamp,
        "Initializer should set the timestamp")
}

Again, this test does not compile because there is an extra argument in the initializer. From the implementation of the other properties, we know that we have to add a timestamp property to ToDoItem and set it in the initializer:

struct ToDoItem {
    let title: String
    let itemDescription: String?
    let timestamp: Double?

    init(title: String,
        itemDescription: String? = nil,
        timestamp: Double? = nil) {

        self.title = title
        self.itemDescription = itemDescription
        self.timestamp = timestamp
    }
}

Run the tests. All the tests pass. The tests are green, and there is nothing to refactor.

Adding a location property

The last property that we would like to be able to set in the initializer of ToDoItem is its location. The location has a name and can optionally have a coordinate. We will use a struct to encapsulate this data into its own type. Add the following code to ToDoItemTests:

func testInit_ShouldSetTitleAndDescriptionAndTimestampAndLocation() {
    let location = Location(name: "Test name")
}

The test is not finished, but it already fails because Location is an unresolved identifier. There is no class, struct, or enum named Location yet. Open Project navigator, add a Swift file with the name Location.swift, and add it to the Model folder. From our experience with the ToDoItem struct, we already know what is needed to make the test green. Add the following code to Location.swift:

struct Location {
    let name: String
}

This defines a Location struct with a name property and makes the test code compilable again. But the test is not finished yet. Add the following code to testInit_ShouldSetTitleAndDescriptionAndTimestampAndLocation():

func testInit_ShouldSetTitleAndDescriptionAndTimestampAndLocation() {
    let location = Location(name: "Test name")
    let item = ToDoItem(title: "Test title",
        itemDescription: "Test description",
        timestamp: 0.0,
        location: location)

    XCTAssertEqual(location.name, item.location?.name,
        "Initializer should set the location")
}

Unfortunately, we cannot use location itself yet to check for equality, so the following assert does not work:

XCTAssertEqual(location, item.location,
    "Initializer should set the location")

The reason for this is that the first two arguments of XCTAssertEqual() have to conform to the Equatable protocol.
Again, this does not compile because the initializer of ToDoItem does not have an argument called location. Add the location property and the initializer argument to ToDoItem. The result should look like this:

struct ToDoItem {
    let title: String
    let itemDescription: String?
    let timestamp: Double?
    let location: Location?

    init(title: String,
        itemDescription: String? = nil,
        timestamp: Double? = nil,
        location: Location? = nil) {

        self.title = title
        self.itemDescription = itemDescription
        self.timestamp = timestamp
        self.location = location
    }
}

Run the tests again. All the tests pass, and there is nothing to refactor. We have now implemented a struct to hold the to-do items using TDD.

Implementing the location

In the previous section, we added a struct to hold the location information. We will now add tests to make sure Location has the needed properties and initializer. The tests could be added to ToDoItemTests, but they are easier to maintain when the test classes mirror the implementation classes/structs. So, we need a new test case class. Open Project navigator, select the ToDoTests group, and add a unit test case class with the name LocationTests. Make sure to go to iOS | Source | Unit Test Case Class, because we want to test the iOS code and Xcode sometimes preselects OS X | Source. Choose to store the file in the Model folder we created previously.

Set up the editor to show LocationTests.swift on the left-hand side and Location.swift in the assistant editor on the right-hand side. In the test class, add @testable import ToDo, and remove the testExample() and testPerformanceExample() template tests.

Adding a coordinate property

To drive the addition of a coordinate property, we need a failing test.
Add the following test to LocationTests:

func testInit_ShouldSetNameAndCoordinate() {
    let testCoordinate = CLLocationCoordinate2D(latitude: 1,
        longitude: 2)
    let location = Location(name: "",
        coordinate: testCoordinate)

    XCTAssertEqual(location.coordinate?.latitude,
        testCoordinate.latitude,
        "Initializer should set latitude")
    XCTAssertEqual(location.coordinate?.longitude,
        testCoordinate.longitude,
        "Initializer should set longitude")
}

First, we create a coordinate and use it to create an instance of Location. Then, we assert that the latitude and the longitude of the location's coordinate are set to the correct values. We use the values 1 and 2 in the initializer of CLLocationCoordinate2D because it also has an initializer that takes no arguments (CLLocationCoordinate2D()) and sets the longitude and latitude to zero. We need to make sure in the test that the initializer of Location assigns the coordinate argument to its property.

The test does not compile because CLLocationCoordinate2D is an unresolved identifier. We need to import CoreLocation in LocationTests.swift:

import XCTest
@testable import ToDo
import CoreLocation

The test still does not compile because Location does not have a coordinate property yet. Like ToDoItem, we would like to have a short initializer for locations that only have a name argument. Therefore, we need to implement the initializer ourselves and cannot use the one provided by Swift. Replace the contents of Location.swift with the following code:

import CoreLocation

struct Location {
    let name: String
    let coordinate: CLLocationCoordinate2D?

    init(name: String,
        coordinate: CLLocationCoordinate2D? = nil) {

        self.name = ""
        self.coordinate = coordinate
    }
}

Note that we have intentionally set the name in the initializer to an empty string. This is the easiest implementation that makes the tests pass. But it is clearly not what we want.
The initializer should set the name of the location to the value in the name argument. So, we need another test to make sure that the name is set correctly. Add the following test to LocationTests:

func testInit_ShouldSetName() {
    let location = Location(name: "Test name")
    XCTAssertEqual(location.name, "Test name",
        "Initializer should set the name")
}

Run the test to make sure it fails. To make the test pass, change self.name = "" in the initializer of Location to self.name = name. Run the tests again to check that now all the tests pass. There is nothing to refactor in the tests and implementation. Let's move on.

Summary

In this article, we covered the implementation of a to-do item by adding a title property, an item description property, a timestamp property, and more. We also covered the implementation of a location using the coordinate property.

Resources for Article:

Further resources on this subject:

- Share and Share Alike [article]
- Introducing Test-driven Machine Learning [article]
- Testing a UI Using WebDriverJS [article]

qooxdoo: Working with Layouts

Packt
27 Dec 2011
16 min read
(For more resources on this topic, see here.)

qooxdoo uses the generic terminology of graphical user interfaces, so it is very easy to understand the concepts involved. The basic building block in qooxdoo is termed a widget. Each widget (GUI component) is a subclass of the Widget class. A widget also acts as a container to hold more widgets. Wherever possible, grouping widgets to form a reusable component or custom widget is a good idea. This allows you to maintain consistency across your application and also helps you build the application more quickly. It also increases maintainability, as you need to fix a defect in only one place. qooxdoo provides a set of containers, too, to carry widgets, and provides public methods to manage them.

Let's start with the framework's class hierarchy.

Base classes for widgets

The qooxdoo framework abstracts the common functionalities required by all the widgets into a few base classes, so that they can be reused by any class through object inheritance. Let's start with these base classes.

qx.core.Object

Object is the base class for all other qooxdoo classes, either directly or indirectly. The qx.core.Object class has the implementation for most of the functionalities, such as object management, logging, event handling, object-oriented features, and so on. A class can extend the qx.core.Object class to get all the functionalities defined in this class. When you want to add any functionality to your class, just inherit the Object class and add the extra functionalities in the subclass. The major functionalities of the Object class are explained in the sections that follow.
Object management

The Object class provides the following methods for object management, such as creation, destruction, and so on:

- base(): This method calls the base class method
- dispose(): This method disposes of or destroys the object
- isDisposed(): This method returns a true value if the object is disposed
- toString(): This method returns the object in string format
- toHashCode(): This method returns the hash code of the object

Event handling

The Object class provides the following methods for event creation, event firing, event listening, and so on:

- addListener(): This method adds the listener on the event target and returns the ID of the listener
- addListenerOnce(): This method adds the listener and listens only to the first occurrence of the event
- dispatchEvent(): This method dispatches the event
- fireDataEvent(): This method fires the data event
- fireEvent(): This method fires the event
- removeListener(): This method removes the listener
- removeListenerById(): This method removes the listener by its ID, given by addListener()

Logging

The Object class provides the following methods to log messages at different levels:

- warn(): Logs the message at warning level
- info(): Logs the message at information level
- error(): Logs the message at error level
- debug(): Logs the message at debugging level
- trace(): Logs the message at tracing level

Also, the Object class provides the methods for setters and getters for properties, and so on.

qx.core.LayoutItem

LayoutItem is the topmost class in the hierarchy. You can place only layout items in the layout manager. LayoutItem is an abstract class. The LayoutItem class mainly provides properties, such as height, width, margins, shrinking, growing, and many more, for the item to be drawn on the screen. It also provides a set of public methods to alter these properties. Check the API documentation for the full set of class information.
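The listener-ID contract described in the event-handling list above (addListener() returns an ID that removeListenerById() later accepts) can be sketched in a few lines of plain JavaScript. The class below is only an illustrative stand-in whose method names mirror qooxdoo's API; it is not the framework's implementation.

```javascript
// Minimal stand-in for the listener bookkeeping that qx.core.Object
// describes: addListener() returns an ID, removeListenerById() uses it.
// Illustrative only; the real qooxdoo implementation differs.
class MiniEmitter {
  constructor() {
    this.listeners = new Map(); // id -> { type, callback }
    this.nextId = 0;
  }
  addListener(type, callback) {
    const id = this.nextId++;
    this.listeners.set(id, { type, callback });
    return id; // the ID that removeListenerById() expects
  }
  removeListenerById(id) {
    return this.listeners.delete(id);
  }
  fireDataEvent(type, data) {
    for (const { type: t, callback } of this.listeners.values()) {
      if (t === type) callback(data);
    }
  }
}

// Usage: attach a listener, fire an event, then detach by ID.
const obj = new MiniEmitter();
const received = [];
const id = obj.addListener("changeValue", (data) => received.push(data));
obj.fireDataEvent("changeValue", 42);
obj.removeListenerById(id);
obj.fireDataEvent("changeValue", 43); // no longer delivered
console.log(received); // [42]
```

Returning an opaque ID (rather than requiring the caller to keep the original callback around, as removeListener() does) is what makes it safe to register anonymous functions and still detach them later.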
qx.core.Widget

Next in the class hierarchy is the Widget class, which is the base class for all the GUI components. Widget is the superclass for all the individual GUI components, such as button, text field, combo box, container, and so on, as shown in the class hierarchy diagram. There are different kinds of widgets, such as containers, menus, toolbars, form items, and so on; each kind of widget is defined in a different namespace. We will see all the different namespaces or packages, one by one, in this article.

A widget consists of at least three HTML elements. The container element, which is added to the parent widget, has two child elements: the decoration and the content element. The decoration element decorates the widget. It has a lower z-index and contains markup to render the widget's background and border styles, using an implementation of the qx.ui.decoration.IDecorator interface. The content element is positioned inside the container element, with the padding, and contains the real widget element.

Widget properties

Common widget properties include:

Visibility: This property controls the visibility of the widget. The possible values for this property are:

- visible: Makes the widget visible on screen.
- hidden: Hides the widget, but the widget's space will still be occupied in the parent widget's layout. This is similar to the CSS style visibility:hidden.
- exclude: Hides the widget and removes it from the parent widget's layout, but the widget is still a child of its parent widget. This is similar to the CSS style display:none.

The methods to modify this property are show(), hide(), and exclude(). The methods to check the status are isVisible(), isHidden(), and isExcluded().

Tooltip: This property displays the tooltip when the cursor is pointing at the widget. The tooltip information consists of toolTipText and toolTipIcon. The different methods available to alter this property are:

- setToolTip()/getToolTip(): Sets or returns the qx.ui.tooltip.ToolTip instance. The default value is null.
- setToolTipIcon()/getToolTipIcon(): Sets or returns the URL for the icon. The default value is null.
- setToolTipText()/getToolTipText(): Sets or returns the string text. It also supports HTML markup. The default value is null.

Text color: The textColor property sets the foreground text color of the widget. The possible values for this property are any color or null.

Padding: This property is a shorthand group property for paddingTop, paddingRight, paddingBottom, and paddingLeft of the widget. The available methods are setPadding() and resetPadding(), which set values for top, right, bottom, and left padding, consecutively. If any values are missing, the opposite side's value will be taken for that side. Set/get methods for each padding side are also available.

Tab index: This property controls the traversal of widgets on the Tab key press. Possible values for this property are any integer or null. The traversal order is from lower value to higher value. By default, the tab index of the widgets is set in the order in which they are added to the container. If you want to provide a custom traversal order, set the tab index accordingly. The available methods are setTabIndex() and getTabIndex(). These methods, respectively, set and return an integer value (0 to 32000) or null.

Font: The font property defines the font for the widget. The possible value is either a font name defined in the theme, an instance of qx.bom.Font, or null. The available methods are:

- setFont(): Sets the font
- getFont(): Retrieves the font
- initFont(): Initializes the font
- resetFont(): Resets the font

Enabled: This property enables or disables the widget for user input. Possible values are true or false (Boolean value). The default value is true. The widget invokes all the input events only if it is in the enabled state. In the disabled state, the widget will be grayed out and no user input is allowed.
The only events invoked in the disabled state are mouseOver and mouseOut. In the disabled state, the tab index and widget focus are ignored, and tab traversal focus moves to the next enabled widget. setEnabled()/getEnabled() are the methods to set or get the Boolean value, respectively.

Selectable: This property says whether the widget contents are selectable. When a widget contains text data and this property is true, native browser selection can be used to select the contents. Possible values are true or false; the default value is false. setSelectable(), getSelectable(), initSelectable(), resetSelectable(), and toggleSelectable() are the methods available to modify this property.

Appearance: This property controls the style of the element and identifies the theme key for the widget. Possible values are any string defined in the theme; the default value is "widget". setAppearance(), getAppearance(), initAppearance(), and resetAppearance() are the methods to alter the appearance.

Cursor: This property specifies which type of cursor to display when the mouse is over the widget. The possible values are any valid CSS2 cursor name defined by the W3C (any string) or null. The default value is null. Some of the W3C-defined cursor names are default, wait, text, help, pointer, crosshair, move, n-resize, ne-resize, e-resize, se-resize, s-resize, sw-resize, w-resize, and nw-resize. setCursor(), getCursor(), resetCursor(), and initCursor() are the methods available to alter this property.

qx.application

The starting point for a qooxdoo application is a custom application class that inherits from one of the application classes in the qx.application namespace, or package. Similar to the main method in Java, a qooxdoo application also starts from the main method of the custom application class. qooxdoo supports three different kinds of applications:

Standalone: Uses the application root to build full-blown, standalone qooxdoo applications.
Inline: Uses the page root to build traditional web page-based applications, which are embedded into islands in the classic HTML page.

Native: This class is for applications that do not involve qooxdoo's GUI toolkit. Typically, they only make use of the IO (AJAX) and BOM functionality (for example, to manipulate the existing DOM).

Whenever a user creates an application with the Python script, a custom application class is generated with a default main method. Let's see the custom application class generated for our Team Twitter application. After generation, the main function code is edited to add functionality to communicate with the RPC server and say "hello" to the qooxdoo world. The following code is the content of the Application.js class file, with an RPC call to communicate with the server:

    /**
     * This is the main application class of your custom application "teamtwitter"
     */
    qx.Class.define("teamtwitter.Application",
    {
      extend : qx.application.Standalone,

      members :
      {
        /**
         * This method contains the initial application code and gets
         * called during startup of the application
         * @lint ignoreDeprecated(alert)
         */
        main : function()
        {
          // Call super class
          this.base(arguments);

          // Enable logging in debug variant
          if (qx.core.Variant.isSet("qx.debug", "on"))
          {
            // support native logging capabilities, e.g. Firebug for Firefox
            qx.log.appender.Native;
            // support additional cross-browser console. Press F7 to toggle visibility
            qx.log.appender.Console;
          }

          /* Below is your actual application code... */

          // Create a button
          var button1 = new qx.ui.form.Button("First Button", "teamtwitter/test.png");

          // Document is the application root
          var doc = this.getRoot();

          // Add button to document at fixed coordinates
          doc.add(button1, {left: 100, top: 50});

          // Add an event listener
          button1.addListener("execute", function(e)
          {
            var rpc = new qx.io.remote.Rpc();
            rpc.setCrossDomain(false);
            rpc.setTimeout(1000);
            var host = window.location.host;
            var proto = window.location.protocol;
            var webURL = proto + "//" + host + "/teamtwitter/.qxrpc";
            rpc.setUrl(webURL);
            rpc.setServiceName("qooxdoo.test");
            rpc.callAsync(function(result, ex, id)
            {
              if (ex == null) {
                alert(result);
              } else {
                alert("Async(" + id + ") exception: " + ex);
              }
            }, "echo", "Hello to qooxdoo World!");
          });
        }
      }
    });

We've had an overview of the class hierarchy of the qooxdoo framework and got to know the base classes for the widgets. We now have an idea of the core functionalities available for the widgets, the core properties of the widgets, and the methods to manage those properties, as well as more information on applications in the qooxdoo framework. Now, it is time to learn about containers.

Containers

A container is a kind of widget. It holds multiple widgets and exposes public methods to manage its child widgets. One can configure a layout manager for the container to position all the child widgets in the container. qooxdoo provides different containers for different purposes. Let's check the different containers provided by the qooxdoo framework and understand the purpose of each container. Once you understand the purpose of each container, you can select the right container when you design your application.

Scroll

Whenever the content widget size (width and height) is larger than the container size (width and height), the Scroll container automatically provides vertical, horizontal, or both scroll bars. You have to set the Scroll container's size carefully to make it work properly.
The Scroll container is most commonly used if the application screen size is large. The Scroll container has a fixed layout and can hold a single child, so there is no need to configure a layout for this container. The following code snippet demonstrates how to use the Scroll container:

    // create scroll container
    var scroll = new qx.ui.container.Scroll().set({
        width: 300,
        height: 200
    });

    // add a widget larger than the width and height of the scroll container
    scroll.add(new qx.ui.core.Widget().set({
        width: 600,
        minWidth: 600,
        height: 400,
        minHeight: 400
    }));

    // add to the root widget
    this.getRoot().add(scroll);

The GUI look for the preceding code is as follows:

Stack

The Stack container puts a widget on top of an old widget. This container displays only the topmost widget. The Stack container is used if there is a set of tasks to be carried out in a flow; an application user can work through each user interface one by one, in order. The following code snippet demonstrates how to use the Stack container:

    // create stack container
    var stack = new qx.ui.container.Stack();

    // add some children
    stack.add(new qx.ui.core.Widget().set({
        backgroundColor: "red"
    }));
    stack.add(new qx.ui.core.Widget().set({
        backgroundColor: "green"
    }));
    stack.add(new qx.ui.core.Widget().set({
        backgroundColor: "blue"
    }));

    this.getRoot().add(stack);

The GUI look for the preceding code is as follows:

Resizer

Resizer is a container that provides the flexibility of resizing at runtime. This container should be used only if you want to allow the application user to dynamically resize the container. The following code snippet demonstrates how to use the Resizer container:

    var resizer = new qx.ui.container.Resizer().set({
        marginTop: 50,
        marginLeft: 50,
        width: 200,
        height: 100
    });
    resizer.setLayout(new qx.ui.layout.HBox());

    var label = new qx.ui.basic.Label("Resize me <br>I'm resizable");
    label.setRich(true);
    resizer.add(label);

    this.getRoot().add(resizer);

The GUI look for the preceding code is as follows:

Composite

This is a generic container.
If you do not want any of the specific features, such as resizing at runtime, stacking, or scrolling, but just want a container, you can use this one. It is one of the most used containers. The following code snippet demonstrates Composite container usage. A horizontal layout is configured for the Composite container, and a label and a text field are added to it. The horizontal layout manager places them horizontally:

    // create the composite
    var composite = new qx.ui.container.Composite();

    // configure a layout
    composite.setLayout(new qx.ui.layout.HBox());

    // add some child widgets
    composite.add(new qx.ui.basic.Label("Enter Text: "));
    composite.add(new qx.ui.form.TextField());

    // add to the root widget
    this.getRoot().add(composite);

The GUI look for the preceding code is as follows:

Window

Window is a container that provides the familiar window features, such as minimize, maximize, restore, and close. The icons for these operations appear in the top-right corner. Different themes can be set to get the look and feel of a native window within the browser. This container is best used when an application requires a Multiple Document Interface (MDI) or Single Document Interface (SDI). The following code snippet demonstrates the creation and display of a window:

    var win = new qx.ui.window.Window("First Window");
    win.setWidth(300);
    win.setHeight(200);

    // omit the minimize button
    win.setShowMinimize(false);

    this.getRoot().add(win, {left: 20, top: 20});
    win.open();

The GUI look for the preceding code is as follows:

TabView

The TabView container allows you to display multiple tabs, but only one tab is active at a time. The TabView container simplifies the GUI by avoiding expansive content spreading across multiple pages with a scroll. Instead, the TabView container provides tab title buttons to navigate to the other tabs. You can group related fields into each tab and try to avoid scrolling by keeping the most-used tab as the first tab and making it active.
Application users can move to other tabs if required. TabView is the best example of stack container usage: it stacks all pages one over the other and displays one page at a time. Each page has a button at the top, in a button bar, to allow switching pages. TabView allows positioning the button bar at the top, bottom, left, or right. TabView also allows adding pages dynamically; a scroll appears when the page buttons exceed the available size. The following code snippet demonstrates the usage of TabView:

    var tabView = new qx.ui.tabview.TabView();

    // create a page
    var page1 = new qx.ui.tabview.Page("Layout", "icon/16/apps/utilities-terminal.png");

    // add the page to the tabview
    tabView.add(page1);

    var page2 = new qx.ui.tabview.Page("Notes", "icon/16/apps/utilities-notes.png");
    page2.setLayout(new qx.ui.layout.VBox());
    page2.add(new qx.ui.basic.Label("Notes..."));
    tabView.add(page2);

    var page3 = new qx.ui.tabview.Page("Calculator", "icon/16/apps/utilities-calculator.png");
    tabView.add(page3);

    this.getRoot().add(tabView, {edge: 0});

The GUI look for the preceding code is as follows:

GroupBox

GroupBox groups a set of form widgets and shows an effective visualization with the use of a legend, which supports text and icons to describe the group. As with any container, you can configure a layout manager and add any number of form widgets to the GroupBox. Additionally, it is possible to use checkboxes or radio buttons within the legend. This allows you to provide group functionality, such as selecting or unselecting all the options in the group. This feature is most important for complex forms with multiple choices.
The following code snippet demonstrates the usage of GroupBox:

    // group box
    var grpBox = new qx.ui.groupbox.GroupBox("I am a box");
    this.getRoot().add(grpBox, {left: 20, top: 70});

    // radio group box
    var rGrpBox = new qx.ui.groupbox.RadioGroupBox("I am a box");
    rGrpBox.setLayout(new qx.ui.layout.VBox(4));
    rGrpBox.add(new qx.ui.form.RadioButton("Option1"));
    rGrpBox.add(new qx.ui.form.RadioButton("Option2"));
    this.getRoot().add(rGrpBox, {left: 160, top: 70});

    // check group box
    var cGrpBox = new qx.ui.groupbox.CheckGroupBox("I am a box");
    this.getRoot().add(cGrpBox, {left: 300, top: 70});

The GUI look for the preceding code is as follows:

We have now seen the different containers available in the qooxdoo framework. Each container provides a particular functionality. Based on the information displayed on the GUI, you should choose the right container for better usability of the application. Containers are the outermost widgets in the GUI. Once you decide on the containers for your user interface, the next thing to do is to configure the layout manager for each container. The layout manager places the child widgets in the container on the basis of the configured layout manager's policies. Now, it's time to learn how to place and arrange widgets inside the container, that is, how to lay out the container.
SOAP and PHP 5

Packt
22 Oct 2009
16 min read
SOAP

SOAP, formerly known as Simple Object Access Protocol (until the acronym was dropped in version 1.2), came around shortly after XML-RPC was released. It was created by a group of developers with backing from Microsoft. Interestingly, the creator of XML-RPC, David Winer, was also one of the primary contributors to SOAP. Winer released XML-RPC before SOAP, when it became apparent to him that though SOAP was still a way away from being completed, there was an immediate need for some sort of web service protocol.

Like XML-RPC, SOAP is an XML-based web service protocol. SOAP, however, addresses a lot of the shortcomings of XML-RPC: namely the lack of user-defined data types, limited character set support, and rudimentary security. It is, quite simply, a more powerful and flexible protocol than REST or XML-RPC. Unfortunately, sacrifices come with that power: SOAP is a much more complex and rigid protocol. For example, even though SOAP can stand alone, it is much more useful when you use another XML-based standard, called Web Services Descriptor Language (WSDL), in conjunction with it. Therefore, in order to be proficient with SOAP, you should also be proficient with WSDL.

The most-levied criticism of SOAP is that it is overly complex. Indeed, SOAP is not simple. It is long and verbose, you need to know how namespaces work in XML, and SOAP can rely heavily on other standards. This is true for most implementations of SOAP, including Microsoft Live Search, which we will be looking at. The most common external specification used by a SOAP-based service is WSDL to describe its available services, and that, in turn, usually relies on XML Schema Data (XSD) to describe its data types. In order to "know" SOAP, it is extremely useful to have some knowledge of WSDL and XSD. This will allow you to figure out how to use the majority of SOAP services. We are going to take a "need to know" approach when looking at SOAP.
Microsoft Live Search's SOAP API uses WSDL and XSD, so we will take a look at SOAP with the other two in mind. We will limit our discussion to how to gather the information about the web service that you, as a web service consumer, need, and how to write SOAP requests against it using PHP 5. Even though this article will just introduce you to the core necessities of SOAP, there is a lot of information and detail. SOAP is very meticulous, and you have to keep track of a fair amount of things. Do not be discouraged; take notes if you have to, and be patient.

All three of SOAP, WSDL, and XSD are maintained by the W3C, and all three specifications are available for your perusal. The official SOAP specification is located at http://www.w3.org/TR/soap/. The WSDL specification is located at http://www.w3.org/TR/wsdl. Finally, the recommended XSD specification can be found at http://www.w3.org/XML/Schema.

Web Services Descriptor Language (WSDL) With XML Schema Data (XSD)

Out of all the drawbacks of XML-RPC and REST, one is prominent: both of these protocols rely heavily on good documentation by the service provider in order to be usable. Lacking this, you really do not know what operations are available to you, what parameters you need to pass in order to use them, and what you should expect to get back. Even worse, an XML-RPC or REST service may be poorly or inaccurately documented and give you inaccurate or unexpected results. SOAP addresses this by relying on another XML standard called WSDL to set the rules on which web service methods are available, how parameters should be passed, and what data type might be returned. A service's WSDL document is, basically, an XML version of the documentation. If a SOAP-based service is bound to a WSDL document, and most of them are, requests and responses must adhere to the rules set in the WSDL document, otherwise a fault will occur. WSDL is an acronym for a technical language.
When referring to a specific web service's WSDL document, people commonly refer to the document as "the WSDL", even though that is grammatically incorrect. Because the WSDL document is XML-based, clients can automatically discover everything about the functionality of the web service. Human-readable documentation is technically not required for a SOAP service that uses a WSDL document, though it is still highly recommended.

Let's take a look at the structure of a WSDL document and how we can use it to figure out what is available to us in a SOAP-based web service. Out of all three specifications that we're going to look at in relationship to SOAP, WSDL is the most ethereal. Both supporters and detractors often call writing WSDL documents a black art. As we go through this, I will stress the main points and just briefly note other uses or exceptions.

Basic WSDL Structure

Beginning with a root definitions element, WSDL documents follow this basic structure:

    <definitions>
        <types>
        ...
        </types>
        <message>
        ...
        </message>
        <portType>
        ...
        </portType>
        <binding>
        ...
        </binding>
    </definitions>

As you can see, in addition to the definitions element, there are four main sections to a WSDL document: types, message, portType, and binding. Let's take a look at these in further detail.

Google used to provide a SOAP service for their web search engine. However, this service is now deprecated, and no new developer API keys are given out. This is unfortunate because the service was simple enough to learn SOAP quickly, but complex enough to get a thorough exposure to SOAP. Luckily, the service itself is still working and the WSDL is still available. As we go through WSDL elements, we will look at the Google SOAP Search WSDL and Microsoft Live Search API WSDL documents for examples. These are available at http://api.google.com/GoogleSearch.wsdl and http://soap.search.msn.com/webservices.asmx?wsdl respectively.
definitions Element

This is the root element of a WSDL document. If the WSDL relies on other specifications, their namespace declarations would be made here. Let's take a look at Google's WSDL's definitions tag:

    <definitions name="GoogleSearch"
        targetNamespace="urn:GoogleSearch"
        >

The more common namespace prefixes you'll run across are xsd for the schema namespace, wsdl for the WSDL framework itself, and soap and soapenc for SOAP bindings. As these namespaces refer to W3C standards, you will run across them regardless of the web service implementation. Note that some documents use an equally common prefix, xs, for XML Schema. tns is another common namespace; it means "this namespace" and is a convention used to refer to the WSDL itself.

types Element

In a WSDL document, data types used by requests and responses need to be explicitly declared and defined. The textbook answer that you'll find is that the types element is where this is done. In theory, this is true. In practice, this is mostly true. The types element is used only for special data types. To achieve platform neutrality, WSDL defaults to, and most implementations use, XSD to describe its data types. In XSD, many basic data types are already included and do not need to be declared.

Common built-in XSD data types include: time, date, boolean, string, base64Binary, float, double, integer, and byte. For a complete list, see the recommendation on XSD data types at http://www.w3.org/TR/xmlschema-2/.

If the web service utilizes nothing more than these built-in data types, there is no need to have special data types, and thus, types will be empty; the data types will just be referred to later, when we define the parameters. There are three occasions where data types would be defined here. The first is if you want a special data type that is based on a built-in data type; most commonly this is a built-in type whose value is restricted in some way. These are known as simple types.
The second is if the data type is an object; this is known as a complex type in XSD and must be declared. The third is an array, which can be described as a hybrid of the former two.

Let's take a look at some examples of what we will encounter in the types element.

Simple Type

Sometimes, you need to restrict or refine a value of a built-in data type. For example, in a hospital's patient database, it would be ludicrous for a field called Age to be more than three digits long. To add such a restriction in the SOAP world, you would have to define Age here in the types section as a new type. Simple types must be based on an existing built-in type. They cannot have children or properties like complex types. Generally, a simple type is defined with the simpleType element, the name as an attribute, followed by the restriction or definition. If the simple type is a restriction, the built-in data type that it is based on is defined in the base attribute of the restriction element. For example, a restriction for an age can look like this:

    <xsd:simpleType name="Age">
        <xsd:restriction base="xsd:integer">
            <xsd:totalDigits value="3" />
        </xsd:restriction>
    </xsd:simpleType>

Children elements of restriction define what is acceptable for the value. totalDigits is used to restrict a value based on the character length. A list of common restrictions follows:

- enumeration: Specifies a list of acceptable values. Applicable to all types except boolean.
- fractionDigits: Defines the number of decimal places allowed. Applicable to integers.
- length: Defines the exact number of characters allowed. Applicable to strings and all binaries.
- maxExclusive/maxInclusive: Defines the maximum value allowed. If Exclusive is used, the value cannot be equal to the definition. If Inclusive, it can be equal to, but not greater than, this definition. Applicable to all numeric types and dates.
- minLength/maxLength: Defines the minimum and maximum number of characters or list items allowed. Applicable to strings and all binaries.
- minExclusive/minInclusive: Defines the minimum value allowed. If Exclusive is used, the value cannot be equal to the definition. If Inclusive, it can be equal to, but not less than, this definition. Applicable to all numeric types and dates.
- pattern: A regular expression defining the allowed values. Applicable to all types.
- totalDigits: Defines the maximum number of digits allowed. Applicable to integers.
- whiteSpace: Defines how tabs, spaces, and line breaks are handled. Can be preserve (no changes), replace (tabs and line breaks are converted to spaces), or collapse (multiple spaces, tabs, and line breaks are converted to one space). Applicable to strings and all binaries.

A practical example of a restriction can be found in the MSN Search Web Service WSDL. Look at the section that defines SafeSearchOptions:

    <xsd:simpleType name="SafeSearchOptions">
        <xsd:restriction base="xsd:string">
            <xsd:enumeration value="Moderate" />
            <xsd:enumeration value="Strict" />
            <xsd:enumeration value="Off" />
        </xsd:restriction>
    </xsd:simpleType>

In this example, the SafeSearchOptions data type is based on a string data type. Unlike a regular string, however, the value that SafeSearchOptions takes is restricted by the restriction element; in this case, the several enumeration elements that follow. SafeSearchOptions can only be what is given in this enumeration list. That is, SafeSearchOptions can only have a value of "Moderate", "Strict", or "Off".

Restrictions are not the only reason to use a simple type. There can also be two other elements in place of restrictions. The first is a list. If an element is a list, it means that the value passed to it is a list of space-separated values. A list is defined with the list element followed by an attribute named itemType, which defines the allowed data type. For example, the following snippet specifies a simple type named listOfValues, whose value is a list of integers.
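To make the restriction facets above concrete, here is a small plain-JavaScript sketch that checks a value against a simplified subset of them (enumeration, totalDigits, and pattern). This is only an illustration of the semantics, not part of any SOAP library; validate is a made-up helper name:

```javascript
// Check a value against a simplified subset of XSD restriction facets:
//   enumeration - list of allowed values
//   totalDigits - maximum number of digits
//   pattern     - regular expression the whole value must match
function validate(value, facets) {
  if (facets.enumeration && facets.enumeration.indexOf(value) === -1) {
    return false;
  }
  if (facets.totalDigits !== undefined) {
    var digits = String(value).replace(/[^0-9]/g, "").length;
    if (digits > facets.totalDigits) return false;
  }
  if (facets.pattern && !new RegExp("^" + facets.pattern + "$").test(String(value))) {
    return false;
  }
  return true;
}

// The SafeSearchOptions enumeration from the MSN Search WSDL:
var safeSearch = { enumeration: ["Moderate", "Strict", "Off"] };
console.log(validate("Strict", safeSearch)); // true
console.log(validate("Medium", safeSearch)); // false

// The Age restriction (totalDigits = 3):
var age = { totalDigits: 3 };
console.log(validate(102, age));  // true
console.log(validate(1024, age)); // false
```

A real SOAP stack performs exactly this kind of check against the WSDL's types section before accepting a request or response.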
    <xsd:simpleType name="listOfValues">
        <xsd:list itemType="xsd:integer" />
    </xsd:simpleType>

The second is a union. Unions are basically a combination of two or more restrictions. This gives you a greater ability to fine-tune the allowed value. Back to our age example: if our service was for a hospital's pediatrics ward that admits only those under 18 years old, we can restrict the value with a union.

    <xsd:simpleType name="Age">
        <xsd:union>
            <xsd:simpleType>
                <xsd:restriction base="decimal">
                    <xsd:minInclusive value="0" />
                </xsd:restriction>
            </xsd:simpleType>
            <xsd:simpleType>
                <xsd:restriction base="decimal">
                    <xsd:maxExclusive value="18" />
                </xsd:restriction>
            </xsd:simpleType>
        </xsd:union>
    </xsd:simpleType>

Finally, it is important to note that while simple types are, especially in the case of WSDLs, used mainly in the definition of elements, they can be used anywhere that requires the definition of a value. For example, you may sometimes see an attribute being defined and a simple type structure being used to restrict its value.

Complex Type

Generically, a complex type is anything that can have multiple elements or attributes. This is opposed to a simple type, which can have only one element. A complex type is represented by the complexType element in the WSDL. The most common use for complex types is as a carrier for objects in SOAP transactions. In other words, to pass an object to a SOAP service, it needs to be serialized into an XSD complex type in the message. The purpose of a complexType element is to explicitly define what other data types make up the complex type.
Let's take a look at a piece of Google's WSDL for an example:

    <xsd:complexType name="ResultElement">
        <xsd:all>
            <xsd:element name="summary" type="xsd:string"/>
            <xsd:element name="URL" type="xsd:string"/>
            <xsd:element name="snippet" type="xsd:string"/>
            <xsd:element name="title" type="xsd:string"/>
            <xsd:element name="cachedSize" type="xsd:string"/>
            <xsd:element name="relatedInformationPresent" type="xsd:boolean"/>
            <xsd:element name="hostName" type="xsd:string"/>
            <xsd:element name="directoryCategory" type="typens:DirectoryCategory"/>
            <xsd:element name="directoryTitle" type="xsd:string"/>
        </xsd:all>
    </xsd:complexType>

The first thing to notice is how the xsd: namespace is used throughout types. This denotes that these elements and attributes are part of the XSD specification. In this example, a data type called ResultElement is defined. We don't exactly know what it is used for right now, but we know that it exists. An element tag denotes the complex type's equivalent of an object property. The first property is summary, and the type attribute tells us that it is a string, as are most properties of ResultElement. One exception is relatedInformationPresent, which is a Boolean. Another exception is directoryCategory, which has a data type of DirectoryCategory. The namespace used in that type attribute is typens. This tells us that it is not an XSD data type. To find out what it is, we'll have to look for the namespace declaration that declared typens.
A Guide to Understanding Core Data iOS

Packt
21 Apr 2011
13 min read
Core Data iOS Essentials

A fast-paced, example-driven guide to data-driven iPhone, iPad, and iPod Touch applications

Core Data

Core Data is Apple's persistence framework, which is used to persist, that is, store, our application's data in a persistent store, which may be memory or a flat file database. It helps us represent our data model in terms of an object graph, establish relationships among objects, and store object graphs on disk. It also allows us to use the entities of our data model in the form of objects; that is, it maps our data into a form that can be easily stored in a database, such as SQLite, or in a flat file. Core Data also reduces a lot of coding. When we use Xcode's templates for Core Data applications, we automatically get the boilerplate code that performs several complex tasks, such as generating XML files, binary files, and SQLite files, without our writing a single line of code, allowing us to focus on the business logic of our application. Besides this, Core Data provides several features that are required in data manipulation, including filtering data, querying data, sorting data, establishing relationships with other data, and persisting data in different repositories.

Core Data features

The Core Data framework provides many features, including the following:

Supports migrating and versioning: We can modify our data model, that is, the entities of the application, whenever desired. Core Data will replace the older persistent store with the revised data model.

Supports Key-Value Coding (KVC): KVC is used to store and retrieve data from the managed objects. Core Data provides the methods required for setting and retrieving attribute values from the managed object. We will be using this feature in our application to display the information of customers and the products sold to them through the table view.
Tracks modifications: Core Data keeps track of the modifications performed on managed objects, thus allowing us to undo any changes if required. We will be using this feature in our application while modifying the information of a customer or product, to know what the earlier value was and what the newly entered value is.

Supports lazy loading: Faulting occurs when not all the property values of a managed object have been loaded from the data store and the application accesses one of those property values; the data is then retrieved from the store automatically.

Efficient database retrievals: Core Data queries are optimized for this, though the execution of a query depends on the data store.

Multi-threading: Core Data supports multi-threading in an application; that is, more than one thread can be executed in parallel to increase performance. Some tasks can even be performed in the background using a separate thread.

Inverse relationships: Core Data maintains inverse relationships for consistency. If we add an object to a relationship, Core Data will automatically add the correct object to the inverse relationship. Also, if we remove an object from a relationship, Core Data will automatically remove it from the inverse relationship. In our application, we will be using an inverse relationship between the Customer and Product entities, so that if a customer is deleted, the information of all the products purchased by him/her is automatically deleted as well.

External data repositories: Core Data supports storing objects in external data repositories in different formats.

Data Model

Core Data describes the data in terms of a data model. A data model is used to define the structure of the data in terms of entities, properties, and their relationships.
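Key-Value Coding, mentioned in the feature list above, is an Objective-C mechanism (valueForKey:/setValue:forKey:) for accessing attributes by name rather than through hard-coded accessors. As a language-neutral illustration of the idea only, here is a small JavaScript sketch; the ManagedObject class here is made up for illustration and is not Core Data API:

```javascript
// A toy "managed object" that stores attributes in a plain map and
// exposes them KVC-style: by attribute name instead of dedicated accessors.
class ManagedObject {
  constructor() {
    this.attributes = {};
  }
  setValueForKey(value, key) {
    this.attributes[key] = value;
  }
  valueForKey(key) {
    return this.attributes[key];
  }
}

var customer = new ManagedObject();
customer.setValueForKey("John Smith", "name");
customer.setValueForKey("john@example.com", "emailID");

// A table view can now display any attribute generically, given only its name:
console.log(customer.valueForKey("name"));    // "John Smith"
console.log(customer.valueForKey("emailID")); // "john@example.com"
```

This name-based access is what lets a table view data source bind rows to attributes without knowing the entity's concrete class.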
Entities

Because Core Data maintains data in terms of objects, an entity is an individual data object that represents complete information about a person, item, object, and so on. For example, customer is an entity that represents information about customers, such as name, address, e-mail ID, contact number, products purchased, date of purchase, and so on. Similarly, product is an entity that represents the information of a product, such as the name of the product, price, weight, and so on. An entity consists of properties, which are a combination of attributes and relationships. An entity in Xcode's Data Model Editor may appear as shown in the following screenshot:

Properties

Properties of an entity give detailed information about it, such as what its attributes are and how it is related to other entities. A property of an entity refers to its attributes and relationships. Attributes are scalar values, and relationships are pointers to, or collections of, other entities at the object level. A property is represented by a name and a type.

Attributes

Attributes are the variables within an object (entity). In fact, a collection of attributes makes an entity. In database language, they are known as the columns of the table. For example, the customer entity may consist of attributes such as name, address, contact number, items purchased, and so on. Similarly, the attributes in the products table may be item code, item name, quantity, and so on. While creating the attributes of an entity, we have to specify the name and the data type to declare the kind of information (integer, float, string, and so on) that will be stored in the attribute. We can also define constraints on the information that can be stored in the column. For example, we can specify the maximum and minimum value (range) that can be stored in that attribute, or whether the attribute can or cannot store certain special symbols, and so on. We can also specify the default value of an attribute.
Relationships

Besides attributes, an entity may also contain relationships, which define how it is related to other entities. The attributes and relationships of an entity are collectively known as its properties. Relationships come in several types (To-One, To-Many, and Many-to-Many) and play a major role in defining the connections among entities and in determining the impact that inserting or deleting a row in one entity has on connected entities. Examples of relationship types:

The relationship from a child entity to a parent entity is a To-One relationship, as a child can have only one parent.

The relationship from a customer to a product entity is a To-Many relationship, as a customer can purchase several products.

The relationship from an employee to a project entity is of the Many-to-Many type, as several employees can work on one project and an employee can work on several projects simultaneously.

To define a many-to-many relationship in Core Data, we have to use two To-Many relationships: the first is set from the first entity to the second entity, and the second is set from the second entity to the first entity. In Xcode's Data Model Editor, the To-Many relationship from Customer to Product is represented by a line pointing from the Customer entity to the Product entity, ending in a double arrow (designating a To-Many relationship) as shown in the subsequent screenshot, whereas a To-One relationship is represented by a line with a single arrow. When defining relationships in Core Data we may use inverse relationships, though this is optional.

Inverse relationship

In Core Data, every relationship can have an inverse relationship. For example, if there is a relationship from Customer to Product, there will be a relationship from Product to Customer too. A relationship does not need to be of the same kind as its inverse; for example, a To-One relationship can have an inverse relationship of type To-Many.
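The idea of keeping both sides of a relationship consistent can be illustrated with a small sketch. The following Python snippet is purely illustrative (the class and function names are hypothetical, and this is not the Core Data API or its delete-rule machinery); it only shows how maintaining the inverse lets a delete find and detach the related objects:

```python
class Customer:
    """One side of a To-Many relationship: customer -> products."""
    def __init__(self, name):
        self.name = name
        self.products = []          # To-Many side

class Product:
    """Holds the To-One inverse back to its customer."""
    def __init__(self, name):
        self.name = name
        self.customer = None        # To-One inverse side

def add_product(customer, product):
    # Maintain both sides of the relationship together, the way
    # Core Data maintains an inverse relationship automatically.
    customer.products.append(product)
    product.customer = customer

def delete_customer(customer):
    # The inverse relationship lets us find every related product
    # and detach it when the customer goes away.
    for product in customer.products:
        product.customer = None
    customer.products.clear()
```

Without the inverse pointer on Product, finding the affected products at delete time would require scanning every product, which is exactly the bookkeeping Core Data spares us.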
Although relationships are not required to have an inverse, Apple generally recommends that you always create and specify the inverse (even if you won't need it), as it helps Core Data ensure data integrity. For example, consider a situation where a Customer entity has a To-Many relationship to a Product entity and some information of a customer is changed, or a customer's row is deleted. The inverse relationship makes it easier for Core Data to ensure consistency: through it, Core Data can automatically find the products related to the deleted customer and delete them too. Before we go further, let us have a quick look at the architecture used in iPhone application development: MVC.

Model View Controller (MVC)

iPhone application development uses the MVC architecture, where M stands for Model, V stands for View, and C for Controller.

Model represents the backend data (the data model).

View represents the user interface elements through which the user looks at the content displayed by the application and interacts with it.

Controller represents the application logic that decides which view to display on the basis of the actions taken by the user.

Core Data organizes the data model in terms of objects that are easy to handle and manipulate, and the finalized objects are stored in persistent storage. The usual way of representing data models is through classes that contain variables and accessor methods. We don't have to create these classes by hand for our data models, as the Core Data framework provides a special Data Model Design tool (also known as the Data Model Editor) for quickly creating an entity relationship model. The terms that we will be using frequently from now on are Managed Object Model, Managed Objects, and Managed Object Context. Let us see what these terms mean:

Managed Object Model: The data model created by the Data Model Design tool (Data Model Editor) is also known as the Managed Object Model.
Managed Objects: Managed objects are instances of the NSManagedObject class (or a subclass of it) that represent instances of an entity maintained (managed) by the Core Data framework. In a managed object model, an entity is defined by an entity name and the name of the class used at runtime to represent it. The NSManagedObject class implements all of the functionality required by a managed object. A managed object is associated with an entity description (an instance of NSEntityDescription) that describes the object: the name of the entity, its attributes, relationships, and so on. In other words, an NSEntityDescription object may consist of NSAttributeDescription and NSRelationshipDescription objects that represent the properties of the entity. At runtime, the managed object is associated with a managed object context.

Managed Object Context: Objects fetched from persistent storage are placed in a managed object context. The context performs validations and keeps track of the changes made to an object's attributes so that undo and redo operations can be applied if required. In a given context, a managed object provides a representation of a record in a persistent store. Depending on the situation, there may be multiple contexts, each containing a separate managed object representing that record. All managed objects are registered with a managed object context. For an application, we need the information represented by a managed object (an instance of an entity) to be stored on disk (the persistent store) via the managed object context. To understand the concept of the managed object context and its relation to data persistence, we need to understand the components of the Core Data API, so let us go ahead and look at what the Core Data API is all about.
Core Data API

The Core Data API, also called the stack, consists of three main components:

NSPersistentStoreCoordinator
NSManagedObjectModel
NSManagedObjectContext

The persistent store coordinator plays a major role in storing and retrieving managed objects from the persistent store via the managed object context. We can see in the following figure how the three are related. The managed object model (an instance of the NSManagedObjectModel class) is created from the data model of our application; if there is more than one data model in our application, the managed object model is created by merging all of the data models found in the application bundle. A managed object (an instance of the NSManagedObject class or a subclass of it) represents an instance of an entity that is maintained (managed) by the Core Data framework. A managed object is an instance of an Objective-C class, but it differs from other objects in three main ways:

A managed object must be an instance of NSManagedObject or of a class that inherits from NSManagedObject.

The state of a managed object is maintained by its managed object context.

A managed object has an associated entity description that describes the properties of the object.

To work with a managed object, it is loaded into memory, and the managed object context maintains its state after it is loaded. The managed object context tracks in-memory changes that have yet to be persisted to the data store: any changes made to the state of an NSManagedObject affect only the state of the object in memory, not the persistent representation of that object in the data store. When we want to commit the modifications made to managed objects, we save the managed object context to the persistent store. In order to deal with the persistent store, the managed object context needs a reference to a persistent store coordinator.
In other words, a pointer to the persistent store coordinator is required for creating a managed object context. Remember, the persistent store coordinator is the essential middle layer in the stack that helps in storing and retrieving the managed object model from the persistent store. The managed object context plays a major role in the life cycle of managed objects: it handles all aspects of a managed object from faulting to validation, including undo/redo. To modify managed objects, they are fetched from a persistent store through the managed object context, and the modified managed objects are committed to the persistent store through the context as well. The managed objects represent data held in a persistent store. Faulting occurs for an object whose property values have not yet been loaded from the external data store. To access the objects (entities) in a managed object context, a fetch request, an instance of the NSFetchRequest class, is used. To define the entity to be retrieved via NSFetchRequest, we pass the appropriate NSEntityDescription to the NSFetchRequest. The result, that is, the set of entities retrieved from the managed object context on the basis of the fetch request, is managed by a fetched results controller, an instance of NSFetchedResultsController. In fact, the fetch request is passed to the fetched results controller along with a reference to the managed object context. Once the NSFetchedResultsController class has been initialized, we can perform a fetch operation to load the entities into memory.
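The faulting behaviour mentioned above can be illustrated with a toy sketch. The following Python snippet is not the Core Data API (the class name, the FAULT marker, and the dictionary standing in for the persistent store are all hypothetical); it only shows the idea that a property value is fetched from the backing store the first time it is accessed:

```python
class Fault:
    """Marker for a property value not yet loaded from the store."""
    pass

FAULT = Fault()

class ManagedObjectSketch:
    """Toy model of faulting: attribute access triggers a load from a
    backing store the first time the value is needed. Illustrative
    only, not how NSManagedObject is implemented."""

    def __init__(self, store, key):
        self._store = store    # dict standing in for the persistent store
        self._key = key
        self._values = {name: FAULT for name in store[key]}

    def __getattr__(self, name):
        # Called only when normal attribute lookup fails.
        if name.startswith('_') or name not in self._values:
            raise AttributeError(name)
        if self._values[name] is FAULT:
            # The fault fires: fetch the value from the store and cache it.
            self._values[name] = self._store[self._key][name]
        return self._values[name]

# A customer record whose properties are loaded lazily on first access.
store = {'customer:1': {'name': 'Alice', 'email': 'alice@example.com'}}
c = ManagedObjectSketch(store, 'customer:1')
print(c.name)   # triggers the fault, prints: Alice
```

The second access to the same property returns the cached value without touching the store, which is the performance benefit faulting provides.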
The managed object context keeps track of all the changes made to a managed object since the last time it was loaded into memory, and hence helps in undoing any changes made to it if required. The persistent store coordinator also helps avoid redundancy when multiple calls are made by different classes on the same file at the same time: such calls are serialized by the NSPersistentStoreCoordinator class. Let us now get a detailed understanding of the terms used above.

Apache Solr: Analyzing your Text Data

Packt
22 Jul 2011
13 min read
Apache Solr 3.1 Cookbook

Introduction

A type's behavior can be defined in the context of the indexing process, the context of the query process, or both. Furthermore, a type definition is composed of tokenizers and filters (both token filters and character filters). The tokenizer specifies how your data will be preprocessed after it is sent to the appropriate field; the analyzer operates on all the data that is sent to the field. Types can have only one tokenizer. The result of the tokenizer's work is a stream of objects called tokens. Next in the analysis chain are the filters, which operate on the tokens in the token stream. They can do anything with the tokens: changing them, removing them, or, for example, making them lowercase. Types can have multiple filters. One additional type of filter is the character filter. Character filters do not operate on tokens from the token stream; they operate on the data that is sent to the field, and they are invoked before the data is sent to the tokenizer. This article will focus on data analysis and how to handle common day-to-day analysis questions and problems.

Storing additional information using payloads

Imagine that you have a powerful preprocessing tool that can extract information about all the words in a text. Your boss would like you to use it with Solr, or at least store the information it returns in Solr. So what can you do? We can use something called a payload to store that data. This recipe will show you how to do it.

How to do it...

I assume that we already have an application that takes care of recognizing the parts of speech in our text data. Now we need to add that information to the Solr index. To do that we will use payloads, metadata that can be stored with each occurrence of a term. First of all, you need to modify the index structure.
For this, we will add the new field type to the schema.xml file:

<fieldtype name="partofspeech" class="solr.TextField">
 <analyzer>
  <tokenizer class="solr.WhitespaceTokenizerFactory"/>
  <filter class="solr.DelimitedPayloadTokenFilterFactory" encoder="integer" delimiter="|"/>
 </analyzer>
</fieldtype>

Now add the field definition part to the schema.xml file:

<field name="id" type="string" indexed="true" stored="true" required="true" />
<field name="text" type="text" indexed="true" stored="true" />
<field name="speech" type="partofspeech" indexed="true" stored="true" multivalued="true" />

Now let's look at what the example data looks like (I named it ch3_payload.xml):

<add>
 <doc>
  <field name="id">1</field>
  <field name="text">ugly human</field>
  <field name="speech">ugly|3 human|6</field>
 </doc>
 <doc>
  <field name="id">2</field>
  <field name="text">big book example</field>
  <field name="speech">big|3 book|6 example|1</field>
 </doc>
</add>

Let's index our data. To do that, we run the following command from the exampledocs directory (put the ch3_payload.xml file there):

java -jar post.jar ch3_payload.xml

How it works...

What information can a payload hold? It may hold any information that is compatible with the encoder type you define for the solr.DelimitedPayloadTokenFilterFactory filter. In our case, we don't need to write our own encoder; we will use the supplied one to store integers, and we will use it to store the boost of the term. For example, nouns will be given a token boost value of 6, while adjectives will be given a boost value of 3. First we have the type definition. We defined a new type in the schema.xml file, named partofspeech, based on the Solr text field (attribute class="solr.TextField"). Our tokenizer splits the given text on whitespace characters. Then we have a new filter which handles our payloads. The filter defines an encoder, which in our case is an integer (attribute encoder="integer").
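As an aside, the combined effect of the whitespace tokenizer and the payload filter on a value such as big|3 book|6 example|1 can be sketched in a few lines of Python. This is purely illustrative, not Solr's implementation:

```python
def tokenize_with_payloads(text, delimiter='|'):
    """Split the field value on whitespace, then split each token into a
    (term, payload) pair on the delimiter, decoding the payload as an
    integer. Roughly what WhitespaceTokenizerFactory followed by
    DelimitedPayloadTokenFilterFactory (encoder="integer") produces."""
    pairs = []
    for token in text.split():
        if delimiter in token:
            term, payload = token.rsplit(delimiter, 1)
            pairs.append((term, int(payload)))
        else:
            pairs.append((token, None))   # token carries no payload
    return pairs

print(tokenize_with_payloads('big|3 book|6 example|1'))
# [('big', 3), ('book', 6), ('example', 1)]
```

In real Solr, the decoded payload bytes are attached to each term position in the index rather than returned as pairs, but the splitting logic is the same.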
Furthermore, it defines a delimiter which separates the term from the payload; in our case, the separator is the pipe character |. Next we have the field definitions. In our example, we define only three fields:

Identifier
Text
Recognized speech part with payload

Now let's take a look at the example data. We have two simple fields: id and text. The one that we are interested in is the speech field. Look at how it is defined: it contains pairs made up of a term, a delimiter, and a boost value, for example, book|6. In the example, I decided to boost the nouns with a boost value of 6 and the adjectives with a boost value of 3. I also decided that words that cannot be identified by my part-of-speech application will be given a boost of 1. The pairs are separated with a space character, which in our case is used to split them; this is the task of the tokenizer we defined earlier. To index the documents, we use the simple post tools provided with the example deployment of Solr. To use them, we invoke the command shown in the example. The post tools will send the data to the default update handler found under the address http://localhost:8983/solr/update; the parameter that follows is the file that is going to be sent to Solr. You can also post a list of files, not just a single one. That is how you index payloads in Solr. In the 1.4.1 version of Solr, there is no further support for payloads (hopefully this will change), so for now you need to write your own query parser and similarity class (or extend the ones present in Solr) to use them.

Eliminating XML and HTML tags from the text

There are many real-life situations when you have to clean your data. Let's assume that you want to index web pages that your client sends you. You don't know anything about the structure of the pages; the one thing you know is that you must provide a search mechanism that will enable searching through their content.
Of course, you could index the whole page by splitting it on whitespace, but then you would probably hear the clients complain about the HTML tags being searchable, and so on. So, before we enable searching on the contents of the pages, we need to clean the data; in this example, we need to remove the HTML tags. This recipe will show you how to do it with Solr.

How to do it...

Let's suppose our data looks like this (the ch3_html.xml file):

<add>
 <doc>
  <field name="id">1</field>
  <field name="html"><![CDATA[<html><head><title>My page</title></head><body><p>This is a <b>my</b><i>sample</i> page</body></html>]]></field>
 </doc>
</add>

Now let's take care of the schema.xml file. First add the type definition:

<fieldType name="html_strip" class="solr.TextField">
 <analyzer>
  <charFilter class="solr.HTMLStripCharFilterFactory"/>
  <tokenizer class="solr.WhitespaceTokenizerFactory"/>
  <filter class="solr.LowerCaseFilterFactory"/>
 </analyzer>
</fieldType>

And now, add the following to the field definition part of the schema.xml file:

<field name="id" type="string" indexed="true" stored="true" required="true" />
<field name="html" type="html_strip" indexed="true" stored="false"/>

Let's index our data. To do that, we run the following command from the exampledocs directory (put the ch3_html.xml file there):

java -jar post.jar ch3_html.xml

If there were no errors, you should see a response like this:

SimplePostTool: version 1.2
SimplePostTool: WARNING: Make sure your XML documents are encoded in UTF-8, other encodings are not currently supported
SimplePostTool: POSTing files to http://localhost:8983/solr/update..
SimplePostTool: POSTing file ch3_html.xml
SimplePostTool: COMMITting Solr index changes..

How it works...

First of all, we have the example data. In it, we see one document with two fields: the identifier and some HTML data nested in a CDATA section.
You must remember to surround the HTML data in CDATA tags if it is a full page starting with HTML tags, as in our example; otherwise Solr will have problems parsing the data. However, if you only have some tags present in the data, you shouldn't worry. Next, we have the html_strip type definition. It is based on solr.TextField to enable full-text searching. Following that, we have a character filter which handles the stripping of HTML and XML tags. This is something new in Solr 1.4: character filters are invoked before the data is sent to the tokenizer, so they operate on untokenized data. In our case, the character filter strips the HTML and XML tags, attributes, and so on, and then sends the data to the tokenizer, which splits the data on whitespace characters. The one and only filter defined in our type makes the tokens lowercase to simplify searching. To index the documents, we use the simple post tools provided with the example deployment of Solr. To use them, we invoke the command shown in the example. The post tools will send the data to the default update handler found under the address http://localhost:8983/solr/update. The parameter of the command is the file that is going to be sent to Solr; you can also post a list of files, not just a single one. As you can see, the sample response from the post tools is rather informative: it provides information about the update handler address, the files that were sent, and the commits being performed. If you want to check how your data was indexed, remember not to be misled when you choose to store the field contents (attribute stored="true"): the stored value is the original one sent to Solr, so you won't be able to see the filters in action. If you wish to check the actual data structures, please take a look at the Luke utility (a utility that lets you see the index structure and field values, and operate on the index).
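The analysis chain just described, a character filter stripping tags, a whitespace tokenizer, and a lowercase filter, can be approximated with Python's standard library. This is a rough stand-in for the Solr classes, not their actual implementation (in particular, joining text chunks with spaces is cruder than what HTMLStripCharFilterFactory does):

```python
from html.parser import HTMLParser

class TagStripper(HTMLParser):
    """Collect only text content, dropping tags and attributes:
    a rough stand-in for solr.HTMLStripCharFilterFactory."""
    def __init__(self):
        super().__init__()
        self.chunks = []

    def handle_data(self, data):
        self.chunks.append(data)

def analyze(html):
    """Char filter (strip tags) -> whitespace tokenizer -> lowercase filter."""
    stripper = TagStripper()
    stripper.feed(html)
    text = ' '.join(stripper.chunks)          # strip tags, keep text
    return [token.lower() for token in text.split()]  # tokenize + lowercase

print(analyze('<p>This is a <b>my</b> <i>sample</i> page</p>'))
# ['this', 'is', 'a', 'my', 'sample', 'page']
```

Running the example data through this sketch shows why the clients stop complaining: only the visible words survive, and no tag or attribute is searchable.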
Luke can be found at the following address: http://code.google.com/p/luke

Solr also provides a tool that lets you see how your data is analyzed; that tool is a part of the Solr administration pages.

Copying the contents of one field to another

Imagine that you have many big XML files that hold information about the books stored on library shelves. There is not much data: just a unique identifier, the name of the book, and the name of the author. One day your boss comes to you and says: "Hey, we want to facet and sort on the basis of the book author". You could change your XML and add two fields, but why do that when Solr can do it for you? Well, Solr won't modify your data, but it can copy the data from one field to another. This recipe will show you how to do that.

How to do it...

Let's assume that our data looks like this:

<add>
 <doc>
  <field name="id">1</field>
  <field name="name">Solr Cookbook</field>
  <field name="author">John Kowalsky</field>
 </doc>
 <doc>
  <field name="id">2</field>
  <field name="name">Some other book</field>
  <field name="author">Jane Kowalsky</field>
 </doc>
</add>

We want the contents of the author field to be present in the fields named author, author_facet, and author_sort. So let's define the copy fields in the schema.xml file (place the following right after the fields section):

<copyField source="author" dest="author_facet"/>
<copyField source="author" dest="author_sort"/>

And that's all; Solr will take care of the rest. The field definition part of the schema.xml file could look like this:

<field name="id" type="string" indexed="true" stored="true" required="true"/>
<field name="author" type="text" indexed="true" stored="true" multiValued="true"/>
<field name="name" type="text" indexed="true" stored="true"/>
<field name="author_facet" type="string" indexed="true" stored="false"/>
<field name="author_sort" type="alphaOnlySort" indexed="true" stored="false"/>

Let's index our data.
To do that, we run the following command from the exampledocs directory (put the data.xml file there):

java -jar post.jar data.xml

How it works...

As you can see in the example, we have only three fields defined in our sample data XML file. There are two fields which we are not particularly interested in: id and name. The field that interests us the most is the author field. As mentioned earlier, we want to place the contents of that field in three fields:

author (the actual field that will be holding the data)
author_sort
author_facet

To do that we use copy fields. Those instructions are defined in the schema.xml file, right after the field definitions. To define a copy field, we need to specify a source field (attribute source) and a destination field (attribute dest). After definitions like those in the example, Solr will copy the contents of the source fields to the destination fields during the indexing process. There is one thing that you have to be aware of: the content is copied before the analysis process takes place, which means the data is copied as it was sent to the source field.

There's more...

There are a few things worth noting when talking about copying the contents of one field to another.

Copying the contents of dynamic fields to one field

You can also copy the content of multiple fields to one field. To do that, you define a copy field like this:

<copyField source="*_author" dest="authors"/>

A definition like the one above copies all of the fields that end with _author to one field named authors. Remember that if you copy multiple fields to one field, the destination field should be defined as multivalued.

Limiting the number of characters copied

There may be situations where you only need to copy a defined number of characters from one field to another. To do that we add the maxChars attribute to the copy field definition.
It can look like this:

<copyField source="author" dest="author_facet" maxChars="200"/>

The above definition tells Solr to copy up to 200 characters from the author field to the author_facet field. This attribute can be very useful when copying the content of multiple fields to one field.
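The copy-before-analysis semantics, including the maxChars truncation, can be sketched like this. The Python below is illustrative only (copyField is configured in schema.xml, not in code; the function name and document dictionary are hypothetical):

```python
def copy_field(doc, source, dest, max_chars=None):
    """Copy the raw (pre-analysis) source value into the multivalued
    dest field, optionally truncated to max_chars characters,
    mirroring the behavior of <copyField ... maxChars="..."/>."""
    value = doc.get(source, '')
    if max_chars is not None:
        value = value[:max_chars]       # truncate before copying
    doc.setdefault(dest, []).append(value)  # dest behaves as multivalued

doc = {'id': '1', 'author': 'John Kowalsky'}
copy_field(doc, 'author', 'author_facet')
copy_field(doc, 'author', 'author_sort', max_chars=4)
print(doc['author_facet'], doc['author_sort'])
# ['John Kowalsky'] ['John']
```

Note that each destination field then goes through its own analysis chain (string, alphaOnlySort, and so on), which is the whole point of the recipe: one raw value, three differently analyzed fields.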


Working with Gradle

Packt
11 Aug 2015
18 min read
In this article by Mainak Mitra, author of the book Mastering Gradle, we cover plugins such as War and Scala, which are helpful for building web applications and Scala applications. Additionally, we will discuss diverse topics such as property management, multi-project builds, and logging. In the multi-project build section, we will discuss how Gradle supports multi-project builds through the root project's build file. It also provides the flexibility of treating each module as a separate project, as well as all the modules together as a single project.

The War plugin

The War plugin is used to build web projects, and like any other plugin, it can be added to the build file by adding the following line:

apply plugin: 'war'

The War plugin extends the Java plugin and helps to create war archives. It automatically applies the Java plugin to the build file, and during the build process the plugin creates a war file instead of a jar file: it disables the jar task of the Java plugin and adds a default war archive task. By default, the content of the war file will be the compiled classes from src/main/java, the content from src/main/webapp, and all the runtime dependencies. The content can be customized using the war closure as well. In our example, we have created a simple servlet file to display the current date and time, a web.xml file, and a build.gradle file. The project structure is displayed in the following screenshot:

Figure 6.1

The SimpleWebApp/build.gradle file has the following content:

apply plugin: 'war'

repositories {
  mavenCentral()
}

dependencies {
  providedCompile "javax.servlet:servlet-api:2.5"
  compile("commons-io:commons-io:2.4")
  compile 'javax.inject:javax.inject:1'
}

The War plugin adds the providedCompile and providedRuntime dependency configurations on top of the Java plugin.
The providedCompile and providedRuntime configurations have the same scope as compile and runtime respectively; the only difference is that the libraries defined in these configurations are not part of the war archive. In our example, we have defined servlet-api as a providedCompile dependency, so this library is not included in the WEB-INF/lib/ folder of the war file. This is because the library is provided by the servlet container, such as Tomcat, and is added by the container when we deploy the application in it. You can confirm this by listing the contents of the war file as follows:

SimpleWebApp$ jar -tvf build/libs/SimpleWebApp.war
     0 Mon Mar 16 17:56:04 IST 2015 META-INF/
    25 Mon Mar 16 17:56:04 IST 2015 META-INF/MANIFEST.MF
     0 Mon Mar 16 17:56:04 IST 2015 WEB-INF/
     0 Mon Mar 16 17:56:04 IST 2015 WEB-INF/classes/
     0 Mon Mar 16 17:56:04 IST 2015 WEB-INF/classes/ch6/
  1148 Mon Mar 16 17:56:04 IST 2015 WEB-INF/classes/ch6/DateTimeServlet.class
     0 Mon Mar 16 17:56:04 IST 2015 WEB-INF/lib/
185140 Mon Mar 16 12:32:50 IST 2015 WEB-INF/lib/commons-io-2.4.jar
  2497 Mon Mar 16 13:49:32 IST 2015 WEB-INF/lib/javax.inject-1.jar
   578 Mon Mar 16 16:45:16 IST 2015 WEB-INF/web.xml

Sometimes, we might need to customize the project's structure as well. For example, the webapp folder could be under the root project folder, not in the src folder. The webapp folder can also contain new folders such as conf and resource to store properties files, JavaScript files, images, and other assets. We might also want to rename the webapp folder to WebContent. The proposed directory structure might look like this:

Figure 6.2

We might also be interested in creating a war file with a custom name and version. Additionally, we might not want to copy any empty folders such as images or js to the war file. To implement these new changes, add the additional properties to the build.gradle file as described here.
The webAppDirName property sets the new webapp folder location to the WebContent folder. The war closure defines properties such as version and name, and sets the includeEmptyDirs option to false. By default, includeEmptyDirs is set to true, which means any empty folder in the webapp directory is copied to the war file; by setting it to false, empty folders such as images and js are not copied to the war file. The following are the contents of CustomWebApp/build.gradle:

apply plugin: 'war'

repositories {
  mavenCentral()
}

dependencies {
  providedCompile "javax.servlet:servlet-api:2.5"
  compile("commons-io:commons-io:2.4")
  compile 'javax.inject:javax.inject:1'
}

webAppDirName = "WebContent"

war {
  baseName = "simpleapp"
  version = "1.0"
  extension = "war"
  includeEmptyDirs = false
}

After the build is successful, the war file will be created as simpleapp-1.0.war. Execute the command jar -tvf build/libs/simpleapp-1.0.war and verify the content of the war file: you will find that the conf folder is added to the war file, whereas the images and js folders are not included. You might also find the Jetty plugin interesting for web application deployment; it enables you to deploy the web application in an embedded container, and it automatically applies the War plugin to the project. The Jetty plugin defines three tasks: jettyRun, jettyRunWar, and jettyStop. The jettyRun task runs the web application in an embedded Jetty web container, the jettyRunWar task builds the war file and then runs it in the embedded web container, and the jettyStop task stops the container instance. For more information, please refer to the Gradle documentation at https://docs.gradle.org/current/userguide/war_plugin.html.

The Scala plugin

The Scala plugin helps you to build Scala applications.
Like any other plugin, the Scala plugin can be applied to the build file by adding the following line:

apply plugin: 'scala'

The Scala plugin also extends the Java plugin and adds a few more tasks, such as compileScala, compileTestScala, and scaladoc, to work with Scala files. The task names are pretty much all named after their Java equivalents, simply replacing the java part with scala. The Scala project's directory structure is also similar to a Java project's, where production code is typically written under the src/main/scala directory and test code is kept under the src/test/scala directory. Figure 6.3 shows the directory structure of a Scala project. You can also observe from the directory structure that a Scala project can contain a mix of Java and Scala source files. The HelloScala.scala file has the following content; the output is Hello, Scala... on the console. This is very basic code, and we will not be able to discuss the Scala programming language in much detail here; we request readers to refer to the Scala language documentation available at http://www.scala-lang.org/.

package ch6

object HelloScala {
  def main(args: Array[String]) {
    println("Hello, Scala...")
  }
}

To support the compilation of Scala source code, the Scala libraries should be added to the dependency configuration:

dependencies {
  compile('org.scala-lang:scala-library:2.11.6')
}

Figure 6.3

As mentioned, the Scala plugin extends the Java plugin and adds a few new tasks. For example, the compileScala task depends on the compileJava task, and the compileTestScala task depends on the compileTestJava task. This can be understood easily by executing the classes and testClasses tasks and looking at the output.
$ gradle classes
:compileJava
:compileScala
:processResources UP-TO-DATE
:classes

BUILD SUCCESSFUL

$ gradle testClasses
:compileJava UP-TO-DATE
:compileScala UP-TO-DATE
:processResources UP-TO-DATE
:classes UP-TO-DATE
:compileTestJava UP-TO-DATE
:compileTestScala UP-TO-DATE
:processTestResources UP-TO-DATE
:testClasses UP-TO-DATE

BUILD SUCCESSFUL

Scala projects are also packaged as jar files. The jar task or assemble task creates a jar file in the build/libs directory.

$ jar -tvf build/libs/ScalaApplication-1.0.jar
    0 Thu Mar 26 23:49:04 IST 2015 META-INF/
   94 Thu Mar 26 23:49:04 IST 2015 META-INF/MANIFEST.MF
    0 Thu Mar 26 23:49:04 IST 2015 ch6/
 1194 Thu Mar 26 23:48:58 IST 2015 ch6/Customer.class
  609 Thu Mar 26 23:49:04 IST 2015 ch6/HelloScala$.class
  594 Thu Mar 26 23:49:04 IST 2015 ch6/HelloScala.class
 1375 Thu Mar 26 23:48:58 IST 2015 ch6/Order.class

The Scala plugin does not add any extra conventions to the Java plugin. Therefore, the conventions defined in the Java plugin, such as the lib directory and report directory, can be reused in the Scala plugin. The Scala plugin only adds a few sourceSet properties, such as allScala, scala.srcDirs, and scala, to work with source sets. The following task example displays the different properties available to the Scala plugin.
The following is a code snippet from ScalaApplication/build.gradle:

apply plugin: 'java'
apply plugin: 'scala'
apply plugin: 'eclipse'

version = '1.0'

jar {
  manifest {
    attributes 'Implementation-Title': 'ScalaApplication',
      'Implementation-Version': version
  }
}

repositories {
  mavenCentral()
}

dependencies {
  compile('org.scala-lang:scala-library:2.11.6')
  runtime('org.scala-lang:scala-compiler:2.11.6')
  compile('org.scala-lang:jline:2.9.0-1')
}

task displayScalaPluginConvention << {
  println "Lib Directory: $libsDir"
  println "Lib Directory Name: $libsDirName"
  println "Reports Directory: $reportsDir"
  println "Test Result Directory: $testResultsDir"

  println "Source Code in two sourcesets: $sourceSets"
  println "Production Code: ${sourceSets.main.java.srcDirs}, ${sourceSets.main.scala.srcDirs}"
  println "Test Code: ${sourceSets.test.java.srcDirs}, ${sourceSets.test.scala.srcDirs}"
  println "Production code output: ${sourceSets.main.output.classesDir} & ${sourceSets.main.output.resourcesDir}"
  println "Test code output: ${sourceSets.test.output.classesDir} & ${sourceSets.test.output.resourcesDir}"
}

The output of the displayScalaPluginConvention task is shown in the following code:

$ gradle displayScalaPluginConvention
…
:displayScalaPluginConvention
Lib Directory: <path>/build/libs
Lib Directory Name: libs
Reports Directory: <path>/build/reports
Test Result Directory: <path>/build/test-results
Source Code in two sourcesets: [source set 'main', source set 'test']
Production Code: [<path>/src/main/java], [<path>/src/main/scala]
Test Code: [<path>/src/test/java], [<path>/src/test/scala]
Production code output: <path>/build/classes/main & <path>/build/resources/main
Test code output: <path>/build/classes/test & <path>/build/resources/test

BUILD SUCCESSFUL

Finally, we will conclude this section by discussing how to execute a Scala application from Gradle; we can create a simple task in the build file as follows.
task runMain(type: JavaExec) {
  main = 'ch6.HelloScala'
  classpath = configurations.runtime + sourceSets.main.output + sourceSets.test.output
}

The HelloScala source file has a main method, which prints Hello, Scala... on the console. The runMain task executes the main method and displays the output on the console:

$ gradle runMain
....
:runMain
Hello, Scala...

BUILD SUCCESSFUL

Logging

Until now, we have used println everywhere in the build script to display messages to the user. If you come from a Java background, you know that a println statement is not the right way to give information to the user. You need logging. Logging helps the user to classify the categories of messages to show at different levels. These different levels help users to print the correct message based on the situation. For example, when a user wants complete, detailed tracking of your software, they can use the debug level. Similarly, whenever a user wants very limited but useful information while executing a task, they can use the quiet or info level. Gradle provides the following log levels:

Log Level   Description
ERROR       This is used to show error messages
QUIET       This is used to show limited useful information
WARNING     This is used to show warning messages
LIFECYCLE   This is used to show the progress (default level)
INFO        This is used to show information messages
DEBUG       This is used to show debug messages (all logs)

By default, the Gradle log level is LIFECYCLE.
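Besides choosing a level per invocation on the command line, the default level can also be fixed for a whole project. The following is a minimal sketch, assuming Gradle's standard org.gradle.logging.level property; the chosen value here is only an example:

```properties
# gradle.properties — sets the default log level for every build in this project
# (equivalent to passing the matching command-line switch on each invocation)
org.gradle.logging.level=info
```

With this in place, running plain gradle <task> behaves as if the info switch had been supplied.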
The following is a code snippet from LogExample/build.gradle:

task showLogging << {
  println "This is println example"
  logger.error "This is error message"
  logger.quiet "This is quiet message"
  logger.warn "This is WARNING message"
  logger.lifecycle "This is LIFECYCLE message"
  logger.info "This is INFO message"
  logger.debug "This is DEBUG message"
}

Now, execute the following command:

$ gradle showLogging
:showLogging
This is println example
This is error message
This is quiet message
This is WARNING message
This is LIFECYCLE message

BUILD SUCCESSFUL

Here, Gradle has printed all the logger statements up to the lifecycle level (including lifecycle), which is Gradle's default log level. You can also control the log level from the command line:

-q   This will show logs up to the quiet level. It will include error and quiet messages.
-i   This will show logs up to the info level. It will include error, quiet, warning, lifecycle, and info messages.
-s   This prints out the stacktrace for all exceptions.
-d   This prints out all logs and debug information. This is the most expressive log level, which will also print all the minor details.

Now, execute gradle showLogging -q:

This is println example
This is error message
This is quiet message

Apart from the regular lifecycle, Gradle provides an additional option to print the stack trace in case of any exception. A stack trace is different from debug output. In case of any failure, it allows tracking of all the nested functions that were called in sequence up to the point where the stack trace was generated. To verify, add an assert statement to the preceding task and execute the following:

task showLogging << {
  println "This is println example"
  ..
  assert 1==2
}

$ gradle showLogging -s
……
* Exception is:
org.gradle.api.tasks.TaskExecutionException: Execution failed for task ':showLogging'.
at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.executeActions(ExecuteActionsTaskExecuter.java:69)
at ….
org.gradle.api.internal.tasks.execution.SkipOnlyIfTaskExecuter.execute(SkipOnlyIfTaskExecuter.java:53)
at org.gradle.api.internal.tasks.execution.ExecuteAtMostOnceTaskExecuter.execute(ExecuteAtMostOnceTaskExecuter.java:43)
at org.gradle.api.internal.AbstractTask.executeWithoutThrowingTaskFailure(AbstractTask.java:305)
...

For stack traces, Gradle provides two options:

-s or --stacktrace: This will print a truncated stacktrace
-S or --full-stacktrace: This will print the full stacktrace

File management

One of the key features of any build tool is I/O operations: how easily you can perform operations such as reading files, writing files, and directory-related operations. Developers with Ant or Maven backgrounds know how painful and complex it was to handle file and directory operations in old build tools; sometimes you had to write custom tasks and plugins to perform these kinds of operations due to the XML limitations in Ant and Maven. Since Gradle uses Groovy, it will make your life much easier when dealing with file and directory-related operations.

Reading files

Gradle provides simple ways to read a file. You just need to use the File API (application programming interface), which provides everything you need to deal with the file. The following is a code snippet from FileExample/build.gradle:

task showFile << {
  File file1 = file("readme.txt")
  println file1 // will print the name of the file
  file1.eachLine {
    println it // will print the contents line by line
  }
}

To read the file, we have used file(<file name>). This is the default Gradle way to reference files, because Gradle adds some path behavior ($PROJECT_PATH/<filename>) for absolute and relative referencing of files. Here, the first println statement will print the name of the file, which is readme.txt. To read a file, Groovy provides the eachLine method in the File API, which reads all the lines of the file one by one.
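When line-by-line processing is not needed, Groovy's File API also offers one-shot readers. The following is a small hedged sketch (it assumes a readme.txt exists in the project directory; the task name is our own):

```groovy
task readWholeFile << {
    File f = file('readme.txt')
    String all = f.text          // getText(): the whole file as a single String
    List lines = f.readLines()   // every line collected into a list
    println "${lines.size()} lines, ${all.length()} characters"
}
```

Both text and readLines() come from the Groovy GDK extensions to java.io.File, so they are available in any Gradle build script without extra imports.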
To access a directory, you can use the following File API:

def dir1 = new File("src")
println "Checking directory " + dir1.isFile() // will return false for a directory
println "Checking directory " + dir1.isDirectory() // will return true for a directory

Writing files

To write to files, you can either use the append method to add content to the end of the file, or overwrite the file using the setText or write methods:

task fileWrite << {
  File file1 = file("readme.txt")

  // will append data at the end
  file1.append("\nAdding new line. \n")

  // will overwrite contents
  file1.setText("Overwriting existing contents")

  // will overwrite contents
  file1.write("Using write method")
}

Creating files/directories

You can create a new file by just writing some text to it:

task createFile << {
  File file1 = new File("newFile.txt")
  file1.write("Using write method")
}

By writing some data to the file, Groovy will automatically create the file if it does not exist. To write content to a file, you can also use the left-shift operator (<<); it will append data at the end of the file:

file1 << "New content"

If you want to create an empty file, you can create a new file using the createNewFile() method:

task createNewFile << {
  File file1 = new File("createNewFileMethod.txt")
  file1.createNewFile()
}

A new directory can be created using the mkdir command. Gradle also allows you to create nested directories in a single command using mkdirs:

task createDir << {
  def dir1 = new File("folder1")
  dir1.mkdir()

  def dir2 = new File("folder2")
  dir2.createTempDir()

  def dir3 = new File("folder3/subfolder31")
  dir3.mkdirs() // to create subdirectories in one command
}

In the preceding example, we are creating two directories, one using mkdir() and the other using createTempDir(). The difference is that when we create a directory using createTempDir(), that directory gets automatically deleted once your build script execution is completed.
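Beyond the raw File API, Gradle itself ships with file operations, such as the Copy task type and the project-level delete method. The following is a hedged sketch rather than a recipe from this chapter; the file and directory names are hypothetical:

```groovy
// A minimal sketch using Gradle's built-in Copy task type and project.delete.
// 'readme.txt' and the 'backup' directory are example names only.
task backupReadme(type: Copy) {
    from 'readme.txt'               // source file (or directory)
    into 'backup'                   // destination directory, created if missing
    rename { name -> "${name}.bak" } // rename each copied file on the way
}

task cleanBackup << {
    delete 'backup'                 // removes files or whole directories
}
```

Using the built-in task types has the advantage that Gradle tracks their inputs and outputs, so an unchanged copy is reported as UP-TO-DATE on subsequent runs.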
File operations

We will see examples of some of the frequently used methods while dealing with files, which will help you in build automation:

task fileOperations << {
  File file1 = new File("readme.txt")
  println "File size is " + file1.size()
  println "Checking existence " + file1.exists()
  println "Reading contents " + file1.getText()
  println "Checking directory " + file1.isDirectory()
  println "File length " + file1.length()
  println "Hidden file " + file1.isHidden()

  // File paths
  println "File path is " + file1.path
  println "File absolute path is " + file1.absolutePath
  println "File canonical path is " + file1.canonicalPath

  // Rename file
  file1.renameTo("writeme.txt")

  // File permissions
  file1.setReadOnly()
  println "Checking read permission " + file1.canRead() + " write permission " + file1.canWrite()
  file1.setWritable(true)
  println "Checking read permission " + file1.canRead() + " write permission " + file1.canWrite()
}

Most of the preceding methods are self-explanatory. Try to execute the preceding task and observe the output. If you try to execute the fileOperations task twice, you will get the exception readme.txt (No such file or directory), since you have renamed the file to writeme.txt.

Filter files

Certain file methods allow users to pass a regular expression as an argument. Regular expressions can be used to filter out only the required data, rather than fetch all the data.
The following is an example of the eachFileMatch() method, which will list only the Groovy files in a directory:

task filterFiles << {
  def dir1 = new File("dir1")
  dir1.eachFileMatch(~/.*\.groovy/) {
    println it
  }
  dir1.eachFileRecurse { dir ->
    if (dir.isDirectory()) {
      dir.eachFileMatch(~/.*\.groovy/) {
        println it
      }
    }
  }
}

The output is as follows:

$ gradle filterFiles
:filterFiles
dir1\groovySample.groovy
dir1\subdir1\groovySample1.groovy
dir1\subdir2\groovySample2.groovy
dir1\subdir2\subDir3\groovySample3.groovy

BUILD SUCCESSFUL

Delete files and directories

Gradle provides the delete() and deleteDir() APIs to delete files and directories, respectively:

task deleteFile << {
  def dir2 = new File("dir2")
  def file1 = new File("abc.txt")
  file1.createNewFile()
  dir2.mkdir()
  println "File path is " + file1.absolutePath
  println "Dir path is " + dir2.absolutePath
  file1.delete()
  dir2.deleteDir()
  println "Checking file(abc.txt) existence: " + file1.exists() + " and Directory(dir2) existence: " + dir2.exists()
}

The output is as follows:

$ gradle deleteFile
:deleteFile
File path is Chapter6/FileExample/abc.txt
Dir path is Chapter6/FileExample/dir2
Checking file(abc.txt) existence: false and Directory(dir2) existence: false

BUILD SUCCESSFUL

The preceding task will create a directory dir2 and a file abc.txt. Then it will print their absolute paths and finally delete them. You can verify whether they were deleted properly by calling the exists() function.

FileTree

Until now, we have dealt with single-file operations. Gradle provides plenty of user-friendly APIs to deal with file collections. One such API is FileTree. A FileTree represents a hierarchy of files or directories. It extends the FileCollection interface. Several objects in Gradle, such as sourceSets, implement the FileTree interface. You can initialize a FileTree with the fileTree() method.
The following are the different ways you can initialize the fileTree method:

task fileTreeSample << {
  FileTree fTree = fileTree('dir1')
  fTree.each {
    println it.name
  }
  FileTree fTree1 = fileTree('dir1') {
    include '**/*.groovy'
  }
  println ""
  fTree1.each {
    println it.name
  }
  println ""
  FileTree fTree2 = fileTree(dir: 'dir1', excludes: ['**/*.groovy'])
  fTree2.each {
    println it.absolutePath
  }
}

Execute the gradle fileTreeSample command and observe the output. The first iteration will print all the files in dir1. The second iteration will only include Groovy files (with the .groovy extension). The third iteration will exclude Groovy files (with the .groovy extension) and print the other files with their absolute paths.

You can also use FileTree to read contents from archive files such as ZIP, JAR, or TAR files:

FileTree jarFile = zipTree('SampleProject-1.0.jar')
jarFile.each {
  println it.name
}

The preceding code snippet will list all the files contained in a jar file.

Summary

In this article, we explored different topics of Gradle, such as I/O operations, logging, multi-project builds, and testing using Gradle. We also learned how easy it is to generate assets for web applications and Scala projects with Gradle. In the Testing with Gradle section, we learned some basics of executing tests with JUnit and TestNG.

In the next article, we will learn about the code quality aspects of a Java project. We will analyze a few Gradle plugins, such as Checkstyle and Sonar. Apart from learning these plugins, we will discuss another topic called continuous integration. These two topics will be combined and presented through an exploration of two different continuous integration servers, namely Jenkins and TeamCity.

Resources for Article:

Further resources on this subject:

Speeding up Gradle builds for Android [article]
Defining Dependencies [article]
Testing with the Android SDK [article]