How-To Tutorials - Web Development


Selecting and initializing the database

Packt
10 Jun 2014
7 min read
(For more resources related to this topic, see here.)

In other words, it's simpler than a SQL database, and very often stores information as key-value pairs. Usually, such solutions are used when handling and storing large amounts of data. It is also a very popular approach when we need a flexible schema or when we want to use JSON. It really depends on what kind of system we are building. In some cases, MySQL could be a better choice, while in some other cases, MongoDB. In our example blog, we're going to use both. In order to do this, we will need a layer that connects to the database server and accepts queries. To make things a bit more interesting, we will create a module that has only one API, but can switch between the two database models.

Using NoSQL with MongoDB

Let's start with MongoDB. Before we start storing information, we need a MongoDB server running. It can be downloaded from the official page of the database https://www.mongodb.org/downloads. We are not going to handle the communication with the database manually. There is a driver specifically developed for Node.js. It's called mongodb and we should include it in our package.json file. After successful installation via npm install, the driver will be available in our scripts. We can check this as follows:

    "dependencies": {
        "mongodb": "1.3.20"
    }

We will stick to the Model-View-Controller architecture and keep the database-related operations in a model called Articles. We can see this as follows:

    var crypto = require("crypto"),
        type = "mongodb",
        client = require('mongodb').MongoClient,
        mongodb_host = "127.0.0.1",
        mongodb_port = "27017",
        collection;

    module.exports = function() {
        if(type == "mongodb") {
            return {
                add: function(data, callback) { ... },
                update: function(data, callback) { ... },
                get: function(callback) { ... },
                remove: function(id, callback) { ... }
            }
        } else {
            return {
                add: function(data, callback) { ... },
                update: function(data, callback) { ... },
                get: function(callback) { ... },
                remove: function(id, callback) { ... }
            }
        }
    }

It starts with defining a few dependencies and settings for the MongoDB connection. Line number one requires the crypto module. We will use it to generate unique IDs for every article. The type variable defines which database is currently accessed. The third line initializes the MongoDB driver. We will use it to communicate with the database server. After that, we set the host and port for the connection and, at the end, a global collection variable, which will keep a reference to the collection with the articles. In MongoDB, the collections are similar to the tables in MySQL. The next logical step is to establish a database connection and perform the needed operations, as follows:

    connection = 'mongodb://';
    connection += mongodb_host + ':' + mongodb_port;
    connection += '/blog-application';
    client.connect(connection, function(err, database) {
        if(err) {
            throw new Error("Can't connect");
        } else {
            console.log("Connection to MongoDB server successful.");
            collection = database.collection('articles');
        }
    });

We pass the host and the port, and the driver does everything else. Of course, it is a good practice to handle the error (if any) and throw an exception. In our case, this is especially needed because without the information in the database, the frontend has nothing to show.
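If the MongoDB server is occasionally slow to start (in a development environment, for example), a small retry wrapper around the same connect call makes the model more forgiving. The following sketch is not part of the original module; the retry count and delay are arbitrary values chosen only for illustration:

    var client = require('mongodb').MongoClient,
        connection = 'mongodb://127.0.0.1:27017/blog-application';

    // Try to connect a few times before giving up, so a slow-starting
    // MongoDB instance does not immediately crash the application.
    function connectWithRetry(retriesLeft, done) {
        client.connect(connection, function(err, database) {
            if(!err) {
                return done(null, database.collection('articles'));
            }
            if(retriesLeft === 0) {
                return done(new Error("Can't connect"));
            }
            console.log("Connection failed, retrying...");
            setTimeout(function() {
                connectWithRetry(retriesLeft - 1, done);
            }, 2000); // wait two seconds between attempts
        });
    }

    connectWithRetry(5, function(err, articles) {
        if(err) { throw err; }
        console.log("Connection to MongoDB server successful.");
    });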
The rest of the module contains methods to add, edit, retrieve, and delete records:

    return {
        add: function(data, callback) {
            var date = new Date();
            data.id = crypto.randomBytes(20).toString('hex');
            data.date = date.getFullYear() + "-" + (date.getMonth() + 1) + "-" + date.getDate();
            collection.insert(data, {}, callback || function() {});
        },
        update: function(data, callback) {
            collection.update(
                {id: data.id},
                data,
                {},
                callback || function(){ }
            );
        },
        get: function(callback) {
            collection.find({}).toArray(callback);
        },
        remove: function(id, callback) {
            collection.findAndModify(
                {id: id},
                [],
                {},
                {remove: true},
                callback
            );
        }
    }

Note that getMonth() is zero-based, which is why 1 is added when building the date string. The add and update methods accept the data parameter. That's a simple JavaScript object. For example, see the following code:

    {
        title: "Blog post title",
        text: "Article's text here ..."
    }

The records are identified by an automatically generated unique id, stored in the lowercase id field that the update and remove methods later query. The update method needs it in order to find out which record to edit. All the methods also have a callback. That's important, because the module is meant to be used as a black box, that is, we should be able to create an instance of it, operate with the data, and at the end continue with the rest of the application's logic.

Using MySQL

We're going to use an SQL type of database with MySQL. We will add a few more lines of code to the already working Articles.js model. The idea is to have a class that supports the two databases as two different options. At the end, we should be able to switch from one to the other by simply changing the value of a variable. Similar to MongoDB, we need to first install the database to be able to use it. The official download page is http://www.mysql.com/downloads. MySQL requires another Node.js module. It should be added again to the package.json file. We can see the module as follows:

    "dependencies": {
        "mongodb": "1.3.20",
        "mysql": "2.0.0"
    }

Similar to the MongoDB solution, we first need to connect to the server. To do so, we need to know the values of the host, username, and password fields and, because MySQL organizes data into separate databases, the name of the database as well. So, the following code defines the needed variables:

    var mysql = require('mysql'),
        mysql_host = "127.0.0.1",
        mysql_user = "root",
        mysql_password = "",
        mysql_database = "blog_application",
        connection;

The previous example leaves the password field empty, but we should set the proper value for our system. The MySQL database requires us to define a table and its fields before we start saving data. So, consider the following code:

    CREATE TABLE IF NOT EXISTS `articles` (
        `id` int(11) NOT NULL AUTO_INCREMENT,
        `title` longtext NOT NULL,
        `text` longtext NOT NULL,
        `date` varchar(100) NOT NULL,
        PRIMARY KEY (`id`)
    ) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=1 ;

Once we have a database and its table set, we can continue with the database connection, as follows:

    connection = mysql.createConnection({
        host: mysql_host,
        user: mysql_user,
        password: mysql_password
    });
    connection.connect(function(err) {
        if(err) {
            throw new Error("Can't connect to MySQL.");
        } else {
            connection.query("USE " + mysql_database, function(err, rows, fields) {
                if(err) {
                    throw new Error("Missing database.");
                } else {
                    console.log("Successfully selected database.");
                }
            })
        }
    });

The driver provides a method to connect to the server and execute queries. The first executed query selects the database. If everything is ok, you should see Successfully selected database as an output in your console. Half of the job is done.
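Because every method takes a callback, the rest of the application never needs to know which database sits behind the model. A minimal usage sketch could look like the following; the require path is an assumption, since the surrounding application code is not shown here, and the get callback follows the MongoDB variant's (err, documents) signature (the MySQL variant shown later calls back with just the rows):

    // Assumed location of the model described above.
    var Articles = require('./models/Articles');
    var model = Articles();

    // Create a record, then read everything back once the insert completes.
    model.add({
        title: "Blog post title",
        text: "Article's text here ..."
    }, function(err) {
        if(err) { throw err; }
        model.get(function(err, articles) {
            if(err) { throw err; }
            console.log("Stored articles:", articles);
        });
    });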
What we should do now is replicate the methods returned in the first MongoDB implementation. We need to do this because, when we switch to MySQL, the code using the class would otherwise stop working. Replicating them means that they should have the same names and accept the same arguments. If we do everything correctly, at the end our application will support two types of databases, and all we have to do is change the value of the type variable:

    return {
        add: function(data, callback) {
            var date = new Date();
            var query = "";
            query += "INSERT INTO articles (title, text, date) VALUES (";
            query += connection.escape(data.title) + ", ";
            query += connection.escape(data.text) + ", ";
            query += "'" + date.getFullYear() + "-" + (date.getMonth() + 1) + "-" + date.getDate() + "'";
            query += ")";
            connection.query(query, callback);
        },
        update: function(data, callback) {
            var query = "UPDATE articles SET ";
            query += "title=" + connection.escape(data.title) + ", ";
            query += "text=" + connection.escape(data.text) + " ";
            query += "WHERE id=" + connection.escape(data.id);
            connection.query(query, callback);
        },
        get: function(callback) {
            var query = "SELECT * FROM articles ORDER BY id DESC";
            connection.query(query, function(err, rows, fields) {
                if(err) {
                    throw new Error("Error getting.");
                } else {
                    callback(rows);
                }
            });
        },
        remove: function(id, callback) {
            var query = "DELETE FROM articles WHERE id=" + connection.escape(id);
            connection.query(query, callback);
        }
    }

The code is a little longer than the MongoDB variant. That's because we needed to construct MySQL queries from the passed data. Keep in mind that we have to escape the information that comes into the module; that's why we use connection.escape(), including on the id used in the update and remove queries. With these lines of code, our model is complete. Now we can add, edit, remove, or get data.

Summary

In this article, we saw how to select and initialize the database for a blog application written with Node.js and AngularJS, using NoSQL with MongoDB as well as MySQL.

Resources for Article:

Further resources on this subject: So, what is Node.js? [Article] Understanding and Developing Node Modules [Article] An Overview of the Node Package Manager [Article]


Automating performance analysis with YSlow and PhantomJS

Packt
10 Jun 2014
12 min read
(For more resources related to this topic, see here.)

Getting ready

To run this article, the phantomjs binary will need to be accessible to the continuous integration server, which may not necessarily share the same permissions or PATH as our user. We will also need a target URL. We will use the PhantomJS port of the YSlow library to execute the performance analysis on our target web page. The YSlow library must be installed somewhere on the filesystem that is accessible to the continuous integration server. For our example, we have placed the yslow.js script in the tmp directory of the jenkins user's home directory. To find the jenkins user's home directory on a POSIX-compatible system, first switch to that user using the following command:

    sudo su - jenkins

Then print the home directory to the console using the following command:

    echo $HOME

We will need to have a continuous integration server set up where we can configure the jobs that will execute our automated performance analyses. The example that follows will use the open source Jenkins CI server. Jenkins CI is too large a subject to introduce here, but this article does not assume any working knowledge of it. For information about Jenkins CI, including basic installation or usage instructions, or to obtain a copy for your platform, visit the project website at http://jenkins-ci.org/. Our article uses version 1.552. The combination of PhantomJS and YSlow is in no way unique to Jenkins CI. The example aims to provide a clear illustration of automated performance testing that can easily be adapted to any number of continuous integration server environments. The article also uses several plugins on Jenkins CI to help facilitate our automated testing. These plugins include:

- Environment Injector Plugin
- JUnit Attachments Plugin
- TAP Plugin
- xUnit Plugin

To run the demo site, we must have Node.js installed. In a separate terminal, change to the phantomjs-sandbox directory (in the sample code's directory), and start the app with the following command:

    node app.js

How to do it…

To execute our automated performance analyses in Jenkins CI, the first thing that we need to do is set up the job as follows:

1. Select the New Item link in Jenkins CI. Give the new job a name (for example, YSlow Performance Analysis), select Build a free-style software project, and then click on OK.
2. To ensure that the performance analyses are automated, we enter a Build Trigger for the job. Check off the appropriate Build Trigger and enter details about it. For example, to run the tests every two hours, during business hours, Monday through Friday, check Build periodically and enter the Schedule as H 9-16/2 * * 1-5.
3. In the Build block, click on Add build step and then click on Execute shell.
4. In the Command text area of the Execute Shell block, enter the shell commands that we would normally type at the command line, for example:

    phantomjs ${HOME}/tmp/yslow.js -i grade -threshold "B" -f junit http://localhost:3000/css-demo > yslow.xml

5. In the Post-build Actions block, click on Add post-build action and then click on Publish JUnit test result report.
6. In the Test report XMLs field of the Publish JUnit Test Result Report block, enter *.xml.
7. Lastly, click on Save to persist the changes to this job.

Our performance analysis job should now run automatically according to the specified schedule; however, we can always trigger it manually by navigating to the job in Jenkins CI and clicking on Build Now.
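Before wiring the command into Jenkins CI, it can be handy to run exactly the same analysis from a small Node.js script, for example as a task the whole team can execute locally. The following sketch is only an illustration; it assumes that phantomjs is on the PATH and that yslow.js lives in the location used above:

    var execFile = require('child_process').execFile;
    var fs = require('fs');

    // The same arguments as the Jenkins "Execute shell" step.
    var args = [
        process.env.HOME + '/tmp/yslow.js',
        '-i', 'grade',
        '-threshold', 'B',
        '-f', 'junit',
        'http://localhost:3000/css-demo'
    ];

    execFile('phantomjs', args, function(err, stdout, stderr) {
        // A failed grade may surface as a non-zero exit code; keep the report either way.
        fs.writeFileSync('yslow.xml', stdout);
        if(err) {
            console.error('Analysis finished with failures:', stderr || err.message);
        } else {
            console.log('Wrote yslow.xml');
        }
    });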
After a few of the performance analyses have completed, we can navigate to those jobs in Jenkins CI and see the results shown in the following screenshots: The landing page for a performance analysis project in Jenkins CI Note the Test Result Trend graph with the successes and failures. The Test Result report page for a specific build Note that the failed tests in the overall analysis are called out and that we can expand specific items to view their details. The All Tests view of the Test Result report page for a specific build Note that all tests in the performance analysis are listed here, regardless of whether they passed or failed, and that we can click into a specific test to view its details. How it works… The driving principle behind this article is that we want our continuous integration server to periodically and automatically execute the YSlow analyses for us so that we can monitor our website's performance over time. This way, we can see whether our changes are having an effect on overall site performance, receive alerts when performance declines, or even fail builds if we fall below our performance threshold. The first thing that we do in this article is set up the build job. In our example, we set up a new job that was dedicated to the YSlow performance analysis task. However, these steps could be adapted such that the performance analysis task is added onto an existing multipurpose job. Next, we configured when our job will run, adding Build Trigger to run the analyses according to a schedule. For our schedule, we selected H 9-16/2 * * 1-5, which runs the analyses every two hours, during business hours, on weekdays. While the schedule that we used is fine for demonstration purposes, we should carefully consider the needs of our project—chances are that a different Build Trigger will be more appropriate. For example, it may make more sense to select Build after other projects are built, and to have the performance analyses run only after the new code has been committed, built, and deployed to the appropriate QA or staging environment. Another alternative would be to select Poll SCM and to have the performance analyses run only after Jenkins CI detects new changes in source control. With the schedule configured, we can apply the shell commands necessary for the performance analyses. As noted earlier, the Command text area accepts the text that we would normally type on the command line. Here we type the following: phantomjs: This is for the PhantomJS executable binary ${HOME}/tmp/yslow.js: This is to refer to the copy of the YSlow library accessible to the Jenkins CI user -i grade: This is to indicate that we want the "Grade" level of report detail -threshold "B": This is to indicate that we want to fail builds with an overall grade of "B" or below -f junit: This is to indicate that we want the results output in the JUnit format http://localhost:3000/css-demo: This is typed in as our target URL > yslow.xml: This is to redirect the JUnit-formatted output to that file on the disk What if PhantomJS isn't on the PATH for the Jenkins CI user? A relatively common problem that we may experience is that, although we have permission on Jenkins CI to set up new build jobs, we are not the server administrator. It is likely that PhantomJS is available on the same machine where Jenkins CI is running, but the jenkins user simply does not have the phantomjs binary on its PATH. In these cases, we should work with the person administering the Jenkins CI server to learn its path. 
Once we have the PhantomJS path, we can do the following: click on Add build step and then on Inject environment variables; drag-and-drop the Inject environment variables block to ensure that it is above our Execute shell block; in the Properties Content text area, apply the PhantomJS binary's path to the PATH variable, as we would in any other script as follows: PATH=/path/to/phantomjs/bin:${PATH} After setting the shell commands to execute, we jump into the Post-build Actions block and instruct Jenkins CI where it can find the JUnit XML reports. As our shell command is redirecting the output into a file that is directly in the workspace, it is sufficient to enter an unqualified *.xml here. Once we have saved our build job in Jenkins CI, the performance analyses can begin right away! If we are impatient for our first round of results, we can click on Build Now for our job and watch as it executes the initial performance analysis. As the performance analyses are run, Jenkins CI will accumulate the results on the filesystem, keeping them until they are either manually removed or until a discard policy removes old build information. We can browse these accumulated jobs in the web UI for Jenkins CI, clicking on the Test Result link to drill into them. There's more… The first thing that bears expanding upon is that we should be thoughtful about what we use as the target URL for our performance analysis job. The YSlow library expects a single target URL, and as such, it is not prepared to handle a performance analysis job that is otherwise configured to target two or more URLs. As such, we must select a strategy to compensate for this, for example: Pick a representative page: We could manually go through our site and select the single page that we feel best represents the site as a whole. For example, we could pick the page that is "most average" compared to the other pages ("most will perform at about this level"), or the page that is most likely to be the "worst performing" page ("most pages will perform better than this"). With our representative page selected, we can then extrapolate performance for other pages from this specimen. Pick a critical page: We could manually select the single page that is most sensitive to performance. For example, we could pick our site's landing page (for example, "it is critical to optimize performance for first-time visitors"), or a product demo page (for example, "this is where conversions happen, so this is where performance needs to be best"). Again, with our performance-sensitive page selected, we can optimize the general cases around the specific one. Set up multiple performance analysis jobs: If we are not content to extrapolate site performance from a single specimen page, then we could set up multiple performance analysis jobs—one for each page on the site that we want to test. In this way, we could (conceivably) set up an exhaustive performance analysis suite. Unfortunately, the results will not roll up into one; however, once our site is properly tuned, we need to only look for the telltale red ball of a failed build in Jenkins CI. The second point worth considering is—where do we point PhantomJS and YSlow for the performance analysis? And how does the target URL's environment affect our interpretation of the results? If we are comfortable running our performance analysis against our production deploys, then there is not much else to discuss—we are assessing exactly what needs to be assessed. 
But if we are analyzing performance in production, then it's already too late—the slow code has already been deployed! If we have a QA or staging environment available to us, then this is potentially better; we can deploy new code to one of these environments for integration and performance testing before putting it in front of the customers. However, these environments are likely to be different from production despite our best efforts. For example, though we may be "doing everything else right", perhaps our staging server causes all traffic to come back from a single hostname, and thus, we cannot properly mimic a CDN, nor can we use cookie-free domains. Do we lower our threshold grade? Do we deactivate or ignore these rules? How can we tell apart the false negatives from the real warnings? We should put some careful thought into this—but don't be disheartened—better to have results that are slightly off than to have no results at all!

Using TAP format

If JUnit-formatted results turn out to be unacceptable, there is also a TAP plugin for Jenkins CI. Test Anything Protocol (TAP) is a plain text-based report format that is relatively easy for both humans and machines to read. With the TAP plugin installed in Jenkins CI, we can easily configure our performance analysis job to use it. We would just make the following changes to our build job:

1. In the Command text area of our Execute shell block, we would enter the following command:

    phantomjs ${HOME}/tmp/yslow.js -i grade -threshold "B" -f tap http://localhost:3000/css-demo > yslow.tap

2. In the Post-build Actions block, we would select Publish TAP Results instead of Publish JUnit test result report and enter yslow.tap in the Test results text field.

Everything else about using TAP instead of JUnit-formatted results here is basically the same. The job will still run on the schedule we specify, Jenkins CI will still accumulate test results for comparison, and we can still explore the details of an individual test's outcomes. The TAP plugin adds an additional link in the job for us, TAP Extended Test Results, as shown in the following screenshot:

One thing worth pointing out about using TAP results is that it is much easier to set up a single job to test multiple target URLs within a single website. We can enter multiple tests in the Execute Shell block (separating them with the && operator) and then set our Test Results target to be *.tap. This will conveniently combine the results of all our performance analyses into one.

Summary

In this article, we saw how to set up an automated performance analysis task on a continuous integration server (for example, Jenkins CI) using PhantomJS and the YSlow library.

Resources for Article:

Further resources on this subject: Getting Started [article] Introducing a feature of IntroJs [article] So, what is Node.js? [article]
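If chaining several commands with && becomes unwieldy, the same idea can be expressed as a short Node.js script that analyses each target URL in turn and writes one .tap file per page for the *.tap pattern to pick up. This is only a sketch; the URL list is invented for illustration:

    var execFile = require('child_process').execFile;
    var fs = require('fs');

    // Hypothetical pages of the site we want to cover.
    var targets = [
        'http://localhost:3000/css-demo',
        'http://localhost:3000/js-demo'
    ];

    // Run the analyses one after another so the reports are written in order.
    function runNext(index) {
        if(index >= targets.length) { return; }
        var args = [process.env.HOME + '/tmp/yslow.js',
                    '-i', 'grade', '-threshold', 'B', '-f', 'tap', targets[index]];
        execFile('phantomjs', args, function(err, stdout) {
            // Keep the TAP output even when the grade falls below the threshold.
            fs.writeFileSync('yslow-' + index + '.tap', stdout);
            runNext(index + 1);
        });
    }

    runNext(0);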


Building a Private App

Packt
23 May 2014
14 min read
(For more resources related to this topic, see here.) Even though the app will be simple and only take a few hours to build, we'll still use good development practices to ensure we create a solid foundation. There are many different approaches to software development and discussing even a fraction of them is beyond the scope of this book. Instead, we'll use a few common concepts, such as requirements gathering, milestones, Test-Driven Development (TDD), frequent code check-ins, and appropriate commenting/documentation. Personal discipline in following development procedures is one of the best things a developer can bring to a project; it is even more important than writing code. This article will cover the following topics: The structure of the app we'll be building The development process Working with the Shopify API Using source control Deploying to production Signing up for Shopify Before we dive back into code, it would be helpful to get the task of setting up a Shopify store out of the way. Sign up as a Shopify partner by going to http://partners.shopify.com. The benefit of this is that partners can provision stores that can be used for testing. Go ahead and make one now before reading further. Keep your login information close at hand; we'll need it in just a moment. Understanding our workflow The general workflow for developing our application is as follows: Pull down the latest version of the master branch. Pick a feature to implement from our requirements list. Create a topic branch to keep our changes isolated. Write tests that describe the behavior desired by our feature. Develop the code until it passes all the tests. Commit and push the code into the remote repository. Pull down the latest version of the master branch and merge it with our topic branch. Run the test suite to ensure that everything still works. Merge the code back with the master branch. Commit and push the code to the remote repository. The previous list should give you a rough idea of what is involved in a typical software project involving multiple developers. The use of topic branches ensures that our work in progress won't affect other developers (called breaking the build) until we've confirmed that our code has passed all the tests and resolved any conflicts by merging in the latest stable code from the master branch. The practical upside of this methodology is that it allows bug fixes or work from another developer to be added to the project at any time without us having to worry about incomplete code polluting the build. This also gives us the ability to deploy production from a stable code base. In practice, a lot of projects will also have a production branch (or tagged release) that contains a copy of the code currently running in production. This is primarily in case of a server failure so that the application can be restored without having to worry about new features being released ahead of schedule, and secondly so that if a new deploy introduces bugs, it can easily be rolled back. Building the application We'll be building an application that allows Shopify storeowners to organize contests for their shoppers and randomly select a winner. Contests can be configured based on purchase history and timeframe. For example, a contest could be organized for all the customers who bought the newest widget within the last three days, or anyone who has made an order for any product in the month of August. 
To accomplish this, we'll need to be able to pull down order information from the Shopify store, generate a random winner, and show the storeowner the results. Let's start out by creating a list of requirements for our application. We'll use this list to break our development into discrete pieces so we can easily measure our progress and also keep our focus on the important features. Of course, it's difficult to make a complete list of all the requirements and have it stick throughout the development process, which is why a common strategy is to develop in iterations (or sprints). The result of an iteration is a working app that can be reviewed by the client so that the remaining features can be reprioritized if necessary. High-level requirements The requirements list comprises all the tasks we're going to accomplish in this article. The end result will be an application that we can use to run a contest for a single Shopify store. Included in the following list are any related database, business logic, and user interface coding necessary. Install a few necessary gems. Store Shopify API credentials. Connect to Shopify. Retrieve order information from Shopify. Retrieve product information from Shopify. Clean up the UI. Pick a winner from a list. Create contests. Now that we have a list of requirements, we can treat each one as a sprint. We will work in a topic branch and merge our code to the master branch at the end of the sprint. Installing a few necessary gems The first item on our list is to add a few code libraries (gems) to our application. Let's create a topic branch and do just that. To avoid confusion over which branch contains code for which feature, we can start the branch name with the requirement number. We'll additionally prepend the chapter number for clarity, so our format will be <chapter #>_<requirement #>_<branch name>. Execute the following command line in the root folder of the app: git checkout -b ch03_01_gem_updates This command will create a local branch called ch03_01_gem_updates that we will use to isolate our code for this feature. Once we've installed all the gems and verified that the application runs correctly, we'll merge our code back with the master branch. At a minimum we need to install the gems we want to use for testing. For this app we'll use RSpec. We'll need to use the development and test group to make sure the testing gems aren't loaded in production. Add the following code in bold to the block present in the Gemfile: group :development, :test do gem "sqlite3" # Helpful gems gem "better_errors" # improves error handling gem "binding_of_caller" # used by better errors # Testing frameworks gem 'rspec-rails' # testing framework gem "factory_girl_rails" # use factories, not fixtures gem "capybara" # simulate browser activity gem "fakeweb" # Automated testing gem 'guard' # automated execution of test suite upon change gem "guard-rspec" # guard integration with rspec # Only install the rb-fsevent gem if on Max OSX gem 'rb-fsevent' # used for Growl notifications end Now we need to head over to the terminal and install the gems via Bundler with the following command: bundle install The next step is to install RSpec: rails generate rspec:install The final step is to initialize Guard: guard init rspec This will create a Guard file, and fill it with the default code needed to detect the file changes. We can now restart our Rails server and verify that everything works properly. 
We have to do a full restart to ensure that any initialization files are properly picked up. Once we've ensured that our page loads without issue, we can commit our code and merge it back with the master branch: git add --all git commit -am "Added gems for testing" git checkout master git merge ch03_01_gem_updates git push Great! We've completed our first requirement. Storing Shopify API credentials In order to access our test store's API, we'll need to create a Private App and store the provided credentials there for future use. Fortunately, Shopify makes this easy for us via the Admin UI: Go to the Apps page. At the bottom of the page, click on the Create a private API key… link. Click on the Generate new Private App button. We'll now be provided with three important pieces of information: the API Key, password, and shared secret. In addition, we can see from the example URL field that we need to track our Shopify URL as well. Now that we have credentials to programmatically access our Shopify store, we can save this in our application. Let's create a topic branch and get to work: git checkout -b ch03_02_shopify_credentials Rails offers a generator called a scaffold that will create the database migration model, controller, view files, and test stubs for us. Run the following from the command line to create the scaffold for the Account vertical (make sure it is all on one line): rails g scaffold Account shopify_account_url:string shopify_api_key:string shopify_password:string shopify_shared_secret:string We'll need to run the database migration to create the database table using the following commands: bundle exec rake db:migrate bundle exec rake db:migrate RAILS_ENV=test Use the following command to update the generated view files to make them bootstrap compatible: rails g bootstrap:themed Accounts -f Head over to http://localhost:3000/accounts and create a new account in our app that uses the Shopify information from the Private App page. It's worth getting Guard to run our test suite every time we make a change so we can ensure that we don't break anything. Open up a new terminal in the root folder of the app and start up Guard: bundle exec guard After booting up, Guard will automatically run all our tests. They should all pass because we haven't made any changes to the generated code. If they don't, you'll need to spend time sorting out any failures before continuing. The next step is to make the app more user friendly. We'll make a few changes now and leave the rest for you to do later. Update the layout file so it has accurate navigation. Boostrap created several dummy links in the header navigation and sidebar. Update the navigation list in /app/views/layouts/application.html.erb to include the following code: <a class="brand" href="/">Contestapp</a> <div class="container-fluid nav-collapse"> <ul class="nav"> <li><%= link_to "Accounts", accounts_path%></li> </ul> </div><!--/.nav-collapse --> Add validations to the account model to ensure that all fields are required when creating/updating an account. Add the following lines to /app/models/account.rb: validates_presence_of :shopify_account_url validates_presence_of :shopify_api_key validates_presence_of :shopify_password validates_presence_of :shopify_shared_secret This will immediately cause the controller tests to fail due to the fact that it is not passing in all the required fields when attempting to submit the created form. If you look at the top of the file, you'll see some code that creates the :valid_attributes hash. 
If you read the comment above it, you'll see that we need to update the hash to contain the following minimally required fields:

    # This should return the minimal set of attributes required
    # to create a valid Account. As you add validations to
    # Account, be sure to adjust the attributes here as well.
    let(:valid_attributes) { {
      "shopify_account_url" => "MyString",
      "shopify_password" => "MyString",
      "shopify_api_key" => "MyString",
      "shopify_shared_secret" => "MyString"
    } }

This is a prime example of why having a testing suite is important. It keeps us from writing code that breaks other parts of the application, or in this case, helps us discover a weakness we might not have known we had: the ability to create a new account record without filling in any fields! Now that we have satisfied this requirement and all our tests pass, we can commit the code and merge it with the master branch:

    git add --all
    git commit -am "Account model and related files"
    git checkout master
    git merge ch03_02_shopify_credentials
    git push

Excellent! We've now completed another critical piece!

Connecting to Shopify

Now that we have a test store to work with, we're ready to implement the code necessary to connect our app to Shopify. First, we need to create a topic branch:

    git checkout -b ch03_03_shopify_connection

We are going to use the official Shopify gem to connect our app to our test store, as well as interact with the API. Add this to the Gemfile under the gem 'bootstrap-sass' line:

    gem 'shopify_api'

Update the bundle from the command line:

    bundle install

We'll also need to restart Guard in order for it to pick up the new gem. This is typically done by using a key combination like Ctrl + C (Windows) or Cmd + C (Mac OS X), or by typing the word exit and pressing the Enter key. I've written a class that encapsulates the Shopify connection logic and initializes the global ShopifyAPI class that we can then use to interact with the API. You can find the code for this class in ch03_shopify_integration.rb. You'll need to copy the contents of this file to your app in a new file located at /app/services/shopify_integration.rb. The contents of the spec file ch03_shopify_integration_spec.rb need to be pasted in a new file located at /spec/services/shopify_integration_spec.rb. Using this class will allow us to execute something like ShopifyAPI::Order.find(:all) to get a list of orders, or ShopifyAPI::Product.find(1234) to retrieve the product with the ID 1234. The spec file contains tests for functionality that we haven't built yet and will initially fail. We'll fix this soon! We are going to add a Test Connection button to the account page that will give the user instant feedback as to whether or not the credentials are valid. Because we will be adding a new action to our application, we will need to first update the controller, request, routing, and view tests before proceeding. Given the nature of this article, and because in this case we're connecting to an external service, topics such as mocking and test writing will have to be reviewed as homework. I recommend watching the excellent screencasts created by Ryan Bates at http://railscasts.com as a primer on testing in Rails. The first step is to update the resources :accounts route in the /config/routes.rb file with the following block:

    resources :accounts do
      member do
        get 'test_connection'
      end
    end

Copy the controller code from ch03_accounts_controller.rb and replace the code in the /app/controllers/accounts_controller.rb file.
This new code adds the test_connection method as well as ensuring the account is loaded properly. Finally, we need to add a button to /app/views/account/show.html.erb that will call this action in div.form-actions:

    <%= link_to "Test Connection", test_connection_account_path(@account), :class => 'btn' %>

If we view the account page in our browser, we can now test our Shopify integration. Assuming that everything was copied correctly, we should see a success message after clicking on the Test Connection button. If everything was not copied correctly, we'll see the message that the Shopify API returned to us as a clue to what isn't working. Once all the tests pass, we can commit the code and merge it with the master branch:

    git add --all
    git commit -am "Shopify connection and related UI"
    git checkout master
    git merge ch03_03_shopify_connection
    git push

Having fun? Good, because things are about to get heavy.

Summary

This article briefly explained how to integrate with Shopify's API in order to retrieve product and order information from the shop. The UI is then streamlined a bit before the logic to create a contest is added.

Resources for Article:

Further resources on this subject: Integrating typeahead.js into WordPress and Ruby on Rails [Article] Xen Virtualization: Work with MySQL Server, Ruby on Rails, and Subversion [Article] Designing and Creating Database Tables in Ruby on Rails [Article]


3D Websites

Packt
23 May 2014
10 min read
(For more resources related to this topic, see here.) Creating engaging scenes There is no adopted style for a 3D website. No metaphor can best describe the process of designing the 3D web. Perhaps what we know the most is what does not work. Often, our initial concept is to model the real world. An early design that was used years ago involved a university that wanted to use its campus map to navigate through its website. One found oneself dragging the mouse repeatedly, as fast as one could, just to get to the other side of campus. A better design would've been a book shelf where everything was in front of you. To view the chemistry department, just grab the chemistry book, and click on the virtual pages to view the faculty, curriculum, and other department information. Also, if you needed to cross-reference this with the math department's upcoming schedule, you could just grab the math book. Each attempt adds to our knowledge and gets us closer to something better. What we know is what most other applications of computer graphics learned—that reality might be a starting point, but we should not let it interfere with creativity. 3D for the sake of recreating the real world limits our innovative potential. Following this starting point, strip out the parts bound by physics, such as support beams or poles that serve no purpose in a virtual world. Such items make the rendering slower by just existing. Once we break these bounds, the creative process takes over—perhaps a whimsical version, a parody, something dark and scary, or a world-emphasizing story. Characters in video games and animated movies take on stylized features. The characters are purposely unrealistic or exaggerated. One of the best animations to exhibit this is Chris Landreth's The Spine, Ryan (Academy Award for best-animated short film in 2004), and his earlier work in Psychological Driven Animation, where the characters break apart by the ravages of personal failure (https://www.nfb.ca/film/ryan). This demonstration will describe some of the more difficult technical issues involved with lighting, normal maps, and the efficient sharing of 3D models. The following scene uses 3D models and textures maps from previous demonstrations but with techniques that are more complex. Engage thrusters This scene has two lampposts and three brick walls, yet we only read in the texture map and 3D mesh for one of each and then reuse the same models several times. This has the obvious advantage that we do not need to read in the same 3D models several times, thus saving download time and using less memory. A new function, copyObject(), was created that currently sits inside the main WebGL file, although it can be moved to mesh3dObject.js. In webGLStart(), after the original objects were created, we call copyObject(), passing along the original object with the unique name, location, rotation, and scale. 
In the following code, we copy the original streetLight0Object into a new streetLight1Object: streetLight1Object = copyObject( streetLight0Object, "streetLight1", streetLight1Location, [1, 1, 1], [0, 0, 0] ); Inside copyObject(), we first create the new mesh and then set the unique name, location (translation), rotation, and scale: function copyObject(original, name, translation, scale, rotation) { meshObjectArray[ totalMeshObjects ] = new meshObject(); newObject = meshObjectArray[ totalMeshObjects ]; newObject.name = name; newObject.translation = translation; newObject.scale = scale; newObject.rotation = rotation; The object to be copied is named original. We will not need to set up new buffers since the new 3D mesh can point to the same buffers as the original object: newObject.vertexBuffer = original.vertexBuffer; newObject.indexedFaceSetBuffer = original.indexedFaceSetBuffer; newObject.normalsBuffer = original.normalsBuffer; newObject.textureCoordBuffer = original.textureCoordBuffer; newObject.boundingBoxBuffer = original.boundingBoxBuffer; newObject.boundingBoxIndexBuffer = original.boundingBoxIndexBuffer; newObject.vertices = original.vertices; newObject.textureMap = original.textureMap; We do need to create a new bounding box matrix since it is based on the new object's unique location, rotation, and scale. In addition, meshLoaded is set to false. At this stage, we cannot determine if the original mesh and texture map have been loaded since that is done in the background: newObject.boundingBoxMatrix = mat4.create(); newObject.meshLoaded = false; totalMeshObjects++; return newObject; } There is just one more inclusion to inform us that the original 3D mesh and texture map(s) have been loaded inside drawScene(): streetLightCover1Object.meshLoaded = streetLightCover0Object.meshLoaded; streetLightCover1Object.textureMap = streetLightCover0Object.textureMap; This is set each time a frame is drawn, and thus, is redundant once the mesh and texture map have been loaded, but the additional code is a very small hit in performance. Similar steps are performed for the original brick wall and its two copies. Most of the scene is programmed using fragment shaders. There are four lights: the two streetlights, the neon Products sign, and the moon, which sets and rises. The brick wall uses normal maps. However, it is more complex here; the use of spotlights and light attenuation, where the light fades over a distance. The faint moon light, however, does not fade over a distance. Opening scene with four light sources: two streetlights, the Products neon sign, and the moon This program has only three shaders: LightsTextureMap, used by the brick wall with a texture normal map; Lights, used for any object that is illuminated by one or more lights; and Illuminated, used by the light sources such as the moon, neon sign, and streetlight covers. The simplest out of these fragment shaders is Illuminated. It consists of a texture map and the illuminated color, uLightColor. For many objects, the texture map would simply be a white placeholder. However, the moon uses a texture map, available for free from NASA that must be merged with its color: vec4 fragmentColor = texture2D(uSampler, vec2(vTextureCoord.s, vTextureCoord.t)); gl_FragColor = vec4(fragmentColor.rgb * uLightColor, 1.0); The light color also serves another purpose, as it will be passed on to the other two fragment shaders since each adds its own individual color: off-white for the streetlights, gray for the moon, and pink for the neon sign. 
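Since copyObject() only needs a unique name and a transform, reusing a mesh several times reduces to a small loop. The following sketch shows how the two brick-wall copies might be created; the object and placement values are assumptions for illustration rather than the scene's actual coordinates:

    // Hypothetical placements for the two copies of the original brick wall.
    var wallPlacements = [
        { name: "brickWall1", translation: [ 10.0, 0.0, -5.0 ] },
        { name: "brickWall2", translation: [ -10.0, 0.0, -5.0 ] }
    ];

    for (var i = 0; i < wallPlacements.length; i++) {
        copyObject(
            brickWall0Object,               // original mesh whose buffers are shared
            wallPlacements[i].name,         // unique name
            wallPlacements[i].translation,  // unique location
            [1, 1, 1],                      // same scale as the original
            [0, 0, 0]                       // no additional rotation
        );
    }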
The next step is to use the shaderLights fragment shader. We begin by setting the ambient light, which is a dim light added to every pixel, usually about 0.1, so nothing is pitch black. Then, we make a call for each of our four light sources (two streetlights, the moon, and the neon sign) to the calculateLightContribution() function: void main(void) { vec3 lightWeighting = vec3(uAmbientLight, uAmbientLight, uAmbientLight); lightWeighting += uStreetLightColor * calculateLightContribution(uSpotLight0Loc, uSpotLightDir, false); lightWeighting += uStreetLightColor * calculateLightContribution(uSpotLight1Loc, uSpotLightDir, false); lightWeighting += uMoonLightColor * calculateLightContribution(uMoonLightPos, vec3(0.0, 0.0, 0.0), true); lightWeighting += uProductTextColor * calculateLightContribution(uProductTextLoc, vec3(0.0, 0.0, 0.0), true); All four calls to calculateLightContribution() are multiplied by the light's color (white for the streetlights, gray for the moon, and pink for the neon sign). The parameters in the call to calculateLightContribution(vec3, vec3, vec3, bool) are: location of the light, its direction, the pixel's normal, and the point light. This parameter is true for a point light that illuminates in all directions, or false if it is a spotlight that points in a specific direction. Since point lights such as the moon or neon sign have no direction, their direction parameter is not used. Therefore, their direction parameter is set to a default, vec3(0.0, 0.0, 0.0). The vec3 lightWeighting value accumulates the red, green, and blue light colors at each pixel. However, these values cannot exceed the maximum of 1.0 for red, green, and blue. Colors greater than 1.0 are unpredictable based on the graphics card. So, the red, green, and blue light colors must be capped at 1.0: if ( lightWeighting.r > 1.0 ) lightWeighting.r = 1.0; if ( lightWeighting.g > 1.0 ) lightWeighting.g = 1.0; if ( lightWeighting.b > 1.0 ) lightWeighting.b = 1.0; Finally, we calculate the pixels based on the texture map. Only the street and streetlight posts use this shader, and neither have any tiling, but the multiplication by uTextureMapTiling was included in case there was tiling. The fragmentColor based on the texture map is multiplied by lightWeighting—the accumulation of our four light sources for the final color of each pixel: vec4 fragmentColor = texture2D(uSampler, vec2(vTextureCoord.s*uTextureMapTiling.s, vTextureCoord.t*uTextureMapTiling.t)); gl_FragColor = vec4(fragmentColor.rgb * lightWeighting.rgb, 1.0); } In the calculateLightContribution() function, we begin by determining the angle between the light's direction and point's normal. The dot product is the cosine between the light's direction to the pixel and the pixel's normal, which is also known as Lambert's cosine law (http://en.wikipedia.org/wiki/Lambertian_reflectance): vec3 distanceLightToPixel = vec3(vPosition.xyz - lightLoc); vec3 vectorLightPosToPixel = normalize(distanceLightToPixel); vec3 lightDirNormalized = normalize(lightDir); float angleBetweenLightNormal = dot( -vectorLightPosToPixel, vTransformedNormal ); A point light shines in all directions, but a spotlight has a direction and an expanding cone of light surrounding this direction. For a pixel to be lit by a spotlight, that pixel must be in this cone of light. 
This is the beam width area where the pixel receives the full amount of light, which fades out towards the cut-off angle, that is, the angle where there is no more light coming from this spotlight:

With texture maps removed, we reveal the value of the dot product between the pixel normal and the direction of the light

    if ( pointLight) {
        lightAmt = 1.0;
    } else {
        // spotlight
        float angleLightToPixel = dot( vectorLightPosToPixel, lightDirNormalized );
        // note, uStreetLightBeamWidth and uStreetLightCutOffAngle
        // are the cosines of the angles, not actual angles
        if ( angleLightToPixel >= uStreetLightBeamWidth ) {
            lightAmt = 1.0;
        }
        if ( angleLightToPixel > uStreetLightCutOffAngle ) {
            lightAmt = (angleLightToPixel - uStreetLightCutOffAngle) /
                (uStreetLightBeamWidth - uStreetLightCutOffAngle);
        }
    }

After determining the amount of light at that pixel, we calculate attenuation, which is the fall-off of light over a distance. Without attenuation, the light is constant. The moon has no light attenuation since it's dim already, but the other three lights fade out at the maximum distance. The float maxDist = 15.0; code snippet says that after 15 units, there is no more contribution from this light. If we are less than 15 units away from the light, reduce the amount of light proportionately. For example, a pixel 10 units away from the light source receives (15-10)/15 or 1/3 the amount of light:

    attenuation = 1.0;
    if ( uUseAttenuation ) {
        if ( length(distanceLightToPixel) < maxDist ) {
            attenuation = (maxDist - length(distanceLightToPixel))/maxDist;
        }
        else attenuation = 0.0;
    }

Finally, we multiply the values that make the light contribution and we are done:

    lightAmt *= angleBetweenLightNormal * attenuation;
    return lightAmt;

Next, we must account for the brick wall's normal map using the shaderLightsNormalMap-fs fragment shader. The normal is equal to rgb * 2 - 1. For example, rgb (1.0, 0.5, 0.0), which is orange, would become the normal (1.0, 0.0, -1.0). This normal is converted to a unit value, or normalized, to (0.707, 0, -0.707):

    vec4 textureMapNormal = vec4(
        (texture2D(uSamplerNormalMap,
            vec2(vTextureCoord.s*uTextureMapTiling.s,
                 vTextureCoord.t*uTextureMapTiling.t)) * 2.0) - 1.0 );
    vec3 pixelNormal = normalize(uNMatrix * normalize(textureMapNormal.rgb) );

A normal mapped brick (without the red brick texture image) reveals how changing the pixel normal alters the shading with various light sources

We call the same calculateLightContribution() function, but we now pass along pixelNormal calculated using the normal texture map:

    calculateLightContribution(uSpotLight0Loc, uSpotLightDir, pixelNormal, false);

From here, much of the code is the same, except we use pixelNormal in the dot product to determine the angle between the normal and the light sources:

    float angleLightToTextureMap = dot( -vectorLightPosToPixel, pixelNormal );

Now, angleLightToTextureMap replaces angleBetweenLightNormal because we are no longer using the vertex normal embedded in the 3D mesh's .obj file, but instead the pixel normal derived from the normal texture map file, brickNormalMap.png.

A normal mapped brick wall with various light sources

Objective complete – mini debriefing

This comprehensive demonstration combined multiple spot and point lights, shared 3D meshes instead of loading the same 3D meshes several times, and deployed normal texture maps for a realistic 3D brick wall appearance. The next step is to build upon this demonstration, inserting links to web pages found on a typical website.
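The rgb * 2 - 1 mapping and Lambert's cosine law are easy to sanity-check outside the shader. The following plain JavaScript sketch reproduces the orange-texel example from the text; it is a verification aid only and is not part of the WebGL code:

    // Decode a normal-map texel (channel values in 0..1) into a unit normal.
    function decodeNormal(r, g, b) {
        var n = [r * 2 - 1, g * 2 - 1, b * 2 - 1];
        var len = Math.sqrt(n[0] * n[0] + n[1] * n[1] + n[2] * n[2]);
        return [n[0] / len, n[1] / len, n[2] / len];
    }

    // Lambert's cosine law: brightness is the dot product of the
    // direction towards the light and the surface normal.
    function lambert(lightDir, normal) {
        var d = lightDir[0] * normal[0] + lightDir[1] * normal[1] + lightDir[2] * normal[2];
        return Math.max(d, 0.0); // negative values mean the surface faces away from the light
    }

    var normal = decodeNormal(1.0, 0.5, 0.0);   // the orange texel from the text
    console.log(normal);                        // roughly [0.707, 0, -0.707]
    console.log(lambert([1, 0, 0], normal));    // about 0.707 for a light along +x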
In this example, we just identified a location for Products using a neon sign to catch the users' attention. As a 3D website is built, we will need better ways to navigate this virtual space and this is covered in the following section.


Running our first web application

Packt
21 May 2014
8 min read
(For more resources related to this topic, see here.) The standalone/deployments directory, as in the previous releases of JBoss Application Server, is the location used by end users to perform their deployments and applications are automatically deployed into the server at runtime. The artifacts that can be used to deploy are as follows: WAR (Web application Archive): This is a JAR file used to distribute a collection of JSP (Java Server Pages), servlets, Java classes, XML files, libraries, static web pages, and several other features that make up a web application. EAR (Enterprise Archive): This type of file is used by Java EE for packaging one or more modules within a single file. JAR (Java Archive): This is used to package multiple Java classes. RAR (Resource Adapter Archive): This is an archive file that is defined in the JCA specification as the valid format for deployment of resource adapters on application servers. You can deploy a RAR file on the AS Java as a standalone component or as part of a larger application. In both cases, the adapter is available to all applications using a lookup procedure. The deployment in WildFly has some deployment file markers that can be identified quickly, both by us and by WildFly, to understand what is the status of the artifact, whether it was deployed or not. The file markers always have the same name as the artifact that will deploy. A basic example is the marker used to indicate that my-first-app.war, a deployed application, will be the dodeploy suffix. Then in the directory to deploy, there will be a file created with the name my-first-app.war.dodeploy. Among these markers, there are others, explained as follows: dodeploy: This suffix is inserted by the user, which indicates that the deployment scanner will deploy the artifact indicated. This marker is mostly important for exploded deployments. skipdeploy: This marker disables the autodeploy mode while this file is present in the deploy directory, only for the artifact indicated. isdeploying: This marker is placed by the deployment scanner service to indicate that it has noticed a .dodeploy file or a new or updated autodeploy mode and is in the process of deploying the content. This file will be erased by the deployment scanner so the deployment process finishes. deployed: This marker is created by the deployment scanner to indicate that the content was deployed in the runtime. failed: This marker is created by the deployment scanner to indicate that the deployment process failed. isundeploying: This marker is created by the deployment scanner and indicates the file suffix .deployed was deleted and its contents will be undeployed. This marker will be deleted when the process is completely undeployed. undeployed: This marker is created by the deployment scanner to indicate that the content was undeployed from the runtime. pending: This marker is placed by the deployment scanner service to indicate that it has noticed the need to deploy content but has not yet instructed the server to deploy it. When we deploy our first application, we'll see some of these marker files, making it easier to understand their functions. To support learning, the small applications that I made will be available on GitHub (https://github.com) and packaged using Maven (for further details about Maven, you can visit http://maven.apache.org/). To begin the deployment process, we perform a checkout of the first application. First of all you need to install the Git client for Linux. 
To do this, use the following command:

    [root@wfly_book ~]# yum install git -y

Git is also necessary to perform the Maven installation so that it is possible to perform the packaging process of our first application. Maven can be downloaded from http://maven.apache.org/download.cgi. Once the download is complete, create a directory that will be used to perform the installation of Maven and unzip it into this directory. In my case, I chose the folder /opt as follows:

    [root@wfly_book ~]# mkdir /opt/maven

Unzip the file into the newly created directory as follows:

    [root@wfly_book maven]# tar -xzvf /root/apache-maven-3.2.1-bin.tar.gz
    [root@wfly_book maven]# cd apache-maven-3.2.1/

Run the mvn command and, if any errors are returned, we must set the environment variable M3_HOME, described as follows:

    [root@wfly_book ~]# mvn
    -bash: mvn: command not found

If the error indicated previously occurs, it is because the Maven binary was not found by the operating system; in this scenario, we must create and configure the environment variables that are responsible for this. There are two settings: populate the M3_HOME variable with the Maven installation directory and add Maven's bin directory to the PATH environment variable. Access and edit the /etc/profile file, taking advantage of the configuration that we did earlier with the Java environment variable, and see how it will look with the Maven configuration as well:

    #Java and Maven configuration
    export JAVA_HOME="/usr/java/jdk1.7.0_45"
    export M3_HOME="/opt/maven/apache-maven-3.2.1"
    export PATH="$PATH:$JAVA_HOME/bin:$M3_HOME/bin"

Save and close the file, and then run the following command to apply the settings:

    [root@wfly_book ~]# source /etc/profile

To verify the configuration performed, run the following command:

    [root@wfly_book ~]# mvn -version

Well, now that we have the necessary tools to check out the application, let's begin. First, set a directory where the application's source code will be saved, as shown in the following command:

    [root@wfly_book opt]# mkdir book_apps
    [root@wfly_book opt]# cd book_apps/

Let's check out the project using the git clone command; the repository is available at https://github.com/spolti/wfly_book.git. Perform the checkout using the following command:

    [root@wfly_book book_apps]# git clone https://github.com/spolti/wfly_book.git

Access the newly created directory using the following command:

    [root@wfly_book book_apps]# cd wfly_book/

For the first example, we will use the application called app1-v01, so access this directory and build and deploy the project by issuing the following commands. Make sure that the WildFly server is already running. The first build is always very time-consuming, because Maven will download all the necessary libraries to compile the project, the project dependencies, and the Maven libraries:

    [root@wfly_book wfly_book]# cd app1-v01/
    [root@wfly_book app1-v01]# mvn wildfly:deploy

For more details about the WildFly Maven plugin, please take a look at https://docs.jboss.org/wildfly/plugins/maven/latest/index.html. The artifact will be generated and automatically deployed on the WildFly server.
Note that a message similar to the following is displayed, stating that the application was successfully deployed:

INFO [org.jboss.as.server] (ServerService Thread Pool -- 29) JBAS018559: Deployed "app1-v01.war" (runtime-name : "app1-v01.war")

When we deploy an artifact and have not configured a virtual host or context root address, then in order to access the application we use the application name without the .war suffix, because this name becomes the context root under which the application is served. The structure to access the application is http://<your-ip-address>:<port-number>/app1-v01/. In my case, it would be http://192.168.11.109:8080/app1-v01/. See the following screenshot of the application running. This application is very simple and is made using JSP, retrieving some system properties.

Note that in the deployments directory we have a marker file that indicates that the application was successfully deployed, as follows:

[root@wfly_book deployments]# ls -l
total 20
-rw-r--r--. 1 wildfly wildfly 2544 Jan 21 07:33 app1-v01.war
-rw-r--r--. 1 wildfly wildfly 12 Jan 21 07:33 app1-v01.war.deployed
-rw-r--r--. 1 wildfly wildfly 8870 Dec 22 04:12 README.txt

To undeploy the application without having to remove the artifact, we need only remove the app1-v01.war.deployed file. This is done using the following commands:

[root@wfly_book ~]# cd $JBOSS_HOME/standalone/deployments
[root@wfly_book deployments]# rm app1-v01.war.deployed
rm: remove regular file `app1-v01.war.deployed'? y

With the previous command, you will also need to press Y to confirm removing the file. You can also use the WildFly Maven plugin for undeployment, using the following command:

[root@wfly_book deployments]# mvn wildfly:undeploy

Notice that the log reports that the application was undeployed, and also note that a new marker, .undeployed, has been added, indicating that the artifact is no longer active in the runtime server:

INFO [org.jboss.as.server] (DeploymentScanner-threads - 1) JBAS018558: Undeployed "app1-v01.war" (runtime-name: "app1-v01.war")

Then run the following command:

[root@wfly_book deployments]# ls -l
total 20
-rw-r--r--. 1 wildfly wildfly 2544 Jan 21 07:33 app1-v01.war
-rw-r--r--. 1 wildfly wildfly 12 Jan 21 09:44 app1-v01.war.undeployed
-rw-r--r--. 1 wildfly wildfly 8870 Dec 22 04:12 README.txt
[root@wfly_book deployments]#

If you undeploy using the WildFly Maven plugin, the artifact is deleted from the deployments directory.

Summary

In this article, we learned how to configure an application using a virtual host and the context root, and also how to use the logging tools now available for Java in some of our test applications, among several other very interesting settings.

Resources for Article:

Further resources on this subject: JBoss AS Perspective [Article] JBoss EAP6 Overview [Article] JBoss RichFaces 3.3 Supplemental Installation [Article]

Optimizing Magento Performance — Using HHVM
Packt
16 May 2014
5 min read
(For more resources related to this topic, see here.)

HipHop Virtual Machine

As we could write a whole book (or two) about HHVM, we will just give the key ideas here. HHVM is a virtual machine that translates any called PHP file into HHVM bytecode, in the same spirit as the Java or .NET virtual machines. HHVM transforms your PHP code into a lower-level language that is much faster to execute. Of course, the transformation time (compiling) does cost a lot of resources; therefore, HHVM is shipped with a cache mechanism similar to APC. This way, the compiled PHP files are stored and reused when the original file is requested. With HHVM, you keep the PHP flexibility and ease of writing, but you now get performance close to that of C++. Hear the words of the HHVM team at Facebook:

"HHVM (aka the HipHop Virtual Machine) is a new open-source virtual machine designed for executing programs written in PHP. HHVM uses a just-in-time compilation approach to achieve superior performance while maintaining the flexibility that PHP developers are accustomed to. To date, HHVM (and its predecessor HPHPc) has realized over a 9x increase in web request throughput and over a 5x reduction in memory consumption for Facebook compared with the Zend PHP 5.2 engine + APC. HHVM can be run as a standalone webserver (in other words, without the Apache webserver and the "modphp" extension). HHVM can also be used together with a FastCGI-based webserver, and work is in progress to make HHVM work smoothly with Apache."

If you think this is too good to be true, you're right! Indeed, HHVM has a major drawback. HHVM was, and still is, focused on the needs of Facebook. Therefore, you might have a bad time trying to run your custom-made PHP applications on it. Nevertheless, this opportunity to speed up large PHP applications has been seized by talented developers who improve it, day after day, in order to support more and more frameworks. As our interest is in Magento, I will introduce you to Daniel Sloof, a developer from the Netherlands. More interestingly, Daniel has done (and still does) amazing work at adapting HHVM for Magento. Here are the commands to install Daniel Sloof's version of HHVM for Magento:

$ sudo apt-get install git
$ git clone https://github.com/danslo/hhvm.git
$ sudo chmod +x configure_ubuntu_12.04.sh
$ sudo ./configure_ubuntu_12.04.sh
$ sudo CMAKE_PREFIX_PATH=`pwd`/.. make

If you thought that the first step was long, you will be astonished by the time required to actually build HHVM. Nevertheless, the wait is definitely worth it. The following screenshot shows how your terminal will look for the next hour or so:

Create a file named hhvm.hdf under /etc/hhvm and write the following code inside:

Server {
  Port = 80
  SourceRoot = /var/www/_MAGENTO_HOME_
}
Eval {
  Jit = true
}
Log {
  Level = Error
  UseLogFile = true
  File = /var/log/hhvm/error.log
  Access {
    * {
      File = /var/log/hhvm/access.log
      Format = %h %l %u %t \"%r\" %>s %b
    }
  }
}
VirtualHost {
  * {
    Pattern = .*
    RewriteRules {
      dirindex {
        pattern = ^/(.*)/$
        to = $1/index.php
        qsa = true
      }
    }
  }
}
StaticFile {
  FilesMatch {
    * {
      pattern = .*\.(dll|exe)
      headers {
        * = Content-Disposition: attachment
      }
    }
  }
  Extensions {
    css = text/css
    gif = image/gif
    html = text/html
    jpe = image/jpeg
    jpeg = image/jpeg
    jpg = image/jpeg
    png = image/png
    tif = image/tiff
    tiff = image/tiff
    txt = text/plain
  }
}

Now, run the following command:

$ sudo ./hhvm --mode daemon --config /etc/hhvm.hdf

The hhvm executable is under hhvm/hphp/hhvm. Is all of this worth it?
Here's the response:

ab -n 100 -c 5 http://192.168.0.105/index.php/furniture/living-room.html
Server Software:
Server Hostname: 192.168.0.105
Server Port: 80
Document Path: /index.php/furniture/living-room.html
Document Length: 35552 bytes
Concurrency Level: 5
Time taken for tests: 4.970 seconds
Requests per second: 20.12 [#/sec] (mean)
Time per request: 248.498 [ms] (mean)
Time per request: 49.700 [ms] (mean, across all concurrent requests)
Transfer rate: 707.26 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 2 12.1 0 89
Processing: 107 243 55.9 243 428
Waiting: 107 242 55.9 242 427
Total: 110 245 56.7 243 428

We literally reach a whole new world here. Indeed, our Magento instance is six times faster than after all our previous optimizations and about 20 times faster than the default Magento served by Apache. The following graph shows the performance:

Our Magento instance is now flying at lightning speed, but what are the drawbacks? Is it still as stable as before? Are all the optimizations we did so far still effective? Can we go even further? In what follows, we present a non-exhaustive list of answers:

Fancy extensions and modules may (and will) trigger HHVM incompatibilities. Magento is a relatively old piece of software and combining it with a cutting-edge technology such as HHVM can have some unpredictable (and undesirable) effects.
HHVM is so complex that fixing a Magento-related bug requires a lot of skill and dedication.
HHVM takes care of PHP, not of the cache mechanisms or the accelerators we installed before. Therefore, APC, memcached, and Varnish are still running and helping to improve our performance.
If you become addicted to performance, HHVM now supports FastCGI through Nginx and Apache. You can find out more about that at: http://www.hhvm.com/blog/1817/fastercgi-with-hhvm.

Summary

In this article, we successfully used the HipHop Virtual Machine (HHVM) from Facebook to serve Magento. This improvement optimizes our Magento performance incredibly (20 times faster); that is, the time required initially was 110 seconds, while now it is less than 5 seconds.

Resources for Article:

Further resources on this subject: Magento: Exploring Themes [article] Getting Started with Magento Development [article] Enabling your new theme in Magento [article]

Building a Simple Blog
Packt
15 May 2014
8 min read
(For more resources related to this topic, see here.) Setting up the application Every application has to be set up, so we'll begin with that. Create a folder for your project—I'll call mine simpleBlog—and inside that, create a file named package.json. If you've used Node.js before, you know that the package.json file describes the project; lists the project home page, repository, and other links; and (most importantly for us) outlines the dependencies for the application. Here's what the package.json file looks like: { "name": "simple-blog", "description": "This is a simple blog.", "version": "0.1.0", "scripts": { "start": "nodemon server.js" }, "dependencies": { "express": "3.x.x", "ejs" : "~0.8.4", "bourne" : "0.3" }, "devDependencies": { "nodemon": "latest" } } This is a pretty bare-bones package.json file, but it has all the important bits. The name, description, and version properties should be self-explanatory. The dependencies object lists all the npm packages that this project needs to run: the key is the name of the package and the value is the version. Since we're building an ExpressJS backend, we'll need the express package. The ejs package is for our server-side templates and bourne is our database (more on this one later). The devDependencies property is similar to the dependencies property, except that these packages are only required for someone working on the project. They aren't required to just use the project. For example, a build tool and its components, such as Grunt, would be development dependencies. We want to use a package called nodemon. This package is really handy when building a Node.js backend: we can have a command line that runs the nodemon server.js command in the background while we edit server.js in our editor. The nodemon package will restart the server whenever we save changes to the file. The only problem with this is that we can't actually run the nodemon server.js command on the command line, because we're going to install nodemon as a local package and not a global process. This is where the scripts property in our package.json file comes in: we can write simple script, almost like a command-line alias, to start nodemon for us. As you can see, we're creating a script called start, and it runs nodemon server.js. On the command line, we can run npm start; npm knows where to find the nodemon binary and can start it for us. So, now that we have a package.json file, we can install the dependencies we've just listed. On the command line, change to the current directory to the project directory, and run the following command: npm install You'll see that all the necessary packages will be installed. Now we're ready to begin writing the code. Starting with the server I know you're probably eager to get started with the actual Backbone code, but it makes more sense for us to start with the server code. Remember, good Backbone apps will have strong server-side components, so we can't ignore the backend completely. We'll begin by creating a server.js file in our project directory. Here's how that begins: var express = require('express'); var path = require('path'); var Bourne = require("bourne"); If you've used Node.js, you know that the require function can be used to load Node.js components (path) or npm packages (express and bourne). 
Now that we have these packages in our application, we can begin using them as follows: var app = express(); var posts = new Bourne("simpleBlogPosts.json"); var comments = new Bourne("simpleBlogComments.json"); The first variable here is app. This is our basic Express application object, which we get when we call the express function. We'll be using it a lot in this file. Next, we'll create two Bourne objects. As I said earlier, Bourne is the database we'll use in our projects in this article. This is a simple database that I wrote specifically for this article. To keep the server side as simple as possible, I wanted to use a document-oriented database system, but I wanted something serverless (for example, SQLite), so you didn't have to run both an application server and a database server. What I came up with, Bourne, is a small package that reads from and writes to a JSON file; the path to that JSON file is the parameter we pass to the constructor function. It's definitely not good for anything bigger than a small learning project, but it should be perfect for this article. In the real world, you can use one of the excellent document-oriented databases. I recommend MongoDB: it's really easy to get started with, and has a very natural API. Bourne isn't a drop-in replacement for MongoDB, but it's very similar. You can check out the simple documentation for Bourne at https://github.com/andrew8088/bourne. So, as you can see here, we need two databases: one for our blog posts and one for comments (unlike most databases, Bourne has only one table or collection per database, hence the need for two). The next step is to write a little configuration for our application: app.configure(function(){ app.use(express.json()); app.use(express.static(path.join(__dirname, 'public'))); }); This is a very minimal configuration for an Express app, but it's enough for our usage here. We're adding two layers of middleware to our application; they are "mini-programs" that the HTTP requests that come to our application will run through before getting to our custom functions (which we have yet to write). We add two layers here: the first is express.json(), which parses the JSON requests bodies that Backbone will send to the server; the second is express.static(), which will statically serve files from the path given as a parameter. This allows us to serve the client-side JavaScript files, CSS files, and images from the public folder. You'll notice that both these middleware pieces are passed to app.use(), which is the method we call to choose to use these pieces. You'll notice that we're using the path.join() method to create the path to our public assets folder, instead of just doing __dirname and 'public'. This is because Microsoft Windows requires the separating slashes to be backslashes. The path.join() method will get it right for whatever operating system the code is running on. Oh, and __dirname (two underscores at the beginning) is just a variable for the path to the directory this script is in. The next step is to create a route method: app.get('/*', function (req, res) { res.render("index.ejs"); }); In Express, we can create a route calling a method on the app that corresponds to the desired HTTP verb (get, post, put, and delete). Here, we're calling app.get() and we pass two parameters to it. The first is the route; it's the portion of the URL that will come after your domain name. 
In our case, we're using an asterisk, which is a catchall; it will match any route that begins with a forward slash (which will be all routes). This will match every GET request made to our application. If an HTTP request matches the route, then a function, which is the second parameter, will be called. This function takes two parameters; the first is the request object from the client and the second is the response object that we'll use to send our response back. These are often abbreviated to req and res, but that's just a convention, you could call them whatever you want. So, we're going to use the res.render method, which will render a server-side template. Right now, we're passing a single parameter: the path to the template file. Actually, it's only part of the path, because Express assumes by default that templates are kept in a directory named views, a convention we'll be using. Express can guess the template package to use based on the file extension; that's why we don't have to select EJS as the template engine anywhere. If we had values that we want to interpolate into our template, we would pass a JavaScript object as the second parameter. We'll come back and do this a little later. Finally, we can start up our application; I'll choose to use the port 3000: app.listen(3000); We'll be adding a lot more to our server.js file later, but this is what we'll start with. Actually, at this point, you can run npm start on the command line and open up http://localhost:3000 in a browser. You'll get an error because we haven't made the view template file yet, but you can see that our server is working.
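As a taste of what comes later, here is a small sketch that is not part of the article's code: the /api/posts route, the in-memory posts array, and the title value are placeholders invented purely for illustration (the article's real data lives in Bourne). It shows how the catchall route can pass values into index.ejs and how a JSON endpoint for Backbone might sit alongside it:

var express = require('express');
var path = require('path');

var app = express();
app.use(express.json());
app.use(express.static(path.join(__dirname, 'public')));

// Placeholder data; the finished application stores posts in Bourne instead.
var posts = [{ id: 1, title: 'Hello Backbone' }];

// A JSON endpoint a Backbone collection could fetch from.
app.get('/api/posts', function (req, res) {
  res.json(posts);
});

// The catchall route, now passing a value into the template;
// inside index.ejs it would be available as <%= title %>.
app.get('/*', function (req, res) {
  res.render('index.ejs', { title: 'Simple Blog' });
});

app.listen(3000);

This assumes an index.ejs file exists in the views directory, as described above; everything else follows the same Express 3 API the article already uses.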

Using the WebRTC Data API
Packt
09 May 2014
10 min read
(For more resources related to this topic, see here.) What is WebRTC? Web Real-Time Communication is a new (still under an active development) open framework for the Web to enable browser-to-browser applications for audio/video calling, video chat, peer-to-peer file sharing without any third-party additional software/plugins. It was open sourced by Google in 2011 and includes the fundamental building components for high-quality communications on the Web. These components, when implemented in a browser, can be accessed through a JavaScript API, enabling developers to build their own rich, media web applications. Google, Mozilla, and Opera support WebRTC and are involved in the development process. Major components of WebRTC API are as follows: getUserMedia: This allows a web browser to access the camera and microphone PeerConnection: This sets up audio/video calls DataChannels: This allow browsers to share data via peer-to-peer connection Benefits of using WebRTC in your business Reducing costs: It is a free and open source technology. You don't need to pay for complex proprietary solutions ever. IT deployment and support costs can be lowered because now you don't need to deploy special client software for your customers. Plugins?: You don't need it ever. Before now you had to use Flash, Java applets, or other tricky solutions to build interactive rich media web applications. Customers had to download and install third-party plugins to be able using your media content. You also had to keep in mind different solutions/plugins for variety of operating systems and platforms. Now you don't need to care about it. Peer-to-peer communication: In most cases communication will be established directly between your customers and you don't need to have a middle point. Easy to use: You don't need to be a professional programmer or to have a team of certified developers with some kind of specific knowledge. In a basic case, you can easily integrate WebRTC functionality into your web services/sites by using open JavaScript API or even using a ready-to-go framework. Single solution for all platforms: You don't need to develop special native version of your web service for different platforms (iOS, Android, Windows, or any other). WebRTC is developed to be a cross-platform and universal tool. WebRTC is open source and free: Community can discover new bugs and solve them effectively and quick. Moreover, it is developed and standardized by Mozilla, Google, and Opera—world software companies. Topics The article covers the following topics: Developing a WebRTC application: You will learn the basics of the technology and build a complete audio/video conference real-life web application. We will also talk on SDP (Session Description Protocol), signaling, client-server sides' interoperation, and configuring STUN and TURN servers. In Data API, you will learn how to build a peer-to-peer, cross-platform file sharing web service using the WebRTC Data API. Media streaming and screen casting introduces you into streaming prerecorded media content peer-to-peer and desktop sharing. In this article, you will build a simple application that provides such kind of functionality. Nowadays, security and authentication is very important topic and you definitely don't want to forget on it while developing your applications. So, in this article, you will learn how to make your WebRTC solutions to be secure, why authentication might be very important, and how you can implement this functionality in your products. 
Nowadays, mobile platforms are literally part of our lives, so it's important to make your interactive application work great on mobile devices as well. This article will introduce you to aspects that will help you in developing great WebRTC products, keeping mobile devices in mind.

Session Description Protocol

SDP is an important part of the WebRTC stack. It is used to negotiate session/media options when establishing a peer connection. It is a protocol intended for describing multimedia communication sessions for the purposes of session announcement, session invitation, and parameter negotiation. It does not deliver media data itself, but is used for negotiation between peers of media type, format, and all associated properties/options (resolution, encryption, codecs, and so on). The set of properties and parameters is usually called a session profile. Peers have to exchange SDP data using a signaling channel before they can establish a direct connection.

The following is an example of an SDP offer:

v=0
o=alice 2890844526 2890844526 IN IP4 host.atlanta.example.com
s=
c=IN IP4 host.atlanta.example.com
t=0 0
m=audio 49170 RTP/AVP 0 8 97
a=rtpmap:0 PCMU/8000
a=rtpmap:8 PCMA/8000
a=rtpmap:97 iLBC/8000
m=video 51372 RTP/AVP 31 32
a=rtpmap:31 H261/90000
a=rtpmap:32 MPV/90000

Here we can see that this is a video and audio session, and multiple codecs are offered. The following is an example of an SDP answer:

v=0
o=bob 2808844564 2808844564 IN IP4 host.biloxi.example.com
s=
c=IN IP4 host.biloxi.example.com
t=0 0
m=audio 49174 RTP/AVP 0
a=rtpmap:0 PCMU/8000
m=video 49170 RTP/AVP 32
a=rtpmap:32 MPV/90000

Here we can see that only one codec per media stream is accepted in reply to the offer above. You can find more SDP session examples at https://www.rfc-editor.org/rfc/rfc4317.txt. You can also find in-depth details on SDP in the appropriate RFC at http://tools.ietf.org/html/rfc4566.

Configuring and installing your own STUN server

As you already know, it is important to have access to a STUN/TURN server to work with peers located behind a NAT or firewall. In this article, developing our application, we used public STUN servers (actually, they are public Google servers accessible from other networks). Nevertheless, if you plan to build your own service, you should install your own STUN/TURN server. This way your application will not depend on a server you can't control. Today we have public STUN servers from Google; tomorrow they can be switched off. So, the right way is to have your own STUN/TURN server. In this section, you will be introduced to installing a STUN server, as the simpler case.

Several implementations of STUN servers can be found on the Internet. You can take one from http://www.stunprotocol.org. It is cross-platform and can be used under Windows, Mac OS X, or Linux. To start the STUN server, you should use the following command line:

stunserver --mode full --primaryinterface x1.x1.x1.x1 --altinterface x2.x2.x2.x2

Please pay attention that you need two IP addresses on your machine to run a STUN server. This is mandatory to make the STUN protocol work correctly. The machine can have only one physical network interface, but it should then have a network alias with an IP address different from the one used on the main network interface.

WebSocket

WebSocket is a protocol that provides full-duplex communication channels over a single TCP connection. This is a relatively young protocol, but today all major web browsers, including Chrome, Internet Explorer, Opera, Firefox, and Safari, support it.
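To give a flavour of how a WebSocket can serve as the signaling channel for the SDP exchange described above, here is a minimal browser-side sketch. It is not taken from the article's application: the signaling server URL and the JSON message format are assumptions made only for illustration, and older browsers may need the prefixed webkitRTCPeerConnection constructor:

// Open a signaling channel to a (hypothetical) signaling server.
var signaling = new WebSocket('ws://localhost:8080/signaling');

var pc = new RTCPeerConnection({
  iceServers: [{ urls: 'stun:stun.l.google.com:19302' }]
});

// Create an SDP offer and push it to the remote peer via the signaling channel.
pc.createOffer(function (offer) {
  pc.setLocalDescription(offer);
  signaling.send(JSON.stringify({ type: 'offer', sdp: offer.sdp }));
}, function (err) {
  console.error('createOffer failed:', err);
});

// Apply the SDP answer when the remote peer replies.
signaling.onmessage = function (event) {
  var message = JSON.parse(event.data);
  if (message.type === 'answer') {
    pc.setRemoteDescription(new RTCSessionDescription(message));
  }
};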
WebSocket is a replacement for long-polling to get two-way communication between browser and server. In this article, we will use WebSocket as a transport channel to develop a signaling server for our videoconference service. Using it, our peers will communicate with the signaling server. The two important benefits of WebSocket are that it supports HTTPS (secure channel) and that it can be used via a web proxy (nevertheless, some proxies can block the WebSocket protocol).

NAT traversal

WebRTC has a built-in mechanism for using NAT traversal options such as STUN and TURN servers. In this article, we used public STUN (Session Traversal Utilities for NAT) servers, but in real life you should install and configure your own STUN or TURN (Traversal Using Relay NAT) server.

In most cases, you will use a STUN server. It helps to perform NAT/firewall traversal and establish a direct connection between peers. In other words, a STUN server is utilized only during the connection-establishment stage. After the connection has been established, peers transfer media data directly between them.

In some cases (unfortunately, they are not so rare), a STUN server won't help you get through a firewall or NAT, and establishing a direct connection between peers will be impossible; for example, if both peers are behind symmetric NAT. In this case a TURN server can help you. A TURN server works as a relay between peers; all media data between peers will be transmitted through it. If your application gives a list of several STUN/TURN servers to the WebRTC API, the web browser will try to use the STUN servers first, and if the connection fails it will automatically try the TURN servers.

Preparing the environment

We can prepare the environment by performing the following steps:

Create a folder for the whole application somewhere on your disk. Let's call it my_rtc_project.
Make a directory named my_rtc_project/www; here, we will put all the client-side code (JavaScript files or HTML pages).
The signaling server's code will be placed under its separate folder, so create the directory my_rtc_project/apps/rtcserver/src for it.

Kindly note that we will use Git, which is a free and open source distributed version control system. For Linux boxes it can be installed using the default package manager. For Windows systems, I recommend installing and using this implementation: https://github.com/msysgit/msysgit. If you're using a Windows box, install msysgit and add the path to its bin folder to your PATH environment variable.

Installing Erlang

The signaling server is developed in the Erlang language. Erlang is a great choice for developing server-side applications due to the following reasons:

It is very comfortable and easy for prototyping
Its processes (actors) are very lightweight and cheap
It supports network operations with no need for any external libraries
The code is compiled to bytecode that runs on a very powerful Erlang virtual machine

Some great projects

The following projects are developed using Erlang:

Yaws and Cowboy: These are web servers
Riak and CouchDB: These are distributed databases
Cloudant: This is a database service based on a fork of CouchDB
Ejabberd: This is an XMPP instant messaging service
Zotonic: This is a content management system
RabbitMQ: This is a message bus
Wings 3D: This is a 3D modeler
GitHub: This is a web-based hosting service for software development projects that use Git.
GitHub uses Erlang for RPC proxies to Ruby processes.
WhatsApp: This is a famous mobile messenger, sold to Facebook
Call of Duty: This computer game uses Erlang on the server side
Goldman Sachs: Erlang is used in its high-frequency trading programs

A very brief history of Erlang

1982 to 1985: During this period, Ericsson starts experimenting with the programming of telecom. Existing languages do not suit the task.
1985 to 1986: During this period, Ericsson decides it must develop its own language with desirable features from Lisp, Prolog, and Parlog. The language should have built-in concurrency and error recovery.
1987: In this year, the first experiments with the new language Erlang are conducted.
1988: In this year, Erlang is first used by external users outside the lab.
1989: In this year, Ericsson works on a fast implementation of Erlang.
1990: In this year, Erlang is presented at ISS'90 and gains new users.
1991: In this year, a fast implementation of Erlang is released to users. Erlang is presented at Telecom'91, and now has a compiler and a graphical interface.
1992: In this year, Erlang gets a lot of new users. Ericsson ports Erlang to new platforms including VxWorks and Macintosh.
1993: In this year, Erlang gets distribution, which makes it possible to run a homogeneous Erlang system on heterogeneous hardware. Ericsson starts selling Erlang implementations and Erlang tools. A separate organization within Ericsson provides support.

Erlang is supported on many platforms. You can download and install it using the main website: http://www.erlang.org.

Summary

In this article, we have discussed the WebRTC technology and the WebRTC API in detail.

Resources for Article:

Further resources on this subject: Applying WebRTC for Education and E-learning [Article] Spring Roo 1.1: Working with Roo-generated Web Applications [Article] WebSphere MQ Sample Programs [Article]

Best Practices for Modern Web Applications
Packt
22 Apr 2014
9 min read
(For more resources related to this topic, see here.) The importance of search engine optimization Every day, web crawlers scrape the Internet for updates on new content to update their associated search engines. People's immediate reaction to finding web pages is to load a query on a search engine and select the first few results. Search engine optimization is a set of practices used to maintain and improve search result ranks over time. Item 1 – using keywords effectively In order to provide information to web crawlers, websites provide keywords in their HTML meta tags and content. The optimal procedure to attain effective keyword usage is to: Come up with a set of keywords that are pertinent to your topic Research common search keywords related to your website Take an intersection of these two sets of keywords and preemptively use them across the website Once this final set of keywords is determined, it is important to spread them across your website's content whenever possible. For instance, a ski resort in California should ensure that their website includes terms such as California, skiing, snowboarding, and rentals. These are all terms that individuals would look up via a search engine when they are interested in a weekend at a ski resort. Contrary to popular belief, the keywords meta tag does not create any value for site owners as many search engines consider it a deprecated index for search relevance. The reasoning behind this goes back many years to when many websites would clutter their keywords meta tag with irrelevant filler words to bait users into visiting their sites. Today, many of the top search engines have decided that content is a much more powerful indicator for search relevance and have concentrated on this instead. However, other meta tags, such as description, are still being used for displaying website content on search rankings. These should be brief but powerful passages to pull in users from the search page to your website. Item 2 – header tags are powerful Header tags (also known as h-tags) are often used by web crawlers to determine the main topic of a given web page or section. It is often recommended to use only one set of h1 tags to identify the primary purpose of the web page, and any number of the other header tags (h2, h3, and so on) to identify section headings. Item 3 – make sure to have alternative attributes for images Despite the recent advance in image recognition technology, web crawlers do not possess the resources necessary for parsing images for content through the Internet today. As a result, it is advisable to leave an alt attribute for search engines to parse while they scrape your web page. For instance, let us suppose you were the webmaster of Seattle Water Sanitation Plant and wished to upload the following image to your website: Since web crawlers make use of the alt tag while sifting through images, you would ideally upload the preceding image using the following code: <img src = "flow_chart.png" alt="Seattle Water Sanitation Process Flow Chart" /> This will leave the content in the form of a keyword or phrase that can help contribute to the relevancy of your web page on search results. Item 4 – enforcing clean URLs While creating web pages, you'll often find the need to identify them with a URL ID. The simplest way often is to use a number or symbol that maps to your data for simple information retrieval. The problem with this is that a number or symbol does not help to identify the content for web crawlers or your end users. 
The solution to this is to use clean URLs. By adding a topic name or phrase into the URL, you give web crawlers more keywords to index. Additionally, end users who receive the link will be given the opportunity to evaluate the content with more information, since they know the topic discussed in the web page. A simple way to integrate clean URLs while retaining the number or symbol identifier is to append a readable slug, which describes the topic, to the end of the clean URL and after the identifier. Then, apply a regular expression to parse out the identifier for your own use (a short sketch of this appears at the end of this article); for instance, take a look at the following sample URL:

http://www.example.com/post/24/golden-dragon-review

The number 24, when parsed out, helps your server easily identify the blog post in question. The slug, golden-dragon-review, communicates the topic at hand to both web crawlers and users. While creating the slug, the best practice is often to remove all non-alphanumeric characters and replace all spaces with dashes. Contractions such as can't, don't, or won't can be replaced by cant, dont, or wont because search engines can easily infer their intended meaning. It is also important to realize that spaces should not be replaced by underscores, as these are not interpreted appropriately by web crawlers.

Item 5 – backlink whenever safe and possible

Search rankings are influenced by your website's clout on websites that search engines deem trustworthy. For instance, due to the restrictive access of .edu or .gov domains, websites that use these domains are deemed trustworthy and given a higher level of authority when it comes down to search rankings. This means that any websites backlinked on trustworthy websites are seen as having a higher value as a result. Thus, it is important to consider backlinking on relevant websites where users would actively be interested in the content. If you choose to backlink irrelevantly, there are often consequences that you'll face, as this practice can often be caught automatically by web crawlers comparing the keywords between your link and the backlink host.

Item 6 – handling HTTP status codes properly

HTTP status codes help the client and server communicate the status of page requests in a clean and consistent manner. The following list reviews the most important status codes and their effect on SEO:

200 (Success): This loads the page and the content contributes to SEO.
301 (Permanent redirect): This redirects the page and the redirected content contributes to SEO.
302 (Temporary redirect): This redirects the page, but the redirected content doesn't contribute to SEO.
404 (Client error, not found): This loads the page, but the content does not contribute to SEO.
500 (Server error): This will not load the page and there is no content to contribute to SEO.

In an ideal world, all pages would return the 200 status code. Unfortunately, URLs get misspelled, servers throw exceptions, and old pages get moved, which leads to the need for other status codes. Thus, it is important that each situation be handled to maximize communication to both web crawlers and users and minimize damage to one's search ranking. When a URL gets misspelled, it is important to provide a 301 redirect to a close match or another popular web page. This can be accomplished by using a clean URL and parsing out an identifier, regardless of the slug that follows it.
This way, there exists content that contributes directly to the search ranking instead of just leaving a 404 page. Server errors should be handled as soon as possible. When a page does not load, it harms the experience for both users and web crawlers, and over an extended period of time it can expire that page's rank. Lastly, 404 pages should be developed with your users in mind. When you choose not to redirect them to the most relevant link, it is important to either pass in suggested web pages or a search menu to keep them engaged with your content.

The connect-rest-test Grunt plugin can be a healthy addition to any software project to test the status codes and responses from a RESTful API. You can find it at https://www.npmjs.org/package/connect-rest-test. Alternatively, while testing pages outside of your RESTful API, you may be interested in considering grunt-http-verify to ensure that status codes are returned properly. You can find it at https://www.npmjs.org/package/grunt-http-verify.

Item 7 – making use of your robots.txt and site map files

Often, there exist directories in a website that are available to the public but should not be indexed by a search engine. The robots.txt file, when placed in your website's root, helps to define exclusion rules for web crawling and prevent a user-defined set of search engines from entering certain directories. For instance, the following example disallows all search engines that choose to parse your robots.txt file from visiting the music directory on a website:

User-agent: *
Disallow: /music/

While writing navigation tools with dynamic content such as JavaScript libraries or Adobe Flash widgets, it's important to understand that web crawlers have limited capability in scraping these. Site maps help to define the relational mapping between web pages when crawlers cannot heuristically infer it themselves. While the robots.txt file defines a set of search engine exclusion rules, the sitemap.xml file, also located in a website's root, helps to define a set of search engine inclusion rules. The following XML snippet is a brief example of a site map that defines these attributes:

<?xml version="1.0" encoding="utf-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://example.com/</loc>
    <lastmod>2014-11-24</lastmod>
    <changefreq>always</changefreq>
    <priority>0.8</priority>
  </url>
  <url>
    <loc>http://example.com/post/24/golden-dragon-review</loc>
    <lastmod>2014-07-13</lastmod>
    <changefreq>never</changefreq>
    <priority>0.5</priority>
  </url>
</urlset>

The attributes mentioned in the preceding code are explained as follows:

loc: This stands for the URL location to be crawled.
lastmod: This indicates the date on which the web page was last modified.
changefreq: This indicates how frequently the page is modified and, as a result, how often the crawler should visit it.
priority: This indicates the web page's priority in comparison to the other web pages.

Using Grunt to reinforce SEO practices

With the rising popularity of client-side web applications, SEO practices are often not met when page links do not exist without JavaScript. Certain Grunt plugins provide a workaround for this by loading the web pages, waiting for an amount of time to allow the dynamic content to load, and taking an HTML snapshot. These snapshots are then provided to web crawlers for search engine purposes, and the user-facing dynamic web applications are excluded from scraping completely.
Some examples of Grunt plugins that accomplish this need are: grunt-html-snapshots (https://www.npmjs.org/package/grunt-html-snapshots) grunt-ajax-seo (https://www.npmjs.org/package/grunt-ajax-seo)
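To close this article, here is the short sketch referred to back in Item 4: an Express-style route that parses the numeric identifier out of a clean URL and applies the 301 and 404 advice from Item 6. It is not part of the original article; the route pattern, the in-memory posts object, and the post fields are assumptions used purely for illustration:

var express = require('express');
var app = express();

// Stand-in data store; a real application would query its database here.
var posts = { 24: { id: 24, slug: 'golden-dragon-review', title: 'Golden Dragon' } };

// Matches URLs such as /post/24/golden-dragon-review; only the numeric
// identifier is captured, and the slug is ignored by the server.
app.get(/^\/post\/(\d+)(?:\/([a-z0-9-]*))?$/, function (req, res) {
  var post = posts[parseInt(req.params[0], 10)];
  if (!post) {
    // Item 6: a helpful 404 keeps users engaged instead of dead-ending them.
    return res.status(404).send('Not found. Try /post/24/golden-dragon-review');
  }
  var canonical = '/post/' + post.id + '/' + post.slug;
  if (req.path !== canonical) {
    // Item 6: a 301 lets the canonical clean URL collect the ranking value.
    return res.redirect(301, canonical);
  }
  res.send('<h1>' + post.title + '</h1>');
});

app.listen(3000);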

Creating a real-time widget
Packt
22 Apr 2014
11 min read
(For more resources related to this topic, see here.) The configuration options and well thought out methods of socket.io make for a highly versatile library. Let's explore the dexterity of socket.io by creating a real-time widget that can be placed on any website and instantly interfacing it with a remote Socket.IO server. We're doing this to begin providing a constantly updated total of all users currently on the site. We'll name it the live online counter (loc for short). Our widget is for public consumption and should require only basic knowledge, so we want a very simple interface. Loading our widget through a script tag and then initializing the widget with a prefabricated init method would be ideal (this allows us to predefine properties before initialization if necessary). Getting ready We'll need to create a new folder with some new files: widget_server.js, widget_client.js, server.js, and index.html. How to do it... Let's create the index.html file to define the kind of interface we want as follows: <html> <head> <style> #_loc {color:blue;} /* widget customization */ </style> </head> <body> <h1> My Web Page </h1> <script src = http://localhost:8081 > </script> <script> locWidget.init(); </script> </body> </html> The localhost:8081 domain is where we'll be serving a concatenated script of both the client-side socket.io code and our own widget code. By default, Socket.IO hosts its client-side library over HTTP while simultaneously providing a WebSocket server at the same address, in this case localhost:8081. See the There's more… section for tips on how to configure this behavior. Let's create our widget code, saving it as widget_client.js: ;(function() { window.locWidget = { style : 'position:absolute;bottom:0;right:0;font-size:3em', init : function () { var socket = io.connect('http://localhost:8081'), style = this.style; socket.on('connect', function () { var head = document.head, body = document.body, loc = document.getElementById('_lo_count'); if (!loc) { head.innerHTML += '<style>#_loc{' + style + '}</style>'; loc = document.createElement('div'); loc.id = '_loc'; loc.innerHTML = '<span id=_lo_count></span>'; body.appendChild(loc); } socket.on('total', function (total) { loc.innerHTML = total; }); }); } } }()); We need to test our widget from multiple domains. 
We'll just implement a quick HTTP server (server.js) to serve index.html so we can access it via http://127.0.0.1:8080 and http://localhost:8080, as shown in the following code:

var http = require('http');
var fs = require('fs');

var clientHtml = fs.readFileSync('index.html');

http.createServer(function (request, response) {
  response.writeHead(200, {'Content-type' : 'text/html'});
  response.end(clientHtml);
}).listen(8080);

Finally, for our widget's server, we write the following code in the widget_server.js file:

var io = require('socket.io')(),
    totals = {},
    clientScript = Buffer.concat([
      require('socket.io/node_modules/socket.io-client').source,
      require('fs').readFileSync('widget_client.js')
    ]);

io.static(false);
io.attach(require('http').createServer(function (req, res) {
  res.setHeader('Content-Type', 'text/javascript; charset=utf-8');
  res.end(clientScript);
}).listen(8081));

io.on('connection', function (socket) {
  var origin = socket.request.socket.domain || 'local';
  totals[origin] = totals[origin] || 0;
  totals[origin] += 1;
  socket.join(origin);
  io.sockets.to(origin).emit('total', totals[origin]);
  socket.on('disconnect', function () {
    totals[origin] -= 1;
    io.sockets.to(origin).emit('total', totals[origin]);
  });
});

To test it, we need two terminals; in the first one, we execute the following command:

node widget_server.js

In the other terminal, we execute the following command:

node server.js

We point our browser to http://localhost:8080 and the counter shows one connected user. By opening a new tab or window and navigating to http://localhost:8080 again, we will see the counter rise by one. If we close either window, it will drop by one. We can also navigate to http://127.0.0.1:8080 to emulate a separate origin. The counter at this address is independent from the counter at http://localhost:8080.

How it works...

The widget_server.js file is the powerhouse of this recipe. We start by using require with socket.io and calling it (note the empty parentheses following require); this becomes our io instance. Under this is our totals object; we'll be using it later to store the total number of connected clients for each domain. Next, we create our clientScript variable; it contains both the socket.io client code and our widget_client.js code. We'll be serving this to all HTTP requests. Both scripts are stored as buffers, not strings. We could simply concatenate them with the plus (+) operator; however, this would force a string conversion first, so we use Buffer.concat instead. Anything that is passed to res.write or res.end is converted to a Buffer before being sent across the wire. Using the Buffer.concat method means our data stays in buffer format the whole way through, instead of being a buffer, then a string, then a buffer again.
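As a side note on the buffer handling above, the following tiny standalone sketch is not part of the recipe (the sample strings are made up); it just shows why Buffer.concat keeps everything as a Buffer, while the plus operator forces a string conversion:

// Buffer.from is the modern spelling; older Node versions (and this recipe)
// use the Buffer() constructor instead.
var a = Buffer.from('console.log("socket.io client");\n');
var b = Buffer.from('console.log("widget code");\n');

var joined = Buffer.concat([a, b]);    // still a Buffer
console.log(Buffer.isBuffer(joined));  // true

var coerced = a + b;                   // + converts both buffers to strings
console.log(typeof coerced);           // 'string'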
All requests that use the http:// protocol will be handled by the server we pass to io.attach, and all ws:// protocols will be handled by socket.io (whether or not the browser supports the ws:// protocol). We're only using the http module once, so we require it within the io.attach call; we use it's createServer method to serve all requests with our clientScript variable. Now, the stage is set for the actual socket action. We wait for a connection by listening for the connection event on io.sockets. Inside the event handler, we use a few as yet undiscussed socket.io qualities. WebSocket is formed when a client initiates a handshake request over HTTP and the server responds affirmatively. We can access the original request object with socket.request. The request object itself has a socket (this is the underlying HTTP socket, not our socket.io socket; we can access this via socket.request.socket. The socket contains the domain a client request came from. We load socket.request.socket.domain into our origin object unless it's null or undefined, in which case we say the origin is 'local'. We extract (and simplify) the origin object because it allows us to distinguish between websites that use a widget, enabling site-specific counts. To keep count, we use our totals object and add a property for every new origin object with an initial value of 0. On each connection, we add 1 to totals[origin] while listening to our socket; for the disconnect event, we subtract 1 from totals[origin]. If these values were exclusively for server use, our solution would be complete. However, we need a way to communicate the total connections to the client, but on a site by site basis. Socket.IO has had a handy new feature since Socket.IO version 0.7 that allows us to group sockets into rooms by using the socket.join method. We cause each socket to join a room named after its origin, then we use the io.sockets.to(origin).emit method to instruct socket.io to only emit to sockets that belongs to the originating sites room. In both the io.sockets connection and socket disconnect events, we emit our specific totals to corresponding sockets to update each client with the total number of connections to the site the user is on. The widget_client.js file simply creates a div element called #_loc and updates it with any new totals it receives from widget_server.js. There's more... Let's look at how our app could be made more scalable, as well as looking at another use for WebSockets. Preparing for scalability If we were to serve thousands of websites, we would need scalable memory storage, and Redis would be a perfect fit. It operates in memory but also allows us to scale across multiple servers. We'll need Redis installed along with the Redis module. We'll alter our totals variable so it contains a Redis client instead of a JavaScript object: var io = require('socket.io')(), totals = require('redis').createClient(), //other variables Now, we modify our connection event handler as shown in the following code: io.sockets.on('connection', function (socket) { var origin = (socket.handshake.xdomain) ? url.parse(socket.handshake.headers.origin).hostname : 'local'; socket.join(origin); totals.incr(origin, function (err, total) { io.sockets.to(origin).emit('total', total); }); socket.on('disconnect', function () { totals.decr(origin, function (err, total) { io.sockets.to(origin).emit('total', total); }); }); }); Instead of adding 1 to totals[origin], we use the Redis INCR command to increment a Redis key named after origin. 
Redis automatically creates the key if it doesn't exist. When a client disconnects, we do the reverse and readjust totals using DECR. WebSockets as a development tool When developing a website, we often change something small in our editor, upload our file (if necessary), refresh the browser, and wait to see the results. What if the browser would refresh automatically whenever we saved any file relevant to our site? We can achieve this with the fs.watch method and WebSockets. The fs.watch method monitors a directory, executing a callback whenever a change to any files in the folder occurs (but it doesn't monitor subfolders). The fs.watch method is dependent on the operating system. To date, fs.watch has also been historically buggy (mostly under Mac OS X). Therefore, until further advancements, fs.watch is suited purely to development environments rather than production (you can monitor how fs.watch is doing by viewing the open and closed issues at https://github.com/joyent/node/search?q=fs.watch&ref=cmdform&state=open&type=Issues). Our development tool could be used alongside any framework, from PHP to static files. For the server counterpart of our tool, we'll configure watcher.js: var io = require('socket.io')(), fs = require('fs'), totals = {}, watcher = function () { var socket = io.connect('ws://localhost:8081'); socket.on('update', function () { location.reload(); }); }, clientScript = Buffer.concat([ require('socket.io/node_modules/socket.io-client').source, Buffer(';(' + watcher + '());') ]); io.static(false); io.attach(require('http').createServer(function(req, res){ res.setHeader('Content-Type', 'text/javascript; charset=utf-8'); res.end(clientScript); }).listen(8081)); fs.watch('content', function (e, f) { if (f[0] !== '.') { io.sockets.emit('update'); } }); Most of this code is familiar. We make a socket.io server (on a different port to avoid clashing), generate a concatenated socket.io.js plus client-side watcher code file, and deliver it via our attached server. Since this is a quick tool for our own development uses, our client-side code is written as a normal JavaScript function (our watcher variable), converted to a string while wrapping it in self-calling function code, and then changed to Buffer so it's compatible with Buffer.concat. The last piece of code calls the fs.watch method where the callback receives the event name (e) and the filename (f). We check that the filename isn't a hidden dotfile. During a save event, some filesystems or editors will change the hidden files in the directory, thus triggering multiple callbacks and sending several messages at high speed, which can cause issues for the browser. To use it, we simply place it as a script within every page that is served (probably using server-side templating). However, for demonstration purposes, we simply place the following code into content/index.html: <script src = http://localhost:8081/socket.io/watcher.js > </script> Once we fire up server.js and watcher.js, we can point our browser to http://localhost:8080 and see the familiar excited Yay!. Any changes we make and save (either to index.html, styles.css, script.js, or the addition of new files) will be almost instantly reflected in the browser. The first change we can make is to get rid of the alert box in the script.js file so that the changes can be seen fluidly. Summary We saw how we could create a real-time widget in this article. 
We also used some third-party modules to explore some of the potential of the powerful combination of Node and WebSockets. Resources for Article: Further resources on this subject: Understanding and Developing Node Modules [Article] So, what is Node.js? [Article] Setting up Node [Article]

Bootstrap 3 and other applications
Packt
21 Apr 2014
10 min read
(For more resources related to this topic, see here.) Bootstrap 3 Bootstrap 3, formerly known as Twitter's Bootstrap, is a CSS and JavaScript framework for building application frontends. The third version of Bootstrap has important changes over the earlier versions of the framework. Bootstrap 3 is not compatible with the earlier versions. Bootstrap 3 can be used to build great frontends. You can download the complete framework, including CSS and JavaScript, and start using it right away. Bootstrap also has a grid. The grid of Bootstrap is mobile-first by default and has 12 columns. In fact, Bootstrap defines four grids: the extra-small grid up to 768 pixels (mobile phones), the small grid between 768 and 992 pixels (tablets), the medium grid between 992 and 1200 pixels (desktop), and finally, the large grid of 1200 pixels and above for large desktops. The grid, all other CSS components, and JavaScript plugins are described and well documented at http://getbootstrap.com/. Bootstrap's default theme looks like the following screenshot: Example of a layout built with Bootstrap 3 The time when all Bootstrap websites looked quite similar is far behind us now. Bootstrap will give you all the freedom you need to create innovative designs. There is much more to tell about Bootstrap, but for now, let's get back to Less. Working with Bootstrap's Less files All the CSS code of Bootstrap is written in Less. You can download Bootstrap's Less files and recompile your own version of the CSS. The Less files can be used to customize, extend, and reuse Bootstrap's code. In the following sections, you will learn how to do this. To download the Less files, follow the links at http://getbootstrap.com/ to Bootstrap's GitHub pages at https://github.com/twbs/bootstrap. On this page, choose Download Zip on the right-hand side column. Building a Bootstrap project with Grunt After downloading the files mentioned earlier, you can build a Bootstrap project with Grunt. Grunt is a JavaScript task runner; it can be used for the automation of your processes. Grunt helps you when performing repetitive tasks such as minifying, compiling, unit testing, and linting your code. Grunt runs on node.js and uses npm, which you saw while installing the Less compiler. Node.js is a standalone JavaScript interpreter built on Google's V8 JavaScript runtime, as used in Chrome. Node.js can be used for easily building fast, scalable network applications. When you unzip the files from the downloaded file, you will find Gruntfile.js and package.json among others. The package.json file contains the metadata for projects published as npm modules. The Gruntfile.js file is used to configure or define tasks and load Grunt plugins. The Bootstrap Grunt configuration is a great example to show you how to set up automation testing for projects containing HTML, Less (CSS), and JavaScript. The parts that are interesting for you as a Less developer are mentioned in the following sections. In package.json file, you will find that Bootstrap compiles its Less files with grunt-contrib-less. At the time of writing this article, the grunt-contrib-less plugin compiles Less with less.js Version 1.7. In contrast to Recess (another JavaScript build tool previously used by Bootstrap), grunt-contrib-less also supports source maps. Apart from grunt-contrib-less, Bootstrap also uses grunt-contrib-csslint to check the compiled CSS for syntax errors. 
The grunt-contrib-csslint plugin also helps improve browser compatibility, performance, maintainability, and accessibility. The plugin's rules are based on the principles of object-oriented CSS (http://www.slideshare.net/stubbornella/object-oriented-css). You can find more information by visiting https://github.com/stubbornella/csslint/wiki/Rules.

Bootstrap makes heavy use of Less variables, which can be set by the customizer. If you study the source of Gruntfile.js, you will also find a reference to the BsLessdocParser Grunt task. This task is used to build Bootstrap's customizer dynamically, based on the Less variables used by Bootstrap. Although the process of parsing Less variables to build, for instance, documentation is interesting in itself, it is not discussed here further.

This section ends with the part of Gruntfile.js that does the Less compiling. The following excerpt from Gruntfile.js should give you an impression of how this code looks:

less: {
  compileCore: {
    options: {
      strictMath: true,
      sourceMap: true,
      outputSourceFiles: true,
      sourceMapURL: '<%= pkg.name %>.css.map',
      sourceMapFilename: 'dist/css/<%= pkg.name %>.css.map'
    },
    files: {
      'dist/css/<%= pkg.name %>.css': 'less/bootstrap.less'
    }
  }
}

Last but not least, let's have a look at the basic steps to run Grunt from the command line and build Bootstrap. Grunt will be installed with npm; npm then checks Bootstrap's package.json file and automatically installs the necessary local dependencies listed there. To build Bootstrap with Grunt, you will have to enter the following commands on the command line:

> npm install -g grunt-cli
> cd /path/to/extracted/files/bootstrap
> npm install

After this, you can compile the CSS and JavaScript by running the following command:

> grunt dist

This will compile your files into the /dist directory. The > grunt test command will also run the built-in tests.

Compiling your Less files

Although you can build Bootstrap with Grunt, you don't have to use Grunt. You will find the Less files in a separate directory called /less inside the root /bootstrap directory. The main project file is bootstrap.less; the other files will be explained in the next section. For testing purposes, you can include bootstrap.less together with less.js in your HTML as follows:

<link rel="stylesheet/less" type="text/css" href="less/bootstrap.less" />
<script type="text/javascript">less = { env: 'development' };</script>
<script src="less.js" type="text/javascript"></script>

Of course, you can compile this file server side too, as follows:

lessc bootstrap.less > bootstrap.css

Dive into Bootstrap's Less files

Now it's time to look at Bootstrap's Less files in more detail. The /less directory contains a long list of files. You will recognize some files by their names. You have seen files such as variables.less, mixins.less, and normalize.less earlier. Open bootstrap.less to see how the other files are organized. The comments inside bootstrap.less tell you that the Less files are organized by functionality, as shown in the following code snippet:

// Core variables and mixins
// Reset
// Core CSS
// Components

Although Bootstrap is strongly CSS-based, some of the components don't work without the related JavaScript plugins. The navbar component is an example of this. Bootstrap's plugins require jQuery. You can't use the newest 2.x version of jQuery because that version doesn't support Internet Explorer 8. To compile your own version of Bootstrap, you have to change the variables defined in variables.less.
Because Less applies the last-declaration-wins and lazy-loading rules, it is easy to redeclare some of these variables.

Creating a custom button with Less

By default, Bootstrap defines seven different buttons, as shown in the following screenshot:

The seven different button styles of Bootstrap 3

Please take a look at the following HTML structure of Bootstrap's buttons before you start writing your Less code:

<!-- Standard button -->
<button type="button" class="btn btn-default">Default</button>

A button has two classes. Globally, the first .btn class only provides layout styles, and the second .btn-default class adds the colors. In this example, you will only change the colors, and the button's layout will be kept intact.

Open buttons.less in your text editor. In this file, you will find the following Less code for the different buttons:

// Alternate buttons
// --------------------------------------------------

.btn-default {
  .button-variant(@btn-default-color; @btn-default-bg; @btn-default-border);
}

The preceding code makes it clear that you can use the .button-variant() mixin to create your customized buttons. For instance, to define a custom button, you can use the following Less code:

// Customized colored button
// --------------------------------------------------

.btn-colored {
  .button-variant(blue;red;green);
}

In a case like this, where you want to extend Bootstrap with your customized button, add your code to a new file and call this file custom.less. Appending @import "custom.less"; to the list of components inside bootstrap.less will work well. The disadvantage of doing this is that you will have to change bootstrap.less again when updating Bootstrap; so, alternatively, you could create a file such as custombootstrap.less, which contains the following code:

@import "bootstrap.less";
@import "custom.less";

The previous step extends Bootstrap with a custom button; alternatively, you could also change the colors of the default button by redeclaring its variables. To do this, create a new file, custombootstrap.less again, and add the following code into it:

@import "bootstrap.less";

//== Buttons
//
//## For each of Bootstrap's buttons, define text, background, and border color.

@btn-default-color: blue;
@btn-default-bg: red;
@btn-default-border: green;

In some situations, you will, for instance, need to use the button styles without everything else of Bootstrap. In these situations, you can use the reference keyword with the @import directive. You can use the following Less code to create a Bootstrap button for your project:

@import (reference) "bootstrap.less";

.btn:extend(.btn){};

.btn-colored {
  .button-variant(blue;red;green);
}

You can see the result of the preceding code by visiting http://localhost/index.html in your browser. Notice that depending on the version of less.js you use, you may find some unexpected classes in the compiled output. Media queries or extended classes sometimes break the referencing in older versions of less.js.

Use CSS source maps for debugging

When working with large Less code bases, finding the original source of a style can become complex when viewing your results in the browser. Since Version 1.5, Less offers support for CSS source maps. CSS source maps enable developer tools to map rules back to their location in the original source files. This also works for compressed files. The latest versions of Google Chrome's Developer Tools offer support for these source files.
Currently, CSS source map debugging won't work for client-side compiling as used for the examples in this book. The server-side lessc compiler can generate useful CSS source maps. After installing the lessc compiler, you can run:

> lessc --source-map=styles.css.map styles.less > styles.css

The preceding command will generate two files: styles.css.map and styles.css. The last line of styles.css now contains an extra comment, which refers to the source map:

/*# sourceMappingURL=styles.css.map */

In your HTML, you only have to include styles.css as you used to:

<link href="styles.css" rel="stylesheet">

When using CSS source maps as described earlier and inspecting your HTML with Google Chrome's Developer Tools, you will see something like the following screenshot:

Inspect source with Google Chrome's Developer Tools and source maps

As you can see, styles now have a reference to their original Less file, such as grid.less, including the line number, which helps you in the process of debugging. The styles.css.map file should be in the same directory as the styles.css file. You don't have to include your Less files in this directory.

Summary

This article has covered the concept of Bootstrap, how to use Bootstrap's Less files, and how the files can be modified to be used according to your convenience.

Resources for Article:

Further resources on this subject:

Getting Started with Bootstrap [Article]
Bootstrap 3.0 is Mobile First [Article]
Downloading and setting up Bootstrap [Article]

Creating a Responsive Magento Theme with Bootstrap 3

Packt
21 Apr 2014
13 min read
Save for later
In this article, by Andrea Saccà, the author of Mastering Magento Theme Design, we will learn how to integrate the Bootstrap 3 framework and how to develop the main theme blocks. The following topics will be covered in this article:

An introduction to Bootstrap 3
Downloading Bootstrap (the current Version 3.1.1)
Downloading and including jQuery
Integrating the files into the theme
Defining the main layout design template

(For more resources related to this topic, see here.)

An introduction to Bootstrap 3

Bootstrap is a sleek, intuitive, powerful, mobile-first frontend framework that enables faster and easier web development. Bootstrap 3 is the most popular frontend framework used to create mobile-first websites. It includes a free collection of buttons, CSS components, and JavaScript to create websites or web applications; it was created by the Twitter team.

Downloading Bootstrap (the current Version 3.1.1)

First, you need to download the latest version of Bootstrap. The current version is 3.1.1. You can download the framework from http://getbootstrap.com/. The fastest way to download Bootstrap 3 is to download the precompiled and minified versions of the CSS, JavaScript, and fonts. So, click on the Download Bootstrap button and unzip the file you downloaded.

Once the archive is unzipped, we need to take only the minified version of the files, that is, bootstrap.min.css from css, bootstrap.min.js from js, and all the files from fonts. For development, you can use bootstrap.css so that you can inspect the code and learn, and then switch to bootstrap.min.css when you go live. Copy all the selected files (the CSS files inside the css folder, the .js files inside the js folder, and the font files inside the fonts folder) into the theme skin folder at skin/frontend/bookstore/default.

Downloading and including jQuery

Bootstrap is dependent on jQuery, so we have to download and include it before including bootstrap.min.js. Download jQuery from http://jquery.com/download/. We will use the compressed production Version 1.10.2.

Once you download jQuery, rename the file as jquery.min.js and copy it into the js skin folder at skin/frontend/bookstore/default/js/. In the same folder, also create the jquery.scripts.js file, where we will insert our custom scripts.

Magento uses Prototype as the main JavaScript library. To make jQuery work correctly without conflicts, you need to insert the noConflict code in the jquery.scripts.js file, as shown in the following code:

// This is important!
jQuery.noConflict();

jQuery(document).ready(function() {
    // Insert your scripts here
});

To recap, the JavaScript files are jquery.min.js, bootstrap.min.js, and jquery.scripts.js, and the CSS files are bootstrap.min.css, styles.css, and print.css, all placed in the theme's skin folder together with Bootstrap's font files.

Integrating the files into the theme

Now that we have all the files, we will see how to integrate them into the theme. To declare the new JavaScript and CSS files, we have to insert the actions in the local.xml file located at app/design/frontend/bookstore/default/layout. In particular, the file declaration needs to be done in the default handle to make it accessible by the whole theme. The default handle is defined by the following tags:

<default>
. . .
</default>

The action to insert the JavaScript and CSS files must be placed inside the reference head block.
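Before wiring these files into the layout XML, it can help to see how a custom script would sit inside the noConflict-safe wrapper shown above. The following is only an illustrative sketch of jquery.scripts.js; the tooltip initializer and its selector are assumptions used for the example, not part of the theme:

// jquery.scripts.js (illustrative sketch)
// This is important!
jQuery.noConflict(); // give the $ alias back to Prototype

jQuery(document).ready(function($) {
    // Inside this callback, $ refers to jQuery again.
    // Assumed example: initialize Bootstrap tooltips on elements
    // marked with data-toggle="tooltip".
    $('[data-toggle="tooltip"]').tooltip();
});

With the wrapper in place, the next step is to declare all of these files in local.xml.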
So, open the local.xml file and first create the following block that will define the reference:

<reference name="head">
…
</reference>

Declaring the .js files in local.xml

The action tag used to declare a new .js file located in the skin folder is as follows:

<action method="addItem">
    <type>skin_js</type><name>js/myjavascript.js</name>
</action>

In our skin folder, we copied the following three .js files: jquery.min.js, jquery.scripts.js, and bootstrap.min.js. Let's declare them as follows:

<action method="addItem">
    <type>skin_js</type><name>js/jquery.min.js</name>
</action>
<action method="addItem">
    <type>skin_js</type><name>js/bootstrap.min.js</name>
</action>
<action method="addItem">
    <type>skin_js</type><name>js/jquery.scripts.js</name>
</action>

Declaring the CSS files in local.xml

The action tag used to declare a new CSS file located in the skin folder is as follows:

<action method="addItem">
    <type>skin_css</type><name>css/mycss.css</name>
</action>

In our skin folder, we copied the following three .css files: bootstrap.min.css, styles.css, and print.css. So let's declare these files as follows:

<action method="addItem">
    <type>skin_css</type><name>css/bootstrap.min.css</name>
</action>
<action method="addItem">
    <type>skin_css</type><name>css/styles.css</name>
</action>
<action method="addItem">
    <type>skin_css</type><name>css/print.css</name>
</action>

Repeat this action for any additional CSS files. All the JavaScript and CSS files that you insert into the local.xml file will go after the files declared in the base theme.

Removing and adding the styles.css file

By default, the base theme includes a CSS file called styles.css, which is hierarchically placed before bootstrap.min.css. One of the best practices to overwrite the Bootstrap CSS classes in Magento is to remove the default styles.css declared by the base theme and declare it again after Bootstrap's CSS files. Thus, the styles.css file loads after Bootstrap, and all the classes defined in it will overwrite those in bootstrap.min.css.
To do this, we need to remove the styles.css file by adding the following action tag in the XML, just before all the CSS declarations we have already made:

<action method="removeItem">
    <type>skin_css</type>
    <name>css/styles.css</name>
</action>

Hence, we removed the styles.css file and added it again just after adding Bootstrap's CSS file (bootstrap.min.css):

<action method="addItem">
    <type>skin_css</type>
    <stylesheet>css/styles.css</stylesheet>
</action>

If it seems a little confusing, the following is a quick view of the CSS declaration:

<!-- Removing the styles.css declared in the base theme -->
<action method="removeItem">
    <type>skin_css</type>
    <name>css/styles.css</name>
</action>

<!-- Adding Bootstrap Css -->
<action method="addItem">
    <type>skin_css</type>
    <stylesheet>css/bootstrap.min.css</stylesheet>
</action>

<!-- Adding the styles.css again -->
<action method="addItem">
    <type>skin_css</type>
    <stylesheet>css/styles.css</stylesheet>
</action>

Adding conditional JavaScript code

If you check the Bootstrap documentation, you can see that in the HTML5 boilerplate template, the following conditional JavaScript code is added to make Internet Explorer (IE) HTML5 compliant:

<!--[if lt IE 9]>
    <script src="https://oss.maxcdn.com/libs/html5shiv/3.7.0/html5shiv.js"></script>
    <script src="https://oss.maxcdn.com/libs/respond.js/1.3.0/respond.min.js"></script>
<![endif]-->

To integrate them into the theme, we can declare them in the same way as the other script tags, but with conditional parameters. To do this, download the files at https://oss.maxcdn.com/libs/html5shiv/3.7.0/html5shiv.js and https://oss.maxcdn.com/libs/respond.js/1.3.0/respond.min.js, move the downloaded files into the js folder of the theme, and, as always, integrate the JavaScript through the .xml file, but with the conditional parameters as follows:

<action method="addItem">
    <type>skin_js</type><name>js/html5shiv.js</name>
    <params/><if>lt IE 9</if>
</action>
<action method="addItem">
    <type>skin_js</type><name>js/respond.min.js</name>
    <params/><if>lt IE 9</if>
</action>

A quick recap of our local.xml file

Now, after we insert all the JavaScript and CSS files in the .xml file, the final local.xml file should look as follows:

<?xml version="1.0" encoding="UTF-8"?>
<layout version="0.1.0">
    <default translate="label" module="page">
        <reference name="head">

            <!-- Adding Javascripts -->
            <action method="addItem">
                <type>skin_js</type>
                <name>js/jquery.min.js</name>
            </action>
            <action method="addItem">
                <type>skin_js</type>
                <name>js/bootstrap.min.js</name>
            </action>
            <action method="addItem">
                <type>skin_js</type>
                <name>js/jquery.scripts.js</name>
            </action>
            <action method="addItem">
                <type>skin_js</type>
                <name>js/html5shiv.js</name>
                <params/><if>lt IE 9</if>
            </action>
            <action method="addItem">
                <type>skin_js</type>
                <name>js/respond.min.js</name>
                <params/><if>lt IE 9</if>
            </action>

            <!-- Removing the styles.css -->
            <action method="removeItem">
                <type>skin_css</type><name>css/styles.css</name>
            </action>

            <!-- Adding Bootstrap Css -->
            <action method="addItem">
                <type>skin_css</type>
                <stylesheet>css/bootstrap.min.css</stylesheet>
            </action>

            <!-- Adding the styles.css -->
            <action method="addItem">
                <type>skin_css</type>
                <stylesheet>css/styles.css</stylesheet>
            </action>

        </reference>
    </default>
</layout>

Defining the main layout design template

A quick tip for our theme is to define the main template for the site in the default handle. To do this, we have to define the template in the most important reference, root.
In a few words, the root reference is the block that defines the structure of a page. Let's suppose that we want to use a main structure with two columns and the left sidebar for the theme. To change it, we should add the setTemplate action in the root reference as follows:

<reference name="root">
    <action method="setTemplate">
        <template>page/2columns-left.phtml</template>
    </action>
</reference>

You have to insert the reference name "root" tag with the action inside the default handle, usually before every other reference.

Defining the HTML5 boilerplate for main templates

After integrating Bootstrap and jQuery, we have to create our HTML5 page structure for the entire base template. The structure files are located at app/design/frontend/bookstore/template/page/: 1column.phtml, 2columns-left.phtml, 2columns-right.phtml, and 3columns.phtml.

Twitter Bootstrap uses scaffolding with containers, rows, and 12 columns. So, its page layout would be as follows:

<div class="container">
    <div class="row">
        <div class="col-md-3"></div>
        <div class="col-md-9"></div>
    </div>
</div>

This structure is very important to create responsive sections of the store. Now we will need to edit the templates to change to HTML5 and add the Bootstrap scaffolding. Let's look at the following 2columns-left.phtml main template file:

<!DOCTYPE HTML>
<html>
<head>
    <?php echo $this->getChildHtml('head') ?>
</head>
<body <?php echo $this->getBodyClass()?' class="'.$this->getBodyClass().'"':'' ?>>
    <?php echo $this->getChildHtml('after_body_start') ?>
    <?php echo $this->getChildHtml('global_notices') ?>
    <header>
        <?php echo $this->getChildHtml('header') ?>
    </header>
    <section id="after-header">
        <div class="container">
            <?php echo $this->getChildHtml('slider') ?>
        </div>
    </section>
    <section id="maincontent">
        <div class="container">
            <div class="row">
                <?php echo $this->getChildHtml('breadcrumbs') ?>
                <aside class="col-left sidebar col-md-3">
                    <?php echo $this->getChildHtml('left') ?>
                </aside>
                <div class="col-main col-md-9">
                    <?php echo $this->getChildHtml('global_messages') ?>
                    <?php echo $this->getChildHtml('content') ?>
                </div>
            </div>
        </div>
    </section>
    <footer id="footer">
        <div class="container">
            <?php echo $this->getChildHtml('footer') ?>
        </div>
    </footer>
    <?php echo $this->getChildHtml('before_body_end') ?>
    <?php echo $this->getAbsoluteFooter() ?>
</body>
</html>

You will notice that I removed the Magento layout classes col-main, col-left, main, and so on, as these are being replaced by the Bootstrap classes. I also added a new section, after-header, because we will need it after we develop the home page slider. Don't forget to replicate this structure in the other template files 1column.phtml, 2columns-right.phtml, and 3columns.phtml, changing the columns as you need.

Summary

We've seen how to integrate Bootstrap and start the development of a Magento theme with the most famous framework in the world. Bootstrap is very neat, flexible, and modular, and you can use it as you prefer to create your custom theme. However, please keep in mind that it can have a big impact on the loading time of the page. By adding the JavaScript and CSS files via XML as shown here, you allow Magento to minify them to speed up the loading time of the site.

Resources for Article:

Further resources on this subject:

Integrating Twitter with Magento [article]
Magento: Payment and shipping method [article]
Magento: Exploring Themes [article]

Skeuomorphic versus flat

Packt
18 Apr 2014
8 min read
Save for later
(For more resources related to this topic, see here.)

Skeuomorphism is defined as an element of design or structure that serves little or no purpose in the artifact fashioned from the new material but was essential to the object made from the original material (courtesy: Wikipedia, http://en.wikipedia.org/wiki/Skeuomorph).

Apple created several skeuomorphic interfaces for its desktop and mobile apps, such as iCal, iBooks, Find My Friends, the Podcasts app, and several others. This kind of interface was both loved and hated among the design community and users. It was a style that focused a lot on detail and texture, making the interface heavier and often more complex, but interesting because of the clear connection to the real objects depicted. It was an enjoyable and rich experience for the user due to the high detail and interaction that a skeuomorphic interface presented, which served to attract the eye to the detail and care put into these designs; for example, the page flip in iBooks visually represents the swipe of a page as in a traditional book.

But this style also had its downsides. Besides being a harsh transition from the traditional interfaces (as in the case of Apple, in which it meant coming from its famous glassy and clean-looking Aqua interface), several skeuomorphic applications on the desktop didn't seem to fit in the overall OS look. Apart from stylistic preferences and incoherent looks, skeuomorphic design is also a bad design choice because the style in itself is a limitation to innovation. By replicating traditional and analog designs, the designer doesn't have the option or the freedom to imagine, create, and design new interfaces and interactions with the user. Flat design, being the extremely simple and clear style that it is, gives all the freedom to the designer by ignoring any kind of limitations and effects.

But both styles have a place and time to be used, and skeuomorphic is great for applications that directly replace hardware, such as audio mixers; Propellerheads' music software is a good example. Using these kinds of interfaces makes it easier for new users to learn how to use the real hardware counterpart, while at the same time previous users of the hardware will already know how to use the interface with ease. Regardless of the style, a good designer must be ready to create an interface that is adapted to the needs of the user and the market.

To exemplify this and to better learn the basic differences between flat and skeuomorphic, let's do a quick exercise.

Exercise – the skeuomorphic and flat buttons

In this exercise, we'll create a simple call-to-action button with the copy Buy Now. We'll create this element twice: first we'll take a look at the skeuomorphic approach by creating a realistic-looking button with texture, shadow, and depth; next, we will simply convert it to its flat counterpart by removing all those extra elements and adapting it to a minimalistic style.

You should have all the materials you'll need for this exercise. We will use the typeface Lato, also available for free on Google Fonts, and the image wood.jpg for the texture on the skeuomorphic button. We'll just need Photoshop for this exercise, so let's open it up and use the following steps: Create a new Photoshop document of 800 x 600 px. This is where we will create our buttons. Let's start by creating the skeuomorphic one. We start by creating a rectangle with the rounded rectangle tool, with a radius of 20 px. This will be the face of our button.
To make it easier to visualize the element while we create it, let's make it gray (#a2a2a2). Now that we have our button face created, let's give some depth to this button. Just duplicate the layer (command + J on Mac or Ctrl + J on Windows) and pull it down to 10 or 15 px, whichever you prefer. Let's make this new rectangle a darker shade of gray (#393939) and make sure that this layer is below the face layer. You should now have a simple gray button with some depth. The side layer simulates the depth of the button by being pulled down for just a couple of pixels, and since we made it darker, it resembles a shadow. Now for the call to action. Create a textbox on top of the button face, set its width to that of the button, and center the text. In there, write Buy Now, and set the text to Lato, weight to Black, and size to 50 pt. Center it vertically just by looking at the screen, until you find that it sits correctly in the center of the button. Now to make this button really skeuomorphic, let's get our image wood.jpg, and let's use it as our texture. Create a new layer named wood-face and make sure it's above our face layer. Now to define the layer as a texture and use our button as a mask, we're going to right-click on the layer and click on Create clipping mask. This will mask our texture to overlay the button face. For the side texture, duplicate the wood-face layer, rename it to wood-side and repeat the preceding instructions for the side layer. After that, and to have a different look, move the wood-face layer around and look for a good area of the texture to use on the side, ideally something with some up strips to make it look more realistic. To finish the side, create a new layer style in the side layer, gradient overlay, and make a gradient from black to transparent and change the settings as shown in the following screenshot. This will make a shadow effect on top of the wood, making it look a lot better. To finish our skeuomorphic button, let's go back to the text and define the color as #7b3201 (or another shade of brown; try to pick from the button and make it slightly darker until you find that it looks good), so that it looks like the text is carved in the wood. The last touch will be to add an Inner Shadow layer style in the text with the settings shown. Group all the layers and name it Skeuomorphic and we're done. And now we have our skeuomorphic button. It's a really simple way of doing it but we recreated the look of a button made out of wood just by using shapes, texture, and some layer styles. Now for our flat version: Duplicate the group we just created and name it flat. Move it to the other half of the workspace. Delete the following layers: wood-face, wood-side, and side. This button will not have any depth, so we do not need the side layer as well as the textures. To keep the button in the same color scheme as our previous one, we'll use the color #7b3201 for our text and face. Your document should look like what is shown in the following screenshot: Create a new layer style and choose Stroke with the following settings. This will create the border of our button. To make the button transparent, let's reduce the Layer Fill option to 0 percent, which will leave only the layer styles applied. Let's remove the layer styles from our text to make it flat, reduce the weight of the font to Bold to make it thinner and roughly the same weight of the border, and align it visually, and our flat button is done! 
This type of a transparent button is great for flat interfaces, especially when used over a blurred color background. This is because it creates an impactful button with very few elements to it, creating a transparent control and making great use of the white space in the design. In design, especially when designing flat, remember that less is more. With this exercise, you were able to build a skeuomorphic element and deconstruct it down to its flat version, which is as simple as a rounded rectangle with border and text. The font we chose is frequently used for flat design layouts; it's simple but rounded and it works great with rounded-corner shapes such as the ones we just created. Summary Flat design is a digital style of design that has been one of the biggest trends in recent years in web and user interface design. It is famous for its extremely minimalistic style. It has appeared at a time when skeuomorphic, a style of creating realistic interfaces, was considered to be the biggest and most famous trend, making this a really rough and extreme transition for both users and designers. We covered how to design in skeuomorphic and in flat, and what their main differences are. Resources for Article: Further resources on this subject: Top Features You Need to Know About – Responsive Web Design [Article] Web Design Principles in Inkscape [Article] Calendars in jQuery 1.3 with PHP using jQuery Week Calendar Plugin: Part 2 [Article]

The Software Task Management Tool - Rake

Packt
16 Apr 2014
5 min read
Save for later
(For more resources related to this topic, see here.)

Installing Rake

As Rake is a Ruby library, you should first install Ruby on the system if you don't have it installed already. The installation process is different for each operating system. However, we will see the installation example only for the Debian operating system family. Just open the terminal and write the following installation command:

$ sudo apt-get install ruby

If you have an operating system that doesn't contain the apt-get utility or if you have problems with the Ruby installation, please refer to the official instructions at https://www.ruby-lang.org/en/installation. There are a lot of ways to install Ruby, so please choose your operating system from the list on this page and select your desired installation method.

Rake is included in the Ruby core as of Ruby 1.9, so you don't have to install it as a separate gem. However, if you still use Ruby 1.8 or an older version, you will have to install Rake as a gem. Use the following command to install the gem:

$ gem install rake

The Ruby release cycle is slower than that of Rake, and sometimes you need to install Rake as a gem to work around some special issues. So you can still install Rake as a gem, and in some cases this is a requirement even for Ruby Version 1.9 and higher.

To check if you have installed it correctly, open your terminal and type the following command:

$ rake --version

This should return the installed Rake version. The next sign that Rake is installed and is working correctly is an error that you see after typing the rake command in the terminal:

$ mkdir ~/test-rake
$ cd ~/test-rake
$ rake
rake aborted!
No Rakefile found (looking for: rakefile, Rakefile, rakefile.rb, Rakefile.rb)
(See full trace by running task with --trace)

Downloading the example code

You can download the example code files for all Packt books you have purchased from your account at http://www.packtpub.com. If you purchased this book elsewhere, you can visit http://www.packtpub.com/support and register to have the files e-mailed directly to you.

Introducing rake tasks

From the previous error message, it's clear that first you need to have a Rakefile. As you can see, there are four variants of its name: rakefile, Rakefile, rakefile.rb, and Rakefile.rb. The most popularly used variant is Rakefile, which Rails also uses. However, you can choose any variant for your project; there is no convention that prohibits the user from using any of the four suggested variants.

A Rakefile is required for any Rake-based project. Apart from the fact that its content usually uses Rake's DSL, it is also a general Ruby file, so you can write any Ruby code in it. Perform the following steps to get started:

Let's create a Rakefile in the current folder, which will just say Hello Rake, using the following commands:

$ echo "puts 'Hello Rake'" > Rakefile
$ cat Rakefile
puts 'Hello Rake'

Here, the first line creates a Rakefile with the content puts 'Hello Rake', and the second line just shows us its content to make sure that we've done everything correctly. Now, run rake as we tried it before, using the following command:

$ rake
Hello Rake
rake aborted!
Don't know how to build task 'default'
(See full trace by running task with --trace)

The message has changed and it says Hello Rake. Then, it gets aborted because of another error message. At this moment, we have made the first step in learning Rake.
Now, we have to define a default rake task that will be executed when you try to start Rake without any arguments. To do so, open your editor and change the created Rakefile to the following content:

task :default do
  puts 'Hello Rake'
end

Now, run rake again:

$ rake
Hello Rake

The output that says Hello Rake demonstrates that the task works correctly.

The command-line arguments

The most commonly used rake command-line argument is -T. It shows us a list of the available rake tasks that you have already defined. We have defined the default rake task, so if we try to show the list of all rake tasks, it should be there. However, take a look at what happens in real life using the following command:

$ rake -T

The list is empty. Why? The answer lies within Rake. Run the rake command with the -h option to get the whole list of arguments. Pay attention to the description of the -T option, as shown in the following command-line output:

-T, --tasks [PATTERN]
    Display the tasks (matching optional PATTERN) with descriptions, then exit.

You can get more information on Rake in its GitHub repository at https://github.com/jimweirich/rake.

The word descriptions is the cornerstone here. A description is optional for a rake task, but it's recommended that you define one; otherwise, the task won't show up in the list we have just tried to print, and it will be inconvenient for you to read your Rakefile every time you try to run some rake task. Just accept it as a rule: always leave a description for the defined rake tasks.

Now, add a description to your rake task with the desc method call, as shown in the following lines of code:

desc "Says 'Hello, Rake'"
task :default do
  puts 'Hello, Rake.'
end

As you see, it's rather easy. Run the rake -T command again and you will see an output as shown:

$ rake -T
rake default  # Says 'Hello, Rake'

If you want to list all the tasks even if they don't have descriptions, you can pass the -A option along with the -T option to the rake command. The resulting command will look like this: rake -T -A.
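To see how descriptions and task dependencies scale beyond a single task, the following is a small sketch of a Rakefile with two described tasks and one undescribed helper; the task names and messages are made up for illustration:

# Rakefile (illustrative sketch; task names are assumptions)
desc "Removes the tmp directory"
task :clean do
  rm_rf 'tmp'   # Rake makes the FileUtils helpers, such as rm_rf, available in Rakefiles
end

desc "Builds the project (runs :clean first)"
task :build => :clean do
  puts 'Building...'
end

# No description here, so it is hidden from `rake -T`; use `rake -T -A` to list it
task :helper do
  puts 'I am only a helper'
end

Running rake build executes :clean before :build, and rake -T lists only the two described tasks.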

Moodle for Online Communities

Packt
14 Apr 2014
9 min read
Save for later
(For more resources related to this topic, see here.) Now that you're familiar with the ways to use Moodle for different types of courses, it is time to take a look at how groups of people can come together as an online community and use Moodle to achieve their goals. For example, individuals who have the same interests and want to discuss and share information in order to transfer knowledge can do so very easily in a Moodle course that has been set up for that purpose. There are many practical uses of Moodle for online communities. For example, members of an association or employees of a company can come together to achieve a goal and finish a task. In this case, Moodle provides a perfect place to interact, collaborate, and create a final project or achieve a task. Online communities can also be focused on learning and achievement, and Moodle can be a perfect vehicle for encouraging online communities to support each other to learn, take assessments, and display their certificates and badges. Moodle is also a good platform for a Massive Open Online Course (MOOC). In this article, we'll create flexible Moodle courses that are ideal for online communities and that can be modified easily to create opportunities to harness the power of individuals in many different locations to teach and learn new knowledge and skills. In this article, we'll show you the benefit of Moodle and how to use Moodle for the following online communities and purposes: Knowledge-transfer-focused communities Task-focused communities Communities focused on learning and achievement Moodle and online communities It is often easy to think of Moodle as a learning management system that is used primarily by organizations for their students or employees. The community tends to be well defined as it usually consists of students pursuing a common end, employees of a company, or members of an association or society. However, there are many informal groups and communities that come together because they share interests, the desire to gain knowledge and skills, the need to work together to accomplish tasks, and let people know that they've reached milestones and acquired marketable abilities. For example, an online community may form around the topic of climate change. The group, which may use social media to communicate with each other, would like to share information and get in touch with like-minded individuals. While it's true that they can connect via Facebook, Twitter, and other social media formats, they may lack a platform that gives a "one-stop shopping" solution. Moodle makes it easy to share documents, videos, maps, graphics, audio files, and presentations. It also allows the users to interact with each other via discussion forums. Because we can use but not control social networks, it's important to be mindful of security issues. For that reason, Moodle administrators may wish to consider ways to back up or duplicate key posts or insights within the Moodle installation that can be preserved and stored. In another example, individuals may come together to accomplish a specific task. For example, a group of volunteers may come together to organize a 5K run fundraiser for epilepsy awareness. For such a case, Moodle has an array of activities and resources that can make it possible to collaborate in the planning and publicity of the event and even in the creation of post event summary reports and press releases. 
Finally, let's consider a person who may wish to ensure that potential employers know the kinds of skills they possess. They can display the certificates they've earned by completing online courses as well as their badges, digital certificates, mentions in high achievers lists, and other gamified evidence of achievement. There are also the MOOCs, which bring together instructional materials, guided group discussions, and automated assessments. With its features and flexibility, Moodle is a perfect platform for MOOCs. Building a knowledge-based online community For our knowledge-based online community, let's consider a group of individuals who would like to know more about climate change and its impact. To build a knowledge-based online community, the following are the steps we need to perform: Choose a mobile-friendly theme. Customize the appearance of your site. Select resources and activities. Moodle makes it possible for people from all locations and affiliations to come together and share information in order to achieve a common objective. We will see how to do this in the following sections. Choosing the best theme for your knowledge-based Moodle online communities As many of the users in the community access Moodle using smartphones, tablets, laptops, and desktops, it is a good idea to select a theme that is responsive, which means that it will be automatically formatted in order to display properly on all devices. You can learn more about themes for Moodle, review them, find out about the developers, read comments, and then download them at https://moodle.org/plugins/browse.php?list=category&id=3. There are many good responsive themes, such as the popular Buckle theme and the Clean theme, that also allow you to customize them. These are the core and contributed themes, which is to say that they were created by developers and are either part of the Moodle installation or available for free download. If you have Moodle 2.5 or a later version installed, your installation of Moodle includes many responsive themes. If it does not, you will need to download and install a theme. In order to select an installed theme, perform the following steps: In the Site administration menu, click on the Appearance menu. Click on Themes. Click on Theme selector. Click on the Change theme button. Review all the themes. Click on the Use theme button next to the theme you want to choose and then click on Continue. Using the best settings for knowledge-based Moodle online communities There are a number of things you can do to customize the appearance of your site so that it is very functional for knowledge-transfer-based Moodle online communities. The following is a brief checklist of items: Select Topics format under the Course format section in the Course default settings window. By selecting topics, you'll be able to organize your content around subjects. Use the General section, which is included as the first topic in all courses. It has the News forum link. You can use this for announcements highlighting resources shared by the community. Include the name of the main contact along with his/her photograph and a brief biographical sketch in News forum. You'll create the sense that there is a real "go-to" person who is helping guide the endeavor. Incorporate social media to encourage sharing and dissemination of new information. Brief updates are very effective, so you may consider including a Twitter feed by adding your Twitter account as one of your social media sites. 
Even though your main topic of discussion may contain hundreds of subtopics that are of great interest, when you create your Moodle course, it's best to limit the number of subtopics to four or five. If you have too many choices, your users will be too scattered and will not have a chance to connect with each other. Think of your Moodle site as a meeting point. Do you want to have too many breakout sessions and rooms or do you want to have a main networking site? Think of how you would like to encourage users to mingle and interact. Selecting resources and activities for a knowledge-based Moodle online community The following are the items to include if you want to configure Moodle such that it is ideal for individuals who have come together to gain knowledge on a specific topic or problem: Resources: Be sure to include multiple types of files: documents, videos, audio files, and presentations. Activities: Include Quiz and other such activities that allow individuals to test their knowledge. Communication-focused activities: Set up a discussion forum to enable community members to post their thoughts and respond to each other. The key to creating an effective Moodle course for knowledge-transfer-based communities is to give the individual members a chance to post critical and useful information, no matter what the format or the size, and to accommodate social networks. Building a task-based online community Let's consider a group of individuals who are getting together to plan a fundraising event. They need to plan activities, develop materials, and prepare a final report. Moodle can make it fairly easy for people to work together to plan events, collaborate on the development of materials, and share information for a final report. Choosing the best theme for your task-based Moodle online communities If you're using volunteers or people who are using Moodle just for the tasks or completion of tasks, you may have quite a few Moodle "newbies". Since people will be unfamiliar with navigating Moodle and finding the places they need to go, you'll need a theme that is clear, attention-grabbing, and that includes easy-to-follow directions. There are a few themes that are ideal for collaborations and multiple functional groups. We highly recommend the Formal white theme because it is highly customizable from the Theme settings page. You can easily customize the background, text colors, logos, font size, font weight, block size, and more, enabling you to create a clear, friendly, and brand-recognizable site. Formal white is a standard theme, kept up to date, and can be used on many versions of Moodle. You can learn more about the Formal white theme and download it by visiting http://hub.packtpub.com/wp-content/uploads/2014/04/Filetheme_formalwhite.png. In order to customize the appearance of your entire site, perform the following steps: In the Site administration menu, click on Appearance. Click on Themes. Click on Theme settings. Review all the themes settings. Enter the custom information in each box.