
How-To Tutorials - Web Development


ASP.NET Social Networks—Blogs in Fisharoo

Packt
27 Oct 2009
8 min read
Problem

This article, as stated in the introduction, is all about adding a blogging feature to our site. We will handle creating and managing a post, sending alerts to your friends' filter pages, and creating a friendly URL for each blog post. We start by making our first post to our blog. Once the post is created, we will see it on the Blogs homepage and in the My Posts section, from where we can edit the post, delete it, or click into it to view what we have seen so far. The following screenshot shows what a reader sees after clicking into the post. I have the blog post set up to show the poster's avatar; this is a feature that you can easily add or remove, but most of your users will want to see who the author is of the post they are currently reading! We will also give our blog posts' pages friendly URLs.

Design

The design of this feature is actually quite simple. We will need only one table to hold our blog posts; after that, we hook our blog system into the existing infrastructure.

Blogs

In order to store our blogs, we will need one simple table. It will hold the standard attributes of a normal blog post: the title, the subject, the page name, and the post itself. It has only one relationship, out to the Accounts table, so that we know who owns the post down the road. That's it!

Solution

Let's take a look at the solution for this set of features.

Implementing the database

Let's take a look at the tables required by our solution.

Blogs

The Blogs table is very simple; we discussed most of it in the Design section. The one interesting thing here is the Post column, which I have set to varchar(MAX). This may be too big for your community, so feel free to change it down the road. For my community I am not overly worried: I can always add a UI restriction later, using a validation control, without impacting my database design. Next is the IsPublished flag, which tells the system whether or not to show the post in the public domain. We are also interested in the PageName column, which is what we will display in the browser's address bar. Because it appears in the address bar, we need to make sure the input is clean so that we don't have parsing issues (which cause data type exceptions) down the road. We will handle that on the input side, in our presenter, later.
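For reference, the table might be declared roughly as follows. This is a sketch: PageName, Post, IsPublished, AccountID, and the UpdateDate used later by GetLatestBlogs() come from the text, while the remaining column names, types, sizes, and the BlogID key are assumptions, not the book's actual schema.

CREATE TABLE Blogs
(
    BlogID      INT IDENTITY(1,1) PRIMARY KEY,   -- assumed key name
    AccountID   INT          NOT NULL,           -- owner; FK to Accounts
    Title       VARCHAR(100) NOT NULL,
    Subject     VARCHAR(200) NULL,
    PageName    VARCHAR(100) NOT NULL,           -- shown in the address bar
    Post        VARCHAR(MAX) NOT NULL,           -- consider a smaller cap
    IsPublished BIT          NOT NULL DEFAULT 0, -- visible to the public?
    UpdateDate  DATETIME     NOT NULL DEFAULT GETDATE(),
    CONSTRAINT FK_Blogs_Accounts
        FOREIGN KEY (AccountID) REFERENCES Accounts (AccountID)
);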
Creating the relationships

Once all the tables are created, we can create the relationships. For this set of tables we have one relationship, between Blogs and Accounts.

Setting up the data access layer

To set up the data access layer, follow these steps:

1. Open the Fisharoo.dbml file.
2. Open up your Server Explorer window.
3. Expand your Fisharoo connection, and then expand your tables. If you don't see your new tables, try hitting the Refresh icon, or right-click on Tables and click Refresh.
4. Drag your new tables onto the design surface.
5. Hit Save, and you should now have the new domain objects to work with!

Keep in mind that we are not letting LINQ track our relationships, so go ahead and delete them from the design surface. Your design surface should have all the same items as you see in the screenshot (though perhaps in a different arrangement!).

Building repositories

With the addition of new tables comes the addition of new repositories, so that we can get at the data stored in those tables. We will be creating the following repository to support our needs:

BlogRepository

Our repository will generally have methods to select by ID, select all by parent ID, save, and delete. We will start with a method that allows us to get a blog by the page name we capture from the browser's address bar:

public Blog GetBlogByPageName(string PageName, Int32 AccountID)
{
    Blog result = new Blog();
    using(FisharooDataContext dc = _conn.GetContext())
    {
        result = dc.Blogs.Where(b => b.PageName == PageName &&
                                     b.AccountID == AccountID).FirstOrDefault();
    }
    return result;
}

Notice that for this system to work, each page name must map to exactly one blog. If we forced our entire community to use page names that are unique across the whole site, we would eventually have some upset users, so we enforce unique page names per user instead. To do this, we require that an AccountID be passed in with the page name, which gives our users more flexibility when their page names overlap! I will show you how we get the AccountID later. Other than that, we are performing a simple lambda expression to select the appropriate blog out of the collection of blogs in the data context.

Next, we will discuss a method to get all the latest blog posts: the GetLatestBlogs() method. This method will also get and attach the appropriate Account for each blog. Before we dive into it, we need to extend the Blog class with an Account property. To extend the Blog class, we create a public partial class in the Domain folder:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace Fisharoo.FisharooCore.Core.Domain
{
    public partial class Blog
    {
        public Account Account { get; set; }
    }
}

Now we can look at the GetLatestBlogs() method:

public List<Blog> GetLatestBlogs()
{
    List<Blog> result = new List<Blog>();
    using(FisharooDataContext dc = _conn.GetContext())
    {
        IEnumerable<Blog> blogs = (from b in dc.Blogs
                                   where b.IsPublished
                                   orderby b.UpdateDate descending
                                   select b).Take(30);
        IEnumerable<Account> accounts = dc.Accounts.Where(a =>
            blogs.Select(b => b.AccountID).Distinct().Contains(a.AccountID));
        foreach (Blog blog in blogs)
        {
            blog.Account = accounts.Where(a =>
                a.AccountID == blog.AccountID).FirstOrDefault();
        }
        result = blogs.ToList();
        result.Reverse();
    }
    return result;
}

The first expression in this method gets the newest 30 blogs, ordered by UpdateDate in descending order, with a where clause that keeps only blogs that are published. We then get the list of Accounts associated with those blogs by selecting the distinct AccountIDs from our blog list and running a Contains search against the Accounts table. With these two collections in hand, we iterate through the blogs and attach the appropriate Account to each one, giving us a full listing of blogs with accounts.

As we discussed earlier, it is very important to keep page names unique on a per-user basis. To do this, we need a method that lets our UI determine whether a page name is unique: the CheckPageNameIsUnique() method.
public bool CheckPageNameIsUnique(Blog blog)
{
    blog = CleanPageName(blog);
    bool result = true;
    using(FisharooDataContext dc = _conn.GetContext())
    {
        // Exclude the post itself (by its BlogID key) so that saving an
        // existing post doesn't report its own page name as a duplicate.
        int count = dc.Blogs.Where(b => b.PageName == blog.PageName &&
                                        b.AccountID == blog.AccountID &&
                                        b.BlogID != blog.BlogID).Count();
        if(count > 0)
            result = false;
    }
    return result;
}

This method looks at all the blog entries except the post itself to determine whether there are other posts with the same page name by the same Account. This allows us to effectively stop our users from creating duplicate page names, which will be important down the road when we start to discuss our pretty URLs.

Next, we will look at a private method that helps us clean up these page name inputs. Keep in mind that these page names will be displayed in the browser's address bar, and therefore must not contain any characters that the browser would want to encode. While we could decode the URL easily enough, the point here is keeping the URL pretty, so that users and search engine spiders can easily read where they are. When the URL contains encoded characters, we end up with something like %20, the encoding for a space, and my%20blog%20post is not easy to read, while my-blog-post is. So we will strip out our so-called special characters and replace all spaces with hyphens. This is the CleanPageName() method:

private Blog CleanPageName(Blog blog)
{
    blog.PageName = blog.PageName.Replace(" ", "-").Replace("!", "")
        .Replace("&", "").Replace("?", "").Replace(",", "");
    return blog;
}

You can add as many filters to this as you like; for the time being I am replacing the handful of special characters we have just seen in the code. Next, we will get into the service layers that we will use to handle our interactions with the system.
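Before moving on, here is a rough sketch of how a presenter might wire these repository methods together when saving a post. Only CheckPageNameIsUnique() comes from the repository above; _blogRepository, _view, and SaveBlog() are hypothetical stand-ins for pieces introduced later, not the book's actual code.

public void SavePost(Blog blog)
{
    // CheckPageNameIsUnique() cleans the page name itself, then checks
    // for a clash, scoped to this user's AccountID only.
    if (!_blogRepository.CheckPageNameIsUnique(blog))
    {
        _view.ShowError("You already have a post with that page name.");
        return;
    }
    _blogRepository.SaveBlog(blog);   // hypothetical save method
    _view.RedirectToBlog(blog.PageName);
}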


Building an admin interface in Drupal 7

Packt
03 Dec 2010
11 min read
The User Warn module

In this article we will be creating the User Warn module. This module allows administrators to send users a warning via e-mail when a user violates the site's terms of service or otherwise behaves inappropriately. The User Warn module will implement the following features:

- The module will expose configuration settings to site administrators, including default mail text.
- The e-mail will include Drupal tokens, which allow the admin to replace and/or add site-specific variables to the e-mail.
- Site administrators will be able to send a user mail via a new tab on the user's profile page.
- Warning e-mails will be sent using Drupal's default mail implementation.

Starting our module

We will begin by creating a new folder for our module called user_warn in the sites/default/modules directory of our Drupal installation. We can then create a user_warn.info file as shown in the following:

;$Id$
name = User Warn
description = Exposes an admin interface to send behavior warning e-mails to users.
core = 7.x
package = Drupal 7 Development
files[] = user_warn.module

You should be pretty familiar with this by now. We will also create our user_warn.module file and add an implementation of hook_help() to let site administrators know what our module does. Next, we define the module's paths with hook_menu() (documented at http://api.drupal.org/api/function/hook_menu/7). The example is as follows:

/**
 * Implements hook_menu().
 */
function user_warn_menu() {
  $items = array();
  $items['admin/config/people/user_warn'] = array(
    'title' => 'User Warn',
    'description' => 'Configuration for the User Warn module.',
    'page callback' => 'drupal_get_form',
    'page arguments' => array('user_warn_form'),
    'access arguments' => array('administer users'),
    'type' => MENU_NORMAL_ITEM,
  );
  $items['user/%/warn'] = array(
    'title' => 'Warn',
    'description' => 'Send e-mail to a user about improper site behavior.',
    'page callback' => 'drupal_get_form',
    'page arguments' => array('user_warn_confirm_form', 1),
    'access arguments' => array('administer users'),
    'type' => MENU_LOCAL_TASK,
  );
  return $items;
}

Like many Drupal hook implementations, hook_menu() returns a structured associative array with information about the menu items being defined. The first item in our example defines the module's configuration page, and the second one defines the user tab where administrators can go to send the actual e-mail. Let's look at the first item in more detail.

Menu items are keyed by their path. This is an internal Drupal path with no leading or trailing slashes. The path not only defines the location of a page but also its place in the menu hierarchy, with each part of the URL being a child of the previous one. In this example, people is a child of config, which is itself a child of admin. If a requested path does not exist, Drupal works its way up the hierarchy until it encounters a page that does exist; you can see this in action by requesting admin/config/people/xyzzy, which displays the page at admin/config/people.

If you are creating a menu item for site administration, it must begin with admin. This places it into Drupal's administrative interface and applies the admin theme defined by the site settings. Module-specific settings should always live under admin/config. Drupal 7 offers several categories that module developers should use to organize their settings according to Drupal's functional groups, such as People and Permissions or Content Authoring.
The value associated with this key is itself an associative array, with several keys that define what happens when the URL is requested. Let's look at those in detail. The first item defines your page title:

'title' => 'User Warn',

This is used in a variety of display contexts: as your page's heading, in the HTML <title> tag, and as a subheading in the administration interface in combination with the description (if you are defining an administrative menu item).

'description' => 'Configuration for the User Warn module.',

The description is just that: a longer text description of the page this menu item defines. It should give the user more detailed information about the actions they can take on the page. The description is also used as the title attribute (the hover text) of links to the page. Menu item titles and descriptions are passed through t() internally by Drupal, so this is one case where we don't need to worry about doing that ourselves. For an administration page, these two items define how your page is listed in Drupal's admin area.

The next two items define what happens when your page is requested:

'page callback' => 'drupal_get_form',
'page arguments' => array('user_warn_form'),

'page callback' names the function that will be called (without the parentheses), and 'page arguments' contains an array of arguments that get passed to that function. Often you will create a custom function that processes, formats, and returns specific data. In our case, however, we are calling the internal Drupal function drupal_get_form(), which returns an array as defined by Drupal's Form API; as an argument, we pass the form ID of the form we want to display.

The fifth item controls who can access your page:

'access arguments' => array('administer users'),

'access arguments' takes an array of permission strings. Any user who has been granted one of these permissions can access the page; anyone else gets an Access denied page. Permissions are defined by modules using hook_permission(), and you can see a full list of the currently defined permissions at admin/people/permissions. The 'administer users' permission appears at the bottom of that list. In this example, only the Administrator role has the permission, so only users assigned that role will be able to access our page. Note that the permission titles shown on that page do not necessarily match what you need to enter in the access arguments array; unfortunately, the only reliable way to find the machine string is to check the hook_permission() implementation of the module in question.

The final item defines what type of menu item we are creating:

'type' => MENU_NORMAL_ITEM,

The 'type' is a bitmask of flags that describe which features we want our menu item to have (for instance, whether it is visible in the breadcrumb trail). Drupal defines over 20 constants for menu items, which should cover any situation developers find themselves in. The default type is MENU_NORMAL_ITEM, which indicates that the item is visible in the menu tree as well as the breadcrumb trail.

This is all the information needed to register our path. Now, when Drupal receives a request for this URL, it will return the results of drupal_get_form('user_warn_form'). Note that Drupal caches the entire menu, so new or updated menu items will not be reflected immediately.
To manually clear the cache, visit Admin | Configuration | Development | Performance and click on Clear all caches.

Using wildcards in menu paths

We have created a simple menu item, but sometimes simple won't do the job. In the User Warn module we want a menu item tied to each individual user's profile page. Profile pages in Drupal live at the path user/<user_id>, so how do we create a distinct menu item for each user? Fortunately, the menu system allows us to use wildcards when we define our menu paths. If you look at the second menu item defined in the preceding example, you will see that its definition differs a bit from the first:

$items['user/%/warn'] = array(
  'title' => 'Warn',
  'description' => 'Send e-mail to a user about improper site behavior.',
  'page callback' => 'drupal_get_form',
  'page arguments' => array('user_warn_confirm_form', 1),
  'access arguments' => array('administer users'),
  'type' => MENU_LOCAL_TASK,
);

The first difference is that the path contains % as one of its parts. This indicates a wildcard: anything can appear here and the menu item's hierarchy is still maintained. In normal use that will always be a user's ID. However, there is nothing stopping a user from entering a URL like user/xyzzy/warn, or something potentially more malicious. Your code should always be written to handle these eventualities, for instance by verifying that the argument actually maps to a Drupal user; this would be a good improvement (the sketch at the end of this article shows one way to do it).

The other difference in this example is that we have added 1 as an additional argument to be passed to our page callback. Each part of a menu item's path is available as an argument to the page callback, numbered from 0 for the first part. Here the string user is item 0 and the user's ID is item 1, so to use the user's ID as a page callback argument, we reference it by its number. The result is that the user's ID is passed as an additional argument to drupal_get_form().

We have one other difference in this second menu item:

'type' => MENU_LOCAL_TASK,

We have defined our type as MENU_LOCAL_TASK. This tells Drupal that our menu item describes an action that can be performed on the parent item; in this example, Warn is an action that can be performed on a user. These are usually rendered as an additional tab on the page in question, as on the user profile screen.

Having defined the paths for our pages through hook_menu(), we now need to build our forms.

Form API

In standard web development, one of the most tedious and unrewarding tasks is defining HTML forms and handling their submissions: lay out the form, create labels, write the submission function, figure out error handling. The worst part is that from site to site much of this code is boilerplate; it is fundamentally the same, differing only in presentation. Drupal's Form API is a powerful tool that lets developers create forms and handle form submissions quickly and easily, by defining arrays of form elements and creating validation and submit callbacks for the form.

In past versions of Drupal, Form API was commonly referred to as FAPI. However, Drupal 7 has three APIs that could fit this acronym (Form API, Field API, and File API), so we will avoid the acronym completely to prevent confusion, though you will still encounter it widely in online references.
Form API is also a crucial element in Drupal's security. It provides unique form tokens that are verified on form submission, preventing cross-site request forgery attacks, and it automatically validates required fields, field lengths, and a variety of other form element properties.

Using drupal_get_form()

In our first menu item seen earlier, we defined the page callback as drupal_get_form(). This Form API function returns a structured array that represents an HTML form, which Drupal then renders and presents to the user. drupal_get_form() takes a form ID as a parameter. This form ID can be whatever you want, but it must be unique within Drupal; typically it will be <module_name>_<description>_form. The form ID is also the name of the callback function that drupal_get_form() calls to build your form, and that function should return a properly formatted array containing all the elements your form needs. Because the form ID doubles as a function name, it must be a valid PHP identifier: spaces and hyphens are not allowed, and all form IDs should be prefixed with the name of your module followed by an underscore, in order to prevent name collisions.

Other parameters can be passed into drupal_get_form() in addition to the form ID; these extra parameters simply get passed through to the callback function for its own use.

In Drupal 6, drupal_get_form() returned a fully rendered HTML form. This changed in Drupal 7 to allow more flexibility in theming and easier form manipulation: drupal_get_form() now returns an unrendered form array, which must be passed to drupal_render() for final output. In the preceding example the menu system handles the change transparently, but other code converted from Drupal 6 may need to be updated.
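The menu items above name two form builders, user_warn_form and user_warn_confirm_form, which are not shown in this excerpt. As a preview of the shape such a builder takes, and of the wildcard validation suggested earlier, here is a minimal sketch. It illustrates Drupal 7 Form API conventions only; the messages, field, and redirect target are assumptions, not the module's actual implementation.

/**
 * Form builder for user/%/warn; $uid arrives via 'page arguments' (1).
 */
function user_warn_confirm_form($form, &$form_state, $uid) {
  // The % wildcard accepts anything, so verify that the argument
  // really maps to a Drupal user before building the form.
  $account = user_load($uid);
  if ($account === FALSE) {
    drupal_set_message(t('That user does not exist.'), 'error');
    drupal_goto('admin/config/people/user_warn');
  }

  // Stash the loaded account for the submit handler.
  $form['account'] = array(
    '#type' => 'value',
    '#value' => $account,
  );

  // confirm_form() supplies the question, the Confirm/Cancel buttons,
  // and the cancel path.
  return confirm_form(
    $form,
    t('Are you sure you want to warn %user?', array('%user' => $account->name)),
    'user/' . $uid,
    t('The user will receive the warning e-mail configured for this module.'),
    t('Send warning'),
    t('Cancel')
  );
}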


Adding health checks

Packt
14 Feb 2014
3 min read
(For more resources related to this topic, see here.)

A health check is a runtime test for our application. We are going to create a health check that tests the creation of new contacts using the Jersey client. The health check results are accessible through the admin port of our application, which by default is 8081.

How to do it…

To add a health check, perform the following steps:

1. Create a new package called com.dwbook.phonebook.health and a class named NewContactHealthCheck in it:

import javax.ws.rs.core.MediaType;
import com.codahale.metrics.health.HealthCheck;
import com.dwbook.phonebook.representations.Contact;
import com.sun.jersey.api.client.*;

public class NewContactHealthCheck extends HealthCheck {
    private final Client client;

    public NewContactHealthCheck(Client client) {
        super();
        this.client = client;
    }

    @Override
    protected Result check() throws Exception {
        WebResource contactResource = client
            .resource("http://localhost:8080/contact");
        ClientResponse response = contactResource.type(
            MediaType.APPLICATION_JSON).post(
                ClientResponse.class,
                new Contact(0, "Health Check First Name",
                    "Health Check Last Name", "00000000"));
        if (response.getStatus() == 201) {
            return Result.healthy();
        } else {
            return Result.unhealthy("New Contact cannot be created!");
        }
    }
}

2. Register the health check with the Dropwizard environment by using the HealthCheckRegistry#register() method within the #run() method of the App class. You will first need to import com.dwbook.phonebook.health.NewContactHealthCheck. The HealthCheckRegistry can be accessed using the Environment#healthChecks() method:

// Add health checks
e.healthChecks().register(
    "New Contact health check", new NewContactHealthCheck(client));

3. After building and starting your application, navigate with your browser to http://localhost:8081/healthcheck. The results of the defined health checks are presented in JSON format (see the sample response at the end of this article). If the custom health check we just created, or any other health check, fails, it is flagged as "healthy": false, letting you know that your application faces runtime problems.

How it works…

We used exactly the same code used by our client class in order to create a health check: a runtime test that confirms that new contacts can be created by performing HTTP POST requests to the appropriate endpoint of the ContactResource class. This health check gives us the required confidence that our web service is functional.

All we need for the creation of a health check is a class that extends HealthCheck and implements the #check() method. In the class's constructor we call the parent class's constructor, and the name used to identify our health check is the one we specify when registering it. In the #check() method, we literally implement a check: we check that everything is as it should be. If so, we return Result.healthy(); otherwise we return Result.unhealthy(), indicating that something is going wrong.

Summary

This article showed what a health check is and demonstrated how to add one. The health check we created tested the creation of new contacts using the Jersey client.

Resources for Article:

Further resources on this subject:
- RESTful Web Services – Server-Sent Events (SSE) [Article]
- Connecting to a web service (Should know) [Article]
- Web Services and Forms [Article]
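The sample response referenced in step 3 looks roughly like the following. This is a hypothetical response: the exact JSON layout depends on your Dropwizard version, and the deadlocks entry is the built-in check Dropwizard registers on its own.

$ curl http://localhost:8081/healthcheck
{
  "New Contact health check": { "healthy": true },
  "deadlocks": { "healthy": true }
}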


Angular 2.0

Packt
30 Apr 2015
12 min read
Angular 2.0 was officially announced at the ng-europe conference in October 2014. Angular 2.0 is not a simple update of the previous version; it is a complete rewrite of the entire framework and will include major changes. In this article by Mohammad Wadood Majid, coauthor of the book Mastering AngularJS for .NET Developers, we will learn the following topics:

- Why Angular 2.0
- Design and features of Angular 2.0
- AtScript
- Routing solution
- Dependency injection
- Annotations
- Instance scope
- Child injector
- Data binding and templating

(For more resources related to this topic, see here.)

Why Angular 2.0

AngularJS is one of the most popular open source frameworks for client-side web application development, and over the last few years its adoption and community support have been remarkable. The current AngularJS version 1.3 is stable and used by many developers; there are over 1600 applications inside Google that use AngularJS 1.2 or 1.3. In the last few years, however, the Web has changed significantly. It used to be very difficult to build a cross-browser application, whereas today's browsers are much more consistent in their DOM implementations, and the Web will continue to change. Angular 2.0 will address the following concerns:

- Mobile: Angular 2.0 will focus on mobile application development.
- Modular: Various modules will be removed from the AngularJS core, resulting in better performance. Angular 2.0 will give us the ability to pick the module parts we need.
- Modern: Angular 2.0 will target ECMAScript 6 (ES6). ECMAScript is a scripting language standard developed by Ecma International; it is widely implemented for client-side scripting, as in JavaScript, JScript, and ActionScript.
- Performance: AngularJS was developed around five years ago, and it was not originally aimed at developers; it began as a tool targeting designers who needed to quickly create persistent HTML forms. Over time it has been used to build far more complex applications. The Angular 1.x team worked over the years to adapt the design so it would remain relevant for modern web applications, but there are limits to how far the current framework can be improved, and a number of those limits relate to the performance of the current binding and template infrastructure. Fixing these problems requires a new infrastructure.

Modern browsers already support some ES6 features today, and the remaining implementations are in progress and expected in 2015. With the new features, developers will be able to describe their own views (the template element) and package them for distribution to other developers (HTML imports). Once these features are available in all browsers, developers will be able to create as many reusable components as needed to solve common problems. However, most frameworks, including AngularJS 1.x, are not prepared for this: the data binding of AngularJS 1.x works on the assumption of a small number of known HTML elements, so taking advantage of the new components requires a new implementation in Angular.

Design and features of AngularJS 2.0

The current AngularJS framework was designed for an earlier Web and general computing landscape, and it needs some changes. The Angular 1.x framework cannot work with new web components, lends itself poorly to mobile applications, and pushes its own module and class API against the emerging standards.
To answer these issues, the AngularJS team came up with AngularJS 2.0, a reimagining of AngularJS 1.x for the modern web browser. The following are the changes in Angular 2.0:

AtScript

AtScript is the language used to develop AngularJS 2.0. It is a superset of ES6, handled by the Traceur compiler (which emits ES5 code), and it uses TypeScript's syntax to generate runtime type assertions instead of compile-time checks. Developers can still write plain JavaScript (ES5) instead of AtScript when building AngularJS 2.0 applications. The following is an example of AtScript code:

import {Component} from 'angular';
import {Server} from './server';

@Component({selector: 'test'})
export class MyNewComponent {
  constructor(server: Server) {
    this.server = server;
  }
}

In the preceding code, the import statements and the class come from ES6. The constructor function takes a server parameter with a declared type; in AtScript, this type is used to generate a runtime type assertion, and the reference is stored so that the dependency injection framework can use it. The @Component annotation is a metadata annotation. When we decorate some code with @Component, the compiler generates code that instantiates the annotation and stores it in a known location, so that it can be accessed by the AngularJS 2.0 framework.

Routing solution

In AngularJS 1.x, routing was designed to handle a few simple cases, and more features were bolted on as the framework grew. AngularJS 2.0 includes the following basic routing features and will still be extensible:

- JSON route configuration
- Optional convention over configuration
- Static, parameterized, and splat route patterns
- URL resolver
- Query strings
- Push state or hash change
- Navigation model
- Document title updates
- 404 route handling
- Location service
- History manipulation
- Child routers
- Screen activation lifecycle: canActivate, activate, deactivate

Dependency injection

A main feature of AngularJS 1.x was dependency injection (DI). DI makes it easy to follow a divide-and-conquer approach to software development: complex problems can be broken into smaller pieces, and applications developed this way can be assembled at runtime by the injector. However, there were a few issues with DI in the AngularJS 1.x framework. First, the implementation suffered under minification: DI depended on parsing parameter names from functions, and whenever a minifier renamed those parameters, they no longer matched the services, controllers, and other components they referred to. Second, it lacked features that are common in advanced server-side DI containers, such as those in .NET and Java; in particular, constraints on scope control and child injectors.

Annotations

With the use of AtScript in the AngularJS 2.0 framework comes a way to associate metadata with any function. AtScript's format for metadata is robust in the face of minification and is easy to write by hand in plain ES5.

The instance scope

In the AngularJS 1.x DI container, all instances were singletons, and the same is true in AngularJS 2.0 by default; in 1.x, getting different behavior meant using services, providers, constants, and so on. In AngularJS 2.0, the following code can be used to create a new instance every time the DI resolves the type.
This becomes more useful if you create your own scope identifiers for use in combination with child injectors, as shown:

@TransientScope
export class MyClass { … }

The child injector

The child injector is a major new feature in AngularJS 2.0. A child injector inherits from its parent, but has the ability to override services at the child level. Using this new feature, certain types of objects in the application can be automatically overridden in various scopes. For example, when a new route has a child-route capability, each child route creates its own child injector; this allows each route to inherit from parent routes, or to override services during different navigation scenarios.

Data binding and templating

Data binding and templates are considered a single unit while developing an application; in other words, they go hand in hand when writing an application with the AngularJS framework. When we bind the DOM, the HTML is handed to the template compiler. The compiler walks the HTML to find directives, binding expressions, event handlers, and so on, and extracts all of this data from the DOM into data structures that can be used to instantiate the template. During this phase some processing is done on the data, for example parsing the binding expressions. Every node that contains instructions is tagged with a class to cache the result of the processing, so that the work does not need to be repeated.

Dynamic loading

Dynamic loading was missing from AngularJS 1.x, where it is very hard to add new directives or controllers at runtime. In Angular 2.0, dynamic loading is supported. Whenever a template is compiled, the compiler is provided not only with the template but also with a component definition. The component definition contains metadata about directives, filters, and so on, which ensures that the necessary dependencies are loaded before the template is processed by the compiler.

Directives

Directives in the AngularJS framework are meant to extend HTML. In AngularJS 1.x, directives are created with the Directive Definition Object (DDO). In AngularJS 2.0, directives are made simpler, and there are three types:

- The component directive: A collection of a view and a controller used to create custom components. It can be used as an HTML element, and a router can map routes to components.
- The decorator directive: Decorates an existing HTML element with additional behavior, such as ng-show.
- The template directive: Transforms HTML into a reusable template. The directive developer can control how the template is instantiated and inserted into the DOM; ng-if and ng-repeat are examples.

In AngularJS 2.0 the controller is not a separate part of the component: the component contains the view and the controller, where the view is HTML and the controller is JavaScript. The developer creates a class with some annotations, as shown in the following code:

@dirComponent({
  selector: 'divTabContainer',
  directives: [NgRepeat]
})
export class TabContainer {
  constructor(panes: Query<Pane>) {
    this.panes = panes;
  }
  select(selectPane: Pane) { … }
}

In the preceding code, the controller of the component is a class. The dependencies are injected automatically into the constructor, because child injectors are used.
This gives it access to any service up the DOM hierarchy, as well as to services local to its element. In the preceding code you can see that a Query is injected: this is a special collection that is automatically kept in sync with the child elements, letting us know when anything is added or removed.

Templates

In the preceding section, we created the divTabContainer component with AngularJS 2.0. The following code shows how to use it in the DOM:

<template>
  <div class="border">
    <div class="tabs">
      <div [ng-repeat|pane]="panes" class="tab" (^click)="select(pane)">
        <img [src]="pane.icon"><span>${pane.name}</span>
      </div>
    </div>
    <content>
    </content>
  </div>
</template>

Look at the image tag, <img [src]="pane.icon"><span>${pane.name}</span>: the src attribute is surrounded with [], which tells us that the attribute carries a binding expression, and ${} marks an expression that should be interpolated into the content. These bindings are unidirectional, from the model or controller to the view.

Now look at the div in <div [ng-repeat|pane]="panes" class="tab" (^click)="select(pane)">: ng-repeat is a template directive, written with | and the word pane, where pane is the local variable. (^click) indicates an event handler, where ^ means the event is not attached directly to the DOM; rather, we let it bubble up and handle it at the document level.

In the following code example, we compare how AngularJS 1.x and AngularJS 2.0 express the same form binding, using a hello-world example. First, AngularJS 1.x:

var module = angular.module("example", []);
module.controller("FormExample", function() {
  this.username = "World";
});

<div ng-controller="FormExample as ctrl">
  <input ng-model="ctrl.username"> Hello {{ctrl.username}}!
</div>

And the equivalent in AngularJS 2.0:

@Component({
  selector: 'form-example'
})
@Template({
  // we are binding the input element to the control object
  // defined in the component's class
  inline: '<input [control]="username">Hello {{username.value}}!',
  directives: [forms]
})
class FormExample {
  constructor() {
    this.username = new Control('World');
  }
}

The preceding example uses TypeScript 1.5, which supports metadata annotations; however, it could equally be written in ES5/ES6 JavaScript. More information on annotations can be found in the annotation guide at https://docs.google.com/document/d/1uhs-a41dp2z0NLs-QiXYY-rqLGhgjmTf4iwBad2myzY/edit#heading=h.qbaubqkoiqds. A few points motivate this form design:

- Form behavior cannot be unit tested without compiling the associated template, because certain parts of the application's behavior live in the template.
- We want to enable dynamically generated, data-driven forms in AngularJS 2.0. This is present in AngularJS 1.x, but it is not easy: the ng-model directive is built on a generic two-way data binding, which makes templates difficult to reason about statically.
- An atomic form that can easily be validated, or reverted to its original state, is missing from AngularJS 1.x.
Although AngularJS 2.0 uses an extra level of indirection here, that indirection brings major benefits: the Control object decouples form behavior from the template, so you can test it in isolation, and tests are simpler to write and faster to execute.

Summary

In this article we introduced the Angular 2.0 framework. It is not a simple update of the previous version but a complete rewrite of the entire framework, and it will include breaking changes. We also looked at a number of the specific AngularJS 2.0 changes. AngularJS 2.0 will hopefully be released by the end of 2015.

Resources for Article:

Further resources on this subject:
- Setting Up The Rig [article]
- AngularJS Project [article]
- Working with Live Data and AngularJS [article]


Making specs more concise (Intermediate)

Packt
13 Sep 2013
6 min read
(For more resources related to this topic, see here.)

Making specs more concise (Intermediate)

So far, we've written specifications that work in the spirit of unit testing, but we're not yet taking advantage of the important features of RSpec that make writing tests more fluid. The specs illustrated so far closely resemble unit testing patterns and have multiple assertions in each spec.

How to do it...

1. Refactor our specs in spec/lib/location_spec.rb to make them more concise:

require "spec_helper"

describe Location do
  describe "#initialize" do
    subject { Location.new(:latitude => 38.911268, :longitude => -77.444243) }
    its (:latitude) { should == 38.911268 }
    its (:longitude) { should == -77.444243 }
  end
end

2. While running the spec, you see a clean output, because we've separated the multiple assertions into their own specifications:

Location
  #initialize
    latitude
      should == 38.911268
    longitude
      should == -77.444243

Finished in 0.00058 seconds
2 examples, 0 failures

The preceding output requires either the .rspec file to contain the --format doc line, or the --format doc argument to be passed when executing rspec on the command line. The default output format prints dots (.) for passing tests, asterisks (*) for pending tests, E for errors, and F for failures.

3. It is time to add something meatier. As part of our project, we'll want to determine whether a Location is within a certain mile radius of another point. In spec/lib/location_spec.rb, we'll write some tests, starting with a new block called context. The first spec we want to write is the happy-path test; then we'll write tests to drive out the other states. I am going to re-use our Location instance for multiple examples, so I'll refactor it into another new construct, a let block:

require "spec_helper"

describe Location do
  let(:latitude) { 38.911268 }
  let(:longitude) { -77.444243 }
  let(:air_space) { Location.new(:latitude => 38.911268, :longitude => -77.444243) }

  describe "#initialize" do
    subject { air_space }
    its (:latitude) { should == latitude }
    its (:longitude) { should == longitude }
  end
end

4. Because we've just refactored, we'll execute rspec and see the specs pass.

5. Now, let's spec out a Location#near? method by writing the code we wish we had:

describe "#near?" do
  context "when within the specified radius" do
    subject { air_space.near?(latitude, longitude, 1) }
    it { should be_true }
  end
end

6. Running rspec now results in failure because there's no Location#near? method defined. The following naive implementation passes the test (in lib/location.rb):

def near?(latitude, longitude, mile_radius)
  true
end

7. Now, we can drive out a failure case, which will force a real implementation, in spec/lib/location_spec.rb within the describe "#near?" block:

context "when outside the specified radius" do
  subject { air_space.near?(latitude * 10, longitude * 10, 1) }
  it { should be_false }
end

Running the specs now results in the expected failure.
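For reference, the implementation that follows computes the great-circle distance with the haversine formula. With latitudes \varphi_1, \varphi_2 and the differences \Delta\varphi, \Delta\lambda expressed in radians, and R the Earth's radius:

a = \sin^2\left(\frac{\Delta\varphi}{2}\right) + \cos\varphi_1 \, \cos\varphi_2 \, \sin^2\left(\frac{\Delta\lambda}{2}\right)

d = R \cdot 2\,\operatorname{atan2}\left(\sqrt{a}, \sqrt{1-a}\right)

The point is "near" when d is at most mile_radius (the code below uses R = 3959 miles).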
The following is a passing implementation of the haversine formula in lib/location.rb that satisfies both cases:

R = 3_959 # Earth's radius in miles, approx

def near?(lat, long, mile_radius)
  to_radians = Proc.new { |d| d * Math::PI / 180 }
  dist_lat = to_radians.call(lat - self.latitude)
  dist_long = to_radians.call(long - self.longitude)
  lat1 = to_radians.call(self.latitude)
  lat2 = to_radians.call(lat)
  a = Math.sin(dist_lat/2) * Math.sin(dist_lat/2) +
      Math.sin(dist_long/2) * Math.sin(dist_long/2) *
      Math.cos(lat1) * Math.cos(lat2)
  c = 2 * Math.atan2(Math.sqrt(a), Math.sqrt(1-a))
  (R * c) <= mile_radius
end

Refactor both of the previous tests to be more expressive by utilizing predicate matchers:

describe "#near?" do
  context "when within the specified radius" do
    subject { air_space }
    it { should be_near(latitude, longitude, 1) }
  end

  context "when outside the specified radius" do
    subject { air_space }
    it { should_not be_near(latitude * 10, longitude * 10, 1) }
  end
end

Now that we have a passing spec for #near?, we can address a problem with our implementation: the #near? method is too complicated, and it could be a pain to maintain this code in the future. Refactor for ease of maintenance while ensuring that the specs still pass:

R = 3_959 # Earth's radius in miles, approx

def near?(lat, long, mile_radius)
  loc = Location.new(:latitude => lat, :longitude => long)
  R * haversine_distance(loc) <= mile_radius
end

private

def to_radians(degrees)
  degrees * Math::PI / 180
end

def haversine_distance(loc)
  dist_lat = to_radians(loc.latitude - self.latitude)
  dist_long = to_radians(loc.longitude - self.longitude)
  lat1 = to_radians(self.latitude)
  lat2 = to_radians(loc.latitude)
  a = Math.sin(dist_lat/2) * Math.sin(dist_lat/2) +
      Math.sin(dist_long/2) * Math.sin(dist_long/2) *
      Math.cos(lat1) * Math.cos(lat2)
  2 * Math.atan2(Math.sqrt(a), Math.sqrt(1-a))
end

Finally, run rspec again and see that the tests continue to pass. A successful refactor!

How it works...

The subject block takes the return value of the block (a new instance of Location in the previous example) and binds it to a locally scoped variable named subject. Subsequent it and its blocks can refer to that subject variable; furthermore, the its blocks implicitly operate on the subject variable to produce more concise tests. Here is an example illustrating how subject is used to produce easier-to-read tests:

describe "Example" do
  subject { { :key1 => "value1", :key2 => "value2" } }

  it "should have a size of 2" do
    subject.size.should == 2
  end
end

We can use subject from within the it block, and it will refer to the anonymous hash returned by the subject block. In the preceding test, we could have been more concise with an its block:

its (:size) { should == 2 }

We're not limited to just sending symbols to an its block; we can use strings too:

its ('size') { should == 2 }

When there is an attribute of subject you want to assert but the value cannot easily be turned into a valid Ruby symbol, you'll need to use a string. This string is not evaluated as Ruby code; it is only sent to the subject under test as a method of that class. Hashes, in particular, allow you to pass an anonymous array whose element is the key whose value you want to assert:

its ([:key1]) { should == "value1" }

There's more...

In the previous code examples, another block known as the context block was presented. The context block is a grouping mechanism for associating tests. For example, you may have a conditional branch in your code that changes the outputs of a method.
In that case, you may use two context blocks, one for each value. In our example, we're separating the happy path (when a given point is within the specified mile radius) from the alternative (when a given point is outside the specified mile radius). context is a useful construct that allows you to declare let and other blocks within it, and those declarations apply only within the scope of the containing context.

Summary

This article demonstrated idiomatic RSpec code that makes good use of the RSpec Domain Specific Language (DSL).

Resources for Article:

Further resources on this subject:
- Quick start - your first Sinatra application [Article]
- Behavior-driven Development with Selenium WebDriver [Article]
- External Tools and the Puppet Ecosystem [Article]


Introduction to React Native

Eugene Safronov
23 Sep 2015
7 min read
React is an open source JavaScript library made by Facebook for building UI applications. The project has a strong emphasis on the component-based approach and utilizes the full power of JavaScript for constructing all elements. The React Native project was introduced during the first React conference in January 2015; it allows you to build native mobile applications using the same concepts as React. In this post I am going to explain the main building blocks of React Native through the example of an iOS demo application. I assume that you have previous experience in writing web applications with React.

Setup

Please go through the Getting Started section on the React Native website if you would like to build the application on your machine.

Quick start

When all of the necessary tools are installed, initialize the new React application with the following command:

react-native init LastFmTopArtists

After the command fetches the code and the dependencies, you can open the new project (LastFmTopArtists/LastFmTopArtists.xcodeproj) in Xcode. Then you can build and run the app with cmd+R; you will see a similar screen on the iOS simulator. You can make changes in index.ios.js, press cmd+R again, and see the changes instantly in the simulator.

Demo app

In this post I will show you how to build a list of popular artists using the Last.fm API. We will display them with the help of the ListView component and open an artist's page using WebView.

First screen

Let's start with adding a new screen to our application; for now it will contain dummy text. Create a file named ArtistListScreen with the following code:

var React = require('react-native');
var {
  ListView,
  StyleSheet,
  Text,
  View,
} = React;

class ArtistListScreen extends React.Component {
  render() {
    return (
      <View style={styles.container}>
        <Text>Artist list would be here</Text>
      </View>
    );
  }
}

var styles = StyleSheet.create({
  container: {
    flex: 1,
    backgroundColor: 'white',
    marginTop: 64
  }
});

module.exports = ArtistListScreen;

Here are some things to note:

- I declare React components with the ES6 class syntax.
- ES6 destructuring assignment syntax is used for the React object declarations.
- Flexbox is the default layout system in React Native. Flex values can be either integers or doubles, indicating the relative size of the box; when you have multiple elements, they fill the view in proportion to their flex values.
- ListView is declared but will be used later.

From index.ios.js we call ArtistListScreen using a NavigatorIOS component:

var React = require('react-native');
var ArtistListScreen = require('./ArtistListScreen');
var {
  AppRegistry,
  NavigatorIOS,
  StyleSheet
} = React;

var LastFmArtists = React.createClass({
  render: function() {
    return (
      <NavigatorIOS
        style={styles.container}
        initialRoute={{
          title: "last.fm Top Artists",
          component: ArtistListScreen
        }}
      />
    );
  }
});

var styles = StyleSheet.create({
  container: {
    flex: 1,
    backgroundColor: 'white',
  },
});

Switch to the iOS simulator and refresh with cmd+R to see the result.

ListView

Now that we have the empty screen, let's render some mock data in a ListView component. This component has a number of performance optimizations, such as rendering only visible elements and removing those that go off screen.
The new version of ArtistListScreen looks like the following:

class ArtistListScreen extends React.Component {
  constructor(props) {
    super(props);
    this.state = {
      isLoading: false,
      dataSource: new ListView.DataSource({
        rowHasChanged: (row1, row2) => row1 !== row2
      })
    };
  }

  componentDidMount() {
    this.loadArtists();
  }

  loadArtists() {
    this.setState({
      dataSource: this.getDataSource([{name: 'Muse'}, {name: 'Radiohead'}])
    });
  }

  getDataSource(artists: Array<any>): ListView.DataSource {
    return this.state.dataSource.cloneWithRows(artists);
  }

  renderRow(artist) {
    return (
      <Text>{artist.name}</Text>
    );
  }

  render() {
    return (
      <View style={styles.container}>
        <ListView
          dataSource={this.state.dataSource}
          renderRow={this.renderRow.bind(this)}
          automaticallyAdjustContentInsets={false}
        />
      </View>
    );
  }
}

Side notes:

- The DataSource is an interface that ListView uses to determine which rows have changed over the course of updates.
- The ES6 constructor is the analogue of getInitialState.

API token

The Last.fm web API is free to use, but you will need a personal API token in order to access it. First join Last.fm, and then get an API account.

Fetching real data

Assuming you have successfully set up the API account, let's call the real web service using the fetch API:

const API_KEY = 'put token here';
const API_URL = 'http://ws.audioscrobbler.com/2.0/?method=geo.gettopartists&country=ukraine&format=json&limit=40';
const REQUEST_URL = API_URL + '&api_key=' + API_KEY;

loadArtists() {
  this.setState({ isLoading: true });
  fetch(REQUEST_URL)
    .then((response) => response.json())
    .catch((error) => {
      console.error(error);
    })
    .then((responseData) => {
      this.setState({
        isLoading: false,
        dataSource: this.getDataSource(responseData.topartists.artist)
      });
    })
    .done();
}

ArtistCell

Since we have real data, it is time to add the artists' images and display their ranks. Let's move the artist cell display logic into a separate ArtistCell component:

'use strict';

var React = require('react-native');
var {
  Image,
  View,
  Text,
  TouchableHighlight,
  StyleSheet
} = React;

class ArtistCell extends React.Component {
  render() {
    return (
      <View>
        <View style={styles.container}>
          <Image
            source={{uri: this.props.artist.image[2]["#text"]}}
            style={styles.artistImage}
          />
          <View style={styles.rightContainer}>
            <Text style={styles.rank}>## {this.props.artist["@attr"].rank}</Text>
            <Text style={styles.name}>{this.props.artist.name}</Text>
          </View>
        </View>
        <View style={styles.separator}/>
      </View>
    );
  }
}

var styles = StyleSheet.create({
  container: {
    flex: 1,
    flexDirection: 'row',
    justifyContent: 'center',
    alignItems: 'center',
    padding: 5
  },
  artistImage: {
    height: 84,
    width: 126,
    marginRight: 10
  },
  rightContainer: {
    flex: 1
  },
  name: {
    textAlign: 'center',
    fontSize: 14,
    color: '#999999'
  },
  rank: {
    textAlign: 'center',
    marginBottom: 2,
    fontWeight: '500',
    fontSize: 16
  },
  separator: {
    height: 1,
    backgroundColor: '#E3E3E3',
    flex: 1
  }
});

module.exports = ArtistCell;

Changes in ArtistListScreen:

// declare the new component
var ArtistCell = require('./ArtistCell');

// use it in the renderRow method:
renderRow(artist) {
  return (
    <ArtistCell artist={artist} />
  );
}

Press cmd+R in the iOS simulator to see the updated list.

WebView

The last piece of the application is opening a web page when a row in the ListView is clicked.
Declare a new Web component wrapping WebView:

'use strict';

var React = require('react-native');
var {
  View,
  WebView,
  StyleSheet
} = React;

class Web extends React.Component {
  render() {
    return (
      <View style={styles.container}>
        <WebView url={this.props.url}/>
      </View>
    );
  }
}

var styles = StyleSheet.create({
  container: {
    flex: 1,
    backgroundColor: '#F6F6EF',
    flexDirection: 'column',
  },
});

Web.propTypes = {
  url: React.PropTypes.string.isRequired
};

module.exports = Web;

Then, using TouchableHighlight, we call onOpenPage from ArtistCell:

class ArtistCell extends React.Component {
  render() {
    return (
      <View>
        <TouchableHighlight
          onPress={this.props.onOpenPage}
          underlayColor='transparent'>
          <View style={styles.container}>
            <Image
              source={{uri: this.props.artist.image[2]["#text"]}}
              style={styles.artistImage}
            />
            <View style={styles.rightContainer}>
              <Text style={styles.rank}>## {this.props.artist["@attr"].rank}</Text>
              <Text style={styles.name}>{this.props.artist.name}</Text>
            </View>
          </View>
        </TouchableHighlight>
        <View style={styles.separator}/>
      </View>
    );
  }
}

Finally, open the web page from the ArtistListScreen component:

// declare the new component
var WebView = require('WebView');

class ArtistListScreen extends React.Component {
  // will be called on touch from ArtistCell
  openPage(url) {
    this.props.navigator.push({
      title: 'Web View',
      component: WebView,
      passProps: {url}
    });
  }

  renderRow(artist) {
    return (
      <ArtistCell
        artist={artist}
        // specify the artist's url on render
        onOpenPage={this.openPage.bind(this, artist.url)}
      />
    );
  }
}

Now a touch on any cell in the ListView will load a web page for the selected artist.

Conclusion

You can explore the source code of the app in the GitHub repo. For me it was real fun to play with React Native: I found debugging in Chrome and the error stack messages extremely easy to work with, and by using React's component-based approach you can build a complex UI without much effort. I highly recommend exploring this technology for rapid prototyping, and maybe for your next awesome project.

Useful links

- Building a flashcard app with React Native
- Examples of React Native apps
- React Native videos
- Video course on React Native

Want more JavaScript? Visit our dedicated page here.

About the author

Eugene Safronov is a software engineer with a proven record of delivering high-quality software. He has extensive experience building successful teams and adjusting development processes to a project's needs. His primary focuses are the Web (.NET, Node.js stacks) and cross-platform mobile development (native and hybrid). He can be found on Twitter @sejoker.
article-image-setting-payment-model-opencart
Packt
25 Aug 2010
8 min read
Save for later

Setting Payment Model in OpenCart

Packt
25 Aug 2010
8 min read
(For more resources on OpenCart, see here.)

Shopping cart system

A shopping cart is special software that allows customers to add products to or remove products from a basket built from the store catalogue, and then complete the order. The shopping cart also automatically updates the total amount the customer will pay as products are added to or removed from the basket. OpenCart provides a built-in shopping cart system with all of this functionality, so you don't need to install or buy separate shopping cart software.

Merchant account

A merchant account is a special account type that differs from a usual bank account. Its sole purpose is to accept credit card payments. Opening a merchant account requires a contract with the credit card network providers. Authorized card payments on the store are transferred to the merchant account. Then, as a merchant, we can transfer the amount from the merchant account to a bank account (checking account). Since opening a merchant account can be a tiresome process for most businesses and individuals, various online businesses provide this functionality. Curious readers can learn the details of merchant accounts at the following links:

http://en.wikipedia.org/wiki/Merchant_account
http://www.merchantaccount.com/

Payment gateway

A payment gateway is the online analogue of the physical credit card processing terminal found in retail shops. Its function is to process credit card information and return the results to the store system. You can think of the payment gateway as the element sitting between an online store and the credit card network. The software part of this service is included in OpenCart, but we will have to use one of the payment gateway services.

Understanding online credit card processing

The following diagram shows the standard credit card processing flow in detail. Note that it is not essential to know every detail of the steps shown on a red background. These parts are executed on our behalf by the payment system we use, so they are isolated from both the store and the customer. For example, PayPal is such a system, which we will now learn about in detail. Let's walk through the flow step by step to clearly understand the whole process:

1. A customer reaches the checkout page after filling the shopping cart with products. He/she enters the credit card information and clicks on the Pay button.
2. The store checkout page securely sends these details, along with the total amount, to the payment gateway.
3. The payment gateway starts a series of processes. First of all, the information is passed to the merchant's bank processor, where the merchant account was opened before.
4. The information is then sent to the credit card network by this processor. Visa and MasterCard are two of the most popular credit card networks.
5. The credit card network checks the validity of the credit card and sends the information to the customer's credit card issuer bank.
6. As a result, the bank rejects or approves the transaction and sends the information back to the credit card network.
7. Through the same routing in reverse, the payment information is finally submitted back to the online store with a special code.

All this happens in a few seconds, and the information flow starting from the payment gateway is isolated from both the customer and the merchant. This means we don't have to deal with what goes on after the information is sent to the payment gateway.
As a merchant, we only need the result of the transaction. After the information is processed by the credit card network during Step 6, the transaction funds are transferred to the merchant account by the credit card network, as shown in Step a. Then, the merchant can transfer the funds from the merchant account to the usual checking bank account, automatically or manually, as shown in Step b.

OpenCart payment methods

The current OpenCart version supports many established payment systems, including PayPal services, Authorize.net, Moneybookers, and 2Checkout, as well as basic payment options such as Cash on Delivery, Bank Transfer, and Check/money order. We can also get more payment gateway modules in the OpenCart extensions section by searching under Payment Methods:

http://www.opencart.com/index.php?route=extension/extension

We will now briefly look at the most widely used methods and their differences and similarities.

PayPal

PayPal is one of the most popular and easiest-to-use systems for accepting credit cards in an online store. PayPal has two major products that can be used in OpenCart through built-in modules:

PayPal Website Payment Standard
PayPal Website Payment Pro

Both of these payment methods provide payment gateway and merchant account functionality. Let's look at the details of each.

PayPal Website Payment Standard

This is the easiest way to start accepting credit card payments in an online store. For merchants, a simple bank account and a PayPal account are enough to take payments. There are no monthly fees or setup costs charged by PayPal. The only cost is a small fixed percentage taken by PayPal on each transaction, so you should take this into account when pricing the items in the store. Here is the link to learn about the latest commission rates per transaction:

http://merchant.paypal.com

When the customer clicks on the checkout button in OpenCart, he/she will be redirected to the PayPal site to continue with the payment. As you can see from the following sample screenshot, a customer can provide credit card information directly or log in to his/her PayPal account to pay from the PayPal account balance:

In the next step, after the final review, the user clicks on the Pay Now button. Notice that PayPal automatically localizes the total amount according to the PayPal owner's account currency. In this case, the price is calculated according to Dollar-Euro exchange rates. After the payment, the PayPal screen shows the result of the payment. The screen doesn't return to the merchant store automatically; there is a button for it: Return to Merchant. Finally, the website user is informed about the result of the purchase in the OpenCart store.

The main advantage of PayPal Website Payment Standard is that it is easy to implement, and many people are familiar with using it. One minor disadvantage is that some customers may abandon the purchase, since the payment flow temporarily leaves the store to complete the transaction on the PayPal website.

PayPal Website Payment Pro

This is the paid PayPal solution for an online store, acting as both payment gateway and merchant account. The biggest difference from PayPal Website Payment Standard is that customers do not leave the website for credit card processing. The credit card information is processed entirely in the online store, which is the usual approach of established e-commerce websites. The customers will not even know who processes the cards.
Unless we put a PayPal logo on the store ourselves, this information is well encapsulated. Using this method also requires only a bank account and a PayPal account for the merchant. PayPal charges a monthly fee for this service, and individual transactions are also commissioned by PayPal. This is a very professional way of processing credit cards online for a store, but it can have a negative effect on some customers: some customers need to see an indication of trust from the store before making a purchase. So, depending on the store owner's choice, it would be wise to add a PayPal logo and a remark stating that "Credit cards are processed by PayPal safely and securely".

For a beginner OpenCart administrator who wants to use PayPal for the online store, it is recommended to get experience with the free Standard payment option first and then upgrade to the Pro option. We can get more information on the PayPal Website Payment Pro service at:

http://merchant.paypal.com

At the time of writing this book, PayPal charges only a fixed monthly fee ($30) and commissions on each transaction. There are no other setup costs or hidden charges.

PayFlow Pro payment gateway

If we already have a merchant account, we don't need to pay extra for one by using PayPal Standard or PayPal Pro. PayFlow Pro is cheaper than the other PayPal services and allows us to accept credit card payments into an existing merchant account. Unfortunately, OpenCart currently does not support it as a built-in module, but there are both free and paid modules. You can get them from the official OpenCart contributions page at:

http://www.opencart.com/index.php?route=extension/extension

Creating Content in Drupal 7

Packt
03 Dec 2010
3 min read
Drupal 7 First Look

Learn the new features of Drupal 7, how they work, and how they will impact you:

Get to grips with all of the new features in Drupal 7
Upgrade your Drupal 6 site, themes, and modules to Drupal 7
Explore the new Drupal 7 administration interface and map your Drupal 6 administration interface to the new Drupal 7 structure
Complete coverage of the DBTNG database layer, with usage examples and all API changes for both themes and modules

(For more resources on Drupal 7, see here.)

Creating content for your site is at the core of any Content Management System like Drupal. The primary changes in Drupal 7 relate to an updated interface. Let's look at the new interface in detail.

Selecting a content type to create

To create content in Drupal 7, first log in to your site and then click on Content in the site toolbar. Drupal will now display the new Content Administration page.

In Drupal 6, this page could be displayed by selecting Administer | Content Management | Content from the Navigation menu. From here, click on the Add new content link. Drupal will now display a page allowing you to select the type of content you want to create. Depending on the modules you have installed and enabled, you will have different content types available. In previous versions of Drupal, this page could be reached by selecting the Create Content link from the Navigation menu.

You can also select the type of content to add using the shortcut bar, which you can access by clicking on the toggle at the far right of the toolbar. The shortcut bar holds a list of links that can be used to quickly access commonly used functionality. You can customize the links in the shortcut bar, and users can either use the default set of shortcuts or have their own.

Now select the type of content you want to create. For this example, we will use the Basic page type.

Content UI

The interface for creating content has been altered drastically from Drupal 6. Let's go through it in detail. The top section of the page should be familiar to experienced editors. This is the place to enter your title as well as the full text of the page. In a departure from previous versions, the node summary, which is used when multiple nodes are displayed on a page, is an entirely separate, optional field.

Moodle 2.0 Multimedia: Creating and Integrating Screencasts and Videos

Packt
23 May 2011
8 min read
Moodle 2.0 Multimedia Cookbook

Add images, videos, music, and much more to make your Moodle course interactive and fun.

(For more resources on Moodle 2.0, see here.)

Introduction

Moodle 2.0 offers new features that make it easier to insert videos, especially from the http://www.youtube.com website. You can find them easily from the file picker, provided you have administrative access to the course. Bear in mind that you need to be an administrator in order to enable this option.

This article covers different ways to create and interact using either screencasts or videos. We will work with several multimedia assets concerning the baseline topic of Wildlife. This topic has many resources that can be integrated with screencasts and videos available on the Web. Creating screencasts using free and open source software available on the Web is one of the main goals of this article. There is plenty of commercial software that can be used to create screencasts, but we will not focus on it.

Videos can be recorded in several ways. You may use your cell phone, camera, or the webcam of your computer. We will focus on how to create them and upload them into our Moodle course. We can also take a recorded video from YouTube and upload it directly from the file picker in Moodle 2.0. You can also design a playlist in order to combine several videos and let your students watch them in a row. We do this by creating an account on YouTube. The channel on YouTube can be either public or private, depending on how we want to carry it out.

You can create screencasts in order to present information to your students instead of showing presentations made using OpenOffice, PowerPoint, or Microsoft Word. Changing any of these into a screencast is more appealing to students and not such a difficult task to carry out either. We can create an explanation by recording our voice over a virtual board, which we can choose to make visible to the audience; otherwise, our explanations can only be heard, with no visualization. This is quite an important aspect to take into account, especially in teaching, because students need a dynamic explanation from their teacher.

There are several applications that can be used to create screencasts. One of them is CamStudio. This open source software captures onscreen video and audio into AVI files. Its disadvantage is that only Windows users can use it. You can download it from http://camstudio.com/.

It is time for Mac users. Copernicus is a free program for Mac that focuses on making quick films, saving the recorded video for quick access. It does not record audio. You can download it from http://danicsoft.com/software/copernicus/.

We also need a tool for both Mac and Windows that is free and open source: JingProject.com is such software. It not only records video, but also allows you to take a picture, draw or add a message on it, and upload the media to a free hosting account. A URL is provided so you can watch the video or the image. You can download it from the following website: http://www.techsmith.com/download/jing/.

Screencast-o-matic is another tool. It is based on Java and does not need to be downloaded at all. It allows you to upload automatically, and it works well on both Mac and Windows machines.
You can use it at http://www.screencast-o-matic.com/. This is the tool we will work with to create a screencast. We may also modify videos to make them suitable for learning. We can add annotations in different ways so as to interact with our students through the video; that is to say, we add our comments instead of our voice, so that students read what we need to tell them.

Creating a screencast

In this recipe, we create a screencast and upload it to our Moodle course. The baseline topic is Wildlife; therefore, in this recipe we will explain to our students where wild animals are located. We can paste in a world map of the different animals, while adding extra data through the audio. We can also add more information using different types of images inserted in the map.

Getting ready

Before creating the screencast, plan the whole sequence of the explanation that you want to show to your students. We will use a very useful Java applet available at http://www.screencast-o-matic.com/. Screencast-o-matic requires the free Java Runtime Environment (also known as JRE) on both the teacher's and the students' computers. You can download and install its latest version from http://java.sun.com.

How to do it...

First of all, design the background scene of the screencast to work with. Afterwards, go to the website http://www.screencast-o-matic.com/. Follow these steps to create the screencast:

1. Click on Start recording. Another pop-up window appears, as shown in the following screenshot:
2. Resize the frame to surround the recording area that you want to record.
3. Click on the recording button (the red button). If you want to pause, click on the pause button or press Alt + P, as shown in the following screenshot:
4. If you want to integrate the webcam or a Bluetooth video, click on the upwards arrow on this icon, as shown in the following screenshot:
5. When the screencast is finished, click on Done. You can preview the screencast after you finish designing it. If you need to edit it, click on Go back to add more. If you are satisfied with the preview, click on Done with this screencast, as shown in the following screenshot:
6. When the screencast is finished, our next task is to export it, because we need to upload it to our Moodle course. Click on Export Movie.
7. Click on the downwards arrow next to Type and choose Flash (FLV), as shown in the following screenshot:
8. Customize the Size and Options blocks as shown in the previous screenshot, or as you wish. When you finish, click on Export.
9. Write a name for the file and click on Save. When the file is exported, click on Go back and do more with this screencast if you want to edit it. Click on Done with this screencast if you are satisfied with the result. A pop-up window appears; click on OK.

How it works...

We have just created a screencast about wild animals, which students can watch to learn about the places where wild animals live around the world. We now need to upload it to our Moodle course. It is a passive resource; therefore, we can add it as a resource or design an activity around it. In this case, we design an activity. Choose the weekly outline section where you want to insert it, and follow these steps:

1. Click on Add an activity | Online text within Assignments.
2. Complete the Assignment name and Description blocks.
3. Click on the Moodle Media icon | Find or upload a sound, video or applet ...
| Upload a file | Browse, look for the file that you want to upload, and click on it.
4. Click on Open | Upload this file | Insert.
5. Click on Save and return to course.
6. Click on the activity. It appears as shown in the following screenshot:

There's more...

If we create a screencast that lasts around 30 minutes or longer, it will take a long time to upload to our Moodle course. In that case, it is advisable to watch the screencast using a free and open source media player, namely VLC Media Player.

VLC Media Player

You can download VLC Media Player from the following website: http://www.videolan.org/vlc/. It works with the most popular video file formats, such as AVI, MP4, and Flash, among others. Follow these steps to watch the screencast:

1. Click on Media | Open File, browse for the file that you want to open, and click on it.
2. Click on Open. The screencast is displayed, as shown in the following screenshot:

See also

Enhancing a screencast with annotations

Media File management in Joomla with FTP and Third-party Extensions

Packt
29 Jan 2010
5 min read
Alternative method of managing files and media

Depending on your circumstances, you may or may not have login access to your web hosting server. If you do not, then one of your only options for uploading files will be to use the Joomla! Media Manager or a third-party Joomla! extension. If you do have web server access, then a method called FTP (File Transfer Protocol) will give you greater control over your file management. FTP is a standard network protocol used to exchange and manipulate files over a computer network. A common use of FTP is to transfer files over the Internet between a local computer and a web server, which is exactly what we are after!

FTP clients

There are numerous FTP clients (programs) that can be used to transfer files. Many of them graphically show the file and transfer processes, making the management of your files from your computer to your web server very easy.

FTP programs

FTP programs range from free applications through to commercial software. Which one you choose will come down to personal preference. Most of these programs provide drag-and-drop features as well as easy directory navigation tools. There are excellent FTP programs for all major operating systems. Some popular options for Windows and Mac are:

Windows: FileZilla, CuteFTP, SmartFTP, WS_FTP, cURL
Mac: FileZilla, Transmit, Cyberduck, CuteFTP, Fetch, the FTP command in Terminal

If you perform a search using a popular search engine, you will find a definitive list of FTP programs. A suggestion would be to download trial versions to see which one suits your operating system and requirements. For the purpose of this topic, I have chosen to use and take screenshots from an FTP program called Transmit. It is a commercial product, but personally one of my favorites. For those who use Mac computers, Transmit is recognized as a wonderful FTP tool.

Connecting using an FTP program

Almost all FTP programs require similar information in order to connect to the web server. Depending on the program you are using, the naming of these fields may be slightly different. In order to connect, we need the following information, which is usually derived from setting up a user account (with FTP permissions) in your web hosting control panel:

Server: This is the server address that you wish to connect to. Typically this looks like ftp.yourserver.com. Depending on the domain name settings, it can sometimes be your domain name, such as www.yourserver.com.
User Name: This is your web server account username. Often this is in the format yourusername@yourdomain.com.
Password: This is your web server account password.
Initial Path: Some programs offer the ability to set an initial directory to navigate to upon login. If you do not have this option, do not worry.
Port: If the server requires connections on a port other than the default one, some programs let you enter a port number. It is best to leave this as the default setting unless you know what you are doing.

If you are having any issues connecting, it's best to contact your web hosting administrator to check your login access details. There are typically three types of FTP login methods available:

Anonymous FTP access: This is an easy connection method, as you do not need to include any user information. Sometimes software companies offer updates, or corporate and government websites offer forms and content to be downloaded, via an anonymous connection method.
Username: A simplified but restricted access level where you will be required to enter just a username. This type of FTP login is often used in school and intranet environments.
Username and Password: A more restricted security access level where the user requires both a username and password in order to access the server. This method is often used by web hosting companies who offer people the ability to upload files to the web server.

If your FTP connection is set to username (or username and password), then you should see a dialog box on the screen prompting you to enter your security access details.

Once connected, you will need to navigate to your webroot folder. Depending on your web server, this area may be named httpdocs, mainwebsite_html, public_html, or another naming convention. Your Joomla! website files will reside inside this directory. When you have located your main HTTP documents area and navigated inside it, you will see the default Joomla! media folder, named images.

When you have successfully navigated to your remote main media folder (or any of the subfolders within it), you will need to make sure that you also navigate locally to the folders or files you wish to upload. Most FTP programs offer you the ability to create new directories and to drag and drop files from your local computer to the web server. By using an FTP program, you gain additional control over your website files and content. FTP programs allow you to:

Create files and directories
Rename files and directories
Move files and directories
Delete files and directories
Set file and directory permissions

FTP programs offer you the most flexibility in managing your website files, but it is important to take care when using them. By connecting to your directory structure on the web server, you are navigating amongst core Joomla! files.
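As a quick sanity check of your connection details before reaching for a graphical client, the cURL tool mentioned in the Windows list above can also speak FTP from the command line. The following is a minimal sketch; ftp.yourserver.com, the credentials, logo.png, and the public_html webroot name are placeholders for your own values:

# List the contents of the initial directory on the server
curl --user yourusername:yourpassword ftp://ftp.yourserver.com/

# Upload a local file into the default Joomla! media folder
curl --user yourusername:yourpassword -T logo.png ftp://ftp.yourserver.com/public_html/images/

If both commands succeed, the same server, username, and password values should work in any of the FTP programs listed earlier.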

OpenCart: layout structure

Packt
29 Mar 2011
7 min read
In this article, we will look at the default layout structure of OpenCart. We'll examine its cascading stylesheet file and modify it according to our needs. First of all, we'll use reset style properties to eliminate cross-browser problems; we will discuss every property here and see its corresponding effect on our site. Then, we'll do some basic styling for the site, and for each change in style we will see why we did it.

Folder structure

In our main OpenCart folder, we have the admin and catalog sections. These are two separate subsystems. As the name says, admin contains the files and folders for administration operations; catalog contains the store files and folders. Both the admin and catalog sections have separate model, view, and controller folders, each with different subfolders. So, let's discuss this MVC structure.

OpenCart is built with the MVC design pattern, so it has models, views, and controllers. A user request is handled by an OpenCart controller; the controller fetches data using the model and renders the fetched data into the response using the view file. The following figure shows this MVC operation:

For theme modification, we will focus only on the view folder of the catalog in this article. It has javascript and theme folders. We place our themes under the theme folder and the necessary JavaScript files in the javascript folder. Each theme has an image, stylesheet, and template folder. We will see how we can create a new theme later in this article.

Theme file style

As we stated earlier, OpenCart uses the MVC design pattern, so the view files remain separated from the core code. These view files are .tpl files, placed under catalog/view/theme/default/template. The .tpl files are basically HTML files with PHP code embedded in them to display the necessary data. OpenCart doesn't use the Smarty template engine; rather, it uses embedded PHP code, which is easy to use. We assign the necessary data to PHP variables in the controller, and then output the variables in the .tpl view file. We can also use the global class reference variable. In the controller, we assign the value like this:

$this->data['shop_name'] = 'store';

Here, we assigned the value store to the shop_name variable. In the .tpl view file, we display the value like this:

<?php echo $shop_name; ?>

Creating a new theme

In this recipe, we will see the steps to create a new theme for OpenCart. There are some rules to follow when creating OpenCart themes.

Getting started

Let's get started with the steps to create a new theme for OpenCart.

How to do it

Following are the steps for creating a new theme for OpenCart:

1. First of all, we need to create a new folder under catalog/view/theme. For this project, we will name it shop.
2. Now, we need to copy some files from the default theme folder to our new theme folder. The files are the following:

catalog/view/theme/default/stylesheet/*.*
catalog/view/theme/default/image/*.*
catalog/view/theme/default/template/common/header.tpl

3. We have to edit some values in the header.tpl file for our new theme. We will replace all default keywords with our new theme name, shop. There are six places where we need to replace the theme name. The lines are the following:

<link rel="stylesheet" type="text/css" href="catalog/view/theme/shop/stylesheet/stylesheet.css" />
// other lines ...
<link rel="stylesheet" type="text/css" href="catalog/view/theme/shop/stylesheet/ie6.css" />
// other lines ...

<div class="div3">
  <a href="<?php echo str_replace('&', '&amp;', $special); ?>" style="background-image: url('catalog/view/theme/shop/image/special.png');"><?php echo $text_special; ?></a>
  <a onclick="bookmark(document.location, '<?php echo addslashes($title); ?>');" style="background-image: url('catalog/view/theme/shop/image/bookmark.png');"><?php echo $text_bookmark; ?></a>
  <a href="<?php echo str_replace('&', '&amp;', $contact); ?>" style="background-image: url('catalog/view/theme/shop/image/contact.png');"><?php echo $text_contact; ?></a>
  <a href="<?php echo str_replace('&', '&amp;', $sitemap); ?>" style="background-image: url('catalog/view/theme/shop/image/sitemap.png');"><?php echo $text_sitemap; ?></a>
</div>
// other lines ...

Now, save it.

4. Next, we will go to the admin area. First, log in with our stored admin credentials.
5. Go to System | Settings in the admin panel.
6. Go to the Store tab and change the theme in the select list from default to shop, our new theme. You can now make changes to your theme's CSS file.

Resetting layout styles

Before beginning our theme styling work, we must first reset all the styles. This will help us avoid cross-browser problems.

Getting started

We need to modify the stylesheet.css file of our new theme first. Go to catalog/view/theme/shop/stylesheet and open stylesheet.css in your favourite editor.

How to do it

Now, we will add reset styles to our stylesheet.css file. First, we need to reset the browser's margin, padding, and border properties. We will set styles for several different HTML tags, and we can add extra style properties to the list:

html, body, div, span, applet, object, iframe,
h1, h2, h3, h4, h5, h6, p, blockquote, pre,
a, abbr, acronym, address, big, cite, code,
del, dfn, em, font, img, ins, kbd, q, s, samp,
small, strike, strong, sub, sup, tt, var,
b, u, i, center,
dl, dt, dd, ol, ul, li,
fieldset, form, label, legend,
table, caption, tbody, tfoot, thead, tr, th, td {
  margin: 0;
  padding: 0;
  border: 0;
  outline: 0;
  font-size: 100%;
  vertical-align: baseline;
  background: transparent;
}

We will see the effect of our code in our store: the product images come closer together. Next, we adjust the line height of the body tag, so put the following code in the CSS file:

body {
  line-height: 1;
}

This also squeezes the lines in the body element, as the following image depicts: by applying the above style, the line height of the tabs becomes shorter.

We need to reset the style of ordered/unordered list elements, so we use the following reset value:

ol, ul {
  list-style: none;
}

This shows all ul and ol tags without the default bullet properties. Now, we will reset the blockquote element styles. We will see the changes if we use blockquotes in our HTML code:

blockquote, q {
  quotes: none;
}

blockquote:before, blockquote:after,
q:before, q:after {
  content: '';
  content: none;
}

For all elements, we change the focus styling attributes, setting the outline property to 0:

:focus {
  outline: 0;
}

Some browsers apply their own styling to insertions and deletions, so we use the following styling for that purpose:

ins {
  text-decoration: none;
}

del {
  text-decoration: line-through;
}

We will control the styling of our tables also.
We set the border and spacing qualities like the following:

table {
  border-collapse: collapse;
  border-spacing: 0;
}

We still need to set the cell spacing to 0. So, our complete reset styling becomes the following:

html, body, div, span, applet, object, iframe,
h1, h2, h3, h4, h5, h6, p, blockquote, pre,
a, abbr, acronym, address, big, cite, code,
del, dfn, em, font, img, ins, kbd, q, s, samp,
small, strike, strong, sub, sup, tt, var,
b, u, i, center,
dl, dt, dd, ol, ul, li,
fieldset, form, label, legend,
table, caption, tbody, tfoot, thead, tr, th, td {
  margin: 0;
  padding: 0;
  border: 0;
  outline: 0;
  font-size: 100%;
  vertical-align: baseline;
  background: transparent;
}

body {
  line-height: 1;
}

ol, ul {
  list-style: none;
}

blockquote, q {
  quotes: none;
}

blockquote:before, blockquote:after,
q:before, q:after {
  content: '';
  content: none;
}

:focus {
  outline: 0;
}

ins {
  text-decoration: none;
}

del {
  text-decoration: line-through;
}

table {
  border-collapse: collapse;
  border-spacing: 0;
}

We can place this at the start of our site's style file, stylesheet.css, or we can create a new file and put the content there instead. To do that, just create a new CSS file within the catalog/view/theme/shop/stylesheet folder; for example, we can name the new style file reset.css. If we use a new style file, then we need to add the style file in the controller.
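The article stops short of showing that last wiring step. As a quick alternative, consistent with how stylesheet.css is already referenced in this theme, you can simply link the new file from header.tpl. The following one-line sketch assumes the reset.css name chosen above:

<link rel="stylesheet" type="text/css" href="catalog/view/theme/shop/stylesheet/reset.css" />

Place it above the stylesheet.css link so that the reset rules are applied first and the theme's own styles can override them.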

Ruby with MongoDB for Web Development

Packt
23 Jul 2012
13 min read
Creating documents

Let's first see how we can create documents in MongoDB. As we have briefly seen, MongoDB deals with collections and documents instead of tables and rows.

Time for action – creating our first document

Suppose we want to create a book object with the following schema:

book = {
  name: "Oliver Twist",
  author: "Charles Dickens",
  publisher: "Dover Publications",
  published_on: "December 30, 2002",
  category: ['Classics', 'Drama']
}

On the Mongo CLI, we can add this book object to our collection using the following command:

> db.books.insert(book)

Suppose we also add the shelf collection (for example, the floor, the row, the column the shelf is in, the book indexes it maintains, and so on that are part of the shelf object), which has the following structure:

shelf = {
  name: 'Fiction',
  location: { row: 10, column: 3 },
  floor: 1,
  lex: { start: 'O', end: 'P' }
}

Remember, it's quite possible that a few years down the line some shelf instances may become obsolete and we might want to maintain their record. Maybe we could have another shelf instance containing only books that are to be recycled or donated. What can we do? We can approach this as follows:

The SQL way: Add additional columns to the table and ensure that there is a default value set in them. This adds a lot of redundancy to the data, reduces performance a little, and considerably increases the storage. Sad but true!

The NoSQL way: Add the additional fields whenever you want. The following are MongoDB's schemaless object model instances:

> db.book.shelf.find()
{ "_id" : ObjectId("4e81e0c3eeef2ac76347a01c"), "name" : "Fiction", "location" : { "row" : 10, "column" : 3 }, "floor" : 1 }
{ "_id" : ObjectId("4e81e0fdeeef2ac76347a01d"), "name" : "Romance", "location" : { "row" : 8, "column" : 5 }, "state" : "window broken", "comments" : "keep away from children" }

What just happened?

You will notice that the second object has more fields, namely comments and state. When fetching objects, it's fine if you get extra data. That is the beauty of NoSQL. When the first document is fetched (the one with the name Fiction), it will not contain the state and comments fields, but the second document (the one with the name Romance) will have them.

Are you worried about what will happen if we try to access non-existing data from an object, for example, accessing comments from the first object fetched? This can be resolved logically: we can check the existence of a key, default to a value in case it's not there, or ignore its absence. This is typically done in code anyway when we access objects.

Notice that when the schema changed, we did not have to add fields to every object with default values like we do when using a SQL database, so there is no redundant information in our database. This ensures that storage is minimal and, in turn, the object information fetched contains concise data. So there was no redundancy and no compromise on storage or performance. But wait! There's more.

NoSQL scores over SQL databases

The way many-to-many relations are managed tells us how we can do more with MongoDB than can simply be done in a relational database. The following is an example:

Each book can have reviews and votes given by customers. We should be able to see these reviews and votes, and also maintain a list of top-voted books. If we had to do this in a relational database, the relationship diagram would look like the following (get scared now!):
The vote_count and review_count fields are inside the books table and would need to be updated every time a user votes a book up/down or writes a review. So, to fetch a book along with its votes and reviews, we would need to fire three queries:

SELECT * FROM books WHERE id = 3;
SELECT * FROM reviews WHERE book_id = 3;
SELECT * FROM votes WHERE book_id = 3;

We could also use a join for this:

SELECT * FROM books
  JOIN reviews ON reviews.book_id = books.id
  JOIN votes ON votes.book_id = books.id;

In MongoDB, we can do this directly using embedded documents or relational documents.

Using MongoDB embedded documents

Embedded documents, as the name suggests, are documents that are embedded in other documents. This is a feature of MongoDB that cannot be replicated in relational databases. Ever heard of a table embedded inside another table?

Instead of four tables and a complex many-to-many relationship, we can say that reviews and votes are part of a book. So, when we fetch a book, the reviews and the votes automatically come along with it. Embedded documents are analogous to chapters inside a book: chapters cannot be read unless you open the book. Similarly, embedded documents cannot be accessed unless you access the document. For the UML savvy, embedded documents are similar to the contains or composition relationship.

Time for action – embedding reviews and votes

In MongoDB, the embedded object physically resides inside the parent. So if we had to maintain reviews and votes, we could model the object as follows:

book = {
  name: "Oliver Twist",
  reviews: [
    { user: "Gautam", comment: "Very interesting read" },
    { user: "Harry", comment: "Who is Oliver Twist?" }
  ],
  votes: ["Gautam", "Tom", "Dick"]
}

What just happened?

We now have reviews and votes inside the book. They cannot exist on their own. Did you notice that they look similar to JSON hashes and arrays? Indeed, they are an array of hashes. Embedded documents are just like hashes inside another object. There is a subtle difference between hashes and embedded objects, as we shall see later on in the book.

Have a go hero – adding more embedded objects to the book

Try to add more embedded objects, such as orders, inside the book document. It works!

order = {
  name: "Toby Jones",
  type: "lease",
  units: 1,
  cost: 40
}

Fetching embedded objects

We can fetch a book along with its reviews and votes by executing the following commands:

> var book = db.books.findOne({name : 'Oliver Twist'})
> book.reviews.length
2
> book.votes.length
3
> book.reviews
[
  { user: "Gautam", comment: "Very interesting read" },
  { user: "Harry", comment: "Who is Oliver Twist?" }
]
> book.votes
[ "Gautam", "Tom", "Dick" ]

This does indeed look simple, doesn't it? By fetching a single object, we are able to get the reviews and votes along with the data.

Use embedded documents only if you really have to! Embedded documents increase the size of the object, so if we have a large number of embedded documents, it can adversely impact performance: even to get just the name of the book, the reviews and the votes are fetched.

Using MongoDB document relationships

Just like we have embedded documents, we can also set up relationships between different documents.

Time for action – creating document relations

The following is another way to create the same relationship between books, users, reviews, and votes. This is more like the SQL way.
book = {
  _id: ObjectId("4e81b95ffed0eb0c23000002"),
  name: "Oliver Twist",
  author: "Charles Dickens",
  publisher: "Dover Publications",
  published_on: "December 30, 2002",
  category: ['Classics', 'Drama']
}

Every document that is created in MongoDB has an object ID associated with it. In the next chapter, we shall learn about object IDs in MongoDB. By using these object IDs, we can easily identify different documents; they can be considered as primary keys. So, we can also create the users, reviews, and votes collections as follows:

users: [
  {
    _id: ObjectId("8d83b612fed0eb0bee000702"),
    name: "Gautam"
  },
  {
    _id: ObjectId("ab93b612fed0eb0bee000883"),
    name: "Harry"
  }
]

reviews: [
  {
    _id: ObjectId("5e85b612fed0eb0bee000001"),
    user_id: ObjectId("8d83b612fed0eb0bee000702"),
    book_id: ObjectId("4e81b95ffed0eb0c23000002"),
    comment: "Very interesting read"
  },
  {
    _id: ObjectId("4585b612fed0eb0bee000003"),
    user_id: ObjectId("ab93b612fed0eb0bee000883"),
    book_id: ObjectId("4e81b95ffed0eb0c23000002"),
    comment: "Who is Oliver Twist?"
  }
]

votes: [
  {
    _id: ObjectId("6e95b612fed0eb0bee000123"),
    user_id: ObjectId("8d83b612fed0eb0bee000702"),
    book_id: ObjectId("4e81b95ffed0eb0c23000002")
  },
  {
    _id: ObjectId("4585b612fed0eb0bee000003"),
    user_id: ObjectId("ab93b612fed0eb0bee000883")
  }
]

What just happened?

Hmm!! Not very interesting, is it? It doesn't even seem right. That's because it isn't the right choice in this context. It's very important to know how to choose between nesting documents and relating them. In your object model, if you will never search by the nested document (that is, look up the parent from the child), embed it. Just in case you are not sure about whether you will need to search by an embedded document, don't worry too much: it does not mean that you cannot search among embedded objects. You can use Map/Reduce to gather the information.

Comparing MongoDB versus SQL syntax

This is a good time to sit back and evaluate the similarities and dissimilarities between MongoDB syntax and SQL syntax. Let's map them together:

SQL: SELECT * FROM books
MongoDB: db.books.find()

SQL: SELECT * FROM books WHERE id = 3;
MongoDB: db.books.find( { id : 3 } )

SQL: SELECT * FROM books WHERE name LIKE 'Oliver%'
MongoDB: db.books.find( { name : /^Oliver/ } )

SQL: SELECT * FROM books WHERE name LIKE '%Oliver%'
MongoDB: db.books.find( { name : /Oliver/ } )

SQL: SELECT * FROM books WHERE publisher = 'Dover Publications' AND published_date = "2011-8-01"
MongoDB: db.books.find( { publisher : "Dover Publications", published_date : ISODate("2011-8-01") } )

SQL: SELECT * FROM books WHERE published_date > "2011-8-01"
MongoDB: db.books.find( { published_date : { $gt : ISODate("2011-8-01") } } )

SQL: SELECT name FROM books ORDER BY published_date
MongoDB: db.books.find( {}, { name : 1 } ).sort( { published_date : 1 } )

SQL: SELECT name FROM books ORDER BY published_date DESC
MongoDB: db.books.find( {}, { name : 1 } ).sort( { published_date : -1 } )

SQL: SELECT votes.name FROM books JOIN votes WHERE votes.book_id = books.id
MongoDB: db.books.find( { votes : { $exists : 1 } }, { "votes.name" : 1 } )

Some more notable comparisons between MongoDB and relational databases are:

MongoDB does not support joins; instead it fires multiple queries or uses Map/Reduce. We shall soon see why the NoSQL faction does not favor joins.
SQL has stored procedures; MongoDB supports JavaScript functions.
MongoDB has indexes, similar to SQL.
MongoDB also supports Map/Reduce functionality.
MongoDB supports atomic updates like SQL databases.
Embedded or related objects are sometimes used instead of a SQL join.
MongoDB collections are analogous to SQL tables.
MongoDB documents are analogous to SQL rows.

Using Map/Reduce instead of join

We have seen this mentioned a few times earlier, and it's worth jumping into it, at least briefly. Map/Reduce is a concept that was introduced by Google in 2004. It's a way of distributed task processing: we "map" tasks to workers and then "reduce" the results.

Understanding functional programming

Functional programming is a programming paradigm that has its roots in lambda calculus. If that sounds intimidating, remember that JavaScript could be considered a functional language. The following is a snippet of functional programming:

$(document).ready( function () {
  $('#element').click( function () {
    // do something here
  });
  $('#element2').change( function () {
    // do something here
  })
});

We can have functions inside functions. Higher-level languages (such as Java and Ruby) support anonymous functions and closures but are still procedural. Functional programs rely on the results of one function being chained to other functions.

Building the map function

The map function processes a chunk of data. The data fed to this function could be accessed across a distributed filesystem, multiple databases, the Internet, or even any mathematical computation series!

function map(void) -> void

The map function "emits" information that is collected by the "mystical super gigantic computer program" and fed to the reducer functions as input. MongoDB as a database supports this paradigm, making it "the all powerful" (of course I am joking, but it does indeed make MongoDB very powerful).

Time for action – writing the map function for calculating vote statistics

Let's assume we have a document structure as follows:

{
  name: "Oliver Twist",
  votes: ['Gautam', 'Harry'],
  published_on: "December 30, 2002"
}

The map function for such a structure could be as follows:

function() {
  emit( this.name, {votes : this.votes} );
}

What just happened?

The emit function emits the data. Notice that the data is emitted as a (key, value) structure.

Key: This is the parameter over which we want to gather information. Typically it would be some primary key, or some key that helps identify the information. For the SQL savvy, the key is typically the field we use in the GROUP BY clause.
Value: This is a JSON object. It can have multiple values, and this is the data that is processed by the reduce function.

We can call emit more than once in the map function. This would mean we are processing data multiple times for the same object.

Building the reduce function

The reduce functions are the consumer functions that process the information emitted from the map functions and emit the results to be aggregated. For each emitted piece of data from the map function, a reduce function emits the result. MongoDB collects and collates the results. This makes the system of collection and processing a massively parallel system, giving the almighty power to MongoDB.

The reduce functions have the following signature:

function reduce(key, values_array) -> value

Time for action – writing the reduce function to process emitted information

This could be the reduce function for the previous example:

function(key, values) {
  var result = {votes: 0};
  values.forEach(function(value) {
    result.votes += value.votes;
  });
  return result;
}

What just happened?

reduce takes an array of values, so it is important to process the array every time. There are various options to Map/Reduce that help us process data.
Let's analyze this function in more detail:

function(key, values) {
  var result = {votes: 0};
  values.forEach(function(value) {
    result.votes += value.votes;
  });
  return result;
}

The variable result has a structure similar to what was emitted from the map function. This is important, as we want the results from every document in the same format. If we need to process more results, we can use the finalize function (more on that later). The values are always passed as arrays. It's important that we iterate over the array, as there could be multiple values emitted from different map functions with the same key. So, we process the array to ensure that we don't overwrite the results, and collate them instead.
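To see the pair in action end to end, the mongo shell exposes db.collection.mapReduce(), which runs both functions server-side and can write the aggregated results to an output collection. The following is a minimal sketch against the books collection used throughout; the vote_stats output name is our own choice, and note that this variant emits this.votes.length (a number) rather than the raw votes array, so that the numeric addition in reduce is well defined:

var mapFn = function() {
  // One emit per book: the key is the name, the value carries the vote count
  emit(this.name, { votes: this.votes.length });
};

var reduceFn = function(key, values) {
  // Collate the counts emitted under the same key
  var result = { votes: 0 };
  values.forEach(function(value) {
    result.votes += value.votes;
  });
  return result;
};

> db.books.mapReduce(mapFn, reduceFn, { out: "vote_stats" })
> db.vote_stats.find()
// yields documents shaped like { _id: <book name>, value: { votes: <n> } }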

Enhancing Your Blog with Advanced Features

Packt
22 Sep 2015
8 min read
In this article, Antonio Melé, the author of the Django by Example book, shows how to use Django forms and ModelForms. You will let your users share posts by e-mail, and you will be able to extend your blog application with a comment system. You will also learn how to integrate third-party applications into your project and build complex QuerySets to get useful information from your models. In this article, you will learn how to add tagging functionality using a third-party application.

(For more resources related to this topic, see here.)

Adding tagging functionality

After implementing our comment system, we are going to create a system for adding tags to our posts. We are going to do this by integrating a third-party Django tagging application into our project. django-taggit is a reusable application that primarily offers you a Tag model and a manager for easily adding tags to any model. You can take a look at its source code at https://github.com/alex/django-taggit.

First, you need to install django-taggit via pip by running the pip install django-taggit command. Then, open the settings.py file of the project and add taggit to your INSTALLED_APPS setting, as follows:

INSTALLED_APPS = (
    # ...
    'mysite.blog',
    'taggit',
)

Then, open the models.py file of your blog application and add to the Post model the TaggableManager manager provided by django-taggit, as follows:

from taggit.managers import TaggableManager

# ...

class Post(models.Model):
    # ...
    tags = TaggableManager()

You just added tags to this model. The tags manager will allow you to add, retrieve, and remove tags from Post objects. Run the python manage.py makemigrations blog command to create a migration for your model changes. You will get the following output:

Migrations for 'blog':
  0003_post_tags.py:
    Add field tags to post

Now, run the python manage.py migrate command to create the required database tables for the django-taggit models and synchronize your model changes. You will see output indicating that the migrations have been applied:

Operations to perform:
  Apply all migrations: taggit, admin, blog, contenttypes, sessions, auth
Running migrations:
  Applying taggit.0001_initial... OK
  Applying blog.0003_post_tags... OK

Your database is now ready to use the django-taggit models. Open the shell with the python manage.py shell command and learn how to use the tags manager. First, we retrieve one of our posts (the one with the ID 1):

>>> from mysite.blog.models import Post
>>> post = Post.objects.get(id=1)

Then, add some tags to it and retrieve its tags back to check that they were successfully added:

>>> post.tags.add('music', 'jazz', 'django')
>>> post.tags.all()
[<Tag: jazz>, <Tag: django>, <Tag: music>]

Finally, remove a tag and check the list of tags again:

>>> post.tags.remove('django')
>>> post.tags.all()
[<Tag: jazz>, <Tag: music>]

This was easy, right? Run the python manage.py runserver command to start the development server again and open http://127.0.0.1:8000/admin/taggit/tag/ in your browser. You will see the admin page with the list of Tag objects of the taggit application.

Navigate to http://127.0.0.1:8000/admin/blog/post/ and click on a post to edit it. You will see that posts now include a new Tags field, where you can easily edit tags.

Now, we are going to edit our blog posts to display the tags.
Open the blog/post/list.html template and add the following HTML code below the post title:

<p class="tags">Tags: {{ post.tags.all|join:", " }}</p>

The join template filter works like the Python string join method, concatenating elements with the given string. Open http://127.0.0.1:8000/blog/ in your browser. You will see the list of tags under each post title.

Now, we are going to edit our post_list view to let users see all posts tagged with a given tag. Open the views.py file of your blog application, import the Tag model from django-taggit, and change the post_list view to optionally filter posts by tag, as follows:

from taggit.models import Tag

def post_list(request, tag_slug=None):
    post_list = Post.published.all()
    if tag_slug:
        tag = get_object_or_404(Tag, slug=tag_slug)
        post_list = post_list.filter(tags__in=[tag])
    # ...

The view now takes an optional tag_slug parameter with a default value of None. This parameter will come in the URL. Inside the view, we build the initial QuerySet, retrieving all the published posts. If there is a tag slug, we get the Tag object with the given slug using the get_object_or_404 shortcut. Then, we filter the list of posts down to the ones whose tags are contained in a list composed only of the tag we are interested in. Remember that QuerySets are lazy: the QuerySet for retrieving posts will only be evaluated when we loop over the post list to render the template.

Now, change the render function at the bottom of the view to pass all the local variables to the template using locals(). The view will finally look like the following:

def post_list(request, tag_slug=None):
    post_list = Post.published.all()
    if tag_slug:
        tag = get_object_or_404(Tag, slug=tag_slug)
        post_list = post_list.filter(tags__in=[tag])
    paginator = Paginator(post_list, 3)  # 3 posts in each page
    page = request.GET.get('page')
    try:
        posts = paginator.page(page)
    except PageNotAnInteger:
        # If page is not an integer deliver the first page
        posts = paginator.page(1)
    except EmptyPage:
        # If page is out of range deliver last page of results
        posts = paginator.page(paginator.num_pages)
    return render(request, 'blog/post/list.html', locals())

Now, open the urls.py file of your blog application and make sure you are using the following URL pattern for the post_list view:

url(r'^$', post_list, name='post_list'),

Now, add another URL pattern, as follows, for listing posts by tag:

url(r'^tag/(?P<tag_slug>[-\w]+)/$', post_list, name='post_list_by_tag'),

As you can see, both patterns point to the same view, but we name them differently. The first pattern will call the post_list view without any optional parameters, whereas the second pattern will call the view with the tag_slug parameter.

Let's change our post list template to display posts tagged with a specific tag, and also link the tags to the list of posts filtered by that tag. Open blog/post/list.html and add the following lines before the for loop of posts:

{% if tag %}
  <h2>Posts tagged with "{{ tag.name }}"</h2>
{% endif %}

If the user is accessing the blog, he will see the list of all posts. If he is filtering by posts tagged with a specific tag, he will see this information.
Now, change the way the tags are displayed to the following:

<p class="tags">
  Tags:
  {% for tag in post.tags.all %}
    <a href="{% url "blog:post_list_by_tag" tag.slug %}">{{ tag.name }}</a>
    {% if not forloop.last %}, {% endif %}
  {% endfor %}
</p>

Notice that now we are looping through all the tags of a post and displaying a custom link to the URL for listing posts tagged with that tag. We build the link with {% url "blog:post_list_by_tag" tag.slug %}, using the name that we gave to the URL and the tag slug as a parameter. We separate the tags with commas. The complete code of the template will look like the following:

{% extends "blog/base.html" %}

{% block title %}My Blog{% endblock %}

{% block content %}
  <h1>My Blog</h1>
  {% if tag %}
    <h2>Posts tagged with "{{ tag.name }}"</h2>
  {% endif %}
  {% for post in posts %}
    <h2><a href="{{ post.get_absolute_url }}">{{ post.title }}</a></h2>
    <p class="tags">
      Tags:
      {% for tag in post.tags.all %}
        <a href="{% url "blog:post_list_by_tag" tag.slug %}">{{ tag.name }}</a>
        {% if not forloop.last %}, {% endif %}
      {% endfor %}
    </p>
    <p class="date">Published {{ post.publish }} by {{ post.author }}</p>
    {{ post.body|truncatewords:30|linebreaks }}
  {% endfor %}
  {% include "pagination.html" with page=posts %}
{% endblock %}

Open http://127.0.0.1:8000/blog/ in your browser and click on any tag link. You will see the list of posts filtered by that tag.

Summary

In this article, you added tagging to your blog posts by integrating a reusable application. The book Django By Example is a hands-on guide that will also show you how to integrate other popular technologies with Django in a fun and practical way.

Resources for Article:

Further resources on this subject:
Code Style in Django [article]
So, what is Django? [article]
Share and Share Alike [article]
Cocoa and Objective-C: Handling events

Packt
09 Jun 2011
8 min read
Some recipes in this article require Mac OS X Snow Leopard 10.6. The Trackpad preferences allow you to easily adjust many of the gestures that will be used in the following recipes. To make sure that your trackpad recognizes gestures, verify that you have enabled gesture support under the Trackpad System Preference.

Interpreting the pinch gesture

The pinch gesture is normally used for zooming a view or for changing the font size of text. In this recipe, we will create a custom view that handles the pinch gesture to resize itself.

Getting ready

In Xcode, create a new Cocoa Application and name it Pinch.

How to do it...

1. In the Xcode project, right-click on the Classes folder and choose Add…, then choose New File… Under the Mac OS X section, select Cocoa Class, then select Objective-C class. Finally, choose NSView in the Subclass of popup. Name the new file MyView.m.
2. Double-click on the MainMenu.xib file in the Xcode project.
3. From Interface Builder's Library palette, drag a Custom View into the application window.
4. From Interface Builder's Inspector palette, select the Identity tab and set the Class popup to MyView.
5. Choose Save from the File menu to save the changes that you have made.
6. In Xcode, add the following code in the drawRect: method of the MyView class implementation:

    NSBezierPath *path = [NSBezierPath bezierPathWithRoundedRect:[self bounds]
                                                          xRadius:8.0
                                                          yRadius:8.0];
    [path setClip];
    [[NSColor whiteColor] setFill];
    [NSBezierPath fillRect:[self bounds]];
    [path setLineWidth:3.0];
    [[NSColor grayColor] setStroke];
    [path stroke];

7. Next, we need to add the code to handle the pinch gesture. Add the following method to the MyView class implementation:

    - (void)magnifyWithEvent:(NSEvent *)event
    {
        NSSize size = [self frame].size;
        size.height = size.height * ([event magnification] + 1.0);
        size.width = size.width * ([event magnification] + 1.0);
        [self setFrameSize:size];
    }

8. Choose Build and Run from Xcode's toolbar to run the application.

How it works...

In our drawRect: method, we use Cocoa to draw a simple rounded rectangle with a three-point wide gray stroke. Next, we implement the magnifyWithEvent: method. Because NSView inherits from NSResponder, we can override the magnifyWithEvent: method from the NSResponder class. When the user starts a pinch gesture, the magnifyWithEvent: method is called; the NSEvent passed to it contains a magnification factor we can use to determine how much to scale our view. First, we get the current size of our view. We add one to the magnification factor and multiply it by the frame's width and height to scale the view. Finally, we set the frame's new size.

There's more...

You will notice when running the sample code that our view resizes with the lower-left corner of the view remaining in a constant position. In order to make our view zoom in and out from the center, change the magnifyWithEvent: method's code to the following:

    NSSize size = [self frame].size;
    NSSize originalSize = size;
    size.height = size.height * ([event magnification] + 1.0);
    size.width = size.width * ([event magnification] + 1.0);
    [self setFrameSize:size];

    CGFloat deltaX = (originalSize.width - size.width) / 2;
    CGFloat deltaY = (originalSize.height - size.height) / 2;
    NSPoint origin = self.frame.origin;
    origin.x = origin.x + deltaX;
    origin.y = origin.y + deltaY;
    [self setFrameOrigin:origin];

Basically, what we have done is move our custom view's origin by half the difference between the original size and the new size.
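One thing the recipe leaves unchecked: nothing stops an enthusiastic pinch from shrinking the view to nothing or growing it past its window. The following is a minimal sketch of a clamped variant; the 50-point minimum and the superview-bounds ceiling are assumptions of ours, not part of the recipe:

    - (void)magnifyWithEvent:(NSEvent *)event
    {
        NSSize size = [self frame].size;
        CGFloat factor = [event magnification] + 1.0;
        NSSize maxSize = [[self superview] bounds].size;
        // Clamp between an assumed 50x50 minimum and the superview's bounds
        size.width = MAX(50.0, MIN(size.width * factor, maxSize.width));
        size.height = MAX(50.0, MIN(size.height * factor, maxSize.height));
        [self setFrameSize:size];
    }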
Interpreting the swipe gesture

The swipe gesture is detected when three or more fingers move across the trackpad. This gesture is often used to page through a series of images. In this recipe, we will create a custom view that interprets the swipe gesture in four different directions and displays the direction of the swipe in the view.

Getting ready

In Xcode, create a new Cocoa Application and name it Swipe.

How to do it...

1. In the Xcode project, right-click on the Classes folder and choose Add…, then choose New File… Under the Mac OS X section, select Cocoa Class, then select Objective-C class. Finally, choose NSView in the Subclass of popup. Name the new file MyView.m.
2. Double-click on the MainMenu.xib file in the Xcode project.
3. From Interface Builder's Library palette, drag a Custom View into the application window.
4. From Interface Builder's Inspector palette, select the Identity tab and set the Class popup to MyView.
5. Choose Save from the File menu to save the changes that you have made.
6. In Xcode, open the MyView.h file and add the direction variable to the class interface:

    NSString *direction;

7. Open the MyView.m file and add the following code in the drawRect: method of the MyView class implementation:

    NSBezierPath *path = [NSBezierPath bezierPathWithRoundedRect:[self bounds]
                                                          xRadius:8.0
                                                          yRadius:8.0];
    [path setClip];
    [[NSColor whiteColor] setFill];
    [NSBezierPath fillRect:[self bounds]];
    [path setLineWidth:3.0];
    [[NSColor grayColor] setStroke];
    [path stroke];

    if (direction == nil) {
        direction = @"";
    }
    NSAttributedString *string =
        [[[NSAttributedString alloc] initWithString:direction] autorelease];
    NSPoint point = NSMakePoint(([self bounds].size.width / 2) - ([string size].width / 2),
                                ([self bounds].size.height / 2) - ([string size].height / 2));
    [string drawAtPoint:point];

8. Add the following code to handle the swipe gesture:

    - (void)swipeWithEvent:(NSEvent *)event
    {
        if ([event deltaX] > 0) {
            direction = @"Left";
        } else if ([event deltaX] < 0) {
            direction = @"Right";
        } else if ([event deltaY] > 0) {
            direction = @"Up";
        } else if ([event deltaY] < 0) {
            direction = @"Down";
        }
        [self setNeedsDisplay:YES];
    }

9. In Xcode, choose Build and Run from the toolbar to run the application.

How it works...

As we did in the other recipes in this article, we draw a simple rounded rectangle in the drawRect: method of the view; this time, we also draw a string denoting the direction of the swipe in the middle of the view. In order to handle the swipe gesture, we override the swipeWithEvent: method from the NSResponder class, which NSView inherits. By inspecting the values of deltaX and deltaY of the NSEvent passed into the swipeWithEvent: method, we can determine the direction of the swipe. We set the direction string accordingly so we can draw it in the drawRect: method. Finally, we call setNeedsDisplay:YES to force our view to redraw itself.

There's more...

You might have noticed that we do not need to override the acceptsFirstResponder method in our view in order to handle the gesture events. When the mouse is located within our view, we automatically receive the gesture events. All we need to do is implement the methods for the gestures we are interested in.
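A swipe event normally reports a non-zero delta on only one axis, so the recipe's if/else chain is enough. Still, if you want to be defensive about it, here is a hedged variation of ours (not from the recipe) that keys off the dominant axis:

    - (void)swipeWithEvent:(NSEvent *)event
    {
        CGFloat dx = [event deltaX];
        CGFloat dy = [event deltaY];
        if (dx == 0.0 && dy == 0.0) {
            return; // nothing to report
        }
        // React to whichever axis moved the most
        if (fabs(dx) >= fabs(dy)) {
            direction = dx > 0 ? @"Left" : @"Right";
        } else {
            direction = dy > 0 ? @"Up" : @"Down";
        }
        [self setNeedsDisplay:YES];
    }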
Interpreting the rotate gesture

The rotate gesture can be used in any number of ways in a custom view, from rotating the view itself to simulating a rotating dial in a custom control. This recipe will show you how to implement the rotate gesture to rotate a custom view. Using your thumb and index finger, you will be able to rotate the custom view around its center.

Getting ready

In Xcode, create a new Cocoa Application and name it Rotate.

How to do it...

1. In the Xcode project, right-click on the Classes folder and choose Add…, then choose New File… Under the Mac OS X section, select Cocoa Class, then select Objective-C class. Finally, choose NSView in the Subclass of popup. Name the new file MyView.m.
2. Double-click on the MainMenu.xib file in the Xcode project.
3. From Interface Builder's Library palette, drag a Custom View into the application window.
4. From Interface Builder's Inspector palette, select the Identity tab and set the Class popup to MyView.
5. Choose Save from the File menu to save the changes that you have made.
6. In Xcode, add the following code in the drawRect: method of the MyView class implementation:

    NSBezierPath *path = [NSBezierPath bezierPathWithRoundedRect:[self bounds]
                                                          xRadius:8.0
                                                          yRadius:8.0];
    [path setClip];
    [[NSColor whiteColor] setFill];
    [NSBezierPath fillRect:[self bounds]];
    [path setLineWidth:3.0];
    [[NSColor grayColor] setStroke];
    [path stroke];

7. Next, we need to add the code to handle the rotate gesture. Add the following method to the MyView class implementation:

    - (void)rotateWithEvent:(NSEvent *)event
    {
        CGFloat currentRotation = [self frameCenterRotation];
        [self setFrameCenterRotation:(currentRotation + [event rotation])];
    }

8. Choose Build and Run from Xcode's toolbar to test the application.

How it works...

As in the previous recipe, we draw a simple rounded rectangle with a three-point stroke to represent our custom view. Our custom view overrides the rotateWithEvent: method from NSResponder to handle the rotation gesture. The rotation property of the NSEvent passed to us in rotateWithEvent: contains the change in rotation since the last time the method was called. We simply add this value to our view's frameCenterRotation value to get the new rotation.

There's more...

The value returned from NSEvent's rotation will be negative when the rotation is clockwise and positive when the rotation is counter-clockwise.
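Building on that sign convention, here is a small hedged extension of our own (not part of the recipe; it assumes a CGFloat instance variable named totalRotation was added to MyView) that reports each accumulated quarter turn:

    - (void)rotateWithEvent:(NSEvent *)event
    {
        totalRotation += [event rotation]; // assumed CGFloat ivar, starts at 0
        [self setFrameCenterRotation:([self frameCenterRotation] + [event rotation])];
        if (fabs(totalRotation) >= 90.0) {
            // Positive rotation is counter-clockwise, negative is clockwise
            NSLog(@"Quarter turn %s", totalRotation > 0 ? "counter-clockwise" : "clockwise");
            totalRotation = 0.0;
        }
    }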

Network Based Ubuntu Installations

Packt
08 Mar 2010
4 min read
I will first outline the requirements and how to get started with a network installation. Next I will walk through a network installation, including a screenshot and a text description for every step.

Requirements

In order to install a machine over the network you'll need the network installer image. Unfortunately, these images are not well publicized and are rarely listed alongside the other .ISO images, so I have included direct download links to the 32-bit and 64-bit images below. It is important that you download the network installer image from the same mirror that you will be installing from. These images are often bound to the kernel and library versions contained within the repository, and a mismatch will cause a failed installation.

32-bit: http://archive.ubuntu.com/ubuntu/dists/karmic/main/installer-i386/current/images/netboot/mini.iso

64-bit: http://archive.ubuntu.com/ubuntu/dists/karmic/main/installer-amd64/current/images/netboot/mini.iso

If you'd prefer to use a different repository, simply look for the "installer-$arch" folder within the main folder of the version you'd like to install.

Once you've downloaded your preferred image you'll need to write it to a CD. This can be done from an existing Linux installation (Ubuntu or otherwise) by following the steps below:

1. Navigate to your download location (likely ~/Downloads/).
2. Right-click on the mini.iso file.
3. Select Write to Disk...

This will present you with an ISO image burning utility. Simply verify that it recognizes a disc in your CD writer and that it has selected the mini.iso for writing. An image of this size (~12 MB) should only take a minute to write.

If possible, you may want to burn this image to a business card CD. Due to the small size of the installation image (~12 MB), you'll have plenty of room on even the smallest media.
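Since a corrupted or mismatched image means a failed installation, you may want to script the download and double-check the image before burning it. Here is a small hedged Python sketch (our addition, not from the article; it assumes the mirror publishes an MD5SUMS file alongside the image that you can compare the printed digest against):

    # Fetch the 32-bit mini.iso (URL matches the link above) and print a
    # digest to compare against the mirror's published checksum file.
    import hashlib
    import urllib.request

    URL = ("http://archive.ubuntu.com/ubuntu/dists/karmic/main/"
           "installer-i386/current/images/netboot/mini.iso")

    with urllib.request.urlopen(URL) as resp, open("mini.iso", "wb") as out:
        h = hashlib.md5()
        for chunk in iter(lambda: resp.read(1 << 20), b""):  # 1 MB at a time
            out.write(chunk)
            h.update(chunk)

    print("md5:", h.hexdigest())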
Installation

Congratulations. You're now the proud owner of an Ubuntu network installation CD. You can use this small CD to install an Ubuntu machine anywhere you have access to an Ubuntu repository, whether public or local. If you'd like to create a local repository, you may want to read the article Creating a Local Ubuntu Repository using Apt-Mirror and Apt-Cacher for additional information on creating a mirror or caching server.

To kick off your installation, simply reboot your machine and instruct it to boot off a CD. This is often done by pressing a function key during the initial boot process; on many Dell machines this is the F12 button. Some machines are already configured to boot from a CD if one is present during the boot process. If this is not the case for you, please consult your BIOS settings for more information.

When the CD boots you'll be presented with a very basic prompt. Simply press ENTER to continue. This will then load the Ubuntu-specific boot menu. For this article I selected Install from the main menu; the other options are beyond the scope of this tutorial. This will load some modules and then start the installation program.

The network installer is purely text based. This may seem like a step backward for those used to the LiveCD graphical installers, but the text-based nature allows for greater flexibility and advanced features. During the following screens I will outline what the prompts are asking for, and what additional options (if any) are available at each stage.

The first selection menu that you will be prompted with is the language menu. This should default to English; of course, you can select your preferred language as needed.

Second, to verify the language variant, you'll need to select your country. Based on your first selection, your menu may not appear with the same options as in this screenshot.

Third, you'll be asked to select or verify your keyboard layout. The installer will ask whether you'd like to automatically detect the proper layout. If you select Yes, you will be prompted to press specific keys from a displayed list until it has verified your layout. If you select No, you'll be prompted to select your layout from a list.