How-To Tutorials - Web Development

1802 Articles

Getting Started with OMNeT++

Packt
30 Sep 2013
5 min read
What this book will cover

This book will show you how to get OMNeT++ up and running on your Windows or Linux operating system. It will then take you through the components that make up an OMNeT++ network simulation: models written in the NED (Network Description) language, initialization files, C++ source files, arrays, and queues, and then configuring and running a simulation. The book demonstrates how these components fit together using different examples, all of which can be found online. At the end of the book, I will focus on a method for debugging your network simulation using a particular type of data visualization known as a sequence chart, and explain what that visualization means.

What is OMNeT++?

OMNeT++ stands for Objective Modular Network Testbed in C++. It is a component-based simulation library written in C++ and designed for simulating communication networks. OMNeT++ is not a network simulator in itself, but a framework that allows you to create your own network simulations.

The need for simulation

Understanding the need for simulation is a big factor in deciding whether this book is for you. Have a look at this comparison of a real network versus a simulated network:

| A real network | A network simulation |
| --- | --- |
| The cost of all the hardware, servers, switches, and so on has to be borne. | The only cost is a single standalone machine with OMNeT++ installed (which is free). |
| Big specialist networks used for business or academia take a lot of time to set up. | It takes time to learn how to create simulations, though once you know how it's done, it's much easier to create new ones. |
| Making changes to a pre-existing network takes planning, and a change made in error may cause the network to fail. | Making changes to a simulated model of a real network poses no risk, and the outcome of the simulation can be analyzed to determine how the real network will be affected. |
| You get the real thing, so what you observe from the real network is actually happening. | If there is a bug in the simulation software, it could cause the simulation to act incorrectly. |

As you can see, there are benefits to using both real networks and network simulations when creating and testing your network. The point I want to convey, though, is that network simulations can make network design cheaper and less risky.

Examples of simulation in the industry

Looking across different industries, there is clearly a massive need for simulation wherever the aim is to solve real-world problems, from how a ticketing system should work in a hospital to what to do when a natural disaster strikes. Simulation allows us to forecast potential problems without having to first live through them.
Different uses of simulation in industry include the following:

- Manufacturing: To show how labor management will work, such as worker efficiency and how rotas and various other factors will affect production; and to show what happens when a component fails on a production line.
- Crowd management: To show the length of queues at theme parks and how that will affect business; and to show how people will seat themselves at an event in a stadium.
- Airports: To show the effects of flight delays on air-traffic control; and to show how many bags can be processed at any one time on a baggage handling system, and what happens when it fails.
- Weather forecasting: To predict forthcoming weather, and to predict the effect of climate change on the weather.

These are just a few examples, but hopefully you can see how and where simulation is useful. Simulating your network will allow you to test it against a myriad of network attacks and to test all of its constraints without damaging the real network.

What you will learn

After reading this book you will know the following:

- How to get a free copy of OMNeT++
- How to compile and install OMNeT++ on Windows and Linux
- What makes up an OMNeT++ network simulation
- How to create network topologies with NED
- How to create your own network simulations using the OMNeT++ IDE
- How to use pre-existing libraries in order to make robust and realistic network simulations without reinventing the wheel

Learning how to create and run network simulations is certainly a major goal of the book. Another goal is to teach you how to learn from the simulations you create. That's why this book will also show you how to set up your simulations and collect data on the events that occur during a simulation's runtime. Once you have collected data from the simulation, you will learn how to debug your network using the data visualization tools that come with OMNeT++. You will then be able to apply what you learned from debugging the simulated network to the actual network you would like to create.

Summary

You should now know that this book is intended for people who want to get network simulations up and running with OMNeT++ as soon as possible. You also know, roughly, what OMNeT++ is, why simulation (and therefore OMNeT++) is needed, and what you can expect to learn from this book.


Connecting to MongoHq API with RestKit

Packt
30 Sep 2013
7 min read
Let's take a base URL:

```
NSURL *baseURL = [NSURL URLWithString:@"http://example.com/v1/"];
```

Now:

```
[NSURL URLWithString:@"foo" relativeToURL:baseURL];
// Will give us http://example.com/v1/foo

[NSURL URLWithString:@"foo?bar=baz" relativeToURL:baseURL];
// -> http://example.com/v1/foo?bar=baz

[NSURL URLWithString:@"/foo" relativeToURL:baseURL];
// -> http://example.com/foo

[NSURL URLWithString:@"foo/" relativeToURL:baseURL];
// -> http://example.com/v1/foo/

[NSURL URLWithString:@"/foo/" relativeToURL:baseURL];
// -> http://example.com/foo/

[NSURL URLWithString:@"http://example2.com/" relativeToURL:baseURL];
// -> http://example2.com/
```

Having an understanding of what an object manager is, let's try to apply it in a real-life example. Before proceeding, it is highly recommended that we check the actual documentation for the MongoHQ REST API. The current documentation is at the following link: http://support.mongohq.com/mongohq-api/introduction.html

As there are no strict rules for REST APIs, every API is different and does a number of things in its own way. The MongoHQ API is no exception; in addition, it is currently in a "beta" stage. Some of the non-standard things you will find in it are as follows:

- The API key should be provided as a parameter with every request. There is an undocumented way to provide it in the headers instead, which is the more common approach.
- Sometimes an error is returned with the status code 200 (OK), which is not in line with REST standards; the normal way would be to return something in the 4xx range, which denotes a client error.
- Sometimes, while the body of an error message is a JSON string, the HTTP response Content-Type header is set to text/plain.

To use the API, you will need a valid API key. You can easily get one for free by following the simple guideline recommended by the MongoHQ team:

1. Sign up for an account at http://MongoHQ.com.
2. Once logged in, click on the My Account drop-down menu at the top-right corner and select Account Settings.
3. Look for the section labeled API Token and take your token from there.

We will put the API key into the MongoHQ-API-Token HTTP header. [Screenshot: API Token on the Account Info page]

So let's set up our configuration using the following steps. You can put the code in the AppDelegate class, although I recommend using a separate MongoHqApi class to keep the app and API logic separate. First, let's set up our object manager with the following code:

```
- (void)setupObjectManager
{
    NSString *baseUrl = @"https://api.mongohq.com";

    AFHTTPClient *httpClient = [[AFHTTPClient alloc] initWithBaseURL:[NSURL URLWithString:baseUrl]];

    NSString *apiKey = @"MY_API_KEY";
    [httpClient setDefaultHeader:@"MongoHQ-API-Token" value:apiKey];

    RKObjectManager *manager = [[RKObjectManager alloc] initWithHTTPClient:httpClient];

    [RKMIMETypeSerialization registerClass:[RKNSJSONSerialization class] forMIMEType:@"text/plain"];
    [manager.HTTPClient registerHTTPOperationClass:[AFJSONRequestOperation class]];
    [manager setAcceptHeaderWithMIMEType:RKMIMETypeJSON];
    manager.requestSerializationMIMEType = RKMIMETypeJSON;

    [RKObjectManager setSharedManager:manager];
}
```

Let's look at the code line by line. First, set the base URL. Remember not to put a slash (/) at the end; otherwise, you might have problems with response mapping:

```
NSString *baseUrl = @"https://api.mongohq.com";
```

Initialize the HTTP client with baseUrl:

```
AFHTTPClient *httpClient = [[AFHTTPClient alloc] initWithBaseURL:[NSURL URLWithString:baseUrl]];
```

Set a few properties on our HTTP client, such as the API key in the header:

```
NSString *apiKey = @"MY_API_KEY";
[httpClient setDefaultHeader:@"MongoHQ-API-Token" value:apiKey];
```

For a real-world app, you could show an Enter API Key view controller to the user, and use NSUserDefaults or the keychain to store and retrieve the key.

Initialize the RKObjectManager with our HTTP client:

```
RKObjectManager *manager = [[RKObjectManager alloc] initWithHTTPClient:httpClient];
```

The MongoHQ API sometimes returns errors as text/plain, so we explicitly register text/plain as a JSON content type in order to parse errors properly:

```
[RKMIMETypeSerialization registerClass:[RKNSJSONSerialization class] forMIMEType:@"text/plain"];
```

Register AFJSONRequestOperation to parse JSON in requests:

```
[manager.HTTPClient registerHTTPOperationClass:[AFJSONRequestOperation class]];
```

State that we accept the JSON content type:

```
[manager setAcceptHeaderWithMIMEType:RKMIMETypeJSON];
```

Specify that outgoing objects should be serialized into JSON:

```
manager.requestSerializationMIMEType = RKMIMETypeJSON;
```

Finally, set the shared instance of the object manager, so that we can easily reuse it later:

```
[RKObjectManager setSharedManager:manager];
```

Sending requests with the object manager

Next, we want to query our databases. Let's first see what a database request returns as JSON. To check this, go to http://api.mongohq.com/databases?_apikey=YOUR_API_KEY in your web browser, substituting your own API key for YOUR_API_KEY. If a JSON-formatter extension (https://github.com/rfletcher/safari-json-formatter) is installed in your Safari browser, you will probably see the output shown in the following screenshot.
[Screenshot: JSON response from the API]

As we see, the JSON representation of one database is:

```
[
  {
    "hostname": "sandbox.mongohq.com",
    "name": "Test",
    "plan": "Sandbox",
    "port": 10097,
    "shared": true
  }
]
```

Therefore, our possible MDatabase class could look like:

```
@interface MDatabase : NSObject

@property (nonatomic, strong) NSString *name;
@property (nonatomic, strong) NSString *plan;
@property (nonatomic, strong) NSString *hostname;
@property (nonatomic, strong) NSNumber *port;

@end
```

We can also modify the @implementation section to override the description method, which will help us while debugging the application and printing the object:

```
// in @implementation MDatabase
- (NSString *)description
{
    return [NSString stringWithFormat:@"%@ on %@ @ %@:%@",
            self.name, self.plan, self.hostname, self.port];
}
```

Now let's set up a mapping for it:

```
- (void)setupDatabaseMappings
{
    RKObjectManager *manager = [RKObjectManager sharedManager];

    Class itemClass = [MDatabase class];
    NSString *itemsPath = @"/databases";

    RKObjectMapping *mapping = [RKObjectMapping mappingForClass:itemClass];
    [mapping addAttributeMappingsFromArray:@[@"name", @"plan", @"hostname", @"port"]];

    NSString *keyPath = nil;
    NSIndexSet *statusCodes = RKStatusCodeIndexSetForClass(RKStatusCodeClassSuccessful);

    RKResponseDescriptor *responseDescriptor =
        [RKResponseDescriptor responseDescriptorWithMapping:mapping
                                                     method:RKRequestMethodGET
                                                pathPattern:itemsPath
                                                    keyPath:keyPath
                                                statusCodes:statusCodes];
    [manager addResponseDescriptor:responseDescriptor];
}
```

Let's look at the mapping setup line by line. First, we define a class, which we will use to map to:

```
Class itemClass = [MDatabase class];
```

And the endpoint we plan to request for getting a list of objects:

```
NSString *itemsPath = @"/databases";
```

Then we create the RKObjectMapping mapping for our object class:

```
RKObjectMapping *mapping = [RKObjectMapping mappingForClass:itemClass];
```

If the names of JSON fields and class properties are the same, we will use an addAttributeMappingsFromArray method and provide the array of properties:

```
[mapping addAttributeMappingsFromArray:@[@"name", @"plan", @"hostname", @"port"]];
```

The root JSON key path in our case is nil. It means that there won't be one:

```
NSString *keyPath = nil;
```

The mapping will be triggered if a response status code is anything in 2xx:

```
NSIndexSet *statusCodes = RKStatusCodeIndexSetForClass(RKStatusCodeClassSuccessful);
```

Putting it all together in a response descriptor (for a GET request method):

```
RKResponseDescriptor *responseDescriptor =
    [RKResponseDescriptor responseDescriptorWithMapping:mapping
                                                 method:RKRequestMethodGET
                                            pathPattern:itemsPath
                                                keyPath:keyPath
                                            statusCodes:statusCodes];
```

Add the response descriptor to our shared manager:

```
RKObjectManager *manager = [RKObjectManager sharedManager];
[manager addResponseDescriptor:responseDescriptor];
```

Sometimes, depending on the architectural decision, it's nicer to put the mapping definition as part of a model object, and later call it like [MDatabase mapping], but for the sake of simplicity, we will put the mapping in line with the RestKit configuration.
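As a quick illustration of that alternative (a sketch that is not part of the original article), such a class method on the model might look roughly like the following; the category name and the assumed header names are illustrative choices:

```
// Hypothetical sketch (not from the original article): expose the mapping as a
// class method on the model, so configuration code can simply call [MDatabase mapping].
#import <RestKit/RestKit.h>
#import "MDatabase.h"   // assumed header name for the MDatabase class shown above

@interface MDatabase (Mapping)
+ (RKObjectMapping *)mapping;
@end

@implementation MDatabase (Mapping)

+ (RKObjectMapping *)mapping
{
    // Same attribute mapping as in setupDatabaseMappings above.
    RKObjectMapping *mapping = [RKObjectMapping mappingForClass:[MDatabase class]];
    [mapping addAttributeMappingsFromArray:@[@"name", @"plan", @"hostname", @"port"]];
    return mapping;
}

@end
```

The response descriptor could then be built with [MDatabase mapping] instead of a locally constructed mapping, keeping the mapping definition next to the model it describes.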
The actual code that loads the database list looks like this:

```
RKObjectManager *manager = [RKObjectManager sharedManager];
[manager getObjectsAtPath:@"/databases"
               parameters:nil
                  success:^(RKObjectRequestOperation *operation, RKMappingResult *mappingResult) {
                      NSLog(@"Loaded databases: %@", [mappingResult array]);
                  }
                  failure:^(RKObjectRequestOperation *operation, NSError *error) {
                      NSLog(@"Error: %@", [error localizedDescription]);
                  }];
```

As you may have noticed, the method is quite simple to use, and its block-based API for callbacks greatly improves code readability compared to using delegates, especially if there is more than one network request in a class.

A possible implementation of a table view that loads and shows the list of databases appears in the following screenshot (a rough sketch of such a controller follows the summary below).

[Screenshot: View of loaded Database items]

Summary

In this article, we learned how to set up the RestKit library to work with our web service, and we talked about sending requests, getting responses, and performing object manipulations. We also talked about simplifying requests by introducing routing. In addition, we discussed how integration with the UI can be done and created forms.
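The controller behind that screenshot is not shown in the article. As a rough, hedged sketch, assuming the setup code above has already run (the class name, cell reuse identifier, and use of the description method for the cell text are illustrative choices, not taken from the article), it might look like this:

```
// Illustrative sketch only: a minimal table view controller that loads MDatabase
// objects through the shared RKObjectManager configured above and lists them.
#import <UIKit/UIKit.h>
#import <RestKit/RestKit.h>
#import "MDatabase.h"   // assumed header name

@interface DatabasesViewController : UITableViewController
@property (nonatomic, strong) NSArray *databases;
@end

@implementation DatabasesViewController

- (void)viewDidLoad
{
    [super viewDidLoad];
    [self.tableView registerClass:[UITableViewCell class] forCellReuseIdentifier:@"DatabaseCell"];

    [[RKObjectManager sharedManager] getObjectsAtPath:@"/databases"
                                           parameters:nil
                                              success:^(RKObjectRequestOperation *operation, RKMappingResult *mappingResult) {
                                                  // Keep the mapped objects and refresh the table.
                                                  self.databases = [mappingResult array];
                                                  [self.tableView reloadData];
                                              }
                                              failure:^(RKObjectRequestOperation *operation, NSError *error) {
                                                  NSLog(@"Error: %@", [error localizedDescription]);
                                              }];
}

- (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section
{
    return self.databases.count;
}

- (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath
{
    UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:@"DatabaseCell"
                                                            forIndexPath:indexPath];
    MDatabase *database = self.databases[indexPath.row];
    // Reuses the description override shown earlier, e.g. "Test on Sandbox @ host:port".
    cell.textLabel.text = [database description];
    return cell;
}

@end
```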


Plugins and Extensions

Packt
30 Sep 2013
11 min read
In the modern world of JavaScript, Ext JS is one of the best JavaScript frameworks, with a vast collection of cross-browser utilities, UI widgets, charts, data object stores, and much more. When developing an application, we mostly look for the best functionality and the components that the framework offers. But we sometimes face situations where the framework lacks the specific functionality or component that we need. Fortunately, Ext JS has a powerful class system that makes it easy to extend an existing functionality or component, or to build new ones altogether.

What is a plugin?

An Ext JS plugin is a class that is used to provide additional functionality to an existing component. A plugin must implement a method named init, which is called by the component at initialization time, at the beginning of the component's lifecycle, with the component itself passed as the parameter. The destroy method is invoked by the plugin's owning component at the time of the component's destruction. We don't need to instantiate a plugin class ourselves; plugins are attached to a component through that component's plugins configuration option. Plugins are available not only to the component to which they are attached, but also to all the subclasses derived from that component. We can also use multiple plugins in a single component, but we need to make sure that those plugins do not conflict with each other.

What is an extension?

An Ext JS extension is a derived class, or subclass, of an existing Ext JS class, designed to allow the inclusion of additional features. An Ext JS extension is mostly used to add custom functionality or modify the behavior of an existing Ext JS class. An extension can be as basic as a preconfigured Ext JS class that simply supplies a set of default values to an existing class configuration. This type of extension is really helpful in situations where the required functionality is repeated in several places. Let us assume we have an application where several Ext JS windows have the same help button in the bottom bar. We can create an extension of the Ext JS window that adds this help button, and then use this extension window without repeating the code for the button in every window. The advantage is that we can maintain the code for the help button in one place and have any change reflected everywhere it is used.

Differences between an extension and a plugin

Ext JS extensions and plugins are used for the same purpose; they add extended functionality to Ext JS classes. But they differ mainly in how they are written and the reasons for which they are used.

Ext JS extensions are subclasses of Ext JS classes. To use an extension, we need to instantiate it by creating an object. We can provide additional properties and functions, and can even override any parent member to change its behavior. Extensions are very tightly coupled to the classes from which they are derived. They are mainly used when we need to modify the behavior of an existing class or component, or when we need to create a fully new class or component.

Ext JS plugins are also Ext JS classes, but they include the init function. To use a plugin we don't instantiate the class directly; instead, we register the plugin in the plugins configuration option of the component. After being added, the plugin's options and functions become available to the component itself. Plugins are loosely coupled to the components they are plugged into, so they can easily be detached and are interoperable with multiple components and derived components. Plugins are used when we need to add features to an existing component. Because a plugin must be attached to an existing component, it is not useful for creating a fully new component, as an extension can.

Choosing the best option

When we need to enhance or change the functionality of an existing Ext JS component, there are several ways to do so, each of which has both advantages and disadvantages. Let us assume we need to develop an SMS text field with a simple piece of functionality: the text color changes to red whenever the text length exceeds the length allocated for a single message, so that the user can see that they are typing more than one message. This functionality can be implemented in three different ways in Ext JS, which are discussed in the following sections.

By configuring an existing class

We can choose to apply configuration to an existing class. For example, we can create a text field and provide the required SMS functionality within the listeners configuration, or we can attach event handlers after the text field is instantiated using the on method. This is the easiest option when the same functionality is needed in only a few places; but as soon as the functionality is repeated in several places or situations, code duplication arises.

By creating a subclass or an extension

By creating an extension, we can easily solve the duplication problem discussed in the previous section. If we create an SMS text field extension by extending the Ext JS text field, we can use this extension in as many places as we need, and we can also create further extensions based on it. The code is centralized in the extension, so a change in one place is reflected everywhere the extension is used. But there is a problem: if the same SMS functionality is needed in other subclasses of the Ext JS base field, such as the Ext JS text area field, we can't reuse the SMS text field extension to take advantage of it. Also, consider a situation where two subclasses of a base class each provide their own facility, and we want both features in a single class; that is not possible with this approach.

By creating a plugin

By creating a plugin, we gain the maximum reuse of code. A plugin written for one class is usable by the subclasses of that class, and we also have the flexibility to use multiple plugins in a single component. This is why, if we create a plugin for the SMS functionality, we can use it in both the text field and the text area field, and we can still use other plugins alongside the SMS plugin in the same class.

Building an Ext JS plugin

Let us start developing an Ext JS plugin. In this section we will develop a simple SMS plugin targeting the Ext JS textareafield component. The features we wish to provide for the SMS functionality are that it should show the number of characters and the number of messages at the bottom of the containing field, and that the color of the message text should change to notify the user whenever they exceed the allowed length for a single message.
In the following code, the SMS plugin class has been created within the Examples namespace of an Ext JS application:

```
Ext.define('Examples.plugin.Sms', {

    alias : 'plugin.sms',

    config : {
        perMessageLength : 160,
        defaultColor : '#000000',
        warningColor : '#ff0000'
    },

    constructor : function(cfg) {
        Ext.apply(this, cfg);
        this.callParent(arguments);
    },

    init : function(textField) {
        this.textField = textField;
        if (!textField.rendered) {
            textField.on('afterrender', this.handleAfterRender, this);
        } else {
            this.handleAfterRender();
        }
    },

    handleAfterRender : function() {
        this.textField.on({
            scope : this,
            change : this.handleChange
        });
        var dom = Ext.get(this.textField.bodyEl.dom);
        Ext.DomHelper.append(dom, {
            tag : 'div',
            cls : 'plugin-sms'
        });
    },

    handleChange : function(field, newValue) {
        if (newValue.length > this.getPerMessageLength()) {
            field.setFieldStyle('color:' + this.getWarningColor());
        } else {
            field.setFieldStyle('color:' + this.getDefaultColor());
        }
        this.updateMessageInfo(newValue.length);
    },

    updateMessageInfo : function(length) {
        var tpl = ['Characters: {length}<br/>', 'Messages: {messages}'].join('');
        var text = new Ext.XTemplate(tpl);
        var messages = parseInt(length / this.getPerMessageLength());
        if ((length / this.getPerMessageLength()) - messages > 0) {
            ++messages;
        }
        Ext.get(this.getInfoPanel()).update(text.apply({
            length : length,
            messages : messages
        }));
    },

    getInfoPanel : function() {
        return this.textField.el.select('.plugin-sms');
    }
});
```

In the preceding plugin class, you can see that we have defined the mandatory init function. Within init, we check whether the component to which this plugin is attached has been rendered yet, and call the handleAfterRender function once rendering has taken place. Within handleAfterRender, we wire things up so that when the change event fires on the textareafield component, the handleChange function of this class is executed; we also create an HTML <div> element in which we will show the message information for the character and message counters. The handleChange function is the handler that checks the message length in order to show the colored warning text, and it calls the updateMessageInfo function to update the message information text with the character count and the number of messages.

Now we can easily add the plugin to a component:

```
{
    xtype : 'textareafield',
    plugins : ['sms']
}
```

We can also supply configuration options when inserting the plugin within the plugins configuration option, to override the default values:

```
plugins : [Ext.create('Examples.plugin.Sms', {
    perMessageLength : 20,
    defaultColor : '#0000ff',
    warningColor : '#00ff00'
})]
```

Building an Ext JS extension

Let us start developing an Ext JS extension. In this section we will develop an SMS extension that satisfies exactly the same requirements as the SMS plugin developed earlier. We already know that an Ext JS extension is a derived class of an existing Ext JS class; here we are going to extend Ext JS's textarea field, which facilitates typing multiline text and provides event handling, rendering, and other functionality.
The following code creates the Extension class under the SMS view within the Examples namespace of an Ext JS application:

```
Ext.define('Examples.view.sms.Extension', {
    extend : 'Ext.form.field.TextArea',
    alias : 'widget.sms',

    config : {
        perMessageLength : 160,
        defaultColor : '#000000',
        warningColor : '#ff0000'
    },

    constructor : function(cfg) {
        Ext.apply(this, cfg);
        this.callParent(arguments);
    },

    afterRender : function() {
        this.on({
            scope : this,
            change : this.handleChange
        });
        var dom = Ext.get(this.bodyEl.dom);
        Ext.DomHelper.append(dom, {
            tag : 'div',
            cls : 'extension-sms'
        });
    },

    handleChange : function(field, newValue) {
        if (newValue.length > this.getPerMessageLength()) {
            field.setFieldStyle('color:' + this.getWarningColor());
        } else {
            field.setFieldStyle('color:' + this.getDefaultColor());
        }
        this.updateMessageInfo(newValue.length);
    },

    updateMessageInfo : function(length) {
        var tpl = ['Characters: {length}<br/>', 'Messages: {messages}'].join('');
        var text = new Ext.XTemplate(tpl);
        var messages = parseInt(length / this.getPerMessageLength());
        if ((length / this.getPerMessageLength()) - messages > 0) {
            ++messages;
        }
        Ext.get(this.getInfoPanel()).update(text.apply({
            length : length,
            messages : messages
        }));
    },

    getInfoPanel : function() {
        return this.el.select('.extension-sms');
    }
});
```

As seen in the preceding code, the extend keyword is used as a class property to extend the Ext.form.field.TextArea class and create the extension class. Within the afterRender event handler, we wire things up so that when the change event fires on the textarea field, the handleChange function of this class is executed; we also create an HTML <div> element within this handler, in which we will show the message information for the character and message counters. From there on, the logic for showing the warning, the character counter, and the message counter is the same as in the SMS plugin.

Now we can easily create an instance of this extension:

```
Ext.create('Examples.view.sms.Extension');
```

We can also supply configuration options when creating the instance, to override the default values:

```
Ext.create('Examples.view.sms.Extension', {
    perMessageLength : 20,
    defaultColor : '#0000ff',
    warningColor : '#00ff00'
});
```

[Screenshot: an Ext JS window using both the SMS plugin and the SMS extension]

In the preceding screenshot we have created an Ext JS window and incorporated both the SMS extension and the SMS plugin. As discussed earlier regarding the benefit of writing a plugin, we can use the SMS plugin not only with the text area field, but also with the text field.

Summary

In this article we learned what a plugin and an extension are, the differences between the two, the facilities they offer, how to use them, and how to decide between an extension and a plugin for the functionality we need. We also developed a simple SMS plugin and an SMS extension.


Managing content (Must know)

Packt
27 Sep 2013
8 min read
Getting ready

Content in Edublogs can take many different forms: posts, pages, uploaded media, and embedded media. The first step is to develop an understanding of what each of these types of content is, and how they fit into the Edublogs framework.

- Pages: Pages are generally static content, such as an About or a Frequently Asked Questions page.
- Posts: Posts are the content that is continually updated on a blog. When you write an article, it is referred to as a post.
- Media (uploaded): Edublogs has a media manager that allows you to upload pictures, videos, audio files, and other files that readers can interact with or download.
- Media (embedded): Embedded media differs from uploaded media in that it is not stored on your Edublogs account. If you record a video and upload it, the video resides on your website and is considered internal to that website. If you want to add a YouTube video, a Prezi presentation, a slideshow, or any content that actually resides on another website, that is considered embedding.

How to do it...

Posts and pages are very similar. When you click on the Pages link in the left navigation column, if you are just beginning, you will see an empty list or the Sample Page that Edublogs provides. Over time, this page will show a list of all of the pages that you have written, as shown in the following screenshot.

Click on any column header (Title, Author, Comments, and Date) to sort the pages by that criterion. A page can be any of several types: Published (anyone can see it), Draft, Private, Password Protected, or in the Trash, and you can filter by those types as well. You will only see the types of pages that you are currently using. For example, in the following screenshot, I have 3 Draft pages; if I had none, Drafts would not show as an option.

When you hover over a page, you are provided with several options: Edit, Quick Edit, Trash, and View.

- View: This shows you the actual live page, the same way that a reader would see it.
- Trash: This deletes the page.
- Edit: This brings you back to the main editing screen, where you can change the actual body of the page.
- Quick Edit: This allows you to change some of the main options of the page: Title, Slug (the end of the URL used to access the page), Author, whether the page has a parent, and whether it should be published. The following screenshot demonstrates these options.

How it works...

Everything above about Pages also applies to Posts. Posts, though, have several additional options, and it is more common to use these additional options to customize Posts than Pages. Right away, hovering over Posts shows two new links: Categories and Tags. These tools are optional, and serve the dual purpose of aiding the author by providing an organizational structure, and helping the reader to find posts more effectively.

A Category is usually very general; on one of my educational blogs, I limit my categories to a few: technology integration, assessment, pedagogy, and lessons. If I happen to write a post that does not fit, I do not categorize it. Tags are becoming ubiquitous in many applications and operating systems. They provide an easy way to browse a store of information thematically. On my educational blog, I have over 160 tags. On one post about Facebook's new advertising system, I added the following tags: Digital Literacy, Facebook, Privacy. Utilizing tags can help you to see trends in your writing, and makes it much easier for new readers to find posts that interest them and for regular readers to find old posts that they want to reference again.

Let's take a look at some of the advanced features. When adding or editing a post, the following features are all located in the right-hand column:

- Publish: The Publish box is necessary any time you want to move your post (or page) out of the draft stage and allow readers to see it. Most new bloggers simply click on Publish/Update when they are done writing a post, which works fine, but it is limited. People often find that there are certain times of day that result in higher readership; if you click on Edit next to Publish Immediately, you can choose a date and time to schedule the publication. In addition, the Visibility line allows you to set a post as private, password protected, or always at the top of the page (if you have a post you particularly want to highlight, for example).
- Format: Most of the time, changing the format is not necessary, particularly if you run a normal, text-driven blog. However, different formats lend themselves to different types of content. For example, if publishing a picture as a post, as is often done on the microblogging site Tumblr, choosing Image would format the post more effectively.
- Categories: Click on + Add New Category, or check any existing categories to append them to the post.
- Tags: Type any tags that you want to use, separated by commas (such as writing, blogging, Edublogs).
- Featured Image: Uploading and choosing a featured image adds a thumbnail image, to provide a more engaging browsing experience for the viewer.

All of these features are optional, but they are useful for improving the experience, both for yourself and your readers.

There's more...

For most people, the heart of a blog is the actual writing that they do, but media helps both to make the experience more memorable and engaging and to illustrate a point more effectively than text alone. Media is anything other than text that a user can interact with; primarily video, audio, or pictures. As teachers know, not everyone learns best through a text-based medium; media is as important a part of engaging readers as it is of engaging students.

There are a few ways to get media into your posts. The first is through the Media Library. On a free account, space is limited to 32 MB, a relatively small amount; Pro accounts get 10 GB of space. Click on Media in the navigation menu on the left to bring up the library. This shows a list of your media, similar to the lists used for Posts and Pages. To add media, simply click on Add New and choose an image, audio file, or video from your computer. It will then be available for any post or page to use. The following screenshot shows the Media Library page.

If you are already in a post, you have even more options. Click on the Add Media button above the text editor, as shown in the following screenshot. The following are some of the options you have to embed media:

- Insert Media: This allows you to directly upload a file or choose one from the Media Library.
- Create Gallery: Creating a gallery allows you to create a set of images that users can browse through.
- Set Featured Image: As described above, this sets a thumbnail image representative of the post.
- Insert from URL: This allows you to insert an image by pasting in its direct URL. Make sure you give attribution if you use someone else's image.
- Insert Embed Code: Embed code is extremely helpful. Many sites provide embed code (often referred to as share code) to allow people to post their content on other websites. One of the most common examples is adding a YouTube video to a post. The following screenshot is from the Share menu of a YouTube video. Copying the code provided and pasting it into the Insert Embed Code field will put the YouTube video right in the post, as shown in the following screenshot. This is much more effective than just providing a link, because readers can watch the video without ever having to leave the blog. Embedding is an Edublogs Pro feature only.

Utilizing media effectively can dramatically improve the experience for your readers.

Summary

This article on managing content provided details about managing different types of content: posts, pages, uploaded media, and embedded media. It covered features such as publish, format, categories, tags, and featured images.


Developing Your Mobile Learning Strategy

Packt
27 Sep 2013
27 min read
What is mobile learning?

There have been many attempts at defining mobile learning. Is it learning done on the move, such as on a laptop while we sit on a train? Or is it learning done on a personal mobile device, such as a smartphone or a tablet?

The capabilities of mobile devices

Anyone can develop mobile learning. You don't need to be a gadget geek or have the latest smartphone or tablet. You certainly don't need to know anything about the makes and models of devices on the market. The only thing the learning practitioner really needs is an understanding of the capabilities of the mobile devices that your learners have. This will inform the types of mobile learning interventions that will be best suited to your audience. The following table shows an overview of what a mobile learner might be able to do with each of the device types. The Device uses column on the left should already be setting off lots of great learning ideas in your head!

| Device uses | Feature phone | Smartphone | Tablet | Gaming device | Media player |
| --- | --- | --- | --- | --- | --- |
| Send texts | Yes | Yes | | | |
| Make calls | Yes | Yes | | | |
| Take photos | Yes | Yes | Yes | Yes | Yes |
| Listen to music | Yes | Yes | Yes | Yes | Yes |
| Social networking | Yes | Yes | Yes | Yes | Yes |
| Take high-res photos | | Yes | Yes | Yes | Yes |
| Web searches | | Yes | Yes | Yes | Yes |
| Web browsing | | Yes | Yes | Yes | Yes |
| Watch online videos | | Yes | Yes | Yes | Yes |
| Video calls | | Yes | Yes | Yes | Yes |
| Edit photos | | Yes | Yes | Yes | Yes |
| Shoot videos | | Yes | Yes | | Yes |
| Take audio recordings | | Yes | Yes | | Yes |
| Install apps | | Yes | Yes | | Yes |
| Edit documents | | Yes | Yes | | Yes |
| Use maps | | Yes | Yes | | Yes |
| Send MMS | | Yes | Yes | | |
| View catch-up TV | | | Yes | Yes | |
| Better quality web browsing | | | Yes | Yes | |
| Shopping online | | | Yes | | |
| Trip planning | | | Yes | | |

Bear in mind that screen size will also affect the type of learning activity that can be undertaken. For example:

- Feature phone displays are very small, so learning activities for this device type should center on text messaging with a tutor or capturing photos for an assignment.
- Smartphones are significantly larger, so there is a much wider range of learning activities available, especially around the creation of material such as photos and video for assignment or portfolio purposes, and a certain amount of web searching and browsing.
- Tablets are more akin to the desktop computing environment, although some tasks such as typing are harder and taking photos is a bit clumsier due to the larger size of the device. They are great for short learning tasks, assessments, video watching, and much more.

Warning – it's not about delivering courses

Mobile learning can be many things. What it is not is simply the delivery of e-learning courses, which is traditionally the domain of the desktop computer, on a smaller device. Of course it can be used to deliver educational materials, but more importantly it can also be used to foster collaboration, to facilitate communication, to access performance support, and to capture evidence. If you try to deliver an entire course purely on a mobile, the likelihood is that no one will use it.

Your mobile learning strategy

Finding a starting point for your mobile learning design is easier said than done. It is often useful when designing any type of online interaction to think through a few typical user types and build up a picture of who they are and what they want to use the system for. This helps you to visualize who you are designing for.
In addition to this, in order to understand how best to utilize mobile devices for learning, you also need to understand how people actually use their mobile devices. For example, learners are highly unlikely to sit at a smartphone and complete a 60-minute e-learning course or type out an essay. But they are very likely to read an article, do some last-minute test preparation, or communicate with other learners.

Who are your learners?

Understanding your users is an important part of designing online experiences. You should take time to understand the types of learners within your own organization and what their mobile usage looks like, as a first step in delivering mobile learning on Moodle. With this in mind, let's look at a handful of typical mobile learners from around the world who could reasonably be expected to be using an educational or workplace learning platform such as Moodle:

- Maria is an office manager in Madrid, Spain. She doesn't leave home without her smartphone and uses it wherever she is, whether for e-mail, web searching and browsing, reading the news, or social networking. She lives in a country where smartphone penetration has reached almost half of the population, of whom two-thirds access the Internet every day on their mobile. The company she works for has a small learning platform for delivery of work-based learning activities and performance support resources.
- Fourteen-year-old Jennifer attends school in Rio de Janeiro, Brazil. Like many of her peers, she carries a smartphone with her and it's a key part of her life. The Brazilian population is one of the most connected in the developing world, with nearly half of the population using the Internet and its mobile phone subscriptions accounting for one-third of all subscriptions across Latin America and the Caribbean. Her elementary school uses a learning platform for the delivery of course resources, formative assessments, and submission of student assignments.
- Nineteen-year-old Mike works as an apprentice at a large car maker in Sunderland, UK. He spends about one-third of his time in formal education, and his remaining days each week are spent on the production line, getting a thorough grounding in every element of the car manufacturing process. He owns a smartphone and uses it heavily, in a country where nearly half of the population accesses the Internet at least monthly on their smartphone. His employer has a learning platform for delivery of work-based learning, and his college also has its own platform where he keeps a training diary and uploads evidence of skills acquisition for later submission and marking.
- Josh is a twenty-year-old university student in the United States. In his country, nearly 90 percent of adults now own a mobile phone and half of all adults use their phone to access the Internet, although in his age group this increases to three quarters. Among his student peers across the U.S., 40 percent are already doing test preparation on their mobiles, whether their institution provides the means or not. His university uses a learning platform for delivery of course resources, submission of student assignments, and student collaborative activities.

These four particular learners were not chosen at random; there is one important thing that connects them all. The four countries they are from represent not just important mobile markets but, according to the statistics page on Moodle.org, also the four largest Moodle territories, together making up over a third of all registered Moodle sites in the world. When you combine those Moodle market statistics with the level of mobile internet usage in each country, you can immediately see why support for mobile learning is so important for Moodle sites.

How do your learners use their devices?

In 2012, Google published the findings of a research survey which investigated how users behave across computer, tablet, smartphone, and TV screens. Their researchers found that users make decisions about which device to use for a given task depending on four elements that together make up the user's context: location, goal, available time, and attitude. Each of these is important to take into account when thinking about the sorts of learning interactions your users could engage in on their mobile devices, and you should be aiming to offer a range of mobile learning interactions that lend themselves to different contexts, for example, tasks ranging in length from 2 to 20 minutes, and tasks suited to different locations, such as home, work, college, or out in the field. The attitude element is an interesting one, and it's important to allow learners to choose tasks that are appropriate to their mood at the time.

Google also found that users either move between screens to perform a single task (sequential screening) or use multiple screens at the same time (simultaneous screening). In the case of simultaneous screening, they are likely to be performing complementary tasks relating to the same activity on each screen. From a learning point of view, you can design for multi-screen tasks. For example, you may find learners use their computer to perform some complex research and then collect evidence in the field using their smartphone; these would be sequential screening tasks. A media studies student could be watching a rolling news channel on the television while taking photos, video, and notes for an assignment on his tablet or smartphone; these would be simultaneous screening tasks. Understanding the different scenarios in which learners can use multiple screens will open up new opportunities for mobile learning.

A key statement from the Google research is that "Smartphones are the backbone of our daily media interactions". However, despite occupying such a dominant position in our lives, the smartphone also accounts for the lowest time per user interaction, at an average of 17 minutes, as opposed to 30 minutes for tablet, 39 minutes for computer, and 43 minutes for TV. This is an important point to bear in mind when designing mobile learning: as a rule of thumb you can expect a learner to engage with a tablet-based task for half an hour, and a smartphone-based task for just a quarter of an hour. Google helpfully outlines some important multi-screen lessons.
While these are aimed at identifying consumer behaviour, and in particular online shopping habits, we can interpret them for use in mobile learning as follows:

- Understand how people consume digital media and tailor your learning strategies to each channel
- Adjust learning goals to account for the inherent differences between devices
- Make sure learners can save their progress between devices
- Make sure learners can easily find the learning platform (Moodle) on each device
- Once in the learning platform, make it easy for learners to find what they are looking for quickly
- Smartphones are the backbone of your learners' daily media use, so design your learning to be started on a smartphone and continued on a tablet or desktop computer

Having an understanding of how modern-day learners use their different screens and devices will have a real impact on your learning design.

Mobile usage in your organization

In 2011, the world reached a technology watershed when it was estimated that one third of the world's seven billion people were online. The growth in online users is dominated by the developing world and is fuelled by mobile devices. There are now a staggering six billion mobile phone subscriptions globally. Mobile technology has quite simply become ubiquitous. And as Google showed us, people use mobile devices as the backbone of their daily media consumption, and most people already use them for school, college, or work regardless of whether they are allowed to. In this section, we will look at how mobiles are used in some of the key sectors in which Moodle is used: schools, further and higher education, and the workplace.

Mobile usage in school

Moodle is widely used throughout primary and secondary education, and mobile usage among school pupils is widespread; the two are natural bedfellows in this sector. For example, in the UK half of all 12 to 15 year olds own a smartphone, while 70 percent of 8 to 15 year olds have a games console such as a Nintendo DS or PlayStation in their bedroom. Mobile device use is quite simply rampant among school children.

Many primary schools now have policies which allow children to bring mobile phones into school, recognizing that such devices have a role to play in helping pupils feel safe and secure, particularly on the journey to and from school. However, it is fairly normal practice in this age group for mobiles to be handed in at the start of the school day and collected at the end of the day. For primary pupils, therefore, the use of mobile devices for education will be largely for homework. In secondary schools, the picture is very different. There is not likely to be a device hand-in policy during school hours, and a variety of acceptable use policies will be in force. An acceptable use policy may include a provision for using mobiles in lesson time, with a teacher's agreement, for the purposes of supporting learning. This, of course, opens up valuable learning opportunities.

Mobile learning in education has been the subject of a number of initiatives and research studies which are all excellent sources of information. These include:

- Learning2Go, who were pioneers in mobile learning for schools in the UK, distributing hundreds of Windows Mobile devices to Wolverhampton schools between 2003 and 2007 and introducing smartphones in 2008 under the Computers for Pupils initiative and the national MoLeNET scheme.
- Learning Untethered, which was not a formal research project but an exploration that gave Android tablets to a class of fifth graders. It was noted that the overall "feel" of the classroom shifted as students took a more active role in discovery, exploration, and active learning.
- The Dudley Handhelds initiative, which provided 300 devices to learners in grades five to ten across six primary schools, one secondary special school, and one mainstream secondary school.

These are just a few of the many research studies available, and they are well worth a read to understand how schools have been implementing mobile learning for different age groups.

Mobile usage in further and higher education

College students are heavy users of mobiles, with a roughly half-and-half split between smartphones and feature phones among the student community. Of the smartphone users, over 80 percent use them for college-related tasks. As we saw from Google's research, smartphones are the backbone of your learners' daily media use for those who have them. So if you don't already provide mobile learning opportunities on your Moodle site, it is likely that your users are already helping themselves to the vast array of mobile learning sites and apps that have sprung up in recent years to meet the high demand for such services. If you don't provide your students with mobile learning opportunities, you can bet your bottom dollar that someone else is, and it could be of dubious quality or out of date.

Despite the ubiquity of the mobile, many schools and colleges continue to ban them, viewing mobiles as a distraction or a means of bullying. They are fighting a rising tide, however. Students are living their lives through their mobile devices, and these devices have become their primary means of communication. A study in late 2012 of nearly 295,000 students found that despite e-mail, IM, and text messaging being the dominant peer-communication tools for students, less than half of 14 to 18 year olds and only a quarter of 11 to 14 year olds used them to communicate with their teachers. Over half of high school students said they would use their smartphone to communicate with their teacher if it was allowed. Unfortunately it rarely is, but this will change. Students want to be able to communicate electronically with their teachers; they want online text articles with classmate collaboration tools; they want to go online on their mobile to get information. Go to where your students are and communicate with them in their native environment, which is via their mobile. Be there for them, engage them, and inspire them.

In the years approaching 2010, some higher education institutions started engaging in headline-grabbing "iPad for every student" initiatives. Many institutions adopted a quick-win strategy of making mobile-friendly websites with access to campus information, directories, news, and events. It is estimated that in the USA over 90 percent of higher education institutions have mobile-friendly websites. Some of the headline-grabbing initiatives include the following:

- Seton Hill University was the first to roll out iPads to all full-time students in 2010 and has continued to do so every year since. It is at the forefront of mobile learning in the US university sector and uses Moodle as its virtual learning environment (VLE).
- Abilene Christian University was the first university in the U.S. to provide iPhones or iPod Touches to all new full-time students in 2008, and is regarded as one of the most mobile-friendly campuses in the U.S.
- The University of Western Sydney in Australia will roll out 11,000 iPads to all faculty and newly enrolled students in 2013, as well as creating its own mobile apps.
- Coventry University in the UK is creating a smart campus in which the geographical location of students triggers access to content and experiences through their mobile devices.
- MoLeNET in the UK was one of the world's largest mobile learning implementations, comprising 115 colleges, 29 schools, 50,000 students, and 4,000 staff from 2007 to 2010. This was a research-led initiative, although unfortunately the original website has now been taken down.

While some of these examples are about providing mobile devices to new students, the Bring Your Own Device (BYOD) trend is strong in further and higher education. We know that mobile devices form the backbone of students' media consumption, and in the U.S. alone 75 percent of students use their phone to access the Internet. Additionally, 40 percent have signed up to online test preparation sites on their mobiles, strongly suggesting that if an institution doesn't provide mobile learning services, students will go and get them elsewhere anyway. Instead of the glamorous offer of iPads for all, some institutions have chosen to invest heavily in their wireless network infrastructure in support of a BYOD approach. This is a very heavy investment and can be far more expensive than a few thousand iPads. Some BYOD implementations include:

- King's College London in the UK, which supports 6,000 staff and 23,500 students
- The University of Tennessee at Knoxville in the U.S., which hosts more than 26,000 students and 5,000 faculty and staff members, with nearly 75,000 smartphones, tablets, and laptops
- The University of South Florida in the U.S., which supports 40,000 users
- São Paulo State University in Brazil, which has 45,000 students and noted that despite providing desktop machines in the computer labs, half of all students opted to use their own devices instead

There are many challenges to BYOD which are not within the scope of this article, but there are also many resources on how to implement a BYOD policy that minimizes the risks. Use the Internet to seek these out.

Providing campus information websites on mobiles obviously was not the key rationale behind such technology investments. The real interest is in delivering mobile learning, and this remains an area full of experimentation and research. Google Scholar can be used to chart the rise of mobile learning research, and it becomes evident how this really takes off in the second half of the decade, when the first major institutions started investing in mobile technology. It indexes scholarly literature, including journal and conference papers, theses and dissertations, academic articles, pre-prints, abstracts, and technical reports. A year-by-year search reveals the rise of mobile learning research from just over 100 articles in 2000 to over 6,000 in 2012.

[Chart: the rise of mobile learning in academic research]

Mobile usage in apprenticeships

A typical apprenticeship will include a significant amount of college-based learning towards a qualification, alongside a major component based in the workplace under the supervision of an employer while the apprentice learns a particular trade.
Due to the movement of the student from college to workplace, and the fact that the apprentice usually has to keep a reflective log and capture evidence of their skills acquisition, mobile devices can play a really useful role in apprenticeships. Traditionally, the age group for apprenticeships is 16 to 24 year olds. This is an age group that has never known a world without mobiles and their mobile devices are integrated into the fabric of their daily lives and media consumption. They use social networks, SMS, and instant messaging rather than e-mail, and are more likely to use the mobile internet than any other age group. Statistics from the U.S. reveal that 75 percent of students use their phone to access the Internet. Reflective logs are an important part of any apprenticeship. There are a number of activities in Moodle that can be used for keeping reflective logs, and these are ideal for mobile learning. Reflective log entries tend to be shorter than traditional assignments and lend themselves well to production on a tablet or even a smartphone. Consumption of reflective logs is perfect for both smartphone and tablet devices, as posts tend to be readable in less than 5 minutes. Many institutions use Moodle coupled with an ePortfolio tool such as Mahara or Onefile to manage apprenticeship programs. There are additional Packt Publishing articles on ePortfolio tools such as Mahara, should you wish to investigate a third-party, open source ePortfolio solution. Mobile usage in the workplace BYOD in the workplace is also becoming increasingly common, and, appears to be an unstoppable trend. It may also be discouraged or banned on security, data protection, or distraction grounds, but it is happening regardless. There is an increasing amount of research available on this topic, and some key findings from various studies reveal the scale of the trend: A survey of 600 IT and business leaders revealed that 90 percent of survey respondents had employees using their own devices at work 65 to 75 percent of companies allow some sort of BYOD usage 80 to 90 percent of employees use a personal mobile device for business use If you are a workplace learning practitioner then you need to sit up and take note of these numbers if you haven't done so already. Even if your organization doesn't officially have a BYOD policy, it is most likely that your employees are already using their own mobile devices for business purposes. It's up to your IT department to manage this safely, and again there are many resources and case studies available online to help with this. But as a learning practitioner, whether it's officially supported or not, it's worth asking yourself whether you should embrace it anyway, and provide learning activities to these users and their devices. Mobile usage in distance learning Online distance learning is principally used in higher education (HE), and many institutions have taken to it either as a new stream of revenue or as a way of building their brand globally. Enrolments have rocketed over recent years; the number of U.S. students enrolled in an online course has increased from one to six million in a decade. Online enrolments have also been the greatest source of new enrolments in HE in that time, outperforming general student enrolment dramatically. Indeed, the year 2011 in the US saw a 10 percent growth rate in distance learning enrolment against 2 percent in the overall HE student population. 
In the 2010 to 2011 academic years, online enrolments accounted for 31 percent of all U.S. HE enrolments. Against this backdrop of phenomenal growth in HE distance learning courses, we also have a new trend of Massive Online Open Courses (MOOCs) which aim to extend enrolment past traditional student populations to the vast numbers of potential students for whom a formal HE program of study may not be an option. The convenience and flexibility of distance learning appeal to certain groups of the population. Distance learners are likely to be older students, with more than 30 years of age being the dominant age group. They are also more likely to be in full-time employment and taking the course to help advance their careers, and are highly likely to be married and juggling home and family commitments with their jobs and coursework. We know that among the 30 to 40 age group mobile device use is very high, particularly among working professionals, who are a major proportion of HE distance learners. However, the MOOC audience is of real interest here as this audience is much more diverse. As many MOOC users find traditional HE programs out of their reach, many of these will be in developing countries, where we already know that users are leapfrogging desktop computing and going straight to mobile devices and wireless connectivity. For these types of courses, mobile support is absolutely crucial. A wide variety of tools exist to support online distance learning, and these are split between synchronous and asynchronous tools, although typically a blend of the two is used. In synchronous learning, all participants are present at the same time. Courses will therefore be organized to a timetable, and will involve tools such as webinars, video conferences, and real-time chat. In asynchronous learning, courses are self-directed and students work to their own schedules, and tools include e-mail, discussion forums, audio recording, video recordings, and printed material. Connecting distance learning from traditional institutions to MOOCs is a recognized need to improve course quality and design, faculty training, course assessment, and student retention. There are known barriers, including motivation, feedback, teacher contact, and student isolation. These are major challenges to the effectiveness of distance learning, and later in this article we will demonstrate how mobile devices can be used to address some of these areas. Case studies The following case studies illustrate two approaches to how an HE institution and a distance learning institution have adopted Moodle to deliver mobile learning. Both institutions were very early movers in making Moodle mobile-friendly, and can be seen as torch bearers for the rest of us. Fortunately, both institutions have also been influential in the approach that Moodle HQ have taken to mobile compatibility, so in using the new mobile features in recent versions of Moodle, we are all able to take advantage of the substantial amount of work that went into these two sites. University of Sussex The University of Sussex is a research-led HE institution on the south coast of England. They use a customized Moodle 1.9 installation called Study Direct, which plays host to 1,500 editing tutors and 15,000 students across 2,100 courses per year, and receives 13,500 unique hits per day. 
The e-learning team at the University of Sussex contains five staff (one manager, two developers, one user support, and one tutor support) whose remit covers a much wider range of learning technologies beyond the VLE. However, the team has achieved a great deal with limited resources. It has been working towards a responsive design for some years and has helped to influence the direction of Moodle with regards to designing for mobile devices and usability, through speaking at UK Moodle and HE conferences and providing passionate inputs into debates on the Moodle forums on the subject of interface design. Further to this, team member Stuart Lamour is one of the three original developers of the Bootstrap theme for Moodle, which is used throughout this article. The Study Direct site shows what is possible in Moodle, given the time and resources for its development and a focus on user-centered design. The approach has been to avoid going down the native application route for mobile access like many institutions have done, and to instead focus on a responsive, browser-based user experience. The login page is simple and clean. One of the nice things that the University of Sussex has done is to think through the user interactions on its site and clearly identify calls to action, typically with a green button, as shown by the sign in button on the login page in the following screenshot: The team has built its own responsive theme for Moodle. While the team has taken a leading role on development of the Moodle 2 Bootstrap theme, the University of Sussex site is still on Moodle 1.9 so this implementation uses its own custom theme. This theme is fully responsive and looks good when viewed on a tablet or a smartphone, reordering screen elements as necessary for each screen resolution. The course page, shown in the following screenshot, is similarly clear and uncluttered. The editing interface has been customized quite heavily to give tutors a clear and easy way to edit their courses without running the risk of messing up the user interface. The team maintains a useful and informative blog explaining what they have done to improve the user experience, and which is well worth a read. Open University The Open University (OU) in the UK runs one the largest Moodle sites in the world. It is currently using Moodle 2 for the OU's main VLE as well as for its OpenLearn and Qualifications online platforms. Its Moodle implementation regularly sees days with well over one million transactions and over 60,000 unique users, and has seen peak times of 5,000 simultaneous online users. The OU's focus on mobile Moodle goes back to about 2010, so it was an early mover in this area. This means that the OU did not have the benefit of all the mobile-friendly features that now come with Moodle, but had to largely create its own mobile interface from scratch. Anthony Forth gave a presentation at the UK Moodle Moot in 2011 on the OU's approach to mobile interface design for Moodle. He identified that at the time the Open University migrated to Moodle 2 in 2011 it had over 13,000 mobile users per month. The OU chose to survey a group of 558 of these users in detail to investigate their needs more closely. It transpired that the most popular uses of Moodle on mobile devices was for forums, news, resources and study planners, while areas such as wikis and blogs were very low down the list of users' priorities. So the OU's mobile design focused on these particular areas as well as looking at usability in general. 
The preceding screenshot shows the OU course page with tabbed access to the popular areas such as Planner, News, Forums, and Resources, and then the main content area providing space for latest news, unread forum posts, and activities taking place this week. The site uses a nice, clean, and easy to understand user interface in which a lot of thought has gone into the needs of the student. Summary In this article, we have provided you with a vision of how mobile learning could be put to use on your own organization's Moodle platform. We gave you an understanding of some of the foundation concepts of mobile learning, some insights into how important mobile learning is becoming, and how it is gaining momentum in different sectors. Your learners are already using mobile devices whether in educational institutions or in the workplace, and they use mobile devices as the backbone of their daily online interactions. They want to also use them for learning. Hopefully, we have started you off on a mobile learning path that will allow you to make this happen. Mobile devices are where the future of Moodle is going to be played out, so it makes complete sense to be designing for mobile access right now. Fortunately, Moodle already provides the means for this to happen and provides tools that allow you to set it up for mobile delivery. Resources for Article : Further resources on this subject: Getting Started with Moodle 2.0 for Business [Article] Managing Student Work using Moodle: Part 2 [Article] Integrating Moodle 2.0 with Mahara and GoogleDocs for Business [Article]

Creating Dynamic UI with Android Fragments

Packt
26 Sep 2013
2 min read
(For more resources related to this topic, see here.) Many applications involve several screens of data that a user might want to browse or flip through to view each screen. As an example, think of an application where we list a catalogue of books with each book in the catalogue appearing on a single screen. A book's screen contains an image, title, and description like the following screenshot: To view each book's information, the user needs to move to each screen. We could put a next button and a previous button on the screen, but a more natural action is for the user to use their thumb or finger to swipe the screen from one edge of the display to the other and have the screen with the next book's information slide into place as represented in the following screenshot: This creates a very natural navigation experience, and honestly, is a more fun way to navigate through an application than using buttons. Summary Fragments are the foundation of modern Android app development, allowing us to display multiple application screens within a single activity. Thanks to the flexibility provided by fragments, we can now incorporate rich navigation into our apps with relative ease. Using these rich navigation capabilities, we're able to create a more dynamic user interface experience that make our apps more compelling and that users find more fun to work with. Resources for Article : Further resources on this subject: So, what is Spring for Android? [Article] Android Native Application API [Article] Animating Properties and Tweening Pages in Android 3-0 [Article]
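The swipe navigation described above is typically built with a ViewPager backed by a FragmentPagerAdapter from the Android support library. The following is a minimal sketch of that idea, not code from the article itself; BookCatalogActivity and the simplified BookFragment placeholder are hypothetical names, and a real catalogue screen would inflate a layout with the image, title, and description rather than a bare TextView.

import android.os.Bundle;
import android.support.v4.app.Fragment;
import android.support.v4.app.FragmentActivity;
import android.support.v4.app.FragmentPagerAdapter;
import android.support.v4.view.ViewPager;
import android.view.LayoutInflater;
import android.view.View;
import android.view.ViewGroup;
import android.widget.TextView;

public class BookCatalogActivity extends FragmentActivity {

    private static final int BOOK_COUNT = 5; // assumption: number of books in the catalogue

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // The ViewPager provides the thumb-swipe gesture and the slide animation.
        ViewPager pager = new ViewPager(this);
        pager.setId(1000); // the pager needs a view id so fragment state can be managed
        pager.setAdapter(new FragmentPagerAdapter(getSupportFragmentManager()) {
            @Override
            public Fragment getItem(int position) {
                // One fragment per book screen.
                return BookFragment.newInstance(position);
            }

            @Override
            public int getCount() {
                return BOOK_COUNT;
            }
        });
        setContentView(pager);
    }

    // Simplified stand-in for the real book screen (image, title, and description).
    public static class BookFragment extends Fragment {

        static BookFragment newInstance(int index) {
            BookFragment fragment = new BookFragment();
            Bundle args = new Bundle();
            args.putInt("index", index);
            fragment.setArguments(args);
            return fragment;
        }

        @Override
        public View onCreateView(LayoutInflater inflater, ViewGroup container,
                                 Bundle savedInstanceState) {
            TextView text = new TextView(getActivity());
            text.setText("Book #" + (getArguments().getInt("index") + 1));
            return text;
        }
    }
}

Swiping left or right now moves between the book fragments without any next or previous buttons, which is the navigation experience the article describes.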

Vaadin and its Context

Packt
25 Sep 2013
24 min read
(For more resources related to this topic, see here.) Developing Java applications and more specifically, developing Java web applications should be fun. Instead, most projects are a mess of sweat and toil, pressure and delays, costs and cost cutting. Web development has lost its appeal. Yet, among the many frameworks available, there is one in particular that draws our attention because of its ease of use and its original stance. It has been around since the past decade and has begun to grow in importance. The name of this framework is Vaadin. The goal of this article is to see, step-by-step, how to develop web applications with Vaadin. Vaadin is the Finnish word for a female reindeer (as well as a Finnish goddess). This piece of information will do marvels to your social life as you are now one of the few people on Earth who know this (outside Finland). Before diving right into Vaadin, it is important to understand what led to its creation. Readers who already have this information (or who don't care) should go directly to Environment Setup. Rich applications Vaadin is often referred to as a Rich Internet Application (RIA) framework. Before explaining why, we need to first define some terms which will help us describe the framework. In particular, we will have a look at application tiers, the different kind of clients, and their history. Application tiers Some software run locally, that is, on the client machine and some run remotely, such as on a server machine. Some applications also run on both the client and the server. For example, when requesting an article from a website, we interact with a browser on the client side but the order itself is passed on a server in the form of a request. Traditionally, all applications can be logically separated into tiers, each having different responsibilities as follows: Presentation : The presentation tier is responsible for displaying the end-user information and interaction. It is the realm of the user interface. Business Logic : The logic tier is responsible for controlling the application logic and functionality. It is also known as the application tier, or the middle tier as it is the glue between the other two surrounding tiers, thus leading to the term middleware. Data : The data tier is responsible for storing and retrieving data. This backend may be a file system. In most cases, it is a database, whether relational, flat, or even an object-oriented one. This categorization not only naturally corresponds to specialized features, but also allows you to physically separate your system into different parts, so that you can change a tier with reduced impact on adjacent tiers and no impact on non-adjacent tiers. Tier migration In the histor yof computers and computer software, these three tiers have moved back and forth between the server and the client. Mainframes When computers were mainframes, all tiers were handled by the server. Mainframes stored data, processed it, and were also responsible for the layout of the presentation. Clients were dumb terminals, suited only for displaying characters on the screen and accepting the user input. Client server Not many companies could afford the acquisition of a mainframe (and many still cannot). Yet, those same companies could not do without computers at all, because the growing complexity of business processes needed automation. This development in personal computers led to a decrease in their cost. With the need to share data between them, the network traffic rose. 
This period in history saw the rise of the personal computer, as well as the Client server term, as there was now a true client. The presentation and logic tier moved locally, while shared databases were remotely accessible, as shown in the following diagram: Thin clients Big companies migrating from mainframes to client-server architectures thought that deploying software on ten client machines on the same site was relatively easy and could be done in a few hours. However, they quickly became aware of the fact that with the number of machines growing in a multi-site business, it could quickly become a nightmare. Enterprises also found that it was not only the development phase that had to be managed like a project, but also the installation phase. When upgrading either the client or the server, you most likely found that the installation time was high, which in turn led to downtime and that led to additional business costs. Around 1991, Sir Tim Berners-Leeinvented the Hyper Text Markup Language, better known as HTML. Some time after that, people changed its original use, which was to navigate between documents, to make HTML-based web applications. This solved the deployment problem as the logic tier was run on a single-server node (or a cluster), and each client connected to this server. A deployment could be done in a matter of minutes, at worst overnight, which was a huge improvement. The presentation layer was still hosted on the client, with the browser responsible for displaying the user interface and handling user interaction. This new approach brought new terms, which are as follows: The old client-server architecture was now referred to as fat client . The new architecture was coined as thin client, as shown in the following diagram: Limitations of the thin-client applications approach Unfortunately, this evolution was made for financial reasons and did not take into account some very important drawbacks of the thin client. Poor choice of controls HTML does not support many controls, and what is available is not on par with fat-client technologies. Consider, for example, the list box: in any fat client, choices displayed to the user can be filtered according to what is typed in the control. In legacy HTML, there's no such feature and all lines are displayed in all cases. Even with HTML5, which is supposed to add this feature, it is sadly not implemented in all browsers. This is a usability disaster if you need to display the list of countries (more than 200 entries!). As such, ergonomics of true thin clients have nothing to do with their fat-client ancestors. Many unrelated technologies Developers of fat-client applications have to learn only two languages: SQL and the technology's language, such as Visual Basic, Java, and so on. Web developers, on the contrary, have to learn an entire stack of technologies, both on the client side and on the server side. On the client side, the following are the requirements: First, of course, is HTML. It is the basis of all web applications, and although some do not consider it a programming language per se, every web developer must learn it so that they can create content to be displayed by browsers. In order to apply some common styling to your application, one will probably have to learn the Cascading Style Sheets ( CSS) technology. CSS is available in three main versions, each version being more or less supported by browser version combinations (see Browser compatibility). 
Most of the time, it is nice to have some interactivity on the client side, like pop-up windows or others. In this case, we will need a scripting technology such as ECMAScript. ECMAScript is the specification of which JavaScript is an implementation (along with ActionScript ). It is standardized by the ECMA organization. See http://www.ecma-international.org/publications/standards/Ecma-262.htm for more information on the subject. Finally, one will probably need to update the structure of the HTML page, a healthy dose of knowledge of the Document Object Model (DOM) is necessary. As a side note, consider that HTML, CSS, and DOM are W3C specifications while ECMAScript is an ECMA standard. From a Java point-of-view and on the server side, the following are the requirements: As servlets are the most common form of request-response user interactions in Java EE, every web developer worth his salt has to know both the Servlet specification and the Servlet API. Moreover, most web applications tend to enforce the Model-View-Controller paradigm. As such, the Java EE specification enforces the use of servlets for controllers and JavaServer Pages (JSP ) for views. As JSP are intended to be templates, developers who create JSP have an additional syntax to learn, even though they offer the same features as servlets. JSP accept scriptlets, that is, Java code snippets, but good coding practices tend to frown upon this, however, as Java code can contain any feature, including some that should not be part of views—for example, the database access code. Therefore, a completely new technology stack is proposed in order to limit code included in JSP: the tag libraries. These tag libraries also have a specification and API, and that is another stack to learn. However, these are a few of the standard requirements that you should know in order to develop web applications in Java. Most of the time, in order to boost developer productivity, one has to use frameworks. These frameworks are available in most of the previously cited technologies. Some of them are supported by Oracle, such as Java Server Faces, others are open source, such as Struts. JavaEE 6 seems to favor replacement of JSP and Servlet by Java Server Faces(JSF). Although JSF aims to provide a component-based MVC framework, it is plagued by a relative complexity regarding its components lifecycle. Having to know so much has negative effects, a few are as follows: On the technical side, as web developers have to manage so many different technologies, web development is more complex than fat-client development, potentially leading to more bugs On the human resources side, different meant either different profiles were required or more resources, either way it added to the complexity of human resource management On the project management side, increased complexity caused lengthier projects: developing a web application was potentially taking longer than developing a fat-client application All of these factors tend to make the thin-client development cost much more than fat-client, albeit the deployment cost was close to zero. Browser compatibility The Web has standards, most of them upheld by the World Wide Web Consortium. Browsers more or less implement these standards, depending on the vendor and the version. The ACID test, in version 3, is a test for browser compatibility with web standards. Fortunately, most browsers pass the test with 100 percent success, which was not the case two years ago. 
Some browsers even make the standards evolve, such as Microsoft which implemented the XmlHttpRequest objectin Internet Explorer and thus formed the basis for Ajax. One should be aware of the combination of the platform, browser, and version. As some browsers cannot be installed with different versions on the same platform, testing can quickly become a mess (which can fortunately be mitigated with virtual machines and custom tools like http://browsershots.org). Applications should be developed with browser combinations in mind, and then tested on it, in order to ensure application compatibility. For intranet applications, the number of supported browsers is normally limited. For Internet applications, however, most common combinations must be supported in order to increase availability. If this wasn't enough, then the same browser in the same version may run differently on different operating systems. In all cases, each combination has an exponential impact on the application's complexity, and therefore, on cost. Page flow paradigm Fat-client applications manage windows. Most of the time, there's a main window. Actions are mainly performed in this main window, even if sometimes managed windows or pop-up windows are used. As web applications are browser-based and use HTML over HTTP, things are managed differently. In this case, the presentation unit is not the window but the page. This is a big difference that entails a performance problem: indeed, each time the user clicks on a submit button, the request is sent to the server, processed by it, and the HTML response is sent back to the client. For example, when a client submits a complex registration form, the entire page is recreated on the server side and sent back to the browser even if there is a minor validation error, even though the required changes to the registration form would have been minimal. Beyond the limits Over the last few years, users have been applying some pressure in order to have user interfaces that offer the same richness as good old fat-client applications. IT managers, however, are unwilling to go back to the old deploy-as-a-project routine and its associated costs and complexity. They push towards the same deployment process as thin-client applications. It is no surprise that there are different solutions in order to solve this dilemma. What are rich clients? All the following solutions are globally called rich clients, even if the approach differs. They have something in common though: all of them want to retain the ease of deployment of the thin client and solve some or all of the problems mentioned previously. Rich clients fulfill the fourth quadrant of the following schema, which is like a dream come true, as shown in the following diagram: Some rich client approaches The following solutions are strategies that deserve the rich client label. Ajax Ajax was one of the first successful rich-client solutions. The term means Asynchronous JavaScript with XML. In effect, this browser technology enables sending asynchronous requests, meaning there is no need to reload the full page. Developers can provide client scripts implementing custom callbacks: those are executed when a response is sent from the server. Most of the time, such scripts use data provided in the response payload to dynamically update relevant part of the page DOM. Ajax addresses the richness of controls and the page flow paradigm. Unfortunately: It aggravates browser-compatibility problems as Ajax is not handled in the same way by all browsers. 
It has problems unrelated directly to the technologies, which are as follows: Either one learns all the necessary technologies to do Ajax on its own, that is, JavaScript, Document Object Model, and JSON/XML, to communicate with the server and write all common features such as error handling from scratch. Alternatively, one uses an Ajax framework, and thus, one has to learn another technology stack. Richness through a plugin The oldest way to bring richness to the user's experience is to execute the code on the client side and more specifically, as a plugin in the browser. Sun—now Oracle—proposed the applet technology, whereas Microsoft proposed ActiveX. The latest technology using this strategy is Flash. All three were failures due to technical problems, including performance lags, security holes, and plain-client incompatibility or just plain rejection by the market. There is an interesting way to revive the applet with the Apache Pivot project, as shown in the following screenshot (http://pivot.apache.org/), but it hasn't made a huge impact yet; A more recent and successful attempt at executing code on the client side through a plugin is through Adobe's Flex. A similar path was taken by Microsoft's Silverlight technology. Flex is a technology where static views are described in XML and dynamic behavior in ActionScript. Both are transformed at compile time in Flash format. Unfortunately, Apple refused to have anything to do with the Flash plugin on iOS platforms. This move, coupled with the growing rise of HTML5, resulted in Adobe donating Flex to the Apache foundation. Also, Microsoft officially renounced plugin technology and shifted Silverlight development to HTML5. Deploying and updating fat-client from the web The most direct way toward rich-client applications is to deploy (and update) a fat-client application from the web. Java Web Start Java Web Start (JWS), available at http://download.oracle.com/javase/1.5.0/docs/guide/javaws/, is a proprietary technology invented by Sun. It uses a deployment descriptor in Java Network Launching Protocol (JNLP) that takes the place of the manifest inside a JAR file and supplements it. For example, it describes the main class to launch the classpath, and also additional information such as the minimum Java version, icons to display on the user desktop, and so on. This descriptor file is used by the javaws executable, which is bundled in the Java Runtime Environment. It is the javaws executable's responsibility to read the JNLP file and do the right thing according to it. In particular, when launched, javaws will download the updated JAR. The detailed process goes something like the following: The user clicks on a JNLP file. The JNLP file is downloaded on the user machine, and interpreted by the local javaws application. The file references JARs that javaws can download. Once downloaded, JWS reassembles the different parts, create the classpath, and launch the main class described in the JNLP. JWS correctly tackles all problems posed by the thin-client approach. Yet it never reaches critical mass for a number of reasons: First time installations are time-consuming because typically lots of megabytes need to be transferred over the wire before the users can even start using the app. This is a mere annoyance for intranet applications, but a complete no go for Internet apps. Some persistent bugs weren't fixed across major versions. Finally, the lack of commercial commitment by Sun was the last straw. 
A good example of a successful JWS application is JDiskReport (http://www.jgoodies.com/download/jdiskreport/jdiskreport.jnlp), a disk space analysis tool by Karsten Lentzsch , which is available on the Web for free. Update sites Updating software through update sites is a path taken by both Integrated Development Environment ( IDE ) leaders, NetBeans and Eclipse. In short, once the software is initially installed, updates and new features can be downloaded from the application itself. Both IDEs also propose an API to build applications. This approach also handles all problems posed by the thin-client approach. However, like JWS, there's no strong trend to build applications based on these IDEs. This can probably be attributed to both IDEs using the OSGI standard whose goal is to address some of Java's shortcomings but at the price of complexity. Google Web Toolkit Google Web Toolkit (GWT) is the framework used by Google to create some of its own applications. Its point of view is very unique among the technologies presented here. It lets you develop in Java, and then the GWT compiler transforms your code to JavaScript, which in turn manipulates the DOM tree to update HTML. It's GWT's responsibility to handle browser compatibility. This approach also solves the other problems of the pure thin-client approach. Yet, GWT does not shield developers from all the dirty details. In particular, the developer still has to write part of the code handling server-client communication and he has to take care of the segregation between Java server-code which will be compiled into byte code and Java client-code which will be compiled into JavaScript. Also, note that the compilation process may be slow, even though there are a number of optimization features available during development. Finally, developers need a good understanding of the DOM, as well as the JavaScript/DOM event model. Why Vaadin? Vaadin is a solution evolved from a decade of problem-solving approach, provided by a Finnish company named Vaadin Ltd, formerly IT Mill. Therefore, having so many solutions available, could question the use of Vaadin instead of Flex or GWT? Let's first have a look at the state of the market for web application frameworks in Java, then detail what makes Vaadin so unique in this market. State of the market Despite all the cons of the thin-client approach, an important share of applications developed today uses this paradigm, most of the time with a touch of Ajax augmentation. Unfortunately, there is no clear leader for web applications. Some reasons include the following: Most developers know how to develop plain old web applications, with enough Ajax added in order to make them usable by users. GWT, although new and original, is still complex and needs seasoned developers in order to be effective. From a Technical Lead or an IT Manager's point of view, this is a very fragmented market where it is hard to choose a solution that will meet users' requirements, as well as offering guarantees to be maintained in the years to come. Importance of Vaadin Vaadin is a unique framework in the current ecosystem; its differentiating features include the following: There is no need to learn different technology stacks, as the coding is solely in Java. The only thing to know beside Java is Vaadin's own API, which is easy to learn. 
This means: The UI code is fully object-oriented There's no spaghetti JavaScript to maintain It is executed on the server side Furthermore, the IDE's full power is in our hands with refactoring and code completion. No plugin to install on the client's browser, ensuring all users that browse our application will be able to use it as-is. As Vaadin uses GWT under the hood, it supports all browsers that the version of GWT also supports. Therefore, we can develop a Vaadin application without paying attention to the browsers and let GWT handle the differences. Our users will interact with our application in the same way, whether they use an outdated version (such as Firefox 3.5), or a niche browser (like Opera). Moreover, Vaadin uses an abstraction over GWT so that the API is easier to use for developers. Also, note that Vaadin Ltd (the company) is part of GWT steering committee, which is a good sign for the future. Finally, Vaadin conforms to standards such as HTML and CSS, making the technology future proof. For example, many applications created with Vaadin run seamlessly on mobile devices although they were not initially designed to do so. Vaadin integration In today's environment, integration features of a framework are very important, as normally every enterprise has rules about which framework is to be used in some context. Vaadin is about the presentation layer and runs on any servlet container capable environment. Integrated frameworks There are three integration levels possible which are as follows: Level 1 : out-of-the-box or available through an add-on, no effort required save reading the documentation Level 2 : more or less documented Level 3 : possible with effort The following are examples of such frameworks and tools with their respective integration estimated effort: Level 1 : Java Persistence API ( JPA ): JPA is the Java EE 5 standard for all things related to persistence. An add-on exists that lets us wire existing components to a JPA backend. Other persistence add-ons are available in the Vaadin directory, such as a container for Hibernate, one of the leading persistence frameworks available in the Java ecosystem. A bunch of widget add-ons, such as tree tables, popup buttons, contextual menus, and many more. Level 2 : Spring is a framework which is based on Inversion of Control ( IoC ) that is the de facto standard for Dependency Injection. Spring can easily be integrated with Vaadin, and different strategies are available for this. Context Dependency Injection ( CDI ): CDI is an attempt at making IoC a standard on the Java EE platform. Whatever can be done with Spring can be done with CDI. Any GWT extensions such as Ext-GWT or Smart GWT can easily be integrated in Vaadin, as Vaadin is built upon GWT's own widgets. Level 3 : We can use another entirely new framework and languages and integrate them with Vaadin, as long as they run on the JVM: Apache iBatis, MongoDB, OSGi, Groovy, Scala, anything you can dream of! Integration platforms Vaadin provides an out-of-the-box integration with an important third-party platform: Liferay is an open source enterprise portal backed by Liferay Inc. Vaadin provides a specialized portlet that enables us to develop Vaadin applications as portlets that can be run on Liferay. Also, there is a widgetset management portlet provided by Vaadin, which deploys nicely into Liferay's Control Panel. 
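To make the "coding is solely in Java" point concrete, the following is a minimal sketch of what a Vaadin 7 UI class looks like; it is an illustration of the style of the API rather than code taken from the article, and the servlet or web.xml registration needed to actually deploy it is omitted.

import com.vaadin.server.VaadinRequest;
import com.vaadin.ui.Button;
import com.vaadin.ui.Notification;
import com.vaadin.ui.UI;
import com.vaadin.ui.VerticalLayout;

// The whole user interface is ordinary, object-oriented Java running on the server;
// Vaadin generates the HTML, CSS, and JavaScript that reach the browser.
public class HelloVaadinUI extends UI {

    @Override
    protected void init(VaadinRequest request) {
        VerticalLayout layout = new VerticalLayout();
        layout.setMargin(true);

        Button greet = new Button("Greet");
        greet.addClickListener(new Button.ClickListener() {
            @Override
            public void buttonClick(Button.ClickEvent event) {
                // The click is handled on the server side, with no hand-written JavaScript.
                Notification.show("Hello from Vaadin!");
            }
        });

        layout.addComponent(greet);
        setContent(layout);
    }
}

Refactoring, code completion, and debugging all happen in the IDE against this one Java class, which is exactly the productivity argument made above.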
Using Vaadin in the real world If you embrace Vaadin, then chances are that you will want to go beyond toying with the Vaadin framework and develop real-world applications. Concerns about using a new technology Although it is okay to use the latest technology for a personal or academic project, projects that have business objectives should just run and not be riddled with problems from third-party products. In particular, most managers may be wary when confronted by anew product (or even a new version), and developers should be too. The following are some of the reasons to choose Vaadin: Product is of highest quality : The Vaadin team has done rigorous testing throughout their automated build process. Currently, it consists of more than 8,000 unit tests. Moreover, in order to guarantee full compatibility between versions, many (many!) tests execute pixel-level regression testing. Support : Commercial : Although completely committed to open source, Vaadin Limited offer commercial support for their product. Check their Pro Account offering. User forums : A Vaadin user forum is available. Anyone registered can post questions and see them answered by a member of the team or of the community. Note that Vaadin registration is free, as well as hassle-free: you will just be sent the newsletter once a month (and you can opt-out, of course). Retro-compatibility: API : The server-side API is very stable, version after version, and has survived major client-engines rewrite. Some part of the API has been changed from v6 to v7, but it is still very easy to migrate. Architecture : Vaadin's architecture favors abstraction and is at the root of it all. Full-blown documentation available : Product documentation : Vaadin's site provides three levels of documentation regarding Vaadin: a five-minute tutorial, a one-hour tutorial, and the famed article of Vaadin . Tutorials API documentation : The Javadocs are available online; there is no need to build the project locally. Course/webinar offerings : Vaadin Ltd currently provides four different courses, which tackles all the needed skills for a developer to be proficient in the framework. Huge community around the product : There is a community gathering, which is ever growing and actively using the product. There are plenty of blogs and articles online on Vaadin. Furthermore, there are already many enterprises using Vaadin for their applications. Available competent resources : There are more and more people learning Vaadin. Moreover, if no developer is available, the framework can be learned in a few days. Integration with existing product/platforms : Vaadin is built to be easily integrated with other products and platforms. The artile of Vaadin describes how to integrate with Liferay and Google App Engine. Others already use Vaadin Upon reading this, managers and developers alike should realize Vaadin is mature and is used on real-world applications around the world. If you still have any doubts, then you should check http://vaadin.com/who-is-using-vaadin and be assured that big businesses trusted Vaadin before you, and benefited from its advantages as well. Summary In this article, we saw the migration of application tiers in the software architecture between the client and the server. We saw that each step resolved the problems in the previous architecture: Client-server used the power of personal computers in order to decrease mainframe costs Thin-clients resolved the deployment costs and delays Thin-clients have numerous drawbacks. 
For the user, a lack of usability due to poor choice of controls, browser compatibility issues, and the navigation based on page flow; for the developer, many technologies to know. As we are at the crossroad, there is no clear winner in all the solutions available: some only address a few of the problems, some aggravate them. Vaadin is an original solution that tries to resolve many problems at once: It provides rich controls It uses GWT under the cover that addresses most browser compatibility issues It has abstractions over the request response model, so that the model used is application-based and not page based The developer only needs to know one programming language: Java, and Vaadin generates all HTML, JavaScript, and CSS code for you Now we can go on and create our first Vaadin application! Resources for Article : Further resources on this subject: Vaadin Portlets in Liferay User Interface Development [Article] Creating a Basic Vaadin Project [Article] Vaadin – Using Input Components and Forms [Article]

Preparing Your First jQuery Mobile Project

Packt
25 Sep 2013
10 min read
(For more resources related to this topic, see here.) Building an HTML page Let's begin with a simple web page that is not mobile optimized. To be clear, we aren't saying it can't be viewed on a mobile device. Not at all! But it may not be usable on a mobile device. It may be hard to read (text too small). It may be too wide. It may use forms that don't work well on a touch screen. We don't know what kinds of problems we will have at all until we start testing. (And we've all tested our websites on mobile devices to see how well they work, right?) Let's have a look at the following code snippet: <h1>Welcome</h1> <p> Welcome to our first mobile web site. It's going to be the bestsite you've ever seen. Once we get some content. And a business plan. But the hard part is done! </p> <p> <i>Copyright Megacorp&copy; 2013</i> </p> </body> </html> As we said, there is nothing too complex, right? Let's take a quick look at this in the browser: Not so bad, right? But let's take a look at the same page in a mobile simulator: Wow, that's pretty tiny. You've probably seen web pages like this before on your mobile device. You can, of course, typically use pinch and zoom or double-click actions to increase the size of the text. But it would be preferable to have the page render immediately in a mobile-friendly view. This is where jQuery Mobile comes in. Getting jQuery Mobile In the preface we talked about how jQuery Mobile is just a set of files. That isn't said to minimize the amount of work done to create those files, or how powerful they are, but to emphasize that using jQuery Mobile means you don't have to install any special tools or server. You can download the files and simply include them in your page. And if that's too much work, you have an even simpler solution. jQuery Mobile's files are hosted on a Content Delivery Network (CDN). This is a resource hosted by them and guaranteed (as much as anything like this can be) to be online and available. Multiple sites are already using these CDN hosted files. That means when your users hit your site they may already have the resources in their cache. For this article, we will be making use of the CDN hosted files, but just for this first example we'll download and extract the files we need. I recommend doing this anyway for those times when you're on an airplane and wanting to whip up a quick mobile site. To grab the files, visit http://jquerymobile.com/download. There are a few options here but you want the ZIP file option. Go ahead and download that ZIP file and extract it. (The ZIP file you downloaded earlier from GitHub has a copy already.) The following screenshot demonstrates what you should see after extracting the files from the ZIP file: Notice the ZIP file contains a CSS and JavaScript file for jQuery Mobile, as well as a minified version of both. You will typically want to use the minified version in your production apps and the regular version while developing. The images folder has five images used by the CSS when generating mobile optimized pages. You will also see demos for the framework as well as theme and structure files. So, to be clear, the entire framework and all the features we will be talking about over the rest of the article will consist of a framework of 6 files. Of course, you also need to include the jQuery library. You can download that separately at www.jquery.com. At the time this article was written, the recommended version was 1.9.1. 
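If you would rather use the CDN-hosted files mentioned above than the downloaded ZIP, the includes look roughly like the following; the mobile 1.3.2 URLs follow the pattern used by code.jquery.com at the time of writing, so verify them against the download page before relying on them.

<link rel="stylesheet" href="http://code.jquery.com/mobile/1.3.2/jquery.mobile-1.3.2.min.css" />
<script src="http://code.jquery.com/jquery-1.9.1.min.js"></script>
<script src="http://code.jquery.com/mobile/1.3.2/jquery.mobile-1.3.2.min.js"></script>

Because many other sites reference the same URLs, there is a good chance a visitor's browser already has these files cached.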
Customized downloads As a final option for downloading jQuery Mobile, you can also use a customized Download Builder tool at http://jquerymobile.com/download-builder. Currently in Alpha (that is, not certified to be bug-free!), the web-based tool lets you download a jQuery Mobile build minus features your website doesn't need. This creates smaller files which reduces the total amount of time your application needs to display to the end user. Implementing jQuery Mobile Ok, we've got the bits, but how do we use them? Adding jQuery Mobile support to a site requires the following three steps at a minimum: First, add the HTML5 DOCTYPE to the page: <!DOCTYPE html>. This is used to help inform the browser about the type of content it will be dealing with. Add a viewport metatag: <metaname="viewport"content="width=device-width,initial-scale="1">. This helps set better defaults for pages when viewed on a mobile device. Finally, the CSS, JavaScript library, and jQuery itself need to be included into the file. Let's look at a modified version of our previous HTML file that adds all of the above: code 1-2: test2.html <!DOCTYPE html> <html> <head> <title>First Mobile Example</title> <meta name="viewport" content="width=device-width, initial-scale=1"> <link rel="stylesheet"href="jquery.mobile-1.3.2.min.css" /> <script type="text/javascript"src = "http://code.jquery.com/jquery-1.9.1.min.js"></script> <script type="text/javascript"src = "jquery.mobile-1.3.2.min.js"></script> </head> <body> <h1>Welcome</h1> <p> Welcome to our first mobile web site. It's going to be the best siteyou've ever seen. Once we get some content. And a business plan. But the hard part is done! </p> <p> <i>Copyright Megacorp&copy; 2013</i> </p> </body> </html> For the most part, this version is the exact same as Code 1-1, except for the addition of the DOCTYPE, the CSS link, and our two JavaScript libraries. Notice we point to the hosted version of the jQuery library. It's perfectly fine to mix local JavaScript files and remote ones. If you wanted to ensure you could work offline, you can simply download the jQuery library as well. So while nothing changed in the code between the body tags, there is going to be a radically different view now in the browser. The following screenshot shows how the iOS mobile browser renders the page now: Right away, you see a couple of differences. The biggest difference is the relative size of the text. Notice how much bigger it is and easier to read. As we said, the user could have zoomed in on the previous version, but many mobile users aren't aware of this technique. This page loads up immediately in a manner that is much more usable on a mobile device. Working with data attributes As we saw in the previous example, just adding in jQuery Mobile goes a long way to updating our page for mobile support. But there's a lot more involved to really prepare our pages for mobile devices. As we work with jQuery Mobile over the course of the article, we're going to use various data attributes to mark up our pages in a way that jQuery Mobile understands. But what are data attributes? HTML5 introduced the concept of data attributes as a way to add ad-hoc values to the DOM ( Document Object Model). As an example, this is a perfectly valid HTML: <div id="mainDiv" data-ray="moo">Some content</div> In the previous HTML, the data-ray attribute is completely made-up. However, because our attribute begins with data-, it is also completely legal. So what happens when you view this in your browser? Nothing! 
The point of these data attributes is to integrate with other code, like JavaScript, that does whatever it wants with them. So for example, you could write JavaScript that finds every item in the DOM with the data-ray attribute, and change the background color to whatever was specified in the value. This is where jQuery Mobile comes in, making extensive use of data attributes, both for markup (to create widgets) and behavior (to control what happens when links are clicked). Let's look at one of the main uses of data attributes within jQuery Mobile—defining pages, headers, content, and footers: code 1-3: test3.html <!DOCTYPE html> <html> <head> <title>First Mobile Example</title> <meta name="viewport" content="width=device-width, initial-scale=1"> <link rel="stylesheet"href="jquery.mobile-1.3.2.min.css" /> <script type="text/javascript"src = "http://code.jquery.com/jquery-1.9.1.min.js"></script> <script type="text/javascript"src = "jquery.mobile-1.3.2.min.js"></script> </head> <body> <div data-role="page"> <div data-role="header"><h1>Welcome</h1></div> <div data-role="content"> <p> Welcome to our first mobile web site. It's going to be the bestsite you've ever seen. Once we get some content.And a business plan. But the hard part is done! </p> </div> <div data-role="footer"> <h4>Copyright Megacorp&copy; 2013</h4> </div> </div> </body> </html> Compare the previous code snippet to code 1-2, and you can see that the main difference was the addition of the div blocks. One div block defines the page. Notice it wraps all of the content inside the body tags. Inside the body tag, there are three separate div blocks. One has a role of header, another a role of content, and the final one is marked as footer. All the blocks use data-role, which should give you a clue that we're defining a role for each of the blocks. As we stated previously, these data attributes mean nothing to the browser itself. But let's look what at what jQuery Mobile does when it encounters these tags: Notice right away that both the header and footer now have a black background applied to them. This makes them stand out even more from the rest of the content. Speaking of the content, the page text now has a bit of space between it and the sides. All of this was automatic once the div tags with the recognized data-roles were applied. This is a theme you're going to see repeated again and again as we go through this article. A vast majority of the work you'll be doing will involve the use of data attributes. Summary In this article, we talked a bit about how web pages may not always render well in a mobile browser. We talked about how the simple use of jQuery Mobile can go a long way to improving the mobile experience for a website. Specifically, we discussed how you can download jQuery Mobile and add it to an existing HTML page, what data attributes mean in terms of HTML, and how jQuery Mobile makes use of data attributes to enhance your pages. Resources for Article: Further resources on this subject: jQuery Mobile: Collapsible Blocks and Theming Content [Article] Using different jQuery event listeners for responsive interaction [Article] Creating mobile friendly themes [Article]
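As a quick illustration of how custom code can pick up data attributes, the following hypothetical snippet (not part of jQuery Mobile or of the article's own code) finds every element carrying the made-up data-ray attribute from the earlier example and applies its value as a background color:

// Runs once the DOM is ready; assumes jQuery is already included on the page.
$(function () {
    $('[data-ray]').each(function () {
        // Use the attribute value as a background color; a real color value such as
        // "#ffcc00" would be needed for a visible effect, since "moo" is not a color.
        $(this).css('background-color', $(this).data('ray'));
    });
});

jQuery Mobile does something conceptually similar, except that it looks for data-role and its other documented attributes and enhances the matching elements into widgets.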

Deploying a Vert.x application

Packt
24 Sep 2013
12 min read
(For more resources related to this topic, see here.) Setting up an Ubuntu box We are going to set up an Ubuntu virtual machine using the Vagrant tool. This virtual machine will simulate a real production server. If you already have an Ubuntu box (or a similar Linux box) handy, you can skip this step and move on to setting up a user. Vagrant (http://www.vagrantup.com/) is a tool for managing virtual machines. Many people use it to manage their development environments so that they can easily share them and test their software on different operating systems. For us, it is a perfect tool to practice Vert.x deployment into a Linux environment. Install Vagrant by heading to the Downloads area at http://vagrantup.com and selecting the latest version. Select a package for your operating system and run the installer. Once it is done you should have a vagrant command available on the command line as follows: vagrant –v Navigate to the root directory of our project and run the following command: vagrant init precise64 http://files.vagrantup.com/precise64.box This will generate a file called Vagrant file in the project folder. It contains configuration for the virtual machine we're about to create. We initialized a precise64 box, which is shorthand for the 64-bit version of Ubuntu 12.04 Precise Pangolin. Open the file in an editor and find the following line: # config.vm.network :private_network, ip: "192.168.33.10" Uncomment the line by removing the # character. This will enable private networking for the box. We will be able to conveniently access it with the IP address 192.168.33.10 locally. Run the following command to download, install, and launch the virtual machine: vagrant up This command launches the virtual machine configured in the Vagrantfile. On first launch it will also download it. Because of this, running the command may take a while. Once the command is finished you can check the status of the virtual machine by running vagrant status, suspend it by running vagrant suspend, bring it back up by running vagrant up, and remove it by running vagrant destroy. Setting up a user For any application deployment, it's a good idea have an application-specific user configured. The sole purpose of the user is to run the application. This gives you a nice way to control permissions and make sure the application can only do what it's supposed to. Open a shell connection to our Linux box. If you followed the steps to set up a Vagrant box, you can do this by running the following command in the project root directory: vagrant ssh Add a new user called mindmaps using the following command: sudo useradd -d /home/mindmaps -m mindmaps Also specify a password for the new user using the following command (and make a note of the password you choose; you'll need it): sudo passwd mindmaps Install Java on the server Install Java for the Linux box, as described in Getting Started with Vert.x. As a quick reminder, Java can be installed on Ubuntu with the following command: sudo apt-get install openjdk-7-jdk On fresh Ubuntu installations, it is a good idea to always make sure the package manager index is up-to-date before installing any packages. This is also the case for our Ubuntu virtual machine. Run the following command if the Java installation fails: sudo apt-get update Installing MongoDB on the server We also need MongoDB to be installed on the server, for persisting the mind maps. Setting up privileged ports Our application is configured to serve requests on port 8080. 
Setting up privileged ports

Our application is configured to serve requests on port 8080. When we deploy to the Internet, we don't want users to have to know anything about ports, which means we should deploy our app to the default HTTP port 80 instead.

On Unix systems (such as Linux), port 80 can only be bound by the root user. Because it is not a good idea to run applications as the root user, we should set up a special privilege for the mindmaps user to bind to port 80. We can do this with the authbind utility. authbind is a Linux utility that can be used to bind processes to privileged ports without requiring root access.

Install authbind using the package manager with the following command:

sudo apt-get install authbind

Set up a privilege for the mindmaps user to bind to port 80, by creating a file in the authbind configuration directory with the following commands:

cd /etc/authbind/byport/
sudo touch 80
sudo chown mindmaps:mindmaps 80
sudo chmod 700 80

When authbind is run, it checks this directory for a file corresponding to the port being used, and whether the current user has access to it. Here we have created such a file.

Many people prefer to have a web server such as Nginx or Apache as a frontend and not expose backend services to the Internet directly. This can also be done with Vert.x. In that case, you could just deploy Vert.x to port 8080 and skip the authbind configuration. Then, you would need to configure reverse proxying for the Vert.x application in your web server. Note that we are using the event bus bridge in our application, and that uses HTTP WebSockets as the transport mechanism. This means the frontend web server must also be able to proxy WebSocket traffic. Nginx is able to do this starting from version 1.3 and Apache from version 2.4.5.

Installing Vert.x on the server

Switch to the mindmaps user in the shell on the virtual machine using the following command:

sudo su - mindmaps

Install Vert.x for this user, as described in Getting Started with Vert.x. As a quick reminder, it can be done by downloading and unpacking the latest distribution from http://vertx.io.

Making the application port configurable

Let's move back to our application code for a moment. During development we have been running the application on port 8080, but on the server we will want to run it on port 80. To support both of these scenarios we can make the port configurable through an environment variable.

Vert.x makes environment variables available to verticles through the container API. In JavaScript, the variables can be found in the container.env object. Let's use it to give our application a port at runtime.

Find the following line in the deployment verticle app.js:

port: 8080,

Change it to the following line:

port: parseInt(container.env.get('MINDMAPS_PORT')) || 8080,

This gets the MINDMAPS_PORT environment variable and parses it from a string to an integer using the standard JavaScript parseInt function. If no port has been given, the default value 8080 is used.

We also need to change the host configuration of the web server. So far, we have been binding to localhost, but now we also want the application to be accessible from outside the server. Find the following line in app.js:

host: "localhost",

Change it to the following line:

host: "0.0.0.0",

Using the host 0.0.0.0 will make the server bind to all IPv4 network interfaces the server has.
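To put both edits in context, here is a rough sketch of how the relevant configuration in app.js might look afterwards. Only the port and host lines come from the steps above; the require call, the object name, and the surrounding structure are assumptions for illustration, since your deployment verticle may be organized differently:

var container = require("vertx/container");

// illustrative configuration object; only the port and host entries are taken from the steps above
var webServerConfig = {
  // read the port from the MINDMAPS_PORT environment variable, falling back to 8080 for development
  port: parseInt(container.env.get('MINDMAPS_PORT')) || 8080,
  // bind to all IPv4 interfaces so the application is reachable from outside the virtual machine
  host: "0.0.0.0"
};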
Setting up the application on the server

We are going to need some way of transferring the application code itself to the server, as well as delivering incremental updates as new versions of the application are developed. One of the simplest ways to accomplish this is to just transfer the application files over using the rsync tool, which is what we will do. rsync is a widely used Unix tool for transferring files between machines. It has some useful features over plain file copying, such as only copying the deltas of what has changed, and two-way synchronization of files.

Create a directory for the application in the home directory of the mindmaps user using the following command:

mkdir ~/app

Go back to the application root directory and transfer the files from it to the new remote directory:

rsync -rtzv . mindmaps@192.168.33.10:~/app

Testing the setup

At this point, the project working tree should already be in the application directory on the remote server, because we have transferred it over using rsync. You should also be able to run it on the virtual machine, provided that you have the JDK, Vert.x, and MongoDB installed, and that you have authbind installed and configured. You can run the app with the following commands:

cd ~/app
JAVA_OPTS="-Djava.net.preferIPv4Stack=true" MINDMAPS_PORT=80 authbind ~/vert.x-2.0.1-final/bin/vertx run app.js

Let's go through the command bit by bit:

We pass a Java system parameter called java.net.preferIPv4Stack to Java via the JAVA_OPTS environment variable. This will have Java use IPv4 networking only. We need it because the authbind utility only supports IPv4.
We also explicitly set the application to use port 80 using the MINDMAPS_PORT environment variable.
We wrap the Vert.x command with the authbind command.
Finally, there's the call to Vert.x. Substitute the path to the Vert.x executable with the path you installed Vert.x to.

After starting the application, you should be able to see it by navigating to http://192.168.33.10 in a browser.
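If you prefer to verify this from the command line of your host machine rather than a browser, one quick way, assuming curl is available on the host, is to request just the response headers:

# an HTTP 200 response indicates the application is serving requests on port 80
curl -I http://192.168.33.10/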
Setting up an upstart service

We have our application fully operational, but it isn't very convenient or reliable to have to start it manually. What we'll do next is to set up an Ubuntu upstart job that will make sure the application is always running and survives things like server restarts.

Upstart is an Ubuntu utility that handles task supervision and automated starting and stopping of tasks when the machine starts up, shuts down, or when some other events occur. It is similar to the /sbin/init daemon, but is arguably easier to configure, which is the reason we'll be using it.

The first thing we need to do is set up an upstart configuration file. Open an editor with root access (using sudo) for a new file /etc/init/mindmaps.conf, and set its contents as follows:

start on runlevel [2345]
stop on runlevel [016]
setuid mindmaps
setgid mindmaps
env JAVA_OPTS="-Djava.net.preferIPv4Stack=true"
env MINDMAPS_PORT=80
chdir /home/mindmaps/app
exec authbind /home/mindmaps/vert.x-2.0.1-final/bin/vertx run app.js

Let's go through the file bit by bit:

On the first two lines, we configure when this service will start and stop. This is defined using runlevels, which are numeric identifiers of different states of the operating system (http://en.wikipedia.org/wiki/Runlevel). 2, 3, 4, and 5 designate runlevels where the system is operational; 0, 1, and 6 designate runlevels where the system is stopping or restarting.
We set the user and group the service will run as to the mindmaps user and its group.
We set the two environment variables we also used earlier when testing the service: JAVA_OPTS for letting Java know it should only use the IPv4 stack, and MINDMAPS_PORT to let our application know that it should use port 80.
We change the working directory of the service to where our application resides, using the chdir directive.
Finally, we define the command that starts the service. It is the vertx command wrapped in the authbind command. Be sure to change the directory for the vertx binary to match the directory you installed Vert.x to.

Let's give the mindmaps user the permission to manage this job so that we won't have to always run it as root. Open the /etc/sudoers file in an editor with the following command:

sudo /usr/sbin/visudo

At the end of the file, add the following line:

mindmaps ALL = (root) NOPASSWD: /sbin/start mindmaps, /sbin/stop mindmaps, /sbin/restart mindmaps, /sbin/status mindmaps

The visudo command is used to configure the privileges of different users to use the sudo command. With the line we added, we enabled the mindmaps user to run a few specific commands without having to supply a password.

At this point you should be able to start and stop the application as the mindmaps user:

sudo start mindmaps

You also have the following additional commands available for managing the service:

sudo status mindmaps
sudo restart mindmaps
sudo stop mindmaps

If there is a problem with the commands, there might be some configuration error. The upstart service will log errors to the file /var/log/upstart/mindmaps.log. You will need to open it using the sudo command.

Deploying new versions

Deploying a new version of the application consists of the following two steps:

Transferring the new files over using rsync
Restarting the mindmaps service

We can make this even easier by creating a shell script that executes both steps. Create a file called deploy.sh in the root directory of the project and set its contents as follows:

#!/bin/sh
rsync -rtzv . mindmaps@192.168.33.10:~/app/
ssh mindmaps@192.168.33.10 sudo restart mindmaps

Make the script executable, using the following command:

chmod +x deploy.sh

After this, just run the following command whenever you want a new version on the server:

./deploy.sh

To make deployment even more streamlined, you can set up SSH public key authentication so that you won't need to supply the password of the mindmaps user as you deploy. See https://help.ubuntu.com/community/SSH/OpenSSH/Keys for more information.

Summary

In this article, we have learned the following things:

How to set up a Linux server for Vert.x production deployment
How to set up deployment for a Vert.x application using rsync
How to start and supervise a Vert.x process using upstart

Resources for Article:

Further resources on this subject:
IRC-style chat with TCP server and event bus [Article]
Coding for the Real-time Web [Article]
Integrating Storm and Hadoop [Article]

Quick start – using Haml

Packt
24 Sep 2013
8 min read
(For more resources related to this topic, see here.) Step 1 – integrating with Rails and creating a simple view file Let's create a simple rails application and change one of the view files to Haml from ERB: Create a new rails application named blog: rails new blog Add the Haml gem to this application's Gemfile and run it: bundle install When the gem has been added, generate some views that we can convert to Haml and learn about its basic features. Run the Rails scaffold generator to create a scffold named post. rails g scaffold post After this, you need to run the migrations to create the database tables: rake db:migrate You should get an output as shown in the following screenshot: Our application is not yet generating Haml views automatically. We will switch it to this mode in the next steps. The index.html.erb file that has been generated and is located in app/views/posts/index.html.erb looks as follows: <h1>Listing posts</h1><table> <tr> <th></th> <th></th> <th></th> </tr><% @posts.each do |post| %> <tr> <td><%= link_to 'Show', post %></td> <td><%= link_to 'Edit', edit_post_path(post) %></td> <td><%= link_to 'Destroy', post, method: :delete, data: { confirm:'Are you sure?' } %></td> </tr><% end %></table><br><%= link_to 'New Post', new_post_path %>] Let's convert it to an Haml view step-by-step. First, let's understand the basic features of Haml: Any HTML tags are written with a percent sign and then the name of the tag Whitespace (tabs) is being used to create nested tags Any part of an Haml line that is not interpreted as something else is taken to be plain text Closing tags as well as end statements are omitted for Ruby blocks Knowing the previous features we can write the first lines of our example view in Haml. Open the index.html.erb file in an editor and replace <h1>, <table>, <th>, and <tr> as follows: <h1>Listing posts</h1> can be written as %h1 Listing posts <table> can be written as %table <tr> becomes %tr <th> becomes %th After those first replacements our view file should look like: %h1 Listing posts%table %tr %th %th %th<% @posts.each do |post| %> <tr> <td><%= link_to 'Show', post %></td> <td><%= link_to 'Edit', edit_post_path(post) %></td> <td><%= link_to 'Destroy', post, method: :delete, data: { confirm:'Are you sure?' } %></td> </tr><% end %><br><%= link_to 'New Post', new_post_path %> Please notice how %tr is nested within the %table tag using a tab and also how %th is nested within %tr using a tab. Next, let's convert the Ruby parts of this view. Ruby is evaluated and its output is inserted into the view when using the equals character. In ERB we had to use <%=, whereas in Haml, this is shortened to just a =. The following examples illustrate this: <%= link_to 'Show', post %> becomes = link_to 'Show', post and all the other <%= parts are changed accordingly The equals sign can be used at the end of the tag to insert the Ruby code within that tag Empty (void) tags, such as <br>, are created by adding a forward slash at the end of the tag Please note that you have to leave a space after the equals sign. After the changes are incorporated, our view file will look like %h1 Listing posts %table %tr %th %th %th<% @posts.each do |post| %> %tr %td= link_to 'Show',post %td= link_to 'Edit',edit_post_path(post) %td= link_to 'Destroy',post,method: :delete,data: { confirm: 'Areyou sure?' }%br/= link_to 'New Post',new_post_path The only thing left to do now is to convert the Ruby block part: <% @posts.each do | post | %>. 
Code that needs to be run, but does not generate any output, is written using a hyphen character. Here is how this conversion works:

Ruby blocks do not need to be closed; they end when the indentation decreases
HTML tags and the Ruby code that is nested within the block are indented by one tab more than the block
<% @posts.each do |post| %> becomes - @posts.each do |post|
Remember to leave a space after the hyphen character

After we replace the remaining part in our view file according to the previous rules, it should look as follows:

%h1 Listing posts
%table
  %tr
    %th
    %th
    %th
  - @posts.each do |post|
    %tr
      %td= link_to 'Show', post
      %td= link_to 'Edit', edit_post_path(post)
      %td= link_to 'Destroy', post, method: :delete, data: { confirm: 'Are you sure?' }
%br/
= link_to 'New Post', new_post_path

Save the view file and change its name to index.html.haml. This is now a Haml-based template. Start our example Rails application and visit http://localhost:3000/posts to see the view being rendered by Rails, as shown in the following screenshot:

Step 2 – switching Rails application to use Haml as the templating engine

In the previous step, we have enabled Haml in the test application. However, if you generate new view files using any of the Rails built-in generators, it will still use ERB. Let's switch the application to use Haml as the templating engine. Edit the blog application Gemfile and add a gem named haml-rails to it. You can add it to the :development group because the generators are only used during development and this functionality is not needed in production or test environments. Our application Gemfile now looks as shown in the following code:

source 'https://rubygems.org'

gem 'rails', '3.2.13'
gem 'sqlite3'
gem 'haml'
gem 'haml-rails', :group => :development

group :assets do
  gem 'sass-rails', '~> 3.2.3'
  gem 'coffee-rails', '~> 3.2.1'
  gem 'uglifier', '>= 1.0.3'
end

gem 'jquery-rails'

Then run the following bundle command to install the gem:

bundle install

Let's say the posts in our application need to have categories. Run the scaffold generator to create some views for categories. This generator will create views using Haml, as shown in the following screenshot:

Please note that the new views have a .html.haml extension and are using Haml. For example, the _form.html.haml view for the form looks as follows:

= form_for @category do |f|
  - if @category.errors.any?
    #error_explanation
      %h2= "#{pluralize(@category.errors.count, "error")} prohibited this category from being saved:"
      %ul
        - @category.errors.full_messages.each do |msg|
          %li= msg
  .actions
    = f.submit 'Save'

There are two very useful shorthand notations for creating a <div> tag with a class or a <div> tag with an ID. To create a div with an ID, use the hash symbol followed by the name of the ID. For example, #error_explanation will result in <div id="error_explanation">. To create a <div> tag with a class attribute, use a dot followed by the name of the class. For example, .actions will create <div class="actions">.

Step 3 – converting existing view templates to Haml

Our example blog app still has some leftover templates which are using ERB as well as an application.html.erb layout file. We would like to convert those to Haml.
There is no need to do it all individually, because there is a handy gem which will automatically convert all the ERB files to Haml, shown as follows: Let's install the html2haml gem: gem install html2haml Using the cd command, change the current working directory to the app directory of our example application and run the following bash command to convert all the ERB files to Haml (to run this command you need a bash shell. On Windows, you can use the embedded bash shell which ships with GitHub for Windows, Cygwin bash, MinGW bash, or the MSYS bash shell which is bundled with Git for Windows). for file in $(find . -type f -name \*.html.erb); do html2haml -e ${file} "$(dirname ${file})/$(basename ${file}.erb).haml";done Then to remove the ERB files and run this command: find ./ -name *erb | while read line; do rm $line; done Those two Bash snippets will first convert all the ERB files recursively in the app directory of our application and then remove the remaining ERB view templates. Summary This article covered integrating with Rails and creating a simple view file, switching Rails application to use Haml as the templating engine, and converting existing view templates to Haml. Resources for Article : Further resources on this subject: Building HTML5 Pages from Scratch [Article] URL Shorteners – Designing the TinyURL Clone with Ruby [Article] Building tiny Web-applications in Ruby using Sinatra [Article]

Mobiles First – How and Why

Packt
24 Sep 2013
7 min read
(For more resources related to this topic, see here.) What is Responsive Web Design? Responsive Web Design (RWD) is a set of strategies used to display web pages on screens of varying sizes. These strategies leverage, among other things, features available in modern browsers as well as a strategy of progressive enhancement (rather than graceful degradation). What's with all the buzzwords? Well, again, once we dig into the procedures and the code, it will all get a lot more meaningful. But here is a quick example to illustrate a two-way progressive enhancement that is used in RWD. Let's say you want to make a nice button that is a large target and can be reliably pressed with big, fat clumsy thumbs on a wide array of mobile devices. In fact, you want that button to pretty much run the full spectrum of every mobile device known to humans. This is not a problem. The following code is how your (greatly simplified) HTML will look: <!DOCTYPE html> <head> <link rel="stylesheet" href="css/main.css"> </head> <body> <button class="big-button">Click Me!</button> </body> </html> The following code is how your CSS will look: .big-button { width: 100%; padding: 8px 0; background: hotPink; border: 3px dotted purple; font-size: 18px; color: #fff; border-radius: 20px; box-shadow: #111 3px 4px 0px; } So this gets you a button that stretches the width of the document's body. It's also hot pink with a dotted purple border and thick black drop shadow (don't judge my design choices). Here is what is nice about this code. Let's break down the CSS with some imaginary devices/browsers to illustrate some of the buzzwords in the first paragraph of this section: Device one (code name: Goldilocks): This device has a modern browser, with screen dimensions of 320 x 480 px. It is regularly updated, so is highly likely to have all the cool browser features you read about in your favorite blogs. Device two (code name: Baby Bear): This device has a browser that partially supports CSS2 and is poorly documented, so much so that you can only figure out which styles are supported through trial and error or forums. The screen is 320 x 240 px. This describes a device that predated the modern adoption levels of browsing the web on a mobile but your use case may require you to support it anyway. Device three (code name: Papa Bear): This is a laptop computer with a modern browser but you will never know the screen dimensions since the viewport size is controlled by the user. Thus, Goldilocks gets the following display: Because it is all tricked out with full CSS3 feature, it will render the rounded corners and drop shadow. Baby Bear, on the other hand, will only get square corners and no drop shadow (as seen in the previous screenshot) because its browser can't make sense of those style declarations and will just do nothing with them. It's not a huge deal, though, as you still get the important features of the button; it stretches the full width of the screen, making it a big target for all the thumbs in the world (also, it's still pink). Papa Bear gets the button with all the CSS3 goodies too. That said, it stretches the full width of the browser no matter how absurdly wide a user makes his/her browser. We only need it to be about 480 px wide to make it big enough for a user to click and look reasonable within whatever design we are imagining. So in order to make that happen, we will take advantage of a nifty CSS3 feature called @media queries. 
We will use these extensively throughout this article and make your stylesheet look like this: .big-button { width: 100%; padding: 8px 0; background: hotPink; border: 3px dotted purple; font-size: 18px; color: #fff; border-radius: 20px; box-shadow: #111 3px 3px 0px; } @media only screen and (min-width: 768px){ .big-button { width: 480px; } } Now if you were coding along with me and have a modern browser (meaning a browser that supports most, if not all, features in the HTML5 specification, more on this later), you could do something fun. You can resize the width of your browser to see the start button respond to the @media queries. Start off with the browser really narrow and the button will get wider until the screen is 768 px wide; beyond that the button will snap to being only 480 px. If start off with your browser wider than 768 px, the button will stay 480 px wide until your browser width is under 768 px. Once it is under this threshold, the button snaps to being full width. This happens because of the media query. This query essentially asks the browser a couple of questions. The first part of the query is about what type of medium it is (print or screen). The second part of the query asks what the screen's minimum width is. When the browser replies yes to both screen and min-width 768px, the conditions are met for applying the styles within that media query. To say these styles are applied is a little misleading. In fact, the approach actually takes advantage of the fact that the styles provided in the media query can override other styles set previously in the stylesheet. In our case, the only style applied is an explicit width for the button that overrides the percentage width that was set previously. So, the nice thing about this is, we can make one website that will display appropriately for lots of screen sizes. This approach re-uses a lot of code, only applying styles as needed for various screen widths. Other approaches for getting usable sites to mobile devices require maintaining multiple codebases and having to resort to device detection, which only works if you can actually detect what device is requesting your website. These other approaches can be fragile and also break the Don't Repeat Yourself (DRY) commandment of programming. This article is going to go over a specific way of approaching RWD, though. We will use the 320 and Up framework to facilitate a mobile first strategy. In short, this strategy assumes that a device requesting the site has a small screen and doesn't necessarily have a lot of processing power. 320 and Up also has a lot of great helpers to make it fast and easy to produce features that many clients require on their sites. But we will get into these details as we build a simple site together. Take note, there are lots of frameworks out there that will help you build responsive sites, and there are even some that will help you build a responsive, mobile first site. One thing that distinguishes 320 and Up is that it is a tad less opinionated than most frameworks. I like it because it is simple and eliminates the busy work of setting up things one is likely to use for many sites. I also like that it is open source and can be used with static sites as well as any server-side language. Prerequisites Before we can start building, you need to download the code associated with this article. It will have all the components that you will need and is structured properly for you. 
If you want 320 and Up for your own projects, you can get it from the website of Andy Clarke (he's the fellow responsible for 320 and Up) or his GitHub account. I also maintain a fork in my own GitHub repo. Andy Clarke's site http://stuffandnonsense.co.uk/projects/320andup/ GitHub https://github.com/malarkey/320andup My GitHub Fork https://github.com/jasongonzales23/320andup That said, the simplest route to follow along with this article is to get the code I've wrapped up for you from: https://github.com/jasongonzales23/mobilefirst_book Summary In this article, we looked at a simple example of how responsive web design strategies can serve up the same content to screens of many sizes and have the layout adjust to the screen it is displayed on. We wrote a simple example of that for a pink button and got a link to 320 and Up, so we can get started building an entire mobile first-responsive website. Resources for Article: Further resources on this subject: HTML5 Canvas [Article] HTML5 Presentations - creating our initial presentation [Article] Creating mobile friendly themes [Article]

Oracle GoldenGate- Advanced Administration Tasks - I

Packt
20 Sep 2013
19 min read
(For more resources related to this topic, see here.) Upgrading Oracle GoldenGate binaries In this recipe you will learn how to upgrade GoldenGate binaries. You will also learn about GoldenGate patches and how to apply them. Getting ready For this recipe, we will upgrade the GoldenGate binaries from version 11.2.1.0.1 to 11.2.1.0.3 on the source system, that is prim1-ol6-112 in our case. Both of these binaries are available from the Oracle Edelivery website under the part number V32400-01 and V34339-01 respectively. 11.2.1.0.1 binaries are installed under /u01/app/ggate/112101. How to do it... The steps to upgrade the Oracle GoldenGate binaries are: Make a new directory for 11.2.1.0.3 binaries: mkdir /u01/app/ggate/112103 Copy the binaries ZIP file to the server in the new directory. Unzip the binaries file: [ggate@prim1-ol6-112 112103]$ cd /u01/app/ggate/112103 [ggate@prim1-ol6-112 112103]$ unzip V34339-01.zip Archive: V34339-01.zip inflating: fbo_ggs_Linux_x64_ora11g_64bit.tar inflating: Oracle_GoldenGate_11.2.1.0.3_README.doc inflating: Oracle GoldenGate_11.2.1.0.3_README.txt inflating: OGG_WinUnix_Rel_Notes_11.2.1.0.3.pdf Install the new binaries in /u01/app/ggate/112103: [ggate@prim1-ol6-112 112103]$ tar -pxvf fbo_ggs_Linux_x64_ora11g_64bit.tar Stop the processes in the existing installation: [ggate@prim1-ol6-112 112103]$ cd /u01/app/ggate/112101 [ggate@prim1-ol6-112 112101]$ ./ggsci Oracle GoldenGate Command Interpreter for Oracle Version 11.2.1.0.1 OGGCORE_11.2.1.0.1_PLATFORMS_120423.0230_FBO Linux, x64, 64bit (optimized), Oracle 11g on Apr 23 2012 08:32:14 Copyright (C) 1995, 2012, Oracle and/or its affiliates. All rights reserved. GGSCI (prim1-ol6-112.localdomain) 1> stop * Sending STOP request to EXTRACT EGGTEST1 ... Request processed. Sending STOP request to EXTRACT PGGTEST1 ... Request processed. Stop the manager process: GGSCI (prim1-ol6-112.localdomain) 2> STOP MGRManager process is required by other GGS processes.Are you sure you want to stop it (y/n)? ySending STOP request to MANAGER ...Request processed.Manager stopped. Copy the subdirectories to the new binaries: [ggate@prim1-ol6-112 112101]$ cp -R dirprm /u01/app/ggate/112103/[ggate@prim1-ol6-112 112101]$ cp -R dirrpt /u01/app/ggate/112103/[ggate@prim1-ol6-112 112101]$ cp -R dirchk /u01/app/ggate/112103/[ggate@prim1-ol6-112 112101]$ cp -R BR /u01/app/ggate/112103/[ggate@prim1-ol6-112 112101]$ cp -R dirpcs /u01/app/ggate/112103/ [ggate@prim1-ol6-112 112101]$ cp -R dirdef /u01/app/ggate/112103/[ggate@prim1-ol6-112 112101]$ cp -R dirout /u01/app/ggate/112103/[ggate@prim1-ol6-112 112101]$ cp -R dirdat /u01/app/ggate/112103/[ggate@prim1-ol6-112 112101]$ cp -R dirtmp /u01/app/ggate/112103/ Modify any parameter files under dirprm if you have hardcoded old binaries path in them. Edit the ggate user profile and update the value of the GoldenGate binaries home: vi .profile export GG_HOME=/u01/app/ggate/112103 Start the manager process from the new binaries: [ggate@prim1-ol6-112 ~]$ cd /u01/app/ggate/112103/ [ggate@prim1-ol6-112 112103]$ ./ggsci Oracle GoldenGate Command Interpreter for Oracle Version 11.2.1.0.3 14400833 OGGCORE_11.2.1.0.3_PLATFORMS_120823.1258_FBO Linux, x64, 64bit (optimized), Oracle 11g on Aug 23 2012 20:20:21 Copyright (C) 1995, 2012, Oracle and/or its affiliates. All rights reserved. GGSCI (prim1-ol6-112.localdomain) 1> START MGR Manager started. Start the processes: GGSCI (prim1-ol6-112.localdomain) 18> START EXTRACT * Sending START request to MANAGER ... 
EXTRACT EGGTEST1 starting Sending START request to MANAGER ... EXTRACT PGGTEST1 starting How it works... The method to upgrade the GoldenGate binaries is quite straightforward. As seen in the preceding section, you need to download and install the binaries on the server in a new directory. After this, you would stop the all GoldenGate processes that are running from the existing binaries. Then you would copy all the important GoldenGate directories with parameter files, trail files, report files, checkpoint files, and recovery files to the new binaries. If your trail files are kept on a separate filesystem which is linked to the dirdat directory using a softlink, then you would just need to create a new softlink under the new GoldenGate binaries home. Once all the files are copied, you would need to modify the parameter files if you have the path of the existing binaries hardcoded in them. The same would also need to be done in the OS profile of the ggate user. After this, you just start the manager process and rest of the processes from the new home. GoldenGate patches are all delivered as full binaries sets. This makes the procedure to patch the binaries exactly the same as performing major release upgrades. Table structure changes in GoldenGate environments with similar table definitions Almost all of the applications systems in IT undergo some change over a period of time. This change might include a fix of an identified bug, an enhancement or some configuration change required due to change in any other part of the system. The data that you would replicate using GoldenGate will most likely be part of some application schema. These schemas, just like the application software, sometimes require some changes which are driven by the application vendor. If you are replicating DDL along with DML in your environment then these schema changes will most likely be replicated by GoldenGate itself. However, if you are only replicating only DML and there are any DDL changes in the schema particularly around the tables that you are replicating, then these will affect the replication and might even break it. In this recipe, you will learn how to update the GoldenGate configuration to accommodate the schema changes that are done to the source system. This recipe assumes that the definitions of the tables that are replicated are similar in both the source and target databases. Getting ready For this recipe we are making the following assumptions: GoldenGate is set up to replicate only DML changes between the source and target environments. The application will be stopped for making schema changes in the source environment. The table structures in the source and target database are similar. The replication is configured for all objects owned by a SCOTT user using a SCOTT.* clause. The GoldenGate Admin user has been granted SELECT ANY TABLE in the source database and INSERT ANY TABLE, DELETE ANY TABLE, UPDATE ANY TABLE, SELECT ANY TABLE in the target database. The schema changes performed in this recipe are as follows: Add a new column called DOB (DATE) to the EMP table. Modify the DNAME column in the DEPT table to VARCHAR(20). Add a new table called ITEMS to the SCOTT schema: ITEMS ITEMNO NUMBER(5) PRIMARY KEY NAME VARCHAR(20) Add a new table called SALES to the SCOTT schema: SALES INVOICENO NUMBER(9) PRIMARY KEY ITEMNO NUMBER(5) FOREIGN KEY ITEMS(ITEMNO) EMPNO NUMBER(4) FOREIGN KEY EMP(EMPNO) Load the values for the DOB column in the EMP table. Load a few records in the ITEMS table. 
How to do it… Here are the steps that you can follow to implement the preceding schema changes in the source environment: Ensure that the application accessing the source database is stopped. There should not be any process modifying the data in the database. Once you have stopped the application, wait for 2 to 3 minutes so that all pending redo is processed by the GoldenGate extract. Check the latest timestamp read by the Extract and Datapump processes and ensure it is the current timestamp: GGSCI (prim1-ol6-112.localdomain) 9> INFO EXTRACT EGGTEST1 GGSCI (prim1-ol6-112.localdomain) 10> INFO EXTRACT * EXTRACT EGGTEST1 Last Started 2013-03-25 22:24 Status RUNNING Checkpoint Lag 00:00:00 (updated 00:00:07 ago) Log Read Checkpoint Oracle Redo Logs 2013-03-25 22:35:06 Seqno 350, RBA 11778560 SCN 0.11806849 (11806849) EXTRACT PGGTEST1 Last Started 2013-03-25 22:24 Status RUNNING Checkpoint Lag 00:00:00 (updated 00:00:04 ago) Log Read Checkpoint File /u01/app/ggate/dirdat/st000010 2013-03-25 22:35:05.000000 RBA 7631 Stop the Extract and Datapump processes in the source environment: GGSCI (prim1-ol6-112.localdomain) 1> STOP EXTRACT * Sending STOP request to EXTRACT EGGTEST1 ... Request processed. Sending STOP request to EXTRACT PGGTEST1 ... Request processed. Check the status of the Replicat process in the target environment and ensure that it has processed the timestamp noted in step 3: GGSCI (stdby1-ol6-112.localdomain) 54> INFO REPLICAT * REPLICAT RGGTEST1 Last Started 2013-03-25 22:25 Status RUNNING Checkpoint Lag 00:00:00 (updated 00:00:04 ago) Log Read Checkpoint File ./dirdat/rt000061 2013-03-25 22:37:04.950188 RBA 10039 Stop the Replicat process in the target environment: GGSCI (stdby1-ol6-112.localdomain) 48> STOP REPLICAT * Sending STOP request to REPLICAT RGGTEST1 ... Request processed. Apply the schema changes to the source database: SQL> ALTER TABLE SCOTT.EMP ADD DOB DATE; Table altered. SQL> ALTER TABLE SCOTT.DEPT MODIFY DNAME VARCHAR(20); Table altered. SQL> CREATE TABLE SCOTT.ITEMS ( ITEMNO NUMBER(5) PRIMARY KEY, NAME VARCHAR(20)); Table created. SQL> CREATE TABLE SCOTT.SALES ( INVOICENO NUMBER(9) PRIMARY KEY, ITEMNO NUMBER(5) REFERENCES SCOTT.ITEMS(ITEMNO), EMPNO NUMBER(4) REFERENCES SCOTT.EMP(EMPNO)); Table created. SQL> UPDATE SCOTT.EMP SET DOB=TO_DATE('01-01-1980','DD-MM-YYYY'); 14 rows updated. SQL> INSERT INTO SCOTT.ITEMS VALUES (1,'IRON'); 1 row created. SQL> INSERT INTO SCOTT.ITEMS VALUES (2,'COPPER'); 1 row created. SQL> INSERT INTO SCOTT.ITEMS VALUES (3,'GOLD'); 1 row created. SQL> INSERT INTO SCOTT.ITEMS VALUES (4,'SILVER'); 1 row created. SQL> COMMIT; Commit complete. Apply the schema changes to the target database: SQL> ALTER TABLE SCOTT.EMP ADD DOB DATE; Table altered. SQL> ALTER TABLE SCOTT.DEPT MODIFY DNAME VARCHAR(20); Table altered. SQL> CREATE TABLE SCOTT.ITEMS ( ITEMNO NUMBER(5) PRIMARY KEY, NAME VARCHAR(20)); Table created. SQL> CREATE TABLE SCOTT.SALES ( INVOICENO NUMBER(9) PRIMARY KEY, ITEMNO NUMBER(5) REFERENCES SCOTT.ITEMS(ITEMNO), EMPNO NUMBER(4) REFERENCES SCOTT.EMP(EMPNO)); Table created. SQL> UPDATE SCOTT.EMP SET DOB=TO_DATE('01-01-1980','DD-MM-YYYY'); 14 rows updated. SQL> INSERT INTO SCOTT.ITEMS VALUES (1,'IRON'); 1 row created. SQL> INSERT INTO SCOTT.ITEMS VALUES (2,'COPPER'); 1 row created. SQL> INSERT INTO SCOTT.ITEMS VALUES (3,'GOLD'); 1 row created. SQL> INSERT INTO SCOTT.ITEMS VALUES (4,'SILVER'); 1 row created. SQL> COMMIT; Commit complete. 
Add supplemental logging for the newly added tables: GGSCI (prim1-ol6-112.localdomain) 4> DBLOGIN USERID GGATE_ADMIN@ DBORATEST Password: Successfully logged into database. GGSCI (prim1-ol6-112.localdomain) 5> ADD TRANDATA SCOTT.ITEMS Logging of supplemental redo data enabled for table SCOTT.ITEMS. GGSCI (prim1-ol6-112.localdomain) 6> ADD TRANDATA SCOTT.SALES Logging of supplemental redo data enabled for table SCOTT.SALES. Alter the Extract and Datapump processes to skip the changes generated by the Application Schema Patch: GGSCI (prim1-ol6-112.localdomain) 7> ALTER EXTRACT EGGTEST1 BEGIN NOW EXTRACT altered. GGSCI (prim1-ol6-112.localdomain) 8> ALTER EXTRACT PGGTEST1 BEGIN NOW EXTRACT altered. Start the Extract and Datapump in the source environment: GGSCI (prim1-ol6-112.localdomain) 9> START EXTRACT * Sending START request to MANAGER ... EXTRACT EGGTEST1 starting Sending START request to MANAGER ... EXTRACT PGGTEST1 starting Start the Replicat process in the target environment: GGSCI (stdby1-ol6-112.localdomain) 56> START REPLICAT RGGTEST1 Sending START request to MANAGER ... REPLICAT RGGTEST1 starting How it works... The preceding steps cover a high level procedure that you can follow to modify the structure of the replicated tables in your GoldenGate configuration. Before you start to alter any processes or parameter file, you need to ensure that the applications are stopped and no user sessions in the database are modifying the data in the tables that you are replicating. Once the application is stopped, we check that all the redo data has been processed by GoldenGate processes and then stop. At this point we run the scripts that need to be run to make DDL changes to the database. This step needs to be run on both the source and target database as we will not be replicating these changes using GoldenGate. Once this is done, we alter the GoldenGate processes to start from the current time and start them. There's more... Some of the assumptions made in the earlier procedure might not hold true for all environments. Let's see what needs to be done in such cases where the environment does not satisfy these conditions: Specific tables defined in GoldenGate parameter files Unlike the earlier example, where the tables are defined in the parameter files using a schema qualifier for example SCOTT.*, if you have individual tables defined in the GoldenGateparameterfiles, you would need to modify the GoldenGate parameter files to add these newly created tables to include them in replication. Individual table permissions granted to the GoldenGate Admin user If you have granted table-specific permissions to the GoldenGate Admin user in the source and target environments, you would need to grant them on the newly added tables to allow the GoldenGate user to read their data in the source environment and also to apply the changes to these tables in the target environment. Supplemental logging for modified tables without any keys If you are adding or deleting any columns from the tables in the source database which do not have any primary/unique keys, you would then need to drop the existing supplemental log group and read them. This is because when there are no primary/unique keys in a table, GoldenGate adds all columns to the supplemental log group. This supplemental log group will have to be modified when the structure of the underlying table is modified. 
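As a rough sketch of that drop-and-re-add, the GGSCI session could look like the following. The table SCOTT.ORDERS is purely hypothetical (a keyless table whose structure has changed), you would first connect with DBLOGIN as in the earlier steps, and the lines starting with -- are comments as used in GoldenGate obey files:

-- remove the existing supplemental log group, which still reflects the old column list
DELETE TRANDATA SCOTT.ORDERS
-- re-create it so that GoldenGate logs the columns of the modified table
ADD TRANDATA SCOTT.ORDERS
-- confirm that supplemental logging is enabled again
INFO TRANDATA SCOTT.ORDERS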
Supplemental log groups with all columns for modified tables In some cases, you would need to enable supplemental logging on all columns of the source tables that you are replicating. This is mostly applicable for consolidation replication topologies where all changes are captured and converted into INSERTs in the target environment, which usually is a Data warehouse. In such cases, you need to drop and read the supplemental logging on the tables in which you are adding or removing any columns. Table structure changes in GoldenGate environments with different table definitions In this recipe you will learn how to perform table structure changes in a replication environment where the table structures in the source and target environments are not similar. Getting ready For this recipe we are making the following assumptions: GoldenGate is set up to replicate only DML changes between the source and target environments. The application will be stopped for making schema changes in the source environment. The table structures in the source and target databases are not similar. The GoldenGate Admin user has been granted SELECT ANY TABLE in the source database and INSERT ANY TABLE, DELETE ANY TABLE, UPDATE ANY TABLE, SELECT ANY TABLE in the target database. The definition file was generated for the source schema and is configured in the replicat parameter file. The schema changes performed in this recipe are as follows: Add a new column called DOB (DATE) to the EMP table. Modify the DNAME column in the DEPT table to VARCHAR(20). Add a new table called ITEMS to the SCOTT schema: ITEMS ITEMNO NUMBER(5) PRIMARY KEY NAME VARCHAR(20) Add a new table called SALES to the SCOTT schema: SALES INVOICENO NUMBER(9) PRIMARY KEY ITEMNO NUMBER(5) FOREIGN KEY ITEMS(ITEMNO) EMPNO NUMBER(4) FOREIGN KEY EMP(EMPNO) Load the values for the DOB column in the EMP table. Load a few records in the ITEMS table. How to do it... Here are the steps that you can follow to implement the previous schema changes in the source environment: Ensure that the application accessing the source database is stopped. There should not be any process modifying the data in the database. Once you have stopped the application, wait for 2 to 3 minutes so that all pending redo is processed by the GoldenGate extract. Check the latest timestamp read by the Extract and Datapump process, and ensure it is the current timestamp: GGSCI (prim1-ol6-112.localdomain) 9> INFO EXTRACT EGGTEST1 GGSCI (prim1-ol6-112.localdomain) 10> INFO EXTRACT * EXTRACT EGGTEST1 Last Started 2013-03-28 10:12 Status RUNNING Checkpoint Lag 00:00:00 (updated 00:00:07 ago) Log Read Checkpoint Oracle Redo Logs 2013-03-28 10:16:06 Seqno 352, RBA 12574320 SCN 0.11973456 (11973456) EXTRACT PGGTEST1 Last Started 2013-03-28 10:12 Status RUNNING Checkpoint Lag 00:00:00 (updated 00:00:04 ago) Log Read Checkpoint File /u01/app/ggate/dirdat/st000010 2013-03-28 10:15:43.000000 RBA 8450 Stop the Extract and Datapump processes in the source environment: GGSCI (prim1-ol6-112.localdomain) 1> STOP EXTRACT * Sending STOP request to EXTRACT EGGTEST1 ... Request processed. Sending STOP request to EXTRACT PGGTEST1 ... Request processed. 
Check the status of the Replicat process in the target environment and ensure that it has processed the timestamp noted in step 3: GGSCI (stdby1-ol6-112.localdomain) 54> INFO REPLICAT * REPLICAT RGGTEST1 Last Started 2013-03-28 10:15 Status RUNNING Checkpoint Lag 00:00:00 (updated 00:00:04 ago) Log Read Checkpoint File ./dirdat/rt000062 2013-03-28 10:15:04.950188 RBA 10039 Stop the Replicat process in the target environment: GGSCI (stdby1-ol6-112.localdomain) 48> STOP REPLICAT * Sending STOP request to REPLICAT RGGTEST1 ... Request processed. Apply the schema changes to the source database: SQL> ALTER TABLE SCOTT.EMP ADD DOB DATE; Table altered. SQL> ALTER TABLE SCOTT.DEPT MODIFY DNAME VARCHAR(20); Table altered. SQL> CREATE TABLE SCOTT.ITEMS ( ITEMNO NUMBER(5) PRIMARY KEY, NAME VARCHAR(20)); Table created. SQL> CREATE TABLE SCOTT.SALES ( INVOICENO NUMBER(9) PRIMARY KEY, ITEMNO NUMBER(5) REFERENCES SCOTT.ITEMS(ITEMNO), EMPNO NUMBER(4) REFERENCES SCOTT.EMP(EMPNO)); Table created. SQL> UPDATE SCOTT.EMP SET DOB=TO_DATE('01-01-1980','DD-MM-YYYY'); 14 rows updated. SQL> INSERT INTO SCOTT.ITEMS VALUES (1,'IRON'); 1 row created. SQL> INSERT INTO SCOTT.ITEMS VALUES (2,'COPPER'); 1 row created. SQL> INSERT INTO SCOTT.ITEMS VALUES (3,'GOLD'); 1 row created. SQL> INSERT INTO SCOTT.ITEMS VALUES (4,'SILVER'); 1 row created. SQL> COMMIT; Commit complete. Apply the schema changes to the target database: SQL> ALTER TABLE SCOTT.EMP ADD DOB DATE; Table altered. SQL> ALTER TABLE SCOTT.DEPT MODIFY DNAME VARCHAR(20); Table altered. SQL> CREATE TABLE SCOTT.ITEMS ( ITEMNO NUMBER(5) PRIMARY KEY, NAME VARCHAR(20)); Table created. SQL> CREATE TABLE SCOTT.SALES ( INVOICENO NUMBER(9) PRIMARY KEY, ITEMNO NUMBER(5) REFERENCES SCOTT.ITEMS(ITEMNO), EMPNO NUMBER(4) REFERENCES SCOTT.EMP(EMPNO)); Table created. SQL> UPDATE SCOTT.EMP SET DOB=TO_DATE('01-01-1980','DD-MM-YYYY'); 14 rows updated. SQL> INSERT INTO SCOTT.ITEMS VALUES (1,'IRON'); 1 row created. SQL> INSERT INTO SCOTT.ITEMS VALUES (2,'COPPER'); 1 row created. SQL> INSERT INTO SCOTT.ITEMS VALUES (3,'GOLD'); 1 row created. SQL> INSERT INTO SCOTT.ITEMS VALUES (4,'SILVER'); 1 row created. SQL> COMMIT; Commit complete. Add supplemental logging for the newly added tables: GGSCI (prim1-ol6-112.localdomain) 4> DBLOGIN USERID GGATE_ADMIN@DBORATEST Password: Successfully logged into database. GGSCI (prim1-ol6-112.localdomain) 5> ADD TRANDATA SCOTT.ITEMS Logging of supplemental redo data enabled for table SCOTT.ITEMS. GGSCI (prim1-ol6-112.localdomain) 6> ADD TRANDATA SCOTT.SALES Logging of supplemental redo data enabled for table SCOTT.SALES. Update the parameter file for generating definitions as follows: vi $GG_HOME/dirprm/defs.prm DEFSFILE ./dirdef/defs.def USERID ggate_admin@dboratest, PASSWORD XXXX TABLE SCOTT.EMP; TABLE SCOTT.DEPT; TABLE SCOTT.BONUS; TABLE SCOTT.DUMMY; TABLE SCOTT.SALGRADE; TABLE SCOTT.ITEMS; TABLE SCOTT.SALES; Generate the definitions in the source environment: ./defgen paramfile ./dirprm/defs.prm Push the definitions file to the target server using scp: scp ./dirdef/defs.def stdby1-ol6-112:/u01/app/ggate/dirdef/ Edit the Extract and Datapump process parameter to include the newly created tables if you have specified individual table names in them. Alter the Extract and Datapump processes to skip the changes generated by the Application Schema Patch: GGSCI (prim1-ol6-112.localdomain) 7> ALTER EXTRACT EGGTEST1 BEGIN NOW EXTRACT altered. GGSCI (prim1-ol6-112.localdomain) 8> ALTER EXTRACT PGGTEST1 BEGIN NOW EXTRACT altered. 
Start the Extract and Datapump in the source environment: GGSCI (prim1-ol6-112.localdomain) 9> START EXTRACT * Sending START request to MANAGER ... EXTRACT EGGTEST1 starting Sending START request to MANAGER ... EXTRACT PGGTEST1 starting Edit the Replicat process parameter file to include the tables: ./ggsci EDIT PARAMS RGGTEST1 REPLICAT RGGTEST1 USERID GGATE_ADMIN@TGORTEST, PASSWORD GGATE_ADMIN DISCARDFILE /u01/app/ggate/dirrpt/RGGTEST1.dsc,append,MEGABYTES 500 SOURCEDEFS ./dirdef/defs.def MAP SCOTT.BONUS, TARGET SCOTT.BONUS; MAP SCOTT.SALGRADE, TARGET SCOTT.SALGRADE; MAP SCOTT.DEPT, TARGET SCOTT.DEPT; MAP SCOTT.DUMMY, TARGET SCOTT.DUMMY; MAP SCOTT.EMP, TARGET SCOTT.EMP; MAP SCOTT.EMP,TARGET SCOTT.EMP_DIFFCOL_ORDER; MAP SCOTT.EMP, TARGET SCOTT.EMP_EXTRACOL, COLMAP(USEDEFAULTS, LAST_UPDATE_TIME = @DATENOW ()); MAP SCOTT.SALES, TARGET SCOTT.SALES; MAP SCOTT.ITEMS, TARGET SCOTT.ITEMS; Start the Replicat process in the target environment: GGSCI (stdby1-ol6-112.localdomain) 56> START REPLICAT RGGTEST1 Sending START request to MANAGER ... REPLICAT RGGTEST1 starting How it works... You can follow the previously mentioned procedure to apply any DDL changes to the tables in the source database. This procedure is valid for environments where existing table structures between the source and the target databases are not similar. The key things to note in this method are: The changes should only be made when all the changes extracted by GoldenGate are applied to the target database, and the replication processes are stopped. Once the DDL changes have been performed in the source database, the definitions file needs to be regenerated. The changes that you are making to the table structures needs to be performed on both sides. There's more… Some of the assumptions made in the earlier procedure might not hold true for all environments. Let's see what needs to be done in cases where the environment does not satisfy these conditions: Individual table permissions granted to the GoldenGate Admin user If you have granted table-specific permissions to the GoldenGate Admin user in the source and target environments, you would need to grant them on the newly added tables to allow the GoldenGate user to read their data in the source environment and also to apply the changes to these tables in the target environment. Supplemental logging for modified tables without any keys If you are adding or deleting any columns from the tables in the source database which do not have any primary/unique keys, you would then need to drop the existing supplemental log group and read them. This is because when there are no primary/unique keys in a table, GoldenGate adds all columns to the supplemental log group. This supplemental log group will need to be modified when the structure of the underlying table is modified. Supplemental log groups with all columns for modified tables In some cases, you would need to enable supplemental logging on all columns of the source tables that you are replicating. This is mostly applicable for consolidation replication topologies where all changes are captured and converted into INSERTs in the target environment, which usually is a Data warehouse. In such cases, you need to drop and read the supplemental logging on the tables in which you are adding or removing any columns.

Self-service Business Intelligence, Creating Value from Data

Packt
20 Sep 2013
15 min read
(For more resources related to this topic, see here.) Over the years most businesses have spent considerable amount of time, money, and effort in building databases, reporting systems, and Business Intelligence (BI) systems. IT often thinks that they are providing the necessary information to the business users for them to make the right decisions. However, when I meet the users they tell me a different story. Most often they say that they do not have the information they need to do their job. Or they have to spend a lot of time getting the relevant information. Many users state that they spend more time getting access to the data than understanding the information. This divide between IT and business is very common, it causes a lot of frustration and can cost a lot of money, which is a real issue for companies that needs to be solved for them to be profitable in the future. Research shows that by 2015 companies that build a good information management system will be 20 percent more profitable compared to their peers. You can read the entire research publication from http://download.microsoft.com/download/7/B/8/7B8AC938-2928-4B65-B1B3-0B523DDFCDC7/Big%20Data%20Gartner%20 information_management_in_the_21st%20Century.pdf. So how can an organization avoid the pitfalls in business intelligence systems and create an effective way of working with information? This article will cover the following topics concerning it: Common user requirements related to BI Understanding how these requirements can be solved by Analysis Services An introduction to self-service reporting Identifying common user requirements for a business intelligence system In many cases, companies that struggle with information delivery do not have a dedicated reporting system or data warehouse. Instead the users have access only to the operational reports provided by each line of business application. This is extremely troublesome for the users that want to compare information from different systems. As an example, think of a sales person that wants to have a report that shows the sales pipeline, from the Customer Relationship Management (CRM) system together with the actual sales figures from the Enterprise Resource Planning (ERP) system. Without a common reporting system the users have to combine the information themselves with whatever tools are available to them. Most often this tool is Microsoft Excel. While Microsoft Excel is an application that can be used to effectively display information to the users, it is not the best system for data integration. To perform the steps of extracting, transforming, and loading data (ETL), from the source system, the users have to write tedious formulas and macros to clean data, before they can start comparing the numbers and taking actual decisions based on the information. Lack of a dedicated reporting system can also cause trouble with the performance of the Online Transaction Processing (OLTP) system. When I worked in the SQL Server support group at Microsoft, we often had customers contacting us on performance issues that they had due to the users running the heavy reports directly on the production system. To solve this problem, many companies invest in a dedicated reporting system or a data warehouse. The purpose of this system is to contain a database customized for reporting, where the data can be transformed and combined once and for all from all source systems. The data warehouse also serves another purpose and that is to serve as the storage of historic data. 
Many companies that have invested in a common reporting database or data warehouse still require a person with IT skills to create a report. The main reason for this is that the organizations that have invested in a reporting system have had the expert users define the requirements for the system. Expert users will have totally different requirements than the majority of the users in the organization and an expert tool is often very hard to learn. An expert tool that is too hard for the normal users will put a strain on the IT department that will have to produce all the reports. This will result in the end users waiting for their reports for weeks and even months. One large corporation that I worked with had invested millions of dollars in a reporting solution, but to get a new report the users had to wait between nine and 12 months, before they got the report in their hand. Imagine the frustration and the grief that waiting this long before getting the right information causes the end users. To many users, business intelligence means simple reports with only the ability to filter data in a limited way. While simple reports such as the one in the preceding screenshot can provide valuable information, it does not give the users the possibility to examine the data in detail. The users cannot slice-and-dice the information and they cannot drill down to the details, if the aggregated level that the report shows is insufficient for decision making. If a user would like to have these capabilities, they would need to export the information into a tool that enables them to easily do so. In general, this means that the users bring the information into Excel to be able to pivot the information and add their own measures. This often results in a situation where there are thousands of Excel spreadsheets floating around in the organization, all with their own data, and with different formulas calculating the same measures. When analyzing data, the data itself is the most important thing. But if you cannot understand the values, the data is of no benefit to you. Many users find that it is easier to understand information, if it is presented in a way that they can consume efficiently. This means different things to different users, if you are a CEO, you probably want to consume aggregated information in a dashboard such as the one you can see in the following screenshot: On the other hand, if you are a controller, you want to see the numbers on a very detailed level that would enable you to analyze the information. A controller needs to be able to find the root cause, which in most cases includes analyzing information on a transaction level. A sales representative probably does not want to analyze the information. Instead, he or she would like to have a pre-canned report filtered on customers and time to see what goods the customers have bought in the past, and maybe some suggested products that could be recommended to the customers. Creating a flexible reporting solution What the companies need is a way for the end users to access information in a user-friendly interface, where they can create their own analytical reports. Analytical reporting gives the user the ability to see trends, look at information on an aggregated level, and drill down to the detailed information with a single-click. In most cases this will involve building a data warehouse of some kind, especially if you are going to reuse the information in several reports. 
The reason for creating a data warehouse is mainly the ability to combine different sources into one infrastructure once. If you do the integration and cleaning of the data in the reporting layer, you will end up repeating the same data modification tasks in every report. This is tedious and can introduce unwanted errors, as the developer has to repeat all the integration work in every report that needs to access the data. If you do it in the data warehouse instead, you can create an ETL program that moves and prepares the data for the reports once, and all the reports can then access this data.
A data warehouse is also beneficial from many other angles. With a data warehouse, you can offload the burden of running the reports from the transactional system, a system that is built mainly for high transaction rates at high speed, and not for providing summarized data in a report to the users.
From a report authoring perspective, a data warehouse is also easier to work with. Consider the simple static report shown in the first screenshot. This report is built against a data warehouse that has been modeled using dimensional modeling. This means that the query used in the report is very simple compared to getting the information from a transactional system. In this case, the query is a join between six tables containing all the information that is available about dates, products, sales territories, and sales.
select
  f.SalesOrderNumber,
  s.EnglishProductSubcategoryName,
  SUM(f.OrderQuantity) as OrderQuantity,
  SUM(f.SalesAmount) as SalesAmount,
  SUM(f.TaxAmt) as TaxAmt
from FactInternetSales f
join DimProduct p on f.ProductKey = p.ProductKey
join DimProductSubcategory s on p.ProductSubcategoryKey = s.ProductSubcategoryKey
join DimProductCategory c on s.ProductCategoryKey = c.ProductCategoryKey
join DimDate d on f.OrderDateKey = d.DateKey
join DimSalesTerritory t on f.SalesTerritoryKey = t.SalesTerritoryKey
where c.EnglishProductCategoryName = @ProductCategory
  and d.CalendarYear = @Year
  and d.EnglishMonthName = @MonthName
  and t.SalesTerritoryCountry = @Country
group by f.SalesOrderNumber, s.EnglishProductSubcategoryName
You can download the example code files for all Packt books you have purchased from your account at http://www.packtpub.com. If you purchased this book elsewhere, you can visit http://www.packtpub.com/support and register to have the files e-mailed directly to you.
The preceding query is included for illustrative purposes. As you can see, it is very simple to write for someone who is well versed in Transact-SQL. Compare this to getting all the information necessary to produce this report, stored here in six tables, directly from the operational system; it would be a daunting task. Even though the AdventureWorks sample database is very simple, we still need to query a lot of tables to get to the information. The following figure shows the tables from the OLTP system you would need to query to get the information available in the six tables in the data warehouse. Now imagine creating the same query against a real system; there could easily be hundreds of tables involved in extracting the data that is stored in a simple data model used for sales reporting. As you can see, working against a model that has been optimized for reporting is much simpler when creating the reports. Even with a well-structured data warehouse, many users would struggle with writing the select query driving the report shown earlier. 
The users, in general, do not know SQL. They typically do not understand the database schema since the table and column names usually consists of abbreviations that can be cryptic to the casual user. What if a user would like to change the report, so that it would show data in a matrix with the ability to drill down to lower levels? Then they most probably would need to contact IT. IT would need to rewrite the query and change the entire report layout, causing a delay between the need of the data and the availability. What is needed is a tool that enables the users to work with the business attributes instead of the tables and columns, with simple understandable objects instead of a complex database engine. Fortunately for us SQL Server contains this functionality; it is just for us database professionals to learn how to bring these capabilities to the business. That is what this article is all about, creating a flexible reporting solution allowing the end users to create their own reports. I have assumed that you as the reader have knowledge of databases and are well-versed with your data. What you will learn in this article is, how to use a component of SQL Server 2012 called SQL Server Analysis Services to create a cube or semantic model, exposing data in the simple business attributes allowing the users to use different tools to create their own ad hoc reports. Think of the cube as a PivotTable spreadsheet in Microsoft Excel. From the users perspective, they have full flexibility when analyzing the data. You can drag-and-drop whichever column you want to, into either the rows, columns, or filter boxes. The PivotTable spreadsheet also summarizes the information depending on the different attributes added to the PivotTable spreadsheet. The same capabilities are provided through the semantic model or the cube. When you are using the semantic model the data is not stored locally within the PivotTable spreadsheet, as it is when you are using the normal PivotTable functionality in Microsoft Excel. This means that you are not limited to the number of rows that Microsoft Excel is able to handle. Since the semantic model sits in a layer between the database and the end user reporting tool, you have the ability to rename fields, add calculations, and enhance your data. It also means that whenever new data is available in the database and you have processed your semantic model, then all the reports accessing the model will be updated. The semantic model is available in SQL Server Analysis Services. It has been part of the SQL Server package since Version 7.0 and has had major revisions in the SQL Server 2005, 2008 R2, and 2012 versions. This article will focus on how to create semantic models or cubes through practical step-by-step instructions. Getting user value through self-service reporting SQL Server Analysis Services is an application that allows you to create a semantic model that can be used to analyze very large amounts of data with great speed. The models can either be user created, or created and maintained by IT. If the user wants to create it, they can do so, by using a component in Microsoft Excel 2010 and upwards called PowerPivot. If you run Microsoft Excel 2013, it is included in the installed product, and you just need to enable it. In Microsoft Excel 2010, you have to download it as a separate add-in that you either can find on the Microsoft homepage or on the site called http://www.powerpivot.com. 
PowerPivot creates and uses a client-side semantic model that runs in the context of the Microsoft Excel process; you can only use Microsoft Excel as a way of analyzing the data. If you just would like to run a user created model, you do not need SQL Server at all, you just need Microsoft Excel. On the other hand, if you would like to maintain user created models centrally then you need, both SQL Server 2012 and SharePoint. Instead, if you would like IT to create and maintain a central semantic model, then IT need to install SQL Server Analysis Services. IT will, in most cases, not use Microsoft Excel to create the semantic models. Instead, IT will use Visual Studio as their tool. Visual Studio is much more suitable for IT compared to Microsoft Excel. Not only will they use it to create and maintain SQL Server Analysis Services semantic models, they will also use it for other database related tasks. It is a tool that can be connected to a source control system allowing several developers to work on the same project. The semantic models that they create from Visual Studio will run on a server that several clients can connect to simultaneously. The benefit of running a server-side model is that they can use the computational power of the server, this means that you can access more data. It also means that you can use a variety of tools to display the information. Both approaches enable users to do their own self-service reporting. In the case where PowerPivot is used they have complete freedom; but they also need the necessary knowledge to extract the data from the source systems and build the model themselves. In the case where IT maintains the semantic model, the users only need the knowledge to connect an end user tool such as Microsoft Excel to query the model. The users are, in this case, limited to the data that is available in the predefined model, but on the other hand, it is much simpler to do their own reporting. This is something that can be seen in the preceding figure that shows Microsoft Excel 2013 connected to a semantic model. SQL Server Analysis Services is available in the Standard edition with limited functionality, and in the BI and Enterprise edition with full functionality. For smaller departmental solutions the Standard edition can be used, but in many cases you will find that you need either the BI or the Enterprise edition of SQL Server. If you would like to create in-memory models, you definitely cannot run the Standard edition of the software since this functionality is not available in the Standard edition of SQL Server. Summary In this article, you learned about the requirements that most organizations have when it comes to an information management platform. You were introduced to SQL Server Analysis Services that provides the capabilities needed to create a self-service platform that can serve as the central place for all the information handling. SQL Server Analysis Services allows users to work with the data in the form of business entities, instead of through accessing a databases schema. It allows users to use easy to learn query tools such as Microsoft Excel to analyze the large amounts of data with subsecond response times. The users can easily create different kinds of reports and dashboards with the semantic model as the data source. 
Resources for Article: Further resources on this subject: MySQL Linked Server on SQL Server 2008 [Article] Connecting to Microsoft SQL Server Compact 3.5 with Visual Studio [Article] FAQs on Microsoft SQL Server 2008 High Availability [Article]
Angular Zen

Packt
19 Sep 2013
5 min read
(For more resources related to this topic, see here.) Meet AngularJS AngularJS is a client-side MVC framework written in JavaScript. It runs in a web browser and greatly helps us (developers) to write modern, single-page, AJAX-style web applications. It is a general purpose framework, but it shines when used to write CRUD (Create Read Update Delete) type web applications. Getting familiar with the framework AngularJS is a recent addition to the client-side MVC frameworks list, yet it has managed to attract a lot of attention, mostly due to its innovative templating system, ease of development, and very solid engineering practices. Indeed, its templating system is unique in many respects: It uses HTML as the templating language It doesn't require an explicit DOM refresh, as AngularJS is capable of tracking user actions, browser events, and model changes to figure out when and which templates to refresh It has a very interesting and extensible components subsystem, and it is possible to teach a browser how to interpret new HTML tags and attributes The templating subsystem might be the most visible part of AngularJS, but don't be mistaken that AngularJS is a complete framework packed with several utilities and services typically needed in single-page web applications. AngularJS also has some hidden treasures, dependency injection (DI) and strong focus on testability. The built-in support for DI makes it easy to assemble a web application from smaller, thoroughly tested services. The design of the framework and the tooling around it promote testing practices at each stage of the development process. Finding your way in the project AngularJS is a relatively new actor on the client-side MVC frameworks scene; its 1.0 version was released only in June 2012. In reality, the work on this framework started in 2009 as a personal project of Miško Hevery, a Google employee. The initial idea turned out to be so good that, at the time of writing, the project was officially backed by Google Inc., and there is a whole team at Google working full-time on the framework. AngularJS is an open source project hosted on GitHub (https://github.com/angular/angular.js) and licensed by Google, Inc. under the terms of the MIT license. The community At the end of the day, no project would survive without people standing behind it. Fortunately, AngularJS has a great, supportive community. The following are some of the communication channels where one can discuss design issues and request help: angular@googlegroups.com mailing list (Google group) Google + community at https://plus.google.com/u/0/communities/115368820700870330756 #angularjs IRC channel [angularjs] tag at http://stackoverflow.com AngularJS teams stay in touch with the community by maintaining a blog (http://blog.angularjs.org/) and being present in the social media, Google + (+ AngularJS), and Twitter (@angularjs). There are also community meet ups being organized around the world; if one happens to be hosted near a place you live, it is definitely worth attending! Online learning resources AngularJS has its own dedicated website (http://www.angularjs.org) where we can find everything that one would expect from a respectable framework: conceptual overview, tutorials, developer's guide, API reference, and so on. Source code for all released AngularJS versions can be downloaded from http://code.angularjs.org. People looking for code examples won't be disappointed, as AngularJS documentation itself has plenty of code snippets. 
On top of this, we can browse a gallery of applications built with AngularJS (http://builtwith.angularjs.org). A dedicated YouTube channel (http://www.youtube.com/user/angularjs) has recordings from many past events as well as some very useful video tutorials. Libraries and extensions While AngularJS core is packed with functionality, the active community keeps adding new extensions almost every day. Many of those are listed on a dedicated website: http://ngmodules.org. Tools AngularJS is built on top of HTML and JavaScript, two technologies that we've been using in web development for years. Thanks to this, we can continue using our favorite editors and IDEs, browser extensions, and so on without any issues. Additionally, the AngularJS community has contributed several interesting additions to the existing HTML/JavaScript toolbox. Batarang Batarang is a Chrome developer tool extension for inspecting the AngularJS web applications. Batarang is very handy for visualizing and examining the runtime characteristics of AngularJS applications. We are going to use it extensively in this article to peek under the hood of a running application. Batarang can be installed from the Chrome's Web Store (AngularJS Batarang) as any other Chrome extension. Plunker and jsFiddle Both Plunker (http://plnkr.co) and jsFiddle (http://jsfiddle.net) make it very easy to share live-code snippets (JavaScript, CSS, and HTML). While those tools are not strictly reserved for usage with AngularJS, they were quickly adopted by the AngularJS community to share the small-code examples, scenarios to reproduce bugs, and so on. Plunker deserves special mentioning as it was written in AngularJS, and is a very popular tool in the community. IDE extensions and plugins Each one of us has a favorite IDE or an editor. The good news is that there are existing plugins/extensions for several popular IDEs such as Sublime Text 2 (https://github.com/angular-ui/AngularJS-sublime-package), Jet Brains' products (http://plugins.jetbrains.com/plugin?pr=idea&pluginId=6971), and so on.

Linux Shell Scripting – various recipes to help you

Packt
16 Sep 2013
16 min read
(For more resources related to this topic, see here.) The shell scripting language is packed with all the essential problem-solving components for Unix/Linux systems. Text processing is one of the key areas where shell scripting is used, and there are beautiful utilities such as sed, awk, grep, and cut, which can be combined to solve problems related to text processing. Various utilities help to process a file in fine detail of a character, line, word, column, row, and so on, allowing us to manipulate a text file in many ways. Regular expressions are the core of pattern-matching techniques, and most of the text-processing utilities come with support for it. By using suitable regular expression strings, we can produce the desired output, such as filtering, stripping, replacing, and searching. Using regular expressions Regular expressions are the heart of text-processing techniques based on pattern matching. For fluency in writing text-processing tools, one must have a basic understanding of regular expressions. Using wild card techniques, the scope of matching text with patterns is very limited. Regular expressions are a form of tiny, highly-specialized programming language used to match text. A typical regular expression for matching an e-mail address might look like [a-z0-9_]+@[a-z0-9]+\.[a-z]+. If this looks weird, don't worry, it is really simple once you understand the concepts through this recipe. How to do it... Regular expressions are composed of text fragments and symbols, which have special meanings. Using these, we can construct any suitable regular expression string to match any text according to the context. As regex is a generic language to match texts, we are not introducing any tools in this recipe. Let's see a few examples of text matching: To match all words in a given text, we can write the regex as follows: ( ?[a-zA-Z]+ ?) ? is the notation for zero or one occurrence of the previous expression, which in this case is the space character. The [a-zA-Z]+ notation represents one or more alphabet characters (a-z and A-Z). To match an IP address, we can write the regex as follows: [0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3} Or: [[:digit:]]{1,3}\.[[:digit:]]{1,3}\.[[:digit:]]{1,3}\.[[:digit:]]{1,3} We know that an IP address is in the form 192.168.0.2. It is in the form of four integers (each from 0 to 255), separated by dots (for example, 192.168.0.2). [0-9] or [:digit:] represents a match for digits from 0 to 9. {1,3} matches one to three digits and \. matches the dot character (.). This regex will match an IP address in the text being processed. However, it doesn't check for the validity of the address. For example, an IP address of the form 123.300.1.1 will be matched by the regex despite being an invalid IP. This is because when parsing text streams, usually the aim is to only detect IPs. How it works... Let's first go through the basic components of regular expressions (regex): regex Description Example ^ This specifies the start of the line marker. ^tux matches a line that starts with tux. $ This specifies the end of the line marker. tux$ matches a line that ends with tux. . This matches any one character. Hack. matches Hack1, Hacki, but not Hack12 or Hackil; only one additional character matches. [] This matches any one of the characters enclosed in [chars]. coo[kl] matches cook or cool. [^] This matches any one of the characters except those that are enclosed in [^chars]. 9[^01] matches 92 and 93, but not 91 and 90. 
[-] This matches any character within the range specified in []. [1-5] matches any digits from 1 to 5. ? This means that the preceding item must match one or zero times. colou?r matches color or colour, but not colouur. + This means that the preceding item must match one or more times. Rollno-9+ matches Rollno-99 and Rollno-9, but not Rollno-. * This means that the preceding item must match zero or more times. co*l matches cl, col, and coool. () This treats the terms enclosed as one entity ma(tri)?x matches max or matrix. {n} This means that the preceding item must match n times. [0-9]{3} matches any three-digit number. [0-9]{3} can be expanded as [0-9][0-9][0-9]. {n,} This specifies the minimum number of times the preceding item should match. [0-9]{2,} matches any number that is two digits or longer. {n, m} This specifies the minimum and maximum number of times the preceding item should match. [0-9]{2,5} matches any number has two digits to five digits. | This specifies the alternation-one of the items on either of sides of | should match. Oct (1st | 2nd) matches Oct 1st or Oct 2nd. \ This is the escape character for escaping any of the special characters mentioned previously. a\.b matches a.b, but not ajb. It ignores the special meaning of . because of \. For more details on the regular expression components available, you can refer to the following URL: http://www.linuxforu.com/2011/04/sed-explained-part-1/ There's more... Let's see how the special meanings of certain characters are specified in the regular expressions. Treatment of special characters Regular expressions use some characters, such as $, ^, ., *, +, {, and }, as special characters. But, what if we want to use these characters as normal text characters? Let's see an example of a regex, a.txt. This will match the character a, followed by any character (due to the '.' character), which is then followed by the string txt . However, we want '.' to match a literal '.' instead of any character. In order to achieve this, we precede the character with a backward slash \ (doing this is called escaping the character). This indicates that the regex wants to match the literal character rather than its special meaning. Hence, the final regex becomes a\.txt. Visualizing regular expressions Regular expressions can be tough to understand at times, but for people who are good at understanding things with diagrams, there are utilities available to help in visualizing regex. Here is one such tool that you can use by browsing to http://www.regexper.com; it basically lets you enter a regular expression and creates a nice graph to help understand it. Here is a screenshot showing the regular expression we saw in the previous section: Searching and mining a text inside a file with grep Searching inside a file is an important use case in text processing. We may need to search through thousands of lines in a file to find out some required data, by using certain specifications. This recipe will help you learn how to locate data items of a given specification from a pool of data. How to do it... The grep command is the magic Unix utility for searching in text. It accepts regular expressions, and can produce output in various formats. Additionally, it has numerous interesting options. 
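Before working through grep's options one by one, it can help to try the patterns from the previous recipe interactively. The following commands are a small sketch (the sample strings are invented purely for illustration) that use grep's -E option, covered a little further on, to apply the simplified word pattern [a-zA-Z]+ and the IP pattern discussed earlier:
$ echo "The quick brown fox 42" | grep -E -o "[a-zA-Z]+"
The
quick
brown
fox
$ echo "hosts: 192.168.0.2 and 123.300.1.1" | grep -E -o "[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}"
192.168.0.2
123.300.1.1
Note that the second command also prints 123.300.1.1, which confirms the point made earlier: the pattern detects the shape of an IP address, but it does not validate the octet ranges.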
Let's see how to use them:
To search for lines of text that contain the given pattern:
$ grep pattern filename
this is the line containing pattern
Or:
$ grep "pattern" filename
this is the line containing pattern
We can also read from stdin as follows:
$ echo -e "this is a word\nnext line" | grep word
this is a word
Perform a search in multiple files by using a single grep invocation, as follows:
$ grep "match_text" file1 file2 file3 ...
We can highlight the word in the line by using the --color option as follows:
$ grep word filename --color=auto
this is the line containing word
Usually, the grep command only interprets some of the special characters in match_text. To use the full set of regular expressions as input arguments, the -E option should be added, which means an extended regular expression. Or, we can use an extended regular expression enabled grep command, egrep. For example:
$ grep -E "[a-z]+" filename
Or:
$ egrep "[a-z]+" filename
In order to output only the matching portion of a text in a file, use the -o option as follows:
$ echo this is a line. | egrep -o "[a-z]+\."
line.
In order to print all of the lines, except the line containing match_pattern, use:
$ grep -v match_pattern file
The -v option added to grep inverts the match results.
Count the number of lines in which a matching string or regex match appears in a file or text, as follows:
$ grep -c "text" filename
10
It should be noted that -c counts only the number of matching lines, not the number of times a match is made. For example:
$ echo -e "1 2 3 4\nhello\n5 6" | egrep -c "[0-9]"
2
Even though there are six matching items, it prints 2, since there are only two matching lines. Multiple matches in a single line are counted only once. To count the number of matching items in a file, use the following trick:
$ echo -e "1 2 3 4\nhello\n5 6" | egrep -o "[0-9]" | wc -l
6
Print the line number of the match string as follows:
$ cat sample1.txt
gnu is not unix
linux is fun
bash is art
$ cat sample2.txt
planetlinux
$ grep linux -n sample1.txt
2:linux is fun
or
$ cat sample1.txt | grep linux -n
If multiple files are used, it will also print the filename with the result as follows:
$ grep linux -n sample1.txt sample2.txt
sample1.txt:2:linux is fun
sample2.txt:2:planetlinux
Print the character or byte offset at which a pattern matches, as follows:
$ echo gnu is not unix | grep -b -o "not"
7:not
The character offset for a string in a line is a counter from 0, starting with the first character. In the preceding example, not is at the seventh offset position (that is, not starts from the seventh character in the line; that is, gnu is not unix). The -b option is always used with -o.
To search over multiple files, and list which files contain the pattern, we use the following:
$ grep -l linux sample1.txt sample2.txt
sample1.txt
sample2.txt
The inverse of the -l argument is -L. The -L argument returns a list of non-matching files.
There's more...
We have seen the basic usages of the grep command, but that's not it; the grep command comes with even more features. Let's go through those.
Recursively search many files
To recursively search for a text over many directories of descendants, use the following command:
$ grep "text" . -R -n
In this command, "." specifies the current directory. The options -R and -r mean the same thing when used with grep. For example:
$ cd src_dir
$ grep "test_function()" . -R -n
./miscutils/test.c:16:test_function();
test_function() exists in line number 16 of miscutils/test.c. 
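Building on the recursive search just shown, here is a hypothetical convenience wrapper (the script name and behaviour are this article's own invention, not a standard tool) that combines several of the options covered in this recipe, namely -R, -n, -l, and --color:
#!/bin/bash
#Filename: findtext.sh
#Desc: Recursively search a directory for a pattern (illustrative sketch)

pattern=$1
dir=${2:-.}   # default to the current directory

if [ -z "$pattern" ]; then
  echo "Usage: $0 pattern [directory]"
  exit 1
fi

# First list only the files that contain the pattern (-l),
# then print every match with its filename and line number (-R -n),
# highlighting the matched text (--color=auto).
echo "Files containing '$pattern':"
grep -R -l "$pattern" "$dir"
echo "Matches:"
grep -R -n --color=auto "$pattern" "$dir"
It can be run in the same way as the earlier examples, for instance as ./findtext.sh test_function src_dir. Under the hood it is still just the plain recursive grep command shown above.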
This is one of the most frequently used commands by developers. It is used to find files in the source code where a certain text exists. Ignoring case of pattern The -i argument helps match patterns to be evaluated, without considering the uppercase or lowercase. For example: $ echo hello world | grep -i "HELLO" hello grep by matching multiple patterns Usually, we specify single patterns for matching. However, we can use an argument -e to specify multiple patterns for matching, as follows: $ grep -e "pattern1" -e "pattern" This will print the lines that contain either of the patterns and output one line for each match. For example: $ echo this is a line of text | grep -e "this" -e "line" -o this line There is also another way to specify multiple patterns. We can use a pattern file for reading patterns. Write patterns to match line-by-line, and execute grep with a -f argument as follows: $ grep -f pattern_filesource_filename For example: $ cat pat_file hello cool $ echo hello this is cool | grep -f pat_file hello this is cool Including and excluding files in a grep search grep can include or exclude files in which to search. We can specify include files or exclude files by using wild card patterns. To search only for .c and .cpp files recursively in a directory by excluding all other file types, use the following command: $ grep "main()" . -r --include *.{c,cpp} Note, that some{string1,string2,string3} expands as somestring1 somestring2 somestring3. Exclude all README files in the search, as follows: $ grep "main()" . -r --exclude "README" To exclude directories, use the --exclude-dir option. To read a list of files to exclude from a file, use --exclude-from FILE. Using grep with xargs with zero-byte suffix The xargs command is often used to provide a list of file names as a command-line argument to another command. When filenames are used as command-line arguments, it is recommended to use a zero-byte terminator for the filenames instead of a space terminator. Some of the filenames can contain a space character, and it will be misinterpreted as a terminator, and a single filename may be broken into two file names (for example, New file.txt can be interpreted as two filenames New and file.txt). This problem can be avoided by using a zero-byte suffix. We use xargs so as to accept a stdin text from commands such as grep and find. Such commands can output text to stdout with a zero-byte suffix. In order to specify that the input terminator for filenames is zero byte (\0), we should use -0 with xargs. Create some test files as follows: $ echo "test" > file1 $ echo "cool" > file2 $ echo "test" > file3 In the following command sequence, grep outputs filenames with a zero-byte terminator (\0), because of the -Z option with grep. xargs -0 reads the input and separates filenames with a zero-byte terminator: $ grep "test" file* -lZ | xargs -0 rm Usually, -Z is used along with -l. Silent output for grep Sometimes, instead of actually looking at the matched strings, we are only interested in whether there was a match or not. For this, we can use the quiet option (-q), where the grep command does not write any output to the standard output. Instead, it runs the command and returns an exit status based on success or failure. We know that a command returns 0 on success, and non-zero on failure. Let's go through a script that makes use of grep in a quiet mode, for testing whether a match text appears in a file or not. 
#!/bin/bash
#Filename: silent_grep.sh
#Desc: Testing whether a file contains a text or not

if [ $# -ne 2 ]; then
  echo "Usage: $0 match_text filename"
  exit 1
fi

match_text=$1
filename=$2

grep -q "$match_text" $filename

if [ $? -eq 0 ]; then
  echo "The text exists in the file"
else
  echo "Text does not exist in the file"
fi
The silent_grep.sh script can be run as follows, by providing a match word (Student) and a file name (student_data.txt) as the command arguments:
$ ./silent_grep.sh Student student_data.txt
The text exists in the file
Printing lines before and after text matches
Context-based printing is one of the nice features of grep. When a matching line for a given match text is found, grep usually prints only the matching lines. But we may need "n" lines after the matching line, or "n" lines before the matching line, or both. This can be performed by using context-line control in grep. Let's see how to do it.
In order to print three lines after a match, use the -A option:
$ seq 10 | grep 5 -A 3
5
6
7
8
In order to print three lines before the match, use the -B option:
$ seq 10 | grep 5 -B 3
2
3
4
5
Print three lines after and before the match, and use the -C option as follows:
$ seq 10 | grep 5 -C 3
2
3
4
5
6
7
8
If there are multiple matches, then each section is delimited by a line "--":
$ echo -e "a\nb\nc\na\nb\nc" | grep a -A 1
a
b
--
a
b
Cutting a file column-wise with cut
We may need to cut the text by a column rather than a row. Let's assume that we have a text file containing student reports with columns, such as Roll, Name, Mark, and Percentage. We need to extract only the names of the students to another file, or any nth column in the file, or extract two or more columns. This recipe will illustrate how to perform this task.
How to do it...
cut is a small utility that often comes to our help for cutting in column fashion. It can also specify the delimiter that separates each column. In cut terminology, each column is known as a field. To extract particular fields or columns, use the following syntax:
cut -f FIELD_LIST filename
FIELD_LIST is a list of columns that are to be displayed. The list consists of column numbers delimited by commas. For example:
$ cut -f 2,3 filename
Here, the second and the third columns are displayed. cut can also read input text from stdin. Tab is the default delimiter for fields or columns. If lines without delimiters are found, they are also printed. To avoid printing lines that do not have delimiter characters, attach the -s option along with cut. An example of using the cut command for columns is as follows:
$ cat student_data.txt
No Name Mark Percent
1 Sarath 45 90
2 Alex 49 98
3 Anu 45 90
$ cut -f1 student_data.txt
No
1
2
3
Extract multiple fields as follows:
$ cut -f2,4 student_data.txt
Name Percent
Sarath 90
Alex 98
Anu 90
To print multiple columns, provide a list of column numbers separated by commas as arguments to -f. We can also complement the extracted fields by using the --complement option. Suppose you have many fields and you want to print all the columns except the third column, then use the following command:
$ cut -f3 --complement student_data.txt
No Name Percent
1 Sarath 90
2 Alex 98
3 Anu 90
To specify the delimiter character for the fields, use the -d option as follows:
$ cat delimited_data.txt
No;Name;Mark;Percent
1;Sarath;45;90
2;Alex;49;98
3;Anu;45;90
$ cut -f2 -d";" delimited_data.txt
Name
Sarath
Alex
Anu
There's more...
The cut command has more options to specify the character sequences to be displayed as columns. 
Let's go through the additional options available with cut. Specifying the range of characters or bytes as fields Suppose that we don't rely on delimiters, but we need to extract fields in such a way that we need to define a range of characters (counting from 0 as the start of line) as a field. Such extractions are possible with cut. Let's see what notations are possible: N- from the Nth byte, character, or field, to the end of line N-M from the Nth to Mth (included) byte, character, or field -M from first to Mth (included) byte, character, or field We use the preceding notations to specify fields as a range of bytes or characters with the following options: -b for bytes -c for characters -f for defining fields For example: $ cat range_fields.txt abcdefghijklmnopqrstuvwxyz abcdefghijklmnopqrstuvwxyz abcdefghijklmnopqrstuvwxyz abcdefghijklmnopqrstuvwxy You can print the first to fifth characters as follows: $ cut -c1-5 range_fields.txt abcde abcde abcde abcde The first two characters can be printed as follows: $ cut range_fields.txt -c -2 ab ab ab ab Replace -c with -b to count in bytes. We can specify the output delimiter while using with -c, -f, and -b, as follows: --output-delimiter "delimiter string" When multiple fields are extracted with -b or -c, the --output-delimiter is a must. Otherwise, you cannot distinguish between fields if it is not provided. For example: $ cut range_fields.txt -c1-3,6-9 --output-delimiter "," abc,fghi abc,fghi abc,fghi abc,fghi
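To finish the recipe, here is a short practical sketch that applies the same -d, -f, -c, and --output-delimiter options to a file most Linux systems already have. It assumes GNU cut and the standard colon-delimited /etc/passwd layout, where the first field is the login name and the seventh field is the login shell:
# Print login name and shell, tab-separated in the output
$ cut -d":" -f1,7 --output-delimiter $'\t' /etc/passwd
# Print only the first eight characters of every login name
$ cut -d":" -f1 /etc/passwd | cut -c1-8
Because --output-delimiter is given, the two extracted fields stay clearly separated in the output even though the input delimiter was a colon.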