How-To Tutorials - Web Development

Development of Windows Mobile Applications (Part 1)

Packt
26 Oct 2009
4 min read
Windows Mobile is available in various versions; for this article we will use Windows Mobile 6. Windows Mobile 6 uses .NET Compact Framework v2 SP2 and comes in three editions:

- Windows Mobile 6 Standard (phones without a touch screen)
- Windows Mobile 6 Professional (touch-screen devices with phone functionality)
- Windows Mobile 6 Classic (touch-screen devices without phone functionality)

Windows Mobile 6.1 and 6.5 are two later releases that add some features on top of Windows Mobile 6. Windows Mobile 7 is expected to be released in 2010 and is said to bring major updates. This article concentrates on development for Windows Mobile 6 Professional.

Software prerequisites

This article introduces development for Windows Mobile 6 Professional using Visual C#. Windows Mobile 6 ships with .NET Compact Framework v2 SP2 preinstalled. The .NET Compact Framework is a compact edition of the .NET Framework and does not have all the features of the complete .NET Framework. The following software is required for development:

- Microsoft Visual Studio 2008
- Windows Mobile 6 Professional SDK Refresh (the SDK contains the emulator used for testing and debugging)
- ActiveSync (used for data synchronization between the development machine and the Windows Mobile device)

Hello World

Making no exception to the golden rule of learning, we will start by writing a "Hello World" application. We will assume that you have installed all the prerequisite software mentioned above. Launch Visual Studio 2008 and select Visual C# (if prompted). Create a new project (File -> New -> Project). When creating a new project, the Visual Studio 2008 IDE provides an option to select an installed template, which creates the project with the basic structure required for development. For Windows Mobile, we will select the Smart Device project type. You can also provide:

- Name: the name of the project. We will call it MyFirstApp.
- Location: the location where the project will be created. Browse and set the desired location; we will use the default location for now.
- Solution Name: the name used to refer to the solution. Usually this is kept the same as the project name.

Since Windows Mobile 6 runs .NET Compact Framework v2, select .NET Framework 2.0 from the dropdown at the top right, then click OK. The next step is to select the target platform, .NET Compact Framework version, and template. For our application we will select:

- Target platform: Windows Mobile 6 Professional SDK
- .NET Compact Framework version: .NET Compact Framework Version 2.0
- Template: Device Application

The project MyFirstApp is now created, and the IDE opens a form. Let us look at the various sections on screen:

- The main section, called the development section, is where all the coding and designing of the form is done.
- The Toolbox lists all the available components. If it is not visible, click View -> Toolbox.
- The Solution Explorer shows all the forms, resources, and properties. If it is not visible, click View -> Solution Explorer.
- The Properties window displays all the properties of the selected component. If it is not visible, click View -> Properties Window.

By default, the form is named Form1. Let us first change the name of the form. Select the form, and its properties will be listed in the Properties window as key-value pairs. To change the form's name, change the value of the Name property. For this example we will change it to HelloWorldForm, and the form will be referred to as HelloWorldForm throughout the application. Changing the form's name does not change the form's caption (title), which still shows Form1. To change the caption, change the value of the Text property; for this example we will set it to Hello World. The file representing this form in Solution Explorer will still be named Form1.cs; you can keep this file name or rename it. We will rename it to HelloWorld.cs.

Getting Started on the Raspberry Pi

Packt
21 Feb 2018
7 min read
In this article by Soham Chetan Kamani, author of the book Full Stack Web Development with Raspberry Pi 3, we cover the Raspberry Pi and what makes it a platform for full stack web development. The Raspberry Pi has become hugely popular as a portable computer, and for good reason: when it comes to what you can do with this tiny piece of technology, the sky's the limit. Back in the day, computers were the size of entire neighborhood blocks, and only large corporations doing expensive research could afford them. After that we went on to embrace personal computers, which were still a bit expensive but could, for the most part, be bought by the common man. This brings us to where we are today, when we can buy a fully functioning Linux computer, as big as a credit card, for under $30. It is truly a huge leap in making computers available to anyone and everyone.

The marvel of the Raspberry Pi, however, doesn't end there. Its extreme portability means we can now do things that were not previously possible with traditional desktop computers. The GPIO pins give us easy access to interface with external devices, which allows the Pi to act as a bridge between embedded electronics and sensors on one side and the power of Linux on the other. In essence, we can run code in our favorite programming language (as long as it runs on Linux) and interface it directly with outside hardware quickly and easily. Once we couple this with the wireless networking capabilities introduced in the Raspberry Pi 3, we gain the ability to build applications that would not have been feasible before this device existed.

Web development and portable computing have come a long way. A few years ago we couldn't dream of making a rich, interactive, and performant application that runs in the browser. Today, not only can we do that, but we can do it all in the palm of our hands (quite literally). When we think of developing an application that uses databases, application servers, sockets, and cloud APIs, the picture that normally comes to mind is of many server racks sitting in a huge room. In this book, however, we are going to implement all of that using only the Raspberry Pi.

In this article, we will go through the concept of the Internet of Things and discuss how web development on the Raspberry Pi can help us get there. Following this, we will also learn how to set up our Raspberry Pi and access it from our computer. We will cover the following topics:

- The Internet of Things
- Our application
- Setting up the Raspberry Pi
- Remote access

The Internet of Things (IoT)

The web has, until today, been a network of computers exchanging data. The limitation of this is that it is a closed loop: people can send and receive data from other people via their computers, but rarely much else. The Internet of Things, in contrast, is a network of devices or sensors that connect the outside world to the internet. Superficially, nothing is different: the internet is still a network of computers. What has changed is that these computers now collect and upload data from things instead of people. This allows anyone who is connected to obtain information that is not collected by a human. The Internet of Things has been around as a concept for a long time, but it is only now that almost anyone can connect a sensor or device to the cloud, and this IoT revolution was hugely enabled by the advent of portable computing, which was led by the Raspberry Pi.
A brief look at our application

Throughout this book, we are going to go through different components and aspects of web development and embedded systems. These are all held together by our central goal of making a web application capable of sensing and displaying the surrounding temperature and humidity. In order to make a properly functioning system, we first have to build out the individual parts; more difficult still is making sure all the parts work well together. Keeping this in mind, let's take a look at the different components of our technology stack and the problems each of them solves.

The sensor interface - Perception

The sensor is what connects our otherwise isolated application to the outside world. The sensor will be connected to the GPIO pins of the Raspberry Pi, and we can interface with it through various native libraries. This is the starting point of our data: it is where all the data used by our application is created. If you think about it, every other component of our technology stack exists only to manage, manipulate, and display the data collected from the sensor.

The database - Persistence

"Data" is the term we give to raw information, that is, information that we cannot easily aggregate or understand. Without a way to store, meaningfully process, and retrieve this data, it will always remain "data" and never "information", which is what we actually want. If we just hook up a sensor and display whatever data it reads, we are missing out on a lot of additional information. Take the example of temperature: what if we wanted to find out how the temperature changes over time? What if we wanted the maximum and minimum temperatures for a particular day, a particular week, or a custom duration of time? What if we wanted to see temperature variation across locations? There is no way we could do any of this with only the sensor. We also need some sort of persistence and structure for our data, and this is exactly what the database provides. If we structure our data correctly, getting the answers to the above questions is just a matter of a simple database query.

The user interface - Presentation

The user interface is the layer that connects our application to the end user. One of the most challenging aspects of software development is making information meaningful and understandable to regular users of our application. The UI layer serves exactly this purpose: it takes relevant information and shows it in such a way that it is easily understandable to humans. How do we achieve this level of understandability with such a large amount of data? We use visual aids: colors, charts, and diagrams (just as the diagrams in this book make its information easier to understand). An important thing for any developer to understand is that the end user doesn't actually care about any of the back-end stuff. The only thing that matters to them is a good experience. Of course, all the other components serve to make the user's experience better, but it's really the user-facing interface that leaves the first impression, and that's why it's so important to do it well.

The application server - Middleware

This layer consists of the actual server-side code we are going to write to get the application running. It is also called "middleware". In addition to sitting at the exact center of the architecture diagram, this layer acts as the controller and middle-man for the other layers. The HTML pages that form the UI are served through this layer. All the database queries we were talking about earlier are made here. The code that runs in this layer is responsible for retrieving the sensor readings from our external pins and storing the data in our database.
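To make this layer concrete, here is a minimal sketch of what such middleware might look like in Node.js. This is not the book's actual code: the Express framework, the readTemperature() helper, the route name, and the port are all assumptions chosen for illustration.

    // app.js - a rough middleware sketch (assumes Express is installed: npm install express)
    var express = require('express');
    var app = express();

    // Hypothetical sensor read; a real implementation would use a GPIO/sensor library.
    function readTemperature() {
      return 22.5; // placeholder value, in degrees Celsius
    }

    // Serve the latest sensor reading as JSON for the UI layer to display.
    app.get('/api/temperature', function (req, res) {
      res.json({ temperature: readTemperature(), timestamp: Date.now() });
    });

    app.listen(3000, function () {
      console.log('Middleware listening on port 3000');
    });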
Summary

We are just warming up! In this article we got a brief introduction to the concept of the Internet of Things. We then looked at an overview of what we are going to build throughout the rest of this book, and saw how the Raspberry Pi can help us get there.

Five common questions for .NET/Java developers learning JavaScript and Node.js

Packt
20 Jun 2016
19 min read
This article is by Harry Cummings, author of the book Learning Node.js for .NET Developers. For those with web development experience in .NET or Java, perhaps having written some browser-based JavaScript in the past, it might not be obvious why anyone would want to take JavaScript beyond the browser and treat it as a general-purpose programming language. However, this is exactly what Node.js does. What's more, Node.js has been around long enough to have matured as a platform, and has sustained its impressive growth in popularity well beyond any period that could be attributed to initial hype over a new technology. In this introductory article, we'll look at why Node.js is a compelling technology worth learning more about, and address some of the common barriers and sources of confusion that developers encounter when learning Node.js and JavaScript.

Why use Node.js?

The execution model of Node.js follows that of JavaScript in the browser. This might not seem an obvious choice for server-side development, but these two use cases do have something important in common: user interface code is naturally event-driven (for example, binding event handlers to button clicks), and Node.js makes this a virtue by applying an event-driven approach to server-side programming. Stated formally, Node.js has a single-threaded, non-blocking, event-driven execution model. We'll define each of these terms.

Non-blocking

Put simply, Node.js recognizes that many programs spend most of their time waiting for other things to happen, for example slow I/O operations such as disk access and network requests. Node.js addresses this by making these operations non-blocking: program execution can continue while they happen. This non-blocking approach is also called asynchronous programming. Of course, other platforms support this too (for example, C#'s async/await keywords and the Task Parallel Library), but it is baked into Node.js in a way that makes it simple and natural to use. Asynchronous API methods are all called in the same way: they take a callback function to be invoked ("called back") when execution completes. This function is invoked with an optional error parameter and the result of the operation.

The consistency of calling non-blocking (asynchronous) API methods in Node.js carries through to its third-party libraries. This consistency makes it easy to build applications that are asynchronous throughout. Other JavaScript libraries, such as bluebird (http://bluebirdjs.com/docs/getting-started.html), allow callback-based APIs to be adapted to other asynchronous patterns. As an alternative to callbacks, you may choose to use Promises (similar to Tasks in .NET or Futures in Java) or coroutines (similar to async methods in C#) within your own codebase. This allows you to streamline your code while retaining the benefits of consistent asynchronous APIs in Node.js and its third-party libraries.
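As a minimal illustration of this error-first callback convention (the example is mine, not from the book, and assumes a local file named example.txt), here is a read using the built-in fs module:

    // read-file.js - the error-first callback convention used throughout Node.js
    var fs = require('fs');

    fs.readFile('example.txt', 'utf8', function (err, data) {
      if (err) {
        // The first callback parameter carries the error, if one occurred.
        console.error('Failed to read file:', err.message);
        return;
      }
      // Subsequent parameters carry the result of the operation.
      console.log('File contents:', data);
    });

    console.log('This line runs before the file has been read.');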
Event-driven

The event-driven nature of Node.js describes how operations are scheduled. In typical procedural programming environments, a program has some entry point that executes a set of commands until completion, or enters a loop and performs some work on each iteration. Node.js instead has a built-in event loop, which isn't directly exposed to the developer. It is the job of the event loop to decide which piece of code to execute next. Typically, this will be some callback function that is ready to run in response to some other event: for example, a filesystem operation may have completed, a timeout may have expired, or a new network request may have arrived. This built-in event loop simplifies asynchronous programming by providing a consistent approach and avoiding the need for applications to manage their own scheduling.

Single-threaded

The single-threaded nature of Node.js simply means that there is only one thread of execution in each process. Also, each piece of code is guaranteed to run to completion without being interrupted by other operations. This greatly simplifies development and makes programs easier to reason about. It removes the possibility of a whole range of concurrency issues: it is not necessary to synchronize or lock access to shared in-process state as it is in Java or .NET, and a process can't deadlock itself or create race conditions within its own code. Single-threaded programming is only feasible if the thread never gets blocked waiting for long-running work to complete; thus, this simplified programming model is made possible by the non-blocking nature of Node.js.

Writing web applications

The flagship use case for Node.js is building websites and web APIs. These are inherently event-driven, as most or all processing takes place in response to HTTP requests. Also, many websites do little computational heavy lifting of their own. They tend to perform a lot of I/O operations, for example:

- Streaming requests from the client
- Talking to a database, locally or over the network
- Pulling in data from remote APIs over the network
- Reading files from disk to send back to the client

These factors make I/O operations a likely bottleneck for web applications. The non-blocking programming model of Node.js allows web applications to make the most of a single thread. As soon as any of these I/O operations starts, the thread is immediately free to pick up and start processing another request. Processing of each request continues via asynchronous callbacks when I/O operations complete. The processing thread only kicks off and links together these operations; it never waits for them to complete. This allows Node.js to handle a much higher rate of requests per thread than other runtime environments.

How does Node.js scale?

So, Node.js can handle many requests per thread, but what happens when we reach the limit of what one thread can handle? The answer is, of course, to use more threads! You can achieve this by starting multiple Node.js processes, typically one for each CPU core on the web server. Note that this is still quite different from most Java or .NET web applications, which typically use a pool of threads much larger than the number of cores, because threads are expected to spend much of their time being blocked. The built-in Node.js cluster module makes it straightforward to spawn multiple Node.js processes (a sketch of this follows the list below); tools such as PM2 (http://pm2.keymetrics.io/) and libraries such as throng (https://github.com/hunterloftis/throng) make it even easier to do so. This approach gives us the best of all worlds:

- Using multiple threads makes the most of our available CPU power
- By having a single thread per core, we also save the overhead of the operating system context-switching between threads
- Since the processes are independent and don't share state directly, we retain the benefits of the single-threaded programming model discussed above
- By using long-running application processes (as with .NET or Java), we avoid the overhead of a process per request (as in PHP)
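Here is a minimal sketch of the cluster module's basic pattern (the port and response text are illustrative, not from the book):

    // cluster-example.js - one worker process per CPU core
    var cluster = require('cluster');
    var http = require('http');
    var numCPUs = require('os').cpus().length;

    if (cluster.isMaster) {
      // The master process only spawns workers; it serves no requests itself.
      for (var i = 0; i < numCPUs; i++) {
        cluster.fork();
      }
    } else {
      // Each worker runs its own single-threaded event loop.
      http.createServer(function (req, res) {
        res.end('Handled by worker ' + process.pid + '\n');
      }).listen(8000);
    }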
Do I really have to use JavaScript?

A lot of web developers new to Node.js will already have some experience of client-side JavaScript. This experience may not have been positive and might put you off using JavaScript elsewhere. You do not have to use JavaScript to work with Node.js: TypeScript (http://www.typescriptlang.org/) and other compile-to-JavaScript languages exist as alternatives. However, I do recommend learning Node.js with JavaScript first. It will give you a clearer understanding of Node.js and simplify your tool chain. Once you have a project or two under your belt, you'll be better placed to understand the pros and cons of other languages. In the meantime, you might be pleasantly surprised by the JavaScript development experience in Node.js.

There are three broad categories of prior JavaScript development experience that can lead to people having a negative impression of it:

- Experience from the late 90s and early 00s, prior to MV* frameworks like Angular/Knockout/Backbone/Ember, maybe even prior to jQuery. This was the pioneer phase of client-side web development.
- More recent experience within the much more mature JavaScript ecosystem, perhaps as a full-stack developer writing server-side and client-side code. The complexity of some frameworks (such as the MV* frameworks listed earlier), or the sheer amount of choice in general, can be overwhelming.
- Limited experience with JavaScript itself, but exposure to some of its most unusual characteristics. This may lead to a jarring sensation as a result of encountering the language in surprising or unintuitive ways.

We'll address the groups of people affected by each type of experience in turn. Note that individuals might identify with more than one of these groups; I'm happy to admit that I've been a member of all three in the past.

The web pioneers

These developers were burned by working with client-side JavaScript in the past. The browser is sometimes described as a hostile environment for code to execute in. A single execution context shared by all code allows for some particularly nasty gotchas; for example, third-party code on the same page can create and modify global objects. Node.js solves some of these issues on a fundamental level, and mitigates others where this isn't possible. It's JavaScript, so it's still the case that everything is mutable, but the Node.js module system narrows the global scope, so libraries are less likely to step on each other's toes. The conventions that Node.js establishes also make third-party libraries much more consistent. This makes the environment less hostile and more predictable.

The web pioneers also had to cope with the APIs available to JavaScript in the browser. Although these have improved over time as browsers and standards have matured, the earlier days of web development were more like the Wild West. Quirks and inconsistencies in fundamental APIs caused a lot of hard work and frustration. The rise of jQuery is a testament to the difficulty of working with the Document Object Model of old, and its continued popularity indicates that people still prefer to avoid working with these APIs directly. Node.js addresses these issues quite thoroughly. First of all, by taking JavaScript out of the browser, the DOM and other browser APIs simply go away, as they are no longer relevant. The new APIs that Node.js introduces are small, focused, and consistent. You no longer need to contend with inconsistencies between browsers: everything you write will execute in the same JavaScript engine (V8).
The overwhelmed full-stack developers

Many of the frontend JavaScript frameworks provide a lot of power, but come with a great deal of complexity. For example, AngularJS has a steep learning curve, is quite opinionated about application structure, and has quite a few gotchas or things you just need to know. JavaScript itself is actually a language with a very small surface area. This provides a blank canvas on which Node.js provides a small number of consistent APIs (as described in the previous section). Although there's still plenty to learn in total, you can focus on just the things you need without getting tripped up by areas you're not yet familiar with.

It's still true that there's a lot of choice and that this can be bewildering. For example, there are many competing test frameworks for JavaScript. The trend towards smaller, more composable packages in the Node.js ecosystem, while generally a good thing, can mean more research, more decisions, and fewer batteries-included frameworks that do everything out of the box. On balance though, this makes it easier to move at your own pace and understand everything that you're pulling into your application.

The JavaScript dabblers

It's easy to have a poor impression of JavaScript if you've only worked with it occasionally and never as the primary (or even secondary) language on a project. JavaScript doesn't do itself any favors here, with a few glaring gotchas that most people will encounter, for example the fundamentally broken == equality operator and other symptoms of type coercion (a few examples appear at the end of this section). Although these make a poor first impression, they aren't really indicative of the experience of working with JavaScript more regularly. As mentioned in the previous section, JavaScript itself is actually a very small language. Its simplicity limits the number of gotchas there can be. While there are a few things you "just need to know", it's a short list. This compares well against languages that offer a constant stream of nasty surprises (for example, PHP's notoriously inconsistent built-in functions). What's more, successive ECMAScript standards have done a lot to clean up the JavaScript language. With Node.js, you get to take advantage of this, as all your code will run on the V8 engine, which implements the latest ES2015 standard.

The other big reason JavaScript can be jarring is more a matter of context than an inherent flaw. It looks superficially similar to other languages with a C-like syntax, like Java and C#. The similarity to Java was intentional when JavaScript was created, but it's unfortunate: JavaScript's programming model is quite different from that of other object-oriented languages like Java or C#. This can be confusing or frustrating when its syntax suggests that it may work in roughly the same way. This is especially true of object-oriented programming in JavaScript, as we'll discuss shortly. Once you've understood the fundamentals of JavaScript, though, it's very easy to work productively with it.
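For illustration (these examples are mine, not from the book), here are a few of the coercion gotchas in question, contrasting loose and strict equality:

    // Loose equality (==) applies implicit type coercion:
    console.log(0 == '');           // true - '' is coerced to 0
    console.log(0 == '0');          // true - '0' is coerced to 0
    console.log('' == '0');         // false - both are strings, compared directly
    console.log(null == undefined); // true - a special case in the spec

    // Strict equality (===) compares without coercion, so it behaves predictably:
    console.log(0 === '');           // false
    console.log(null === undefined); // false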
Working with JavaScript

I'm not going to argue that JavaScript is the perfect language, but I do think many of the factors that lead to people having a bad impression of it are not down to the language itself. Importantly, many factors simply don't apply when you take JavaScript out of the browser environment. What's more, JavaScript has some really great extrinsic properties. These are things that aren't visible in the code, but have an effect on what it's like to work with the language. For example, JavaScript's interpreted nature makes it easy to set up automated tests that run continuously and provide near-instant feedback on your code changes.

How does inheritance work in JavaScript?

When introducing object-oriented programming, we usually talk about classes and inheritance. Java, C#, and numerous other languages take a very similar approach to these concepts. JavaScript is quite unusual in that it supports object-oriented programming without classes. It does this by applying the concept of inheritance directly to objects. Anything that is not one of JavaScript's built-in primitives (strings, numbers, null, and so on) is an object. Functions are just a special type of object that can be invoked with arguments. Arrays are a special type of object with list-like behavior. All objects (including these two special types) can have properties, which are just names with a value. You can think of a JavaScript object as a dictionary with string keys and object values.

Programming without classes

Let's say you have a chart with a very large number of data points. These points may be represented by objects that have some common behavior. In C# or Java, you might create a Point class. In JavaScript, you could implement points like this:

    function createPoint(x, y) {
        return {
            x: x,
            y: y,
            isAboveDiagonal: function() {
                return this.y > this.x;
            }
        };
    }

    var myPoint = createPoint(1, 2);
    console.log(myPoint.isAboveDiagonal()); // Prints "true"

The createPoint function returns a new object each time it is called (the object is defined using JavaScript's object-literal notation, which is the basis for JSON). One problem with this approach is that the function assigned to the isAboveDiagonal property is redefined for each point on the graph, thus taking up more space in memory. You can address this using prototypal inheritance. Although JavaScript doesn't have classes, objects can inherit from other objects. Each object has a prototype. If the interpreter attempts to access a property on an object and that property doesn't exist, it will look for a property with the same name on the object's prototype instead. If the property doesn't exist there, it will check the prototype's prototype, and so on. The prototype chain ends with the built-in Object.prototype. You can implement point objects using a prototype as follows:

    var pointPrototype = {
        isAboveDiagonal: function() {
            return this.y > this.x;
        }
    };

    function createPoint(x, y) {
        var newPoint = Object.create(pointPrototype);
        newPoint.x = x;
        newPoint.y = y;
        return newPoint;
    }

    var myPoint = createPoint(1, 2);
    console.log(myPoint.isAboveDiagonal()); // Prints "true"

The Object.create method creates a new object with a specified prototype. The isAboveDiagonal method now exists only once in memory, on the pointPrototype object. When the code tries to call isAboveDiagonal on an individual point object, it is not present there, but it is found on the prototype instead. Note that the preceding example tells us something important about the behavior of the this keyword in JavaScript: it refers to the object that the current function was called on, rather than the object it was defined on.
Creating objects with the 'new' keyword

You can rewrite the previous code example in a more compact form using the new operator:

    function Point(x, y) {
        this.x = x;
        this.y = y;
    }

    Point.prototype.isAboveDiagonal = function() {
        return this.y > this.x;
    };

    var myPoint = new Point(1, 2);

By convention, functions have a property named prototype, which defaults to an empty object. Using the new operator with the Point function creates an object that inherits from Point.prototype and applies the Point function to the newly created object.

Programming with classes

Although JavaScript doesn't fundamentally have classes, ES2015 introduces a new class keyword. This makes it possible to implement shared behavior and inheritance in a way that may be more familiar from other object-oriented languages. The equivalent of the previous code example looks like the following:

    class Point {
        constructor(x, y) {
            this.x = x;
            this.y = y;
        }

        isAboveDiagonal() {
            return this.y > this.x;
        }
    }

    var myPoint = new Point(1, 2);

Note that this really is equivalent to the previous example: the class keyword is just syntactic sugar for setting up the prototype-based inheritance already discussed. Once you know how to define objects and classes, you can start to structure the rest of your application.

How do I structure Node.js applications?

In C# and Java, the static structure of an application is defined by namespaces or packages (respectively) and static types. An application's run-time structure (that is, the set of objects created in memory) is typically bootstrapped using a dependency injection (DI) container. Examples of DI containers include NInject, Autofac, and Unity in .NET, or Spring, Guice, and Dagger in Java. These frameworks provide features like declarative configuration and autowiring of dependencies. Since JavaScript is a dynamic, interpreted language, it has no inherent static application structure; indeed, in the browser, all the scripts loaded into a page run one after the other in the same global context. The Node.js module system allows you to structure your application into files and directories, and provides a mechanism for importing functionality from one file into another.

There are DI containers available for JavaScript, but they are less commonly used. It is more common to pass dependencies around explicitly. The Node.js module system and JavaScript's dynamic typing make this approach more natural: you don't need to add a lot of fields and constructors/properties to set up dependencies. You can just wrap modules in an initialization function that takes dependencies as parameters. The following very simple example illustrates the Node.js module system, and shows how to inject dependencies via a factory function.

We add the following code under /src/greeter.js:

    module.exports = function(writer) {
        return {
            greet: function() { writer.write('Hello World!'); }
        };
    };

We add the following code under /src/main.js:

    var consoleWriter = {
        write: function(string) { console.log(string); }
    };
    var greeter = require('./greeter.js')(consoleWriter);
    greeter.greet();

In the Node.js module system, each file establishes a new module with its own scope. Within this scope, Node.js provides the module object for the current module to export its functionality, and the require function for importing other modules.
If you run the previous example (using node main.js), the Node.js runtime will load the greeter module as a result of the main module's call to the require function. The greeter module assigns a value to the exports property of the module object. This becomes the return value of the require call back in the main module. In this case, the greeter module exports a single object: a factory function that takes a dependency.

Summary

In this article, we have:

- Understood the Node.js programming model and its use in web applications
- Described how Node.js web applications can scale
- Discussed the suitability of JavaScript as a programming language
- Illustrated how object-oriented programming works in JavaScript
- Seen how dependency injection works with the Node.js module system

Hopefully this article has given you some insight into why Node.js is a compelling technology, and made you better prepared to learn more about writing server-side applications with JavaScript and Node.js.

FreeSWITCH: Utilizing the Built-in IVR Engine

Packt
05 Aug 2010
10 min read
IVR engine overview

Unlike many applications within FreeSWITCH, which are built as modules, IVR is considered core functionality of FreeSWITCH. It is used any time a prompt is played and digits are collected. Even if you are not using the ivr application itself from your Dialplan, you will see IVR-related functions being utilized by various other applications. As an example, the voicemail application makes heavy use of IVR functionality when playing messages while awaiting digits to control deleting, saving, and otherwise managing voicemails. In this section, we will only review the IVR functionality that is exposed from within the ivr Dialplan application. This functionality is typically used to build an auto-attendant menu, although other uses are possible as well.

IVR XML configuration file

FreeSWITCH ships with a sample IVR menu, which is typically invoked by dialing 5000 from the sample Dialplan. When you dial 5000, you will hear a greeting welcoming you to FreeSWITCH and presenting your menu options. The menu options consist of calling the FreeSWITCH conference, calling the echo extension, hearing music on hold, going to a submenu, or listening to screaming monkeys. We will start off by reviewing the XML that powers this example. Open conf/autoload_configs/ivr.xml, which contains the following XML:

    <configuration name="ivr.conf" description="IVR menus">
      <menus>
        <!-- demo IVR, Main Menu -->
        <menu name="demo_ivr"
              greet-long="phrase:demo_ivr_main_menu"
              greet-short="phrase:demo_ivr_main_menu_short"
              invalid-sound="ivr/ivr-that_was_an_invalid_entry.wav"
              exit-sound="voicemail/vm-goodbye.wav"
              timeout="10000"
              inter-digit-timeout="2000"
              max-failures="3"
              max-timeouts="3"
              digit-len="4">
          <entry action="menu-exec-app" digits="1" param="bridge sofia/$${domain}/888@conference.freeswitch.org"/>
          <entry action="menu-exec-app" digits="2" param="transfer 9196 XML default"/>
          <entry action="menu-exec-app" digits="3" param="transfer 9664 XML default"/>
          <entry action="menu-exec-app" digits="4" param="transfer 9191 XML default"/>
          <entry action="menu-exec-app" digits="5" param="transfer 1234*256 enum"/>
          <entry action="menu-exec-app" digits="/^(10[01][0-9])$/" param="transfer $1 XML features"/>
          <entry action="menu-sub" digits="6" param="demo_ivr_submenu"/>
          <entry action="menu-top" digits="9"/>
        </menu>

        <!-- Demo IVR, Sub Menu -->
        <menu name="demo_ivr_submenu"
              greet-long="phrase:demo_ivr_sub_menu"
              greet-short="phrase:demo_ivr_sub_menu_short"
              invalid-sound="ivr/ivr-that_was_an_invalid_entry.wav"
              exit-sound="voicemail/vm-goodbye.wav"
              timeout="15000"
              max-failures="3"
              max-timeouts="3">
          <entry action="menu-top" digits="*"/>
        </menu>
      </menus>
    </configuration>

In the preceding example, two IVR menus are defined. Let's break apart the first one and examine it, starting with the IVR menu definition itself.

IVR menu definitions

The following XML defines an IVR menu named "demo_ivr":

    <menu name="demo_ivr"
          greet-long="phrase:demo_ivr_main_menu"
          greet-short="phrase:demo_ivr_main_menu_short"
          invalid-sound="ivr/ivr-that_was_an_invalid_entry.wav"
          exit-sound="voicemail/vm-goodbye.wav"
          timeout="10000"
          inter-digit-timeout="2000"
          max-failures="3"
          max-timeouts="3"
          digit-len="4">

We'll use this menu's name later when we route calls to the IVR from the Dialplan. Following the name, various XML attributes specify how the IVR will behave. The following options are available when defining an IVR:

greet-long

The greet-long attribute specifies the initial greeting that is played when a caller reaches the IVR.
This differs from the greet-short sound file in that it allows for an introduction to be played, such as "Thank you for calling XYZ Company". In the sample IVR, the greet-long attribute is a Phrase Macro that plays an introductory message to the caller ("Welcome to FreeSWITCH...") followed by the menu options the caller may choose from.

Argument syntax: sound file name (or path + name), TTS, or Phrase Macro

Examples:

    greet-long="my_greeting"
    greet-long="phrase:my_greeting_phrase"
    greet-long="say:Welcome to our company. Press 1 for sales, 2 for support."

greet-short

The greet-short attribute specifies the greeting that is replayed if the caller enters invalid information, or no information at all. This is typically the same sound file as greet-long, without the introduction. In the sample IVR, the greet-short attribute is a Phrase Macro that simply plays the menu options to the caller, without the lengthy introduction found in greet-long.

Argument syntax: sound file name (or path + name), TTS, or Phrase Macro

Examples:

    greet-short="my_greeting_retry"
    greet-short="phrase:my_greeting_retry_phrase"
    greet-short="say:Press 1 for sales, 2 for support."

invalid-sound

The invalid-sound attribute specifies the sound that is played when a caller makes an invalid entry.

Argument syntax: sound file name (or path + name), TTS, or Phrase Macro

Examples:

    invalid-sound="invalid_entry.wav"
    invalid-sound="phrase:my_invalid_entry_phrase"
    invalid-sound="say:That was not a valid entry"

exit-sound

The exit-sound attribute specifies the sound that is played when a caller makes too many invalid entries or too many timeouts occur. This file is played before disconnecting the caller.

Argument syntax: sound file name (or path + name), TTS, or Phrase Macro

Examples:

    exit-sound="too_many_bad_entries.wav"
    exit-sound="phrase:my_too_many_bad_entries_phrase"
    exit-sound="say:Hasta la vista, baby."

timeout

The timeout attribute specifies the maximum amount of time to wait for the user to begin entering digits after the greeting has played. If this time limit is exceeded, the menu is repeated until the value in the max-timeouts attribute has been reached.

Argument syntax: any number, in milliseconds

Examples:

    timeout="10000"
    timeout="20000"

inter-digit-timeout

The inter-digit-timeout attribute specifies the maximum amount of time to wait between each digit the caller presses. This is different from the overall timeout. It is useful for allowing enough time to enter as many digits as necessary, without frustrating the caller by pausing too long after they are done making their entry. For example, if both 1000 and 1 are valid IVR entries, the system will continue waiting for the inter-digit-timeout length of time after 1 is entered, before determining that it is the final entry.

Argument syntax: any number, in milliseconds

Examples:

    inter-digit-timeout="2000"

max-failures

The max-failures attribute specifies how many failures, due to invalid entries, to tolerate before disconnecting.

Argument syntax: any number

Examples:

    max-failures="3"

max-timeouts

The max-timeouts attribute specifies how many timeouts to tolerate before disconnecting.

Argument syntax: any number

Examples:

    max-timeouts="3"

digit-len

The digit-len attribute specifies the maximum number of digits that the user can enter before the entry is considered complete.

Argument syntax: any number greater than 1

Examples:

    digit-len="4"

tts-voice

The tts-voice attribute specifies the text-to-speech voice that should be used.
Argument syntax: any valid text-to-speech voice

Examples:

    tts-voice="Mary"

tts-engine

The tts-engine attribute specifies the text-to-speech engine that should be used.

Argument syntax: any valid text-to-speech engine

Examples:

    tts-engine="flite"

confirm-key

The confirm-key attribute specifies the key that the user can press to signify that they are done entering information.

Argument syntax: any valid DTMF digit

Examples:

    confirm-key="#"

These attributes dictate the general behavior of the IVR.

IVR menu destinations

After defining the global attributes of the IVR, you need to specify the destinations (or options) that are available for the caller to press. You do this with <entry> XML elements. Let's review the first six entries used by this IVR:

    <entry action="menu-exec-app" digits="1" param="bridge sofia/$${domain}/888@conference.freeswitch.org"/>
    <entry action="menu-exec-app" digits="2" param="transfer 9196 XML default"/>
    <entry action="menu-exec-app" digits="3" param="transfer 9664 XML default"/>
    <entry action="menu-exec-app" digits="4" param="transfer 9191 XML default"/>
    <entry action="menu-exec-app" digits="5" param="transfer 1234*256 enum"/>
    <entry action="menu-exec-app" digits="/^(10[01][0-9])$/" param="transfer $1 XML features"/>

Each entry defines three parameters: an action to be taken, the digits the caller must press to activate that action, and the parameters that are passed to the action. In most cases you will probably use the menu-exec-app action, which simply allows you to specify an application and parameters to call, just as you would from the regular Dialplan (bridge, transfer, hangup, and so on). The first five options are all pretty simple: each defines a single digit which, when pressed, either bridges a call or transfers the call to an extension. One entry is a bit different from the rest, the final one, and it deserves a closer look:

    <entry action="menu-exec-app" digits="/^(10[01][0-9])$/" param="transfer $1 XML features"/>

This entry specifies a regular expression in the digits field. The regular expression field is identical to the expressions you would use in the Dialplan. In this example, the IVR is looking for any four-digit extension number from 1000 through 1019 (the default extension number range for the predefined users in the directory). As the regular expression is wrapped in parentheses, the result of the entry will be passed to the transfer application as the $1 channel variable. This effectively allows the IVR to accept 1000-1019 as entries, and to transfer the caller directly to those extensions when they are entered into the IVR.

The remaining IVR entry actions are a bit different. They introduce menu-sub as an action, which transfers the caller to an IVR submenu, and menu-top, which restarts the current IVR and replays the menu:

    <entry action="menu-sub" digits="6" param="demo_ivr_submenu"/>
    <entry action="menu-top" digits="9"/>

Several other actions exist that can be used within an IVR. The complete list of actions you can use from within the IVR includes the following:

menu-exec-app

The menu-exec-app action, combined with a param field, executes the specified application and passes the listed parameters to it. This is equivalent to using <action application="app" data="data"> in your Dialplan. The most common use of menu-exec-app is to transfer a caller to another extension in the Dialplan.
Argument syntax: application <params>

Examples:

    <entry digits="1" action="menu-exec-app" param="application param1 param2 param3 ..."/>
    <entry digits="2" action="menu-exec-app" param="transfer 9664 XML default"/>

menu-exec-api

The menu-exec-api action, combined with a param field, executes the specified API command and passes the listed parameters to it. This is equivalent to entering API commands at the CLI or from the event socket.

Argument syntax: api_command <params>

Examples:

    <entry digits="1" action="menu-exec-api" param="eval Caller Pressed 1!"/>

menu-play-sound

The menu-play-sound action, combined with a param field, plays a specified sound file.

Argument syntax: valid sound file

Examples:

    <entry digits="1" action="menu-play-sound" param="screaming_monkeys.wav"/>

menu-back

The menu-back action returns to the previous IVR menu, if any.

Argument syntax: none

Examples:

    <entry digits="1" action="menu-back"/>

menu-top

The menu-top action restarts this IVR's menu.

Argument syntax: none

Examples:

    <entry digits="1" action="menu-top"/>

Take a look at the XML for the sample submenu IVR and see if you can figure out what it does. Also note how it is called from the main menu above, when the caller presses 6:

    <menu name="demo_ivr_submenu"
          greet-long="phrase:demo_ivr_sub_menu"
          greet-short="phrase:demo_ivr_sub_menu_short"
          invalid-sound="ivr/ivr-that_was_an_invalid_entry.wav"
          exit-sound="voicemail/vm-goodbye.wav"
          timeout="15000"
          max-failures="3"
          max-timeouts="3">
      <entry action="menu-top" digits="*"/>
    </menu>

Groups and Cohorts in Moodle

Packt
06 Jul 2015
20 min read
In this article by William Rice, author of the book Moodle E-Learning Course Development - Third Edition, you will learn how to use groups to separate the students in a course into teams, and how to use cohorts to mass-enroll students into courses.

Groups versus cohorts

Groups and cohorts are both collections of students, but there are several differences between them. We can sum up these differences in one sentence: cohorts enable administrators to enroll and unenroll students en masse, whereas groups enable teachers to manage students during a class. Think of a cohort as a group of students working together through the same academic curriculum, for example, a group of students all enrolled in the same course. Think of a group as a subset of students enrolled in a course; groups are used to manage various activities within a course. A cohort is a system-wide or course-category-wide set of students. There is a small amount of overlap between what you can do with a cohort and a group, but the differences are large enough that you would not want to substitute one for the other.

Cohorts

In this article, we'll look at how to create and use cohorts. You can perform many operations on cohorts in bulk, affecting many students at once.

Creating a cohort

To create a cohort, perform the following steps:

1. From the main menu, select Site administration | Users | Accounts | Cohorts.
2. On the Cohorts page, click on the Add button. The Add New Cohort page is displayed.
3. Enter a Name for the cohort. This is the name that you will see when you work with the cohort.
4. Enter a Cohort ID for the cohort. If you upload students in bulk to this cohort, you will specify the cohort using this identifier. You can use any characters you want in the Cohort ID; however, keep in mind that the file you upload to the cohort can come from a different computer system. To be safe, consider using only ASCII characters (letters, numbers, and some special characters) and no spaces in the Cohort ID, for example, Spring_2012_Freshmen.
5. Enter a Description that will help you and other administrators remember the purpose of the cohort.
6. Click on Save changes.

Now that the cohort is created, you can begin adding users to it.

Adding students to a cohort

Students can be added to a cohort manually by searching for and selecting them. They can also be added in bulk by uploading a file to Moodle.

Manually adding and removing students to a cohort

If you add a student to a cohort, that student is enrolled in all the courses to which the cohort is synchronized. If you remove a student from a cohort, that student is unenrolled from all the courses to which the cohort is synchronized. We will look at how to synchronize cohorts and course enrollments later. For now, here is how to manually add and remove students from a cohort:

1. From the main menu, select Site administration | Users | Accounts | Cohorts.
2. On the Cohorts page, click on the people icon for the cohort to which you want to add students. The Cohort Assign page is displayed. The left-hand panel displays users that are already in the cohort, if any. The right-hand panel displays users that can be added to the cohort.
3. Use the Search field to search for users in each panel. You can search for text that is in the user name and e-mail address fields.
4. Use the Add and Remove buttons to move users from one panel to the other.

Adding students to a cohort in bulk - upload

When you upload students to Moodle, you can add them to a cohort.
After you have all the students in a cohort, you can quickly enroll and unenroll them in courses just by synchronizing the cohort to the course. If you are going to upload students in bulk, consider putting them in a cohort; this makes it easier to manipulate them later. As an example, consider a cohort with 1,204 students enrolled, uploaded under Administration | Site Administration | Users | Upload users. The file that was uploaded contained information about each student in the cohort. In a spreadsheet, this is how the file looks:

    username,email,firstname,lastname,cohort1
    moodler_1,bill@williamrice.net,Bill,Binky,open-enrollmentmoodlers
    moodler_2,rose@williamrice.net,Rose,Krial,open-enrollmentmoodlers
    moodler_3,jeff@williamrice.net,Jeff,Marco,open-enrollmentmoodlers
    moodler_4,dave@williamrice.net,Dave,Gallo,open-enrollmentmoodlers

In this example, we have the minimum information required to create new students:

- The username
- The e-mail address
- The first name
- The last name

We also have the cohort ID (the short name of the cohort) in which we want to place each student. During the upload process, you can see a preview of the file that you will upload. Further down on the Upload users preview page, you can choose the Settings that control how the upload is handled.

Usually, when we upload users to Moodle, we create new users. However, we can also use the upload option to quickly enroll existing users in the cohort. You saw previously (in Manually adding and removing students to a cohort) how to search for and then enroll users in a cohort. However, when you want to enroll hundreds of users in a cohort, it's often faster to create a text file and upload it than to search your existing users. This is because when you create a text file, you can use powerful tools, such as spreadsheets and databases, to quickly create the file. To do this, you will find options to Update existing users under the Upload type field.

In most Moodle systems, a user's profile must include a city and country. When you upload users to a system, you can specify the city and country in the upload file, or omit them from the upload file and assign them while the file is uploaded. This is done under Default values on the Upload users page. Now that we have examined some of the capabilities and limitations of this process, let's list the steps to upload a cohort to Moodle:

1. Prepare a plain text file that has, at minimum, the username, email, firstname, lastname, and cohort1 information for each student.
2. Under Administration | Site Administration | Users | Upload users, select the text file that you will upload.
3. On this page, choose the Settings that describe the text file, such as the delimiter (separator) and encoding.
4. Click on the Upload users button. You will see the first few rows of the text file displayed, and additional settings become available on this page.
5. In the Settings section, there are settings that affect what happens when you upload information about existing users. You can choose to have the system overwrite information for existing users, ignore information that conflicts with existing users, create passwords, and so on.
6. In the Default values section, you can enter values to be entered into the user profiles. For example, you can select a city, country, and department for all the users.
7. Click on the Upload users button to begin the upload.

Cohort sync

Using the cohort sync enrollment method, you can enroll and unenroll large collections of students at once. Using cohort sync involves several steps:

1. Creating a cohort.
2. Enrolling students in the cohort.
3. Enabling the cohort sync enrollment method.
4. Adding the cohort sync enrollment method to a course.

You saw the first two steps above: how to create a cohort and how to enroll students in it. We will now cover the last two steps: enabling the cohort sync method and adding cohort sync to a course.

Enabling the cohort sync enrollment method

To enable the cohort sync enrollment method, you will need to log in as an administrator; this cannot be done by someone who has only teacher rights:

1. Select Site administration | Plugins | Enrolments | Manage enrol plugins.
2. Click on the Enable icon located next to Cohort sync.
3. Click on the Settings button located next to Cohort sync.
4. On the Settings page, choose the default role for people when you enroll them in a course using cohort sync. You can change this setting for each course.
5. Choose the External unenrol action. This is what happens to a student when they are removed from the cohort. If you choose Unenrol user from course, the user and all of his/her grades are removed from the course; the user's grades are purged from Moodle. If you were to re-add this user to the cohort, all of the user's activity in this course would be blank, as if the user had never been in the course. If you choose Disable course enrolment and remove roles, the user and all of his/her grades are hidden. You will not see this user in the course's grade book. However, if you were to re-add this user to the cohort or to the course, the user's course records would be restored.

After enabling the cohort sync method, it's time to actually add this method to a course.

Adding the cohort sync enrollment method to a course

To do this, you will need to log in as an administrator or as a teacher in the course:

1. Log in and enter the course to which you want to add the enrollment method.
2. Select Course administration | Users | Enrolment methods.
3. From the Add method drop-down menu, select Cohort sync.
4. In Custom instance name, enter a name for this enrollment method. This will enable you to recognize this method in a list of cohort syncs.
5. For Active, select Yes. This will enroll the users.
6. Select the Cohort.
7. Select the role that the members of the cohort will be given.
8. Click on the Save changes button.

All the users in the cohort will be given the selected role in the course.

Unenrolling a cohort from a course

There are two ways to unenroll a cohort from a course. First, you can go to the course's Enrolment methods page and delete the enrollment method: just click on the X button located next to the cohort sync method that you added to the course. However, this will not just remove users from the course; it will also delete all of their course records. The second method preserves the student records. Once again, go to the course's Enrolment methods page and click on the Settings icon located next to the Cohort sync method that you added. On the Settings page, select No for Active. This will remove the role that the cohort was given. The members of the cohort will still be listed as course participants, but since they no longer have a role in the course, they can no longer access it. However, their grades and activity reports are preserved.
Differences between cohort sync and enrolling a cohort

Cohort sync and enrolling a cohort are two different methods. Each has advantages and limitations. If you follow the preceding instructions, you can synchronize a cohort's membership to a course's enrollment. As people are added to and removed from the cohort, they are enrolled and un-enrolled from the course. When working with a large group of users, this can be a great time saver. However, using cohort sync, you cannot un-enroll or change the role of just one person.

Consider a scenario where you have a large group of students who want to enroll in several courses, all at once. You put these students in a cohort, enable the cohort sync enrollment method, and add the cohort sync enrollment method to each of these courses. In a few minutes, you have accomplished your goal. Now, suppose you want to un-enroll some users from some courses, but not from all of them. If you remove them from the cohort, they are removed from all the courses. This is how cohort sync works.

Cohort sync is everyone or no one: when a person is added to or removed from the cohort, this person is added to or removed from all the courses to which the cohort is synced.

If that's what you want, great. If not, an alternative to cohort sync is to enroll a cohort. That is, you can select all the members of a cohort and enroll them in a course, all at once. However, this is a one-way journey. You cannot un-enroll them all at once; you will need to un-enroll them one at a time. If you enroll a cohort all at once, the users are independent entities after enrollment. You can un-enroll them and change their role (for example, from student to teacher) whenever you wish.

To enroll a cohort in a course, perform the following steps:

Enter the course as an administrator or teacher.
Select Administration | Course administration | Users | Enrolled users.
Click on the Enrol cohort button. A popup window appears. This window lists the cohorts on the site.
Click on Enrol users next to the cohort that you want to enroll. The system displays a confirmation message.
Now, click on the OK button. You will be taken back to the Enrolled users page.

Note that although you can enroll all users in a cohort (all at once), there is no button to un-enroll them all at once. You will need to remove them one at a time from your course.

Managing students with groups

A group is a collection of students in a course. Outside of a course, a group has no meaning. Groups are useful when you want to separate students studying the same course. For example, if your organization is using the same course for several different classes or groups, you can use the group feature to separate students so that each group can see only their peers in the course. For example, you can create a new group every month for employees hired that month. Then, you can monitor and mentor them together.

After you have run a group of people through a course, you may want to reuse this course for another group. You can use the group feature to separate groups so that the current group doesn't see the work done by the previous group. This will be like a new course for the current group.

You may also want an activity or resource to be open to just one group of people, without others in the class being able to use that activity or resource.

Course versus activity

You can apply the groups setting to an entire course. If you do this, every activity and resource in the course will be segregated into groups.
You can also apply the groups setting to an individual activity or resource. If you do this, it will override the groups setting for the course and segregate just this activity or resource between groups.

The three group modes

For a course or activity, there are several ways to apply groups. Here are the three group modes:

No groups: There are no groups for the course or activity. If students have been placed in groups, ignore this, and give everyone the same access to the course or activity.
Separate groups: If students have been placed in groups, allow them to see other students, and the work of other students, from their own group only. Students and work from other groups are invisible.
Visible groups: If students have been placed in groups, allow them to see other students and the work of other students from all groups. However, the work from other groups is read only.

You can use the No groups setting on an activity in your course when you want every student who ever took the course to be able to interact with each other. For example, you may use the No groups setting in the news forum so that all students who have ever taken the course can see the latest news.

You can use the Separate groups setting in a course where you run different groups at different times. For each group that runs through the course, it will be like a brand new course.

You can use the Visible groups setting in a course where students are part of a large, in-person class and you want them to collaborate in small groups online.

Also, be aware that some things will not be affected by the groups setting. For example, no matter what the group setting is, students will never see each other's assignment submissions.

Creating a group

There are three ways to create groups in a course. You can:

Manually create and populate each group
Automatically create and populate groups based on the characteristics of students
Import groups using a text file

We'll cover these methods in the following subsections.

Manually creating and populating a group

Don't be discouraged by the idea of manually populating a group with students. It takes only a few clicks to place a student in a group. To create and populate a group, perform the following steps:

Select Course administration | Users | Groups. This takes you to the Groups page.
Click on the Create group button. The Create group page is displayed.
You must enter a Name for the group. This will be the name that teachers and administrators see when they manage the group.
The Group ID number is used to match up this group with a group identifier in another system. If your organization uses a system outside Moodle to manage students and this system categorizes students in groups, you can enter the group ID from the other system in this field. It does not need to be a number. This field is optional.
The Group description field is optional. It's good practice to use this to explain the purpose and criteria for belonging to the group.
The Enrolment key is a code that you can give to students who self-enroll in a course. When the student enrolls, he/she is prompted to enter the enrollment key. On entering this key, the student is enrolled in the course and made a member of the group.
If you add a picture to this group, then when members are listed (as in a forum), the member will have the group picture shown next to them. Here is an example of a contributor to a forum on http://www.moodle.org with her group memberships:
Click on the Save changes button to save the group.
On the Groups page, the group appears in the left-hand side column. Select this group. In the right-hand side column, search for and select the students that you want to add to this group.

Note the Search fields. These enable you to search for students who meet specific criteria. You can search the first name, last name, and e-mail address. The other parts of the user's profile information are not available in this search box.

Automatically creating and populating a group

When you automatically create groups, Moodle creates the number of groups that you specify and then takes all the students enrolled in the course and allocates them to these groups. Moodle will put the currently enrolled students in these groups even if they already belong to another group in the course. To automatically create groups, use the following steps:

Click on the Auto-create groups button. The Auto-create groups page is displayed.
In the Naming scheme field, enter a name for all the groups that will be created. You can enter any characters. If you enter @, it will be converted to sequential letters. If you enter #, it will be converted to sequential numbers. For example, if you enter Group @, Moodle will create Group A, Group B, Group C, and so on.
In the Auto-create based on field, you tell the system to choose one of the following options: create a specific number of groups and then fill each group with as many students as needed (Number of groups), or create as many groups as needed so that each group has a specific number of students (Members per group).
In the Group/member count field, you tell the system either how many groups to create (if you chose the Number of groups option) or how many members to put in each group (if you chose the Members per group option).
Under Group members, select who will be put in these groups. You can select everyone with a specific role or everyone in a specific cohort.
The Prevent last small group setting is available if you choose Members per group. It prevents Moodle from creating a group with fewer than the number of students that you specify. For example, if your class has 12 students and you choose to create groups with five members per group, Moodle would normally create two groups of five and then another group for the last two members. With Prevent last small group selected, it will instead distribute the remaining two members between the first two groups.
Click on the Preview button to preview the results. The preview will not show you the names of the members in the groups, but it will show you how many groups there will be and how many members will be in each group.

Importing groups

The term importing groups may give you the impression that you will import students into a group. The Import groups button does not import students into groups; it imports a text file that you can use to create groups. So, if you need to create a lot of groups at once, you can use this feature. This needs to be done by a site administrator.

If you need to import students and put them into groups, use the upload students feature. However, instead of adding students to a cohort, you will add them to a course and group.
You do this by specifying the course and group fields in the upload file, as shown in the following code:

username,email,firstname,lastname,course1,group1,course2
moodler_1,bill@williamrice.net,Bill,Binky,history101,odds,science101
moodler_2,rose@williamrice.net,Rose,Krial,history101,even,science101
moodler_3,jeff@williamrice.net,Jeff,Marco,history101,odds,science101
moodler_4,dave@williamrice.net,Dave,Gallo,history101,even,science101

In this example, we have the minimum information needed to create new students:

The username
The e-mail address
The first name
The last name

We have also enrolled all the students in two courses: history101 and science101. In the history101 course, Bill Binky and Jeff Marco are placed in a group called odds, while Rose Krial and Dave Gallo are placed in a group called even. In the science101 course, the students are not placed in any group.

Remember that this student upload doesn't happen on the Groups page. It happens under Administration | Site Administration | Users | Upload users.

Summary

Cohorts and groups give you powerful tools to manage your students. Cohorts are a useful tool to quickly enroll and un-enroll large numbers of students. Groups enable you to separate students who are in the same course and give teachers the ability to quickly see only those students that they are responsible for.

Useful Links:

What's New in Moodle 2.0
Moodle for Online Communities
Understanding Web-based Applications and Other Multimedia Forms
Objects and Types in Documentum 6.5 Content Management Foundations - A Sequel

Packt
04 Jun 2010
11 min read
Content persistence

We have seen so far how metadata is persisted, but it is not obvious how content is persisted and associated with its metadata. All sysobjects (objects of type dm_sysobject and its subtypes) other than folders (objects of type dm_folder and its subtypes) can have associated content. We saw that a document can have content in the form of renditions as well as in its primary format. How are these content files associated with a sysobject? In other words, how does Content Server know what metadata is associated with a content file? How does it know that one content file is a rendition of another one? Content Server manages content files using content objects, which (indirectly) point to the physical locations of content files and associate them with sysobjects.

Locating content files

Recall that Documentum repositories can store content in various types of storage systems, including a file system, a Relational Database Management System (RDBMS), a content-addressed storage (CAS), or external storage devices. Content Server decides to store each file in a location based on the configuration and the presence of products like Content Storage Services. In general, users are not concerned about where the file is stored, since Content Server is able to retrieve the file from the location where it was stored. We will discuss the physical location of a content file without worrying about why Content Server chose to use that location.

Content object

Every content file in the repository has an associated content object, which stores information about the location of the file and identifies the sysobjects associated with it. These sysobjects are referred to as the parent objects of the content object. A content object is an object of type dmr_content, whose key attributes are as follows:

parent_count: Number of parent objects.
parent_id: List of object IDs of the parent objects.
storage_id: Object ID of the store object representing the storage area holding the content.
data_ticket: A value used internally to retrieve the content. The value and its usage depend upon the type of storage used.
i_contents: When the content is stored in turbo storage, this property contains the actual content. If the content is larger than the size of this property (2000 characters for databases other than Sybase, 255 for Sybase), the content is stored in a dmi_subcontent object and this property is unused. If the content is stored in content-addressed storage, it contains the content address. If the content is stored in external storage, it contains the token used to retrieve the content.
rendition: Identifies whether this is a rendition and its related behavior: 0 means original content; 1 means a rendition generated by the server; 2 means a rendition generated by a client; 3 means a rendition not to be removed when its primary content is updated or removed.
format: Object ID of the format object representing the format of the content.
full_content_size: Content file size in bytes, except when the content is stored in external storage.

Object-content relationship

Content Server manages content objects while performing content-related operations. Content associated with a sysobject is categorized as primary content or a rendition. A rendition is a content file associated with a sysobject that is not its primary content. The content in the first content file added to a sysobject is called its primary content, and its format is referred to as the primary format for the parent object.
Any other content added to the parent object in the same format is also called primary content, though this is rarely done by users manually. The ability to add multiple primary content files is typically utilized programmatically by applications for their internal use. While a sysobject can have multiple primary content files, it is also possible for one content object to have multiple parent objects. This just means that a content file can be shared by multiple objects.

Putting it together

The details of content persistence can become confusing due to the number of objects involved and the relationships among various attributes. It becomes even more complicated when the full Content Server capabilities (such as multiple content files for one sysobject) are manifested. We will look at a simple scenario to visually grasp how content persistence works in common situations. Documentum provides multiple options for locating a content file: DFC provides the getPath() method and DQL provides the get_file_url administration method for this purpose. This section has been included to satisfy the reader's curiosity about content persistence and works through the information manually. This discussion can be treated as supplementary to the technical fundamentals.

Our example sysobject is named paystub.jpg. The primary content file is in jpg format and the rendition is in pdf format, as shown in the following figure. The next figure shows the objects involved in the content persistence for this document. The central object is of type dm_document. The figure also includes two content objects and one format object. Let's try to understand the relationships by asking specific questions.

How many content files, primary or renditions, are there for the document paystub.jpg? This question can be answered by looking for the corresponding content objects. We look for dmr_content objects that have the document's object ID as one of their parent_id values. The figure shows that there are two such content objects.

Which of these content objects represents the primary content and which one is a rendition? This can be determined by looking at the rendition attribute. The content object on the left shows rendition=0, which indicates primary content. The content object on the right shows rendition=2, which indicates a rendition generated by a client (recall that we manually imported this rendition).

What is the primary format for this document? This is easy to answer by looking at the a_content_type attribute on the document itself. If we need to know the format for a content object, we can look for the dm_format object whose object ID is the value present in the format property of the content object. In the figure above, the format object for the primary content object is shown, which represents a JPEG image. Thus, the format determined for the primary content of the object is expected to match the value of the a_content_type property of the object. The format object for the rendition is not shown, but it would be PDF.

What is the exact physical location of the primary content file? As mentioned at the beginning of this section, there are DFC and DQL methods that can provide this information. For understanding content persistence, we will deduce this manually for a file store, which represents storage on a file system. For other types of storage, an exact location might not be evident, since we need to rely on the storage interface to access the content file.
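The walkthrough that follows performs this deduction by hand. As a quick aid, here is a minimal sketch of the same data_ticket conversion, using Python purely as a scratch tool (the sample values are made up for illustration and are not taken from the figure):

# Build the relative content path from a (possibly negative) data_ticket.
def ticket_to_relative_path(data_ticket, repo_id_hex, extension):
    # 8-digit, 2's-complement hex representation of the ticket
    hex_str = format(data_ticket & 0xFFFFFFFF, '08x')
    # split into two-character segments: 'aabbccdd' -> ['aa', 'bb', 'cc', 'dd']
    segments = [hex_str[i:i + 2] for i in range(0, 8, 2)]
    # prefix with the 8-digit hex repository ID, suffix with the extension
    return repo_id_hex + '\\' + '\\'.join(segments) + extension

# made-up example values; prints a Windows-style relative path
print(ticket_to_relative_path(-2143289314, '00000010', '.jpg'))

Prefixing the result with the file store root path, as described next, yields the full path of the content file.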
Deducing the exact file path requires the ability to convert a decimal number to a hexadecimal (hex) number; this can be done with pen and paper or using one of the free tools available on the Web. Also remember that negative numbers are represented with what is known as 2's-complement notation, and many of these tools either don't handle 2's complement or don't support enough digits for our purposes.

There are two parts to the file path: the root path for the file store and the path of the file relative to this root path. In order to figure out the root path, we identify the file store first. Find the dm_filestore object whose object ID is the same as the value in the storage_id property of the content object. Then find the dm_location object whose object name is the same as the root property on the file store object. The file_system_path property on this location object has the root path for the file store, which is C:\Documentum\data\localdev\content_storage_01 in the figure above.

In order to find the relative path of the content file, we look at data_ticket (data type integer) on the content object. Find the 8-digit hex representation of this number. Treat the hex number as a string and split the string with path separators (slashes, / or \, depending on the operating system) after every two characters. Suffix the right-most two characters with the file extension (.jpg), which can be inferred from the format associated with the content object. Prefix the path with the 8-digit hex representation of the repository ID. This gives us the relative path of the content file, which is 00000010\80\09\be.jpg in the figure above. Prefix this path with the file store root path identified earlier to get the full path of the content file.

Content persistence in Documentum appears complicated at first sight. There are a number of separate objects involved, which is somewhat similar to having several tables in a relational database when we normalize the schema. At a high level, this complexity in the content persistence model serves to provide scalability, flexibility by supporting multiple kinds of content stores, and ease of managing changes in such an environment.

Lightweight and shareable object types

So far we have primarily dealt with standard types. Lightweight and shareable object types work together to provide performance improvements, which are significant when a large number of lightweight objects share information. The key performance benefits are in terms of savings in storage and in the time it takes to import a large number of documents that share metadata. These types are suitable for use in transactional and archival applications, but are not recommended for traditional content management.

The term transactional content (as in business transactions) was coined by Forrester Research to describe content typically originating from external parties, such as customers and partners, and driving transactional back-office business processes. Transactional Content Management (TCM) unifies process, content, and compliance to support solutions involving transactional content. Our example scenario of mortgage loan approval process management is a perfect example of TCM. It involves numerous types of documents, several external parties, and sub-processes implementing parts of the overall process. Lightweight and shareable types play a central role in the High Volume Server, which enhances the performance of Content Server for TCM.
A lightweight object type (also known as LwSO, for Lightweight SysObject) is a subtype of a shareable type. When a lightweight object is created, it references an object of its shareable supertype, called the parent object of the lightweight object. Conversely, the lightweight object is called the child object of the shareable object. Additional lightweight objects of the same type can share the same parent object. These lightweight objects share the information present in the common parent object rather than each carrying a copy of that information.

In order to make the best use of lightweight objects, we need to address a couple of questions.

When should we use lightweight objects? Lightweight objects are useful when there are a large number of attribute values that are identical for a group of objects. This redundant information can be pushed into one parent object and shared by the lightweight objects.

What kind of information is suitable for sharing in the parent object? System-managed metadata, such as policies for security, retention, storage, and so on, is usually applied to a group of objects based on certain criteria. For example, all the documents in one loan application packet could use a single ACL and the same retention information, which could be placed into the shareable parent object. The specific information about each document would reside in a separate lightweight object.

Lightweight object persistence

Persistence for lightweight objects works much the same way as it works for objects of standard types, with one exception. A lightweight object is a subtype of a shareable type, and these types have their separate tables as usual. For a standard type, each object has separate records in all of these tables, with each record identified by the object ID of the object. However, when multiple lightweight objects share one parent object, there is only one object ID (that of the parent object) in the tables of the shareable type. The lightweight objects need to refer to the object ID of the parent object, which is different from the object ID of any of the lightweight objects, in order to access the shared properties. This reference is made via an attribute named i_sharing_parent, as shown in the last figure.
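A quick way to see this reference for yourself is a DQL query against the lightweight type. This is a sketch only: my_loan_document stands in for whatever lightweight type exists in your repository, and the object ID is made up:

SELECT r_object_id, i_sharing_parent
FROM my_loan_document
WHERE i_sharing_parent = '0900001080001234'

Every row returned is a child object sharing the same parent, and therefore the same system-managed metadata.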
APEX Plug-ins

Packt
30 Oct 2013
17 min read
(For more resources related to this topic, see here.)

In APEX 4.0, Oracle introduced the plug-in feature. A plug-in is an extension to the existing functionality of APEX. The idea behind plug-ins is to make life easier for developers. Plug-ins are reusable and can be exported and imported. In this way, it is possible to create a piece of functionality that is available to all APEX developers, who can install and use it without needing to know what's inside the plug-in.

APEX translates settings from the APEX builder to HTML and JavaScript. For example, if you created a text item in the APEX builder, APEX converts this to the following code (simplified):

<input type="text" id="P12_NAME" name="P12_NAME" value="your name">

When you create an item type plug-in, you actually take over this conversion task from APEX, and you generate the HTML and JavaScript code yourself using PL/SQL procedures. That offers a lot of flexibility, because now you can make this code generic so that it can be used for more items.

The same goes for region type plug-ins. A region is a container for forms, reports, and so on. The region can be a div or an HTML table. By creating a region type plug-in, you create a region yourself, with the possibility of adding more functionality to the region.

Plug-ins are very useful because they are reusable in every application. To make a plug-in available, go to Shared Components | Plug-ins, and click on the Export Plug-in link on the right-hand side of the page. Select the desired plug-in and file format and click on the Export Plug-in button. The plug-in can then be imported into another application.

Following are the six types of plug-in:

Item type plug-ins
Region type plug-ins
Dynamic action plug-ins
Process type plug-ins
Authorization scheme type plug-ins
Authentication scheme type plug-ins

In this article, we will discuss the first five types of plug-in.

Creating an item type plug-in

With an item type plug-in, you create an item with the possibility of extending its functionality. To demonstrate this, we will make a text field with a tooltip. This functionality is already available in APEX 4.0 by adding the following code to the HTML form element attributes text field in the Element section of the text field:

onmouseover="toolTip_enable(event,this,'A tooltip')"

However, you have to do this for every item that should contain a tooltip. This can be done more easily by creating an item type plug-in with a built-in tooltip. When you then create an item of this plug-in type, you will be asked to enter some text for the tooltip.

Getting ready

For this recipe, you can use an existing page with a region in which you can put some text items.

How to do it...

Follow these steps:

Go to Shared Components | User Interface | Plug-ins.
Click on the Create button.
In the Name section, enter a name in the Name text field. In this case, we enter tooltip.
In the Internal Name text field, enter an internal name. It is advised to use the company's domain address reversed to ensure the name is unique when you decide to share this plug-in. So, for example, you can use com.packtpub.apex.tooltip.
In the Source section, enter the following code in the PL/SQL Code text area:

function render_simple_tooltip (
    p_item                in apex_plugin.t_page_item
  , p_plugin              in apex_plugin.t_plugin
  , p_value               in varchar2
  , p_is_readonly         in boolean
  , p_is_printer_friendly in boolean
) return apex_plugin.t_page_item_render_result
is
  l_result apex_plugin.t_page_item_render_result;
begin
  if apex_application.g_debug then
    apex_plugin_util.debug_page_item (
        p_plugin              => p_plugin
      , p_page_item           => p_item
      , p_value               => p_value
      , p_is_readonly         => p_is_readonly
      , p_is_printer_friendly => p_is_printer_friendly);
  end if;
  --
  sys.htp.p('<input type="text" id="'||p_item.name||'" name="'||p_item.name||'" class="text_field" onmouseover="toolTip_enable(event,this,'||''''||p_item.attribute_01||''''||')">');
  --
  return l_result;
end render_simple_tooltip;

This function uses the sys.htp.p function to put a text item (<input type="text") on the screen. On the text item, the onmouseover event calls the function toolTip_enable(). This function is an APEX function and can be used to put a tooltip on an item. The arguments of the function are mandatory. The function starts with the option to show debug information; this can be very useful when you create a plug-in and it doesn't work. After the debug information, the htp.p function puts the text item on the screen, including the call to toolTip_enable. You can also see that the call to toolTip_enable uses p_item.attribute_01. This is a parameter that you can use to pass a value to the plug-in, as covered in the following steps of this recipe. The function ends with the return of l_result. This variable is of the type apex_plugin.t_page_item_render_result. For the other types of plug-in, there are dedicated return types as well, for example, t_region_render_result.

Click on the Create Plug-in button.
The next step is to define the parameter (attribute) for this plug-in. In the Custom Attributes section, click on the Add Attribute button.
In the Name section, enter a name in the Label text field, for example, tooltip.
Ensure that the Attribute text field contains the value 1.
In the Settings section, set the Type field to Text.
Click on the Create button.
In the Callbacks section, enter render_simple_tooltip into the Render Function Name text field.
In the Standard Attributes section, check the Is Visible Widget checkbox.
Click on the Apply Changes button.

The plug-in is now ready. The next step is to create an item of type tooltip plug-in:

Go to a page with a region where you want to use an item with a tooltip.
In the Items section, click on the add icon to create a new item.
Select Plug-ins. You will now get a list of the available plug-ins. Select the one we just created, that is, tooltip. Click on Next.
In the Item Name text field, enter a name for the item, for example, tt_item.
In the Region drop-down list, select the region you want to put the item in. Click on Next.
In the next step you will see a new option: the attribute you created with the plug-in. Enter the tooltip text here, for example, This is tooltip text. Click on Next.
In the last step, leave everything as it is and click on the Create Item button.

You are now ready. Run the page. When you move your mouse pointer over the new item, you will see the tooltip.

How it works...

As stated before, this plug-in actually uses the function htp.p to put an item on the screen. Together with the call to the JavaScript function toolTip_enable on the onmouseover event, this makes it a text item with a tooltip, replacing the normal text item.
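To make this concrete, for the item created above, the plug-in would emit markup roughly like the following. This is a sketch reconstructed from the render function; APEX stores item names in uppercase, and the exact generated attributes can differ:

<input type="text" id="TT_ITEM" name="TT_ITEM" class="text_field"
       onmouseover="toolTip_enable(event,this,'This is tooltip text')">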
There's more...

The tooltips shown in this recipe are rather simple. You could make them look better, for example, by using the Beautytips tooltips. Beautytips is an extension to jQuery and can show configurable help balloons. Visit http://plugins.jquery.com to download Beautytips. We downloaded Version 0.9.5-rc1 to use in this recipe.

Go to Shared Components and click on the Plug-ins link.
Click on the tooltip plug-in you just created.
In the Source section, replace the code with the following code:

function render_simple_tooltip (
    p_item                in apex_plugin.t_page_item
  , p_plugin              in apex_plugin.t_plugin
  , p_value               in varchar2
  , p_is_readonly         in boolean
  , p_is_printer_friendly in boolean
) return apex_plugin.t_page_item_render_result
is
  l_result apex_plugin.t_page_item_render_result;
begin
  if apex_application.g_debug then
    apex_plugin_util.debug_page_item (
        p_plugin              => p_plugin
      , p_page_item           => p_item
      , p_value               => p_value
      , p_is_readonly         => p_is_readonly
      , p_is_printer_friendly => p_is_printer_friendly);
  end if;

The function again starts with the debug option to see what happens when something goes wrong.

  --
  -- Register the JavaScript and CSS libraries the plug-in uses.
  --
  apex_javascript.add_library (
      p_name      => 'jquery.bgiframe.min'
    , p_directory => p_plugin.file_prefix
    , p_version   => null );
  apex_javascript.add_library (
      p_name      => 'jquery.bt.min'
    , p_directory => p_plugin.file_prefix
    , p_version   => null );
  apex_javascript.add_library (
      p_name      => 'jquery.easing.1.3'
    , p_directory => p_plugin.file_prefix
    , p_version   => null );
  apex_javascript.add_library (
      p_name      => 'jquery.hoverintent.minified'
    , p_directory => p_plugin.file_prefix
    , p_version   => null );
  apex_javascript.add_library (
      p_name      => 'excanvas'
    , p_directory => p_plugin.file_prefix
    , p_version   => null );

After that, you see a number of calls to the function apex_javascript.add_library. These libraries are necessary to enable these nice tooltips. Using apex_javascript.add_library ensures that a JavaScript library is included in the final HTML of a page only once, regardless of how many plug-in items appear on that page.

  sys.htp.p('<input type="text" id="'||p_item.name||'" class="text_field" title="'||p_item.attribute_01||'">');
  --
  apex_javascript.add_onload_code (p_code =>
    '$("#'||p_item.name||'").bt({
        padding: 20
      , width: 100
      , spikeLength: 40
      , spikeGirth: 40
      , cornerRadius: 50
      , fill: '||''''||'rgba(200, 50, 50, .8)'||''''||'
      , strokeWidth: 4
      , strokeStyle: '||''''||'#E30'||''''||'
      , cssStyles: {color: '||''''||'#FFF'||''''||', fontWeight: '||''''||'bold'||''''||'}
    });');
  --
  return l_result;
end render_simple_tooltip;

Another difference from the first version of the code is the call to the Beautytips library. In this call, you can customize the text balloon with colors and other options. The onmouseover event is no longer necessary, as the call to $().bt in apex_javascript.add_onload_code takes over this task. The $().bt function is a jQuery JavaScript function that references the generated HTML of the plug-in item by ID and dynamically converts it to show a tooltip using the Beautytips plug-in. You can, of course, always create extra plug-in item type parameters to support different colors and so on per item.

To add the other libraries, do the following:

In the Files section, click on the Upload new file button.
Enter the path and the name of the library. You can use the file button to locate the libraries on your filesystem.
Once you have selected the file, click on the Upload button.
The files and their locations can be found in the following table:

jquery.bgiframe.min.js: bt-0.9.5-rc1\other_libs\bgiframe_2.1.1
jquery.bt.min.js: bt-0.9.5-rc1
jquery.easing.1.3.js: bt-0.9.5-rc1\other_libs
jquery.hoverintent.minified.js: bt-0.9.5-rc1\other_libs
excanvas.js: bt-0.9.5-rc1\other_libs\excanvas_r3

Once all the libraries have been uploaded, the plug-in is ready. The tooltip now looks quite different, as shown in the following screenshot:

In the plug-in settings, you can enable some item-specific settings. For example, if you want to put a label in front of the text item, check the Is Visible Widget checkbox in the Standard Attributes section. For more information on this tooltip, go to http://plugins.jquery.com/project/bt.

Creating a region type plug-in

As you may know, a region is actually a div. With a region type plug-in, you can customize this div, and because it is a plug-in, you can reuse it in other pages. You also have the possibility of making the div look better by using JavaScript libraries. In this recipe, we will make a carousel with switching panels. The panels can contain images, but they can also contain data from a table. We will make use of another jQuery extension, Step Carousel.

Getting ready

You can download stepcarousel.js from http://www.dynamicdrive.com/dynamicindex4/stepcarousel.htm. However, in order to get this recipe to work in APEX, we needed to make a slight modification to it. So, stepcarousel.js, arrowl.gif, and arrowr.gif are included with this book.

How to do it...

Follow the given steps to create the plug-in:

Go to Shared Components and click on the Plug-ins link.
Click on the Create button.
In the Name section, enter a name for the plug-in in the Name field. We will use Carousel.
In the Internal Name text field, enter a unique internal name. It is advised to use your domain reversed, for example, com.packtpub.carousel.
In the Type listbox, select Region.
In the Source section, enter the following code in the PL/SQL Code text area:

function render_stepcarousel (
    p_region              in apex_plugin.t_region
  , p_plugin              in apex_plugin.t_plugin
  , p_is_printer_friendly in boolean
) return apex_plugin.t_region_render_result
is
  cursor c_crl is
    select id
    ,      panel_title
    ,      panel_text
    ,      panel_text_date
    from   app_carousel
    order by id;
  --
  l_code varchar2(32767);
begin

The function starts with a number of arguments. These arguments are mandatory, but they have default values. In the declare section, there is a cursor with a query on the table APP_CAROUSEL. This table contains the data that appears in the panels of the carousel.

  --
  -- add the libraries and stylesheets
  --
  apex_javascript.add_library (
      p_name      => 'stepcarousel'
    , p_directory => p_plugin.file_prefix
    , p_version   => null );
  --
  -- output the placeholder for the region, which is used by
  -- the JavaScript code

The actual code starts with the declaration of stepcarousel.js. There is a function, APEX_JAVASCRIPT.ADD_LIBRARY, to load this library. This declaration is necessary, but the file also needs to be uploaded in a later step. You don't have to use the extension .js here in the code.
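A quick aside before continuing with the listing: the cursor above assumes that a table named APP_CAROUSEL already exists in the application schema. The recipe does not show its DDL, so here is a minimal sketch with assumed column types and sizes:

CREATE TABLE app_carousel (
  id               NUMBER PRIMARY KEY,
  panel_title      VARCHAR2(100),
  panel_text       VARCHAR2(4000),
  panel_text_date  DATE
);

With a few rows inserted into this table, the rest of the function body below has something to render.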
  --
  sys.htp.p('<style type="text/css">');
  --
  sys.htp.p('.stepcarousel{');
  sys.htp.p('position: relative;');
  sys.htp.p('border: 10px solid black;');
  sys.htp.p('overflow: scroll;');
  sys.htp.p('width: '||p_region.attribute_01||'px;');
  sys.htp.p('height: '||p_region.attribute_02||'px;');
  sys.htp.p('}');
  --
  sys.htp.p('.stepcarousel .belt{');
  sys.htp.p('position: absolute;');
  sys.htp.p('left: 0;');
  sys.htp.p('top: 0;');
  sys.htp.p('}');
  --
  sys.htp.p('.stepcarousel .panel{');
  sys.htp.p('float: left;');
  sys.htp.p('overflow: hidden;');
  sys.htp.p('margin: 10px;');
  sys.htp.p('width: 250px;');
  sys.htp.p('}');
  --
  sys.htp.p('</style>');

After the loading of the JavaScript library, some style elements are put on the screen. The style elements could have been put in a Cascading Style Sheet (CSS), but since we want to be able to adjust the size of the carousel, we use two parameters to set the height and width, and these are part of the style elements.

  --
  sys.htp.p('<div id="mygallery" class="stepcarousel" style="overflow:hidden"><div class="belt">');
  --
  for r_crl in c_crl loop
    sys.htp.p('<div class="panel">');
    sys.htp.p('<b>'||to_char(r_crl.panel_text_date,'DD-MON-YYYY')||'</b>');
    sys.htp.p('<br>');
    sys.htp.p('<b>'||r_crl.panel_title||'</b>');
    sys.htp.p('<hr>');
    sys.htp.p(r_crl.panel_text);
    sys.htp.p('</div>');
  end loop;
  --
  sys.htp.p('</div></div>');

The next command in the script is the actual creation of a div. Important here are the name of the div and the class: Step Carousel searches for these identifiers and replaces the div with the step carousel. The next step in the function is the fetching of the rows from the query in the cursor. For every row found, the formatted text is placed between div tags. This is done so that Step Carousel recognizes that the text should be placed on the panels.

  --
  -- add the onload code to show the carousel
  --
  l_code := 'stepcarousel.setup({
      galleryid: "mygallery"
    , beltclass: "belt"
    , panelclass: "panel"
    , autostep: {enable:true, moveby:1, pause:3000}
    , panelbehavior: {speed:500, wraparound:true, persist:true}
    , defaultbuttons: {enable: true, moveby: 1
        , leftnav: ["'||p_plugin.file_prefix||'arrowl.gif", -5, 80]
        , rightnav: ["'||p_plugin.file_prefix||'arrowr.gif", -20, 80]}
    , statusvars: ["statusA", "statusB", "statusC"]
    , contenttype: ["inline"]})';
  --
  apex_javascript.add_onload_code (p_code => l_code);
  --
  return null;
end render_stepcarousel;

The function ends with the call to apex_javascript.add_onload_code. This is where the actual Step Carousel code starts, and where you can customize the carousel, such as its size, rotation speed, and so on.

In the Callbacks section, enter the name of the function, render_stepcarousel, in the Return Function Name text field.
Click on the Create Plug-in button.
In the Files section, upload the stepcarousel.js, arrowl.gif, and arrowr.gif files.

For this purpose, the included copy of stepcarousel.js contains a small modification. In the last section (setup:function), document.write is used to add some style to the div tag. Unfortunately, this will not work in APEX, as document.write somehow destroys the rest of the output, so after the call, APEX has nothing left to show, resulting in an empty page. The document.write call needs to be removed, and the style it would have written is instead added in the code of the plug-in:

sys.htp.p('<div id="mygallery" class="stepcarousel" style="overflow: hidden;"><div class="belt">');

In this line of code, you see style="overflow: hidden;". That is the style that originally had to be written by stepcarousel.js.
This command hides the scrollbars. After you have uploaded the files, click on the Apply Changes button. The plug-in is ready and can now be used in a page:

Go to the page where you want this step carousel to be shown.
In the Regions section, click on the add icon.
In the next step, select Plug-ins.
Select Carousel and click on Next.
Enter a title for this region, for example, Newscarousel. Click on Next.
In the next step, enter the height and the width of the carousel. To show a carousel with three panels, enter 800 in the Width text field and 100 in the Height text field. Click on Next.
Click on the Create Region button.

The plug-in is ready. Run the page to see the result.

How it works...

The step carousel is actually a div. The region type plug-in uses the function sys.htp.p to put this div on the screen. In this example, a div is used for the region, but an HTML table could be used as well. An APEX region can contain any HTML output, but for positioning, mostly an HTML table or a div is used, especially when layout is important within the region.

The apex_javascript.add_onload_code starts the animation of the carousel. The carousel switches panels every 3 seconds; this can be adjusted with the pause setting (pause: 3000).

See also

For more information on this jQuery extension, go to http://www.dynamicdrive.com/dynamicindex4/stepcarousel.htm.
Exploring Themes

Packt
14 Jan 2016
10 min read
Drupal developers and interface engineers do not always create custom themes from scratch. Sometimes, we are asked to create starter themes that we begin any project from, or subthemes that extend the functionality of a base theme. Having the knowledge of how to handle each of these situations is important. In this article by Chaz Chumley, the author of Drupal 8 Theming with Twig, we will be discussing starter themes and how to work with the various libraries available to us.

(For more resources related to this topic, see here.)

Starter themes

Any time we begin developing in Drupal, it is preferable to have a collection of commonly used functions and libraries that we can reuse. Being able to have a consistent starting point when creating multiple themes means we don't have to rethink much from design to design. The concept of a starter theme makes this possible, and we will walk through the steps involved in creating one.

Before we begin, take a moment to use the drupal8.sql file that we already have with us to restore our current Drupal instance. This file will add the additional content and configuration required while creating a starter theme. Once the restore is complete, our home page should look like the following screenshot:

This is a pretty bland-looking home page with no real styling or layout. So, one thing to keep in mind when first creating a starter theme is how we want our content to look. Do we want our starter theme to include another CSS framework, or do we want to create our own from scratch? Since this is our first starter theme, we should not be worried about reinventing the wheel, but should instead leverage an existing CSS framework, such as Twitter Bootstrap.

Creating a Bootstrap starter

Having an example or mockup that we can refer to while creating a starter theme is always helpful. So, to get the most out of our Twitter Bootstrap starter, let's go over to http://getbootstrap.com/examples/jumbotron/, where we will see an example of a home page layout:

If we take a look at the mockup, we can see that the layout consists of two rows of content, with the first row containing a large callout known as a Jumbotron. The second row contains three featured blocks of content. The remaining typography and components take advantage of the Twitter Bootstrap CSS framework to display content.

One advantage of integrating the Twitter Bootstrap framework into our starter theme is that our markup will be responsive in nature. This means that as the browser window is resized, the content will scale down accordingly. At smaller resolutions, the three columns will stack on top of one another, enabling the user to view the content more easily on smaller devices.

We will be recreating this home page for our starter theme, so let's take a moment to familiarize ourselves with some basic Bootstrap layout terminology before creating our theme.

Understanding grids and columns

Bootstrap uses a 12-column grid system to structure content using rows and columns. The page layout begins with a parent container that wraps all child elements and allows you to maintain a specific page width. Each row and column then has CSS classes identifying how the content should appear. So, for example, if we want to have a row with two equal-width columns, we would build our page using the following markup:
So, for example, if we want to have a row with two equal-width columns, we would build our page using the following markup: <div class="container">     <div class="row">         <div class="col-md-6"></div>         <div class="col-md-6"></div>     </div> </div> The two columns within a row must combine to a value of 12, since Bootstrap uses a 12-column grid system. Using this simple math, we can have variously sized columns and multiple columns, as long as their total is 12. We should also take notice of these following column classes, as we have great flexibility in targeting different breakpoints: Extra small (col-xs-x) Small (col-sm-x) Medium (col-md-x) Large (col-lg-x) Each breakpoint references the various devices, from smartphones all the way up to television-size monitors. We can use multiple classes like  class="col-sm-6 col-md-4" to manipulate our layout, which gives us a two-column row on small devices and a three-column row on medium devices when certain breakpoints are reached. To get a more detailed understanding of the remaining Twitter Bootstrap documentation, you can go to http://getbootstrap.com/getting-started/ any time. For now, it's time we begin creating our starter theme. Setting up a theme folder The initial step in our process of creating a starter theme is fairly simple: we need to open up Finder or Windows Explorer, navigate to the themes folder, and create a folder for our theme. We will name our theme tweet, as shown in the following screenshot: Adding a screenshot Every theme deserves a screenshot, and in Drupal 8, all we need to do is have a file named screenshot.png, and the Appearance screen will use it to display an image above our theme. Configuring our theme Next, we will need to create our theme configuration file, which will allow our theme to be discoverable. We will only worry about general configuration information to start with and then add library and region information in the next couple of steps. Begin by creating a new file called tweet.info.yml in your themes/tweet folder, and add the following metadata to your file: name: Tweet type: theme description: 'A Twitter Bootstrap starter theme' core: 8.x base theme: false Notice that we are setting the base theme configuration to false. Setting this value to false lets Drupal know that our theme will not rely on any other theme files. This allows us to have full control of our theme's assets and Twig templates. We will save our changes at this juncture and clear the Drupal cache. Now we can take a look to see whether our theme is available to install. Installing our theme Navigate to /admin/appearance in your browser and you should see your new theme located in the Uninstalled themes section. Go ahead and install the theme by clicking on the Install and set as default link. If we navigate to the home page, we should see an unstyled home page: This clean palate is perfect while creating a starter theme, as it allows us to begin theming without worrying about overriding any existing markup that a base theme might include. Working with libraries While Drupal 8 ships with some improvements to its default CSS and JavaScript libraries, we will generally find ourselves wanting to add additional third-party libraries that can enhance the function and feel of our website. In our case, we have decided to add Twitter Bootstrap (http://getbootstrap.com), which provides us with a responsive CSS framework and JavaScript library that utilize a component-based approach to theming. 
The process involves three steps. The first is downloading or installing the assets that make up the framework or library. The second is creating a *.libraries.yml file and adding library entries that point to our assets. Finally, we need to add a libraries reference to our *.info.yml file.

Adding assets

We can easily add the Twitter Bootstrap framework assets by following these steps:

Navigate to http://getbootstrap.com/getting-started/#download
Click on the Download Bootstrap button
Extract the zip file
Copy the contents of the bootstrap folder to our themes/tweet folder

Once we are done, our themes/tweet folder content should look like the following screenshot:

Now that we have the Twitter Bootstrap assets added to our theme, we need to create a *.libraries.yml file that we can use to reference our assets.

Creating a library reference

Any time we want to add CSS or JS files to our theme, we will need to create or modify an existing *.libraries.yml file that allows us to organize our assets. Each library entry can include one or more pointers to the files and their locations within our theme structure. Remember that the filename of our *.libraries.yml file should follow the same naming convention as our theme. We can begin by following these steps:

Create a new file called tweet.libraries.yml.
Add a library entry called bootstrap.
Add a version that reflects the current version of Bootstrap that we are using.
Add the CSS entries for bootstrap.min.css and bootstrap-theme.min.css.
Add the JS entry for bootstrap.min.js.
Add a dependency on the jQuery located in Drupal's core:

bootstrap:
  version: 3.3.6
  css:
    theme:
      css/bootstrap.min.css: {}
      css/bootstrap-theme.min.css: {}
  js:
    js/bootstrap.min.js: {}
  dependencies:
    - core/jquery

Save tweet.libraries.yml.

In the preceding library entry, we have added both CSS and JS files, as well as introduced dependencies. A dependency lets a JS file that relies on a specific JS library declare that library, which makes sure the library is loaded before our JS file. In the case of Twitter Bootstrap, it relies on jQuery, and since Drupal 8 includes jQuery as part of its core.libraries.yml file, we can reference it by pointing to that library and its entry.

Including our library

Just because we added a library to our theme doesn't mean it will automatically be added to our website. In order for Bootstrap to be added to our theme, we need to include it in our tweet.info.yml configuration file. We can add Bootstrap by following these steps:

Open tweet.info.yml.
Add a libraries reference to bootstrap at the bottom of our configuration:

libraries:
  - tweet/bootstrap

Save tweet.info.yml.

Make sure to clear Drupal's cache to allow our changes to be added to the theme registry. Finally, navigate to our home page and refresh the browser so that we can preview our changes:

If we inspect the HTML using Chrome's developer tools, we should see that the Twitter Bootstrap library has been included along with the rest of our files. Both the CSS and JS files are being loaded into the proper flow of our document.

Summary

Whether starter themes or subthemes, they are all just different variations on the same techniques. The level of effort required to create each type of theme may vary, but as we saw, there was a lot of repetition. We began with a discussion of starter themes and learned what steps are involved in working with libraries.
Resources for Article:

Further resources on this subject:

Using JavaScript with HTML [article]
Custom JavaScript and CSS and tokens [article]
Concurrency Principles [article]
Understanding Material Design

Packt
18 Feb 2016
22 min read
Material can be thought of as something like smart paper. Like paper, it has surfaces and edges that reflect light and cast shadows, but unlike paper, material has properties that real paper does not, such as its ability to move, change its shape and size, and merge with other material. Despite this seemingly magical behavior, material should be treated like a physical object with a physicality of its own. Material can be seen as existing in a three-dimensional space, and it is this that gives its interfaces a reassuring sense of depth and structure. Hierarchies become obvious when it is instantly clear whether an object is above or below another. Based largely on age-old principles taken from color theory, animation, traditional print design, and physics, material design provides a virtual space where developers can use surface and light to create meaningful interfaces, and movement to design intuitive user interactions.

(For more resources related to this topic, see here.)

Material properties

As mentioned in the introduction, material can be thought of as being bound by physical laws. There are things it can do and things it cannot. It can split apart and heal again, and change color and shape, but it cannot occupy the same space as another sheet of material or rotate around two of its axes. We will be dealing with these properties throughout the book, but it is a good idea to begin with a quick look at the things material can and can't do.

The third dimension is fundamental when it comes to material. This is what gives the user the illusion that they are interacting with something more tangible than a rectangle of light. The illusion is generated by the widening and softening of shadows beneath material that is closer to the user. Material exists in a virtual space, but a space that nevertheless represents the real dimensions of a phone or tablet. The x axis can be thought of as existing between the top and bottom of the screen, the y axis between the right and left edges, and the z axis confined to the space between the back of the handset and the glass of the screen. It is for this reason that material should not rotate around the x or y axes, as this would break the illusion of a space inside the phone.

The basic laws of material physics are outlined in the following list:

All material is 1 dp thick (along the z axis).
Material is solid; only one sheet can exist in one place at a time, and material cannot pass through other material. For example, if a card needs to move past another, it must move over it.
Elevation, or position along the z axis, is portrayed by shadow, with higher objects having wider, softer shadows.
The z axis should be used to prompt interaction, for example, an action button rising up toward the user to demonstrate that it can be used to perform some action.
Material does not fold or bend.
Material cannot appear to rise higher than the screen surface.
Material can grow and shrink along both the x and y axes.
Material can move along any axis.
Material can be spontaneously created and destroyed, but this must not be without movement. The arrivals and departures of material components must be animated, for example, a card growing from the point it was summoned from, or sliding off the screen when dismissed.
A sheet of material can split apart anywhere along the x or y axes, and join together again with its original partner or with other material.
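As a concrete illustration of the elevation rule above: on Android 5.0 and higher, elevation is set directly on views, and the framework renders the corresponding shadow. The following is a minimal layout sketch; the view IDs and sizes are made up for illustration:

<FrameLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent">

    <!-- lower sheet: tighter, darker shadow -->
    <TextView
        android:id="@+id/card_low"
        android:layout_width="200dp"
        android:layout_height="100dp"
        android:background="@android:color/white"
        android:elevation="2dp" />

    <!-- higher sheet: wider, softer shadow -->
    <TextView
        android:id="@+id/card_high"
        android:layout_width="200dp"
        android:layout_height="100dp"
        android:layout_marginTop="120dp"
        android:background="@android:color/white"
        android:elevation="8dp" />

</FrameLayout>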
If material can be thought of as smart paper, then its content can only be described as smart ink. The rules governing how ink behaves are a little simpler:

- Material content can be text, imagery, or any other form of visual digital content
- Content can be of any shape or color, and behaves independently of its container material
- It cannot be displayed beyond the edges of its material container
- It adds nothing to the thickness (z axis) of the material it is displayed on

Setting up a development environment

The Android development environment consists mainly of two distinct components: the SDK, which provides the code libraries behind Android, and Android Studio, a powerful code editor that is used for constructing and testing applications for Android phones and tablets, Wear, TV, Auto, Glass, and Cardboard. Both of these components can be downloaded as a single package from http://developer.android.com/sdk/index.html.

Installing Android Studio

The installation is very straightforward. Run the Android Studio bundle and follow the on-screen instructions, installing HAXM hardware acceleration if prompted, and selecting all SDK components, as shown here:

Android Studio is dependent on the Java JDK. If you have not previously installed it, this will be detected while you are installing Android Studio, and you will be prompted to download and install it. If for some reason it is not, the JDK can be found at http://www.oracle.com/technetwork/java/javase/downloads/index.html, from where you should download the latest version. This is not quite the end of the installation process. There are still some SDK components that we will need to download manually before we can build our first app. As we will see next, this is done using the Android SDK Manager.

Configuring the Android SDK

People often refer to Android versions by name, such as Lollipop, or an identity number, such as 5.1.1. As developers, it makes more sense to use the API level, which in the case of Android 5.1.1 would be API level 22. The SDK provides a platform for every API level since API level 8 (Android 2.2). In this section, we will use the SDK Manager to take a closer look at Android platforms, along with the other tools included in the SDK. Start a new Android Studio project, or open an existing one, with the minimum SDK set at 21 or higher. You can then open the SDK Manager from the menu via Tools | Android | SDK Manager, or the matching icon on the main toolbar. The Android SDK Manager can also be started as a standalone program. It can be found in the /Android/sdk directory, as can the Android Virtual Device (AVD) manager. As can be seen in the preceding screenshot, there are really three main sections in the SDK:

- A Tools folder
- A collection of platforms
- An Extras folder

All of these require a closer look. The Tools directory contains exactly what it says, that is, tools. There are a handful of these, but the ones that will concern us are the SDK Manager that we are using now, and the AVD Manager that we will be using shortly to create a virtual device. Open the Tools folder. You should find the latest revisions of the SDK tools and the SDK Platform-tools already installed. If not, select these items, along with the latest Build-tools, if they too have not been installed. These tools are often revised, and it is well worth checking the SDK Manager regularly for updates. When it comes to the platforms themselves, it is usually enough to simply install the latest one.
This does not mean that these apps will not work on, or be available to, devices running older versions, as we can set a minimum SDK level when setting up a project, and along with the use of support libraries, we can bring material design to almost any Android device out there. If you open up the folder for the latest platform, you will see that some items have already been installed. Strictly speaking, the only things you need to install are the SDK platform itself and at least one system image. System images are copies of the hard drives of actual Android devices and are used with the AVD to create emulators. Which images you use will depend on your system and the form factors that you are developing for. In this book, we will be building apps for phones and tablets, so make sure you use at least one of these. Although they are not required to develop apps, the documentation and samples packages can be extremely useful. At the bottom of each platform folder are the Google APIs and corresponding system images. Install these if you are going to include Google services, such as Maps and Cloud, in your apps. You will also need to install the Google support libraries from the Extras directory, and this is what we will cover next. The Extras folder contains various miscellaneous packages with a range of functions. The ones you are most likely to want to download are listed as follows:

- Android support libraries are invaluable extensions to the SDK that provide APIs that not only facilitate backwards compatibility, but also provide a lot of extra components and functions, and, most importantly for us, the design library. As we are developing in Android Studio, we need only install the Android Support Repository, as this contains the Android Support Library and is designed for use with Android Studio.
- The Google Play services and Google Repository packages are required, along with the Google APIs mentioned a moment ago, to incorporate Google services into an application.
- You will most likely need the Google USB Driver if you are intending to test your apps on a real device. How to do this will be explained later in this chapter.
- The HAXM installer is invaluable if you have a recent Intel processor. Android emulators can be notoriously slow, and this hardware acceleration can make a noticeable difference.

Once you have downloaded your selected SDK components, depending on your system and/or project plans, you should have a list of installed packages similar to the one shown next: The SDK is finally ready, and we can start developing material interfaces. All that is required now is a device to test them on. This can, of course, be done on an actual device, but generally speaking, we will need to test our apps on as many devices as possible. Being able to emulate Android devices allows us to do this.

Emulating Android devices

The AVD allows us to test our designs across the entire range of form factors. There are an enormous number of screen sizes, shapes, and densities around. It is vital that we get to test our apps on as many device configurations as possible. This is actually more important for design than it is for functionality. An app might operate perfectly well on an exceptionally small or narrow screen, but not look as good as we had wanted, making the AVD one of the most useful tools available to us. This section covers how to create a virtual device using the AVD Manager.
The AVD Manager can be opened from within Android Studio by navigating to Tools | Android | AVD Manager from the menu, or via the corresponding icon on the toolbar. Here, you should click on the Create Virtual Device... button. The easiest way to create an emulator is to simply pick a device definition from the list of hardware images and keep clicking on Next until you reach Finish. However, it is much more fun and instructive to either clone and edit an existing profile, or create one from scratch. Click on the New Hardware Profile button. This takes you to the Configure Hardware Profile window, where you will be able to create a virtual device from scratch, configuring everything from cameras and sensors to storage and screen resolution. When you are done, click on Finish and you will be returned to the hardware selection screen, where your new device will have been added: As you will have seen from the Import Hardware Profiles button, it is possible to download system images for many devices not included with the SDK. Check the developer sections of device vendors' websites to see which models are available. So far, we have only configured the hardware for our virtual device. We must now select all the software it will use. To do this, select the hardware profile you just created and press Next. In the following window, select one of the system images you installed earlier and press Next again. This takes us to the Verify Configuration screen, where the emulator can be fine-tuned. Most of these configurations can be safely left as they are, but you will certainly need to play with the scale when developing for high-density devices. It can also be very useful to be able to use a real SD card. Once you click on Finish, the emulator will be ready to run. An emulator can be rotated through 90 degrees with left Ctrl + F12. The menu can be called with F2, and the back button with Esc. There are keyboard commands to emulate most physical buttons, such as call, power, and volume; a complete list can be found at http://developer.android.com/tools/help/emulator.html. Android emulators are notoriously slow, during both loading and operating, even on quite powerful machines. The Intel hardware accelerator we encountered earlier can make a significant difference. Between the two choices offered, the one that you use should depend on how often you need to open and close a particular emulator. More often than not, taking advantage of your GPU is the more helpful of the two. Apart from this built-in assistance, there are a few other things you can do to improve performance, such as setting lower pixel densities, increasing the device's memory, and creating virtual devices for lower API levels. If you are comfortable doing so, set up exclusions in your anti-virus software for the Android Studio and SDK directories. There are several third-party emulators, such as Genymotion, that are not only faster, but also behave more like real devices. The slowness of Android emulators is not necessarily a big problem, as most early development needs only one device, and real devices suffer none of the performance issues found on emulators. As we shall see next, real devices can be connected to our development environment with very little effort.

Connecting a real device

Using an actual physical device to run and test applications does not have the flexibility that emulators provide, but it does have one or two advantages of its own.
Real devices are faster than any emulator, and you can test features unavailable to a virtual device, such as accessing sensors, and making and receiving calls. There are two steps involved in setting up a real phone or tablet: we need to enable developer options on the handset and configure the USB connection with our development computer.

1. To enable developer options on your handset, navigate to Settings | About phone.
2. Tap on Build number 7 times to enable Developer options, which will now be available from the previous screen. Open this to enable USB debugging and Allow mock locations.
3. Connect the device to your computer and check that it is connected as a Media device (MTP).

Your handset can now be used as a test device. Connect the device to your computer with a USB cable, start Android Studio, and open a project. Depending on your setup, it is quite possible that you are already connected. If not, you can install the Google USB driver by following these steps:

1. From the Windows start menu, open the device manager.
2. Your handset can be found under Other Devices or Portable Devices.
3. Open its Properties window and select the Driver tab.
4. Update the driver with the Google version, which can be found in the sdk\extras\google\usb_driver directory.

An application can be compiled and run from Android Studio by selecting Run 'app' from the Run menu, pressing Shift + F10, or clicking on the green play icon on the toolbar. Once the project has finished building, you will be asked to confirm your choice of device before the app loads and then opens on your handset. With a fully set up development environment and devices to test on, we can now start taking a look at material design, beginning with the material theme that is included as the default in all SDKs with APIs of 21 or higher.

The material theme

Since API level 21 (Android 5.0), the material theme has been the built-in user interface. It can be utilized and customized, simplifying the building of material interfaces. However, it is more than just a new look; the material theme also provides the automatic touch feedback and transition animations that we associate with material design. To better understand Android themes and how to apply them, we need to understand how Android styles work, and a little about how screen components, such as buttons and text boxes, are defined. Most individual screen components are referred to as widgets or views. Views that contain other views are called view groups, and they generally take the form of a layout, such as the relative layout we will use in a moment. An Android style is a set of graphical properties defining the appearance of a particular screen component. Styles allow us to define everything from font size and background color, to padding and elevation, and much more. An Android theme is simply a style applied across a whole screen or application. The best way to understand how this works is to put it into action and apply a style to a working project. This will also provide a great opportunity to become more familiar with Android Studio.

Applying styles

Styles are defined as XML files and are stored in the resources (res) directory of Android Studio projects. So that we can apply different styles to a variety of platforms and devices, they are kept separate from the layout code. To see how this is done, start a new project, selecting a minimum SDK of 21 or higher, and using the blank activity template.
To the left of the editor is the project explorer pane. This is your access point to every branch of your project. Take a look at the activity_main.xml file, which would have been opened in the editor pane when the project was created. At the bottom of the pane, you will see a Text tab and a Design tab. It should be quite clear, from examining these, how the XML code defines a text box (TextView) nested inside a window (RelativeLayout). Layouts can be created in two ways: textually and graphically. Usually, they are built using a combination of both techniques. In the design view, widgets can be dragged and dropped to form layout designs. Any changes made using the graphical interface are immediately reflected in the code, and experimenting with this is a fantastic way to learn how various widgets and layouts are put together. We will return to both of these subjects in detail later on in the book, but for now, we will continue with styles and themes by defining a custom style for the text view in our Hello world app. Open the res node in the project explorer; you can then right-click on the values node and select New | Values resource file from the menu. Call this file my_style and fill it out as follows:

<?xml version="1.0" encoding="utf-8"?>
<resources>
    <style name="myStyle">
        <item name="android:layout_width">match_parent</item>
        <item name="android:layout_height">wrap_content</item>
        <item name="android:elevation">4dp</item>
        <item name="android:gravity">center_horizontal</item>
        <item name="android:padding">8dp</item>
        <item name="android:background">#e6e6e6</item>
        <item name="android:textSize">32sp</item>
        <item name="android:textColor">#727272</item>
    </style>
</resources>

This style defines several graphical properties, most of which should be self-explanatory, with the possible exception of gravity, which here refers to how content is justified within the view. We will cover measurements and units later in the book, but for now, it is useful to understand dp and sp:

- Density-independent pixel (dp): Android runs on an enormous number of devices, with screen densities ranging from 120 dpi to 480 dpi and more. To simplify the process of developing for such a wide variety, Android uses a virtual pixel unit based on a 160 dpi screen. This allows us to develop for a particular screen size without having to worry about screen density. The conversion is px = dp x (dpi / 160), so, for example, the 8 dp of padding in our style works out to 8 physical pixels on a 160 dpi screen and 24 physical pixels on a 480 dpi one.
- Scale-independent pixel (sp): This unit is designed to be applied to text. The reason it is scale-independent is that the actual text size on a user's device will depend on their font size settings.

To apply the style we just defined, open the activity_main.xml file (from res/layout, if you have closed it) and edit the TextView node so that it matches this:

<TextView
    style="@style/myStyle"
    android:text="@string/hello_world" />

The effects of applying this style can be seen immediately from the design tab or preview pane, and having seen how styles are applied, we can now go ahead and create a style to customize the material theme palette.

Customizing the material theme

One of the most useful features of the material theme is the way it can take a small palette made of only a handful of colors and incorporate these colors into every aspect of a UI. Text and cursor colors, the way things are highlighted, and even system features such as the status and navigation bars can be customized to give our apps brand colors and an easily recognizable look.
The use of color in material design is a topic in itself; there are strict guidelines regarding color, shade, and text, and these will be covered in detail later in the book. For now, we will just look at how we can use a style to apply our own colors to a material theme. So as to keep our resources separate, and therefore easier to manage, we will define our palette in its own XML file. As we did earlier with the my_style.xml file, create a new values resource file in the values directory and call it colors. Complete the code as shown next:

<?xml version="1.0" encoding="utf-8"?>
<resources>
    <color name="primary">#FFC107</color>
    <color name="primary_dark">#FFA000</color>
    <color name="primary_light">#FFECB3</color>
    <color name="accent">#03A9F4</color>
    <color name="text_primary">#212121</color>
    <color name="text_secondary">#727272</color>
    <color name="icons">#212121</color>
    <color name="divider">#B6B6B6</color>
</resources>

In the gutter to the left of the code, you will see small, colored squares. Clicking on these will take you to a dialog with a color wheel and other color selection tools for quick color editing. We are going to apply our style to the entire app, so rather than creating a separate file, we will include our style in the theme that was set up by the project template wizard when we started the project. This theme is called AppTheme, as can be seen by opening the res/values/styles.xml (v21) file. Edit the code in this file so that it looks like the following:

<?xml version="1.0" encoding="utf-8"?>
<resources>
    <style name="AppTheme" parent="android:Theme.Material.Light">
        <item name="android:colorPrimary">@color/primary</item>
        <item name="android:colorPrimaryDark">@color/primary_dark</item>
        <item name="android:colorAccent">@color/accent</item>
        <item name="android:textColorPrimary">@color/text_primary</item>
        <item name="android:textColor">@color/text_secondary</item>
    </style>
</resources>

Being able to set key colors, such as colorPrimary and colorAccent, allows us to incorporate our brand colors throughout the app, although the project template only shows us how we have changed the color of the status bar and app bar. Try adding radio buttons or text edit boxes to see how the accent color is applied. In the following figure, a timepicker replaces the original text view: The XML for this looks like the following lines:

<TimePicker
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:layout_alignParentBottom="true"
    android:layout_centerHorizontal="true" />

For now, it is not necessary to know all the color guidelines. Until we get to them, there is an online material color palette generator at http://www.materialpalette.com/ that lets you try out different palette combinations and download color XML files that can simply be cut and pasted into the editor. With a complete and up-to-date development environment constructed, and a way to customize and adapt the material theme, we are now ready to look into how material-specific widgets, such as card views, are implemented.

Summary

The Android SDK, Android Studio, and the AVD comprise a sophisticated development toolkit, and even setting them up is no simple task. But, with our tools in place, we were able to take a first look at one of material design's major components: the material theme. We have seen how themes and styles relate, and how to create and edit styles in XML.
Finally, we have touched on material palettes, and how to customize a theme to utilize our own brand colors across an app. With these basics covered, we can move on to explore material design further, and in the next chapter, we will look at layouts and material components in greater detail. To learn more about material design, the following books published by Packt Publishing (https://www.packtpub.com/) are recommended:

- Instant Responsive Web Design (https://www.packtpub.com/web-development/instant-responsive-web-design-instant)
- Mobile Game Design Essentials (https://www.packtpub.com/game-development/mobile-game-design-essentials)

Resources for Article:

Further resources on this subject:
- Speaking Java – Your First Game [article]
- Metal API: Get closer to the bare metal with Metal API [article]
- Looking Good – The Graphical Interface [article]

Setting Up WooCommerce

Packt
19 Nov 2013
6 min read
(For more resources related to this topic, see here.) So, you're already familiar with WordPress and know how to use plugins, widgets, and themes? Your next step is to expand your existing WordPress website or blog with an online store? In that case, you've come to the right place! WooCommerce is a versatile WordPress plugin that makes it possible for anyone with a little WordPress knowledge to start their own online store. In case you are not familiar with WordPress at all, this book is not the first one you should read. No worries though, WordPress isn't that hard to learn, and there are tons of online resources for getting up to speed with WordPress very quickly. Or just turn to one of the many printed books on WordPress that are available. These are the topics we'll be covering in this article:

- Installing and activating WooCommerce
- Learning everything about setting up WooCommerce correctly

Preparing for takeoff

Before we start, remember that it's only possible to install your own plugins if you're working in your own WordPress installation. This means that users running a website on WordPress.com will not be able to follow along. It's simply impossible in that environment to install plugins yourself. Although installing WooCommerce on top of WordPress isn't difficult, we highly recommend that you set up a test environment first. Without going too much into depth, this is what you need to do:

1. Create a backup copy of your complete WordPress environment using FTP. Alternatively, use a plugin to store a copy in your Dropbox folder automatically. There are tons of solutions available, just pick your own favorite. UpdraftPlus is one of the possibilities and delivers a complete backup solution: http://wordpress.org/plugins/updraftplus/.
2. Don't forget to back up your WordPress database as well. You may do this using a tool such as phpMyAdmin and create an export from there. But also in this case, there are plugins that make life easier. The UpdraftPlus plugin mentioned previously can perform this task as well.
3. Once your backups are complete, install XAMPP on a local (Windows) machine; it can be downloaded from http://www.apachefriends.org. Although XAMPP is available for Mac users, MAMP is a widely used alternative for this group. MAMP can be downloaded from http://www.mamp.info/en/index.html.
4. Restore your WordPress backup on your test server and start following the remaining part of this book in your new test environment. Alternatively, install a copy of your WordPress website as a temporary subdomain at your hosting provider. For instance, if my website is http://www.example.com, I could easily create a copy of my site in http://test.example.com. Possibilities may vary, depending on the package you have with your hosting provider.

If in your situation it isn't necessary to add WooCommerce to an existing WordPress site, you may of course also start from scratch. Just install WordPress on a local test server, or install it at your hosting provider. To keep our instructions in this book as clear as possible, we did just that. We created a fresh installation of WordPress version 3.6. Next, you see a screenshot of our fresh WordPress installation: Are these short instructions just too much for you at this moment? Do you need a more detailed step-by-step guide to create a test environment for your WordPress website?
Look at the following tutorials:

- For Mac OS X users: http://wpmu.org/local-wordpresstest-environment-mamp-osx/
- For Windows users: http://www.thegeekscope.com/howto-copy-a-live-wordpress-website-to-local-indowsenvironment/

More tutorials will also be available on our website: http://www.joomblocks.com. Don't forget to sign up for the free newsletter, which will bring you even more news and tutorials on WordPress, WooCommerce, and other open source software solutions! Once ready, we'll be able to take the next step and install the WooCommerce plugin. Let's take a look at our WordPress backend. In our situation, we can open it by browsing to http://localhost/wp36/wp-admin. Depending on the choices you made previously for your test environment, your URL could be different. Well, this should all be pretty familiar to you already. Again, your situation might look different, depending on your theme or the number of plugins already active on your website.

Installing WooCommerce

Installing a plugin is a fairly simple task:

1. Click on Plugins in the menu on the left and click on Add New.
2. Next, simply enter woocommerce in the Search field and click on Search Plugins.
3. Verify that the correct plugin is shown on top and click on Install Now.
4. Confirm the warning message that appears by clicking on OK.
5. Click on Activate Plugin.

Note that in the following screenshot, we're installing version 2.0.13 of WooCommerce. New versions follow rather quickly, so you might already see a higher version number. WooCommerce needs a number of specific WordPress pages, which it will automatically set up for you. Just click on the Install WooCommerce Pages button and make sure not to forget this step! In our example project, we're installing the English version of WooCommerce, but you might need a different language. By default, WooCommerce is already delivered in a number of languages, which means the installation will automatically follow the language of your WordPress installation. If you need something else, just browse through the plugin directory on WordPress.org to find any additional translations. Once we have created the necessary pages, the WooCommerce welcome screen will appear and you will see that a new menu item has been added to the main menu on the left. Meanwhile, the plugin has created the necessary pages, which you can access by clicking on Pages in the menu on the left. Note that if you open a page that was automatically created by WooCommerce, you'll only see a shortcode, which is used to call the needed functionality. Do not delete the shortcodes, or WooCommerce might stop working. However, it's still possible to add your own content before or after the shortcode on these pages. WooCommerce has also added some widgets to your WordPress dashboard, giving an overview of the latest product and sales statistics. At this moment, these are all still empty, of course.

Summary

In this article, we learned the basics of WooCommerce and how to install it. We also learned that WooCommerce is a free but versatile plugin for WordPress that you can use to easily set up your own online store.

Resources for Article:

Further resources on this subject:
- Django 1.2 E-commerce: Generating PDF Reports from Python using ReportLab [Article]
- Increasing sales with Brainshark slideshows/documents [Article]
- Implementing OpenCart Modules [Article]
Downloading and setting up Bootstrap

Packt
30 Jan 2013
4 min read
(For more resources related to this topic, see here.)

Getting ready

Twitter Bootstrap is more than a set of code. It is an online community. To get started, you will do well to familiarize yourself with Twitter Bootstrap's home base: http://twitter.github.com/bootstrap/. Here you'll find the following:

- The documentation: If this is your first visit, grab a cup of coffee and spend some time perusing the pages, scanning the components, reading the details, and soaking it in. (You'll see this is going to be fun.)
- The download button: You can get the latest and greatest versions of Twitter Bootstrap's CSS, JavaScript plugins, and icons, compiled and ready for action, coming to you in a convenient ZIP folder. This is where we'll start.

Downloading the example code

You can download the example code files for all Packt books you have purchased from your account at http://www.PacktPub.com. If you purchased this book elsewhere, you can visit http://www.PacktPub.com/support and register to have the files e-mailed directly to you.

How to do it…

Whatever your experience level, as promised, I'll walk you through all the necessary steps. Here goes!

1. Go to the Bootstrap homepage: http://twitter.github.com/bootstrap/
2. Click on the large Download Bootstrap button.
3. Locate the download file and unzip or extract it. You should get a folder named simply bootstrap. Inside this folder you should find the folders and files shown in the following screenshot:
4. From the homepage, click on the main navigation item: Get started.
5. Scroll down, or use the secondary navigation, to navigate to the heading: Examples. The direct link is: http://twitter.github.com/bootstrap/getting-started.html#examples
6. Right-click and download the leftmost example, labeled Basic Marketing Site. You'll see that it is an HTML file, named hero.html.
7. Save (or move) it to your main bootstrap folder, right alongside the folders named css, img, and js.
8. Rename the file index.html (a standard name for what will become our homepage). You should now see something similar to the following screenshot:
9. Next, we need to update the links to the stylesheets. Why? When you downloaded the starter template file, you changed the relationship between the file and its stylesheets. We need to let it know where to find the stylesheets in this new file structure.
10. Open index.html (formerly, hero.html) in your code editor. Need a code editor? Windows users: You might try Notepad++ (http://notepadplus-plus.org/download/). Mac users: Consider TextWrangler (http://www.barebones.com/products/textwrangler/).
11. Find these lines near the top of the file (lines 11-18 in version 2.0.2):
12. Update the href attributes in both link tags so that they point into the css folder (a hedged reconstruction of these tags appears a little further down).
13. Save your changes!
14. You're set to go! Open it up in your browser! (Double-click on index.html.) You should see something like this:

Congratulations! Your first Bootstrap site is underway. Problems? Don't worry. If your page doesn't look like this yet, let me help you spot the problem. Revisit the steps above and double-check a couple of things:

- Are your folders and files in the right relationship? (see step 3 as detailed previously)
- In your index.html, did you update the href attributes in both stylesheet links? (These should be lines 11 and 18 as of Twitter Bootstrap version 2.1.0.)

There's more…

Of course, this is not the only way you could organize your files. Some developers prefer to place stylesheets, images, and JavaScript files all within a larger folder named assets or library.
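The screenshots for steps 11 and 12 have not survived in this version of the article, so here is a hedged reconstruction of what the updated stylesheet links typically looked like in a Bootstrap 2.x download; treat the exact file names as assumptions based on the standard bundle, and note that other head markup sat between the two tags:

<link href="css/bootstrap.css" rel="stylesheet">
...
<link href="css/bootstrap-responsive.css" rel="stylesheet">

With both href attributes pointing into the css folder, the template can find its stylesheets in the new file structure. Now, back to how the files are organized.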
The organization method I've presented is recommended by the developers who contribute to the HTML5 Boilerplate. One advantage of this approach is that it reduces the length of the paths to our site assets. Thus, whereas others might have a path to a background image such as this:

url('assets/img/bg.jpg');

In the organization scheme I've recommended, it will be shorter:

url('img/bg.jpg');

This is not a big deal for a single line of code. However, when you consider that there will be many links to stylesheets, JavaScript files, and images running throughout your site files, shaving a few characters off each path can add up. And in a world where speed matters, every bit counts. Shorter paths save characters, reduce file size, and help support faster web browsing.

Summary

This article gave us a quick introduction to Twitter Bootstrap. We now have a fair idea of how to download and set up Bootstrap. By following these simple steps, we can easily create our first Bootstrap site.

Resources for Article:

Further resources on this subject:
- Starting Up Tomcat 6: Part 1 [Article]
- Build your own Application to access Twitter using Java and NetBeans: Part 2 [Article]
- Integrating Twitter with Magento [Article]

Why Ajax is a different type of software

Packt
27 Jan 2011
14 min read
Ajax is not a piece of software in the way we think about JavaScript or CSS being a piece of software. It's actually a lot more like an overlaid function. But what does that mean, exactly?

Why Ajax is a bit like human speech

Human speech is an overlaid function. What is meant by this is reflected in the answer to a question: "What part of the human body has the basic job of speech?" The tongue, for one answer, is used in speech, but it also tastes food and helps us swallow. The lungs and diaphragm, for another answer, perform the essential task of breathing. The brain cannot be overlooked, but it also does a great many other jobs. All of these parts of the body do something more essential than speech and, for that matter, all of these can be found among animals that cannot talk. Speech is something that is overlaid over organs that are there in the first place because of something other than speech. Something similar to this is true for Ajax, which is not a technology in itself, but something overlaid on top of other technologies. Ajax, some people say, stands for Asynchronous JavaScript and XML, but that was a retroactive expansion. JavaScript was introduced almost a decade before people began seriously talking about Ajax. Not only is it technically possible to use Ajax without JavaScript (one can substitute VBScript at the expense of browser compatibility), but there are quite a few substantial reasons to use JavaScript Object Notation (JSON) in lieu of heavy-on-the-wire eXtensible Markup Language (XML). Performing the overlaid function of Ajax with JSON replacing XML is just as eligible to be considered full-fledged Ajax as a solution incorporating XML.

Ajax helps the client talk to the server

Ajax is a way of using client-side technologies to talk with a server and perform partial page updates. Updates may be to all or part of the page, or simply to data handled behind the scenes. It is an alternative to the older paradigm of having a whole page replaced by a new page loaded when someone clicks on a link or submits a form. Partial page updates, in Ajax, are associated with Web 2.0, while whole page updates are associated with Web 1.0; it is important to note that "Web 2.0" and "Ajax" are not interchangeable. Web 2.0 includes more decentralized control and contributions besides Ajax, and for some objectives it may make perfect sense to develop an e-commerce site that uses Ajax but does not open the door to the same kind of community contributions as Web 2.0. Some of the key features common in Web 2.0 include:

- Partial page updates, with JavaScript communicating with a server and rendering to a page
- An emphasis on user-centered design
- Enabling community participation to update the website
- Enabling information sharing as core to what this communication allows

The concept of "partial page updates" may not sound very big, but part of its significance may be seen in an unintended effect. The original expectation of partial page updates was that it would enable web applications that were more responsive. The expectation was that if submitting a form would only change a small area of a page, using Ajax to just load the change would be faster than reloading the entire page for every minor change. That much was true, but once programmers began exploring, what they used Ajax for was not simply minor page updates, but making client-side applications that took on challenges more like those one would expect a desktop program to do, and the more interesting Ajax applications usually became slower.
Again, this was not because you could not fetch part of the page and update it faster, but because programmers were trying to do things on the client side that simply were not possible under the older way of doing things, and were pushing the envelope on the concept of a web application and what web applications can do.

Which technologies does Ajax overlay?

Now let us look at some of the technologies where Ajax may be said to be overlaid.

JavaScript

JavaScript deserves pride of place, and while it is possible to use VBScript for Internet Explorer as much more than a proof of concept, for now if you are doing Ajax, it will almost certainly be Ajax running JavaScript as its engine. Your application will have JavaScript working with XMLHttpRequest, JavaScript working with HTML, XHTML, or HTML5; JavaScript working with the DOM, JavaScript working with CSS, JavaScript working with XML or JSON, and perhaps JavaScript working with other things. While addressing a group of Django developers or Pythonistas, it would seem appropriate to open with, "I share your enthusiasm." On the other hand, while addressing a group of JavaScript programmers, in a few ways it is more appropriate to say, "I feel your pain." JavaScript is a language that has been discovered as a gem, but its warts were enough for it to be largely unappreciated for a long time. "Ajax is the gateway drug to JavaScript," as it has been said; however, JavaScript needs a gateway drug before people get hooked on it. JavaScript is an excellent language and a terrible language rolled into one. Before discussing some of the strengths of JavaScript (and the language does have some truly deep strengths), I would like to say "I feel your pain" and discuss two quite distinct types of pain in the JavaScript language. The first source of pain is some of the language decisions in JavaScript: The Wikipedia article says it was designed to resemble Java but be easier for non-programmers, a decision reminiscent of SQL and COBOL. The Java programmer who finds the C-family idiom of for(i = 0; i < 100; ++i) available will be astonished to find that the functions are clobbering each other's assignments to i until the variables are explicitly declared local to the function by declaring them with var. There is more pain where that came from. The following two functions will not perform the naively expected mathematical calculation correctly; the assignments to i and result will clobber each other:

function outer() {
    result = 0;
    for(i = 0; i < 100; ++i) {
        result += inner(i);
    }
    return result;
}

function inner(limit) {
    result = 0;
    for(i = 0; i < limit; ++i) {
        result += i;
    }
    return result;
}

The second source of pain is quite different. It is a pain of inconsistent implementation: the pain of, "Write once, debug everywhere." Strictly speaking, this is not JavaScript's fault; browsers are inconsistent. And it need not be a pain in the server-side use of JavaScript or other non-browser uses. However, it comes along for the ride for people who wish to use JavaScript to do Ajax. Cross-browser testing is a foundational practice in web development of any stripe; a good web page with semantic markup and good CSS styling that is developed on Firefox will usually look sane on Internet Explorer (or vice versa), even if not quite pixel-perfect. But program directly for the JavaScript implementation on one version of a browser, and you stand rather sharp odds of your application not working at all on another browser.
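Going back to the clobbering example for a moment, here is a sketch of the conventional fix, declaring the loop and accumulator variables with var so each function gets its own locals; this corrected version is my own addition, not part of the original article:

// With var, i and result are local to each function instead of
// implicit globals, so the two loops no longer interfere.
function outer() {
    var result = 0;
    for (var i = 0; i < 100; ++i) {
        result += inner(i);
    }
    return result;
}

function inner(limit) {
    var result = 0;
    for (var i = 0; i < limit; ++i) {
        result += i;
    }
    return result;
}

// outer() now returns the expected sum of partial sums.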
The most important object by far for Ajax is the XMLHttpRequest, and not only may you have to do different things to get an XMLHttpRequest in different browsers, or sometimes different (common) versions of the same browser, but even when you have code that will get an XMLHttpRequest object, the objects you have can be incompatible, so that code that works on one will show strange bugs for another. Just because you have done the work of getting an XMLHttpRequest object in all of the major browsers, it doesn't mean you're home free. Before discussing some of the strengths of the JavaScript language itself, it would be worth pointing out that a good library significantly reduces the second source of pain. Almost any sane library will provide a single, consistent way to get XMLHttpRequest functionality, and consistent behavior for the access it provides. In other words, one of the services provided by a good JavaScript library is a much more uniform behavior, so that you are programming for only one model, or as close as it can manage, and not, for instance, pasting conditional boilerplate code to do simple things that are handled differently by different browser versions, often rendering surprisingly different interpretations of JavaScript. Many of the things we will see done well as we explore jQuery are also done well in other libraries. We previously said that JavaScript is an excellent language and a terrible language rolled into one; what is to be said in favor of JavaScript? The list of faults is hardly all that is wrong with JavaScript, and saying that libraries can dull the pain is not itself a great compliment. But in fact, something much stronger can be said for JavaScript: if you can figure out why Python is a good language, you can figure out why JavaScript is a good language. I remember, when I was chasing pointer errors in what became 60,000 lines of C, teasing a fellow student for using Perl instead of a real language. It was clear in my mind that there were interpreted scripting languages, such as the bash scripting that I used for minor convenience scripts, and then there were real languages, which were compiled to machine code. I was sure that a real language was identified with being compiled, among other things, and that power in a language was the sort of thing C traded in. (I wonder why he didn't ask whether I was a real programmer, given that I spent half my time chasing pointer errors.) Within the past year or so I've been asked if "Python is a real programming language or is just used for scripting," and something similar to the attitude shift I needed to appreciate Perl and Python is needed to properly appreciate JavaScript. The name "JavaScript" is unfortunate; like calling Python "Assembler Kit", it's a way to ask people not to see its real strengths. (Someone looking for tools for working on an assembler would be rather disgusted to buy an "Assembler Kit" and find Python inside. People looking for Java's strengths in JavaScript will almost certainly be disappointed.) JavaScript code may look like Java in an editor, but the resemblance is a façade; besides Mocha, which had been renamed LiveScript, being renamed to JavaScript just when Netscape was announcing Java support in web browsers, it has been described as being descended from NewtonScript, Self, Smalltalk, and Lisp, as well as being influenced by Scheme, Perl, Python, C, and Java. What's under the Java façade is pretty interesting.
And, in the sense of the simplifying "façade" design pattern, JavaScript was marketed in a way almost guaranteed not to communicate its strengths to programmers. It was marketed as something that nontechnical people could add snippets of, in order to achieve minor, and usually annoying, effects on their web pages. It may not have been a toy language, but it sure was dressed up like one. Python may not have functions clobbering each other's variables (at least not unless they are explicitly declared global), but Python and JavaScript are both multiparadigm languages that support object-oriented programming, and their versions of "object-oriented" have a lot in common, particularly as compared to (for instance) Java. In Java, an object's class defines its methods and the type of its fields, and this much is set in stone. In Python, an object's class defines what an object starts off as, but methods and fields can be attached and detached at will. In JavaScript, classes as such do not exist (unless simulated by a library such as Prototype), but an object can inherit from another object, making a prototype and by implication a prototype chain, and like Python it is dynamic in that fields can be attached and detached at will. In Java, the instanceof keyword is important, as are class casts, associated with strong, static typing. Python doesn't have casts, and its isinstance() function is seen by some as a mistake. The concern is that Python, like JavaScript, is a duck-typing language: "If it looks like a duck, and it quacks like a duck, it's a duck!" In a duck-typing language, if you write a program that polls weather data, and there's a ForecastFromScreenscraper object that is several years old and screenscrapes an HTML page, you should be able to write a ForecastFromRSS object that gets the same information much more cleanly from an RSS feed. You should be able to use it as a drop-in replacement as long as you have the interface right. That is different from Java; if code expected a ForecastFromScreenscraper object, it would break immediately if you handed it a ForecastFromRSS object. Now, in fairness to Java, the "best practices" Java way to do it would probably separate out an IForecast interface, which would be implemented by both ForecastFromScreenscraper and later ForecastFromRSS, and Java has ways of allowing drop-in replacements if they have been explicitly foreseen and planned for. However, in duck-typed languages, the reality goes beyond the fact that if the people in charge designed things carefully and used an interface for a particular role played by an object, you can make a drop-in replacement. In a duck-typed language, you can make a drop-in replacement for things that the original developers never imagined you would want to replace. JavaScript's reputation is changing. More and more people are recognizing that there's more to the language than design flaws. More and more people are looking past the fact that JavaScript is packaged like Java, like packaging a hammer to give the impression that it is basically like a wrench. More and more people are looking past the silly "toy language" Halloween costume that JavaScript was stuffed into as a kid. One of the ways good programmers grow is by learning new languages, and JavaScript is not just the gateway to mainstream Ajax; it is an interesting language in itself.
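To make the duck-typing point concrete, here is a small sketch using the forecast example above; the getForecast() method name and the implementations are my own invented illustration, not from the original text:

// Any object with a getForecast() method will do; no declared
// interface or class cast is required.
var forecastFromScreenscraper = {
    getForecast: function() {
        // imagine HTML scraping happening here
        return "Sunny, 25C";
    }
};

var forecastFromRSS = {
    getForecast: function() {
        // imagine RSS parsing happening here
        return "Sunny, 25C";
    }
};

function report(source) {
    // report() never asks what "type" source is; if it quacks
    // like a forecast, it is treated as one.
    alert(source.getForecast());
}

report(forecastFromScreenscraper); // works
report(forecastFromRSS);           // drop-in replacement, also works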
With that much stated, we will be making a carefully chosen, selective use of JavaScript, rather than a language lover's exploration of the JavaScript language overall. Much of our work will be with the jQuery library; if you have only programmed a little "bare JavaScript", discovering jQuery is a bit like discovering Python, in terms of a tool that cuts like a hot knife through butter. It takes learning, but it yields power and interesting results soon, as well as having some room to grow.

What is XMLHttpRequest in relation to Ajax?

The XMLHttpRequest object is the reason why the kinds of games that can be implemented with Ajax technologies do not stop at clones of Tetris and other games that do not know or care if they are attached to a network. They include massively multiplayer online role-playing games where the network is the computer. Without something like XMLHttpRequest, "Ajax chess" would probably mean a game of chess against a chess engine running in your browser's JavaScript engine; with XMLHttpRequest, "Ajax chess" is more likely man-to-man chess against another human player connected via the network. The XMLHttpRequest object is the object that lets Gmail, Google Maps, Bing Maps, Facebook, and many less famous Ajax applications deliver on Sun's promise: the network is the computer. There are differences and some incompatibilities between different versions of XMLHttpRequest, and efforts are underway to advance "level-2-compliant" XMLHttpRequest implementations, featuring everything that is expected of an XMLHttpRequest object today and providing further functionality in addition, somewhat in the spirit of level 2 or level 3 CSS compliance. We will not be looking at level 2 efforts, but we will look at the baseline of what is expected as standard in most XMLHttpRequest objects. The basic way that an XMLHttpRequest object is used is as follows: the object is created or reused (the preferred practice usually being to reuse rather than create and discard a large number), a callback event handler is specified, the connection is opened, the data is sent, and then, when the network operation completes, the callback handler retrieves the response from the XMLHttpRequest and takes an appropriate action. A bare-bones XMLHttpRequest object can be expected to have the following methods and properties.
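Since the excerpt breaks off here, a minimal sketch of that create/open/send/callback cycle may help. This is my own illustration of the standard pattern of the period (including the old ActiveX fallback for early Internet Explorer), not code from the original article; the element id and URL are placeholder assumptions:

// Create the object, falling back to ActiveX on old IE.
function createXHR() {
    if (window.XMLHttpRequest) {
        return new XMLHttpRequest();
    } else {
        // IE 5/6
        return new ActiveXObject("Microsoft.XMLHTTP");
    }
}

var request = createXHR();

// 1. Specify the callback.
request.onreadystatechange = function() {
    // readyState 4 means the operation is complete;
    // status 200 means the server reported success.
    if (request.readyState === 4 && request.status === 200) {
        document.getElementById("output").innerHTML = request.responseText;
    }
};

// 2. Open the connection (asynchronously), then 3. send the data.
request.open("GET", "/some/url", true);
request.send(null);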

Deploying on your own server

Packt
30 Sep 2015
16 min read
In this article by Jack Stouffer, the author of the book Mastering Flask, you will learn how to deploy and host your application using the different options available, and the advantages and disadvantages of each. The most common way to deploy any web app is to run it on a server that you have control over. Control in this case means access to the terminal on the server with an administrator account. This type of deployment gives you the most freedom of all the choices, as it allows you to install any program or tool you wish. This is in contrast to other hosting solutions, where the web server and database are chosen for you. This type of deployment also happens to be the least expensive option. The downside to this freedom is that you take on the responsibility of keeping the server up, backing up user data, keeping the software on the server up to date to avoid security issues, and so on. Entire books have been written on good server management, so if this is not a responsibility that you believe you or your company can handle, it would be best if you choose one of the other deployment options. This section will be based on a Debian Linux server, as Linux is far and away the most popular OS for running web servers, and Debian is the most popular Linux distro (a particular combination of software and the Linux kernel released as a package). Any OS with Bash and a program called SSH (which will be introduced in the next section) will work for this article; the only differences will be the command-line programs used to install software on the server. (For more resources related to this topic, see here.) Each of these web servers will use a protocol named Web Server Gateway Interface (WSGI), which is a standard designed to allow Python web applications to easily communicate with web servers. We will never work directly with WSGI. However, most of the web server interfaces we will be using have WSGI in their name, and it can be confusing if you don't know what the name refers to.

Pushing code to your server with fabric

To automate the process of setting up and pushing our application code to the server, we will use a Python tool called fabric. Fabric is a command-line program that reads and executes Python scripts on remote servers using a tool called SSH. SSH is a protocol that allows a user of one computer to remotely log in to another computer and execute commands on the command line, provided that the user has an account on the remote machine. To install fabric, we will use pip:

$ pip install fabric

Fabric commands are collections of command-line programs to be run on the remote machine's shell, in this case, Bash. We are going to make three different commands: one to run our unit tests, one to set up a brand new server to our specifications, and one to have the server update its copy of the application code with git. We will store these commands in a new file at the root of our project directory called fabfile.py. As it's the easiest to create, let's make the test command first:

from fabric.api import local

def test():
    local('python -m unittest discover')

To run this function from the command line, we can use fabric's command-line interface by passing the name of the command to run:

$ fab test
[localhost] local: python -m unittest discover
.....
---------------------------------------------------------------------
Ran 5 tests in 6.028s

OK

Fabric has three main commands: local, run, and sudo.
The local function, as seen in the preceding function, runs commands on the local computer. The run and sudo functions run commands on a remote machine, but sudo runs commands as an administrator. All of these functions notify fabric whether the command ran successfully or not. If a command didn't run successfully, meaning that our tests failed in this case, any other commands in the function will not be run. This is useful for our commands because it allows us to force ourselves not to push any code to the server that does not pass our tests. Now we need to create the command to set up a new server from scratch. What this command will do is install the software our production environment needs, as well as download the code from our centralized git repository. It will also create a new user that will act as the runner of the web server as well as the owner of the code repository. Do not run your web server or have your code deployed by the root user. This opens your application to a whole host of security vulnerabilities. This command will differ based on your operating system, and we will be adding to this command in the rest of the article based on the server you choose:

from fabric.api import env, local, run, sudo, cd

env.hosts = ['deploy@[your IP]']

def upgrade_libs():
    sudo("apt-get update")
    sudo("apt-get upgrade")

def setup():
    test()
    upgrade_libs()

    # necessary to install many Python libraries
    sudo("apt-get install -y build-essential")
    sudo("apt-get install -y git")
    sudo("apt-get install -y python")
    sudo("apt-get install -y python-pip")
    # necessary to install many Python libraries
    sudo("apt-get install -y python-all-dev")

    run("useradd -d /home/deploy/ deploy")
    run("gpasswd -a deploy sudo")

    # allows Python packages to be installed by the deploy user
    sudo("chown -R deploy /usr/local/")
    sudo("chown -R deploy /usr/lib/python2.7/")

    run("git config --global credential.helper store")

    with cd("/home/deploy/"):
        run("git clone [your repo URL]")

    with cd('/home/deploy/webapp'):
        run("pip install -r requirements.txt")
        run("python manage.py createdb")

There are two new fabric features in this script. One is the env.hosts assignment, which tells fabric the user and IP address of the machine it should be logging in to. Second, there is the cd function used in conjunction with the with keyword, which executes any functions in the context of that directory instead of the home directory of the deploy user. The line that modifies the git configuration is there to tell git to remember your repository's username and password, so you do not have to enter them every time you wish to push code to the server. Also, before the server is set up, we make sure to update the server's software to keep it up to date. Finally, we have the function to push our new code to the server. In time, this command will also restart the web server and reload any configuration files that come from our code. But this depends on the server you choose, so this is filled out in the subsequent sections:

def deploy():
    test()
    upgrade_libs()

    with cd('/home/deploy/webapp'):
        run("git pull")
        run("pip install -r requirements.txt")

So, if we were to begin working on a new server, all we would need to do to set it up is to run the following commands:

$ fab setup
$ fab deploy

Running your web server with supervisor

Now that we have automated our updating process, we need some program on the server to make sure that our web server, and our database if you aren't using SQLite, is running.
To do this, we will use a simple program called supervisor. All supervisor does is automatically run command-line programs in background processes and let you see the status of the running programs. Supervisor also monitors all of the processes it is running, and if a process dies, it tries to restart it.

To install supervisor, we need to add it to the setup command in our fabfile.py:

def setup():
    …
    sudo("apt-get install -y supervisor")

To tell supervisor what to do, we need to create a configuration file and then copy it to the /etc/supervisor/conf.d/ directory of our server during the deploy fabric command. Supervisor will load all of the files in this directory when it starts and attempt to run them. In a new file in the root of our project directory named supervisor.conf, add the following:

[program:webapp]
command=
directory=/home/deploy/webapp
user=deploy

[program:rabbitmq]
command=rabbitmq-server
user=deploy

[program:celery]
command=celery worker -A celery_runner
directory=/home/deploy/webapp
user=deploy

This is the bare minimum configuration needed to get a web server up and running. But supervisor has a lot more configuration options. To view all of the customizations, go to the supervisor documentation at http://supervisord.org/.

This configuration tells supervisor to run a command in the context of /home/deploy/webapp under the deploy user. The right-hand side of the command value is empty because it depends on which server you are running and will be filled in for each section.

Now we need to add a sudo call in the deploy command to copy this configuration file to the /etc/supervisor/conf.d/ directory:

def deploy():
    …
    with cd('/home/deploy/webapp'):
        …
        sudo("cp supervisor.conf /etc/supervisor/conf.d/webapp.conf")
    sudo('service supervisor restart')

A lot of projects just create the files on the server and forget about them, but having the configuration file stored in our git repository and copied on every deployment gives several advantages. First, it means that it is easy to revert changes with git if something goes wrong. Second, it means that we don't have to log in to our server in order to make changes to the files.

Don't use the Flask development server in production. Not only does it fail to handle concurrent connections, but it also allows arbitrary Python code to be run on your server.

Gevent

The simplest option to get a web server up and running is to use a Python library called gevent to host your application. Gevent is a Python library that adds an alternative way of doing concurrent programming outside of the Python threading library, called coroutines. Gevent has an interface for running WSGI applications that is both simple and has good performance. A simple gevent server can easily handle hundreds of concurrent users, which is more than 99 percent of websites on the Internet will ever have. The downside to this option is that its simplicity means a lack of configuration options. There is no way, for example, to add rate limiting to the server or to add HTTPS traffic. This deployment option is purely for sites that you don't expect to receive a huge amount of traffic. Remember YAGNI (short for You Aren't Gonna Need It); only upgrade to a different web server if you really need to.

Coroutines are a bit outside the scope of this book, but a good explanation can be found at https://en.wikipedia.org/wiki/Coroutine.
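To make coroutines a little more concrete, here is a tiny hedged sketch (not from the book) using real gevent APIs; each spawned function yields control while it sleeps instead of blocking the whole process:

import gevent

def background_task(n):
    # gevent.sleep yields to other coroutines instead of blocking
    gevent.sleep(1)
    print("task %d done" % n)

# spawn creates lightweight coroutines; joinall waits for all of them
gevent.joinall([gevent.spawn(background_task, i) for i in range(3)])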
To install gevent, we will use pip:

$ pip install gevent

In a new file in the root of the project directory named gserver.py, add the following:

from gevent.wsgi import WSGIServer
from webapp import create_app

app = create_app('webapp.config.ProdConfig')

server = WSGIServer(('', 80), app)
server.serve_forever()

To run the server with supervisor, just change the command value to the following:

[program:webapp]
command=python gserver.py
directory=/home/deploy/webapp
user=deploy

Now when you deploy, gevent will be automatically installed for you by running your requirements.txt on every deployment, that is, if you are properly pip freeze-ing after every new dependency is added.

Tornado

Tornado is another very simple way to deploy WSGI apps purely with Python. Tornado is a web server that is designed to handle thousands of simultaneous connections. If your application needs real-time data, Tornado also supports websockets for continuous, long-lived connections to the server.

Do not use Tornado in production on a Windows server. The Windows version of Tornado is not only much slower, but it is considered beta-quality software.

To use Tornado with our application, we will use Tornado's WSGIContainer to wrap the application object and make it Tornado compatible. Then, Tornado will listen on port 80 for requests until the process is terminated. In a new file named tserver.py, add the following:

from tornado.wsgi import WSGIContainer
from tornado.httpserver import HTTPServer
from tornado.ioloop import IOLoop
from webapp import create_app

app = WSGIContainer(create_app("webapp.config.ProdConfig"))
http_server = HTTPServer(app)
http_server.listen(80)
IOLoop.instance().start()

To run Tornado with supervisor, just change the command value to the following:

[program:webapp]
command=python tserver.py
directory=/home/deploy/webapp
user=deploy

Nginx and uWSGI

If you need more performance or customization, the most popular way to deploy a Python web application is to use the web server Nginx as a frontend for the WSGI server uWSGI by using a reverse proxy. A reverse proxy is a program in networks that retrieves content for a client from a server as if it came from the proxy itself, as shown in the following figure:

Nginx and uWSGI are used in this way because we get the power of the Nginx frontend while keeping the customization of uWSGI.

Nginx is a very powerful web server that became popular by providing the best combination of speed and customization. Nginx is consistently faster than other web servers, such as Apache httpd, and has native support for WSGI applications. The way it achieves this speed is through several good architectural decisions, as well as the early decision not to try to cover a large number of use cases like Apache does. Having a smaller feature set makes it much easier to maintain and optimize the code. From a programmer's perspective, it is also much easier to configure Nginx, as there is no giant default configuration file (httpd.conf) that needs to be overridden with .htaccess files in each of your project directories. One downside is that Nginx has a much smaller community than Apache, so if you have an obscure problem, you are less likely to find answers online. Also, it's possible that a feature most programmers are used to in Apache isn't supported in Nginx.

uWSGI is a web server that supports several different types of server interfaces, including WSGI.
uWSGI handles serving the application content as well as things such as load balancing traffic across several different processes and threads.

To install uWSGI, we will use pip in the following way:

$ pip install uwsgi

In order to run our application, uWSGI needs a file with an accessible WSGI application. In a new file named wsgi.py in the top level of the project directory, add the following:

from webapp import create_app

app = create_app("webapp.config.ProdConfig")

To test uWSGI, we can run it from the command line with the following:

$ uwsgi --socket 127.0.0.1:8080 --wsgi-file wsgi.py --callable app --processes 4 --threads 2

If you are running this on your server, you should be able to access port 8080 and see your app (if you don't have a firewall, that is).

What this command does is load the app object from the wsgi.py file and make it accessible from localhost on port 8080. It also spawns four different processes with two threads each, which are automatically load balanced by a master process. This number of processes is overkill for the vast, vast majority of websites. To start off, use a single process with two threads and scale up from there.

Instead of adding all of the configuration options on the command line, we can create a text file to hold our configuration, which brings the same benefits for configuration that were listed in the section on supervisor. In a new file in the root of the project directory named uwsgi.ini, add the following:

[uwsgi]
socket = 127.0.0.1:8080
wsgi-file = wsgi.py
callable = app
processes = 4
threads = 2

uWSGI supports hundreds of configuration options as well as several official and unofficial plugins. To leverage the full power of uWSGI, you can explore the documentation at http://uwsgi-docs.readthedocs.org/.

Let's run the server now from supervisor:

[program:webapp]
command=uwsgi uwsgi.ini
directory=/home/deploy/webapp
user=deploy

We also need to install Nginx during the setup function:

def setup():
    …
    sudo("apt-get install -y nginx")

Because we are installing Nginx from the OS's package manager, the OS will handle running Nginx for us.

At the time of writing, the Nginx version in the official Debian package manager is several years old. To install the most recent version, follow the instructions here: http://wiki.nginx.org/Install.

Next, we need to create an Nginx configuration file and then copy it to the /etc/nginx/sites-available/ directory when we push the code. In a new file in the root of the project directory named nginx.conf, add the following:

server {
    listen 80;
    server_name your_domain_name;

    location / {
        include uwsgi_params;
        uwsgi_pass 127.0.0.1:8080;
    }

    location /static {
        alias /home/deploy/webapp/webapp/static;
    }
}

What this configuration file does is tell Nginx to listen for incoming requests on port 80 and forward all requests to the WSGI application that is listening on port 8080. Also, it makes an exception for any requests for static files and instead sends those requests directly to the file system. Bypassing uWSGI for static files gives a great performance boost, as Nginx is really good at serving static files quickly.

Finally, in the fabfile.py file:

def deploy():
    …
    with cd('/home/deploy/webapp'):
        …
        sudo("cp nginx.conf "
             "/etc/nginx/sites-available/[your_domain]")
        sudo("ln -sf /etc/nginx/sites-available/[your_domain] "
             "/etc/nginx/sites-enabled/[your_domain]")
    sudo("service nginx restart")
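Since Nginx and uWSGI here run on the same machine, a common variant (a hedged sketch, not from the book) is to replace the TCP port with a Unix domain socket, which avoids a little networking overhead; the socket path and permissions below are arbitrary choices, and chmod-socket is a real uWSGI option:

[uwsgi]
socket = /tmp/webapp.sock
chmod-socket = 666
wsgi-file = wsgi.py
callable = app
processes = 1
threads = 2

On the Nginx side, the matching change is to point uwsgi_pass at the socket file instead of the port: uwsgi_pass unix:/tmp/webapp.sock;

Apache and uWSGI

Using Apache httpd with uWSGI has mostly the same setup.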
First off, we need an Apache configuration file, in a new file in the root of our project directory named apache.conf:

<VirtualHost *:80>
    <Location />
        ProxyPass / uwsgi://127.0.0.1:8080/
    </Location>
</VirtualHost>

This file just tells Apache to pass all requests on port 80 to the uWSGI web server listening on port 8080. But this functionality requires an extra Apache plugin from uWSGI called mod_proxy_uwsgi. We can install this, as well as Apache itself, in the setup command:

def setup():
    …
    sudo("apt-get install -y apache2")
    sudo("apt-get install -y libapache2-mod-proxy-uwsgi")

Finally, in the deploy command, we need to copy our Apache configuration file into Apache's configuration directory:

def deploy():
    …
    with cd('/home/deploy/webapp'):
        …
        sudo("cp apache.conf "
             "/etc/apache2/sites-available/[your_domain]")
        sudo("ln -sf /etc/apache2/sites-available/[your_domain] "
             "/etc/apache2/sites-enabled/[your_domain]")
    sudo("service apache2 restart")

Summary

In this article, you learnt that there are many different options for hosting your application, each having its own pros and cons. Deciding on one depends on the amount of time and money you are willing to spend as well as the total number of users you expect.

Resources for Article:

Further resources on this subject:

Handling sessions and users [article]
Snap – The Code Snippet Sharing Application [article]
Man, Do I Like Templates! [article]

Working with Client Object Model in Microsoft SharePoint

Packt
05 Oct 2011
9 min read
Microsoft SharePoint 2010 is the best-in-class platform for content management and collaboration. With Visual Studio, developers have an end-to-end business solution development IDE. To leverage this powerful combination of tools, it is necessary to understand the different building blocks of SharePoint. In this article by Balaji Kithiganahalli, author of Microsoft SharePoint 2010 Development with Visual Studio 2010 Expert Cookbook, we will cover:

Creating a list using a Client Object Model
Handling exceptions
Calling Object Model asynchronously

(For more resources on Microsoft SharePoint, see here.)

Introduction

Since the out-of-the-box web services do not provide the full functionality that the server model exposes, developers always end up creating custom web services for use with client applications. But there are situations where deploying custom web services may not be feasible. For example, if your company is hosting SharePoint solutions in a cloud environment where access to the root folder is not permitted. In such cases, developing client applications with the new Client Object Model (OM) becomes a very attractive proposition.

SharePoint exposes three OMs, which are as follows:

Managed
Silverlight
JavaScript (ECMAScript)

Each of these OMs provides an object interface to the functionality exposed in the Microsoft.SharePoint namespace. While none of the Object Models exposes the full functionality that the server-side object exposes, an understanding of the server Object Model translates easily to developing applications using an OM.

A managed OM is used to develop custom .NET managed applications (service, WPF, or console applications). You can also use the OM for ASP.NET applications that are not running in the SharePoint context. A Silverlight OM is used by Silverlight client applications. A JavaScript OM is only available to applications that are hosted inside SharePoint, like web part pages or application pages.

Even though each of the OMs provides a different programming interface to build applications, behind the scenes they all call a service called Client.svc to talk to SharePoint. This Client.svc file resides in the ISAPI folder. The service calls are wrapped with an Object Model that developers can use to make calls to the SharePoint server. This way, developers make calls to an OM and the calls are all batched together in XML format to send to the server. The response is always received in JSON format, which is then parsed and associated with the right objects.

The basic architectural representation of the client interaction with the SharePoint server is as shown in the following image:

The three Object Models come in separate assemblies. The following table provides the locations and names of the assemblies:

OM           Location            Names
Managed      ISAPI folder        Microsoft.SharePoint.Client.dll, Microsoft.SharePoint.Client.Runtime.dll
Silverlight  Layouts\ClientBin   Microsoft.SharePoint.Client.Silverlight.dll, Microsoft.SharePoint.Client.Silverlight.Runtime.dll
JavaScript   Layouts             SP.js

The Client Object Model can be downloaded as a redistributable package from the Microsoft download center at: http://www.microsoft.com/downloads/en/details.aspx?FamilyID=b4579045-b183-4ed4-bf61-dc2f0deabe47

OM functionality focuses on objects at the site collection and below, the main reason being that it will be used to enhance end-user interaction. Hence the OM is a smaller subset of what is available through the server Object Model.
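Before the full recipe below, here is a minimal hedged sketch (not from the book) of the managed OM in action; the server URL is a placeholder, and note that nothing crosses the wire until ExecuteQuery sends the batched request to Client.svc:

using System;
using Microsoft.SharePoint.Client;

class QuickStart
{
    static void Main()
    {
        // ClientContext is the client-side proxy for the server
        using (ClientContext ctx = new ClientContext("http://intsp1"))
        {
            Web web = ctx.Web;
            ctx.Load(web, w => w.Title);  // queue the properties we need
            ctx.ExecuteQuery();           // one batched round trip to Client.svc
            Console.WriteLine(web.Title);
        }
    }
}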
In all three Object Models, the main object names are kept the same, and hence knowledge of one OM is easily portable to another. As indicated earlier, knowledge of the server Object Model transfers easily to development using the client OM. The following table shows some of the major objects in the client OM and their equivalent names in the server OM:

Client OM       Server OM
ClientContext   SPContext
Site            SPSite
Web             SPWeb
List            SPList
ListItem        SPListItem
Field           SPField

Creating a list using a Managed OM

In this recipe, we will learn how to create a list using a Managed Object Model. We will also add a new column to the list and insert about 10 rows of data into the list. For this recipe, we will create a console application that makes use of a generic list template.

Getting ready

You can copy the DLLs mentioned earlier to your development machine. Your development machine need not have the SharePoint server installed, but you should be able to access one with proper permissions. You also need the Visual Studio 2010 IDE installed on the development machine.

How to do it…

In order to create a list using a Managed OM, adhere to the following steps:

1. Launch your Visual Studio 2010 IDE as an administrator (right-click the shortcut and select Run as administrator).
2. Select File | New | Project. The new project wizard dialog box will be displayed (make sure to select .NET Framework 3.5 in the top drop-down box).
3. Select Windows Console application under the Visual C# | Windows | Console Application node from the Installed Templates section on the left-hand side.
4. Name the project OMClientApplication and provide a directory location where you want to save the project, then click on OK to create the console application template.
5. To add references to Microsoft.SharePoint.Client.dll and Microsoft.SharePoint.Client.Runtime.dll, go to the menu Project | Add Reference, navigate to the location where you copied the DLLs, and select them as shown in the following screenshot:
6. Now add the code necessary to create a list. A description field will also be added to our list. Your code should look like the following (make sure to change the URL passed to the ClientContext constructor to your environment):

using Microsoft.SharePoint.Client;

namespace OMClientApplication
{
    class Program
    {
        static void Main(string[] args)
        {
            using (ClientContext clientCtx = new ClientContext("http://intsp1"))
            {
                Web site = clientCtx.Web;

                // Create a list.
                ListCreationInformation listCreationInfo = new ListCreationInformation();
                listCreationInfo.Title = "OM Client Application List";
                listCreationInfo.TemplateType = (int)ListTemplateType.GenericList;
                listCreationInfo.QuickLaunchOption = QuickLaunchOptions.On;
                List list = site.Lists.Add(listCreationInfo);

                string DescriptionFieldSchema = "<Field Type='Note' DisplayName='Item Description' Name='Description' Required='True' MaxLength='500' NumLines='10' />";
                list.Fields.AddFieldAsXml(DescriptionFieldSchema, true, AddFieldOptions.AddToDefaultContentType);

                // Insert 10 rows of data - concat loop Id with "Item Number" string.
                for (int i = 1; i < 11; ++i)
                {
                    ListItemCreationInformation listItemCreationInfo = new ListItemCreationInformation();
                    ListItem li = list.AddItem(listItemCreationInfo);
                    li["Title"] = string.Format("Item number {0}", i);
                    li["Item_x0020_Description"] = string.Format("Item number {0} from client Object Model", i);
                    li.Update();
                }

                clientCtx.ExecuteQuery();
                Console.WriteLine("List creation completed");
                Console.Read();
            }
        }
    }
}

7. Build and execute the solution by pressing F5 or from the menu Debug | Start Debugging.
This should bring up the command window with a message indicating that the list creation completed, as shown in the following screenshot. Press Enter and close the command window.

Navigate to your site to verify that the list has been created. The following screenshot shows the list with the new field and ten items inserted:

How it works...

The first line of the code in the Main method creates an instance of the ClientContext class. The ClientContext instance provides information about the SharePoint server context in which we will be working. This is also the proxy for the server we will be working with. We passed the URL information to the context to get the entry point to that location. When you have access to the context instance, you can browse the site, web, and list objects of that location. You can access all the properties like Name, Title, Description, and so on.

The ClientContext class implements the IDisposable interface, and hence you need to use the using statement. Without that, you have to explicitly dispose of the object. If you do not do so, your application will have memory leaks. For more information on disposing of objects, refer to MSDN at: http://msdn.microsoft.com/en-us/library/ee557362.aspx

From the context, we were able to obtain access to our site object on which we wanted to create the list. We provided list properties for our new list through the ListCreationInformation instance. Through the instance of ListCreationInformation, we set values for list properties like the name, the template we want to use, whether the list should be shown in the quick launch bar, and so on. We added a new field to the field collection of the list by providing the field schema.

Each of the ListItems is created by providing ListItemCreationInformation. The ListItemCreationInformation is similar to ListCreationInformation, where you provide information regarding the list item, like whether it belongs to a document library or not, and so on. For more information on ListCreationInformation and ListItemCreationInformation members, refer to MSDN at: http://msdn.microsoft.com/en-us/library/ee536774.aspx.

All of this information is structured as XML and batched together to send to the server. In our case, we created a list and added a new field and about ten list items. Each of these would have an equivalent server-side call, and hence all these multiple calls were batched together to send to the server. The request is only sent to the server when we issue an ExecuteQuery or ExecuteQueryAsync method on the client context. The ExecuteQuery method creates an XML request and passes it to Client.svc. The application waits until the batch process on the server is completed and then returns with the JSON response. Client.svc makes the server Object Model calls to execute our request.

There's more...

By default, the ClientContext instance uses Windows authentication. It makes use of the Windows identity of the person executing the application. Hence, the person running the application should have proper authorization on the site to execute the commands. Exceptions will be thrown if proper permissions are not available for the user executing the application. We will learn about handling exceptions in the next recipe. It also supports Anonymous and FBA (ASP.NET form-based authentication) authentication.
The following is the code for passing FBA credentials if your site supports it:

using (ClientContext clientCtx = new ClientContext("http://intsp1"))
{
    clientCtx.AuthenticationMode = ClientAuthenticationMode.FormsAuthentication;
    FormsAuthenticationLoginInfo fba = new FormsAuthenticationLoginInfo("username", "password");
    clientCtx.FormsAuthenticationLoginInfo = fba;
    //Business Logic
}

Impersonation

In order to impersonate, you can pass credential information to the ClientContext as shown in the following code:

clientCtx.Credentials = new NetworkCredential("username", "password", "domainname");

Passing credential information this way is supported only in the Managed OM.
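The article's topic list includes handling exceptions, which the next recipe covers in full; as a hedged preview (a sketch, not the book's recipe), server-side failures during the batched call surface on the client as ServerException, which can be caught around ExecuteQuery. This snippet assumes the same console application structure as the recipe above:

using (ClientContext clientCtx = new ClientContext("http://intsp1"))
{
    try
    {
        List list = clientCtx.Web.Lists.GetByTitle("A list that may not exist");
        clientCtx.Load(list);
        // Errors are raised only here, when the batch actually runs
        clientCtx.ExecuteQuery();
        Console.WriteLine(list.Title);
    }
    catch (ServerException ex)
    {
        // Thrown when the server-side Object Model call fails,
        // for example when the list is not found
        Console.WriteLine("Server error: " + ex.Message);
    }
}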


A look into responsive design frameworks

Packt
19 Nov 2014
11 min read
In this article by Thoriq Firdaus, author of Responsive Web Design by Example Beginner's Guide Second Edition, we will look into responsive web design, which is one of the most discussed topics in the web design and development community, so I believe many of you have heard about it to a certain extent.

(For more resources related to this topic, see here.)

Ethan Marcotte was the one who coined the term "Responsive Web Design". He suggests in his article, Responsive Web Design, that the web should seamlessly adjust and adapt to the environment where users view the website, rather than addressing it exclusively for a specific platform. In other words, the website should be responsive: it should be presentable at any screen size, regardless of the platform on which the website is viewed.

Take the Time website as an example: the web page fits nicely in a desktop browser with a large screen size and also in a mobile browser with a limited viewable area. The layout shifts and adapts as the viewport size changes. As you can see from the following screenshot, the header background color turns dark grey, the image is scaled down proportionally, and the Tap bar appears where Time hides the Latest news, Magazine, and Videos sections:

Yet, building a responsive website can be very tedious work. There are many measurements to consider when building a responsive website, one of which is creating the responsive grid.

The grid helps us build websites with proper alignment. If you have ever used the 960.gs framework, which is one of the popular CSS frameworks, you will have experienced how easy it is to organize the web page layout by adding preset classes like grid_1 or push_1 to the elements. However, the 960.gs grid is set in a fixed unit, the pixel (px), which is not applicable when it comes to building a responsive website. We need a framework with the grid set in the percentage (%) unit to build responsive websites; we need a responsive framework.

A responsive framework provides the building blocks to build responsive websites. Generally, it includes the classes to assemble a responsive grid, the basic styles for typography and form inputs, and a few styles to address various browser quirks. Some frameworks even go further with a series of styles for creating common design patterns and web user interfaces such as buttons, navigation bars, and image sliders. These predefined styles allow us to develop responsive websites faster with less hassle. The following are a few other reasons why using a responsive framework is a favorable option for building responsive websites:

Browser compatibility: Assuring the consistency of a web page across different browsers is really painful, and often more distressing than developing the website itself. With a framework, we can minimize the work of addressing browser compatibility issues. The framework developers have most likely tested the framework in various desktop and mobile browsers under the most constrained environments prior to releasing it publicly.

Documentation: A framework, in general, also comes with comprehensive documentation that records the bits and pieces of using the framework. The documentation will be very helpful for entry-level users beginning to study the framework. It is also a great advantage when we are working with a team. We can refer to the documentation to get everyone on the same page and follow the standard code-writing conventions.
Community and extensions: Some popular frameworks like Bootstrap and Foundation have an active community that helps address bugs in the framework and extend its functionality. jQuery UI Bootstrap is perhaps a good example in this case: a collection of styles for jQuery UI widgets to match the feel and look of Bootstrap's original theme. It's now common to find free WordPress and Joomla themes that are based on these frameworks.

The Responsive.gs framework

Responsive.gs is a lightweight responsive framework, merely 1 KB in size when compressed. Responsive.gs is based on a width of 940px and comes in three grid variants: 12, 16, and 24 columns. What's more, Responsive.gs is shipped with a box-sizing polyfill that enables CSS3 box-sizing in Internet Explorer 6 through 8, making it decently presentable in those browsers.

A polyfill is a piece of code that enables certain web features and capabilities that are not built into the browser natively; usually, it addresses the older versions of Internet Explorer. For example, you can use HTML5 Shiv so that new HTML5 elements, such as <header>, <footer>, and <nav>, are recognized in Internet Explorer 6 through 8.

CSS box model

HTML elements that are categorized as block-level elements are essentially boxes drawn with the content width, height, margin, padding, and border through CSS. Prior to CSS3, we faced a constraint when specifying a box. For instance, when we specify a <div> with a width and height of 100px, as follows:

div {
    width: 100px;
    height: 100px;
}

The browser will render the div as a 100px square box. However, this will only be true if the padding and border have not been added in. Since a box has four sides, a padding of 10px (padding: 10px;) will actually add 20px to the width and height, that is, 10px for each side.

While it takes up space on the page, the element's margin is space reserved outside the element rather than part of the element itself; thus, if we give an element a background color, the margin area will not take on that color.

CSS3 box sizing

CSS3 introduced a new property called box-sizing that lets us specify how the browser should calculate the CSS box model. There are a couple of values that we can apply to the box-sizing property, which are:

content-box: This is the default value of the box model. This value specifies the padding and the border box's thickness outside the specified width and height of the content, as we demonstrated in the preceding section.

border-box: This value does the opposite; it includes the padding and the border box within the width and height of the box.

padding-box: At the time of writing this article, this value is experimental and has only been added recently. This value specifies the box dimensions.

Let's take our preceding example, but this time we will set the box-sizing model to border-box. As mentioned above, the border-box value will retain the box's width and height at 100px, regardless of the padding and border additions. The following illustration shows a comparison between the outputs of the two different values, content-box (the default) and border-box.

The Bootstrap framework

Bootstrap was originally built by Mark Otto and was initially intended only for internal use at Twitter. Long story short, Bootstrap was then launched for free for public consumption.
Bootstrap has long been associated with Twitter, but since the author departed from Twitter, Bootstrap itself has grown beyond his expectations. Dating back to its initial development, the responsive feature was not yet added; it was added in version 2, along with the increasing demand for creating responsive websites.

Bootstrap also comes with many more features compared to Responsive.gs. It is packed with preset user interface styles, which comprise common user interfaces used on websites, such as buttons, navigation bars, pagination, and forms, so you don't have to create them from scratch when starting a new project. On top of that, Bootstrap is also powered with some custom jQuery plugins, like an image slider, carousel, popover, and modal box.

You can use and customize Bootstrap in many ways. You can customize the Bootstrap theme and components directly through the CSS style sheets, the Bootstrap Customization page, or the Bootstrap LESS variables and mixins, which are used to generate the style sheets.

The Foundation framework

Foundation is a framework created by ZURB, a design agency based in California. Similar to Bootstrap, Foundation is more than just a responsive CSS framework; it is shipped with a preset grid, components, and a number of jQuery plugins to present interactive features.

Some high-profile brands have built their websites using Foundation, such as McAfee, one of the most respected brands in computer anti-virus software.

The Foundation style sheet is powered by Sass, a Ruby-based CSS preprocessor.

There are many complaints that the code in responsive frameworks is excessive; since a framework like Bootstrap is used widely, it has to cover every design scenario, and thus it comes with extra styles that you might not need for your website. Fortunately, we can easily minimize this issue by using the right tools, like CSS preprocessors, and following a proper workflow.

Truth be told, there isn't a perfect solution, and certainly using a framework isn't for everyone. It all comes down to your needs, your website's needs, and in particular your client's needs and budget. In reality, you will have to weigh these factors to decide whether or not you will go with a responsive framework. Jem Kremer has an extensive discussion in this regard in her article: Responsive Design Frameworks: Just Because You Can, Should You?

A brief introduction to CSS preprocessors

Both Bootstrap and Foundation use CSS preprocessors to generate their style sheets. Bootstrap uses LESS, though official support for Sass was also released recently. Foundation, on the contrary, uses Sass as the only way to generate its style sheets.

A CSS preprocessor is not an entirely new language. If you know CSS, you should feel at home with a CSS preprocessor immediately. A CSS preprocessor simply extends CSS by allowing the use of programming features like variables, functions, and operations. Below is an example of how we write CSS with LESS syntax:

@color: #f3f3f3;

body {
    background-color: @color;
}

p {
    color: darken(@color, 50%);
}

When the above code is compiled, it takes the @color variable that we have defined and places the value in the output, as follows:

body {
    background-color: #f3f3f3;
}

p {
    color: #737373;
}

The variable is reusable throughout the style sheet, which enables us to retain style consistency and makes the style sheet more maintainable.
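Variables are only one of the programming features mentioned above. As a small hedged illustration (not from the book), LESS also supports parameterized mixins, another way a preprocessor cuts down on repetition; the class names here are arbitrary:

// A parameterized mixin with a default value
.bordered(@width: 1px) {
    border: @width solid #737373;
}

.sidebar {
    .bordered();    // compiles to border: 1px solid #737373;
}

.callout {
    .bordered(3px); // compiles to border: 3px solid #737373;
}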
Delve into responsive web design

Our discussion on responsive web design herein, though essential, is merely the tip of the iceberg. There is so much more to responsive web design than what we have covered in the preceding sections. I would suggest that you take your time to get more insight into and comprehension of responsive web design, including the concept, the technicalities, and some constraints. The following are some of the best references to follow:

Responsive Web Design by Rachel Shillcock, also a good place to start.
Don't Forget the Viewport Meta Tag by Ian Yates.
How To Use CSS3 Media Queries To Create a Mobile Version of Your Website by Rachel Andrew.
Read about the future standard for responsive images using the HTML5 picture element: Responsive Images Done Right: A Guide To <picture> And srcset by Eric Portis.
A roundup of methods for making data tables responsive.

Responsive web design inspiration sources

Now, before we jump into the next chapters and start building responsive websites, it may be a good idea to spend some time looking for ideas and inspiration from responsive websites; to see how they are built, and how the layout is organized in desktop browsers as well as in mobile browsers.

It's a common thing for websites to be redesigned from time to time to stay fresh. So herein, instead of making a pile of website screenshots, which may no longer be relevant in the next several months because of redesigns, we're better off going straight to the websites that curate other websites, and the following are the places to go:

MediaQueries
Awwwards
CSS Awards
WebDesignServed
Bootstrap Expo
Zurb Responsive

Summary

Using a framework is an easier and faster way to get responsive websites up and running rather than building everything from scratch on our own. Alas, as mentioned, using a framework also has some negative sides. If it is not done properly, the end result could all go wrong. The website could be stuffed and stuck with unnecessary styles and JavaScript, which in the end makes the website slow to load and hard to maintain. We need to set up the right tools that not only facilitate the projects but also help us make the website more easily maintainable.

Resources for Article:

Further resources on this subject:

Linking Dynamic Content from External Websites [article]
Building Responsive Image Sliders [article]
Top Features You Need to Know About – Responsive Web Design [article]