
How-To Tutorials - Web Development

1802 Articles

Linux Shell Script: Monitoring Activities

Packt
28 Jan 2011
8 min read
Linux Shell Scripting Cookbook: solve real-world shell scripting problems with over 110 simple but incredibly effective recipes. Master the art of crafting one-liner command sequences to perform tasks such as text processing, digging data out of files, and a lot more. Practical problem-solving techniques for the latest Linux platforms, packed with easy-to-follow examples that exercise all the features of the Linux shell scripting language. Part of Packt's Cookbook series: each recipe is a carefully organized sequence of instructions to complete the task as efficiently as possible.

Disk usage hacks

Disk space is a limited resource. We frequently perform disk usage calculations on hard disks or other storage media to find out the free space available. When free space becomes scarce, we need to find the large files that should be deleted or moved in order to reclaim space. Disk usage manipulations are common in shell scripting contexts. This recipe illustrates the various commands used for disk manipulation, and problems where disk usage can be calculated with a variety of options.

Getting ready

df and du are the two significant commands used for calculating disk usage in Linux. The command df stands for disk free and du stands for disk usage. Let's see how we can use them to perform various tasks that involve disk usage calculation.

How to do it...

To find the disk space used by a file (or files), use:

    $ du FILENAME1 FILENAME2 ...

For example:

    $ du file.txt
    4

By default on Linux, the result is shown in 1,024-byte (1 KB) blocks rather than bytes; the example file occupies 4 KB. In order to obtain the disk usage for all files inside a directory, with the usage of each file shown on its own line, use:

    $ du -a DIRECTORY

-a outputs results for all files in the specified directory or directories, recursively. Running du DIRECTORY outputs a similar result, but shows only the space consumed by subdirectories, not the usage of each individual file. For printing per-file disk usage, -a is mandatory. For example:

    $ du -a test
    4   test/output.txt
    4   test/process_log.sh
    4   test/pcpu.sh
    16  test

An example of using du DIRECTORY is as follows:

    $ du test
    16  test

There's more...

Let's go through additional usage practices for the du command.

Displaying disk usage in KB, MB, or blocks

By default, du displays the total blocks used by a file. A more human-readable format expresses disk usage in standard units: KB, MB, or GB. In order to print the disk usage in a display-friendly format, use -h as follows:

    du -h FILENAME

For example:

    $ du -sh test/pcpu.sh
    4.0K  test/pcpu.sh
    # Multiple file arguments are accepted

Or:

    # du -h DIRECTORY
    $ du -h hack/
    16K  hack/

Finding the 10 largest files in a given directory

Finding large files is a task we come across regularly, since we often need to delete or move them. We can easily find large files by combining the du and sort commands. The following one-liner achieves the task:

    $ du -ak SOURCE_DIR | sort -nrk 1 | head

Here -a makes du report all directories and files, so du traverses SOURCE_DIR and calculates the size of every file. The first column of the output contains the size in kilobytes, since -k is specified, and the second column contains the file or folder name. sort performs a numerical sort on column 1 and reverses it, and head takes the first 10 lines of the output.
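To make the one-liner reusable, we can wrap it in a small shell function. This is just a sketch: the function name (largest) and the default count of 10 are our own choices, not part of the recipe:

    #!/bin/bash
    # largest: list the N biggest files/directories under a directory.
    # Usage: largest [DIR] [COUNT]   (defaults: current directory, 10 entries)
    largest() {
        local dir=${1:-.} count=${2:-10}
        du -ak "$dir" | sort -nrk 1 | head -n "$count"
    }

    largest /home/slynux 4    # same output as the example below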
Running the pipeline on a home directory, for example:

    $ du -ak /home/slynux | sort -nrk 1 | head -n 4
    50220 /home/slynux
    43296 /home/slynux/.mozilla
    43284 /home/slynux/.mozilla/firefox
    43276 /home/slynux/.mozilla/firefox/8c22khxc.default

One drawback of the above one-liner is that it includes directories in the result. When we need to find only the largest files, not directories, we can improve the one-liner to output only files, as follows:

    $ find . -type f -exec du -k {} \; | sort -nrk 1 | head

Here we used find to pass only regular files to du, rather than letting du traverse recursively by itself.

Calculating execution time for a command

While testing an application or comparing different algorithms for a given problem, the execution time taken by a program is critical. A good algorithm should execute in the minimum amount of time. There are several situations in which we need to monitor the time a program takes to execute. For example, while learning about sorting algorithms, how do you practically state which algorithm is faster? The answer is to measure the execution time for the same data set. Let's see how to do it.

How to do it...

time is a command that is available on any Unix-like operating system. Prefix the command whose execution time you want to measure with time, for example:

    $ time COMMAND

The command will execute and its output will be shown. Along with that output, the time command writes the time taken to stderr. An example is as follows:

    $ time ls
    test.txt
    next.txt

    real    0m0.008s
    user    0m0.001s
    sys     0m0.003s

It shows the real, user, and system times for the execution. The three times can be defined as follows:

- Real is wall clock time: the time from start to finish of the call. This is all elapsed time, including time slices used by other processes and the time the process spends blocked (for example, waiting for I/O to complete).
- User is the amount of CPU time spent in user-mode code (outside the kernel) within the process. This is only the actual CPU time used in executing the process; other processes, and time the process spends blocked, do not count towards this figure.
- Sys is the amount of CPU time spent in the kernel within the process, that is, CPU time spent in system calls, as opposed to library code, which still runs in user space. Like user time, this counts only the CPU time used by the process.

An executable binary of the time command is available at /usr/bin/time, and a shell built-in named time also exists. When we run time, the shell built-in is called by default. The built-in has limited options, so we should use the absolute path of the executable (/usr/bin/time) for the additional functionality described below.

We can write the time statistics to a file using the -o filename option, as follows:

    $ /usr/bin/time -o output.txt COMMAND

The filename should always appear directly after the -o flag. To append the time statistics to a file without overwriting it, use the -a flag along with the -o option:

    $ /usr/bin/time -a -o output.txt COMMAND

We can also format the time output using format strings with the -f option. A format string consists of parameters corresponding to specific statistics, prefixed with %.
The format strings for real time, user time, and sys time are as follows:

- Real time: %e
- User time: %U
- Sys time: %S

By combining parameter strings, we can create formatted output as follows:

    $ /usr/bin/time -f "FORMAT STRING" COMMAND

For example:

    $ /usr/bin/time -f "Time: %U" -a -o timing.log uname
    Linux

Here %U is the parameter for user time. When formatted output is produced, the output of the timed COMMAND is written to standard output, and the time information is written to standard error. We can redirect the command output using the redirection operator (>) and redirect the time information using the error redirection operator (2>). For example:

    $ /usr/bin/time -f "Time: %U" uname > command_output.txt 2> time.log
    $ cat time.log
    Time: 0.00
    $ cat command_output.txt
    Linux

Many details regarding a process can be collected using the time command. The important details include the exit status, the number of signals received, the number of context switches made, and so on. Each parameter can be displayed using a suitable format string. Some of the interesting parameters (as documented in the /usr/bin/time manual) include:

- %x: exit status of the command
- %k: number of signals delivered to the process
- %w: number of voluntary context switches
- %c: number of involuntary context switches
- %Z: system page size in bytes

For example, the page size can be displayed using the %Z parameter as follows:

    $ /usr/bin/time -f "Page size: %Z bytes" ls > /dev/null
    Page size: 4096 bytes

Here the output of the timed command is not needed, so standard output is directed to /dev/null to prevent it from being written to the terminal.
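Putting the pieces together, here is a small sketch that appends one summary line per run to a log file. The format string and the log name are our own; %e, %U, %S, and %x are standard /usr/bin/time parameters:

    #!/bin/bash
    # time_log.sh: run a command and append its timing statistics to run_times.log.
    # Usage: ./time_log.sh COMMAND [ARGS...]
    /usr/bin/time -a -o run_times.log \
        -f "cmd=$1 real=%e user=%U sys=%S exit=%x" \
        "$@" > /dev/null

    # Example:
    #   ./time_log.sh ls -l
    #   cat run_times.log
    #   cmd=ls real=0.00 user=0.00 sys=0.00 exit=0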


25 Useful Extensions for Drupal 7 Themers

Packt
07 Jun 2011
5 min read
Drupal 7 Themes: create new themes for your Drupal 7 site with a clean layout and powerful CSS styling.

Drupal modules

There exist within the Drupal.org site a number of modules that are relevant to your work of theming a site. Some are straightforward tools that make standard theming tasks easier; others are extensions to Drupal functionality that enable you to do new things, or to do from the admin interface things that would normally require working with the code. The list here is not meant to be comprehensive, but it does cover the key modules that are either presently available for Drupal 7 or at least in development. There are additional relevant modules that are not listed here because, at the time of writing, they showed no signs of providing a Drupal 7 version.

Caution: some of these modules attempt to reduce complex tasks to simple GUI-based admin interfaces. While that is a wonderful and worthy effort, be conscious that tools of this nature can raise performance and security issues and, due to their complexity, sometimes conflict with other modules designed to perform part of the same functions. As with any new module, test it out locally first and make sure it not only does what you want, but also delivers no unpleasant surprises.

The modules covered in this article include: Administration Menu, Chaos Tool Suite, Colorbox, Conditional Stylesheets, Devel, @font-your-face, Frontpage, HTML5 Tools, .mobi loader, Mobile Theme, Nice Menus, Noggin, Organic Groups, Panels, Semantic Views, Skinr, Style Guide, Sweaver, Taxonomy Theme, Theme Developer, ThemeKey, Views, and Webform.

Administration Menu

The Administration Menu was a mainstay of many Drupal sites built during the lifespan of Drupal 6.x. With the arrival of Drupal 7, we thought it unlikely we would need the module, as the new toolbar functionality in core accomplished much the same thing. In the course of writing this, however, we installed Administration Menu and were pleasantly surprised to find that not only can you run the old-style Administration Menu, but there is now also the option to run a Toolbar-style Administration Menu, as shown in the following screenshot:

The Administration Menu Toolbar offers all the options of the default Toolbar, plus the added advantage of exposing all the menu options without having to navigate through sub-menus on the overlay. Additionally, you have fast access to clearing the cache, running cron, and disabling the Devel module (assuming you have it installed). A great little tweak to the new Drupal 7 administration interface. View the project at: http://drupal.org/project/admin_menu.

Chaos Tool Suite

This module provides a collection of APIs and tools to assist developers. Though the module is required by both the Views and Panels modules, discussed elsewhere in this article, it provides other features that also make it attractive. Among the tools to help themers are the Form Wizard, which simplifies the creation of complex forms, and the Dependent widget, which allows you to set conditional field visibility on forms. The suite also includes CSS Tools to help cache and sanitize your CSS. Learn more at http://drupal.org/project/ctools.

Colorbox

The Colorbox module for Drupal provides a jQuery-based lightbox plugin. It integrates the third-party plugin of the same name (http://colorpowered.com/colorbox/).
The module allows you to easily create lightboxes for images, forms, and content, and supports the most commonly requested features, including slideshows, captions, and the preloading of images. Colorbox comes with a selection of styles, or you can create your own with CSS. To run this module, you must first download and install the Colorbox plugin from the aforementioned URL. Visit the Colorbox Drupal module project page at: http://drupal.org/project/colorbox.

Conditional Stylesheets

This module allows themers to easily address cross-browser compatibility issues with Internet Explorer. With this module installed, you can add stylesheets targeting the browser via the theme's .info file, rather than having to modify the template.php file. The module relies on the conditional comments syntax originated by Microsoft. To learn more, visit the project site at http://drupal.org/project/conditional_styles.

Devel

The Devel module is a suite of tools useful to both module and theme developers. Among the options it provides:

- Auto-generate content, menus, taxonomies, and users
- Print summaries of DB queries
- Print arrays
- Log performance
- Summarize node access

The module is also a prerequisite for the Theme Developer module, discussed later in this article. Learn more: http://drupal.org/project/devel.

@font-your-face

@font-your-face provides an admin interface for browsing and applying web fonts to your Drupal themes. The module employs the CSS @font-face syntax and draws upon a variety of online font resources, including Google Fonts, Typekit.com, KERNEST, and others. The system automatically loads fonts from the selected sources and you can apply them to the styles you designate, without having to manually edit the stylesheets. It's easy to use and has the potential to change the way you select and use fonts on your websites. @font-your-face requires the Views module to function. Learn more at the project site: http://drupal.org/project/fontyourface.

Frontpage

This module serves a very specific purpose: it allows you to designate, from the admin interface, different front pages for anonymous and authenticated users. Though you can accomplish the same thing through the use of $classes and a bit of work, the module makes it possible for anyone to set this up without having to resort to coding. Visit the project site at http://drupal.org/project/frontpage.
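Most of the modules above can also be fetched and enabled from the command line. A sketch using the Drush CLI, assuming Drush is installed; admin_menu, ctools, and devel are the actual project short names on drupal.org, and the same pattern applies to the other modules once you confirm their short names on each project page:

    # Download and enable a few of the modules discussed above (Drupal 7 era Drush).
    drush dl admin_menu ctools devel
    drush en -y admin_menu ctools devel

    # Clear caches after enabling new modules.
    drush cc all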


The Django Debug Toolbar

Packt
20 Apr 2010
9 min read
The debug toolbar has a far more advanced way of displaying information than simply embedding it in HTML comments. Its capabilities are best shown by example, so we will proceed immediately with installing the toolbar.

Installing the Django Debug Toolbar

The toolbar can be found on the Python Package Index site: https://pypi.python.org/pypi/django-debug-toolbar. Once installed, activating the debug toolbar in a Django project requires just a couple of settings.

First, the debug toolbar middleware, debug_toolbar.middleware.DebugToolbarMiddleware, must be added to the MIDDLEWARE_CLASSES setting. The toolbar's documentation notes that it should be placed after any other middleware that encodes the response content, so it is best to place it last in the middleware sequence.

Second, the debug_toolbar application needs to be added to INSTALLED_APPS. The debug_toolbar application uses Django templates to render its information, so it needs to be listed in INSTALLED_APPS for its templates to be found by the application template loader.

Third, the debug toolbar requires that the requesting IP address be listed in INTERNAL_IPS.

Finally, the debug toolbar is displayed only when DEBUG is True. We've been running with debug turned on, so we don't have to make any changes here. Note also that the debug toolbar allows you to customize the conditions under which it is displayed. It is possible, then, to set things up so that the toolbar is displayed for requesting IP addresses not in INTERNAL_IPS, or when debug is not turned on; for our purposes, though, the default configuration is fine, so we will not change anything.

One thing that is not required is for the application itself to use a RequestContext in order for things such as the SQL query information to be available in the toolbar. The debug toolbar runs as middleware, and thus does not depend on the application using a RequestContext to generate its information. The changes made to the survey views to specify RequestContexts on render_to_response calls would therefore not have been needed if we had started off with the Django Debug Toolbar.

Debug toolbar appearance

Once the debug toolbar is added to the middleware and installed applications settings, we can see what it looks like by simply visiting any page in the survey application. Let's start with the home page. The returned page should now look something like this:

Note that this screenshot shows the appearance of the 0.8.0 version of the debug toolbar. Earlier versions looked considerably different, so if your results do not look like this, you may be using a different version than 0.8.0. The version you have will most likely be newer than what was available when this was written, and there may be additional toolbar panels or functions that are not covered here.

As you can see, the debug toolbar appears on the right-hand side of the browser window. It consists of a series of panels that can be individually enabled or disabled by changing the toolbar configuration. The ones shown here are the ones enabled by default. Before taking a closer look at individual panels, notice that the toolbar contains an option at the top to hide it. If Hide is selected, the toolbar reduces itself to a small tab-like indication that it is present:

This can be very useful for cases where the expanded version of the toolbar obscures application content on the page.
All of the information provided by the toolbar is still accessible after clicking again on the DjDT tab; it is just out of the way for the moment.

Most of the panels provide detailed information when clicked. A few also provide summary information in the main toolbar display. As of debug toolbar version 0.8.0, the first panel listed, Django Version, provides only summary information; there is no more detailed information available by clicking on it. As you can see in the screenshot, Django 1.1.1 is the version in use here. Note that the current latest source version of the debug toolbar already provides more information for this panel than the 0.8.0 release: since 0.8.0, the panel has been renamed to Versions, and it can be clicked to reveal more details, including version information for the toolbar itself and for any other installed Django applications that provide version information.

The other three panels that show summary information are the Time, SQL, and Logging panels. Thus, we can see at a glance from the first appearance of the page that 60 milliseconds of CPU time were used to produce it (111 milliseconds total elapsed time), that the page required four queries, which took 1.95 milliseconds, and that zero messages were logged during the request.

In the following sections, we will dig into exactly what information is provided by each of the panels when clicked. We'll start with the SQL panel, since it is one of the most interesting and provides the same information we produced manually earlier (in addition to a lot more).

The SQL panel

If we click on the SQL section of the debug toolbar, the page changes to:

At a glance, this is a much nicer display of the SQL queries for the page than what we came up with earlier. The queries themselves are highlighted so that SQL keywords stand out, making them easier to read. Also, since they are not embedded inside an HTML comment, their content does not need to be altered in any way: there was no need to change the content of the query containing the double dash in order to avoid display problems. (Now would probably be a good time to remove that added query, before we forget why we added it.)

Notice also that the times listed for each query are more specific than what was available in Django's default query history. The debug toolbar replaces Django's query recording with its own, and provides timings in units of milliseconds instead of seconds. The display also includes a graphical representation of how long each query took, in the form of horizontal bars above each query. This representation makes it easy to see when one or more queries are much more expensive than the others; in fact, if a query takes an excessive amount of time, its bar will be colored red. In this case, there is not a great deal of difference in the query times, and none took particularly long, so all the bars are of similar length and are colored gray.

Digging deeper, some of the information we had to figure out manually earlier in this article is just a click away on this SQL query display: specifically, the answer to the question of which line of our code triggered a particular SQL query. Each of the displayed queries has a Toggle Stacktrace option which, when clicked, shows the stack trace associated with the query. Here we can see that all queries are made by the home method in the survey application's views.py file.
Note that the toolbar filters out levels in the stack trace that are within Django itself, which explains why each of these traces shows only one level. The first query is triggered by line 61, which contains the filter call added to test what would happen if a query containing two dashes in a row was logged. The remaining queries are all attributed to line 66, the last line of the render_to_response call in the home view. These queries, as we figured out earlier, are all made during the rendering of the template. (Your line numbers may vary from those shown here, depending on where in the file various functions were placed.)

Finally, this SQL query display makes available information that we had not even gotten around to wanting yet. Under the Action column are links to SELECT, EXPLAIN, and PROFILE each query. Clicking on the SELECT link shows what the database returns when the query is actually executed. For example:

Similarly, clicking on EXPLAIN and PROFILE displays what the database reports when asked to explain or profile the selected query, respectively. The exact display, and how to interpret the results, differs from database to database. (In fact, the PROFILE option is not available with all databases; it happens to be supported by the database in use here, MySQL.) Interpreting the results from EXPLAIN and PROFILE is beyond the scope of what's covered here, but it is useful to know that if you ever need to dig deep into the performance characteristics of a query, the debug toolbar makes it easy to do so.

We've now gotten a couple of pages deep into the SQL query display. How do we get back to the actual application page? Clicking on the circled >> at the upper right of the main page display returns us to the previous SQL query page, where the circled >> turns into a circled X. Clicking the circled X on any panel detail page closes the details and returns to displaying the application data. Alternatively, clicking again on the toolbar area for the currently displayed panel has the same effect as clicking on the circled symbol in the display area. Finally, if you prefer the keyboard to the mouse, pressing Esc has the same effect as clicking the circled symbol.

Now that we have explored the SQL panel, let's take a brief look at each of the other panels provided by the debug toolbar.

The Time panel

Clicking on the Time panel brings up more detailed information on where time was spent during production of the page: the total CPU time is split between user and system time, the total elapsed (wall clock) time is listed, and the numbers of voluntary and involuntary context switches are displayed. For a page that takes too long to generate, these additional details about where the time is being spent can help point towards a cause.

Note that the detailed information provided by this panel comes from the Python resource module. This is a Unix-specific Python module that is not available on non-Unix-type systems. On Windows, for example, the debug toolbar Time panel will show only summary information, and no further details will be available.
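Collecting the installation steps above into one place, a minimal settings.py fragment might look like the following sketch. The setting names are exactly those described above for the 0.8.0-era toolbar; later versions renamed MIDDLEWARE_CLASSES and changed the configuration style, so check the toolbar's documentation for your version:

    # settings.py (fragment) -- enabling the Django Debug Toolbar, 0.8.0 era

    DEBUG = True  # the toolbar is displayed only when DEBUG is True

    MIDDLEWARE_CLASSES = (
        'django.middleware.common.CommonMiddleware',
        # ... other middleware ...
        # placed last, after anything that encodes the response content:
        'debug_toolbar.middleware.DebugToolbarMiddleware',
    )

    INSTALLED_APPS = (
        'django.contrib.contenttypes',
        'django.contrib.sessions',
        # ... project applications ...
        'debug_toolbar',  # so its templates are found by the template loader
    )

    # requests must come from one of these addresses for the toolbar to show
    INTERNAL_IPS = ('127.0.0.1',)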


Node.js Fundamentals

Packt
22 May 2015
17 min read
This article is written by Krasimir Tsonev, the author of Node.js By Example.

Node.js is one of the most popular JavaScript-driven technologies nowadays. It was created in 2009 by Ryan Dahl and since then, the framework has evolved into a well-developed ecosystem. Its package manager is full of useful modules and developers around the world have started using Node.js in their production environments. In this article, we will learn about the following:

- Node.js building blocks
- The main capabilities of the environment
- The package management of Node.js

Understanding the Node.js architecture

Back in the day, Ryan was interested in developing network applications. He found out that most high-performance servers followed similar concepts: their architecture was based on an event loop, and they worked with nonblocking input/output operations. These operations permit other processing activities to continue before an ongoing task is finished. These characteristics are very important if we want to handle thousands of simultaneous requests.

Most servers written in Java or C use multithreading; they process every request in a new thread. Ryan decided to try something different: a single-threaded architecture. In other words, all the requests that come to the server are processed by a single thread. This may sound like a nonscalable solution, but Node.js is definitely scalable. We just have to run different Node.js processes and use a load balancer that distributes the requests between them.

Ryan needed something that is event-loop-based and works fast. As he pointed out in one of his presentations, big companies such as Google, Apple, and Microsoft invest a lot of time in developing high-performance JavaScript engines, which have become faster every year and which implement event-loop architecture. JavaScript has become really popular in recent years, and the community and the hundreds of thousands of developers who are ready to contribute made Ryan think about using it. Here is a diagram of the Node.js architecture:

In general, Node.js is made up of three things:

- V8, Google's JavaScript engine, which is used in the Chrome web browser (https://developers.google.com/v8/)
- A thread pool, the part that handles the file input/output operations; all the blocking system calls are executed here (http://software.schmorp.de/pkg/libeio.html)
- The event loop library (http://software.schmorp.de/pkg/libev.html)

On top of these three blocks, we have several bindings that expose low-level interfaces. The rest of Node.js is written in JavaScript. Almost all the APIs that we see as built-in modules, and which are present in the documentation, are written in JavaScript.

Installing Node.js

A fast and easy way to install Node.js is by visiting the official site, nodejs.org, and downloading the appropriate installer for your operating system. For OS X and Windows users, the installer provides a nice, easy-to-use interface. For developers who use Linux as an operating system, Node.js is available in the APT package manager. The following commands will set up Node.js and Node Package Manager (NPM):

    sudo apt-get update
    sudo apt-get install nodejs
    sudo apt-get install npm

Running a Node.js server

Node.js is a command-line tool. After installing it, the node command will be available in our terminal. The node command accepts several arguments, but the most important one is the file that contains our JavaScript.
Let's create a file called server.js and put the following code inside:

    var http = require('http');
    http.createServer(function (req, res) {
        res.writeHead(200, {'Content-Type': 'text/plain'});
        res.end('Hello World\n');
    }).listen(9000, '127.0.0.1');
    console.log('Server running at http://127.0.0.1:9000/');

If you run node ./server.js in your console, you will have the Node.js server running. It listens for incoming requests at localhost (127.0.0.1) on port 9000. The very first line of the preceding code requires the built-in http module. In Node.js, we have the require global function, which provides the mechanism for using external modules. We will see how to define our own modules in a bit. After that, the script continues with the createServer and listen methods on the http module. In this case, the API of the module is designed in such a way that we can chain these two methods, as in jQuery. The first one (createServer) accepts a function, also known as a callback, which is called every time a new request comes to the server. The second one makes the server listen. The result that we will get in a browser is as follows:

Defining and using modules

JavaScript as a language does not have mechanisms for defining real classes; in fact, everything in JavaScript is an object, and we normally inherit properties and functions from one object to another. Thankfully, Node.js adopts the concepts defined by CommonJS, a project that specifies an ecosystem for JavaScript. We encapsulate logic in modules, and every module is defined in its own file. Let's illustrate how everything works with a simple example. Say we have a module that represents this book, and we save it in a file called book.js:

    // book.js
    exports.name = 'Node.js by example';
    exports.read = function() {
        console.log('I am reading ' + exports.name);
    }

We defined a public property and a public function. Now, in another file named script.js, we will use require to access them:

    // script.js
    var book = require('./book.js');
    console.log('Name: ' + book.name);
    book.read();

To test our code, we run node ./script.js. The result in the terminal looks like this:

Along with exports, we also have module.exports available. There is a difference between the two. Look at the following pseudocode, which illustrates how Node.js constructs our modules:

    var module = { exports: {} };
    var exports = module.exports;
    // our code
    return module.exports;

So, in the end, module.exports is returned, and this is what require produces. We should be careful, because if at some point we assign a value directly to exports or module.exports, we may not receive what we need. As at the end of the following snippet, where we set a function as a value, that function is what gets exposed to the outside world:

    exports.name = 'Node.js by example';
    exports.read = function() {
        console.log('I am reading ' + exports.name);
    }
    module.exports = function() { ... }

In this case, we do not have access to .name and .read. If we try to execute node ./script.js again, we will get the following output:

To avoid such issues, we should stick to one of the two options, exports or module.exports, but make sure we do not have both. We should also keep in mind that, by default, require caches the object that is returned. So, if we need two different instances, we should export a function.
Here is a version of the book class that provides API methods to rate the book, but that does not work properly:

    // book.js
    var ratePoints = 0;
    exports.rate = function(points) {
        ratePoints = points;
    }
    exports.getPoints = function() {
        return ratePoints;
    }

Let's create two instances and rate the books with different points values:

    // script.js
    var bookA = require('./book.js');
    var bookB = require('./book.js');
    bookA.rate(10);
    bookB.rate(20);
    console.log(bookA.getPoints(), bookB.getPoints());

The logical response should be 10 20, but we get 20 20, because require returned the same cached object both times. This is why it is a common practice to export a function that produces a different object every time:

    // book.js
    module.exports = function() {
        var ratePoints = 0;
        return {
            rate: function(points) {
                ratePoints = points;
            },
            getPoints: function() {
                return ratePoints;
            }
        }
    }

Now we should also call require('./book.js')(), because require returns a function and not an object anymore.

Managing and distributing packages

Once we understand the idea of require and exports, we should start thinking about grouping our logic into building blocks. In the Node.js world, these blocks are called modules (or packages). One of the reasons behind the popularity of Node.js is its package management. Node.js normally comes with two executables: node and npm. NPM is a command-line tool that downloads and uploads Node.js packages. The official site, npmjs.com, acts as a central registry. When we create a package via the npm command, we store it there so that every other developer may use it.

Creating a module

Every module should live in its own directory, which also contains a metadata file called package.json. In this file we set at least two properties, name and version:

    {
        "name": "my-awesome-nodejs-module",
        "version": "0.0.1"
    }

We can place whatever code we like in the same directory. Once we publish the module to the NPM registry and someone installs it, they will get the same files. For example, let's add an index.js file, so that we have two files in the package:

    // index.js
    console.log('Hello, this is my awesome Node.js module!');

Our module does only one thing: it displays a simple message to the console. Now, to upload the module, we need to navigate to the directory containing the package.json file and execute npm publish. This is the result that we should see:

We are ready. Now our little module is listed in the Node.js package manager's site and everyone is able to download it.

Using modules

In general, there are three ways to use modules that have already been created, and all three involve the package manager.

First, we may install a specific module manually. Say we have a folder called project; we open the folder and run the following:

    npm install my-awesome-nodejs-module

The manager automatically downloads the latest version of the module and puts it in a folder called node_modules. If we want to use it, we do not need to reference the exact path: by default, Node.js checks the node_modules folder before requiring something, so just require('my-awesome-nodejs-module') will be enough.

Second, there is global installation, a common practice especially for command-line tools made with Node.js, which has become an easy-to-use technology for developing such tools. The little module that we created is not made as a command-line program, but we can still install it globally by running the following code:

    npm install my-awesome-nodejs-module -g

Note the -g flag at the end.
This is how we tell the manager that we want this module to be a global one. When the process finishes, we do not have a node_modules directory; the my-awesome-nodejs-module folder is stored in another place on our system. To be able to use it, we have to add another property to package.json, but we'll talk more about this in the next section.

Third, there is the resolving of dependencies, one of the key features of the Node.js package manager. Every module can have as many dependencies as you want. These dependencies are nothing but other Node.js modules that were uploaded to the registry. All we have to do is list the needed packages in the package.json file:

    {
        "name": "another-module",
        "version": "0.0.1",
        "dependencies": {
            "my-awesome-nodejs-module": "0.0.1"
        }
    }

Now we don't have to specify the module explicitly; we can simply execute npm install to install our dependencies. The manager reads the package.json file and saves our module again in the node_modules directory. It is good to use this technique because we may add several dependencies and install them at once. It also makes our module transferable and self-documented: there is no need to explain to other programmers what our module is made up of.

Updating our module

Let's transform our module into a command-line tool. Once we do this, users will have a my-awesome-nodejs-module command available in their terminals. There are two changes we have to make in the package.json file:

    {
        "name": "my-awesome-nodejs-module",
        "version": "0.0.2",
        "bin": "index.js"
    }

A new bin property is added. It points to the entry point of our application; we have a really simple example with only one file, index.js. The other change is to update the version property.

In Node.js, the version of a module plays an important role. Looking back, we see that while describing dependencies in the package.json file, we pointed out the exact version. This ensures that in the future we will get the same module with the same APIs. Every number in the version property means something. The package manager uses Semantic Versioning 2.0.0 (http://semver.org/). Its format is MAJOR.MINOR.PATCH, and we as developers should increment the:

- MAJOR number if we make incompatible API changes
- MINOR number if we add new functions/features in a backwards-compatible manner
- PATCH number if we make backwards-compatible bug fixes

Sometimes we may see a version like 2.12.*. This means that the developer wants exactly the given MAJOR and MINOR versions, but accepts future PATCH-level bug fixes. It's also possible to use values like >=1.2.7 to match any equal-or-greater version, for example 1.2.7, 1.2.8, or 2.5.3.

We updated our package.json file. The next step is to send the changes to the registry. This can be done, again, with npm publish in the directory that holds the JSON file. The result will be similar: we will see the new 0.0.2 version number on the screen.

Just after this, we may run npm install my-awesome-nodejs-module -g, and the new version of the module will be installed on our machine. The difference is that we now have the my-awesome-nodejs-module command available; if you run it, it displays the message written in the index.js file:

Introducing built-in modules

Node.js is considered a technology that you can use to write backend applications, and as such we need to perform various tasks. Thankfully, we have a bunch of helpful built-in modules at our disposal.
Creating a server with the HTTP module

We already used the HTTP module. It's perhaps the most important one for web development, because it starts a server that listens on a particular port:

    var http = require('http');
    http.createServer(function (req, res) {
        res.writeHead(200, {'Content-Type': 'text/plain'});
        res.end('Hello World\n');
    }).listen(9000, '127.0.0.1');
    console.log('Server running at http://127.0.0.1:9000/');

We have a createServer method that returns a new web server object. In most cases, we then run the listen method. If needed, there is close, which stops the server from accepting new connections. The callback function that we pass always accepts the request (req) and response (res) objects. We can use the first one to retrieve information about the incoming request, such as GET or POST parameters.

Reading and writing files

The module responsible for reading and writing is called fs (the name is derived from filesystem). Here is a simple example that illustrates how to write data to a file:

    var fs = require('fs');
    fs.writeFile('data.txt', 'Hello world!', function (err) {
        if (err) { throw err; }
        console.log('It is saved!');
    });

Most of the API functions have synchronous versions. The preceding script could be written with writeFileSync, as follows:

    fs.writeFileSync('data.txt', 'Hello world!');

However, the synchronous versions of the functions in this module block the event loop. This means that while operating on the filesystem, our JavaScript code is paused. Therefore, it is a best practice with Node to use the asynchronous versions of methods wherever possible.

Reading a file is almost the same. We use the readFile method in the following way:

    fs.readFile('data.txt', function(err, data) {
        if (err) throw err;
        console.log(data.toString());
    });

Working with events

The observer design pattern is widely used in the world of JavaScript: objects in our system subscribe to the changes happening in other objects. Node.js has a built-in module for managing events. Here is a simple example:

    var events = require('events');
    var eventEmitter = new events.EventEmitter();
    var somethingHappen = function() {
        console.log('Something happen!');
    }
    eventEmitter
        .on('something-happen', somethingHappen)
        .emit('something-happen');

The eventEmitter object is the object we subscribe to, which we do with the help of the on method. The emit function fires the event, and the somethingHappen handler is executed.

The events module provides the necessary functionality, but we need to use it in our own classes. Let's take the book idea from the previous section and make it work with events. Once someone rates the book, we will dispatch an event in the following manner:

    // book.js
    var util = require('util');
    var events = require('events');
    var Class = function() { };
    util.inherits(Class, events.EventEmitter);
    Class.prototype.ratePoints = 0;
    Class.prototype.rate = function(points) {
        this.ratePoints = points;  // store the points on the instance
        this.emit('rated');
    };
    Class.prototype.getPoints = function() {
        return this.ratePoints;
    }
    module.exports = Class;

We want to inherit the behavior of the EventEmitter object. The easiest way to achieve this in Node.js is by using the utility module (util) and its inherits method. The defined class can be used like this:

    var BookClass = require('./book.js');
    var book = new BookClass();
    book.on('rated', function() {
        console.log('Rated with ' + book.getPoints());
    });
    book.rate(10);

We again used the on method to subscribe to the rated event.
The book class dispatches the event once we set the points, and the terminal then shows the Rated with 10 text.

Managing child processes

There are some things we can't do with Node.js and need external programs for. The good news is that we can execute shell commands from within a Node.js script. For example, say we want to list the files in the current directory. The filesystem APIs do provide methods for that, but it would be nice if we could get the output of the ls command:

    // exec.js
    var exec = require('child_process').exec;
    exec('ls -l', function(error, stdout, stderr) {
        console.log('stdout: ' + stdout);
        console.log('stderr: ' + stderr);
        if (error !== null) {
            console.log('exec error: ' + error);
        }
    });

The module we used is called child_process. Its exec method accepts the desired command as a string, plus a callback. The stdout item is the output of the command. If we want to process errors (if any), we may use the error object or the stderr buffer data. The preceding code produces the following screenshot:

Along with the exec method, we have spawn. It's a bit different, and really interesting. Imagine we have a command that not only does its job, but also outputs results continuously. For example, git push may take a few seconds and may send messages to the console the whole time. In such cases, spawn is a good variant, because we get access to a stream:

    var spawn = require('child_process').spawn;
    var command = spawn('git', ['push', 'origin', 'master']);
    command.stdout.on('data', function (data) {
        console.log('stdout: ' + data);
    });
    command.stderr.on('data', function (data) {
        console.log('stderr: ' + data);
    });
    command.on('close', function (code) {
        console.log('child process exited with code ' + code);
    });

Here, stdout and stderr are streams. They dispatch events, and if we subscribe to those events, we get the exact output of the command as it is produced. In the preceding example, we run git push origin master and send the full command responses to the console.

Summary

Node.js is used by many companies nowadays, which proves that it is mature enough to work in a production environment. In this article, we saw what the fundamentals of this technology are and covered some of the commonly used cases.

Resources for Article:

Further resources on this subject:
- AngularJS Project [article]
- Exploring streams [article]
- Getting Started with NW.js [article]
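As a quick recap of the version-range rules described above, here is a hypothetical package.json (the dependency names are invented for illustration) showing the exact, wildcard, and range forms side by side:

    {
        "name": "version-range-demo",
        "version": "1.0.0",
        "dependencies": {
            "exact-dep": "0.0.1",
            "patch-updates-ok": "2.12.*",
            "anything-newer": ">=1.2.7"
        }
    }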


MODx Web Development: Creating Lists

Packt
10 Mar 2011
7 min read
Menu details in document properties

Every resource that is to be shown in a menu must have the Shown in Menu option enabled on the resource's settings page. The resource settings page also has two other options related to menus:

- Menu title: what to show in the menu. The resource title is used if this value is left blank.
- Menu index: when the list of resources to be shown in the menu is created, the menu index can be used to sort the resources in the required order. The menu index is a number, and when creating lists we can specify how we want to use the index.

Authentication and authorization

When creating the list of resources, WayFinder lists only those resources that are accessible to the current user, depending on the access permissions set for each resource and the web user group to which the user belongs.

Getting to know WayFinder

WayFinder is a snippet that outputs the structure of the resources as reflected in the resource tree. It creates a list of all the resources that the current user can access, from those that have been marked as Shown in Menu in the resource properties. Let's try an exercise to discover WayFinder.

Create a new resource. Set the name as testing wayfinder, choose the template (blank), and place the following as the content:

    [[Wayfinder?startId=`0` ]]

Save the document, and then preview it. You will see a screen like the one shown in the following screenshot:

Notice that WayFinder has created a list of all of the resources, even the ones from the sample site. Each item is a link, so clicking on it leads to the corresponding document. The generated HTML will look like the following example:

    <ul>
    <li><a href="http://localhost/learningMODx/" title="Home">Home</a></li>
    <li><a href="/learningMODx/index.php?id=2" title="Blog">Blog</a></li>
    <li><a href="/learningMODx/index.php?id=15" title="MODx Features">Features</a>
      <ul>
      <li><a href="/learningMODx/index.php?id=16" title="Ajax">Ajax</a></li>
      <li><a href="/learningMODx/index.php?id=22" title="Menus and Lists">Menus and Lists</a></li>
      <li><a href="/learningMODx/index.php?id=14" title="Content Management">Manage Content</a></li>
      <li class="last"><a href="/learningMODx/index.php?id=24" title="Extendable by design">Extendability</a></li>
      </ul>
    </li>
    <li><a href="/learningMODx/index.php?id=33" title="Getting Help">Getting Help</a></li>
    <li><a href="/learningMODx/index.php?id=32" title="Design">Design</a></li>
    <li><a href="/learningMODx/index.php?id=53" title="Signup Form">Signup Form</a></li>
    <li><a href="/learningMODx/index.php?id=6" title="Contact Us">Contact us</a></li>
    <li><a href="/learningMODx/index.php?id=54" title="Getting to know ditto">Getting to know ditto</a>
      <ul>
      <li><a href="/learningMODx/index.php?id=55" title="Sports RSS">Sports RSS</a></li>
      <li><a href="/learningMODx/index.php?id=56" title="Lifestyle RSS">Lifestyle RSS</a></li>
      <li class="last"><a href="/learningMODx/index.php?id=57" title="IT RSS">IT RSS</a></li>
      </ul>
    </li>
    <li class="last active"><a href="/learningMODx/index.php?id=58" title="testing wayfinder">testing wayfinder</a></li>
    </ul>

As seen in the preceding output, the generated list is just a set of <ul> and <li> tags. Let's go step by step in understanding how this output can be customized and themed, starting with menus of one level.

Theming

To be able to theme the list generated by WayFinder so that it appears as a menu, we need to understand how WayFinder works in more detail.
In this section, we will show you step by step how to create a simple menu without any sub-items, and then proceed to creating menus with sub-items.

Creating a simple menu

Since, for now, we only want a menu without any submenu items, we have to show resources only from the top level of the resource tree. By default, WayFinder reflects the complete structure of the resource tree, including the resources within containers, as seen in the preceding screenshot. WayFinder lets you choose the depth of the list via the &level parameter, which takes a value indicating the number of levels WayFinder should include in the menu. For our example, because we only want a simple menu with no submenu items, &level is set to 1. Now, let us change the testing wayfinder resource we just created to the following code:

    [[Wayfinder?startId=`0` &level=`1` ]]

Preview the resource now, and you will see that the source code of the generated page, in place of Wayfinder, is:

    <ul>
    <li><a href="http://localhost/learningMODx/" title="Home">Home</a></li>
    <li><a href="/learningMODx/index.php?id=2" title="Blog">Blog</a></li>
    <li><a href="/learningMODx/index.php?id=15" title="MODx Features">Features</a></li>
    <li><a href="/learningMODx/index.php?id=33" title="Getting Help">Getting Help</a></li>
    <li><a href="/learningMODx/index.php?id=32" title="Design">Design</a></li>
    <li><a href="/learningMODx/index.php?id=53" title="Signup Form">Signup Form</a></li>
    <li><a href="/learningMODx/index.php?id=6" title="Contact Us">Contact us</a></li>
    <li><a href="/learningMODx/index.php?id=54" title="Getting to know ditto">Getting to know ditto</a></li>
    <li class="last active"><a href="/learningMODx/index.php?id=58" title="testing wayfinder">testing wayfinder</a></li>
    </ul>

Now, if we can just give <ul> and <li> their respective classes, we can style them to appear as a menu. We do this by passing a class name to the &rowClass parameter. Change the contents of the preceding testing wayfinder resource to:

    <div id="menu">[!Wayfinder?startId=`0` &level=`1` &rowClass=`menu`!]</div>

Now, open style.css from the root folder, and change the CSS to the following code.
What we are doing is styling the preceding generated list to appear like a menu, using CSS:

    * { padding:0; margin:0; border:0; }
    body { margin:0 20px; background:#8CEC81; }
    #banner { background:#2BB81B; border-top:5px solid #8CEC81; border-bottom:5px solid #8CEC81; }
    #banner h1 { padding:10px; }
    #wrapper { background:#8CEC81; }
    #container { width:100%; background:#2BB81B; float:left; }
    #content { background:#ffffff; height:600px; padding:0 10px 10px 10px; clear:both; }
    #footer { background:#2BB81B; border-top:5px solid #8CEC81; border-bottom:5px solid #8CEC81; }
    .clearing { clear:both; height:0; }
    #content #col-1 { float:left; width:500px; margin:0px; padding:0px; }
    #content #col-2 { float:right; width:300px; margin:0px; padding:30px 0 10px 25px; border-left:3px solid #99cc66; height:500px; }
    #content #col-2 div { padding-bottom:20px; }
    #menu { background:#ffffff; float:left; }
    #menu ul { list-style:none; margin:0; padding:0; width:48em; float:left; }
    #menu ul li { display:inline; }
    #menu a, #menu h2 {
        font:bold 11px/16px arial, helvetica, sans-serif;
        display:inline;
        border-width:1px;
        border-style:solid;
        border-color:#ccc #888 #555 #bbb;
        margin:0;
        padding:2px 3px;
    }
    #menu h2 { color:#fff; background:#000; text-transform:uppercase; }
    #menu a { color:#000; background:#2BB81B; text-decoration:none; }
    #menu a:hover { color:#2BB81B; background:#fff; }

Remember also to change the template of the resource to the learning MODx default template. Now preview the page, and you will see something like the one shown in the following screenshot:

The HTML code returned will be similar to the following:

    <ul>
    <li class="menu"><a href="http://localhost/learningMODx/" title="Home">Home</a></li>
    <li class="menu"><a href="/learningMODx/index.php?id=2" title="Blog">Blog</a></li>
    <li class="menu"><a href="/learningMODx/index.php?id=15" title="MODx Features">Features</a></li>
    <li class="menu"><a href="/learningMODx/index.php?id=33" title="Getting Help">Getting Help</a></li>
    <li class="menu"><a href="/learningMODx/index.php?id=32" title="Design">Design</a></li>
    <li class="menu"><a href="/learningMODx/index.php?id=53" title="Signup Form">Signup Form</a></li>
    <li class="menu"><a href="/learningMODx/index.php?id=6" title="Contact Us">Contact us</a></li>
    <li class="menu"><a href="/learningMODx/index.php?id=54" title="Getting to know ditto">Getting to know ditto</a></li>
    <li class="menu last active"><a href="/learningMODx/index.php?id=58" title="testing wayfinder">testing wayfinder</a></li>
    </ul>

Notice that the menu class has been applied to each menu item. Although we have not applied any custom style to the menu class here, this shows that when building more fine-grained menu systems, you have the ability to associate every item with a class.
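One detail worth calling out from the two snippet calls used above: in MODx Evolution, [[ ... ]] is a cached snippet call, while [! ... !] is uncached. For a menu that must reflect the logged-in user's permissions on every request, the uncached form is the safer choice; the cached form is faster for menus that rarely change. A side-by-side sketch:

    <!-- Cached call: the generated menu is stored with the page cache. -->
    [[Wayfinder?startId=`0` &level=`1` &rowClass=`menu`]]

    <!-- Uncached call: WayFinder runs on every request, so per-user
         access checks are re-evaluated each time. -->
    [!Wayfinder?startId=`0` &level=`1` &rowClass=`menu`!]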


Choosing your shipping method

Packt
19 Jun 2013
9 min read
Getting ready

To view and edit our shipping methods, we must first navigate to System | Configuration | Shipping Methods. Remember, the Current Configuration Scope field is important, as shipping methods can be set on a per-website scope basis. There are many shipping methods available by default, but the main generic methods are Flat Rate, Table Rates, and Free Shipping.

By default, Magento comes with the Flat Rate method enabled. We are going to start off by disabling this shipping method. Be careful when disabling shipping methods: if we leave our Magento installation without any active shipping methods, then no orders can be placed, and the customer will be presented with this error in the checkout: Sorry, no quotes are available for this order at this time. Likewise, manual orders placed through the administration panel will receive the same error.

How to do it...

1. To disable the Flat Rate method, navigate to its configuration options in System | Configuration | Shipping Methods | Flat Rate, choose Enabled as No, and click on Save. The following screenshot highlights our current configuration scope and the disabled Flat Rate method:

2. Next, configure the Table Rates method: click on the Table Rates tab, set Enabled to Yes, enter National Delivery for Title and Shipping for Method Name. Finally, for the Condition option, select Weight vs. Destination (all the other information can be left as default, as it will not affect our pricing for this scenario).

3. To upload the spreadsheet for our new Table Rates method, we first need to change our scope, because shipping rates imported via a .csv file are always entered at the website view level. To do this, select Main Website (this wording can differ, depending on the settings in System | Manage Stores) from the Current Configuration Scope field. The following screenshot shows the change in input fields when the configuration scope has changed:

4. Click on the Export CSV button and we should start downloading a blank .csv file (or, if there are rates already, a file containing our active rates).

5. Next, populate the spreadsheet with the following information (shown in the screenshot) so that we can ship to anywhere in the USA:

6. After finishing the spreadsheet, we can import it: with the Current Configuration Scope field set to our website view, click on the Choose File/Browse button and upload it. Once the browser has uploaded the file, click on Save.

7. Next, we are going to configure the Free Shipping method to run alongside our Table Rates method. To start, switch back to the Default Config scope and then click on the Free Shipping tab.

8. Within this tab, set Enabled to Yes and Minimum Order Amount to 50. The other options can be left as default.

How it works...

The following is a brief explanation of each of our main shipping methods.

Flat Rate

The Flat Rate method allows us to specify a fixed shipping charge applied either per item or per order. It also allows us to specify a handling fee: a percentage or fixed-amount surcharge on top of the flat rate fee. With this method we can also specify which countries the shipping method is applicable for (dependent solely on the customer's shipping address details). Unlike the Table Rates method, you cannot specify multiple flat rates for any given region of a country, nor can you specify flat rates individually per country.
Table Rates

The Table Rates method uses a spreadsheet of data to increase the flexibility of our shipping charges, allowing us to apply different prices to orders depending on the criteria we specify in the spreadsheet. Along with the liberty to specify which countries this method is applicable for, and the option to apply a handling fee, the Table Rates method also allows us to choose from a variety of shopping cart conditions. The condition that we select affects the data that we can import via the spreadsheet.

Inside this spreadsheet we can specify hundreds of rows of countries along with their specific states or Zip/Postal Codes. Each row has a condition, such as weight (and above), and a specific price. If a shopping cart matches the criteria entered on any of the rows, the shipping price will be taken from that row and applied to the cart. In our example we have used Weight vs. Destination; there are two other conditions that come with a default Magento installation that could be used to calculate the shipping:

- Price vs. Destination: This condition takes into account the Order Subtotal (and above) amount in whichever currency is currently set for the store
- # of Items vs. Destination: This condition calculates the shipping cost based on the # of Items (and above) within the customer's basket

Free Shipping

The Free Shipping method is one of the simplest and most commonly used of all the methods that come with a default Magento installation. One of the best ways to increase the conversion rate of your Magento store is to offer your customers free shipping, and Magento allows you to do this with its Free Shipping method. Selecting the countries that this method is applicable for and inputting a minimum order amount as the criteria will enable this method in the checkout for any matching shopping cart. Unfortunately, you cannot specify regions of a country within this method (although you can still offer a free shipping solution through table rates and promotional rules).

Our configuration

As mentioned previously, the Table Rates method provides us with three types of conditions. In our example, we created a table rate spreadsheet that relies on the weight information of our products to work out the shipping price.

Magento's default Free Shipping method is one of the most popular and useful shipping methods, and its most important configuration option is Minimum Order Amount. Setting this value to 50 tells Magento that any shopping cart with a subtotal greater than $50 should offer the Free Shipping method to the customer; we can see this demonstrated in the following screenshot:

The Enabled option is a standard feature among nearly all shipping method extensions. Whenever we wish to enable or disable a shipping method, all we need to do is set it to Yes to enable it or No to disable it.

Once we have configured our Table Rates method, Magento will use the values entered by our customer and try to match them against our imported data. In our case, if a customer orders a product weighing 2.5 kg and lives anywhere in the USA, they will be presented with our $6.99 price. However, a drawback of our example is that if they live outside of the USA, our shipping method will not be available.

The .csv file for our Weight vs. Destination spreadsheet is slightly different from the spreadsheets used for the other Table Rates conditions. It is therefore important to make sure that, if we change our condition, we export a fresh spreadsheet with the correct column information; a sample layout is sketched below.
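The following is a minimal sketch of what a Weight vs. Destination spreadsheet might contain; the rates shown are illustrative, and the exact column headings should always be taken from the file Magento exports for your chosen condition:

Country,Region/State,Zip/Postal Code,Weight (and above),Shipping Price
USA,*,*,0,4.99
USA,*,*,2,6.99
USA,*,*,10,12.99

Here, an order weighing 2.5 kg shipped anywhere in the USA would match the second row and be charged $6.99, until the order weight reaches the 10 kg threshold of the third row.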
One very important point to note when editing our shipping spreadsheets is the format of the file—programs such as Microsoft Excel sometimes save in an incompatible format. It is recommended to use the free, downloadable OpenOffice suite to edit any of Magento's spreadsheets, as it saves the file in a compatible format. We can download OpenOffice from www.openoffice.org. If there is no alternative but to use Microsoft Excel, then we must ensure we save as CSV for Windows or, alternatively, CSV (Comma Delimited).

A few key points when editing the Table Rates spreadsheet:

- The * (asterisk) is a wildcard—similar to saying ANY
- Weight (and above) is really a FROM weight and will set the price UNTIL the next row value that is higher than itself (for the matching Country, Region/State, and Zip/Postal Code)—the downside of this is that you cannot set a maximum weight limit
- The Country column takes three-letter codes—ISO 3166-1 alpha-3 codes
- The Zip/Postal Code column takes either a full USA ZIP code or a full postal code
- The Region/State column takes all two-letter state codes from the USA, or any other codes that are available in the drop-down select menus for regions on the checkout pages of Magento

One final note is that we can run as many shipping methods as we like at the same time—just as we did with our Free Shipping method and our Table Rates method.

There's more...

For more information on setting up the many shipping methods that are available within Magento, please see the following link: http://innoexts.com/magento-shipping-methods

We can also enable and disable shipping methods on a per-website basis; for example, we could disable a shipping method for our French store.

Disabling Free Shipping for the French website

If we wanted to disable our Free Shipping method for just our French store, we could change our Current Configuration Scope field to our French website view and then perform the following steps:

1. Navigate to System | Configuration | Shipping Methods and click on the Free Shipping tab.
2. Uncheck Use Default next to the Enabled option, set Enabled to No, and then click on Save Config.

We can see that Magento normally defaults all of our settings to the Default Config scope; by unchecking the Use Default checkbox, we can edit our method for our chosen store view.

Summary

This article explored the differences between the Flat Rate, Table Rates, and Free Shipping methods, and showed how to disable a shipping method and configure Table Rates.

Resources for Article:

Further resources on this subject: Magento Performance Optimization [Article], Magento: Exploring Themes [Article], Getting Started with Magento Development [Article]

Using Firebase: Learn how and why to use Firebase

Packt
04 May 2015
8 min read
In this article by Manoj Waikar, author of the book Data-oriented Development with AngularJS, we will get a brief description of various types of persistence mechanisms, local versus hosted databases, what Firebase is, why to use it, and the different use cases where Firebase can be useful.

We can write web applications by using the frameworks of our choice—be it server-side MVC frameworks, client-side MVC frameworks, or some combination of these. We can also use a persistence store (a database) of our choice—be it an RDBMS or a more modern NoSQL store. However, making our applications real time (meaning that, if you are viewing a page and the data related to that page gets updated, the page should be updated, or at least you should get a notification to refresh it) is not a trivial task, and we have to start thinking about push notifications and so on. With Firebase, none of that extra work is necessary.

Persistence

One of the very early decisions a developer or a team has to make when building any production-quality application is the choice of a persistent storage mechanism. Until a few years ago, this choice, more often than not, boiled down to a relational database such as Oracle, SQL Server, or PostgreSQL. However, the rise of NoSQL solutions—document-oriented databases such as MongoDB (http://www.mongodb.org/) and CouchDB (http://couchdb.apache.org/), key-value stores such as Redis (http://redis.io/) and Riak (http://basho.com/riak/), and graph databases such as Neo4j (http://www.neo4j.org/)—has widened the choice for us. Please check the Wikipedia page on NoSQL solutions (http://en.wikipedia.org/wiki/NoSQL) for a detailed list of various NoSQL solutions, including their classification and performance characteristics.

There is one more buzzword that everyone must have heard of by now: Cloud, short for cloud computing. Cloud computing briefly means that shared resources (or software) are provided to consumers on a paid/free basis over a network (typically, the Internet). So, we now have the luxury of choosing our preferred RDBMS or NoSQL database as a hosted solution.

Consequently, we have one more choice to make—whether to install the database locally (on our own machine or inside the corporate network) or use a hosted solution (in the cloud). As with everything else, there are pros and cons to each approach. The pros of a local database are fast access and a one-time buying cost (if it's not an open source database), and the cons include the initial setup time; if you have to evaluate another database, you'll have to install that one as well. The pros of a hosted solution are ease of use and minimal initial setup time, and the cons are the need for a reliable Internet connection, cost (again, if it's not a free option), and so on. Considering these pros and cons, it's a safe bet to use a hosted solution while you are still evaluating different databases, and only decide between a local or a hosted solution later, when you've finally zeroed in on your database of choice.
What is Firebase?

So, where does Firebase fit into all of this? Firebase is a NoSQL database that stores data as simple JSON documents. We can, therefore, compare it to other document-oriented databases such as CouchDB (which also stores data as JSON) or MongoDB (which stores data in the BSON—binary JSON—format).

Although Firebase is a database with a RESTful API, it's also a real-time database, which means that the data is synchronized between different clients and with the backend server almost instantaneously. This implies that if the underlying data is changed by one of the clients, it gets streamed in real time to every connected client; hence, all the other clients automatically get updated with the newest set of data (without anyone having to refresh these clients manually).

So, to summarize, Firebase is an API and a cloud service that gives us a real-time and scalable (NoSQL) backend. It has libraries for most server-side languages/frameworks, such as Node.js, Java, Python, PHP, Ruby, and Clojure—official libraries for Node.js and Java, and unofficial third-party libraries for Python, Ruby, and PHP. It also has libraries for most of the leading client-side frameworks, such as AngularJS, Backbone, Ember, and React, and for mobile platforms such as iOS and Android.

Firebase – benefits and why to use it

Firebase offers us the following benefits:

- It is a cloud service (a hosted solution), so there isn't any setup involved.
- Data is stored as native JSON, so what you store is what you see (on the frontend, fetched through a REST API)—WYSIWYS.
- Data is safe, because Firebase requires 2048-bit SSL encryption for all data transfers.
- Data is replicated and backed up to multiple secure locations, so there is minimal chance of data loss.
- When data changes, apps update instantly across devices.
- Our apps can work offline—as soon as we get connectivity, the data is synchronized instantly.
- Firebase gives us lightning-fast data synchronization. Combined with AngularJS, it gives us three-way data binding between HTML, JavaScript, and our backend (data). With two-way data binding, whenever our (JavaScript) model changes, the view (HTML) updates itself, and vice versa. With three-way data binding, even when the data in our database changes, our JavaScript model gets updated, and consequently, the view gets updated as well (see the sketch after this list).
- Last but not least, it has libraries for the most popular server-side languages/frameworks (such as Node.js, Ruby, Java, and Python) as well as the popular client-side frameworks (such as Backbone, Ember, and React), including AngularJS. The Firebase binding for AngularJS is called AngularFire (https://www.firebase.com/docs/web/libraries/angular/).
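The following is a minimal sketch of that three-way binding using AngularFire's $firebaseObject service; the Firebase URL, module, and controller names are hypothetical, and the sketch assumes the firebase.js and angularfire.js scripts are already loaded:

// index.js
var app = angular.module("sampleApp", ["firebase"]);

app.controller("ProfileCtrl", function($scope, $firebaseObject) {
  // Reference a node of the JSON tree in our (hypothetical) Firebase
  var ref = new Firebase("https://your-app.firebaseio.com/profile");
  // $bindTo sets up three-way binding: edits to $scope.profile in the
  // view are saved to Firebase, and remote changes flow back into the view
  $firebaseObject(ref).$bindTo($scope, "profile");
});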
Firebase use cases

Now that you've read how Firebase makes it easy to write applications that update in real time, you might still be wondering what kinds of applications are most suited for use with Firebase. As often happens in the enterprise world, either you are not at liberty to choose all the components of your stack, or you might have an existing application to which you just have to add some new features. So, let's study the three main scenarios where Firebase can be a good fit for your needs.

Apps with Firebase as the only backend

This scenario is feasible if:

- You are writing a brand-new application or rewriting an existing one from scratch
- You don't have to integrate with legacy systems or other third-party services
- Your app doesn't need to do heavy data processing, and it doesn't have complex user authentication requirements

In such scenarios, Firebase is the only backend store you'll need, and all dynamic content and user data can be stored in and retrieved from it.

Existing apps with some features powered by Firebase

This scenario is feasible if you already have a site and want to add some real-time capabilities to it without touching other parts of the system. For example, you have a working website and just want to add chat capabilities, or maybe you want to add a comment feed that updates in real time, or you have to show some real-time notifications to your users. In this case, the clients can connect to your existing server (for the existing features) and to Firebase (for the newly added real-time capabilities). So, you can use Firebase together with your existing server.

Both client and server code powered by Firebase

In some use cases, there might be computationally intensive code that can't be run on the client. In situations like these, Firebase can act as an intermediary between the server and your clients, with the server talking to the clients by manipulating data in Firebase. The server can connect to Firebase using either the Node.js library (for Node.js-based server-side applications) or the REST API (for other server-side languages). Similarly, the server can listen to the data changes made by the clients and respond appropriately. For example, the clients can place tasks in a queue that the server will process later. One or more servers can then pick these tasks from the queue, do the required processing (as per their availability), and place the results back in Firebase so that the clients can read them.

Firebase is the API for your product

You might not have realized it yet (but you will once you see some examples) that, as soon as we start saving data in Firebase, a REST API keeps building side by side for free, because of the way data is stored as a JSON tree and associated with different URLs. Think for a moment: if you had a relational database as your persistence store, you would need to write REST APIs specially (which are obviously preferable to old RPC-style web services), using the framework available for your programming language, to let external teams or customers get access to the data. Then, if you wanted to support different platforms, you would need to provide libraries for all those platforms, whereas Firebase already provides real-time SDKs for JavaScript, Objective-C, and Java. So, Firebase is not just a real-time persistence store; it doubles up as an API layer too.
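As a quick sketch of this free REST API, every node of the JSON tree can be read or written by appending .json to its URL; the URL and data below are hypothetical:

# Read the messages node
curl https://your-app.firebaseio.com/messages.json

# Push a new message onto the same node
curl -X POST -d '{"text": "Hello"}' https://your-app.firebaseio.com/messages.json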
Summary

In this article, we learned a brief description of what Firebase is, why to use it, and the different use cases where Firebase can be useful.

Resources for Article:

Further resources on this subject: AngularJS Performance [article], An introduction to testing AngularJS directives [article], Our App and Tool Stack [article]

Load, Validate, and Submit Forms using Ext JS 3.0: Part 1

Packt
17 Oct 2011
6 min read
Specifying the required fields in a form

This recipe uses a login form as an example to explain how to create required fields in a form.

How to do it...

1. Initialize the global QuickTips instance:

Ext.QuickTips.init();

2. Create the login form:

var loginForm = {
  xtype: 'form',
  id: 'login-form',
  bodyStyle: 'padding:15px;background:transparent',
  border: false,
  url: 'login.php',
  items: [{
    xtype: 'box',
    autoEl: {
      tag: 'div',
      html: '<div class="app-msg"><img src="img/magic-wand.png" class="app-img" /> Log in to The Magic Forum</div>'
    }
  }, {
    xtype: 'textfield', id: 'login-user',
    fieldLabel: 'Username', allowBlank: false
  }, {
    xtype: 'textfield', id: 'login-pwd',
    fieldLabel: 'Password', inputType: 'password',
    allowBlank: false
  }],
  buttons: [{
    text: 'Login',
    handler: function() {
      Ext.getCmp('login-form').getForm().submit();
    }
  }, {
    text: 'Cancel',
    handler: function() {
      win.hide();
    }
  }]
};

3. Create the window that will host the login form:

Ext.onReady(function() {
  win = new Ext.Window({
    layout: 'form',
    width: 340,
    autoHeight: true,
    closeAction: 'hide',
    items: [loginForm]
  });
  win.show();
});

How it works...

Initializing the QuickTips singleton allows the form's validation errors to be shown as tool tips. When the form is created, each required field needs to have the allowBlank configuration option set to false:

{
  xtype: 'textfield', id: 'login-user',
  fieldLabel: 'Username', allowBlank: false
}, {
  xtype: 'textfield', id: 'login-pwd',
  fieldLabel: 'Password', inputType: 'password',
  allowBlank: false
}

Setting allowBlank to false activates a validation rule that requires the length of the field's value to be greater than zero.

There's more...

Use the blankText configuration option to change the error text shown when the blank validation fails. For example, the username field definition in the previous code snippet can be changed as shown here:

{
  xtype: 'textfield', id: 'login-user',
  fieldLabel: 'Username', allowBlank: false,
  blankText: 'Enter your username'
}

The resulting error is shown in the following figure:

Validation rules can be combined and even customized. Other recipes in this article explain how to range-check a field's length, as well as how to specify the valid format of the field's value.
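There may also be times when you want to check the form's validity yourself before calling submit(), for example, to show a custom message or disable the button instead of relying on the submit action's own validation. A minimal sketch using the standard isValid() method of Ext.form.BasicForm in Ext JS 3 could replace the Login button's handler:

{
  text: 'Login',
  handler: function() {
    var form = Ext.getCmp('login-form').getForm();
    // Runs all field validations and marks any invalid fields
    if (form.isValid()) {
      form.submit();
    }
  }
}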
See also...

- The next recipe, Setting the minimum and maximum length allowed for a field's value, explains how to restrict the number of characters entered in a field
- The Changing the location where validation errors are displayed recipe, covered later in this article, shows how to relocate a field's error icon
- Refer to the Deferring field validation until form submission recipe, covered later in this article, to learn how to validate all fields at once upon form submission, instead of using the default automatic field validation
- The Creating validation functions for URLs, email addresses, and other types of data recipe, covered later in this article, explains the validation functions available in Ext JS
- The Confirming passwords and validating dates using relational field validation recipe, covered later in this article, explains how to perform validation when the value of one field depends on the value of another field
- The Rounding up your validation strategy with server-side validation of form fields recipe, covered later in this article, explains how to perform server-side validation

Setting the minimum and maximum length allowed for a field's value

This recipe shows how to set the minimum and maximum number of characters allowed for a text field. The way to specify a custom error message for this type of validation is also explained. The login form built in this recipe has username and password fields whose lengths are restricted:

How to do it...

1. The first thing is to initialize the QuickTips singleton:

Ext.QuickTips.init();

2. Create the login form:

var loginForm = {
  xtype: 'form',
  id: 'login-form',
  bodyStyle: 'padding:15px;background:transparent',
  border: false,
  url: 'login.php',
  items: [{
    xtype: 'box',
    autoEl: {
      tag: 'div',
      html: '<div class="app-msg"><img src="img/magic-wand.png" class="app-img" /> Log in to The Magic Forum</div>'
    }
  }, {
    xtype: 'textfield', id: 'login-user',
    fieldLabel: 'Username', allowBlank: false,
    minLength: 3, maxLength: 32
  }, {
    xtype: 'textfield', id: 'login-pwd',
    fieldLabel: 'Password', inputType: 'password',
    allowBlank: false, minLength: 6, maxLength: 32,
    minLengthText: 'Password must be at least 6 characters long.'
  }],
  buttons: [{
    text: 'Login',
    handler: function() {
      Ext.getCmp('login-form').getForm().submit();
    }
  }, {
    text: 'Cancel',
    handler: function() {
      win.hide();
    }
  }]
};

3. Create the window that will host the login form:

Ext.onReady(function() {
  win = new Ext.Window({
    layout: 'form',
    width: 340,
    autoHeight: true,
    closeAction: 'hide',
    items: [loginForm]
  });
  win.show();
});

How it works...

After initializing the QuickTips singleton, which allows the form's validation errors to be shown as tool tips, the form is built. The form is an instance of Ext.form.FormPanel. The username and password fields have their lengths restricted by way of the minLength and maxLength configuration options:

{
  xtype: 'textfield', id: 'login-user',
  fieldLabel: 'Username', allowBlank: false,
  minLength: 3, maxLength: 32
}, {
  xtype: 'textfield', id: 'login-pwd',
  fieldLabel: 'Password', inputType: 'password',
  allowBlank: false, minLength: 6, maxLength: 32,
  minLengthText: 'Password must be at least 6 characters long.'
}

Notice how the minLengthText option is used to customize the error message that is displayed when the minimum length validation fails. As a last step, the window that will host the form is created and displayed.

There's more...

You can also use the maxLengthText configuration option to specify the error message shown when the maximum length validation fails.

See also...

- The previous recipe, Specifying the required fields in a form, explains how to make some form fields required
- The next recipe, Changing the location where validation errors are displayed, shows how to relocate a field's error icon
- Refer to the Deferring field validation until form submission recipe (covered later in this article) to learn how to validate all fields at once upon form submission, instead of using the default automatic field validation
- Refer to the Creating validation functions for URLs, email addresses, and other types of data recipe (covered later in this article) for an explanation of the validation functions available in Ext JS
- The Confirming passwords and validating dates using relational field validation recipe (covered later in this article) explains how to perform validation when the value of one field depends on the value of another field
- The Rounding up your validation strategy with server-side validation of form fields recipe (covered later in this article) explains how to perform server-side validation

Introduction to Custom Template Filters and Tags

Packt
13 Oct 2014
25 min read
This article is written by Aidas Bendoratis, the author of Web Development with Django Cookbook. In this article, we will cover the following recipes:

- Following conventions for your own template filters and tags
- Creating a template filter to show how many days have passed
- Creating a template filter to extract the first media object
- Creating a template filter to humanize URLs
- Creating a template tag to include a template if it exists
- Creating a template tag to load a QuerySet in a template
- Creating a template tag to parse content as a template
- Creating a template tag to modify request query parameters

As you know, Django has quite an extensive template system, with features such as template inheritance, filters for changing the representation of values, and tags for presentational logic. Moreover, Django allows you to add your own template filters and tags in your apps. Custom filters or tags should be located in a template-tag library file under the templatetags Python package in your app. Your template-tag library can then be loaded in any template with a {% load %} template tag. In this article, we will create several useful filters and tags that give more control to the template editors.

Following conventions for your own template filters and tags

Custom template filters and tags can become a total mess if you don't have persistent guidelines to follow. Template filters and tags should serve template editors as much as possible. They should be both handy and flexible. In this recipe, we will look at some conventions that should be used when enhancing the functionality of the Django template system.

How to do it...

Follow these conventions when extending the Django template system:

1. Don't create or use custom template filters or tags when the logic for the page fits better in the view, context processors, or model methods. When your page is context-specific, such as a list of objects or an object-detail view, load the object in the view. If you need to show some content on every page, create a context processor. Use custom methods of the model instead of template filters when you need to get some properties of an object not related to the context of the template.
2. Name the template-tag library with the _tags suffix. When your app is named differently than your template-tag library, you can avoid ambiguous package-importing problems.
3. In the newly created library, separate filters from tags, for example, by using comments such as the following:

# -*- coding: UTF-8 -*-
from django import template

register = template.Library()

### FILTERS ###
# .. your filters go here ..

### TAGS ###
# .. your tags go here ..

4. Create template tags that are easy to remember by including the following constructs:
- for [app_name.model_name]: Include this construct to use a specific model
- using [template_name]: Include this construct to use a template for the output of the template tag
- limit [count]: Include this construct to limit the results to a specific amount
- as [context_variable]: Include this construct to save the results to a context variable that can be reused many times later
5. Try to avoid multiple values defined positionally in template tags unless they are self-explanatory. Otherwise, this will likely confuse the template developers.
6. Make as many arguments resolvable as possible. Strings without quotes should be treated as context variables that need to be resolved, or as short words that remind you of the structure of the template tag components.
See also

- The Creating a template filter to show how many days have passed recipe
- The Creating a template filter to extract the first media object recipe
- The Creating a template filter to humanize URLs recipe
- The Creating a template tag to include a template if it exists recipe
- The Creating a template tag to load a QuerySet in a template recipe
- The Creating a template tag to parse content as a template recipe
- The Creating a template tag to modify request query parameters recipe

Creating a template filter to show how many days have passed

Not all people keep track of the date and, when talking about the creation or modification dates of cutting-edge information, for many of us it is more convenient to read the time difference, for example: the blog entry was posted three days ago, the news article was published today, and the user last logged in yesterday. In this recipe, we will create a template filter named days_since that converts dates to humanized time differences.

Getting ready

Create the utils app and put it under INSTALLED_APPS in the settings, if you haven't done that yet. Then, create a Python package named templatetags inside this app (Python packages are directories with an empty __init__.py file).

How to do it...

Create a utility_tags.py file with this content:

#utils/templatetags/utility_tags.py
# -*- coding: UTF-8 -*-
from datetime import datetime

from django import template
from django.utils.translation import ugettext_lazy as _
from django.utils.timezone import now as tz_now

register = template.Library()

### FILTERS ###

@register.filter
def days_since(value):
    """ Returns number of days between today and value. """
    today = tz_now().date()
    if isinstance(value, datetime):
        value = value.date()
    diff = today - value
    if diff.days > 1:
        return _("%s days ago") % diff.days
    elif diff.days == 1:
        return _("yesterday")
    elif diff.days == 0:
        return _("today")
    else:
        # Date is in the future; return formatted date.
        return value.strftime("%B %d, %Y")

How it works...

If you use this filter in a template like the following, it will render something like yesterday or 5 days ago:

{% load utility_tags %}
{{ object.created|days_since }}

You can apply this filter to values of the date and datetime types.

Each template-tag library has a register where filters and tags are collected. Django filters are functions registered by the register.filter decorator. By default, the filter in the template system will be named the same as the function or other callable object. If you want, you can set a different name for the filter by passing name to the decorator, as follows:

@register.filter(name="humanized_days_since")
def days_since(value):
    ...

The filter itself is quite self-explanatory. At first, the current date is read. If the given value of the filter is of the datetime type, the date is extracted. Then, the difference between today and the extracted value is calculated. Depending on the number of days, different string results are returned.

There's more...

This filter is easy to extend to also show the difference in time, such as just now, 7 minutes ago, or 3 hours ago. Just operate on the datetime values instead of the date values.
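For example, a rough sketch of such an extension might look like the following; the cutoff values and wording are illustrative, and it assumes value is a timezone-aware datetime:

@register.filter
def time_since(value):
    """ Returns a humanized time difference for datetime values. """
    delta = tz_now() - value
    if delta.days >= 1:
        # Fall back to the day-based wording from days_since
        return days_since(value.date())
    hours = delta.seconds // 3600
    if hours >= 1:
        return _("%s hours ago") % hours
    minutes = delta.seconds // 60
    if minutes >= 1:
        return _("%s minutes ago") % minutes
    return _("just now")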
See also

- The Creating a template filter to extract the first media object recipe
- The Creating a template filter to humanize URLs recipe

Creating a template filter to extract the first media object

Imagine that you are developing a blog overview page, and for each post, you want to show images, music, or videos taken from the content. In such a case, you need to extract the <img>, <object>, and <embed> tags out of the HTML content of the post. In this recipe, we will see how to do this using regular expressions in the get_first_media filter.

Getting ready

We will start with the utils app, which should be set in INSTALLED_APPS in the settings, and the templatetags package inside this app.

How to do it...

In the utility_tags.py file, add the following content:

#utils/templatetags/utility_tags.py
# -*- coding: UTF-8 -*-
import re

from django import template
from django.utils.safestring import mark_safe

register = template.Library()

### FILTERS ###

media_file_regex = re.compile(
    r"<object .+?</object>|"
    r"<(img|embed) [^>]+>"
)

@register.filter
def get_first_media(content):
    """ Returns the first image or flash file from the html content """
    m = media_file_regex.search(content)
    media_tag = ""
    if m:
        media_tag = m.group()
    return mark_safe(media_tag)

How it works...

Provided the HTML content in the database is valid, when you put the following code in the template, it will retrieve the <object>, <img>, or <embed> tags from the content field of the object, or an empty string if no media is found there:

{% load utility_tags %}
{{ object.content|get_first_media }}

At first, we define the compiled regular expression as media_file_regex; then, in the filter, we perform a search for that regular expression pattern. By default, the result would show the <, >, and & symbols escaped as &lt;, &gt;, and &amp; entities. But we use the mark_safe function, which marks the result as safe HTML, ready to be shown in the template without escaping.

There's more...

It is very easy to extend this filter to also extract the <iframe> tags (which are more recently being used by Vimeo and YouTube for embedded videos) or the HTML5 <audio> and <video> tags. Just modify the regular expression like this:

media_file_regex = re.compile(
    r"<iframe .+?</iframe>|"
    r"<audio .+?</audio>|<video .+?</video>|"
    r"<object .+?</object>|<(img|embed) [^>]+>"
)

See also

- The Creating a template filter to show how many days have passed recipe
- The Creating a template filter to humanize URLs recipe

Creating a template filter to humanize URLs

Usually, common web users enter URLs into address fields without the protocol and trailing slashes. In this recipe, we will create a humanize_url filter used to present URLs to the user in a shorter format, truncating very long addresses, just like what Twitter does with the links in tweets.

Getting ready

As in the previous recipes, we will start with the utils app, which should be set in INSTALLED_APPS in the settings and should contain the templatetags package.

How to do it...

In the FILTERS section of the utility_tags.py template library in the utils app, let's add a filter named humanize_url and register it:

#utils/templatetags/utility_tags.py
# -*- coding: UTF-8 -*-
import re

from django import template

register = template.Library()

### FILTERS ###

@register.filter
def humanize_url(url, letter_count):
    """ Returns a shortened human-readable URL """
    letter_count = int(letter_count)
    re_start = re.compile(r"^https?://")
    re_end = re.compile(r"/$")
    url = re_end.sub("", re_start.sub("", url))
    if len(url) > letter_count:
        url = u"%s…" % url[:letter_count - 1]
    return url

How it works...
We can use the humanize_url filter in any template like this:

{% load utility_tags %}
<a href="{{ object.website }}" target="_blank">
  {{ object.website|humanize_url:30 }}
</a>

The filter uses regular expressions to remove the leading protocol and the trailing slash, and then shortens the URL to the given number of letters, adding an ellipsis to the end if the URL doesn't fit into the specified letter count.

See also

- The Creating a template filter to show how many days have passed recipe
- The Creating a template filter to extract the first media object recipe
- The Creating a template tag to include a template if it exists recipe

Creating a template tag to include a template if it exists

Django has the {% include %} template tag that renders and includes another template. However, in some particular situations there is a problem: an error is raised if the template does not exist. In this recipe, we will show you how to create a {% try_to_include %} template tag that includes another template but fails silently if there is no such template.

Getting ready

We will start again with the utils app, which should be installed and ready for custom template tags.

How to do it...

Template tags consist of two things: the function parsing the arguments of the template tag, and the node class that is responsible for the logic of the template tag as well as for the output. Perform the following steps:

1. First, let's create the function parsing the template-tag arguments:

#utils/templatetags/utility_tags.py
# -*- coding: UTF-8 -*-
from django import template
from django.template.loader import get_template

register = template.Library()

### TAGS ###

@register.tag
def try_to_include(parser, token):
    """
    Usage: {% try_to_include "sometemplate.html" %}
    This will fail silently if the template doesn't exist.
    If it does, it will be rendered with the current context.
    """
    try:
        tag_name, template_name = token.split_contents()
    except ValueError:
        raise template.TemplateSyntaxError, \
            "%r tag requires a single argument" % \
            token.contents.split()[0]
    return IncludeNode(template_name)

2. Then, we need the node class in the same file, as follows:

class IncludeNode(template.Node):
    def __init__(self, template_name):
        self.template_name = template_name

    def render(self, context):
        try:
            # Loading the template and rendering it
            template_name = template.resolve_variable(
                self.template_name, context)
            included_template = get_template(template_name).render(context)
        except template.TemplateDoesNotExist:
            included_template = ""
        return included_template

How it works...

The {% try_to_include %} template tag expects one argument, that is, template_name. So, in the try_to_include function, we try to assign the split contents of the token to the tag_name variable (which is "try_to_include") and the template_name variable. If this doesn't work, a template syntax error is raised. The function returns the IncludeNode object, which keeps the template_name field for later usage.

In the render method of IncludeNode, we resolve the template_name variable. If a context variable was passed to the template tag, its value will be used here for template_name. If a quoted string was passed to the template tag, the content within the quotes will be used for template_name. Lastly, we try to load the template and render it with the current template context. If that doesn't work, an empty string is returned.
There are at least two situations where we could use this template tag:

1. When including a template whose path is defined in a model, as follows:

{% load utility_tags %}
{% try_to_include object.template_path %}

2. When including a template whose path is defined with the {% with %} template tag somewhere higher in the template context variable's scope. This is especially useful when you need to create custom layouts for plugins in the placeholder of a template in Django CMS:

#templates/cms/start_page.html
{% with editorial_content_template_path="cms/plugins/editorial_content/start_page.html" %}
  {% placeholder "main_content" %}
{% endwith %}

#templates/cms/plugins/editorial_content.html
{% load utility_tags %}
{% if editorial_content_template_path %}
  {% try_to_include editorial_content_template_path %}
{% else %}
  <div><!-- Some default presentation of editorial content plugin --></div>
{% endif %}

There's more...

You can use the {% try_to_include %} tag, as well as the default {% include %} tag, to include templates that extend other templates. This is beneficial for large-scale portals where you have different kinds of lists in which complex items share the same structure as widgets but have different sources of data. For example, in the artist list template, you can include the artist item template as follows:

{% load utility_tags %}
{% for object in object_list %}
  {% try_to_include "artists/includes/artist_item.html" %}
{% endfor %}

This template will extend from the item base, as follows:

{# templates/artists/includes/artist_item.html #}
{% extends "utils/includes/item_base.html" %}

{% block item_title %}
  {{ object.first_name }} {{ object.last_name }}
{% endblock %}

The item base defines the markup for any item and also includes a Like widget, as follows:

{# templates/utils/includes/item_base.html #}
{% load likes_tags %}
<h3>{% block item_title %}{% endblock %}</h3>
{% if request.user.is_authenticated %}
  {% like_widget for object %}
{% endif %}

See also

- The Creating a template tag to load a QuerySet in a template recipe
- The Creating a template tag to parse content as a template recipe
- The Creating a template tag to modify request query parameters recipe

Creating a template tag to load a QuerySet in a template

Most often, the content that should be shown in a web page has to be defined in the view. If it is content to show on every page, it is logical to create a context processor. Another situation is when you need to show additional content, such as the latest news or a random quote, on some specific pages, for example, the start page or the details page of an object. In this case, you can load the necessary content with the {% get_objects %} template tag, which we will implement in this recipe.

Getting ready

Once again, we will start with the utils app, which should be installed and ready for custom template tags.

How to do it...

Template tags consist of a function parsing the arguments passed to the tag, and a node class that renders the output of the tag or modifies the template context.
Perform the following steps:

1. First, let's create the function parsing the template-tag arguments, as follows:

#utils/templatetags/utility_tags.py
# -*- coding: UTF-8 -*-
from django.db import models
from django import template

register = template.Library()

### TAGS ###

@register.tag
def get_objects(parser, token):
    """
    Gets a queryset of objects of the model specified by app and model names

    Usage:
        {% get_objects [<manager>.]<method> from <app_name>.<model_name> [limit <amount>] as <var_name> %}

    Examples:
        {% get_objects latest_published from people.Person limit 3 as people %}
        {% get_objects site_objects.all from news.Article limit 3 as articles %}
        {% get_objects site_objects.all from news.Article as articles %}
    """
    amount = None
    try:
        tag_name, manager_method, str_from, appmodel, str_limit, \
            amount, str_as, var_name = token.split_contents()
    except ValueError:
        try:
            tag_name, manager_method, str_from, appmodel, str_as, \
                var_name = token.split_contents()
        except ValueError:
            raise template.TemplateSyntaxError, \
                "get_objects tag requires the following syntax: " \
                "{% get_objects [<manager>.]<method> from " \
                "<app_name>.<model_name> [limit <amount>] as <var_name> %}"
    try:
        app_name, model_name = appmodel.split(".")
    except ValueError:
        raise template.TemplateSyntaxError, \
            "get_objects tag requires application name and " \
            "model name separated by a dot"
    model = models.get_model(app_name, model_name)
    return ObjectsNode(model, manager_method, amount, var_name)

2. Then, we create the node class in the same file, as follows:

class ObjectsNode(template.Node):
    def __init__(self, model, manager_method, amount, var_name):
        self.model = model
        self.manager_method = manager_method
        self.amount = amount
        self.var_name = var_name

    def render(self, context):
        if "." in self.manager_method:
            manager, method = self.manager_method.split(".")
        else:
            manager = "_default_manager"
            method = self.manager_method
        qs = getattr(
            getattr(self.model, manager),
            method,
            self.model._default_manager.none,
        )()
        if self.amount:
            amount = template.resolve_variable(self.amount, context)
            context[self.var_name] = qs[:amount]
        else:
            context[self.var_name] = qs
        return ""

How it works...

The {% get_objects %} template tag loads a QuerySet defined by the manager method from a specified app and model, limits the result to the specified amount, and saves the result to a context variable.

The following is the simplest example of how to use the template tag that we have just created. It will load five news articles in any template using the following snippet:

{% load utility_tags %}
{% get_objects all from news.Article limit 5 as latest_articles %}
{% for article in latest_articles %}
  <a href="{{ article.get_url_path }}">{{ article.title }}</a>
{% endfor %}

This uses the all method of the default objects manager of the Article model, and will sort the articles by the ordering attribute defined in the Meta class.

A more advanced example requires creating a custom manager with a custom method to query objects from the database. A manager is an interface that provides database query operations to models. Each model has at least one manager, called objects, by default.
As an example, let's create the Artist model, which has a draft or published status, and a new manager, custom_manager, which allows us to select random published artists:

#artists/models.py
# -*- coding: UTF-8 -*-
from django.db import models
from django.utils.translation import ugettext_lazy as _

STATUS_CHOICES = (
    ('draft', _("Draft")),
    ('published', _("Published")),
)

class ArtistManager(models.Manager):
    def random_published(self):
        return self.filter(status="published").order_by('?')

class Artist(models.Model):
    # ...
    status = models.CharField(_("Status"), max_length=20,
        choices=STATUS_CHOICES)
    custom_manager = ArtistManager()

To load a random published artist, you add the following snippet to any template:

{% load utility_tags %}
{% get_objects custom_manager.random_published from artists.Artist limit 1 as random_artists %}
{% for artist in random_artists %}
  {{ artist.first_name }} {{ artist.last_name }}
{% endfor %}

Let's look at the code of the template tag. In the parsing function, one of two formats is expected: with the limit and without it. The string is parsed, the model is recognized, and then the components of the template tag are passed to the ObjectsNode class.

In the render method of the node class, we check the manager's name and its method's name. If the manager is not defined, _default_manager will be used, which is, in most cases, the same as objects. After that, we call the manager method, falling back to an empty QuerySet if the method doesn't exist. If the limit is defined, we resolve its value and limit the QuerySet. Lastly, we save the QuerySet to the context variable.

See also

- The Creating a template tag to include a template if it exists recipe
- The Creating a template tag to parse content as a template recipe
- The Creating a template tag to modify request query parameters recipe

Creating a template tag to parse content as a template

In this recipe, we will create a template tag named {% parse %}, which allows you to put template snippets into the database. This is valuable when you want to provide different content for authenticated and non-authenticated users, when you want to include a personalized salutation, or when you don't want to hardcode media paths in the database.

Getting ready

No surprise: we will start with the utils app, which should be installed and ready for custom template tags.

How to do it...

Template tags consist of two things: the function parsing the arguments of the template tag, and the node class that is responsible for the logic of the template tag as well as for the output.
Perform the following steps:

1. First, let's create the function parsing the template-tag arguments, as follows:

#utils/templatetags/utility_tags.py
# -*- coding: UTF-8 -*-
from django import template

register = template.Library()

### TAGS ###

@register.tag
def parse(parser, token):
    """
    Parses the value as a template and prints it or saves it to a variable

    Usage:
        {% parse <template_value> [as <variable>] %}

    Examples:
        {% parse object.description %}
        {% parse header as header %}
        {% parse "{{ MEDIA_URL }}js/" as js_url %}
    """
    bits = token.split_contents()
    tag_name = bits.pop(0)
    try:
        template_value = bits.pop(0)
        var_name = None
        if len(bits) == 2:
            bits.pop(0)  # remove the word "as"
            var_name = bits.pop(0)
    except ValueError:
        raise template.TemplateSyntaxError, \
            "parse tag requires the following syntax: " \
            "{% parse <template_value> [as <variable>] %}"
    return ParseNode(template_value, var_name)

2. Then, we create the node class in the same file, as follows:

class ParseNode(template.Node):
    def __init__(self, template_value, var_name):
        self.template_value = template_value
        self.var_name = var_name

    def render(self, context):
        template_value = template.resolve_variable(
            self.template_value, context)
        t = template.Template(template_value)
        context_vars = {}
        for d in list(context):
            for var, val in d.items():
                context_vars[var] = val
        result = t.render(template.RequestContext(
            context['request'], context_vars))
        if self.var_name:
            context[self.var_name] = result
            return ""
        return result

How it works...

The {% parse %} template tag allows you to parse a value as a template and render it immediately, or save it as a context variable. If we have an object with a description field that can contain template variables or logic, we can parse and render it using the following code:

{% load utility_tags %}
{% parse object.description %}

It is also possible to define a value to parse using a quoted string, like this:

{% load utility_tags %}
{% parse "{{ STATIC_URL }}site/img/" as img_path %}
<img src="{{ img_path }}someimage.png" alt="" />

Let's have a look at the code of the template tag. The parsing function checks the arguments of the template tag bit by bit. At first, we expect the name parse, then the template value, then optionally the word as, and lastly the context variable name. The template value and the variable name are passed to the ParseNode class. The render method of that class at first resolves the value of the template variable and creates a template object out of it. Then, it renders the template with all the context variables. If the variable name is defined, the result is saved to it; otherwise, the result is shown immediately.

See also

- The Creating a template tag to include a template if it exists recipe
- The Creating a template tag to load a QuerySet in a template recipe
- The Creating a template tag to modify request query parameters recipe

Creating a template tag to modify request query parameters

Django has a convenient and flexible system for creating canonical, clean URLs just by adding regular expression rules to the URL configuration files. However, there is a lack of built-in mechanisms for managing query parameters. Views such as search or filterable object lists need to accept query parameters to drill down through filtered results using another parameter or to go to another page. In this recipe, we will create a template tag named {% append_to_query %}, which lets you add, change, or remove parameters of the current query.
Getting ready

Once again, we start with the utils app, which should be set in INSTALLED_APPS and should contain the templatetags package. Also, make sure that you have the request context processor set in the TEMPLATE_CONTEXT_PROCESSORS setting, as follows:

#settings.py
TEMPLATE_CONTEXT_PROCESSORS = (
    "django.contrib.auth.context_processors.auth",
    "django.core.context_processors.debug",
    "django.core.context_processors.i18n",
    "django.core.context_processors.media",
    "django.core.context_processors.static",
    "django.core.context_processors.tz",
    "django.contrib.messages.context_processors.messages",
    "django.core.context_processors.request",
)

How to do it...

For this template tag, we will be using the simple_tag decorator, which parses the components and requires you to define just the rendering function, as follows:

#utils/templatetags/utility_tags.py
# -*- coding: UTF-8 -*-
import urllib

from django import template
from django.utils.encoding import force_str

register = template.Library()

### TAGS ###

@register.simple_tag(takes_context=True)
def append_to_query(context, **kwargs):
    """ Renders a link with modified current query parameters """
    query_params = context['request'].GET.copy()
    for key, value in kwargs.items():
        query_params[key] = value
    query_string = u""
    if len(query_params):
        query_string += u"?%s" % urllib.urlencode([
            (key, force_str(value))
            for (key, value) in query_params.iteritems()
            if value
        ]).replace('&', '&amp;')
    return query_string

How it works...

The {% append_to_query %} template tag reads the current query parameters from the request.GET dictionary-like QueryDict object into a new dictionary named query_params, and loops through the keyword parameters passed to the template tag, updating the values. Then, the new query string is formed: all spaces and special characters are URL-encoded, and the ampersands connecting query parameters are escaped. This new query string is returned to the template.

To read more about QueryDict objects, refer to the official Django documentation: https://docs.djangoproject.com/en/1.6/ref/request-response/#querydict-objects

Let's have a look at an example of how the {% append_to_query %} template tag can be used. If the current URL is http://127.0.0.1:8000/artists/?category=fine-art&page=1, we can use the following template tag to render a link that goes to the next page:

{% load utility_tags %}
<a href="{% append_to_query page=2 %}">2</a>

The following is the output rendered using the preceding template tag:

<a href="?category=fine-art&amp;page=2">2</a>

Or, we can use the following template tag to render a link that resets pagination and goes to another category:

{% load utility_tags i18n %}
<a href="{% append_to_query category="sculpture" page="" %}">{% trans "Sculpture" %}</a>

The following is the output rendered using the preceding template tag:

<a href="?category=sculpture">Sculpture</a>

See also

- The Creating a template tag to include a template if it exists recipe
- The Creating a template tag to load a QuerySet in a template recipe
- The Creating a template tag to parse content as a template recipe

Summary

This article showed you how to create and use your own template filters and tags. The default Django template system is quite extensive, but there are always more things to add for different cases.

Resources for Article:

Further resources on this subject: Adding a developer with Django forms [Article], So, what is Django? [Article], Django JavaScript Integration: jQuery In-place Editing Using Ajax [Article]

Installing and Configuring Drupal Commerce

Packt
28 Jun 2013
8 min read
Installing Drupal Commerce on an existing Drupal 7 website

There are two approaches to installing Drupal Commerce; this recipe covers installing Drupal Commerce on an existing Drupal 7 website.

Getting started

You will need to download Drupal Commerce from http://drupal.org/project/commerce. Download the most recent recommended release that matches your Drupal 7 website's core version.

You will also require the following modules to allow Drupal Commerce to function:

- Ctools: http://drupal.org/project/ctools
- Entity API: http://drupal.org/project/entity
- Views: http://drupal.org/project/views
- Rules: http://drupal.org/project/rules
- Address Field: http://drupal.org/project/addressfield

How to do it...

Now that you're ready, install Drupal Commerce by performing the following steps:

1. Install the modules that Drupal Commerce depends on first, by copying the module files into your Drupal site's modules directory, sites/all/modules.
2. Install Drupal Commerce's modules next, by copying the files into the sites/all/modules directory, so that they appear in the sites/all/modules/commerce directory.
3. Enable the newly installed Drupal Commerce module in your Drupal site's administration panel (example.com/admin/modules if you've installed Drupal Commerce at example.com), under the Modules navigation option, by ensuring the checkbox to the left-hand side of the module name is checked.
4. Now that Drupal Commerce is installed, a new menu option will appear in the administration navigation at the top of your screen when you are logged in as a user with administration permissions. You may need to clear the cache to see this: navigate to Configuration | Development | Performance in the administration panel to do so.

How it works...

Drupal Commerce depends on a number of other Drupal modules to function, and by installing and enabling these in your website's administration panel, you're on your way to getting your Drupal Commerce store off the ground. You can also install the Drupal Commerce modules via Drush (the Drupal shell); for more information on Drush, see http://drupal.org/project/drush.
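If you do have Drush available, the whole sequence above might collapse into something like the following sketch; the module machine names are taken from the project URLs listed earlier:

# Download Drupal Commerce and its dependencies
drush dl ctools entity views rules addressfield commerce

# Enable the core Commerce modules (answer yes to dependency prompts)
drush en commerce -y

# Clear the cache so the new admin menu entries appear
drush cc all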
Installing Drupal Commerce with Commerce Kickstart 2

Drupal Commerce requires quite a number of modules, and doing a basic installation can be quite time-consuming, which is where Commerce Kickstart 2 comes in: it packages Drupal 7 core and all of the necessary modules. Using Commerce Kickstart 2 is a good idea if you are building a Drupal Commerce website from scratch and don't already have Drupal core installed.

Getting started

Download Drupal Commerce Kickstart 2 from its drupal.org project page at http://drupal.org/project/commerce_kickstart.

How to do it...

Once you have decompressed the Commerce Kickstart 2 files to the location where you want to install Drupal Commerce, perform the following steps:

1. Visit the given location in your web browser. For this example, it is assumed that your website is at example.com, so visit this address in your web browser. You'll see that you are presented with a welcome screen, as shown in the following screenshot:
2. Click the Let's Get Started button underneath this, and the installer moves to the next configuration option.
3. Next, your server's requirements are checked to ensure Drupal can run in this environment. In the preceding screenshot you can see some common problems that prevent installation. In particular, ensure that you create the /sites/default/files directory in your Drupal installation and that it has permissions allowing Drupal to write to it (as this is where your website's images and files are stored). You will also need to copy the /sites/default/default.settings.php file to /sites/default/settings.php before you can start, and make sure this file is writable by Drupal too (you'll secure it after installation is complete); the shell sketch after this recipe shows one way to do this.
4. Once these problems have been resolved, refresh the page and you will be taken to the Set up database screen. Enter the database username, password, and database name you want to use with Drupal, and click on Save and continue.
5. The next step is the Install profile section, which can take some time as Drupal Commerce is installed for you. There's nothing for you to do here; just wait for the installation to complete! You can now safely remove write permissions from the settings.php file in the /sites/default directory of your Drupal Commerce installation.
6. The next step is Configure site. Enter the name of your new store and your e-mail address here, and provide a username and password for your Drupal Commerce administrator account. Don't forget to make a note of these, as you'll need them to access your website later! Below these options, you can specify the country of your server and the default time zone. These are usually picked up from your server itself, but you may want to change them.
7. Click on the Save and continue button to progress; the next step is Configure store. Here you can set the Default store country field (if it's different from your server settings) and opt to install Drupal Commerce's demo, which includes sample content and a sample Drupal Commerce theme too.
8. Further down on this screen, you're presented with more options. By checking the Do you want to be able to translate the interface of your store? field, Drupal Commerce provides you with the ability to translate your website for customers speaking different languages (for this simple store installation, leave this set to No). Finally, you can set the Default store currency field you wish to use, and whether you want Commerce Kickstart to set up a sales tax rule for your store (select whichever is more appropriate for your store, or leave it set to No sample tax rate for now).
9. Click on Create and finish at the bottom of the screen. If you chose to install the demo store in the previous screen, you will have to wait as it is added for you.
10. There are now options to allow Drupal to check for updates automatically and to receive e-mails about security updates. Leave these both checked to help you stay on top of keeping your Drupal website secure and up to date.
11. Wait as Commerce Kickstart installs everything Drupal Commerce requires to run. That's it! Your Drupal Commerce store is now up and running thanks to Commerce Kickstart 2.

How it works...

The Commerce Kickstart package includes Drupal 7 core and the Drupal Commerce module. By packaging these together, installation and initial configuration of your Drupal Commerce store is made much easier!
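The file-permission steps in the requirements check might look like the following from a shell, assuming you are in the Drupal root; the exact permission values depend on how your host runs PHP:

# Create the public files directory and make it writable by the web server
mkdir -p sites/default/files
chmod 775 sites/default/files

# Create a writable settings.php for the installer
cp sites/default/default.settings.php sites/default/settings.php
chmod 666 sites/default/settings.php

# After installation completes, secure the file again
chmod 444 sites/default/settings.php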
Getting started

Log in to your Drupal Commerce store's administration panel and navigate to Products | Add a product. If you don't see this option, navigate to Site settings | Modules and ensure that the Commerce Kickstart Menu module is enabled for your store. If you installed the demo content from Drupal Kickstart's installation, note that the sample products display here too.

How to do it...

To get started adding a product to your store, click on the Add product button and follow these steps:

Click on Product display. A product display groups multiple related product variations together for display on the frontend of your website.

Fill in the form that appears, entering a suitable Title, using the Body field for the product's description, and filling in the SKU (stock keeping unit; a unique reference for this product) and Price fields. Ensure that the Status field is set to Active. You can also optionally upload an image for the product here.

Optionally, you can assign the product to one of the pre-existing categories in the Product catalog tab underneath these fields, as well as give it a URL in the URL path settings tab.

Click on the Save product button, and you've now created a basic product in your store.

To view the product on the frontend of your store, you can navigate to the category listings if you imported Drupal Commerce's demo data, or else you can return to the Products menu and click on the name of the product in the Title column. You'll now see your product on the frontend of your Drupal Commerce store.

How it works...

In Drupal Commerce, a product can represent several things, listed as follows:

A single product for sale (for example, a one-size-fits-all t-shirt)
A variation of a product (for example, a medium-size t-shirt)
An item that is not necessarily a purchase as such (for example, it may represent a donation to a charity)
An intangible product which the site allows reservations for (for example, an event booking)

Product displays (for example, a blue t-shirt) are used to group product variations (for example, a medium-sized blue t-shirt and a large-sized blue t-shirt) and display them on your website to customers. So, depending on the needs of your Drupal Commerce website, products may be displayed on unique pages, or multiple products might be grouped onto one page as a product display.
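If you later need to create many products at once, the same thing can be done in code. The following is a rough sketch, assuming the commerce_product API that ships with the Drupal Commerce module; the SKU, title, and price values are illustrative only, and the snippet would need to run inside a bootstrapped Drupal environment (for example, via drush php-script):

<?php
// A minimal sketch: create a basic product programmatically.
// Assumes the commerce_product module is enabled; values are examples.
$product = commerce_product_new('product');  // 'product' is the default type
$product->sku = 'SHIRT-BLUE-M';              // hypothetical SKU
$product->title = 'Blue T-Shirt (Medium)';   // hypothetical title
$product->uid = 1;                           // owned by the administrator

// Commerce stores prices in the minor currency unit (cents for USD).
$product->commerce_price[LANGUAGE_NONE][0]['amount'] = 1999;  // $19.99
$product->commerce_price[LANGUAGE_NONE][0]['currency_code'] = 'USD';

commerce_product_save($product);

A product created this way still needs a product display node to appear on the frontend, exactly as described above.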

Finding and Fixing Joomla! 1.5x Customization Problems

Packt
06 Oct 2009
12 min read
Understanding common errors

There are five main areas that cause the majority of problems for Joomla! sites. Understanding these areas and the common problems that occur within each of them is a very important part of fixing them, and thus our site. Even though there is a practically unlimited number of potential issues that can occur, certain problems occur much more regularly than others. If we understand these main problems, we should be able to take care of many of the problems that will occur on our site without needing to resort to hiring people to fix them, or waiting for extension developers to provide support.

The five areas are:

PHP code
JavaScript code
CSS/HTML code
Web server
Database

We will now look at the two most common error sources, PHP and JavaScript.

PHP code

Because PHP code is executed on the server, we usually have some control over the conditions that it is subject to. Most PHP errors originate from one of four sources:

Incorrect extension parameters
PHP code error
PHP version
Server settings

Incorrect extension parameters

It is often easy to misunderstand what the correct value for an extension parameter is, or whether a particular parameter is required or not. These misunderstandings are behind a large number of PHP "errors" that developers experience when building a site.

Diagnosis

In a well-coded extension, putting the wrong information into a parameter shouldn't result in an error, but will usually result in the extension producing strange or unexpected output, or even no output at all. In a poorly coded extension, an incorrect parameter value will probably cause an error. These errors are often easy to spot, especially in modules, because our site will output everything it processed up until the point of the error, giving our page the appearance of being cut off. Some very minor errors may even result in the whole page, except for the error-causing extension, being output correctly, with error messages appearing in the page where the extension with the error was supposed to appear. A critical error, however, may cause the site to crash completely and output only an error message. In extreme cases not even an error message will be output, and visitors will only see a white screen. The messages should always appear in our PHP log though.

Fixing the problem

Incorrect extension parameters are the easiest problems to fix, and are often solved simply by going through the parameter screens for the extensions on the page with the errors, and making sure they all have correct values. If they all look correct, then we may want to try changing some parameters to see if that fixes the issue. If this still doesn't work, then we have a genuine error.

PHP code error

Extension developers aren't perfect, and even the best ones can overlook or miss small issues in the code. This is especially true with large, complex extensions, so please remember that even if an extension has a PHP code error, it may not necessarily mean that the whole extension is poorly coded.

Diagnosis

Similar to incorrect extension parameters, a PHP coding error will usually result in a cut-off page, or a white screen, sometimes with an error message displayed, sometimes without. Whether an error message is displayed or not depends partly on the configuration of your server, and partly on how severe the error was. Some servers are configured to suppress error output of certain types of errors. Regardless of the screen output, all PHP errors should be output to the PHP log.
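To illustrate (a minimal sketch; the file name, line number, and exact message wording are only indicative), a single missing semicolon is enough to cut a page off at the failing extension:

<?php
// mod_example.php - a hypothetical module file
$greeting = "Hello, Joomla!"  // <-- missing semicolon here
echo $greeting;               // the parser trips over this line
// The PHP log would then contain an entry along the lines of:
// PHP Parse error: syntax error, unexpected T_ECHO in /path/to/mod_example.php on line 4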
So, if we get a white screen, or even get a normal screen but strange output, checking our PHP log can often help us to find the problem. PHP logs can reside in different places on differently configured servers, although the location will almost always be a directory called logs. We may also not have direct access to the log, again depending on our server host. If we can't easily find our PHP log, we should ask our web hosting company's support staff for its location.

Some common error messages and causes are:

Parse error: parse error, unexpected T_STRING in...
This is usually caused by a missing semi-colon at the end of a line, or a missing double quote (") or end bracket ()) after we opened one. For quotes and semicolons, the problem is usually the line above the one reported in the error. For missing brackets, the error will sometimes occur at the end of the script, even though the problem code is much earlier in the script.

Parse error: syntax error, unexpected $end in...
We are most likely missing a closing brace (}) somewhere. Make sure that each open brace ({) we have has been closed with a closing brace (}).

Parse error: syntax error, unexpected T_STRING, expecting ',' or ';' in...
There may be double quotes within double quotes. They either need to be escaped, using a backslash before the inside quote, or changed to single quotes.

Fixing the problem

Fixing a PHP code error is possible but can be difficult, depending on the extension. Usually when there is a PHP code error, it will give a brief description of the error and a line number. If nothing is being output at all, then we may need to turn error reporting up, as described later. We will then go to the line specified and examine it and the lines around it to try and find our problem. If we can't find an obvious error, then it might be better to take the error back to the developer and ask them for support.

PHP version

The current version of PHP is 5.x.x, and version 6.x is expected soon, but because many older, but still popular, applications only run on PHP version 4.x.x, it is still very common to find web hosting companies using PHP 4 on their servers. This is even more unfortunate given that PHP 4 isn't supported anymore by the PHP developers. In PHP 5, there are many new functions and features that don't exist in PHP 4. As a result, using these functions in an extension will cause it to error when run on a PHP 4 server.

Diagnosis

Diagnosing whether we have the wrong PHP version is not always obvious, as it will usually result in an error about an unknown function when the extension tries to call a function that doesn't exist in the version of PHP installed on our server. Sometimes the error will not be that the function is unknown, but that the number of parameters we are sending it is incorrect, if they were changed between PHP 4 and PHP 5.

Fixing the problem

The only real way to fix the problem is to upgrade our PHP version. Some web hosts offer PHP 4 or 5 as an option, and it might be as simple as checking a box or clicking a button to turn on PHP 5. If our host doesn't offer PHP 5 at all, the only solution is to use a different extension or change our web host. This may actually be a good idea anyway, because if our host is still using an unsupported PHP version with no option to upgrade, then what other unsupported, out-of-date software is running on those servers?

Server settings

One of the most common problems encountered by site owners in regards to server settings is file permissions.
Many web hosting companies run Linux, which uses a three-part permission model, on their servers. Using this model, every file can have separate permissions set for:

The user who owns the particular file
Other users in the same user group as the owner
Everyone else (in a web site situation this is mainly the site visitors)

Each file also has three permissions that enable, or disable, certain actions on the file. These permissions are read, write, and execute. Permissions are usually expressed in one of two ways: as single characters in a file listing, or as a three-digit number. For example, a file listing on a Linux server might look like this:

drwxr-x--- 2 auser agroup 4096 Dec 28 04:09 tmp
-rwxr-x--- 1 auser agroup  345 Sep  1 04:12 somefile.php
-rwxr--r-- 1 auser agroup  345 Sep  1 04:12 foo

The very first character on the left, a d or - in this case, indicates whether this is a directory (the d) or a file (the -). The next nine characters indicate the permissions and who they apply to. The first three belong to the file owner, the next three to those in the same group as the owner, and the final three to everyone else. The letters used are:

r: read permission
w: write permission
x: execute permission

A dash (-) indicates that this permission hasn't been given to those particular people. So, in our example above, somefile.php can be read, written to, or executed by the owner (auser). It can be read or executed (but not written to) by other users in the same group (agroup) as the owner, but the file cannot be used at all by people outside the group. foo, however, can be read by people in the owner's group, and also read by everyone else, but it cannot be written to or executed by them.

As mentioned above, permissions are also often expressed as a three-digit number. Each of the digits represents the sum of the numbers that represent the permissions granted: r = 4, w = 2, and x = 1. Adding these together gives us a number from 0-7 that indicates the permission level. So a file with a permission level of 644 would translate as:

6 = 4 + 2 = rw-
4 = r--
4 = r--

or -rw-r--r-- in the first notation that we looked at. Most servers are set by default to one of the following:

644 (-rw-r--r--)
755 (-rwxr-xr-x)
775 (-rwxrwxr-x)

All of this looks fine so far. The problems start to creep in depending on how the server runs its PHP. PHP can either be set up to run as the same user who owns all the files (usually our FTP user or hosting account owner), or it can be set up to run as a different user in the same group as the owner, or it can be set up as a completely different user and group, as illustrated here:

The ideal setup, from a convenience point of view, is the first one, where PHP is executed as the same user who owns the files. This setup should have no problems with permissions. But the ideal setup for a very security-conscious web host is the third one, since the PHP engine can't be used to hack the web site files, or the server itself. A web server with the third setup, though, has a difficult time running a Joomla! site. It is difficult because changing the server preferences requires that files be edited by the PHP user, uploading extensions means that folders and files need to be created by the PHP user, and so on. If the PHP engine isn't even in the same group as the file owner, then it gets treated the same as any site visitor and can usually only read, and probably execute, files, but not change them. This prevents us from editing preferences or uploading new extensions.
However, if we changed the files so that the PHP engine could edit and execute them (permission 777, for example), then anyone who can see our site on the Internet could potentially edit and execute our files as well, making our site very vulnerable to being hacked by even a novice hacker. We should never give files or directories a permission of 777 (read, write, and execute for all three user types), because it is almost guaranteed that our site will be hacked eventually as a result. If, for some reason, we need to do it for testing, or in order to install extensions, then we should change it back as soon as possible.

Diagnosis

Spotting this problem is relatively simple. If we can't edit our web site configuration, or install any extensions at all, then nine times out of ten, server permissions will be the problem.

Fixing the problem

We can start by asking our web host if they allow PHP to be run as CGI, or will install suEXEC (technical terms for running PHP as the same user who owns the files) and, if so, how to set it up. If they don't allow this, then the next best option is to enable the Joomla! FTP layer in our configuration. This forces Joomla! to log into our site as the FTP user, which is almost always the same user that uploaded the site files, and to edit or install files as that user. We can enable the FTP layer by going to the Site | Global Configuration page and then clicking on the Server item in the menu below the heading. We can then enter the required information for the FTP layer on this screen. The FTP layer should only be used on Linux-based servers. More information about the FTP layer can be found in the official Joomla! documentation at http://help.joomla.org/content/view/1941/302/1/2/

If for some reason the FTP layer doesn't work, we only have two other options. We could change our web hosting provider. Or, whenever we want to install a new extension or change our configuration, we can change the permissions on our folders, perform our tasks, and then change the permissions back to their original settings.
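For those comfortable with a command line, here is what checking and tightening permissions might look like over SSH; the file name and values are illustrative, and our host's recommendations should take precedence:

$ ls -l configuration.php
-rw-rw-rw- 1 auser agroup 2345 Sep  1 04:12 configuration.php

$ # 666 lets everyone write; restrict it to owner-write, world-read
$ chmod 644 configuration.php
$ ls -l configuration.php
-rw-r--r-- 1 auser agroup 2345 Sep  1 04:12 configuration.php

If we had to loosen permissions temporarily to install an extension, the same chmod command is how we put the original values back afterwards.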


Creating an Enterprise Portal with Oracle WebCenter 11g PS3

Packt
01 Aug 2011
9 min read
Oracle WebCenter 11g PS3 Administration Cookbook

Over 100 advanced recipes to secure, support, manage, and administer Oracle WebCenter

Introduction

An enterprise portal is a framework that allows users to interact with different applications in a secure way. There is a single point of entry, and the security of the composite applications is transparent to the user. Each user should be able to create their own view of the portal. A portal is highly customizable, which means that most of the work will be done at runtime. An administrator should be able to create and manage pages, users, roles, and so on. Users can choose whatever content they want to see on their pages, so they can personalize the portal to their needs.

In this article, you will learn some basics about the WebCenter Portal application. Later chapters will go into further details on most of the subjects covered in this chapter. It is intended as an introduction to the WebCenter Portal.

Preparing JDeveloper for WebCenter

When you want to build WebCenter portals, JDeveloper is the preferred IDE. JDeveloper has a lot of built-in features that will help us to build rich enterprise applications. It has a lot of wizards that can help in building the complex configuration files.

Getting ready

You will need to install JDeveloper before you can start with this recipe. JDeveloper is the IDE from Oracle and can be downloaded from the following link: http://www.oracle.com/technetwork/developer-tools/jdev/downloads/index.html. You will need to download JDeveloper 11.1.1.5 Studio Edition and not JDeveloper 11.1.2, because that version is not compatible with WebCenter yet. This edition is the full-blown edition with all the bells and whistles. It has all the libraries for building an ADF application, which is the basis for a WebCenter application.

How to do it...

Open the JDeveloper installation. Choose Default Role.
From JDeveloper, open the Help menu and select Check for updates.
Click Next on the welcome screen.
Make sure all the Update Centers are selected and press Next.
In the available Updates, enter WebCenter and select all the found updates. Press Next to start the download.
After the download is finished, you will need to restart JDeveloper.
You can check if the updates have been installed by opening the About window from the Help menu. Select the Extensions tab and scroll down to the WebCenter extensions. You should be able to see them:

How it works...

When you first open JDeveloper, you first need to select a role. The role determines the functionality you have in JDeveloper. When you select the default role, all the functionality will be available. By installing the WebCenter extensions, you are installing all the necessary JAR files containing the libraries for the WebCenter framework. JDeveloper will have three additional application templates:

Portlet Producer Application: This template allows you to create a producer based upon the new JSR 286 standard.
WebCenter Portal Application: This template will create a preconfigured portal with ADF and WebCenter technology.
WebCenter Spaces Taskflow Customizations: This application is configured for customizing the applications and services taskflows used with the WebCenter Spaces application.

The extensions also include the taskflows and data controls for each of the WebCenter services that we will be integrating in our portal.
Creating a WebCenter portal

In this release of WebCenter, we can easily build enterprise portals by using the WebCenter Portal application template in JDeveloper. This template contains a preconfigured portal that we can modify to our needs. It has basic administration pages and security.

Getting ready

For this recipe, you need the latest version of JDeveloper with the WebCenter extensions installed, which is described in the previous recipe.

How to do it...

Select New from the File menu.
Select Application in the General section on the left-hand side.
Select WebCenter Portal Application from the list on the right. Press OK.
The Create WebCenter Portal Application dialog will open. In the dialog, you will need to complete a few steps in order to create the portal application:

Application Name: Specify the application name, directory, and application package prefix.
Project Name: Specify the name and directory of the portal project. At this stage, you can also add additional libraries to the project.
Project Java Settings: Specify the default package, Java source, and output directory.
Project WebCenter Settings: With this step, you can request to build a default portal environment. When you disable the Configure the application with standard Portal features checkbox, you will have an empty project with only the reference to the WebCenter libraries, but no default portal will be configured. You can also let JDeveloper create a special test-role, so you can test your application.

Press the Finish button to create the application.

You can test the portal without needing to develop anything. Just start the integrated WebLogic server, right-click the portal project, and select Run from the context menu. When you start the WebLogic server for the first time, it can take a few minutes. This is because JDeveloper will create the WebLogic domain for the integrated WebLogic server. Because we have installed the WebCenter extensions, JDeveloper will also extend the domain with the WebCenter libraries.

How it works...

When the portal has been started, you will see a single page, which is the Home page that contains a login form at the top right corner. When you log in with the default WebLogic user, you should have complete administration rights. The default user of the integrated WebLogic server is weblogic with password weblogic1.

When logged in, you should see an Administration link. This links to the Administration Console, where you can manage the resources of your portal, like pages, resource catalogs, navigations, and so on. In the Administration Console you have five tabs:

Resources: In this tab, you manage all the resources of your portal. The resources are divided into three parts:
Structure: In the structure, you manage the resources about the structure of your portal, such as pages, templates, navigations, and resource catalogs.
Look and Layout: In the look and layout part, you manage things like skins, styles, templates for the content presenter, and mashup styles.
Mashups: Mashups are taskflows created at runtime. You can also manage data controls in the mashup section.
Services: In the services tab, you can manage the services that are configured for your portal.
Security: In the security tab, you can add users or roles and define their access to the portal application.
Configuration: In this tab, you can configure default settings for the portal, like the default page template, default navigation, default resource catalog, and default skin.
Propagation: This tab is only visible when you create a specific URL connection. From this tab, you can propagate changes from your staging environment to your production environment.

There's more...

The WebCenter Portal application will create a preconfigured portal for us. It has a basic structure and page navigation for building complex portals. JDeveloper has created a lot of files for us. Here is an overview of the most important files created for us by JDeveloper:

Templates

The default portal has two page templates. They can be found in the Web Content/oracle/Webcenter/portalapp/pagetemplates folder:

pageTemplate_globe.jspx: This is the default template used for a page
pageTemplate_swooshy.jspx: This is the same template as the globe template, but with another header image

You can of course create your own templates.

Pages

JDeveloper will create the following pages for us. These can be found in the Web Content/oracle/Webcenter/portalapp/pages folder:

error.jspx: This page looks like the login page and is designed to show error messages upon login.
home.jspx: This is an empty page that uses the globe template.
login.jspx: This is the login page. It is also based upon the globe template.

Resource catalogs

By default, JDeveloper will create a default resource catalog. This can be found in the Web Content/oracle/Webcenter/portalapp/catalogs folder. In this folder, you will find the default-catalog.xml file, which represents the resource catalog. When you open this file, you will notice that JDeveloper has a design view for it. This way it is easier to manage and edit the catalog without knowing the underlying XML. Another file in the catalogs folder is catalog-registry.xml. This is the set of components that the user can use when creating a resource catalog at runtime.

Navigations

By using navigations, you can allow users to find content on different pages, taskflows, or even external pages. By defining different navigations, you allow users to have a personalized navigation that fits their needs. By default, you will find one navigation model in the Web Content/oracle/Webcenter/portalapp/navigations folder: default-navigation-model.xml. It contains the page hierarchy and a link to the administration page. This model is not used in the template, but it is there as an example. You can of course use this model and modify it, or you can create your own models. You will also find the navigation-registry.xml file. It contains the items that can be used to create a navigation model at runtime.

Page hierarchy

With the page hierarchy, you can create parent-child relationships between pages. It allows you to create multi-level navigation of existing pages. Within the page hierarchy, you can set the security of each node. You are able to define whether a child node inherits the security from its parent or has its own security. By default, JDeveloper will create the pages.xml page hierarchy in the Web Content/oracle/Webcenter/portalapp/pagehierarchy folder. This hierarchy has only one node, being the Home page.


Alice 3: Controlling the Behavior of Animations

Packt
18 Jul 2011
11 min read
Alice 3 Cookbook

79 recipes to harness the power of Alice 3 for teaching students to build attractive and interactive 3D scenes and videos

Read more about this book

(For more resources related to this subject, see here.)

Introduction

You need to organize the statements that request the different actors to perform actions. Alice 3 provides blocks that allow us to configure the order in which many statements should be executed. This article provides many tasks that will allow us to start controlling the behavior of animations with many actors performing different actions. We will execute many actions with a specific order. We will use counters to run one or more statements many times. We will execute actions for many actors of the same class. We will run code for different actors at the same time to render complex animations.

Performing many statements in order

In this recipe, we will execute many statements for an actor with a specific order. We will add eight statements to control a sequence of movements for a bee.

Getting ready

We have to be working on a project with at least one actor. Therefore, we will create a new project and set a simple scene with a few actors:

Select File | New... in the main menu to start a new project. A dialog box will display the six predefined templates with their thumbnail previews in the Templates tab.
Select GrassyProject.a3p as the desired template for the new project and click on OK. Alice will display a grassy ground with a light blue sky.
Click on Edit Scene, at the lower right corner of the scene preview. Alice will show a bigger preview of the scene and will display the Model Gallery at the bottom.
Add an instance of the Bee class to the scene, and enter bee for the name of this new instance. First, Alice will create the MyBee class to extend Bee. Then, Alice will create an instance of MyBee named bee. Follow the steps explained in the Creating a new instance from a class in a gallery recipe, in the article Alice 3: Making Simple Animations with Actors.
Add an instance of the PurpleFlower class, and enter purpleFlower for the name of this new instance.
Add another instance of the PurpleFlower class, and enter purpleFlower2 for the name of this new instance. The additional flower may be placed on top of the previously added flower.
Add an instance of the ForestSky class to the scene.
Place the bee and the two flowers as shown in the next screenshot:

How to do it...

Follow these steps to execute many statements for the bee with a specific order:

Open an existing project with one actor added to the scene.
Click on Edit Code, at the lower-right corner of the big scene preview. Alice will show a smaller preview of the scene and will display the Code Editor on a panel located at the right-hand side of the main window.
Click on the class: MyScene drop-down list and the list of classes that are part of the scene will appear. Select MyScene | Edit run.
Select the desired actor in the instance drop-down list located at the left-hand side of the main window, below the small scene preview. For example, you can select bee.
Make sure that part: none is selected in the drop-down list located at the right-hand side of the chosen instance.
Activate the Procedures tab. Alice will display the procedures for the previously selected actor.
Drag the pointAt procedure and drop it in the drop statement here area located below the do in order label, inside the run tab.
Because the instance name is bee, the pointAt statement contains the this.bee and pointAt labels followed by the target parameter and its question marks ???. A list with all the possible instances to pass to the first parameter will appear. Click on this.purpleFlower. The following code will be displayed, as shown in the next screenshot:

this.bee.pointAt(this.purpleFlower)

Drag the moveTo procedure and drop it below the previously dropped procedure call. A list with all the possible instances to pass to the first parameter will appear. Select this.purpleFlower getPart ??? and then IStemMiddle_IStemTop_IHPistil_IHPetal01, as shown in the following screenshot:

Click on the more... drop-down menu button that appears at the right-hand side of the recently dropped statement. Click on duration and then on 1.0 in the cascade menu that appears. Click on the new more... drop-down menu that appears. Click on style and then on BEGIN_AND_END_ABRUPTLY. The following code will be displayed as the second statement:

this.bee.moveTo(this.purpleFlower.getPart(IStemMiddle_IStemTop_IHPistil_IHPetal01), duration: 1.0, style: BEGIN_AND_END_ABRUPTLY)

Drag the delay procedure and drop it below the previously dropped procedure call. A list with predefined duration values to pass to the first parameter will appear. Select 2.0 and the following code will be displayed as the third statement:

this.bee.delay(2.0)

Drag the moveAwayFrom procedure and drop it below the previously dropped procedure call. Select 0.25 for the first parameter. Click on the more... drop-down menu button that appears and select this.purpleFlower getPart ??? and then IStemMiddle_IStemTop_IHPistil_IHPetal01. Click on the additional more... drop-down menu button, on duration and then on 1.0 in the cascade menu that appears. Click on the new more... drop-down menu that appears, on style and then on BEGIN_ABRUPTLY_AND_END_GENTLY. The following code will be displayed as the fourth statement:

this.bee.moveAwayFrom(0.25, this.purpleFlower.getPart(IStemMiddle_IStemTop_IHPistil_IHPetal01), duration: 1.0, style: BEGIN_ABRUPTLY_AND_END_GENTLY)

Drag the turnToFace procedure and drop it below the previously dropped procedure call. Select this.purpleFlower2 getPart ??? and then IStemMiddle_IStemTop_IHPistil_IHPetal05. Click on the additional more... drop-down menu button, on duration and then on 1.0 in the cascade menu that appears. Click on the new more... drop-down menu that appears, on style and then on BEGIN_ABRUPTLY_AND_END_GENTLY. The following code will be displayed as the fifth statement:

this.bee.turnToFace(this.purpleFlower2.getPart(IStemMiddle_IStemTop_IHPistil_IHPetal05), duration: 1.0, style: BEGIN_ABRUPTLY_AND_END_GENTLY)

Drag the moveTo procedure and drop it below the previously dropped procedure call. Select this.purpleFlower2 getPart ??? and then IStemMiddle_IStemTop_IHPistil_IHPetal05. Click on the additional more... drop-down menu button, on duration and then on 1.0 in the cascade menu that appears. Click on the new more... drop-down menu that appears, on style and then on BEGIN_AND_END_ABRUPTLY. The following code will be displayed as the sixth statement:

this.bee.moveTo(this.purpleFlower2.getPart(IStemMiddle_IStemTop_IHPistil_IHPetal05), duration: 1.0, style: BEGIN_AND_END_ABRUPTLY)

Drag the delay procedure and drop it below the previously dropped procedure call. A list with predefined duration values to pass to the first parameter will appear.
Select 2.0 and the following code will be displayed as the seventh statement:

this.bee.delay(2.0)

Drag the move procedure and drop it below the previously dropped procedure call. Select FORWARD and then 10.0. Click on the more... drop-down menu button, on duration and then on 10.0 in the cascade menu that appears. Click on the additional more... drop-down menu that appears, on asSeenBy and then on this.bee. Click on the new more... drop-down menu that appears, on style and then on BEGIN_ABRUPTLY_AND_END_GENTLY. The following code will be displayed as the eighth and final statement. The following screenshot shows the eight statements that compose the run procedure:

this.bee.move(FORWARD, 10.0, duration: 10.0, asSeenBy: this.bee, style: BEGIN_ABRUPTLY_AND_END_GENTLY)

Select File | Save as... from Alice's main menu and give a new name to the project. Then you can make changes to the project according to your needs.

How it works...

When we run a project, Alice creates the scene instance, creates and initializes all the instances that compose the scene, and finally executes the run method defined in the MyScene class. By default, the statements we add to a procedure are included within the do in order block. We added eight statements to the do in order block, and therefore Alice will begin with the first statement:

this.bee.pointAt(this.purpleFlower)

Once the bee finishes executing the pointAt procedure, the execution flow goes on with the next statement specified in the do in order block. Thus, Alice will execute the following second statement after the first one finishes:

this.bee.moveTo(this.purpleFlower.getPart(IStemMiddle_IStemTop_IHPistil_IHPetal01), duration: 1.0, style: BEGIN_AND_END_ABRUPTLY)

The do in order statement encapsulates a group of statements with a synchronous execution. Thus, when we add many statements within a do in order block, these statements will run one after the other. Each statement requires its previous statement to finish before starting its execution, and therefore we can use the do in order block to group statements that must run with a specific order.

The moveTo procedure moves the 3D model that represents the actor until it reaches the position of the other actor. The value for the target parameter is the instance of the other actor. We want the bee to move to one of the petals of the first flower, purpleFlower, and therefore we passed this value to the target parameter:

this.purpleFlower.getPart(IStemMiddle_IStemTop_IHPistil_IHPetal01)

We called the getPart function for purpleFlower with IStemMiddle_IStemTop_IHPistil_IHPetal01 as the name of the part to return. This function allows us to retrieve one petal from the flower as an instance. We used the resulting instance as the target parameter for the moveTo procedure and we could make the bee move to the specific petal of the flower.

Once the bee finishes executing the moveTo procedure, the execution flow goes on with the next statement specified in the do in order block. Thus, Alice will execute the following third statement after the second one finishes:

this.bee.delay(2.0)

The delay procedure puts the actor to sleep in its current position for the specified number of seconds. The next statement specified in the do in order block will run after waiting for two seconds. The statements added to the run procedure will perform the following visible actions in the specified order:

Point the bee at purpleFlower.
Begin and end abruptly a movement for the bee from its position to the petal named IStemMiddle_IStemTop_IHPistil_IHPetal01 of purpleFlower. The total duration for the animation must be 1 second.
Make the bee stay in its position for 2 seconds.
Move the bee away 0.25 units from the position of the petal named IStemMiddle_IStemTop_IHPistil_IHPetal01 of purpleFlower. Begin the movement abruptly but end it gently. The total duration for the animation must be 1 second.
Turn the bee to face the petal named IStemMiddle_IStemTop_IHPistil_IHPetal05 of purpleFlower2. Begin the movement abruptly but end it gently. The total duration for the animation must be 1 second.
Begin and end abruptly a movement for the bee from its position to the petal named IStemMiddle_IStemTop_IHPistil_IHPetal05 of purpleFlower2. The total duration for the animation must be 1 second.
Make the bee stay in its position for 2 seconds.
Move the bee forward 10 units. Begin the movement abruptly but end it gently. The total duration for the animation must be 10 seconds. The bee will disappear from the scene.

The following image shows six of the rendered frames.

There's more...

When you work with the Alice code editor, you can temporarily disable statements. Alice doesn't execute the disabled statements. However, you can enable them again later. It is useful to disable one or more statements when you want to test the results of running the project without these statements, but you might want to enable them back to compare the results. To disable a statement, right-click on it and deactivate the IsEnabled option, as shown in the following screenshot:

The disabled statements will appear with diagonal lines, as shown in the next screenshot, and won't be considered at run-time:

To enable a disabled statement, right-click on it and activate the IsEnabled option.
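For reference, once all eight statements are in place, the do in order block of the run procedure contains the following sequence, assembled here from the statements built in this recipe (Alice displays it graphically rather than as plain text):

this.bee.pointAt(this.purpleFlower)
this.bee.moveTo(this.purpleFlower.getPart(IStemMiddle_IStemTop_IHPistil_IHPetal01), duration: 1.0, style: BEGIN_AND_END_ABRUPTLY)
this.bee.delay(2.0)
this.bee.moveAwayFrom(0.25, this.purpleFlower.getPart(IStemMiddle_IStemTop_IHPistil_IHPetal01), duration: 1.0, style: BEGIN_ABRUPTLY_AND_END_GENTLY)
this.bee.turnToFace(this.purpleFlower2.getPart(IStemMiddle_IStemTop_IHPistil_IHPetal05), duration: 1.0, style: BEGIN_ABRUPTLY_AND_END_GENTLY)
this.bee.moveTo(this.purpleFlower2.getPart(IStemMiddle_IStemTop_IHPistil_IHPetal05), duration: 1.0, style: BEGIN_AND_END_ABRUPTLY)
this.bee.delay(2.0)
this.bee.move(FORWARD, 10.0, duration: 10.0, asSeenBy: this.bee, style: BEGIN_ABRUPTLY_AND_END_GENTLY)

Because every statement sits inside the same do in order block, each one starts only after the previous one finishes, which is exactly the ordered behavior described above.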

Using jQuery Script for Creating Dynamic Table of Contents

Packt
21 Oct 2009
6 min read
A typical jQuery script uses a wide assortment of the methods that the library offers. Selectors, DOM manipulation, event handling, and so forth come into play as required by the task at hand. In order to make the best use of jQuery, we need to keep in mind the wide range of capabilities it provides.

A Dynamic Table of Contents

As an example of jQuery in action, we'll build a small script that will dynamically extract the headings from an HTML document and assemble them into a table of contents for that page. Our table of contents will be nestled in the top right corner of the page. We'll have it collapsed initially, but a click will expand it to full height.

At the same time, we'll add a feature to the main body text. The introduction of the text on the page will not be initially loaded, but when the user clicks on the word Introduction, the introductory text will be inserted in place from another file.

Before we reveal the script that performs these tasks, we should walk through the environment in which the script resides.

Obtaining jQuery

The official jQuery website (http://jquery.com/) is always the most up-to-date resource for code and news related to the library. To get started, we need a copy of jQuery, which can be downloaded right from the home page of the site. Several versions of jQuery may be available at any given moment; the latest uncompressed version will be most appropriate for us.

No installation is required for jQuery. To use it, we just need to place it on our site in a public location. Since JavaScript is an interpreted language, there is no compilation or build phase to worry about. Whenever we need a page to have jQuery available, we simply refer to the file's location from the HTML document.

Setting Up the HTML Document

There are three sections to most examples of jQuery usage: the HTML document itself, CSS files to style it, and JavaScript files to act on it. For this example, we'll use a page containing the text of a book:

<?xml version="1.0" encoding="UTF-8" ?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
   "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
  <head>
    <meta http-equiv="Content-Type" content="text/html; charset=utf-8"/>
    <title>Doctor Dolittle</title>
    <link rel="stylesheet" href="dolittle.css" type="text/css" />
    <script src="jquery.js" type="text/javascript"></script>
    <script src="dolittle.js" type="text/javascript"></script>
  </head>
  <body>
    <div id="container">
      <h1>Doctor Dolittle</h1>
      <div class="author">by Hugh Lofting</div>
      <div id="introduction">
        <h2><a href="introduction.html">Introduction</a></h2>
      </div>
      <div id="content">
        <h2>Puddleby</h2>
        <p>ONCE upon a time, many years ago when our grandfathers
           were little children--there was a doctor; and his name was
           Dolittle-- John Dolittle, M.D. &quot;M.D.&quot; means
           that he was a proper doctor and knew a whole lot.
        </p>
        <!-- More text follows... -->
      </div>
    </div>
  </body>
</html>

The actual layout of files on the server does not matter. References from one file to another just need to be adjusted to match the organization we choose. In most examples in this book, we will use relative paths to reference files (../images/foo.png) rather than absolute paths (/images/foo.png). This will allow the code to run locally without the need for a web server.
The stylesheet is loaded immediately after the standard <head> elements. Here are the portions of the stylesheet that affect our dynamic elements:

/* -----------------------------------
   Page Table of Contents
-------------------------------------- */
#page-contents {
  position: absolute;
  text-align: left;
  top: 0;
  right: 0;
  width: 15em;
  border: 1px solid #ccc;
  border-top-width: 0;
  border-right-width: 0;
  background-color: #e3e3e3;
}
#page-contents h3 {
  margin: 0;
  padding: .25em .5em .25em 15px;
  background: url(arrow-right.gif) no-repeat 0 2px;
  font-size: 1.1em;
  cursor: pointer;
}
#page-contents h3.arrow-down {
  background-image: url(arrow-down.gif);
}
#page-contents a {
  display: block;
  font-size: 1em;
  margin: .4em 0;
  font-weight: normal;
}
#page-contents div {
  padding: .25em .5em .5em;
  display: none;
  background-color: #efefef;
}

/* -----------------------------------
   Introduction
-------------------------------------- */
.dedication {
  margin: 1em;
  text-align: center;
  border: 1px solid #555;
  padding: .5em;
}

After the stylesheet is referenced, the JavaScript files are included. It is important that the script tag for the jQuery library be placed before the tag for our custom scripts; otherwise, the jQuery framework will not be available when our code attempts to reference it.
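The excerpt stops before the script itself, so here is a minimal sketch of what dolittle.js might contain, assuming only core jQuery methods (.each(), .click(), .slideToggle(), .load()); the selector names match the markup and stylesheet above, but the book's actual implementation may differ in detail:

$(document).ready(function() {
  // Build the collapsed table-of-contents box in the top right corner.
  var $toc = $('<div id="page-contents"></div>').prependTo('#container');
  var $heading = $('<h3>Page Contents</h3>').appendTo($toc);
  var $list = $('<div></div>').appendTo($toc);  // hidden by the stylesheet

  // Extract every <h2> from the main text and link to it.
  $('#content h2').each(function(index) {
    $(this).attr('id', 'title-' + index);
    $('<a href="#title-' + index + '"></a>')
      .text($(this).text())
      .appendTo($list);
  });

  // A click on the heading expands or collapses the list
  // and swaps the arrow image via the arrow-down class.
  $heading.click(function() {
    $list.slideToggle();
    $heading.toggleClass('arrow-down');
  });

  // Load the introduction in place when its link is clicked.
  $('#introduction h2 a').click(function() {
    $('#introduction').load(this.href);
    return false;  // suppress normal navigation to introduction.html
  });
});

Because the script runs inside $(document).ready(), it only manipulates the DOM once the page structure is available, which is why it can be referenced from the <head> of the document.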


IBM Lotus Domino: Creating Action Buttons and Adding Style to Views

Packt
11 May 2011
7 min read
IBM Lotus Domino: Classic Web Application Development Techniques

A step-by-step guide for web application development and quick tips to enhance applications using Lotus Domino

Provide view navigation buttons

Simple views intended to provide information (for example, a table of values) or links to a limited number of documents can stand alone quite nicely, embedded on a page or a view template. But if more than a handful of documents display in the view, you should provide users a way to move forward and backward through the view. If you use the View Applet, enable the scroll bars; otherwise, add some navigational buttons to the view templates to enable users to move around in it.

Code next and previous navigation buttons

If you set the line count for a view, only that number of rows is sent to the browser. You need to add Action buttons or hotspots on the view template to enable users to advance the view to the next set of documents or to return to the previous set of documents, essentially paging backward and forward through the view.

Code a Next button with this formula:

@DbCommand("Domino"; "ViewNextPage")

Code a Previous button with this formula:

@DbCommand("Domino"; "ViewPreviousPage")

Code first and last buttons

Buttons can be included on the view template to page to the first and last documents in the view. Code an @Formula in a First button's Click event to compute and open a relative URL. The link reopens the current view and positions it at the first document:

@URLOpen("/" + @WebDbName + "/" + @Subset(@ViewTitle;-1) + "?OpenView&Start=1")

For a Last button, add a Computed for Display field to the view template with this @Formula:

@Elements(@DbColumn("":"NoCache"; ""; @ViewTitle; 1))

The value for the field (vwRows in this example) is the current number of documents in the view. This information is used in the @Formula for the Last button's Click event:

url := "/" + @WebDbName + "/" + @Subset(@ViewTitle;-1) ;
@URLOpen(url + "?OpenView&Start=" + @Text(vwRows))

When Last is clicked, the view reopens, positioned at the last document. Please note that for very large views, the @Formula for field vwRows may fail because of limitations in the amount of data that can be returned by @DbColumn.

Let users specify a line count

As computer monitors today come in a wide range of sizes and resolutions, it may be difficult to determine the right number of documents to display in a view to accommodate all users. On some monitors the view may seem too short, on others too long. Here is a strategy you might adapt to your application that enables users to specify how many lines to display. The solution relies on several components working together:

Several Computed for display fields on the view template
A button that sets the number of lines with JavaScript
Previous and Next buttons that run JavaScript to page through the view

The technique uses the Start and Count parameters, which can be used when you open a view with a URL. The Start parameter, used in a previous example, specifies the row or document within a view that should display at the top of the view window on a page. The Count parameter specifies how many rows or documents should display on the page, and it overrides any line count setting you may have set on an embedded view element.

Here are the Computed for display fields to be created on the view template. The Query_String_Decoded field (a CGI variable) must be named as such, but all the other field names in this list are arbitrary.
Following each field name is the @Formula that computes its value:

Query_String_Decoded: Query_String_Decoded
vwParms: @Right(@LowerCase(Query_String_Decoded); "&")
vwStart: @If(@Contains(vwParms; "start="); @Middle(vwParms; "start="; "&"); "1")
vwCount: @If(@Contains(vwParms; "count="); @Middle(vwParms; "count="; "&"); "10")
vwURL: "/" + @WebDbName + "/" + @Subset(@ViewTitle;1) + "?OpenView"
vwRows: @Elements(@DbColumn("":"NoCache"; ""; @ViewTitle; 1))
countFlag: "n"
newCount: "1"

Add several buttons to the view template. Code JavaScript in each button's onClick event. You may want to code these scripts inline for testing, and then move them to a JavaScript library when you know they are working the way you want them to.

The Set Rows button's onClick event is coded with JavaScript that receives a line count from the user. If the user-entered line count is not valid, then the current line count is retained. A flag is set indicating that the line count may have been changed:

var f = document.forms[0] ;
var rows = parseInt(f.vwRows.value) ;
var count = prompt("Number of Rows?","10") ;
if ( isNaN(count) || count < 1 || count >= rows ) {
  count = f.vwCount.value ;
}
f.newCount.value = count ;
f.countFlag.value = "y" ;

The Previous button's onClick event is coded to page backward through the view using the user-entered line count:

var f = document.forms[0] ;
var URL = f.vwURL.value ;
var ctFlag = f.countFlag.value ;
var oCT = parseInt(f.vwCount.value) ;
var nCT = parseInt(f.newCount.value) ;
var oST = parseInt(f.vwStart.value) ;
var count ;
var start ;
if ( ctFlag == "n" ) {
  count = oCT ;
  start = oST - oCT ;
} else {
  count = nCT ;
  start = oST - nCT ;
}
if ( start < 1 ) { start = 1 ; }
location.href = URL + "&Start=" + start + "&Count=" + count ;

The Next button pages forward through the view using the user-entered line count:

var f = document.forms[0] ;
var URL = f.vwURL.value ;
var ctFlag = f.countFlag.value ;
var oCT = parseInt(f.vwCount.value) ;
var nCT = parseInt(f.newCount.value) ;
var start = parseInt(f.vwStart.value) + oCT ;
if ( ctFlag == "n" ) {
  location.href = URL + "&Start=" + start + "&Count=" + oCT ;
} else {
  location.href = URL + "&Start=" + start + "&Count=" + nCT ;
}

Finally, if First and Last buttons are included with this scheme, they need to be recoded as well to work with a user-specified line count. The @formula in the First button's Click event now looks like this:

count := @If(@IsAvailable(vwCount); vwCount; "10") ;
parms := "?OpenView&Start=1&Count=" + count ;
@URLOpen("/" + @WebDbName + "/" + @Subset(@ViewTitle;-1) + parms) ;

The @formula in the Last button's Click event is also a little more complicated. Note that if the field vwRows is not available, then the Start value is set to 1,000. This is really more for debugging, since the Start parameter should always be set to the value of vwRows:

start := @If(@IsAvailable(vwRows); @Text(vwRows); "1000") ;
count := @If(@IsAvailable(vwCount); vwCount; "10") ;
parms := "?OpenView&Start=" + start + "&Count=" + count ;
url := "/" + @WebDbName + "/" + @Subset(@ViewTitle;-1) ;
@URLOpen(url + parms) ;
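Taken together, the Start and Count parameters drive view URLs of this form (the server, database, and view names are illustrative): http://www.example.com/sales.nsf/ByDate?OpenView&Start=21&Count=10, which opens the ByDate view positioned at row 21 and sends only 10 rows to the browser. And once the inline scripts work, you can move them into a JavaScript library, as the recipe suggests; one possible consolidation (the function name is hypothetical) collapses the Previous and Next logic into a single function:

// Hypothetical library function: pass -1 for Previous, +1 for Next.
function pageView(direction) {
  var f = document.forms[0] ;
  var oldCount = parseInt(f.vwCount.value) ;
  // Use the user-specified count only if Set Rows flagged a change.
  var newCount = (f.countFlag.value == "y") ?
      parseInt(f.newCount.value) : oldCount ;
  // Next advances past the rows currently displayed;
  // Previous backs up by the number of rows about to be displayed.
  var start = parseInt(f.vwStart.value) +
      (direction > 0 ? oldCount : -newCount) ;
  if ( start < 1 ) { start = 1 ; }
  location.href = f.vwURL.value + "&Start=" + start + "&Count=" + newCount ;
}

The Previous button's onClick event then reduces to pageView(-1), and the Next button's to pageView(1).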
These buttons expand all categories and collapse all categories respectively: The Expand All button's Click event contains this @Command: @Command([ViewExpandAll]) The Collapse All button's Click event contains this @Command: @Command([ViewCollapseAll]) Co-locate and define all Action buttons Action Bar buttons can be added to a view template as well as to a view. If Action buttons appear on both design elements, then Domino places all the buttons together on the same top row. In the following image, the first button is from the view template, and the last three are from the view itself: If it makes more sense for the buttons to be arranged in a different order, then take control of their placement by co-locating them all either on the view template or on the view. Create your own Action buttons As mentioned previously, Action Bar buttons are rendered in a table placed at the top of a form. But on typical Web pages, buttons and hotspots are located below a banner, or in a menu at the left or the right. Buttons along the top of a form look dated and may not comply with your organization's web development standards. You can replace the view template and view Action buttons with hotspot buttons placed elsewhere on the view template: Create a series of hotspots or hotspot buttons on the view template, perhaps below a banner. Code @formulas for the hotspots that are equivalent to the Action Bar button formulas. Define a CSS class for those hotspots, and code appropriate CSS rules. Delete or hide from the Web all standard Action Bar buttons on the view template and on the view.