
How-To Tutorials - Web Development


Improving Your Development Speed

Packt
23 Oct 2013
7 min read
What all developers want is to do their job as fast as they can without sacrificing the quality of their work. IntelliJ has a large range of features that will reduce the time spent in development, but to achieve the best performance IntelliJ can offer, it is important that you understand the IDE and adapt some of your habits. In this article, we will navigate through the features that can help you do your job even faster. You will understand IntelliJ's main elements and how they work and, beyond this, learn how IntelliJ can help you organize your activities and the files you are working on. To further harness IntelliJ's abilities, you will also learn how to manage plugins and see a short list of plugins that can help you.

Identifying and understanding window elements

Before we start showing you techniques you can use to improve your performance using IntelliJ, you need to identify and understand the visual elements present in the main window of the IDE. Knowing these elements will help you find what you want faster. The main window can be divided into seven parts:

- The main menu contains options that you can use to do tasks such as creating projects, refactoring, managing files in version control, and more.
- The main toolbar contains some essential options. Some buttons are shown or hidden depending on the configuration of the project; version control buttons are an example of this.
- The Navigation Bar is sometimes a quick and good alternative for navigating easily and quickly through the project files.
- Tool tabs are shown on both sides of the screen and at the bottom of IntelliJ. They represent the tools that are available for the project. Some tabs are available only when facets are enabled in the project (e.g. the Persistence tab).
- When the developer clicks on a tool tab, a tool window appears. These windows present the project from different perspectives, and the options available in each tool window provide the developer with a wide range of development tasks.
- The editor is where you write your code.
- The Status Bar indicates the current IDE state and provides some options to manipulate the environment. For example, you can hide the tool tabs by clicking on the icon at the bottom-left of the window.

In almost all elements, there are context menus available. These menus provide extra options that may complement and ease your work. For example, the context menu available in the toolbar provides an option to hide itself and another to customize the menus and toolbars. You will notice that some tool tabs have numbers. These numbers are used in conjunction with the Alt key to access the tool window you want quickly: Alt + 1, for example, will open the Project tool window. Each tool window has different options; some present search facilities, others show specific options. They share a common structure: a title bar, a toolbar, and the content pane. Some tool windows don't have a toolbar and, in others, the options in the title bar may vary. However, all of them have at least two buttons in the rightmost part of the title bar: a gear and a small bar with an arrow. The first button is used to configure some properties of the tool and the second minimizes the window.
The options available under the gear button icon generally differ from tool to tool. However, in the drop-down list you will find four common options: Pinned, Docked, Floating, and Split modes. As you may have already imagined, these options define how the tool window is shown. The Pinned mode is very useful when it is unmarked; that way, when you focus on the code editor, you don't lose time minimizing the tool window.

Identifying and understanding code editor elements

The editor provides some elements that can facilitate navigation through the code and help identify problems in it. The editor is divided as follows:

- The editor area, as you probably know, is where you edit your source code.
- The gutter area is where different types of information about the code are shown, using icons or special marks like breakpoints and ranges. The indicators used here aren't just for display; you can perform some actions depending on the indicator, such as reverting changes or navigating through the code.
- The smart completion popup, as you've already seen, provides assistance to the developer in accordance with the current context.
- The document tabs area is where the tabs of each opened document are available. The type of document is identified by an icon, and the color of the file name shows its status in version control: blue stands for "modified", green for "new", red for "not in VCS", and black for "not changed". This component has a context menu that provides some other facilities as well.
- The marker bar is positioned on the right-hand side of the editor and its goal is to show the current status of the code. The square mark at the top can be green when your code is OK, yellow for non-critical warnings, or red for compilation errors. Below this square, the marker bar can show other colored marks that help the developer jump directly to the desired part of the code.

Sometimes, while you are coding, you may notice a small icon floating near the cursor; this icon indicates that there are intentions available that could help you. There are four kinds of intention icon:

- One indicates that IntelliJ proposes a code modification that isn't strictly necessary; it covers everything from warning corrections to code improvements.
- One indicates an intention action that can be used but doesn't provide any improvement or code correction.
- One indicates that a quick fix is available to correct an imminent code error.
- One indicates that the alert for the intention is disabled but the intention is still available.

Intention actions can be grouped into four categories:

- Create from usage is the kind of intention action that proposes the creation of code depending on the context. For example, if you enter a method name that doesn't exist, this intention will recognize it and propose the creation of the method.
- Quick fixes is the type of intention that responds to code mistakes, such as wrong type usage or missing resources.
- Micro refactoring is the kind of intention that is shown when the code is syntactically correct but could be improved (for readability, for example).
- Fragment action is the type of intention used when there are string literals of an injected language; this type of injection lets you edit the corresponding sentence in another editor.
Intention actions can be enabled or disabled on the fly or in the Intentions section of the configuration dialog; by default, all intentions come activated. Adding intentions is possible only after installing plugins for them or creating your own plugin. If you prefer, you can use the Alt + Enter shortcut to invoke the intentions popup.

Summary

As you have seen in this article, IntelliJ provides a wide range of functionalities that will improve your development speed. More important than knowing all the shortcuts IntelliJ offers is understanding what is possible to do with them and when to use each feature.


Preparing Optimizations

Packt
04 Jun 2015
11 min read
In this article by Mayur Pandey and Suyog Sarda, authors of LLVM Cookbook, we will look into the following recipes:

- Various levels of optimization
- Writing your own LLVM pass
- Running your own pass with the opt tool
- Using another pass in a new pass

Once the source code transformation completes, the output is in the LLVM IR form. This IR serves as a common platform for conversion into assembly code, depending on the backend. Before being converted into assembly code, however, the IR can be optimized to produce more effective code. The IR is in the SSA form, where every new assignment to a variable is a new variable itself—a classic case of an SSA representation. In the LLVM infrastructure, a pass serves the purpose of optimizing LLVM IR. A pass runs over the LLVM IR, processes it, analyzes it, identifies optimization opportunities, and modifies it to produce optimized code. The command-line interface opt is used to run optimization passes on LLVM IR.

Various levels of optimization

There are various levels of optimization, starting at 0 and going up to 3 (there is also s for space optimization). The code gets more and more optimized as the optimization level increases. Let's try to explore the various optimization levels.

Getting ready

Various optimization levels can be understood by running the opt command-line interface on LLVM IR. For this, an example C program can first be converted to IR using the Clang frontend. Open an example.c file and write the following code in it:

$ vi example.c

int main(int argc, char **argv) {
  int i, j, k, t = 0;
  for(i = 0; i < 10; i++) {
    for(j = 0; j < 10; j++) {
      for(k = 0; k < 10; k++) {
        t++;
      }
    }
    for(j = 0; j < 10; j++) {
      t++;
    }
  }
  for(i = 0; i < 20; i++) {
    for(j = 0; j < 20; j++) {
      t++;
    }
    for(j = 0; j < 20; j++) {
      t++;
    }
  }
  return t;
}

Now convert this into LLVM IR using the clang command, as shown here:

$ clang -S -O0 -emit-llvm example.c

A new file, example.ll, will be generated, containing LLVM IR. This file will be used to demonstrate the various optimization levels available.

How to do it…

Do the following steps:

1. The opt command-line tool can be run on the IR-generated example.ll file:

   $ opt -O0 -S example.ll

   The -O0 option specifies the least optimization level.

2. Similarly, you can run other optimization levels:

   $ opt -O1 -S example.ll
   $ opt -O2 -S example.ll
   $ opt -O3 -S example.ll

How it works…

The opt command-line interface takes the example.ll file as the input and runs the series of passes specified in each optimization level. It can repeat some passes in the same optimization level. To see which passes are being used in each optimization level, add the --debug-pass=Structure command-line option to the previous opt commands (for example, $ opt -O2 -S example.ll --debug-pass=Structure).

See also

To know more about the various other options that can be used with the opt tool, refer to http://llvm.org/docs/CommandGuide/opt.html

Writing your own LLVM pass

All LLVM passes are subclasses of the Pass class, and they implement functionality by overriding the virtual methods inherited from Pass. LLVM applies a chain of analyses and transformations on the target program. A pass is an instance of the Pass LLVM class.

Getting ready

Let's see how to write a pass. Let's name the pass function block counter; once done, it will simply display the name of the function and count the basic blocks in that function when run. First, a Makefile needs to be written for the pass.
Follow the given steps to write a Makefile:

1. Open a Makefile in the llvm lib/Transforms folder:

   $ vi Makefile

2. Specify the path to the LLVM root folder and the library name, and make this pass a loadable module by specifying it in the Makefile, as follows:

   LEVEL = ../../..
   LIBRARYNAME = FuncBlockCount
   LOADABLE_MODULE = 1
   include $(LEVEL)/Makefile.common

This Makefile specifies that all the .cpp files in the current directory are to be compiled and linked together in a shared object.

How to do it…

Do the following steps:

1. Create a new .cpp file called FuncBlockCount.cpp:

   $ vi FuncBlockCount.cpp

2. In this file, include some header files from LLVM:

   #include "llvm/Pass.h"
   #include "llvm/IR/Function.h"
   #include "llvm/Support/raw_ostream.h"

3. Include the llvm namespace to enable access to LLVM functions:

   using namespace llvm;

4. Then start with an anonymous namespace:

   namespace {

5. Next declare the pass:

   struct FuncBlockCount : public FunctionPass {

6. Then declare the pass identifier, which will be used by LLVM to identify the pass:

   static char ID;
   FuncBlockCount() : FunctionPass(ID) {}

7. This step is one of the most important steps in writing a pass—writing a run function. Since this pass inherits FunctionPass and runs on a function, a runOnFunction is defined to be run on a function:

   bool runOnFunction(Function &F) override {
     errs() << "Function " << F.getName() << '\n';
     return false;
   }
   };
   }

   This function prints the name of the function that is being processed.

8. The next step is to initialize the pass ID:

   char FuncBlockCount::ID = 0;

9. Finally, the pass needs to be registered, with a command-line argument and a name:

   static RegisterPass<FuncBlockCount> X("funcblockcount", "Function Block Count", false, false);

Putting everything together, the entire code looks like this:

#include "llvm/Pass.h"
#include "llvm/IR/Function.h"
#include "llvm/Support/raw_ostream.h"

using namespace llvm;

namespace {
struct FuncBlockCount : public FunctionPass {
  static char ID;
  FuncBlockCount() : FunctionPass(ID) {}
  bool runOnFunction(Function &F) override {
    errs() << "Function " << F.getName() << '\n';
    return false;
  }
};
}

char FuncBlockCount::ID = 0;
static RegisterPass<FuncBlockCount> X("funcblockcount", "Function Block Count", false, false);

How it works

A simple gmake command compiles the file, so a new file, FuncBlockCount.so, is generated at the LLVM root directory. This shared object file can be dynamically loaded into the opt tool to run it on a piece of LLVM IR code. How to load and run it will be demonstrated in the next section.

See also

To know more about how a pass can be built from scratch, visit http://llvm.org/docs/WritingAnLLVMPass.html

Running your own pass with the opt tool

The pass written in the previous recipe, Writing your own LLVM pass, is ready to be run on the LLVM IR. This pass needs to be loaded dynamically for the opt tool to recognize and execute it.

How to do it…

Do the following steps:

1. Write the C test code in the sample.c file, which we will convert into an .ll file in the next step:

   $ vi sample.c

   int foo(int n, int m) {
     int sum = 0;
     int c0;
     for (c0 = n; c0 > 0; c0--) {
       int c1 = m;
       for (; c1 > 0; c1--) {
         sum += c0 > c1 ? 1 : 0;
       }
     }
     return sum;
   }

2. Convert the C test code into LLVM IR using the following command:

   $ clang -O0 -S -emit-llvm sample.c -o sample.ll

   This will generate a sample.ll file.
3. Run the new pass with the opt tool, as follows:

   $ opt -load (path_to_.so_file)/FuncBlockCount.so -funcblockcount sample.ll

   The output will look something like this:

   Function foo

How it works…

As seen in the preceding code, the shared object loads dynamically into the opt command-line tool and runs the pass. It goes over the function and displays its name. It does not modify the IR. Further enhancement of the new pass is demonstrated in the next recipe.

See also

To know more about the various types of the Pass class, visit http://llvm.org/docs/WritingAnLLVMPass.html#pass-classes-and-requirements

Using another pass in a new pass

A pass may require another pass to get some analysis data, heuristics, or any such information to decide on a further course of action. The pass may just require some analysis such as memory dependencies, or it may require the altered IR as well. The new pass that you just saw simply prints the name of the function. Let's see how to enhance it to count the basic blocks in a loop, which also demonstrates how to use the results of another pass.

Getting ready

The code used in the previous recipe remains the same. Some modifications are required, however, to enhance it—as demonstrated in the next section—so that it counts the number of basic blocks in the IR.

How to do it…

The getAnalysis function is used to specify which other pass will be used:

1. Since the new pass will be counting the number of basic blocks, it requires loop information. This is specified using getAnalysis with the LoopInfo wrapper pass:

   LoopInfo *LI = &getAnalysis<LoopInfoWrapperPass>().getLoopInfo();

   This will call the LoopInfo pass to get information on the loops.

2. Iterating through this object gives the basic block information:

   unsigned num_Blocks = 0;
   Loop::block_iterator bb;
   for(bb = L->block_begin(); bb != L->block_end(); ++bb)
     num_Blocks++;
   errs() << "Loop level " << nest << " has " << num_Blocks << " blocks\n";

   This will go over the loop to count the basic blocks inside it. However, it counts only the basic blocks in the outermost loop. To get information on the innermost loops, recursively calling the getSubLoops function will help. Putting the logic in a separate function and calling it recursively makes more sense:

   void countBlocksInLoop(Loop *L, unsigned nest) {
     unsigned num_Blocks = 0;
     Loop::block_iterator bb;
     for(bb = L->block_begin(); bb != L->block_end(); ++bb)
       num_Blocks++;
     errs() << "Loop level " << nest << " has " << num_Blocks << " blocks\n";
     std::vector<Loop*> subLoops = L->getSubLoops();
     Loop::iterator j, f;
     for (j = subLoops.begin(), f = subLoops.end(); j != f; ++j)
       countBlocksInLoop(*j, nest + 1);
   }

   virtual bool runOnFunction(Function &F) {
     LoopInfo *LI = &getAnalysis<LoopInfoWrapperPass>().getLoopInfo();
     errs() << "Function " << F.getName() << '\n';
     for (Loop *L : *LI)
       countBlocksInLoop(L, 0);
     return false;
   }

How it works…

The newly modified pass now needs to run on a sample program.
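One caveat before running it: under the legacy pass manager used in these recipes, a getAnalysis<LoopInfoWrapperPass>() call only succeeds if the pass first declares that it depends on that analysis. The recipe text leaves this step implicit, so here is a minimal sketch of the usual override, added inside the FuncBlockCount struct (an assumption based on the standard LLVM legacy pass API, not shown in the original recipe):

#include "llvm/Analysis/LoopInfo.h" // declares LoopInfo and LoopInfoWrapperPass

// Tell the pass manager to compute loop information before this pass runs,
// and record that this pass does not invalidate any other analyses.
void getAnalysisUsage(AnalysisUsage &AU) const override {
  AU.addRequired<LoopInfoWrapperPass>();
  AU.setPreservesAll();
}

With that declaration in place, the enhanced pass is ready to run on a sample program.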
Follow the given steps to modify and run the sample program:

1. Open the sample.c file and replace its content with the following program:

   int main(int argc, char **argv) {
     int i, j, k, t = 0;
     for(i = 0; i < 10; i++) {
       for(j = 0; j < 10; j++) {
         for(k = 0; k < 10; k++) {
           t++;
         }
       }
       for(j = 0; j < 10; j++) {
         t++;
       }
     }
     for(i = 0; i < 20; i++) {
       for(j = 0; j < 20; j++) {
         t++;
       }
       for(j = 0; j < 20; j++) {
         t++;
       }
     }
     return t;
   }

2. Convert it into a .ll file using Clang:

   $ clang -O0 -S -emit-llvm sample.c -o sample.ll

3. Run the new pass on the previous sample program:

   $ opt -load (path_to_.so_file)/FuncBlockCount.so -funcblockcount sample.ll

   The output will look something like this:

   Function main
   Loop level 0 has 11 blocks
   Loop level 1 has 3 blocks
   Loop level 1 has 3 blocks
   Loop level 0 has 15 blocks
   Loop level 1 has 7 blocks
   Loop level 2 has 3 blocks
   Loop level 1 has 3 blocks

There's more…

LLVM's pass manager provides a debug pass option that gives us the chance to see which passes interact with our analyses and optimizations, as follows:

$ opt -load (path_to_.so_file)/FuncBlockCount.so -funcblockcount sample.ll -disable-output -debug-pass=Structure

Summary

In this article you have explored various optimization levels and the optimization techniques that kick in at each level. We also saw the step-by-step approach to writing our own LLVM pass.


Managing Manufacturers, Vendors, and Product Categories with Joomla! E-Commerce VirtueMart

Packt
28 Oct 2009
7 min read
We are going to add and edit a lot of information for manufacturers, vendors, and product categories. In this article, our VirtueMart shop will really take shape with the products we want to sell.

Catalogue management

The product catalog for an online shop comprises the products we sell in the shop. Whatever products we want to sell should be added to this product catalog first. Once products are added to the catalog, customers can browse them and decide to buy whatever they need. Therefore, managing the catalog is one of the primary tasks of the shop owner. Products that we add to the catalog need to be organized to help customers easily find the right products. In VirtueMart, customers can sort the products by product categories and manufacturers. Therefore, before adding products to the catalog, we will look into managing manufacturers and product categories.

Managing manufacturers

In VirtueMart, whenever we add a product to the catalog, we also need to assign a manufacturer for that product. In reality, every product has a manufacturer, and for better management of the shop, we should be able to find products by their manufacturer. Therefore, the first step will be to identify the manufacturers and enter their information into the VirtueMart store. We can also categorize the manufacturers as publishers, software developers, and so on.

Adding a manufacturer category

There is a default manufacturer category for use in VirtueMart, and we can use that default category when creating a manufacturer. However, when we are selling a large number of products from a large number of manufacturers, classifying them into categories will make managing the manufacturers more convenient. To add a manufacturer category, in the VirtueMart administration panel, click on Manufacturer | Add Manufacturer Category. This shows the Manufacturer Category Form. In this form, provide information for the Category Name and the Category Description fields. Once these are provided, click the Save icon in the toolbar to save the manufacturer category. In the same way, you can add as many categories as you want.

Adding a manufacturer

To add a manufacturer, in the VirtueMart administration panel, select Manufacturer | Add Manufacturer. This shows the Add Information screen. Type the manufacturer's name, their URL, email address, and a brief description. In the Manufacturer Category field, select the category; the drop-down list will show the manufacturer categories you created earlier. Once all the information is provided on this screen, click the Save icon in the toolbar to save the manufacturer information.

Listing the manufacturer categories

Once you have added the manufacturer categories, you can view the list of manufacturer categories by selecting Manufacturer | List Manufacturer Categories. This shows the Manufacturer Category List screen, where you will see all the manufacturer categories you have created. From this screen, you can add a new category by clicking the New icon in the toolbar. Similarly, you can remove a category by clicking on the trash icon in the Remove column, or by selecting the categories and clicking the Remove icon in the toolbar. You can edit a category by clicking on the category name. To view the list of manufacturers, click on the Manufacturer List link in the Manufacturers column, or select Manufacturer | List Manufacturers.
This shows the Manufacturer List screen, displaying all the manufacturers you have added. From the Manufacturer List screen, you can create a new manufacturer, remove one or more manufacturers, and edit any manufacturer. To edit a manufacturer, click on the manufacturer's name or the Update link in the Admin column. This will bring up the Add Information screen again. You can also create a new manufacturer by clicking the New icon in the toolbar.

From the Manufacturer Category List screen, you may think that clicking on the Manufacturer List link against each category will display only the manufacturers added to that category. Ideally, this should be the case. However, up to VirtueMart 1.1.2, it shows the list of manufacturers from all the categories. We hope this will be fixed in the upcoming releases of VirtueMart.

Managing vendors

The idea of multiple vendors is something you can see on Amazon.com. Different vendors add their products to sell; when an order is placed, the store notifies the vendor to fulfill it. The main store usually gets a commission from the vendor for each sale made through the store. However, VirtueMart's vendor feature is still in its infancy and does not yet function properly. You can add multiple vendors in VirtueMart and assign products to the vendors. However, adding vendors has no effect on selling any product in the VirtueMart store, except when applying vendor-specific tax rates and shopper groups. At the moment, it also helps to identify products from different vendors. In the following sections, you will see how to add and manage vendors.

Vendor category

Like manufacturers, you can also create vendor categories. To create vendor categories, go to Vendor | Add Vendor Category. This displays the Vendor Category Form. Type the name of the category and its description, then click the Save icon in the toolbar. You can add as many categories as you want. Before trying to add vendor categories, first plan how you are going to categorize your vendors (for example, based on the products they sell or their location). Have a full category tree on hand and then start creating categories.

Adding a vendor

Once you have created the necessary vendor categories, you can proceed to adding vendors. To add vendors, click on Vendor | Add Vendor. This displays the Add Information screen.

Caution: Note that there is a warning sign at the top of the Add Information screen. It warns you about using the vendor feature, as it is still in the 'Alpha' or premature stage. Also note that we have used the Simple Layout for displaying it. If you try adding a vendor from the Extended Layout, you will open an edit screen for the existing vendor information, which you already added during the initial configuration of the shop. As of VirtueMart 1.1.2 this is a known bug, which will hopefully be fixed in future releases once the feature leaves the 'Alpha' stage.

The Add Information screen shows three tabs: Store, Store Information, and Contact Information. From the Store tab, add the vendor's store name, company name, logo, website URL, minimum purchase order value, and minimum amount for free shipping. You can also configure the currency symbol, decimal points, decimal symbol, thousands separator, positive format, and negative format. In the Store Information tab, you can add the address of the store, city, state/province/region, zip/postal code, phone, currency, and vendor category.
The vendor categories you have created earlier will be available in the Vendor Category drop-down list. In the Contact Information tab, you can set the contact details of the vendor, such as name, title, phone, fax, and email. You can also add a brief description of the vendor, which will be displayed on the vendor details page in the store. Type a brief description in the Description rich-text editing box. In the Terms of Service rich-text editing box, provide the terms of service applicable to that vendor. Once the information in all three tabs is provided, click the Save icon in the toolbar to add the vendor.


Configuring MySQL

Packt
01 Apr 2010
14 min read
Let's get started.

Setting up a fixed InnoDB tablespace

When using the InnoDB storage engine of MySQL, the data is typically not stored in a per-database or per-table directory structure, but in several dedicated files, which collectively contain the so-called tablespace. By default (when installing MySQL using the configuration wizard) InnoDB is configured to have one small file to store data in, and this file grows as needed. While this is a very flexible and economical configuration to start with, this approach also has some drawbacks: there is no reserved space for your data, so you have to rely on free disk space every time your data grows. Also, if your database grows bigger, the file will grow to a size which makes it hard to handle—a dozen files of 1 GB each are typically easier to manage than one clumsy 12 GB file. Large data files might, for example, cause problems if you try to put those files into an archive for backup or data transmission purposes. Even though the 2 GB limit is no longer present in current file systems, many compression programs still have problems dealing with large files. And finally, the constant adaptation of the file size in InnoDB's default configuration will cause a small, but real, performance hit if your database grows. The following recipe will show you how to define a fixed tablespace for your InnoDB installation, by which you can avoid these drawbacks of the InnoDB default configuration.

Getting ready

To install a fixed tablespace, you will have to consider some aspects: how much tablespace should be reserved for your database, and how to size the individual data files which together constitute the tablespace. Note that once your database completely allocates your tablespace, you will run into "table full" errors (error code 1114) when trying to add new data to your database. Additionally, you have to make sure that your current InnoDB tablespace is completely empty. Ideally, you should set up the tablespace of a freshly installed MySQL instance, in which case this prerequisite is given. To check whether any InnoDB tables exist in your database, execute the following statement and delete the given tables until the result is empty:

SELECT TABLE_SCHEMA, TABLE_NAME FROM information_schema.tables WHERE engine='InnoDB';

If your database already contains data stored in InnoDB tables that you do not want to lose, you will have to create a backup of your database and recover the data from it when you are done with the recipe. Please refer to the chapter Backing Up and Restoring MySQL Data for further information on this. And finally, you have to make sure that the InnoDB data directory (as defined by the innodb_data_home_dir variable) has sufficient free disk space to store the InnoDB data files. For the following example, we will use a fixed tablespace with a size of 500 MB and a maximum file size of 200 MB.

How to do it...

1. Open the MySQL configuration file (my.ini or my.cnf) in a text editor.
2. Identify the line starting with innodb_data_file_path in the [mysqld] section. If no such line exists, add it to the file.
3. Change the innodb_data_file_path line to read as follows:

   innodb_data_file_path=ibdata1:200M;ibdata2:200M;ibdata3:100M

4. Save the changed configuration file.
5. Shut down your database instance (if running).
6. Delete the previous InnoDB data files (typically called ibdata1, ibdata2, and so on) from the directory defined by the innodb_data_home_dir variable.
7. Delete the previous InnoDB log files (named ib_logfile0, ib_logfile1, and so on) from the directory defined by the innodb_log_group_home_dir variable. If innodb_log_group_home_dir is not configured explicitly, it defaults to the datadir directory.
8. Start your database.
9. Wait for all data and log files to be created.

Depending on the size of your tablespace and the speed of your disk system, creation of InnoDB data files can take a significant amount of time (several minutes is not an uncommon time for larger installations). During this initialization sequence, MySQL is started but it will not accept any requests.

How it works...

Steps 1 through 4—and particularly 3—cover the actual change to be made to the MySQL configuration, which is necessary to adapt the InnoDB tablespace settings. The value of the innodb_data_file_path variable consists of a list of data file definitions that are separated by semicolons. Each data file definition is constructed of a file name and a file size with a colon as a separator. The size can be expressed as a plain numeric value, which defines the size of the data file in bytes. If the numeric value has a K, M, or G postfix, the number is interpreted as Kilobytes, Megabytes, or Gigabytes respectively. The list length is not limited to the three entries of our example; if you want to split a large tablespace into relatively small files, the list can easily contain dozens of data file definitions. If your tablespace consists of more than 10 files, we propose naming the first nine files ibdata01 through ibdata09 (instead of ibdata1 and so forth; note the zero), so that the files are listed in a more consistent order when they are displayed in your file browser or command-line interface.

Step 5 is a prerequisite for the steps following it, as deletion of vital InnoDB files while the system is still running is obviously not a good idea. In step 6, old data files are deleted to prevent collision with the new files. If InnoDB detects an existing file whose size differs from the size defined in the innodb_data_file_path variable, it will not initialize successfully. Hence, this step ensures that new, properly sized files can be created during the next MySQL start. Note that deletion of the InnoDB data files is only sufficient if all InnoDB tables were deleted previously (as discussed in the Getting ready section). Alternatively, you could delete all *.frm files for InnoDB tables from the MySQL data directory, but we do not encourage this approach (clean deletion using DROP TABLE statements should be preferred over manual intervention in MySQL data directories whenever possible).

Step 7 is necessary to prevent InnoDB errors after the data files are created, as the InnoDB engine refuses to start if the log files are older than the tablespace files. With steps 8 and 9, the new settings take effect. When starting the database for the first time after changes are made to the InnoDB tablespace configuration, take a look at the MySQL error log to make sure the settings were accepted and no errors have occurred.
The MySQL error log after the first start with the new settings will look similar to this:

InnoDB: The first specified data file E:\MySQL\InnoDBTest\ibdata1 did not exist:
InnoDB: a new database to be created!
091115 21:35:56 InnoDB: Setting file E:\MySQL\InnoDBTest\ibdata1 size to 200 MB
InnoDB: Database physically writes the file full: wait...
InnoDB: Progress in MB: 100 200
...
InnoDB: Progress in MB: 100
091115 21:36:19 InnoDB: Log file .\ib_logfile0 did not exist: new to be created
InnoDB: Setting log file .\ib_logfile0 size to 24 MB
InnoDB: Database physically writes the file full: wait...
...
InnoDB: Doublewrite buffer not found: creating new
InnoDB: Doublewrite buffer created
InnoDB: Creating foreign key constraint system tables
InnoDB: Foreign key constraint system tables created
091115 21:36:22 InnoDB: Started; log sequence number 0 0
091115 21:36:22 [Note] C:\Program Files\MySQL\MySQL Server 5.1\bin\mysqld: ready for connections.
Version: '5.1.31-community-log' socket: '' port: 3306 MySQL Community Server (GPL)

There's more...

If you already use a fixed tablespace and you want to increase the available space, you can simply append additional files to your fixed tablespace by adding additional data file definitions to the current innodb_data_file_path variable setting. If you only append additional files, you do not have to empty your tablespace first; you can change the configuration and simply restart your database. Nevertheless, as with all changes to the configuration, we strongly encourage creating a backup of your database first.

Setting up an auto-extending InnoDB tablespace

The previous recipe demonstrates how to define a tablespace with a certain fixed size. While this provides maximum control and predictability, you have to block disk space based on an estimate of the maximum size required in the foreseeable future. As long as you store less data in your database than the reserved tablespace allows for, this basically means some disk space is wasted. This especially holds true if your setup does not allow for a separate file system exclusively for your MySQL instance, because then other applications compete for disk space as well. In these cases, a dynamic tablespace that starts with little space and grows as needed can be an alternative. The following recipe will show you how to achieve this.

Getting ready

When defining an auto-extending tablespace, you should first have an idea of the minimum tablespace requirements of your database, which will set the initial size of the tablespace. Furthermore, you have to decide whether you want to split your initial tablespace into files of a certain maximum size (for better file handling). If the above settings are identical to the current settings and you only want to make your tablespace grow automatically when necessary, you will be able to keep your data. Otherwise, you have to empty your current InnoDB tablespace completely (please refer to the previous recipe, Setting up a fixed InnoDB tablespace, for details). As with all major configuration changes to your database, we strongly advise you to create a backup of your data first. If you have to empty your tablespace, you can use this backup to recover your data after the changes are completed. Again, please refer to the chapter Backing Up and Restoring MySQL Data for further information on this.
As before, you have to make sure that there is enough disk space available in the innodb_data_home_dir directory—not only for the initial database size, but also for the anticipated growth of your database. The recipe also requires you to shut down your database temporarily, so you have to make sure all clients are disconnected while performing the required steps to prevent conflicting access. As the recipe demands changes to your MySQL configuration file (my.cnf or my.ini), you need write access to this file. For the following example, we will use an auto-extending tablespace with an initial size of 100 MB and a file size of 50 MB.

How to do it...

1. Open the MySQL configuration file (my.ini or my.cnf) in a text editor.
2. Identify the line starting with innodb_data_file_path in the [mysqld] section. If no such line exists, add it to the file.
3. Change the innodb_data_file_path line to read as follows:

   innodb_data_file_path=ibdata1:50M;ibdata2:50M:autoextend

   Note that no file definition except the last one may have the :autoextend option; you will run into errors otherwise.

4. Save the changed configuration file.
5. Shut down your database instance (if running).
6. Delete the previous InnoDB data files (typically called ibdata1, ibdata2, and so on) from the directory defined by the innodb_data_home_dir variable.
7. Delete the previous InnoDB log files (named ib_logfile0, ib_logfile1, and so on) from the directory defined by the innodb_log_group_home_dir variable. If innodb_log_group_home_dir is not configured explicitly, it defaults to the datadir directory.
8. Start your database.
9. Wait for all data and log files to be created. Depending on the size of your tablespace and the speed of your disk system, creation of InnoDB data files can take a significant amount of time (several minutes is not an uncommon time for larger installations). During this initialization sequence, MySQL is started but will not accept any requests.
10. When starting the database for the first time after changes to the InnoDB tablespace configuration, take a look at the MySQL error log to make sure the settings were accepted and no errors have occurred.

How it works...

The above steps are basically identical to the steps of the previous recipe, Setting up a fixed InnoDB tablespace, the only difference being the definition of the innodb_data_file_path variable. In this recipe, we create two files of 50 MB each, the last one having an additional :autoextend property. If the innodb_data_file_path variable is not set explicitly, it defaults to the value ibdata1:10M:autoextend. As data gets inserted into the database, parts of the tablespace are allocated. As soon as the 100 MB of initial tablespace is no longer sufficient, the file ibdata2 will grow to match the additional tablespace requirements. Note that the :autoextend option causes the tablespace files to be extended automatically, but they are not automatically reduced in size again if the space requirements decrease. Please refer to the Decreasing InnoDB tablespace recipe for instructions on how to free unused tablespace.

There's more...

The recipe only covers the basic aspects of auto-extending tablespaces; the following sections provide insight into some more advanced topics.

Making an existing tablespace auto-extensible

If you already have a database with live data in place and you want to change your current fixed configuration to use the auto-extension feature, you can simply add the :autoextend option to the last file definition.
Let us assume a current configuration like the following:

innodb_data_file_path=ibdata1:50M;ibdata2:50M

The respective configuration with auto-extension will look like this:

innodb_data_file_path=ibdata1:50M;ibdata2:50M:autoextend

In this case, you do not have to empty the InnoDB tablespace first; you can simply change the configuration file and restart your database, and you should be fine. As with all configuration changes, however, we strongly recommend backing up your database before editing these settings, even in this case.

Controlling the steps of tablespace extension

The amount by which the size of the auto-extending tablespace file is increased is controlled by the innodb_autoextend_increment variable. The value of this variable defines the number of Megabytes by which the tablespace is enlarged. By default, 8 MB are added to the file if the current tablespace is no longer sufficient.

Limiting the size of an auto-extending tablespace

If you want to use an auto-extending tablespace, but also want to limit the maximum size your tablespace will grow to, you can add a maximum size for the auto-extended tablespace file by using the :autoextend:max:[size] option. The [size] portion is a placeholder for a size definition using the same notation as the size description for the tablespace file itself, which means a numeric value and an optional K, M, or G modifier (for sizes in Kilo-, Mega-, and Gigabytes). As an example, if you want to have a tiny initial tablespace of 10 MB, which is extended as needed, but with an upper limit of 2 GB, you would add the following line to your MySQL configuration file:

innodb_data_file_path=ibdata1:10M:autoextend:max:2G

Note that if the maximum size is reached, you will run into errors when trying to add new data to your database.

Adding a new auto-extending data file

Imagine an auto-extending tablespace with an auto-extended file, which grew so large over time that you want to prevent the file from growing further and want to append a new auto-extending data file to the tablespace. You can do so using the following steps:

1. Shut down your database instance.
2. Look up the exact size of the auto-extended InnoDB data file (the last file in your current configuration).
3. Put the exact size into the innodb_data_file_path configuration as the tablespace file size definition (number of bytes without any K, M, or G modifier), and add a new auto-extending data file.
4. Restart your database.

As an example, if your current configuration reads ibdata1:10M:autoextend and the ibdata1 file has an actual size of 44,040,192 bytes, change the configuration to innodb_data_file_path=ibdata1:44040192;ibdata2:10M:autoextend:max:2G.
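The extension step size discussed above can also be inspected and changed at runtime, without editing the configuration file. A minimal sketch using standard MySQL statements (the 16 MB value is just an illustrative choice):

-- Check the current auto-extension step size (in Megabytes)
SHOW VARIABLES LIKE 'innodb_autoextend_increment';

-- Grow the tablespace in 16 MB steps from now on; this is a dynamic global
-- variable, so the change takes effect immediately but is lost on restart
-- unless it is also added to my.cnf/my.ini
SET GLOBAL innodb_autoextend_increment = 16;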


Drupal 7 fields/CCK: Using the file field modules

Packt
08 Jul 2011
4 min read
Adding and configuring file fields to content types

There are many cases where we need to attach files to website content. For instance, a restaurant owner might like to upload their latest menu in PDF format to their website, or a financial institution might like to upload a new product catalog so customers can download and print the catalog if they need it. The File module is built into the Drupal 7 core; it provides us with the ability to attach files to content easily, to decide the attachment display format, and also to manage file locations. Furthermore, the File module is integrated with fields and provides a file field type, so we can easily attach files to content using the already discussed field system, making the process of managing files much more streamlined.

Time for action – adding and configuring a file field to the Recipe content type

In this section, we will add a file field to the Recipe content type, which will allow files to be attached to Recipe content. Follow these steps:

1. Click on the Structure link in the administration menu at the top of the page. The following page will display a list of options. Click on the Content types link to go to the Content types administration page.
2. Since we want to add a file field to the Recipe content type, click on the manage fields link in the Recipe row. This page will display the existing fields of the Recipe content type.
3. In the Label field enter "File", and in the Field name field enter "file". In the field type select list, select File as the field type; the field widget will automatically switch to File as the field widget. After the values are entered, click on Save.
4. A new window will pop up, which provides the field settings for the file field that we are creating. There are two checkboxes, and we will enable both of them. The last radio button option will be selected by default. Then click on the Save field settings button at the bottom of the page.
5. Clicking the Save field settings button stores the values for the file field settings that we selected. After that, it will direct us to the file field settings administration page.
6. We can leave the Label field as default, as it will be filled automatically with the value we entered previously. We will also leave the Required field as default, because we do not want to force users to attach files to every recipe. In the Help text field, we can enter "Attach files to this recipe".
7. In the Allowed file extensions section, we can enter the file extensions that are allowed to be uploaded. In this case, we will enter "txt, pdf, zip".
8. In the File directory section, we can enter the name of a subdirectory that will store the uploaded files; in this case, we will enter "recipe_files".
9. In the Maximum upload size section, we can enter a value to limit the file size when uploading files. We will enter "2MB" in this field.
10. The Enable Description field checkbox allows users to enter a description of the uploaded files. In this case, we will enable this option, because we would like users to enter a description of the uploaded files.
11. In the Progress indicator section, we can select which indicator will be used when uploading files. We select Throbber as the progress indicator for this field.
The bottom part of the page is exactly the same as in the previous section, so we can ignore it and click on the Save settings button to store all the values we have entered. Drupal will direct us back to the manage fields administration page with a message saying we have successfully saved the configuration for the file field. After creating the file field, a file field row will be added to the table, displaying the details of the file field we just created.


Styling your Joomla! form using ChronoForms

Packt
27 Aug 2010
11 min read
Introduction

Styling forms is more a subject for a book on Joomla! templating, but as not all templates handle it very well, ChronoForms has some basic formatting capabilities that we will look at here. We'll look at two areas—applying CSS to change the "look and feel" of a form, and some simple layout changes that may be helpful. We'll be assuming here that you have some knowledge of both CSS and HTML.

Using ChronoForms default style

ChronoForms recognizes that many Joomla! templates are not strong in their provision of form styling, so it offers some default styling that you can apply (or not) and edit to suit your needs.

Getting ready

It might be helpful to have a form to look at. Try creating a test form using the ChronoForms Wizard to add "one of each" of the main inputs to a new form, and then save it.

How to do it...

Each of the steps here describes a different way to style your forms. You can choose the one (or more) that best meets your needs.

When you create a form with the Wizard, ChronoForms does three things:

- Adds some <div> tags to the form HTML to give it basic structure
- Adds classes to the <div> tags and to the input tags to allow CSS styling
- Loads some default CSS that uses the classes to give the form a presentable layout

If you look at the Form HTML created by the Wizard, you will see something like this (a basic text input):

<div class="form_item">
  <div class="form_element cf_textbox">
    <label class="cf_label" style="width: 150px;">Click Me to Edit</label>
    <input class="cf_inputbox" maxlength="150" size="30" title="" id="text_2" name="text_2" type="text" />
  </div>
  <div class="cfclear">&nbsp;</div>
</div>

This example uses the default values from the Wizard; the label text, size, and name may have been changed in the Wizard Properties box. There is a wrapper <div> with a class of form_item. Then, there is a second wrapper around the <label> and <input> tags with two classes—form_element and cf_textbox. There are the <label> and <input> tags themselves, with classes of cf_label and cf_inputbox respectively. And lastly there is an "empty" <div> with a class of cfclear that is used to end any CSS floats used in styling the previous tags. The coding for other types of input is very similar; usually the only difference is the class of the input tag and of the <div> tag wrapped around the label and the input. There is nothing very special about any of this; it provides a basic framework for styling.

You can't change the default styling used by the Wizard, but you can use your own HTML or edit the Form HTML created by the Wizard. If you change the class names or override the ChronoForms CSS styling with your own styles, then the ChronoForms CSS will no longer apply.

To see the effect of the ChronoForms CSS, open the form in the Form Editor, go to the General tab, open Core/View Form Settings, and change Load Chronoforms CSS/JS Files to No. Save the form and refresh the front-end view. Without the ChronoForms CSS styling loaded, the form is not so pretty, but still fully functional.

Note: If you create your form in the Form Editor rather than the Wizard, the default setting for Load Chronoforms CSS/JS Files is No, so you need to turn it on if you want to use the default styling.
See also

W3Schools CSS tutorials and references at http://www.w3schools.com/css/default.asp provide a useful online introduction to CSS.

Switching styles with "Transform Form"

The ChronoForms default styling doesn't always suit, so ChronoForms provides a basic form theming capability. There are only two themes provided—"default" and "theme1".

Getting ready

We're using the same form as in the previous recipe.

How to do it...

In the Forms Manager, check the box next to your form name and then click the Transform Form icon in the toolbar. You will see a warning that using Transform Form will overwrite any manual changes to the Form HTML, and two form images—one for the "default" theme and one for "theme1". There's a radio button under each theme, and Preview and Transform & Save buttons at the bottom left. The Preview button allows you to see your form with the theme applied. This will not overwrite manual changes; Transform & Save will!

Warning: Using Transform & Save will recreate the Form HTML from the version that ChronoForms has saved in the database table. Any manual changes that you have made to the Form HTML will be lost.

Applying "theme1" changes the Form HTML structure significantly. Select the "theme1" radio button and click the Preview button to see the result. You can't see this from the preview screen, but here's what the text input block now looks like:

<div class="cf_item">
  <h3 class="cf_title" style="width: 150px;">Click Me to Edit</h3>
  <div class="cf_fields">
    <input name="text_2" type="text" value="" title="" class="cf_inputtext cf_inputbox" maxlength="150" size="30" id="text_2" />
    <br />
    <label class="cf_botLabel"></label>
  </div>
</div>

The wrapping <div> tags and the input are still the same; the old label is now an <h3> tag, and there's a new <label> after the input with a cf_botLabel class. The <div> with the cfclear class has gone. This theme may work better with forms that need narrower layouts, or where the cfclear <div> tags cause large breaks in the form layout. Neither theme creates a very accessible form layout, and "theme1" is rather less accessible than the "default" theme. If this is important for you, then you can create your own form theme.

How it works...

A ChronoForms theme has two parts—a PHP file that defines the form elements and a CSS file that sets the styling. Transform Form gets the "Wizard" version of your form that is saved in the database and regenerates the form HTML using the element structures from the PHP file. When the form is loaded, the theme CSS file will be loaded instead of the default ChronoForms CSS.

See also

The article "Accessible Forms using WCAG 2.0" (http://www.usability.com.au/resources/wcag2/) is a practical introduction to the topic of web form accessibility.

Adding your own CSS styling

Many users will want to add their own styling to their forms. This is a short guide about ways to do that; it's not a guide to creating the CSS. To add your own form CSS, you will need a working knowledge of HTML and CSS.

Getting ready

You need nothing to follow the recipe, but when you come to try it out, you'll need CSS and a form or two.

How to do it...

Adding CSS directly in the Form HTML: The quickest and least desirable way of styling is to add CSS directly to the Form HTML. The HTML is accessible on the Form Code tab in the Form Editor, and you can type directly into the text area.
For example:

<input name="text_2" type="text" value="" title="" class="cf_inputtext cf_inputbox" maxlength="150" size="30" id="text_2" style="border: 1px solid blue;" />

The only time you might need this approach is to mark one or two inputs in some special way. Even then, it might be better to use a class and define the style outside the Form HTML.

Using the Form CSS styles box: On the Form Code tab, ChronoForms has a CSS Styles box, which is opened by clicking the [+/-] link beside CSS Styles. You can add valid CSS definitions in this box (without <style> or </style> tags) and the CSS will be included in the page when it is loaded. For example, you could put this definition into the box:

.cf_inputbox { border: 1px solid blue; }

This will add the following snippet to the page. If you look at the page source for your form in the front-end, you'll find it correctly loaded inside the <head> section:

<style type="text/css">
.cf_inputbox { border: 1px solid blue; }
</style>

Editing the ChronoForms default CSS: If you have Load Chronoforms CSS/JS Files? set to Yes, then ChronoForms will apply one of its themes—the default one, unless you have picked another. The theme CSS files that are used in the front-end are in the components/com_chronocontact/themes/{theme_name}/css/ folder. Usually there are three files in the folder: style1-ie6.css is loaded if the browser detected is IE6; style1-ie7.css is loaded as well for IE7 or IE8; style1.css is loaded for other browsers. If you edit the ChronoForms CSS, you may need to edit all three files.

Note: The themes are duplicated in the Administrator part of ChronoForms, but those files are used in the Transform Form page only.

Editing the template CSS: If you want to apply styling more broadly across your site, then you may want to integrate the form CSS with your template stylesheets. This is entirely possible; the only thing to make sure of is that the classes in your Form HTML are reflected in the template CSS. You can either manually edit the Form HTML or add the ChronoForms classes to your template stylesheets. Note that this is a much better approach than editing the ChronoForms theme CSS files: upgrading ChronoForms could well overwrite the theme files, but if you have the styles in your template's stylesheets, this is not a problem.

Creating a new ChronoForms theme is a better solution than editing the default themes, as it is protected against overwriting and allows you to change the layout of the HTML elements in the form. The simplest way to do this is to copy one of the existing theme folders, rename the copy, and edit the files in the new folder. The CSS files are straightforward, but the elements.php file needs a little explanation. If you open the file in an editor, you will find a series of code blocks that define the way in which ChronoForms will structure each of the form elements in the Wizard. Here is an example of a text input:

<!--start_cf_textbox-->
<div class="form_item">
  <div class="form_element cf_textbox">
    <label class="cf_label"{cf_labeloptions}>{cf_labeltext}</label>
    <input class="{cf_class}" maxlength="{cf_maxlength}" size="{cf_size}" title="{cf_title}" id="{cf_id}" name="{cf_name}" type="{cf_type}" />
    {cf_tooltip}
  </div>
  <div class="cfclear">&nbsp;</div>
</div>
<!--end_cf_textbox-->

The comment lines at the beginning and end mark out this element and must be left intact.
Between the comment lines you may add any valid HTML body tags that you like, except that the text input element must still include <input type='text' . . . /> and so on. The entries in curly brackets, for example {cf_labeltext}, will be replaced by the corresponding values from the Properties box for this element in the Form Wizard. If they appear, they must be exactly the same as the entries in the ChronoForms default theme. Most of the time you will not need to create a new theme, but if you are building Joomla! applications, this provides a very flexible way of letting users create forms with a predetermined structure and style. Note that if you create a new theme, you need to ensure that the files are the same in both theme folders (administrator/components/com_chronocontact/themes/ and components/com_chronocontact/themes/). Maybe a future version of ChronoForms will remove the duplication.
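For example, if you wanted a variant theme whose labels sit above their inputs, you could copy the default theme folder, rename it, and rework the cf_textbox block along these lines. This is only a sketch: the cf_toplabel class is an invented name for illustration, while the curly-bracket placeholders are exactly the ones from the default theme, as they must be:

<!--start_cf_textbox-->
<div class="form_item">
  <div class="form_element cf_textbox">
    <!-- Label placed before a line break so it renders above the input -->
    <label class="cf_label cf_toplabel"{cf_labeloptions}>{cf_labeltext}</label>
    <br />
    <input class="{cf_class}" maxlength="{cf_maxlength}" size="{cf_size}"
      title="{cf_title}" id="{cf_id}" name="{cf_name}" type="{cf_type}" />
    {cf_tooltip}
  </div>
  <div class="cfclear">&nbsp;</div>
</div>
<!--end_cf_textbox-->

A matching rule such as .cf_toplabel { display: block; font-weight: bold; } in the new theme's CSS file would complete the effect.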
Drupal 7: Customizing an Existing Theme

Packt
15 Jul 2011
9 min read
Drupal 7 Themes Create new themes for your Drupal 7 site with a clean layout and powerful CSS styling

With the arrival of Drupal 6, sub-theming really came to the forefront of theme design. While previously many people copied themes and then re-worked them to achieve their goals, that process became less attractive as sub-themes came into favor. This article focuses on sub-theming and how it should be used to customize an existing theme. We'll start by looking at how to set up a workspace for Drupal theming.

Setting up the workspace
Before you get too far into attempting to modify your theme files, you should put some thought into your tools. There are several software applications that can make your work modifying themes more efficient. Though no specific tools are required to work with Drupal themes (you could do it all with just a text editor), there are a couple of applications that you might want to consider adding to your tool kit. The first item to consider is browser selection. Firefox has a variety of extensions that make working with themes easier. The Web Developer extension, for example, is hugely helpful when dealing with CSS and related issues. We recommend the combination of Firefox and the Web Developer extension to anyone working with Drupal themes. Another extension popular with many developers is Firebug, which is very similar to the Web Developer extension, and is indeed more powerful in several regards. Pick up Web Developer, Firebug, and other popular Firefox add-ons at https://addons.mozilla.org/en-US/firefox/. There are also certain utilities you can add into your Drupal installation that will assist with theming the site. Two modules you definitely will want to install are Devel and Theme Developer. Theme Developer can save you untold hours of digging around trying to find the right function or template. When the module is active, all you need to do is click on an element and the Theme Developer pop-up window will show you what is generating the element, along with other useful information like potential template suggestions. The Devel module performs a number of functions and is a prerequisite for running Theme Developer. Download Devel from: http://drupal.org/project/devel. You can find Theme Developer at: http://drupal.org/project/devel_themer. Note that neither Devel nor Theme Developer is suitable for use in a production environment—you don't want these installed and enabled on a client's public site, as they can present a security risk. When it comes to working with PHP files and the various theme files, you will also need a good code editor. There's a whole world of options out there, and the right choice for you is really a personal decision. Suffice it to say: as long as you are comfortable with it, it's probably the right choice.

Setting up a local development server
Another key component of your workspace is the ability to preview your work—preferably locally. As a practical matter, previewing Drupal themes requires the use of a server; themes are difficult to preview with any accuracy without a server to execute the PHP code. While you can work on a remote server on your webhost, often this is undesirable due to latency or simple lack of availability. A quick solution to this problem is to set up a local server using something like the XAMPP package (or the MAMP package for Mac OSX). XAMPP provides a one-step installer containing everything you need to set up a server environment on your local machine (Apache, MySQL, PHP, phpMyAdmin, and more).
Visit http://www.ApacheFriends.org to download XAMPP, and you can have your own dev server set up on your local machine in no time at all. Follow these steps to acquire the XAMPP installation package and get it set up on your local machine:

1. Connect to the Internet and direct your browser to http://www.apachefriends.org.
2. Select XAMPP from the main menu.
3. Click the link labeled XAMPP for Windows.
4. Click the .zip option under the heading XAMPP for Windows. Note that you will be re-directed to the SourceForge site for the actual download.
5. When the pop-up prompts you to save the file, click OK and the installer will download to your computer.
6. Locate the downloaded archive (.zip) package on your local machine, and double-click it.
7. Double-click the extracted file to start the installer.
8. Follow the steps in the installer and then click Finish to close the installer.

That's all there is to it. You now have all the elements you need for your own local development server. To begin, simply open the XAMPP application and you will see buttons that allow you to start the servers. To create a new website, simply copy the files into a directory placed inside the /htdocs directory. You can then access your new site by opening the URL in your browser, as follows: http://localhost/sitedirectoryname. As a final note, you may also want to have access to a graphics program to handle editing any image files that might be part of your theme. Again, there is a world of options out there and the right choice is up to you.

Planning the modifications
A proper dissertation on site planning and usability is beyond the scope of this article. Similarly, this article is neither an HTML nor a CSS tutorial; accordingly, in this article we are going to focus on identifying the issues and delineating the process involved in the customization of an existing theme, rather than focusing on design techniques or coding-specific changes. Any time you set off down the path of transforming an existing theme into something new, you need to spend some time planning. The principle here is the same as in many other areas: a little time spent planning at the front end of a project can pay off big in savings later. When it comes to planning your theming efforts, the very first question you have to answer is whether you are going to customize an existing theme or whether you will create a new theme. In either event, it is recommended that you work with sub-themes. The key difference is the nature of the base theme you select, that is, the theme you are going to choose as your starting point. In sub-theming, the base theme is the starting point. Sub-themes inherit the parent theme's resources; hence, the base theme you select will shape your theme building. Some base themes are extremely simple, designed to impose on the themer the fewest restrictions; others are designed to give you the widest range of resources to assist your efforts. However, since you can use any theme for a base theme, the reality is that most themes fall in between, at least in terms of their suitability for use as a base theme.
When it comes to customizing an existing theme, the reality is often that the selection of the base theme will be dictated by the theme's default appearance and feature set; in other words, you are likely to select the theme that is already the closest to what you want. That said, don't limit yourself to a shallow surface examination of the theme. In order to make the best decision, you need to look carefully at the underlying theme's files and structures and see if it truly is the best choice. While the original theme may be fairly close to what you want, it may also have limitations that require work to overcome. Sometimes it is actually faster to start with a more generic theme that you already know and can work with easily. Learning someone else's code is always a bit of a chore, and themes are like any other code—some are great, some are poor, most are simply okay. A best practices theme makes your life easier. In simplest terms, the process of customizing an existing theme can be broken into three steps:

1. Select your base theme.
2. Create a sub-theme from your base theme.
3. Make the changes to your new sub-theme.

Why is it not recommended to simply modify the theme directly? There are two reasons. First, best practices say not to touch the original files; leave them intact so you can upgrade them without losing customizations. Second, as a matter of theming philosophy, it's better to leave the things you don't need to change in the base theme and focus your sub-theme on only the things you want to change. This approach to theming is more manageable and makes for much easier testing as you go.

Selecting a base theme
For the sake of simplicity, in this article, we are going to work with the default Bartik theme. We'll take Bartik, create a new sub-theme, and then modify the sub-theme to create the customized theme. Let's call the new theme "JeanB". Note that while we've named the theme "JeanB", when it comes to naming the theme's directory, we will use "jeanb", as the system only supports lowercase letters and underscores. In order to make the example easier to follow and to avoid the need to install a variety of third-party extensions, the modifications we will make in this article will be done using only the default components. Arguably, when you are building a site like this for deployment in the real world (rather than simply for skills development) you might wish to consider implementing one or more specialized third-party extensions to handle certain tasks.
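To make the sub-theme relationship concrete, here is a minimal sketch of the .info file such a sub-theme might start from. The key line is base theme = bartik; the stylesheet path is an assumption for illustration, not a requirement:

; jeanb.info — a minimal Drupal 7 sub-theme definition (sketch)
name = JeanB
description = A sub-theme of Bartik used to customize the default look.
core = 7.x
base theme = bartik

; Override or add stylesheets; paths are relative to the sub-theme folder.
stylesheets[all][] = css/jeanb.css

Placed in a folder such as sites/all/themes/jeanb/, this is enough for the theme to appear on the Appearance page; everything not overridden here is inherited from Bartik.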
Customizing the Menus Menu in Joomla!

Packt
12 Oct 2011
8 min read
The Top Menu is a horizontal menu; the other menus are vertical. Each menu is coupled with a so-called module, which is administered in the module manager.

Menus
By clicking on this menu item, you get an overview of the available menus. You can also access the content of these menus by means of the menu bar—Menus | Main Menu, Top Menu, or by clicking the respective menu link in the overview. This Menu Manager serves as an overview and shows you the number of Published and Unpublished menu items, the number of menu items that are in the Trash can, and the respective menu ID. In this section you can, for instance, copy a menu or create a new one.

Customizing an Existing Menu
Experiment a little with the menus to get a feel for things. The following edit steps are the same for all the menus. Go to the menu item Menus | Main Menu. You will see a listing of the menu items that turn up in the mainmenu. Several functions can be executed in the table with a simple mouse click. By clicking on the checkmark, you can enable or disable a menu link. You can change the order of the items by clicking on the triangles or by typing numbers into the fields under Order. If you use the numbers method, you have to click on the disk symbol in the header in order to make the change effective. In the Access Level column, you can decide with a mouse click whether the menu is available to all users (Public), only to registered users (Registered), or only to a particular circle of users (Special). The menu items are then displayed or hidden depending on the user's rights.

Menus Icon
If you click on this icon, you are taken to the menu overview screen.

Default Icon
The menu item that is marked as default here with a star is displayed as the start page when someone calls up the URL of your website. At the moment this is the menu item Home, but you can designate any element that you want as the start page. Just mark the checkbox and click on the Default icon.

Publish/Unpublish Icon
The status of a content element can either be published (activated) or unpublished (deactivated). You can toggle this status individually by clicking the green checkmark and/or the red cross, or by marking the checkbox and subsequently clicking on the appropriate icon. If you follow the latter method, you can toggle several menu items at the same time.

Move Icon
This icon is for moving menu entries. Let's move the text More about Joomla! into the top menu. Select the respective menu element or even several menu elements and click the Move icon. This opens a form, listing the available menus. On the right you will see the elements that you want to move: Select the menu into which you would like to move the marked menu items. Here, we have moved More about Joomla! from Main Menu into the Top Menu. You can admire the results in the front end.

Copy Icon
You can also copy menu items. To do that, select one or more menu items and click on the Copy icon. Just as with moving, a form with the available menus opens. Select the menu into which you want to copy the marked menu entries.

Trash Icon
To protect you from inadvertently deleting items, you cannot delete them immediately when editing; you can only throw them into the trash. To do so, select one or several menu elements and click on the Trash icon. The marked menu items are then dumped into the trash can. You can display the content of the trash can by clicking on Menus | Menu Trash.
Edit Icon (Edit Menu Items)
Here you can modify an existing menu item, for instance the Web Links. After clicking on the name Web Links you will see the edit form for menu elements. The form is divided into three parts:

- Menu Item Type
- Menu Item Details
- Parameters

Menu Item Type
Every menu item is of a particular type. We will get into greater detail when we create new menus. For instance, a menu item can refer to an installed Joomla! component, a content element, a link to an external website, or many other things. You can see what the type of the link is in this section; in our case it is a link to the Joomla! weblinks component, and you can also see a button with the label Change Type. If you click on that button, you get the following screen: This manager is new in Joomla! version 1.5 and really handy. In version 1.0.x there was no option to change the type of a menu item. You had to delete the old menu item and create a new one. Now you can change the display to a single category or to a link-suggestion menu item, with which you invite other users to suggest links. Now close this; we will get back to it when we create a new menu.

Menu Item Details
It contains the following options:

- ID: Everything in an administration requires an ID number, and so does our menu item. In this case the menu item has the ID number 48. Joomla! assigns this number for internal administration purposes at the time the item is created. This number cannot be changed.
- Title: This is the name of the menu item, and it will be displayed that way on your website.
- Alias: This is the name of the search-engine friendly URL after the domain name. When this is enabled, the URL for this menu will look as follows: http://localhost/joomla150/web-links
- Link: This is the request for a component, in other words also the part of the URL after the domain name with which you call up your website. In this case it is index.php?option=com_weblinks&view=categories
- Display in: With this you can change the place where the item is displayed; in other words, you can move it to another menu. The options field presents you with a list of the available menus.
- Parent Item: Of course menus can also contain nested, tree-like items. Top means that the item is at the uppermost level. The rest of the items represent existing menu items. If, for instance, you classify and save Web Links under The News, the display on the item list and the display on your website are changed. The following figures show the change. The menu item Web Links has now moved into The News on your website. So you have to first click on The News in order to see the Web Links item. Your website can easily and effectively be structured like a database tree in this manner.
- Published: With this you can publish a menu item.
- Order: From the options list, you can select after which link you want to position this link.
- Access Level: You can restrict which users can see this item.
- On Click, Open in: A very handy option that influences the behavior of the link. The page is either opened in the existing window or in a new browser window after clicking. You can also define whether the new window will be displayed with or without browser navigation.

Parameters
The possible parameters of a menu item depend on the type of the item. A simple link, of course, has fewer parameters than a configurable list or, for example, the front page link. In this case we have a link to the categories. The number and type of parameters depend on the type of the menu item.
You can open and collapse the parameter fields by clicking on the header. If the parameter fields are open, the arrow next to the header points down.

Parameters–Basic
The basic parameters are the same for all menu links.

- Image: Here you can specify an image that must be in the root directory of the media manager (/images/stories/). Depending on the template, this picture is displayed on the left, next to the menu item.
- Image Align: You can decide whether the image should be aligned left or right.
- Show a Feed Link: It is possible to create an RSS feed for every list display in Joomla! 1.5. This could be desirable or undesirable depending on the content of the list. If this option is enabled, an RSS feed link containing the list items is made available in the browser for list displays.
Working with Live Data and AngularJS

Packt
12 Jun 2014
14 min read
(For more resources related to this topic, see here.) Big Data is a new field that is growing every day. HTML5 and JavaScript applications are being used to showcase these large volumes of data in many new, interesting ways. Some of the latest client implementations are being accomplished with libraries such as AngularJS. This is because of its ability to efficiently handle and organize data in many forms. Making business-level decisions based on real-time data is a revolutionary concept. Humans have only been able to fathom metrics based on large-scale systems, in real time, for the last decade at most. During this time, the technology to collect large amounts of data has grown tremendously, but the high-level applications that use this data are only just catching up. Anyone can collect large amounts of data with today's complex distributed systems. Displaying this data in different formats that allow any level of user to digest and understand its meaning is currently the main portion of what the leading-edge technology is trying to accomplish. There are many different formats that raw data can be displayed in. The trick is to figure out the most efficient ways to showcase patterns and trends, which allow for more accurate business-level decisions to be made. We live in a fast-paced world where everyone wants something done in real time. Load times must be in milliseconds, new features are requested daily, and deadlines get shorter and shorter. The Web gives companies the ability to generate revenue from a completely new market, and AngularJS is on the leading edge. This new market creates many new requirements for HTML5 applications. JavaScript applications are becoming commonplace in major companies. These companies are using JavaScript to showcase many different types of data, from inward- to outward-facing products. Working with live data sets in client-side applications is a common practice and is the real-world standard. Most of the applications today use some type of live data to accomplish a given set of tasks. These tasks rely on this data to render views that the user can visualize and interact with. There are many advantages of working with the Web for data visualization, and we are going to showcase how these tie into an AngularJS application. AngularJS offers different methods to accomplish a view that is in charge of elegantly displaying large amounts of data in very flexible and snappy formats. Some of these different methods feed directives data that has been requested and resolved, while others allow the directive to maintain control of the requests. We will go over these different techniques of how to efficiently get live data into the view layer by creating different real-world examples. We will also go over how to properly test directives that rely on live data to achieve their view successfully.

Techniques that drive directives
Most standard data requirements for a modern application involve an entire view that depends on a set of data. This data should be dependent on the current state of the application. The state can be determined in different ways. A common tactic is to build URLs that replicate a snapshot of the application's state. This can be done with a combination of URL paths and parameters. URL paths and parameters are what you will commonly see change when you visit a website and start clicking around. An AngularJS application is made up of different route configurations that use the URL to determine which action to take.
Each configuration will have an associated controller, template, and other forms of options. These configurations work in unison to get data into the application in the most efficient ways. AngularUI also offers its own routing system. This UI-Router is a simple system built on complex concepts, which allows nested views to be controlled by different state options. This concept yields the same result as ngRoute, which is to get data into the controller; however, UI-Router does it in a more eloquent way, which creates more options. AngularJS 2.0 will contain a hybrid router that utilizes the best of each. Once the controller gets the data, it feeds the retrieved data to the template views. The template is what holds the directives that are created to perform the view layer functionality. The controller feeds directives data, which forces the directives to rely on the controllers to be in charge of that data. This data can either be fed immediately after the route configurations are executed, or the application can wait for the data to be resolved. AngularJS offers you the ability to make sure that data requests have been successfully accomplished before any controller logic is executed. The method is called resolving data, and it is utilized by adding resolve functions to the route configurations. This allows you to write the business logic in the controller in a synchronous manner, without having to write callbacks, which can be counter-intuitive. The XHR extensions of AngularJS are built using promise objects. These promise objects are basically a way to ensure that data has been successfully retrieved or to verify whether an error has occurred. Since JavaScript embraces callbacks at the core, there are many points of failure with respect to timing issues of when data is ready to be worked with. This is where libraries such as the Q library come into play. The promise object allows the execution thread to resemble a more synchronous flow, which reduces complexity and increases readability.

The $q library
The $q factory is a lite instantiation of the formally accepted Q library (https://github.com/kriskowal/q). This lite package contains only the functions that are needed to defer JavaScript callbacks asynchronously, based on the specifications provided by the Q library. The benefits of using this object are immense when working with live data. Basically, the $q library allows a JavaScript application to mimic synchronous behavior when dealing with asynchronous data requests or methods that are not thread blocked by nature. This means that we can now successfully write our application's logic in a way that follows a synchronous flow. ES6 (ECMAScript 6) incorporates promises at its core. This will eventually alleviate the need for many functions inside the $q library, or for the entire library itself, in AngularJS 2.0. The core AngularJS service that is related to CRUD operations is called $http. This service uses the $q library internally to allow the powers of promises to be used anywhere a data request is made. Here is an example of a service that uses the $q object in order to create an easy way to resolve data in a controller. Refer to the following code:

this.getPhones = function() {
  var request = $http.get('phones.json'),
      promise;
  promise = request.then(function(response) {
    return response.data;
  }, function(errorResponse) {
    return errorResponse;
  });
  return promise;
};

Here, we can see that the phoneService function uses the $http service to request all the phones.
The phoneService function creates a new request object that calls a then function that returns a promise object. This promise object is returned synchronously. Once the data is ready, the then function is called and the correct data response is returned. This service is best showcased when used in conjunction with a resolve function that feeds data into a controller. The resolve function will accept the promise object being returned and will only allow the controller to be executed once all of the phones have been resolved or rejected. The rest of the code that is needed for this example is the application's configuration code. The config process is executed on the initialization of the application. This is where the resolve function is supposed to be implemented. Refer to the following code:

var app = angular.module('angularjs-promise-example', ['ngRoute']);

app.config(function($routeProvider) {
  $routeProvider.when('/', {
    controller: 'PhoneListCtrl',
    templateUrl: 'phoneList.tpl.html',
    resolve: {
      phones: function(phoneService) {
        return phoneService.getPhones();
      }
    }
  }).otherwise({
    redirectTo: '/'
  });
});

app.controller('PhoneListCtrl', function($scope, phones) {
  $scope.phones = phones;
});

A live example of this basic application can be found at http://plnkr.co/edit/f4ZDCyOcud5WSEe9L0GO?p=preview. Directives take over once the controller executes its initial context. This is where the $compile function goes through all of its stages and links directives to the controller's template. The controller will still be in charge of driving the data that is sitting inside the template view. This is why it is important for directives to know what to do when their data changes.

How should data be watched for changes?
Most directives are on a need-to-know basis about the details of how they receive the data that is in charge of their view. This is a separation of logic that reduces cyclomatic complexity in an application. The controllers should be in charge of requesting data and passing this data to directives, through their associated $scope object. Directives should be in charge of creating DOM based on what data they receive and when the data changes. There are an infinite number of possibilities that a directive can try to achieve once it receives its data. Our goal is to showcase how to watch live data for changes and how to make sure that this works at scale so that our directives have the opportunity to fulfill their specific tasks. There are three built-in ways to watch data in AngularJS. Directives use the following methods to carry out specific tasks based on the different conditions set in the source of the program:

- Watching an object's identity for changes
- Recursively watching all of the object's properties for changes
- Watching just the top level of an object's properties for changes

Each of these methods has its own specific purpose. The first method can be used if the variable that is being watched is a primitive type. The second method is used for deep comparisons between objects. The third is used to do a shallow watch on an array of any type or just on a normal object. Let's look at an example that shows the last two watcher types. This example is going to use jsPerf to showcase our logic. We are leaving the first watcher out because it only watches primitive types and we will be watching many objects for different levels of equality.
This example sets the data on $rootScope in the app's run function because we want to make sure that the jsPerf test resets each data set upon initialization. Refer to the following code:

app.run(function($rootScope) {
  $rootScope.data = [
    {'bob': true}, {'frank': false}, {'jerry': 'hey'}, {'bargle': false},
    {'bob': true}, {'bob': true}, {'frank': false}, {'jerry': 'hey'},
    {'bargle': false}, {'bob': true}, {'bob': true}, {'frank': false}
  ];
});

This run function sets up the data object that we will watch for changes. This will be constant throughout every test we run and will reset back to this form at the beginning of each test.

Doing a deep watch on $rootScope.data
This watch function will do a deep watch on the data object. The true flag is the key to setting off a deep watch. The purpose of a deep comparison is to go through every object property and compare it for changes on every digest. This is an expensive function and should be used only when necessary. Refer to the following code:

app.service('Watch', function($rootScope) {
  return {
    run: function() {
      $rootScope.$watch('data', function(newVal, oldVal) {
      }, true);
      // The digest is here because of the jsPerf test. We are using this
      // run function to mimic a real environment.
      $rootScope.$digest();
    }
  };
});

Doing a shallow watch on $rootScope.data
The shallow watch is called whenever a top-level object is changed in the data object. This is less expensive because the application does not have to traverse n levels of data. Refer to the following code:

app.service('WatchCollection', function($rootScope) {
  return {
    run: function() {
      $rootScope.$watchCollection('data', function(n, o) {
      });
      $rootScope.$digest();
    }
  };
});

During each individual test, we get each watcher service and call its run function. This fires the watcher on initialization, and then we push another test object to the data array, which fires the watch's trigger function again. That is the end of the test. We are using jsperf.com to show the results. Note that the watchCollection function is much faster and should be used in cases where it is acceptable to shallow watch an object. The example can be found at http://jsperf.com/watchcollection-vs-watch/5. Refer to the following screenshot: This test implies that the watchCollection function is a better choice to watch an array of objects that can be shallow watched for changes. This test is also true for an array of strings, integers, or floats. This brings up more interesting points, such as the following:

- Does our directive depend on a deep watch of the data?
- Do we want to use the $watch function, even though it is slow and memory taxing?
- Is it possible to use the $watch function if we are using large data objects?

The directives that have been used in this book have used the watch function to watch data directly, but there are other methods to update the view if our directives depend on deep watchers and very large data sets.

Directives can be in charge
There are some libraries that believe that elements can be in charge of when they should request data. Polymer (http://www.polymer-project.org/) is a JavaScript library that allows DOM elements to control how data is requested, in a declarative format. This is a slight shift from the processes that have been covered so far in this article, when thinking about what directives are meant for and how they should receive data. Let's come up with an actual use case that could possibly allow this type of behavior. Let's consider a page that has many widgets on it.
A widget is a directive that needs a set of large data objects to render its view. To be more specific, let's say we want to show a catalog of phones. Each phone has a very large amount of data associated with it, and we want to display this data in a very clean, simple way. Since watching large data sets can be very expensive, what will allow directives to always have the data they require, depending on the state of the application? One option is to not use the controller to resolve the Big Data and inject it into a directive, but rather to use the controller to request directive configurations that tell the directive to request certain data objects. Some people would say this goes against normal conventions, but I say it's necessary when dealing with many widgets in the same view, which individually deal with large amounts of data. This method of using directives to determine when data requests should be made is only suggested if many widgets on a page depend on large data sets. To create this in a real-life example, let's take the phoneService function, which was created earlier, and add a new method to it called getPhone. Refer to the following code:

this.getPhone = function(config) {
  return $http.get(config.url);
};

Now, instead of requesting all the details on the initial call, the original getPhones method only needs to return phone objects with a name and id value. This will allow the application to request the details on demand. To do this, we do not need to alter the getPhones method that was created earlier. We only need to alter the data that is supplied when the request is made. It should be noted that any directive that is requesting data should be tested to prove that it is requesting the correct data at the right time.

Testing directives that control data
Since the controller is usually in charge of how data is incorporated into the view, many directives do not have to be coupled with logic related to how that data is retrieved. Keeping things separate is always good and is encouraged, but in some cases, it is necessary that directives and XHR logic be used together. When these use cases reveal themselves in production, it is important to test them properly. The tests in the book use two very generic steps to prove business logic. These steps are as follows:

1. Create, compile, and link DOM to the AngularJS digest cycle.
2. Test scope variables and DOM interactions for correct outputs.

Now, we will add one more step to the process. This step will lie in the middle of the two steps. The new step is as follows: make sure all data communication is fired correctly. AngularJS makes it very simple to allow additional resource-related logic. This is because AngularJS has a built-in backend service mock, which allows many different ways to create fake endpoints that return structured data. The service is called $httpBackend.
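As a sketch of that middle step, a Jasmine test using the ngMock version of $httpBackend might look like the following. The phone-list directive and its rendered output are hypothetical stand-ins; the $httpBackend calls themselves (expectGET, respond, flush, and the verify functions) are the standard ngMock API:

describe('phone list requests', function() {
  var $httpBackend, $rootScope, $compile;

  beforeEach(module('angularjs-promise-example'));

  beforeEach(inject(function(_$httpBackend_, _$rootScope_, _$compile_) {
    $httpBackend = _$httpBackend_;
    $rootScope = _$rootScope_;
    $compile = _$compile_;
  }));

  it('fires the expected GET when the directive links', function() {
    // Fail the test if anything other than this request is made.
    $httpBackend.expectGET('phones.json').respond([{id: 1, name: 'Nexus'}]);

    // Step 1: create, compile, and link DOM to the digest cycle.
    // (Assumes a phone-list directive exists in this module.)
    var element = $compile('<phone-list></phone-list>')($rootScope);
    $rootScope.$digest();

    // New middle step: make sure all data communication fired correctly.
    $httpBackend.flush();

    // Step 2: test scope variables and DOM for correct outputs.
    expect(element.text()).toContain('Nexus');
  });

  afterEach(function() {
    $httpBackend.verifyNoOutstandingExpectation();
    $httpBackend.verifyNoOutstandingRequest();
  });
});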
Creating an AutoCAD command

Packt
10 Oct 2013
5 min read
Some custom AutoCAD applications are designed to run unattended, such as when a drawing loads or in reaction to some other event that occurs in your AutoCAD drawing session. But the majority of your AutoCAD programming work will likely involve custom AutoCAD commands, whether automating a sequence of built-in AutoCAD commands or implementing new functionality to address a business need. Commands can be simple (printing to the command window or a dialog box) or more complex (generating a new design on-the-fly, based on data stored in an existing design). Our first custom command will be somewhat simple. We will define a command which will count the number of AutoCAD entities found in ModelSpace (the space in AutoCAD where you model your designs). Then, we will display that data in a dialog box. Frequently, custom commands acquire information about an object in AutoCAD (or summarize a collection of user input), and then present that information to the user, either for the purpose of reporting data or so the user can make an informed choice or selection based upon the data being presented.

Using Netload to load our command class
You may be wondering at this point, "How do we load and run our plugin?" I'm glad you asked! To load the plugin, enter the native AutoCAD command NETLOAD. When the dialog box appears, navigate to the DLL file, MyAcadCSharpPlugin1.dll, select it, and click on OK. Our custom command will now be available in the AutoCAD session. At the command prompt, enter COUNTENTS to execute the command.

Getting ready
In our initial project, we have a class MyCommands, which was generated by the AutoCAD 2014 .NET Wizard. This class contains stubs for four types of AutoCAD command structures: a basic command, a command with pickfirst selection, a session command, and a lisp function. For this plugin, we will create a basic command, CountEnts, using the stub for the Modal command.

How to do it...
Let's walk through the code we will need in order to read the AutoCAD database, count the entities in ModelSpace, identify (and count) block references, and display our findings to users (a sketch follows this list):

1. First, let's get the active AutoCAD document and the drawing database.
2. Next, begin a new transaction. Use the using keyword, which will also take care of disposing of the transaction.
3. Open the block table in AutoCAD. In this case, open it for a read operation using the ForRead keyword.
4. Similarly, open the block table record for ModelSpace, also for read (ForRead); we aren't writing new entities to the drawing database at this time.
5. We'll initialize two counters: one to count all AutoCAD entities, and one to specifically count block references (also known as Inserts).
6. Then, as we iterate through all of the entities in AutoCAD's ModelSpace, we'll tally AutoCAD entities in general, as well as block references.
7. Having counted the total number of entities overall, as well as the total number of block references, we'll display that information to the user in a dialog box.
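Putting those steps together, a minimal sketch of the command might look like the following. This is not the book's exact listing—the message text and variable names are our own—but the types and calls used (Document, Transaction, BlockTable, BlockTableRecord, ShowAlertDialog) are the standard AutoCAD .NET API:

using Autodesk.AutoCAD.ApplicationServices;
using Autodesk.AutoCAD.DatabaseServices;
using Autodesk.AutoCAD.Runtime;

public class MyCommands
{
    [CommandMethod("COUNTENTS")]
    public void CountEnts()
    {
        // Get the active document and its drawing database.
        Document doc = Application.DocumentManager.MdiActiveDocument;
        Database db = doc.Database;

        // Begin a transaction; "using" disposes of it for us.
        using (Transaction tr = db.TransactionManager.StartTransaction())
        {
            // Open the block table, then the ModelSpace record, for read.
            BlockTable bt = (BlockTable)tr.GetObject(
                db.BlockTableId, OpenMode.ForRead);
            BlockTableRecord ms = (BlockTableRecord)tr.GetObject(
                bt[BlockTableRecord.ModelSpace], OpenMode.ForRead);

            int entityCount = 0;   // all entities
            int blockRefCount = 0; // block references (Inserts)

            // Iterate through everything in ModelSpace and tally.
            foreach (ObjectId id in ms)
            {
                entityCount++;
                if (id.ObjectClass.DxfName == "INSERT")
                    blockRefCount++;
            }

            // Display the findings to the user in a dialog box.
            Application.ShowAlertDialog(
                "Entities found: " + entityCount +
                "\nBlock references: " + blockRefCount);

            tr.Commit();
        }
    }
}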
How it works...
AutoCAD is a multi-document application. We must identify the active document (the drawing that is activated) in order to read the correct database. Before reading the database we must start a transaction. In fact, we use transactions whenever we read from or write to the database. In the drawing database, we open AutoCAD's block table to read it. The block table contains the block table records ModelSpace, PaperSpace, and PaperSpace0. We are going to read the entities in ModelSpace, so we will open that block table record for reading. We create two variables to store the tallies as we iterate through ModelSpace, keeping track of both block references and AutoCAD entities in general. A block reference is just a reference to a block. A block is a group of entities that is selectable as if it were a single entity. Blocks can be saved as drawing files (.dwg) and then inserted into other drawings. Once we have examined every entity in ModelSpace, we display the tallies (which are stored in the two count variables we created) to the user in a dialog box. Because we used the using keyword when creating the transaction, it is automatically disposed of when our command function ends.

Summary
The Session command, one of the four types of command stubs added to our project by the AutoCAD 2014 .NET Wizard, has application (rather than document) context. This means it is executed in the context of the entire AutoCAD session, not just within the context of the current document. This allows for some operations that are not permitted in document context, such as creating a new drawing. The other command stub, described as having pickfirst selection, is executed with pre-selected AutoCAD entities. In other words, users can select (or pick) AutoCAD entities just prior to executing the command and those entities will be known to the command upon execution.

Resources for Article:
Further resources on this subject:
- Dynamically enable a control (Become an expert) [Article]
- Introduction to 3D Design using AutoCAD [Article]
- Getting Started with DraftSight [Article]
Installation and Getting Started with Firebug

Packt
05 Apr 2010
3 min read
What is Firebug?
Firebug is a free, open source tool that is available as a Mozilla Firefox extension, and allows debugging, editing, and monitoring of any website's CSS, HTML, DOM, and JavaScript. It also allows performance analysis of a website. Furthermore, it has a JavaScript console for logging errors and watching values. Firebug has many other tools to enhance the productivity of today's web developer. Firebug integrates with Firefox to put a wealth of development tools at our fingertips while we browse a website. Firebug allows us to understand and analyze the complex interactions that take place between various elements of any web page when it is loaded by a browser. Firebug simply makes it easier to develop websites/applications. It is one of the best web development extensions for Firefox. Firebug provides all the tools that a web developer needs to analyze, debug, and monitor JavaScript, CSS, HTML, and AJAX. It also includes a debugger, error console, command line, and a variety of useful inspectors. Although Firebug allows us to make changes to the source code of our web page, the changes are made only to the copy of the HTML code that the server has sent to the browser; they don't get reflected in the code that is on the server. So, in order to ensure that the changes are permanent, corresponding changes have to be made in the code that resides on the server.

The history of Firebug
Firebug was initially developed by Joe Hewitt, one of the original Firefox creators, while working at Parakey Inc. Facebook purchased Parakey in July 2007. Currently, the open source development and extension of Firebug is overseen by the Firebug Working Group. It has representation from Mozilla, Google, Yahoo, IBM, Facebook, and many other companies. Firebug 1.0 Beta was released in December 2006. Firebug usage has grown very fast since then. Approximately 1.3 million users had Firebug installed as of January 2009. The latest version of Firebug is 1.5. Today, Firebug has a very open and thriving community. Some individuals, as well as some companies, have developed very useful plugins on top of Firebug.

The need for Firebug
During the 90s, websites were mostly static HTML pages, JavaScript code was considered a hack, and there were no interactions between page components on the browser side. The situation is not the same anymore. Today's websites are a product of several distinct technologies, and web developers must be proficient in all of them—HTML, CSS, JavaScript, DOM, and AJAX, among others. Complex interactions happen between various page components on the browser side. However, web browsers have always focused on the needs of the end users; as a result, web developers have long been deprived of a good tool on the client/browser side to help them develop and debug their code. Firebug fills this gap very nicely—it provides all the tools that today's web developer needs in order to be productive and efficient with code that runs in the browser.

Firebug capabilities
Firebug has a host of features that allow us to do the following (and much more!):

- Inspect and edit HTML
- Inspect and edit CSS and visualize CSS metrics
- Use a performance tuning application
- Profile and debug JavaScript
- Explore the DOM
- Analyze AJAX calls
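As a taste of the JavaScript console mentioned above, here is a small sketch of the kind of logging calls Firebug understands on pages where it is enabled. The renderPage function is a hypothetical stand-in for your own code; the console calls themselves are part of the console API Firebug provides:

// Log plain messages and values while a page runs.
console.log('page loaded at', new Date());

// Severity levels show up with different icons in the console.
console.warn('image took longer than expected to load');
console.error('AJAX call failed');

// Time a block of code to spot slow spots.
console.time('render');
renderPage(); // assumed to be defined elsewhere on the page
console.timeEnd('render');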
The Basics of WordPress and jQuery Plugin

Packt
27 Sep 2010
10 min read
WordPress 3.0 jQuery Enhance your WordPress website with the captivating effects of jQuery. Enhance the usability and increase visual interest in your WordPress 3.0 site with easy-to-implement jQuery techniques. Create advanced animations, use the UI plugin to your advantage within WordPress, and create custom jQuery plugins for your site. Turn your jQuery plugins into WordPress plugins and share with the world. Implement all of the above jQuery enhancements without ever having to make a WordPress content editor switch over into HTML view.

Read more about this book
(For more resources on WordPress and jQuery, see here.)

WordPress plugins overview
So themes change the look of WordPress without affecting its functionality. But what if you want to change or add functionality? WordPress plugins allow easy modification, customization, and enhancement to a WordPress site. Instead of having to dig into the main files and change the core programming of WordPress, you can add functionality by installing and activating WordPress plugins. The WordPress development team took great care to make it easy to create plugins using access points and methods provided by the WordPress Plugin API (Application Program Interface). The best place to search for plugins is: http://wordpress.org/extend/plugins/. Once you have a plugin, it's a simple matter of decompressing the file (usually just unzipping it) and reading the included readme.txt file for installation and activation instructions. For most WordPress plugins, this is simply a matter of uploading the file or directory to your WordPress installation's wp-content/plugins directory and then navigating to the Administration | Plugins | Installed panel to activate it. The next screenshot shows the Plugins admin panel with the activation screen for the default Akismet, Hello Dolly, and new WordPress Importer plugins: So how does a WordPress plugin differ from a jQuery plugin? In theory and intent, not that much, but in practice, there are quite a few differences. Let's take a look at jQuery plugins.

jQuery plugins overview
jQuery has the ability to allow you to take the scripts that you've created and encapsulate them into the jQuery function object. This allows your jQuery code to do a couple of key things. First, it becomes more easily ported over to different situations and uses. Second, your plugin works as a function that can be integrated into larger scripts as part of the jQuery statement chain. The best place to browse for jQuery plugins is the jQuery plugins page (http://plugins.jquery.com/), as seen in the next screenshot: In addition to having jQuery already bundled, WordPress has quite a few jQuery plugins already bundled with it as well. WordPress comes bundled with Color and Thickbox, as well as Form and most of the jQuery UI plugins. Each of these plugins can be enabled with wp_enqueue_script, either in the theme's header.php file or functions.php file. In this article, we'll shortly learn how to enable a jQuery plugin directly in a WordPress plugin. Of course, you can also download jQuery plugins and include them manually into your WordPress theme or plugins. You'd do this for plugins that aren't bundled with WordPress, or if you need to amend the plugins in any way. Yes, you've noticed there's no easy jQuery plugin activation panel in WordPress.
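Instead, enabling the bundled scripts happens in code. Here is a minimal sketch of doing so from a theme's functions.php; the handle names (jquery, jquery-form, thickbox) are WordPress' registered names for its bundled copies, while the function name is our own invention:

<?php
// In the theme's functions.php — a sketch, not a drop-in.
function my_enqueue_bundled_scripts() {
    wp_enqueue_script('jquery');       // WordPress' bundled jQuery
    wp_enqueue_script('jquery-form');  // bundled Form plugin
    wp_enqueue_script('thickbox');     // bundled Thickbox plugin
}
add_action('wp_enqueue_scripts', 'my_enqueue_bundled_scripts');
?>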
This is where understanding your chosen theme and WordPress plugins will come in handy! You'll soon find you have quite a few options to choose from when leveraging jQuery. Now that we have an overview of what WordPress themes, plugins, and jQuery plugins are, let's learn how to take better advantage of them.

The basics of a WordPress plugin
The goal here is to show you the structure of a simple WordPress plugin and the basics of how to construct one. Understanding this, you can begin to write your own basic plugins and feel more confident looking through other people's plugins when assessing what kind of features they provide to your WordPress site and if you need to tweak anything for your jQuery enhancements. Even as simply and basically as we're going to work, you'll see how truly powerful WordPress plugins can be.

Want to become a WordPress plugin rockstar? You can start off with, yet again, WordPress 2.7 Complete by April Hodge Silver and Hasin Hayder. There's a chapter on plugins that walks you through creating very useful simple plugins, as well as a more complex plugin that writes to the WordPress database. Beyond that, you'll want to check out WordPress Plugin Development: Beginner's Guide by Vladimir Prelovac. Don't let the title fool you: Vladimir will have you generating feature-rich and dynamic WordPress plugins using WordPress' coding standards, all explained with clear, step-by-step code.

Working with plugins does require some experience with PHP. I'll keep this explanation fairly straightforward for non-PHP developers, and those of you with PHP experience should be able to see how to expand on this example to your advantage in WordPress. Just as with themes, WordPress plugins require a little structure to get started. There's no defined hierarchy for plugin files, but you do need, at the very least, a PHP file with a special comment up top so that WordPress can display it within the Plugin Management page. While there are some single-file plugins out there, such as the Hello Dolly plugin, which comes with your WordPress installation, when you first start developing you never know how a plugin may grow. To be safe, I like to organize my plugins into a uniquely named folder. Again, like with themes, WordPress relies on the plugin directory's namespace, so uniqueness is of key importance! In the wp-content/plugins directory you can place a unique folder and, inside that, create a .php file; at the beginning of the file, inside the <?php ?> tags, include the following header information. Only the Plugin Name line is absolutely required. The rest of the information is optional and populates the Manage Plugins page in the Administration panel.

<?php
/*
Plugin Name: your WordPress Plugin Name goes here
Plugin URI: http://yoururl.com/plugin-info
Description: Explanation of what it does
Author: Your Name
Version: 1.0
Author URI: http://yoururl.com
*/
//plugin code will go here
?>

Make sure that you don't have any spaces before your <?php tag or after your ?> tag. If you do, WordPress will display errors about page headers already having been sent. Once you have your .php file set up in its own folder inside the plugins directory, you can add a basic PHP function. You can then decide how you want to invoke that function, using an action hook or a filter hook.
For example:

<?php
/*
Plugin Name: your WordPress Plugin Name goes here
Plugin URI: http://yoururl.com/plugin-info
Description: Explanation of what it does
Author: Your Name
Version: 1.0
Author URI: http://yoururl.com
*/
function myPluginFunction(){
    //function code will go here
}
add_filter('the_title', 'myPluginFunction');
//or you could:
/*add_action('wp_head', 'myPluginFunction');*/
?>

If you don't have wp_head or wp_footer in your theme, many plugins can't function, and you limit the plugins you can write. In my plugins, I mostly use wp_head and the init action hooks. Luckily, most filter hooks will work in your plugins, as WordPress will run through them in The Loop. For the most part, you'll get the most work done in your plugin using the_title and the_content filter hooks. Each of these filter hooks will execute your function when WordPress loops through those template tags in The Loop.

Want to know what filter and action hooks are available? The list is exhaustive. In fact, it's so immense that the WordPress codex doesn't seem to have them all documented! For the most complete listing available of all action and filter hooks, including newer hooks available in version 2.9.x, you'll want to check out Adam Brown's WordPress Hooks Database: http://adambrown.info/p/wp_hooks. Overwhelmed by the database? Of course, checking out Vladimir's WordPress Plugin Development: Beginner's Guide will get you started with an arsenal of action and filter hooks as well.

You now understand the basics of a WordPress plugin! Let's do something with it.

Project: Writing a WordPress plugin to display author bios
As we've discussed, plugins can help expand WordPress and give it new functionality. However, we've seen that adding jQuery scripts directly to the theme and editing its template pages here and there will do the trick in most cases. But let's imagine a more complicated scenario using our modified default theme and the hypothetical client. While we tweaked the default theme, I figured that this client might want to have her site's focus be more journalism-oriented, and thus, she'd want some attention drawn to the author of each post upfront. I was right, she does. However, there's a catch. She doesn't just want their WordPress nickname displayed; she'd prefer their full first and last name be displayed as well, as it's more professional. She'd also like their quick biography displayed with a link to their own URL and yet, not have that information "get in the way" of the article itself, or lost down at the bottom of the post. And here's the really fun part: she wants this change effected not just on this site, but across her network of genre-specific news sites, over 20 of them at last count (dang, I forgot she had so many sites! Good thing she's hypothetical). For this specific WordPress site, it's easy enough to go in and comment out the custom function we dealt with earlier: add in the_author tag and display it twice, passing each tag some parameters to display the first and last name. I can also add a tag to display the author's biography snippet from the user panel and URL (if they've filled out that information). Also, it's certainly easy enough to add a little jQuery script to make that bio div show up on a rollover of the author's name.
However, having to take all that work and then re-copy it into 20 different sites, many of which are not using the default theme, and most of which have not had jQuery included into their theme, does sound like an unnecessary amount of work (to boot, the client has mentioned that she's deciding on some new themes for some of the sites, but she doesn't know which sites will get what new themes yet). It is an unnecessary amount of work. Instead of amending this theme and then poking through, pasting, testing, and tweaking in 20 other themes, we'll spend that time creating a WordPress plugin. It will then be easy to deploy it across all the client's sites, and it shouldn't matter what theme each site is using. Let's get started!
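To preview where the project is headed, here is a minimal sketch of the heart of such a plugin. The function name and CSS class names are our own inventions, and the real plugin would also enqueue the jQuery rollover behavior; the WordPress calls themselves (get_the_author_meta, add_filter, and the escaping helpers) are standard API:

<?php
/*
Plugin Name: Author Bio Display (sketch)
Description: Prepends the author's full name and bio to each post.
Version: 0.1
*/
function add_author_bio_to_post($content) {
    // Build the author's full first and last name.
    $name = get_the_author_meta('first_name') . ' ' .
            get_the_author_meta('last_name');
    $bio  = get_the_author_meta('description');
    $url  = get_the_author_meta('user_url');

    $html  = '<div class="author-bio">';
    $html .= '<span class="author-name">' . esc_html($name) . '</span>';
    if ($bio) {
        // The bio div that jQuery would later show on rollover.
        $html .= '<div class="author-bio-text">' . esc_html($bio);
        if ($url) {
            $html .= ' <a href="' . esc_url($url) . '">Website</a>';
        }
        $html .= '</div>';
    }
    $html .= '</div>';

    // Prepend the bio markup so it sits above the article text.
    return $html . $content;
}
add_filter('the_content', 'add_author_bio_to_post');
?>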
IBM FileNet P8 Content Manager: End User Tools and Tasks

Packt
15 Feb 2011
10 min read
Getting Started with IBM FileNet P8 Content Manager Install, customize, and administer the powerful FileNet Enterprise Content Management platform Quickly get up to speed on all significant features and the major components of IBM FileNet P8 Content Manager Provides technical details that are valuable both for beginners and experienced Content Management professionals alike, without repeating product reference documentation Gives a big picture description of Enterprise Content Management and related IT areas to set the context for Content Manager Written by an IBM employee, Bill Carpenter, who has extensive experience in Content Manager product development, this book gives practical tips and notes with a step-by-step approach to design real Enterprise Content Management solutions to solve your business needs Parts of some of these topics will cover things that are features of the XT application rather than general features of CM and the P8 platform. We'll point those out so there is no confusion. What is Workplace XT? IBM provides complete, comprehensive APIs for writing applications to work with the CM product and the P8 platform. They also provide several pre-built, ready to use environments for working with CM. These range from connectors and other integrations, to IBM and third-party applications, to standalone applications provided with CM. Business needs will dictate which of these will be used. It is common for a given enterprise to use a mix of custom coding, product integrations, and standalone CM applications. Even in cases where the standalone CM applications are not widely deployed throughout the enterprise, they can still be used for ad hoc exploration or troubleshooting by administrators or power users. XT is a complete, standalone application included with CM. It's a good application for human-centered document management, where users in various roles actively participate in the creation and management of individual items. XT exposes most CM features, including the marriage of content management and process management (workflow). XT is a thin client web application built with modern user interface technologies so that it has something of a Web 2.0 look and feel. To run XT, open its start page with your web browser. The URL is the server name where XT is installed, the appropriate port number, and the default context of WorkplaceXT. In our installation, that's http://wjc-rhel.example.net:9080/WorkplaceXT. We don't show it here, but for cases where XT is in wider use than our all-in-one development system, it's common to configure things so that it shows up on port 80, the default HTTP port. This can be done by reconfiguring the application server to use those ports directly or by interposing a web server (for example, IBM HTTP Server, IHS) as a relay between the browser clients and the application server. It's also common to configure things such that at least the login page is protected by TLS/SSL. Details for both of these configuration items are covered in depth in the product documentation (they vary by application server type). For some of the examples in this article, we'll log on as the high-privileged user poweruser, and, for others, we'll log on as the low-privileged user unpriv. You can create them now or substitute any pair of non-administrator accounts from your own directory. Browsing folders and documents Let's have a look at XT's opening screen. Log onto XT as user poweruser. 
With the folder icon selected from the top-left group of four icons, as in the figure below, XT shows a tree view that allows browsing through folders for content. Of course, we don't actually have any content in the Object Store yet, so all we see when we expand the Object Store One node are pseudo-folders (that is, things XT puts into the tree but which are not really folders in the Object Store). Let's add some content right now. For now, we'll concentrate on the user view of things.

Adding folders

In the icon bar are two icons with small, green "+" signs on them (you can see them in the screenshot above). The left icon, which looks like a piece of paper, is for adding documents to the currently expanded folder. The icon to the right of it, which looks like an office-supply folder, is for adding a subfolder to the currently expanded folder.

Select Object Store One in the tree view, and click the icon for adding a folder. The first panel of a pop-up wizard appears, as shown above, prompting you for a folder name. We have chosen the name literature to continue the example that we started in Administrative Tools and Tasks. Click the Add button, and the folder will be created and will appear in the tree view. Follow the same procedure to add a subfolder called shakespeare; that is, create a folder whose path is /literature/shakespeare.

You can modify the security of most objects by right-clicking and selecting More Information | Security. A pop-up panel shows the object's Access Control List (ACL). For now, we just want to allow other users to add items to the shakespeare folder (we'll need that for the illustration of entry templates when we get to that section below). Open that folder's security panel, click the link for #AUTHENTICATEDUSERS, and check the File In Folder box in the Allow column, highlighted in the following screenshot:

Adding documents

Now let's add some actual documents to our repository. We'll add a few of Shakespeare's famous works as sample documents. There are many sources for electronic copies of Shakespeare's works readily available on the Internet. One of our favorites for exercises like this is at the Massachusetts Institute of Technology: http://shakespeare.mit.edu. It's handy because it's really just the text, without a lot of notes, criticisms, and so on. The first thing you see is a list of all the works categorized by type of work, and you're only a click or two away from the full HTML text of each work. It doesn't hurt that they explicitly state that they have placed the HTML versions in the public domain.

We'll use the full versions in a single HTML page for our sample documents. In some convenient place on your desktop machine, download a few of the full text files. We chose As You Like It (asyoulikeit_full.html), Henry V (henryv_full.html), Othello (othello_full.html), and Venus and Adonis (VenusAndAdonis.html).

Select the /literature/shakespeare folder in the tree view, and click the icon for adding a document. The document add wizard pops up, as shown next. Browse to the location of the first document file, asyoulikeit_full.html, and click the Next button. Don't click Add Now, or you won't get the correct document class for our example. Initially, the class Document is indicated. Click on Class and select Work of Literature. The list of properties automatically adjusts to reflect the custom properties defined for our custom class.
Supply the values indicated (note in particular that you have to adjust the Document Title property because it defaults to the file name). XT uses the usual convention of marking required properties with an asterisk. Click Add. Repeat the above steps for the other three documents. You'll then have a short list in the shakespeare folder.

XT also provides a "landing zone" for drag-and-drop of documents. It's located in the upper right-hand corner of the browser window, as shown next. This can save you the trouble of browsing for documents in your filesystem. Even though it can accept multiple documents in a single drag-and-drop, it prompts for only a single set of property values, which is applied to all of the documents.

Viewing documents

Clicking a document link in XT downloads the content and launches a suitable application. For most documents, the web browser is used to find and launch an application based on the document content type, although XT does have some configurability in its site preferences for customizing that behavior. The behavior you can normally expect is the same as if you clicked a link for a document on any typical website.

For graphical image content (JPEG, PNG, and similar formats), XT launches the Image Viewer applet. The Image Viewer applet is especially handy for dealing with Tagged Image File Format (TIFF) graphics because most browsers do not handle TIFF natively, and it is common for fax and scanning applications to generate TIFF images of pages. However, even for common graphics formats that can be rendered by the browser, the Image Viewer applet has more functionality. The most interesting extra features are for adding textual or graphical annotations to the image. Rather than directly manipulating the original image, the annotations are created in an overlay layer and saved as Annotation objects in the repository. For example, in the image below, displayed in the Image Viewer applet, the stamp tool has been used to mark it as a DRAFT. That annotation can easily be repositioned or even removed without affecting the original image.

The included Image Viewer applet is licensed only for use within the FileNet components where it's already integrated. It is an OEM version of ViewONE from Daeja Image Systems. The ViewONE Pro application, which has additional functionality, is available for license directly from Daeja and can be integrated into FileNet applications as a supported configuration. In such cases, however, support for the viewer itself comes directly from Daeja.

Entry templates

Although each step of document and folder creation is individually straightforward, taken together the steps can become bewildering to non-technical users, especially when coupled with naming, security, and other conventions. Even when the process is completely understood, several details are purely clerical in nature but still vulnerable to mis-typing and similar errors. From these motivations comes an XT feature called Entry Templates. Someone, usually an administrator, creates an entry template as an aid for other users who are creating folders or documents. A great many details can be specified in advance, while the user can still be given choices at appropriate points.

To create an entry template, navigate to Tools | Advanced Tools | Entry Templates | Add. A wizard is launched from which you can define a Document Entry Template or a Folder Entry Template.
We won't go through all of the steps here, since the user interface is easy to understand. Both kinds of entry template are Document subclasses, and XT files the entry templates it creates into folders. When you double-click an entry template, XT presents a user interface that adheres to the entry template's design. For example, in this screenshot, which uses an entry template called Shakespearean Document, the document class and target folder are already selected and cannot be changed by the user. Likewise, the author last and full names are pre-populated. Other properties, which genuinely need user input, can be edited as usual.
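Conceptually, an entry template is just a bundle of preset property values plus a short list of fields the user may still fill in. The following is a minimal, hypothetical Python sketch of that idea; the class and property names are ours for illustration only, not FileNet or XT API code.

# Hypothetical sketch of the entry-template idea; not FileNet/XT API code.
class EntryTemplate(object):
    def __init__(self, document_class, target_folder, presets, editable):
        self.document_class = document_class  # locked choice of document class
        self.target_folder = target_folder    # locked filing location
        self.presets = presets                # pre-populated property values
        self.editable = editable              # properties the user must supply

    def build_properties(self, user_input):
        # Refuse edits to anything the template does not mark as editable.
        illegal = set(user_input) - set(self.editable)
        if illegal:
            raise ValueError("Not editable in this template: %s" % illegal)
        merged = dict(self.presets)
        merged.update(user_input)
        return merged

shakespearean = EntryTemplate(
    document_class="Work of Literature",
    target_folder="/literature/shakespeare",
    presets={"AuthorLastName": "Shakespeare",
             "AuthorFullName": "William Shakespeare"},
    editable=["DocumentTitle"])
print(shakespearean.build_properties({"DocumentTitle": "Othello"}))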

jQuery Table Manipulation: Part 2

Packt
24 Oct 2009
6 min read
Advanced Row Striping

Row striping can be as simple as two lines of code to alternate the background color:

$(document).ready(function() {
  $('table.sortable tbody tr:odd').addClass('odd');
  $('table.sortable tbody tr:even').addClass('even');
});

If we declare background colors for the odd and even classes as follows, we can see the rows in alternating shades of gray:

tr.even {
  background-color: #eee;
}
tr.odd {
  background-color: #ddd;
}

While this code works fine for simple table structures, if we introduce non-standard rows into the table, such as sub-headings, the basic odd-even pattern no longer suffices. For example, suppose we have a table of news items grouped by year, with columns for date, headline, author, and topic. One way to express this information is to wrap each year's news items in a <tbody> element and use <th colspan="4"> for the subheading. Such a table's HTML (in abridged form) would look like this:

<table class="striped">
  <thead>
    <tr>
      <th>Date</th>
      <th>Headline</th>
      <th>Author</th>
      <th class="filter-column">Topic</th>
    </tr>
  </thead>
  <tbody>
    <tr><th colspan="4">2007</th></tr>
    <tr><td>Mar 11</td><td>SXSWi jQuery Meetup</td><td>John Resig</td><td>conference</td></tr>
    <tr><td>Feb 28</td><td>jQuery 1.1.2</td><td>John Resig</td><td>release</td></tr>
    <tr><td>Feb 21</td><td>jQuery is OpenAjax Compliant</td><td>John Resig</td><td>standards</td></tr>
    <tr><td>Feb 20</td><td>jQuery and Jack Slocum's Ext</td><td>John Resig</td><td>third-party</td></tr>
  </tbody>
  <tbody>
    <tr><th colspan="4">2006</th></tr>
    <tr><td>Dec 27</td><td>The Path to 1.1</td><td>John Resig</td><td>source</td></tr>
    <tr><td>Dec 18</td><td>Meet The People Behind jQuery</td><td>John Resig</td><td>announcement</td></tr>
    <tr><td>Dec 13</td><td>Helping you understand jQuery</td><td>John Resig</td><td>tutorial</td></tr>
  </tbody>
  <tbody>
    <tr><th colspan="4">2005</th></tr>
    <tr><td>Dec 17</td><td>JSON and RSS</td><td>John Resig</td><td>miscellaneous</td></tr>
  </tbody>
</table>

With separate CSS styles applied to <th> elements within <thead> and <tbody>, a snippet of the table might look like this:

To ensure that the alternating gray rows do not override the color of the subheading rows, we need to adjust the selector expression:

$(document).ready(function() {
  $('table.striped tbody tr:not([th]):odd').addClass('odd');
  $('table.striped tbody tr:not([th]):even').addClass('even');
});

The added selector, :not([th]), removes any table row that contains a <th> from the matched set of elements. Now the table will look like this:

Three-color Alternating Pattern

There may be times when we want to apply more complex striping. For example, we can apply a pattern of three alternating row colors rather than just two. To do so, we first need to define another CSS rule for the third row. We'll also reuse the odd and even styles for the other two, but add more appropriate class names for them:

tr.even, tr.first {
  background-color: #eee;
}
tr.odd, tr.second {
  background-color: #ddd;
}
tr.third {
  background-color: #ccc;
}

To apply this pattern, we start the same way as in the previous example, by selecting all rows that are descendants of a <tbody> but filtering out the rows that contain a <th>. This time, however, we attach the .each() method so that we can use its built-in index:

$(document).ready(function() {
  $('table.striped tbody tr').not('[th]').each(function(index) {
    // Code to be applied to each element in the matched set.
  });
});

To make use of the index, we can assign our three classes to a numeric key: 0, 1, or 2. We'll do this by creating an object, or map:

$(document).ready(function() {
  var classNames = {
    0: 'first',
    1: 'second',
    2: 'third'
  };
  $('table.striped tbody tr').not('[th]').each(function(index) {
    // Code to be applied to each element in the matched set.
  });
});

Finally, we need to add the class that corresponds to those three numbers, sequentially, and then repeat the sequence. The modulus operator, designated by %, is especially convenient for such calculations. A modulus returns the remainder of one number divided by another; this remainder will always range between 0 and one less than the divisor. Using 3 as an example, we can see this pattern:

3/3 = 1, remainder 0
4/3 = 1, remainder 1
5/3 = 1, remainder 2
6/3 = 2, remainder 0
7/3 = 2, remainder 1
8/3 = 2, remainder 2

And so on. Since we want the remainder range to be 0–2, we can use 3 as the divisor (second number) and the value of index as the dividend (first number). Now we simply put that calculation in square brackets after classNames to retrieve the corresponding class from the object variable as the .each() method steps through the matched set of rows:

$(document).ready(function() {
  var classNames = {
    0: 'first',
    1: 'second',
    2: 'third'
  };
  $('table.striped tbody tr').not('[th]').each(function(index) {
    $(this).addClass(classNames[index % 3]);
  });
});

With this code in place, we now have the table striped with three alternating background colors:

We could, of course, extend this pattern to four, five, six, or more background colors by adding key-value pairs to the object variable and increasing the value of the divisor in classNames[index % n].
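Since the striping logic boils down to simple remainder arithmetic, here is a quick standalone illustration of the index % 3 cycling, sketched in Python purely for brevity; the jQuery code above applies exactly the same calculation.

# Cycle through three class names using the modulus operator.
class_names = {0: 'first', 1: 'second', 2: 'third'}
for index in range(7):
    print(index, '->', class_names[index % 3])
# Prints: 0 -> first, 1 -> second, 2 -> third, 3 -> first, 4 -> second, ...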

Easily Writing SQL Queries with Spring Python

Packt
24 May 2010
9 min read
(For more resources on Spring, see here.)

Many of our applications contain dynamic data that needs to be pulled from and stored within a relational database. Even though key/value-based data stores exist, the vast majority of production data stores are housed in SQL-based relational databases. Given this de facto requirement, it improves developer efficiency if we can focus on the SQL queries themselves, and not spend lots of time writing plumbing code and making every query fault tolerant.

The classic SQL issue

SQL is a long-standing standard that shares a common paradigm for writing queries with many modern programming languages (including Python). The resulting effect is that coding queries by hand is laborious. Let's explore this dilemma by writing a simple SQL query using Python's database API.

DROP TABLE IF EXISTS article;
CREATE TABLE article (
    id serial PRIMARY KEY,
    title VARCHAR(11),
    wiki_text VARCHAR(10000)
);
INSERT INTO article(id, title, wiki_text)
VALUES(1, 'Spring Python Book', 'Welcome to the [http://springpythonbook.com Spring Python] book, where you can learn more about [[Spring Python]].');
INSERT INTO article(id, title, wiki_text)
VALUES(2, 'Spring Python', '''Spring Python''' takes the concepts of Spring and applies them to the world of [http://python.org Python].');

Now, let's write a SQL statement that counts the number of wiki articles in the system using the database's shell.

SELECT COUNT(*) FROM ARTICLE

Next, let's write some Python code that runs the same query on an sqlite3 database using Python's official database API (http://www.python.org/dev/peps/pep-0249).

import sqlite3

db = sqlite3.connect("/path/to/sqlite3db")
cursor = db.cursor()
results = None
try:
    try:
        cursor.execute("SELECT COUNT(*) FROM ARTICLE")
        results = cursor.fetchall()
    except Exception, e:
        print "execute: Trapped %s" % e
finally:
    try:
        cursor.close()
    except Exception, e:
        print "close: Trapped %s, and throwing away" % e
return results[0][0]

That is a considerable block of code to execute such a simple query. Let's examine it in closer detail. First, we connect to the database. For sqlite3, all we need is a path; other database engines usually require a username and a password. Next, we create a cursor to hold our result set. Then we execute the query. To protect ourselves from any exceptions, we wrap this with some exception handlers. After completing the query, we fetch the results. After pulling the results from the result set into a variable, we close the cursor. Finally, we can return our response. Python bundles up the results as a list of tuples; since we only need one row, and the first column, we do a double index lookup.

What is all this code trying to find in the database? The key statement is a single line:

cursor.execute("SELECT COUNT(*) FROM ARTICLE")

What if we were writing a script? This would be a lot of work to find one piece of information. Granted, a script that exits quickly could probably skip some of the error handling as well as closing the cursor. But it is still quite a bit of boilerplate just to get a cursor for running a query. And what if this is part of a long-running application? We need to close the cursors after every query to avoid leaking database resources.
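As an aside, not part of the original example: modern Python can trim some of this cursor bookkeeping with context managers. contextlib.closing guarantees the cursor is closed even when the query raises, and a "?" placeholder keeps parameter values out of the SQL string. The min_id parameter below is purely illustrative.

import sqlite3
from contextlib import closing

def count_articles(db_path, min_id=0):
    # The connection context manager commits or rolls back the transaction;
    # closing() ensures the cursor is released even on error.
    with sqlite3.connect(db_path) as db:
        with closing(db.cursor()) as cursor:
            cursor.execute("SELECT COUNT(*) FROM ARTICLE WHERE id >= ?",
                           (min_id,))
            return cursor.fetchone()[0]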
Large applications also have many different queries to maintain, and coding this pattern over and over can sap a development team of its energy.

Parameterizing the code

This boilerplate block of code is a recurring pattern. Could we parameterize it and make it reusable? We've already identified the key piece, the SQL statement, so let's rewrite the block as a function that accepts it as an argument.

import sqlite3

def query(sql_statement):
    db = sqlite3.connect("/path/to/sqlite3db")
    cursor = db.cursor()
    results = None
    try:
        try:
            cursor.execute(sql_statement)
            results = cursor.fetchall()
        except Exception, e:
            print "execute: Trapped %s" % e
    finally:
        try:
            cursor.close()
        except Exception, e:
            print "close: Trapped %s, and throwing away" % e
    return results[0][0]

Our first step nicely parameterizes the SQL statement, but that is not enough. The return statement is hard-coded to return the first entry of the first row. For counting articles, what we have written is fine, but it isn't flexible enough for other queries. We need the ability to plug in our own results handler.

import sqlite3

def query(sql_statement, row_handler):
    db = sqlite3.connect("/path/to/sqlite3db")
    cursor = db.cursor()
    results = None
    try:
        try:
            cursor.execute(sql_statement)
            results = cursor.fetchall()
        except Exception, e:
            print "execute: Trapped %s" % e
    finally:
        try:
            cursor.close()
        except Exception, e:
            print "close: Trapped %s, and throwing away" % e
    return row_handler(results)

We can now code a custom handler:

def count_handler(results):
    return results[0][0]

query("select COUNT(*) from ARTICLE", count_handler)

With this custom results handler, we can invoke our query function and feed it both the query and the handler. The only thing left is to handle creating a connection to the database; wrapping the sqlite3 connection code with a factory solution is left as an exercise for the reader.

What we have coded here is essentially the core functionality of DatabaseTemplate. This method of taking an algorithm and parameterizing it for reuse is known as the template pattern. DatabaseTemplate also performs some extra checks to protect queries from SQL injection attacks.

Replacing multiple lines of query code with one line of Spring Python

Spring Python has a convenient utility class called DatabaseTemplate that greatly simplifies this problem. Let's replace the two lines of import and connect code from the earlier example with some Spring Python setup code.

from springpython.database.factory import Sqlite3ConnectionFactory
from springpython.database.core import DatabaseTemplate

conn_factory = Sqlite3ConnectionFactory("/path/to/sqlite3db")
dt = DatabaseTemplate(conn_factory)

At first glance, we appear to be taking a step back: we just replaced two lines of earlier code with four. However, the next block improves things significantly. Let's replace the earlier coded query with a call using our instance of DatabaseTemplate:

return dt.query_for_object("SELECT COUNT(*) FROM ARTICLE")

We have now reduced a complex 14-line block of code to one line of Spring Python code. This makes our Python code appear as simple as the original SQL statement we typed in the database's shell, and it also reduces the noise.

The Spring triangle—Portable Service Abstractions

The diagram shown earlier illustrates the key principles behind Spring Python. The DatabaseTemplate represents a Portable Service Abstraction because:

- It is portable, because it uses Python's standardized database API rather than tying us to any database vendor.
  Instead, in our example, we injected an instance of Sqlite3ConnectionFactory.
- It provides the useful service of easily accessing information stored in a relational database, letting us focus on the query rather than the plumbing code.
- It offers a nice abstraction over Python's low-level database API with reduced code noise, allowing us to avoid the cost and risk of writing code to manage cursors and exception handling.

DatabaseTemplate handles exceptions by catching and holding them, then properly closing the cursor. It then re-raises the original exception wrapped inside a Spring Python DataAccessException. This way, database resources are properly disposed of without losing the exception stack trace.

Using DatabaseTemplate to retrieve objects

Our first example showed how we can easily reduce our code volume, but it covered only a simple case. A really useful operation would be to execute a query and transform the results into a list of objects. First, let's define a simple object we want to populate with the information retrieved from the database. As shown on the Spring triangle diagram, using simple objects is a core facet of the 'Spring way'.

class Article(object):
    def __init__(self, id=None, title=None, wiki_text=None):
        self.id = id
        self.title = title
        self.wiki_text = wiki_text

If we coded this using Python's standard API, our code would be relatively verbose:

cursor = db.cursor()
results = []
try:
    try:
        cursor.execute("SELECT id, title, wiki_text FROM ARTICLE")
        temp = cursor.fetchall()
        for row in temp:
            results.append(
                Article(id=row[0], title=row[1], wiki_text=row[2]))
    except Exception, e:
        print "execute: Trapped %s" % e
finally:
    try:
        cursor.close()
    except Exception, e:
        print "close: Trapped %s, and throwing away" % e
return results

This isn't very different from the earlier example. The key difference is that instead of assigning fetchall directly to results, we iterate over it, generating a list of Article objects. Instead, let's use DatabaseTemplate to cut down on the volume of code:

return dt.query("SELECT id, title, wiki_text FROM ARTICLE", ArticleMapper())

We aren't done yet: we still have to code ArticleMapper, the class used to iterate over our result set.

from springpython.database.core import RowMapper

class ArticleMapper(RowMapper):
    def map_row(self, row, metadata=None):
        return Article(id=row[0], title=row[1], wiki_text=row[2])

RowMapper defines a single method: map_row. This method is called for each row of data, and receives not only the row's information but also the metadata provided by the database. ArticleMapper can be reused for every query that performs the same mapping.

This is slightly different from the parameterized example shown earlier, where we defined a row-handling function. Here we define a class that contains the map_row function, but the concept is the same: inject a row handler to convert the data.
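For comparison, the spirit of RowMapper can be approximated in plain Python, without Spring Python, using sqlite3's built-in row_factory. This is an illustrative sketch, not part of the article's code; it repeats the Article class from above so it runs standalone.

import sqlite3

class Article(object):  # same shape as the class defined above
    def __init__(self, id=None, title=None, wiki_text=None):
        self.id = id
        self.title = title
        self.wiki_text = wiki_text

def map_row(row):
    # row is a sqlite3.Row, addressable by column name.
    return Article(id=row["id"], title=row["title"],
                   wiki_text=row["wiki_text"])

db = sqlite3.connect("/path/to/sqlite3db")
db.row_factory = sqlite3.Row  # rows become name-addressable
articles = [map_row(r) for r in
            db.execute("SELECT id, title, wiki_text FROM ARTICLE")]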