
How-To Tutorials - CMS and E-Commerce


Application Performance

Packt
15 Nov 2013
8 min read
Data sizing

The cost of abstractions in terms of data size plays an important role. For example, whether or not a data element fits into a processor cache line depends directly upon its size. On a Linux system, we can find out the cache line size and other parameters by inspecting the values in the files under /sys/devices/system/cpu/cpu0/cache/.

Another common concern with data sizing is how much data we hold in the heap at a time, because GC has direct consequences on the application's performance. While processing data, we often do not really need all the data we are holding on to. Consider the example of generating a summary report of sold items for a certain period (in months). Once the summary for a subperiod (a month) is computed, we no longer need the item details, hence it is better to remove the unwanted data while we add the summaries. This is shown in the following example:

    (defn summarize [daily-data] ; daily-data is a map
      (let [s (items-summary (:items daily-data))]
        (-> daily-data
            (select-keys [:digest :invoices]) ; we keep only the required key/val pairs
            (assoc :summary s))))

    ;; now inside report generation code
    (->> (fetch-items period-from period-to :interval-day)
         (map summarize)
         generate-report)

Had we not used select-keys in the preceding summarize function, it would have returned a map containing the summary along with all the other existing keys in the map. Such processing is often combined with lazy sequences, so for this scheme to work it is important not to hold on to the head of the lazy sequence.

Reduced serialization

An I/O channel is a common source of latency, and the perils of over-serialization cannot be overstated. Whether we read or write data over an I/O channel, all of that data needs to be prepared, encoded, serialized, de-serialized, and parsed before being worked on. The less data is involved in each of these steps, the lower the overhead. Where there is no I/O involved, such as in-process communication, it generally makes no sense to serialize at all.

A common example of over-serialization is encountered while working with SQL databases. Often, there are shared SQL query functions that fetch all columns of a table or relation, and they are called by the various functions that implement the business logic. Fetching data that we do not need is wasteful and detrimental to performance for the same reason discussed in the preceding paragraph. While it may seem like more work to write one SQL statement and one database query function per use case, it pays off with better performance. Code that uses NoSQL databases is also subject to this anti-pattern; we have to take care to fetch only what we need, even though it may lead to additional code.

There is a pitfall to be aware of when reducing serialization: often, some information needs to be inferred in the absence of the serialized data. In such cases, where some of the serialization is dropped so that we can infer other information, we must compare the cost of inference versus the serialization overhead, not necessarily per operation but on the whole. We can then consider the resources we can allocate in order to achieve the required capacities for the various parts of our systems.

Chunking to reduce memory pressure

What happens when we slurp a text file regardless of its size? The contents of the entire file will sit in the JVM heap. If the file is larger than the JVM heap capacity, the JVM will terminate by throwing OutOfMemoryError. If the file is large but not large enough to force an OOM error, it leaves relatively little JVM heap space for the other operations in the application to continue. A similar situation takes place when we carry out any operation disregarding the JVM heap capacity. Fortunately, this can be fixed by reading data in chunks and processing them before reading further.
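As a minimal sketch of this idea (the process-line handler here is a hypothetical per-record function, not part of the original example), we can read a file lazily with line-seq and consume it eagerly with reduce, so that only a bounded window of the file is resident at any moment:

    (require '[clojure.java.io :as io])

    ;; process-line is a hypothetical function that handles one record
    (defn process-by-line [filename process-line]
      (with-open [rdr (io/reader filename)]
        ;; reduce consumes the lazy line-seq without retaining its head,
        ;; so only the reader's buffer and the current line stay in memory
        (reduce (fn [n line]
                  (process-line line)
                  (inc n))
                0
                (line-seq rdr))))

Because reduce completes before with-open closes the reader, the whole file gets processed while only one line at a time is realized on the heap.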
Sizing for file/network operations

Let us take the example of a data ingestion process where a semi-automated job uploads large Comma Separated Values (CSV) files via the File Transfer Protocol (FTP) to a file server, and another automated job, written in Clojure, runs periodically to detect the arrival of files via the Network File System (NFS). After detecting a new file, the Clojure program processes the file, updates the result in a database, and archives the file. The program detects and processes several files concurrently. The size of the CSV files is not known in advance, but the format is predefined.

As per the preceding description, one potential problem is how to distribute the JVM heap among the concurrent file-processing jobs, since several files may be processed at once. Another issue is that the operating system imposes a limit on how many files can be open at a time; on Unix-like systems, you can use the ulimit command to extend the limit. We cannot arbitrarily slurp the CSV file contents: we must limit each job to a certain amount of memory and also limit the number of jobs that can run concurrently. At the same time, we cannot read a very small number of rows from a file at a time, because that may impact performance.

    (def ^:const K 1024)

    ;; create the buffered reader using a custom 128K buffer size
    (-> filename
        java.io.FileInputStream.
        java.io.InputStreamReader.
        (java.io.BufferedReader. (* K 128)))

Fortunately, we can specify the buffer size when reading from a file or even from a network stream, so as to tune the memory usage and performance as appropriate. In the preceding code example, we explicitly set the buffer size of the reader to facilitate the same.
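The passage above leaves the job-limiting mechanism abstract. As one illustrative sketch (the limit of four jobs and the process-file function are assumptions, not part of the article's code), a java.util.concurrent.Semaphore can cap how many file-processing jobs run at once, which in turn bounds the total buffer memory in use:

    (import 'java.util.concurrent.Semaphore)

    ;; assumed limit; tune it against your heap budget and buffer size
    (def job-permits (Semaphore. 4))

    ;; process-file is a hypothetical function that reads one CSV in chunks
    (defn run-job [filename process-file]
      (.acquire job-permits)          ; block until a processing slot is free
      (try
        (process-file filename)
        (finally
          (.release job-permits))))   ; always return the permit

    ;; each detected file can then be handed off, for example:
    ;; (future (run-job "incoming/data-001.csv" process-file))

With four permits and a 128K read buffer per job, the reader buffers alone stay bounded at about 512K, regardless of how many files arrive.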
Sizing for JDBC query results

Java's interface standard for SQL databases, JDBC (which is technically not an acronym), supports fetch-size for fetching query results via JDBC drivers. The default fetch size depends on the JDBC driver; most drivers keep a low default value so as to avoid high memory usage and attain internal performance optimization. A notable exception to this norm is the MySQL JDBC driver, which completely fetches and stores all rows in memory by default.

    (require '[clojure.java.jdbc :as jdbc])

    ;; using prepare-statement directly (we rarely use it directly; shown just for demo)
    (with-open [stmt (jdbc/prepare-statement conn sql :fetch-size 1000 :max-rows 9000)]
      (vec (resultset-seq (.executeQuery stmt))))

    ;; using query
    (jdbc/query db [{:fetch-size 1000}
                    "SELECT empno FROM emp WHERE country=?" 1])

When using the Clojure Contrib library java.jdbc (https://github.com/clojure/java.jdbc, as of version 0.3.0), the fetch size can be set while preparing a statement, as shown in the preceding example. The fetch size does not guarantee proportional latency; however, it can be used safely for memory sizing. We must test any performance-impacting latency changes due to fetch size at different loads and use cases for the particular database and JDBC driver. Besides :fetch-size, we can also pass the :max-rows argument to limit the maximum number of rows returned by a query. However, this implies that the extra rows will be truncated from the result, not that the database will internally limit the number of rows it realizes.

Resource pooling

There are several types of resources on the JVM that are rather expensive to initialize; HTTP connections, execution threads, and JDBC connections are examples. The Java API recognizes such resources and has built-in support for creating pools of some of them, so that consumer code borrows a resource from the pool when required and simply returns it to the pool at the end of the job. Java's thread pools and JDBC data sources are prominent examples. The idea is to preserve the initialized objects for reuse. Even when Java does not support pooling of a resource type directly, you can always create a pool abstraction around custom expensive resources. The pooling technique is common in I/O activities, but it is equally applicable to non-I/O purposes where the initialization cost is high.

Summary

Designing an application for performance should be based on the use cases and the patterns of anticipated system load and behavior. Measuring performance is extremely important for guiding optimization in the process. Fortunately, there are several well-known optimization patterns to tap into, such as resource pooling and data sizing.


Installation of FreeNAS

Packt
28 Oct 2009
28 min read
Downloading FreeNAS

Before you can install the FreeNAS server, you will need to download the latest version from the FreeNAS website (http://www.freenas.org). Go to the download section and find the latest "LiveCD" version. The LiveCD version is what is known as an ISO image file and will have the .iso file extension. An ISO image is an exact copy of the structure and data of a CD or DVD disk. Using a CD burning program, you can create a FreeNAS bootable CD. We will look at this in more detail later on.

What Hardware Do I Need?

In this tutorial, we will start exploring FreeNAS, so you will need a machine on which to install the FreeNAS software. At this point in time, it doesn't have to be the final machine you are going to use as the FreeNAS server. You can use a "test" machine now and, having learnt all about FreeNAS, you can build, install, and deploy a production machine (or machines) later. So, what we need now is a PC with at least 96MB of RAM (but 128MB or more is recommended), a bootable CD-ROM drive, a network card, one or more hard disks, and either a floppy disk drive (and a blank formatted disk) or a USB flash disk (MS-DOS formatted and empty). The hard disk will be for the data that you want to store, and the floppy disk or USB flash disk will be for storing the configuration information. For the installation and initialization stages, you will also need a monitor and keyboard (but not a mouse) attached to the PC. You can remove the monitor later, once FreeNAS is up and running.

Warning: FreeNAS boots as a LiveCD, which means that it does not use the disks on the host machine during boot up. However, when you start to configure storage on the FreeNAS server (specifically, when you format drives) all the data on the disk will be LOST. Do NOT use a machine that contains important data or an operating system that you will need afterwards.

Virtualization & VMware

The average PC runs just one operating system, and inside that operating system you run your applications, like word processing and email. There is a technology, called virtualization, which allows a PC to run more than one operating system, or to be more precise, to allow a guest virtual PC to run inside your actual PC. This virtual PC is an independent software box that can run its own OS and applications as if it were a physical computer. A virtual PC behaves exactly like a physical PC and has its own virtual CPU, RAM, hard disk, and network interface card (NIC). You can install FreeNAS on a virtual PC; FreeNAS can't tell the difference between the virtual PC and any other physical machine, and it appears on the network just as a real PC running FreeNAS would. There are lots of virtualization products available for Windows, Linux, and Apple OS X today. You can learn more at Wikipedia: http://en.wikipedia.org/wiki/Virtualization. A very popular virtualization solution is from VMware (http://www.vmware.com). VMware has both commercial and freeware offerings, and there are pre-configured FreeNAS images available for the VMware range of products. This makes it an ideal environment for testing the FreeNAS server.

Quick Start Guide For the Impatient

If you are comfortable with burning ISO images to CDs, setting your computer's BIOS to boot from CD-ROM, disk partitions, and TCP/IP networking, then this little guide should help you get a simple version of the FreeNAS server up and running in just a few minutes.
If, however, some of these things sound daunting, then skip this section and go on to the next one, where we shall go through the installation process one step at a time. For this example, we will use a USB flash disk to store the configuration information. You can use a floppy, but be careful that during the boot process the PC doesn't try to boot from the floppy before it boots from the CD-ROM.

Burning and Booting

Once you have downloaded the ISO image file from the FreeNAS website, you need to burn it to a CD. Having done that, put the CD into the PC as well as the flash disk and switch it on. Make sure that the BIOS is set to boot from CD. If it isn't, you need to enter the BIOS and configure it to boot from CD. On many modern PCs, it is possible to select the boot device at start-up by pressing a special key (often either F8 or F12) to show a boot device menu. You can then select the CD as the boot device.

The boot process is in four distinct parts:

1. First, the PC will go through its POST (Power On Self Test) sequence. Here, the PC will check the amount of memory installed (which you can often see being counted on the screen) and which devices are connected (like hard drives and CD-ROMs).
2. It should then start to boot from the CD. Here, FreeBSD (the underlying OS of FreeNAS) will start to boot; this is recognizable by the simple spinning wheel (made up of the text characters |, /, -, and \, which are animated to give the appearance of spinning).
3. The third step is the FreeNAS boot menu. This will appear for just a few seconds and you should just let it boot normally, which is the default.
4. The final stage is when the FreeNAS logo appears and the system boots as a FreeNAS server. You can tell when the system is fully loaded because the PC speaker will make some short but melodious beeps.

To enable access to the web interface, the network of the FreeNAS server must be configured. Press the SPACE bar on the keyboard and the FreeNAS logo will disappear and a simple text menu will appear.

There are two aspects to configuring the network: first, you need to choose which network card to use, and second, you need to assign it an address. If you have only one network card in your machine, then the FreeNAS server should have found it and automatically assigned it to be the LAN (Local Area Network) interface.

What If My Network Card Isn't Found?

This probably means that the network card in your machine isn't supported by FreeNAS or, more specifically, by FreeBSD. You will need to replace the card with one supported by FreeBSD. Check the FreeBSD hardware compatibility page for more information: http://www.freebsd.org/releases/6.2R/hardware-i386.html

If the console shows the interface and its address, then the network has been recognized and assigned automatically by FreeNAS. The default IPv4 address for FreeNAS is 192.168.1.250; if this is good for your network, then you can just leave it unchanged. However, if you need to change it, then press 2 followed by ENTER. If you want the machine to get its address from DHCP (Dynamic Host Configuration Protocol), answer yes (y) to the IPv4 DHCP question, otherwise answer no (n). If you are not using DHCP, you can now enter the desired IP address. Next, you need to enter the subnet mask: for 255.255.255.0, enter 24; for 255.255.0.0, enter 16; and for 255.0.0.0, enter 8. At this point, you can now skip the default gateway and DNS questions (by just pressing ENTER).
If you do want to enter a default gateway and DNS server at this point, they will usually be the IP address of your Internet router. We won't be using IPv6, so the simplest thing to do now is just answer yes to the "Do you want to use AutoConfiguration for IPv6?" question. This will cause a small delay while FreeNAS tries (and probably fails) to get an IPv6 address, but it is simpler than trying to enter the IPv6 address manually! You are now ready to access the web interface. The FreeNAS web interface can be accessed from any machine on the network with a web browser (including Windows, Linux, and OS X machines). On this client machine, type the address of the FreeNAS server with http:// in front of it into your web browser. For example: http://192.168.1.250

Configuring

The first time you access the FreeNAS web interface, you will be asked for the username and password. The default username is admin and the default password is freenas. You should now be in the web interface. To configure some storage space, you need to work with "Disks". The logical order of working is that disks must be added, then formatted (if need be), then mounted. Finally, access is given to the various mounted disks by configuring different system services like CIFS and FTP.

So, to add a disk, go to Disks: Management. There is a + sign in a circle on the right-hand side of the page (it can be easy to miss the first time); click on it to add a disk. On the next page, select the disk you want to add. If you click on the drop-down menu, you should see the hard disks of the machine, the CD-ROM, and the USB flash disk.

Disk Names in FreeBSD

The disk naming convention in FreeBSD is:

/dev/ad0: the IDE/ATA Primary Master
/dev/ad1: the IDE/ATA Primary Slave
/dev/ad2: the IDE/ATA Secondary Master
/dev/ad3: the IDE/ATA Secondary Slave
/dev/acd0: the first ATA CD/DVD drive detected
/dev/da0: the first SCSI hard drive, /dev/da1 the second, and so on

USB flash disks are controlled using the SCSI driver, so they will appear as /dev/daN drives as well.

Make sure ad0 is selected (which it should be by default). The rest of the page you can leave alone. Click Add to add the disk to the system. You then need to click Apply in order for the changes to take effect. You will now have a table showing you the disk you have added, including its size and a description.

Apply

In FreeNAS, the majority of steps need to be applied (which saves the configuration file to disk) by clicking the Apply button. It is normally found near the top of the page, before any tables or configuration information. If you do not apply the changes, the interface will, on the whole, remember your changes, but they will not be enacted in the system. After a reboot, unapplied changes will disappear. It is possible on some pages to make multiple operations and apply them all at the end.

Next, the disk needs to be formatted. In Disks: Format, select the disk ad0 (which you just added above). Leave everything else unchanged and click Format disk. The disk will then be formatted. The low-level output of the format command will be displayed in a box. It should end with Done!

Now the disk needs to be mounted. Go to Disks: Mount Point. Click on the + in the circle (which I shall refer to as the "add circle" from now on). Leave the Type as Disk and select the disk ad0 again. You need to type in a name; store is as good a name as any, but feel free to use whichever descriptive name you want.
Be Descriptive

In setting up and configuring your FreeNAS server, you will be called upon to invent various names for mount points, share names, and so on. Try to be as descriptive as you can without being long-winded. Temp, scratch, blob, and even zob are OK for testing, but try more meaningful names like storage1, storage60gb, or backupstorage. Don't use spaces in the names; use underscores instead, and in general the names should be no longer than 15 characters. Although filling in the description isn't mandatory in the web interface, it is worth doing.

Once you have completed the form, click Add and then apply the changes.

Sharing with Windows Machines

Now that the disk has been added, formatted, and mounted, it is time to share it on the network and give other users the ability to read and write to it. FreeNAS supports many different types of access protocol; for this quick start guide, we will only look at Microsoft's CIFS protocol, which primarily allows Windows machines (but also Apple OS X and Linux machines) to access the storage. In Services: CIFS/SMB, tick the enable box (in the title of the configuration data table). At this point, you can just about leave everything else as is, with the exception of the workgroup name. We will be leaving the authentication method as "Anonymous" here, as this is the easiest to get working and provides unrestricted read/write access to everyone. To make sure that the Windows machines are able to find the shared storage, we need to set the workgroup name on the FreeNAS server to be the same as the workgroup name of the Windows PC that will access the share. The default workgroup name for Windows Vista is WORKGROUP, but note that the default for Windows XP Home Edition was MSHOME. Now click Save and Restart. This will save the changes you have made and restart the CIFS service.

Go to the Shares tab and click on the add circle. Enter a name for the share. Repeating the name of the mount point is probably the safest policy, so in this case store, and also add a comment. Then click ... in the Path section. This will bring up a simple file system browser. The files you are seeing are on the FreeNAS server and NOT on your local PC. Click store and /mnt/store/ will appear in the little edit box. OK it and you will be taken back to the shares page. Now /mnt/store/ has been added as the path. Leave everything else as it is, click Add, and then apply the changes. So now the first hard disk of the computer is formatted, mounted, and shared to the rest of the network. Now, we will access the share from a Windows Vista machine.

Testing the Share

You can perform this test from any machine that supports the CIFS protocol, including Windows 95/98/ME, Windows 2000/XP, Apple OS X, and Linux. Here, we are going to use Windows Vista. Open the Network and Sharing Center by clicking Network on the Start menu. When the window appears, Vista will automatically scan the network for any shared network resources. When it has finished, you will see the available machines on the network, including FREENAS.

Open up the FREENAS computer and you will see store, the storage area that you configured. Double-click on that and you are now "inside" the FreeNAS server from within your Windows machine. Try dragging and dropping a few files into the store area. Then try deleting them again. To access the FreeNAS server without using the Network and Sharing Center, click Start, type freenas, and then press Enter.
This will bring up the shares available on the FreeNAS server directly.

Detailed Overview of Installation

It is time to get your hands on a working FreeNAS server, and to do that, we need to boot it up on a PC. There are several steps to this. First, you must burn a CD of the ISO image file you have downloaded. Then, you need to boot the PC from the CD; this may involve changing your computer's BIOS to make it boot from the optical drive. Then, you can configure the FreeNAS server to make some storage space available on the network. When using the LiveCD to boot FreeNAS, there are two types of storage on FreeNAS: data and configuration information. The data will be held on the hard drive of the PC, but the configuration needs to be held on a floppy disk or a USB flash disk. For this example, we will use a USB flash disk to store the configuration information.

Making the FreeNAS CD

To boot the PC into FreeNAS, you need a CD. The ISO image file you have downloaded contains all the information needed for the CD, but it needs to be written onto a physical CD. This process is often known as burning the CD, as the laser writes to the disk by heating it and marking or scorching the surface layer. You need to use a PC with a CD-RW drive and a blank CD-R disk (I recommend using a good brand-name CD-R for best results). Download the FreeNAS ISO image onto that machine. The PC with the CD writer should have some CD writing software on it (for example, Roxio Easy CD or Nero). If you are familiar with the CD writing software, go ahead and burn the ISO file to the CD-R disk. If you aren't familiar with the CD writing software, or the PC doesn't have any, then I recommend ISO Recorder. You can download it from http://isorecorder.alexfeinman.com/isorecorder.htm.

Booting from CD

Put your newly made FreeNAS CD into the CD drive of the machine on which you want to install FreeNAS, and also put the USB flash disk into a USB port. The flash disk will be used to store the configuration data. (You can also use a floppy disk. If you have both a USB flash disk and a floppy inserted, FreeNAS will save the configuration on the USB device.) Now, you need to switch on the PC. When a PC starts, it goes through what is known as the Power On Self Test sequence. Here, the PC will check the amount of memory installed in the PC and find the installed hard drives. After the checks, the PC will try to boot from one of the hard drives, the CD-ROM, the floppy disk, or even a USB flash disk. Which device the PC chooses first as its boot device can be changed by a built-in setup program. The setup program lets you modify basic system configuration settings. These settings are stored in a special battery-backed area of the computer's memory that retains the settings even when the power is switched off. During the POST sequence, there is normally a message telling you how to enter the built-in setup program. It is normally either the DEL key or F2; on some systems it is also F10. You need to enter the setup to check and/or change the first boot device to be the CD-ROM, so that the computer will boot into FreeNAS. Each PC has a slightly different setup program, so you will need to search around until you find what you need. The three most popular types of setup programs (also known as the BIOS, Basic Input/Output System) are the Phoenix setup program, the Phoenix-Award setup program, and the AMI setup program.
There are many types of BIOS setup programs, and each PC manufacturer modifies the setup program for their own use. The information below is really only a "rough guide" to help you feel your way around. Your BIOS setup program may be significantly different from the examples below. The best source of information is the manual that came with your PC or your motherboard. If you don't have one, most PC manufacturers have them available for download on their websites.

Phoenix BIOS

If your machine has a Phoenix BIOS, then normally you need to press F2 to enter the setup program. The top of the setup program has a menu that you can navigate with the left and right arrow keys; you need to select the Boot menu. On the Boot menu page, you can move up and down the available boot devices using the up and down arrow keys. You can expand and collapse sections with the + or - signs using the ENTER key. To change the boot order, you use the + and - keys. You want to make sure that the CD-ROM is the first device in the list. After you have changed the boot order list, you need to go to the Exit menu (by pressing the right arrow key) and select Exit Saving Changes. The PC will then reboot and, after the POST, it will start to boot from the FreeNAS CD.

Phoenix-Award BIOS

If your PC has a Phoenix-Award BIOS, then normally you need to press DEL to enter the setup program. Once inside, you can use the up, down, left, and right keys to navigate around the menus. Go into Advanced BIOS Features and set the First Boot Device to be CDROM by using the + and - keys. You now need to save your changes and exit. Pressing ESC will bring you back to the main menu, then select Save & Exit Setup. Often, pressing F10 will have the same effect. The PC will then reboot and, if you have made the changes correctly, it will boot from the FreeNAS CD.

AMI BIOS

The American Megatrends, Inc (AMI) BIOS normally displays a message telling you to Hit <DEL> if you want to run setup. Once inside, it is quite different from the setup programs of Phoenix or Award. Here, the Tab key is used to navigate and the arrow keys are used to change values. To go from one page to the next, press the ALT+P keys. This information should also be printed at the bottom of the BIOS setup page. You need to find the variable Boot Sequence and make sure that it is set to boot from the CD-ROM first.

First Look at FreeNAS

The boot process is in four distinct parts:

1. First, the PC will go through its POST (Power On Self Test) sequence. Here, the PC will check the amount of memory installed (which you can often see being counted on the screen) and which devices are connected (like hard drives and CD-ROMs).
2. It should then start to boot from the CD. Here, FreeBSD (the underlying OS of FreeNAS) will start to boot; this is recognizable by the simple spinning wheel (made up of the text characters |, /, -, and \, which are animated to give the appearance of spinning).
3. The third step is the FreeNAS boot menu. This will appear for just five seconds and you should just let it boot normally, which is the default.
4. The final stage is when the FreeNAS logo appears and the system boots as a FreeNAS server. You can tell when the system is fully loaded because the PC speaker will make some short but melodious beeps.

Configuring the Network

The majority of the configuration for FreeNAS is done via a web interface, but before you can use the web interface, the FreeNAS server needs to be configured for your network.
This is done via a simple text menu system, using the keyboard and monitor attached to the PC running FreeNAS. You probably only need to do this once; after that, the new network information will be saved on the USB flash disk (or floppy disk) and the server will boot into this configuration every time. If you press the SPACE bar on the FreeNAS machine, the FreeNAS logo will disappear and a simple menu will appear.

Here, you have a number of options, including options to reboot or power off the system. The first two options are about configuring the network, and they reflect the two parts of configuring the network: first, you need to choose which network card to use (option 1), and second, you need to assign it an address (option 2). If you have only one network card in your machine, then the FreeNAS server should have found it and automatically assigned it to be the LAN (Local Area Network) interface.

What If My Network Card Isn't Found?

This probably means that the network card in your machine isn't supported by FreeNAS or, more specifically, by FreeBSD. You will need to replace the card with one supported by FreeBSD. Check the FreeBSD hardware compatibility page for more information: http://www.freebsd.org/releases/6.2R/hardware-i386.html

If the console shows the interface and its address, then the network has been recognized and assigned automatically by FreeNAS.

What is a LAN IP Address?

IP stands for Internet Protocol, and it is the basic low-level language that computers use to talk to each other on the Internet. It is also used on private networks (in the office or at home) to connect different PCs and even printers to each other. An IPv4 address is made up of four numbers (0 to 255 each) and is expressed in what is known as dot notation (meaning the numbers are separated by dots). So 192.168.1.250 is an IP address; it also happens to be the default IP address for the FreeNAS server. Like email, the postal service, and the telephone, each destination (email account, mailbox, or handset) needs a unique way of being identified. This is what IP addresses do: they allow each piece of equipment on the network to have a unique identifier so that messages can be addressed to the right place on the network.

Pronouncing IP Addresses

If you need to speak to someone about an IP address, the simplest way is to speak about each digit separately, so 192.168.1.250 isn't "one hundred and ninety-two dot" but rather "one nine two dot one six eight dot one dot two five zero".

There are two ways in which you can obtain an IP address for the FreeNAS server. The first is to have the address assigned automatically via the DHCP service (Dynamic Host Configuration Protocol), and the second is to assign it manually.

What is DHCP?

The Dynamic Host Configuration Protocol (DHCP) automates the assignment of IP addresses and other IP parameters (like subnet masks and default gateways). A computer that needs an IP address will send a request to the DHCP server, and the server will reply with an IP address from a pool of addresses that have been set aside for this purpose. A DHCP server can be a PC or server (running Windows, OS X, or Linux) as well as a small device like a modern DSL modem or firewall.

The advantage of the DHCP method is that the IP address assignment all happens in the background and you don't need to worry about setting it yourself.
The disadvantages are, first, that you need an already configured and running DHCP server on your network, and second, that DHCP assigns addresses from a pool of available addresses. This means that every time the FreeNAS server boots, it is not guaranteed to have the same address as it had previously. This isn't a problem when using the CIFS protocol; however, for accessing the web interface or using protocols like FTP, it is desirable to have a stable IP address to refer to. For testing the FreeNAS server and learning about how it works, though, a DHCP-assigned address could be acceptable for now. It is actually possible to assign a fixed, permanent IP address to certain pieces of hardware, including a FreeNAS server, over DHCP, but that requires extra advanced configuration changes in the DHCP server that cannot be covered in this tutorial.

So, opting for the manual IP address, you now need to obtain two pieces of information. The first is the actual IP address for the FreeNAS server, and the second is what is known as the subnet mask. The subnet mask will also be expressed in dot notation and is normally something like 255.255.255.0. If you are in an office environment, you need to speak to the network administrator, who will be able to give you the information you need. If you are administering your own network, you need to choose an IP that isn't currently allocated to any other machine on your network (and also isn't part of the address pool of any DHCP server on your network).

Having obtained the IP address and subnet mask, you can now configure the FreeNAS server for your network. Select option 2 on the console menu. If you have chosen to have DHCP assign the address, answer yes (y) to the first question about using DHCP for IPv4. Otherwise, answer no (n). If you are setting the address manually, you can now enter the address in dot notation, for example 192.168.1.240. Next comes the subnet mask: if your subnet mask is 255.255.255.0, enter 24; for 255.255.0.0, enter 16; and for 255.0.0.0, enter 8. At this point, you can now skip the default gateway and DNS questions (by just pressing ENTER). We won't be using IPv6, so the simplest thing to do now is just answer yes to the "Do you want to use AutoConfiguration for IPv6?" question. This will cause a small delay while FreeNAS tries (and probably fails) to get an IPv6 address, but it is simpler than trying to enter the IPv6 address manually! After you have successfully set the IP address, there will be a small message on the screen inviting you to access the web interface by opening the listed URL in your web browser. If you have used DHCP, note down the URL listed. If you set the IP address manually, check that the URL listed is the same as the IP address you set, with http:// in front of it. You are now ready to access the web interface.

What is IPv4 and IPv6?

The Internet Protocol has been around since the mid-1980s, and when it was designed, the popularity of the Internet was not envisaged. The number of computers connected to the Internet is quickly growing beyond the addressing capabilities of the original protocol. As an answer to this, a new version of the IP protocol has been designed and given the name IP version 6, or IPv6 for short; the older version has taken the name IP version 4, or IPv4 for short. FreeNAS supports both versions of the Internet Protocol. In this tutorial, we will concentrate on IPv4, as it remains the more popular of the two protocols.
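As an aside, the prefix numbers entered above (24, 16, 8) are simply a count of the leading 1 bits in the 32-bit subnet mask. A small illustrative snippet (in Clojure, purely for demonstration; the prefix->mask helper is not part of FreeNAS) shows the conversion:

    (require '[clojure.string :as str])

    ;; convert a prefix length (e.g. 24) to a dotted subnet mask
    (defn prefix->mask [prefix]
      (let [bits (bit-shift-left -1 (- 32 prefix))] ; set the top `prefix` bits
        (str/join "."
                  (for [shift [24 16 8 0]]          ; take each octet in turn
                    (bit-and 0xFF (bit-shift-right bits shift))))))

    (prefix->mask 24) ;=> "255.255.255.0"
    (prefix->mask 16) ;=> "255.255.0.0"
    (prefix->mask 8)  ;=> "255.0.0.0"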
Basic Configuration

With your FreeNAS server now up and running, it is time to access the web interface. Open a web browser on a computer on the same network as the FreeNAS server and enter the URL of the FreeNAS server. This should be the same as the IP address of the server with http:// in front of it. The default URL is http://192.168.1.250

The first time you access the FreeNAS web interface, you will be asked for the username and password. The default username is admin and the default password is freenas.

FreeNAS Web Interface

You should now have the web interface in your browser. The interface is split into two main sections: down the left-hand side are the menus, and the right-hand side contains the pages for configuration. The menus are split into various sections: System, Interfaces, Disks, Services, Access, Status, Diagnostics, and Advanced. When talking about a particular menu item, we shall use the notation Subsection: Menu Item to help you find the right menu option easily. So, the Management option, which is in the Disks subsection, will be referred to as Disks: Management.

System

This section is for system-level configuration and operations. Here, for example, you can change the username and password, back up and restore the configuration data, and shut down or reboot the server.

Interfaces

Here, you can configure the network of the FreeNAS server, much like you did via the console menu. You can change the network card that is used for the web interface and assign permanent or automatic IP addresses. Be careful when you change things here, as some changes won't take effect until you reboot. If you have changed any of the addressing, you will need to access the web interface with the new IP address.

Disks

This section of the menu is for administering the disks on the server. Here, you can set up disk redundancy (RAID), control encryption, format disks, and mount the disks on the server.

Services

The various access protocols like CIFS, NFS, and FTP are controlled from here. Each service is administered individually, and by default NONE of the services are enabled, so before you can access files stored on the FreeNAS server, you need to enable at least one of these services.

Access

Most of the services offered by FreeNAS use some form of user list to control who has access and who does not. This section is for defining these users and the groups they belong to, as well as connecting the FreeNAS server to other directory services.

Status

The status menu has several reporting tools for you to see the current state of your FreeNAS server, including a general overview, memory usage, disk usage, and network usage. You can also configure emails to be sent periodically about the status of the server.

Diagnostics

The diagnostics menu contains different tools to help diagnose any problems with the FreeNAS server, including logs of all the important services and diagnostic information from the hard disks and other system modules.

Advanced

The advanced section provides some simple tools for executing commands at the operating system level and should not be used by those unfamiliar with FreeBSD.


JasperReports 3.6: Creating a Report from XML Data using XPath

Packt
24 Jun 2010
3 min read
XML is a popular data source used in many applications, and JasperReports allows you to generate reports directly from XML data. The first section of this article teaches you how to connect iReport to an XML file stored on your PC. In the second section of this article by Bilal Siddiqui, author of JasperReports 3.6 Development Cookbook, you will create a report from data stored in an XML file. In order to process an XML file and extract information from it, JasperReports uses XPath, a popular query language for filtering XML data, so you will also learn how to use XPath expressions for report generation.

Connecting to an XML datasource

This section teaches you how to connect iReport to an XML file stored on your PC.

Getting ready

You need an XML file that contains report data. The EventsData.xml file is contained in the source code download for this article (chap4). Unzip the source code file and copy the Task2 folder from the unzipped source code to a location of your choice.

How to do it...

1. Run iReport; it will open showing a Welcome Window, as shown in the following screenshot:
2. If you have not made any database connection so far in your iReport installation, you will see an Empty datasource selected in a drop-down list just below the main menu. Click on the Report Datasources icon, shown encircled to the right of the drop-down list in the following screenshot:
3. A new window named Connections / Datasources will open. This window lists an Empty datasource as well as the data sources you have made so far. Click the New button at the top right of the Connections / Datasources window. This will open a new Datasource selection window, as shown in the following screenshot:
4. Select XML file datasource from the datasource type list. Click Next. A new window named XML file datasource will open, as in the following screenshot:
5. Enter XMLDatasource as the name for your new XML datasource connection in the Name text box, as shown in the following screenshot:
6. Click the Browse button beside the XML file text box to browse to the EventsData.xml file located in the Task2 folder that you copied in the Getting ready section. Click the Open button, as shown in the following screenshot:
7. Select the Use the report XPath expression when filling the report option in the XML file datasource window, as shown in the following screenshot:
8. Leave the other fields at their default values. Click the Test button to test the new XML datasource connection. You will see a Connection test successful message dialog.
9. Click the Save button to save the newly created connection. A Connections / Datasources window will open showing your new XML datasource connection set as the default connection in the connections list, as shown highlighted in the following screenshot:
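To give a feel for what an XPath expression does when iReport fills a report, here is a small illustrative snippet using the JVM's standard javax.xml.xpath API (in Clojure, purely for demonstration; the record layout shown is hypothetical, not the actual structure of EventsData.xml):

    (import '(javax.xml.xpath XPathFactory XPathConstants)
            '(org.xml.sax InputSource)
            '(java.io StringReader))

    ;; hypothetical sample mirroring a possible EventsData.xml layout
    (def sample-xml
      "<events>
         <event><name>Concert</name><city>Lahore</city></event>
         <event><name>Seminar</name><city>Karachi</city></event>
       </events>")

    ;; select the name of every event held in Lahore
    (let [xpath (.newXPath (XPathFactory/newInstance))
          nodes (.evaluate xpath
                           "/events/event[city='Lahore']/name/text()"
                           (InputSource. (StringReader. sample-xml))
                           XPathConstants/NODESET)]
      (for [i (range (.getLength nodes))]
        (.getNodeValue (.item nodes i))))
    ;=> ("Concert")

The bracketed predicate [city='Lahore'] is the filtering step: only the matching records feed the report, which is exactly what the report XPath expression does in iReport.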


Working with the Report Builder in Microsoft SQL Server 2008: Part 1

Packt
28 Oct 2009
16 min read
The Microsoft SQL Server 2008 Reporting Services Report Builder 2.0 tool can be installed from a standalone installer available at this Microsoft site: http://download.microsoft.com/download/a/f/6/af64f194-8b7e-4118-b040-4c515a7dbc46/ReportBuilder.msi. The same file is also available from a collection of download files when you access the Microsoft SQL Server 2008 Feature Pack, October 2008, at http://www.microsoft.com/downloads/details.aspx?FamilyId=228DE03F-3B5A-428A-923F-58A033D316E1&displaylang=en.

Report Builder overview

In the present version of SQL Server 2008 [Enterprise Evaluation edition] there are two Report Builders available: Report Builder 1.0, which has remained a program that can be launched from the Report Manager, and the new Report Builder 2.0, which is a standalone report authoring tool that needs to be launched independently. Although Report Builder 1.0 can access Report Models built with Visual Studio 2008 and the Report Manager, it cannot be used to create reports using those models. It also does not work with reports generated by Visual Studio 2008/BIDS/Report Builder 2.0. The errors can be summarized as follows:

When you try to access the Report Server 2008 from the link provided on the Report Builder 1.0 interface, you get the following error message:
Specifying credentials in a URL is not supported

When you try to open a report created using VS2008/BIDS/Report Builder 2.0 using the Open Report... and Open File... navigational items in Report Builder 1.0, you get the following error message:
System.IO.StreamReader: The Report element was not found

Report Builder 1.0 allows you to access Report Models created with VS2008/BIDS/Report Manager and even allows you to create a report in design view, but this report cannot be processed on the Report Server. If you try to do so, you get the following error message:
MemoryStream length must be non-negative and less than 2^31-1-origin. Parameter name: offset; Remote GDI stream version: ?. Expected version: 11.0.1

In this article, the Report Builder 2.0 interface will be described along with the new features incorporated into this version. Report Builder 2.0 is admirably suited to address all items in the Report Definition Language of 2008. One of the important features of Report Builder 2.0 is the empowerment it gives business users to create ad hoc reports using the Report Models built on the databases they use. In this article, you will be learning mostly about the Report Builder 2.0 interface and working with it to create or modify reports. It may be noted that Report Builder generates 2008-compliant RDL files, as described in http://download.microsoft.com/download/6/5/7/6575f1c8-4607-48d2-941d-c69622e11c32/RDL_spec_08.pdf, and therefore cannot work with reports generated using 2005 technology.

Report Builder 2.0 user interface description

Report Builder is a report authoring tool, and the basic procedure for authoring a report consists of the following steps:

1. Report planning
2. Connecting to a source of data
3. Extracting a dataset from the source
4. Designing the report and data binding
5. Previewing the report

Although deploying the report is not included in the above, Report Builder can deploy the report as well. It is not always necessary to deploy a completed report, as any part of a report definition file can be deployed. This makes modifying a report on the server very flexible.
In the following sections, the various parts of the Report Builder interface will be described, starting at the very top and going to the bottom of the interface.

The menu for file operations

Report Builder 2.0 can be accessed from Start | All Programs | Microsoft SQL Server 2008 Report Builder | Report Builder 2.0. This brings up the Report Builder 2.0 interface, as shown, with the design area containing two icons: Table or Matrix and Chart. Each of these will launch a related wizard that will step you through the various tasks. The Report Builder 2.0 interface is very similar to Office 2007. More than one instance of Report Builder can be launched. At the very top of the screen you have the undo and redo controls as well as a save icon. When you click on the save icon, the Save as Report window gets displayed. Here you provide a name for the report. The default save extension is *.rdl, and the report will be saved to the report server. It may also be persisted to a folder on your machine.

Clicking on the Office Button (top left) opens a drop-down window shown in the following screenshot:

In this window, you can carry out a number of tasks such as creating a new report, opening an existing report, saving a report, and saving a report with a different name. The Save button saves it to the default location seen earlier, and Save as invokes the same window to save the report with a different name, displaying the report server instance as the Save to location. The Recent Documents pane shows the more recent reports created with this tool. New allows you to create a new report. When you click on Open, the Open Report window gets displayed with the default location http://Hodentek2:8080/ReportServer_SANGAM/My Reports. You will also notice the message: This folder is not available because the My Reports feature is not enabled on the computer. The Open Report window also allows you to look for reports with the extension .rdl. Therefore, unless the My Reports feature is enabled, this window is unusable. This is supposed to be possible from Report Manager, but there are no controls in Report Manager that would do this. An alternative was suggested by one of the MSDN forum moderators (see http://social.msdn.microsoft.com/forums/en-US/sqlreportingservices/thread/6c695160-29e8-4185-be6d-5fe027a6975c/). Hands-on exercise (Part 2) will describe how you may enable My Reports. The idea of My Reports is similar to My Documents, where each user can keep his own reports.

When the Options button (in the previous screenshot) is clicked, it opens the Report Builder Options window with two tabbed pages, Settings and Resources, shown as follows: Here you can view, as well as modify, Report Builder settings. The defaults are more than adequate to work with the examples in this book. Clicking on the Resources button brings up an interesting window that enables you to interact with Microsoft regarding SSRS activities, concerns, community, and so on. If you are serious about Reporting Services, these are very valuable links. The About button, when clicked, provides Report Builder version information.

The ribbon

The main menu consists of the Home, Insert, and View menu items, which are part of the "ribbon". The ribbon, introduced by Microsoft in Office 2007, is actually a container for other toolbar items. It is the replacement for the classic menus and toolbars and is supposed to be more efficient and discoverable by the user.
In fact, you see a lot more on the "ribbon" than in the classic menu.

Home

The next figure shows the Home menu with its toolbar arranged from left to right and divided into sections. The Run toolbar item with the title Views, when clicked, runs the report open in the design view. (In fact, the report can be run even without a report open in the design view; the result would be the current date and time displayed in the center of the screen of an untitled report that has ExecutionTime as its only item.) The Font, Paragraph, Border, and Number toolbar sections become enabled when parts of a report need editing. The formatting of textboxes in the report, the formatting of numbers in the report, and the alignment of components in the layout can all be independently managed using these toolbar items.

Insert

When you click on the Insert menu item on the "ribbon", the tabbed page for this item is displayed, as shown in the following screenshot: It has four sections: Data Regions, Report Items, Subreports, and Header & Footer. These are all the normal items that are used either individually or together to make up a report. There can be more than one data region in a report.

Data Regions

In the Data Regions section you have both the Tablix (Table, Matrix, and List) and the graphic controls that can be bound to data: the Chart and the Gauge. Gauge is new in SQL Server Reporting Services 2008. The chart and gauge implementations are an offshoot of collaboration with Dundas (http://www.dundas.com/). Report Builder is built in such a way that the dataset must be defined before any of the data regions are added to the report body. For the purpose of describing the various data regions in this section, it is assumed (in order to get the screenshots shown here) that a dataset has been defined and the default wizards on the design surface have been removed.

Table

The Table is meant for displaying data retrieved from a database, either as detail rows, in groups, or a combination of both (some grouped and some detailed). It has a fixed number of columns, which can be adjusted at design time, and the table length expands to accommodate the rows. Data can be grouped by a single field or by multiple fields, and the expression designer can be used in grouping as well. The grouping is carried out by creating row groups. Static rows can be added for row headings (labels) and totals, and aggregates for groups can be added. Both detailed data and grouped data can be hidden initially, and the user can interactively reveal the data needed through drill-downs.

When you click on Insert | Table | Insert Table and then click on the design surface, you can add a table to the design area. The table appears with handles to adjust its dimensions. The table can be dragged to any other location on the design surface (the body of the report) as well. After placing the table, which by default has three columns and two rows, when you click on any other part of the design area you will see the table as shown. When you hover over the cell marked Data on the table, you will see a little icon. This icon is a minimized version of the dataset fields. The grayed-out feature that surrounds the table indicates the position of the rows and columns of the table. It also shows other features, such as whether a row is a detail row or a group; in the case of groups, the feature schematically indicates the nesting within a group as well.
When you want to increase the size of a column or a row, you can drag the double-headed arrow that gets displayed when your cursor is placed between two columns or between two cells. When you click on the dataset icon in the Data cell, you get a drop-down list containing the fields in the dataset. You can choose any of the fields to occupy the cell you clicked, and the corresponding header will be added to the table. In this particular dataset there are nine fields, and you can choose any of them to occupy the cell.

When you right-click on a cell, a drop-down menu becomes available. It can be used for the following:

- Work with the highlighted textbox (each cell of the table is a textbox), including copying, cutting, deleting, and pasting contents.
- Work with the properties of the textbox.
- Populate the textbox with an expression using the expression builder. The expression builder gets displayed when fx Expression is clicked.
- Use Select to select the body or the Tablix.
- Insert a new column or a new row. Columns can be added to the right or the left of the clicked cell, and rows can be added above or below the clicked cell.
- Delete columns and rows.
- Add a group. Both row and column groups can be added.

When you click on the properties of the textbox, the Text Box Properties window is displayed. The textbox has several properties, which are arranged on the left as a list, with each item having its own page. The Help button on any of the pages will take you directly to the definition of the properties and is extremely useful. In the General page, you can make changes to the elements in the Name, Value, and Sizing options. The Value is one that you choose from among the column values (from the drop-down) of the dataset. You may also add text for the ToolTip, which will be displayed when the report is generated and this cell is hovered over in the report. Alternatively, you can set the Value and ToolTip using fx, the button that brings up the Expression window.

In the Number page, you can set the number and date formatting options for a cell that contains a number or a date. This is what you normally would find in most Microsoft products, such as Excel and Access. In the Alignment page, you can choose the vertical and horizontal alignments as well as the padding of the textbox content from the edges of the cell. Similarly, the Font and Border properties are the same ones you find in most Microsoft products. The Fill property lets you add or change the background color of the report as well as add a graphic element. The graphic element can be embedded, external, or originate from a database (being one of the fields accessed). Expressions can be developed to set a desired color for the Fill. The Visibility of the textbox can be any of Show, Hide, or Show or Hide based on an expression. In each of these cases, the visibility can be toggled when another (chosen) table cell is clicked. This page also gives access to the Expression window, which is similar to the MS Access expression builder. The Interactive Sorting page allows you to define interactive sorting options on the textbox.

Matrix

The Matrix provides functionality similar (roughly speaking, rows against columns) to cross-tab reports in MS Access (http://aspalliance.com/1041_Creating_a_Crosstab_Report_in_Visual_Studio_2005_Using_Crystal_Reports.all) and Pivot Table dynamic views (http://www.aspfree.com/c/a/MS-SQL-Server/On-Accessing-Data-From-An-OLAP-Server-Using-MS-Excel/3/).
The matrix should have at least one row group and one column group. The matrix can expand both ways to accommodate the data: horizontally for column groups and vertically for row groups. The matrix cells (the intersections of rows and columns) display summary information (aggregates). When you click on Insert Matrix in the Insert menu and drop it on the design area of Report Builder 2.0, it gets displayed as shown in the following figure: Now if you click inside the boundary of the (2x2) empty matrix, you will see more features of the matrix, as shown in the following screenshot. The basic elements are the ColumnGroup (Column Groups), the RowGroup (Row Groups), and the Data. The group information is also displayed, as shown by the overlaid lines pointing to them. There needs to be a minimum of one row group and one column group for the matrix, and there can be a hierarchy of column and row groups. The row and column group cells have their own properties, which can be displayed when you right-click on them, as shown in the next screenshot for the row group. When you right-click on the cell marked Rows, the following drop-down menu pops up. In addition to the properties that you can set for the textbox in that cell, you have additional submenu items that work with the grouping and totaling. These are part of representing data in a matrix. Each of the Tablix cells for the Rows and Columns has these additional submenu items, which are shown here for the Rows; similar ones apply for the Columns as well. These are useful when you want to create nested groups. With the Matrix design interface in SQL Server 2005, this would not have been possible.

Add Group
    Row Group
        Parent Group...
        Child Group...
        --------------------
        Adjacent Above
        Adjacent Below
Row Group
    Delete Group
    Group Properties
Add Total
    Before
    After

In addition to the above, each of the Rows and Columns cells has the following items as well. These specify how new columns and rows are inserted with reference to the current cell, as shown. The differences are due to the geometrical positions that are allowed for the new columns or rows.

For the "Columns" cell:

Insert Column
    Inside Group-Left
    Inside Group-Right
    ------------------
    Outside Group-Left
    Outside Group-Right
Insert Row
    Inside Group-Above
    Inside Group-Below
    ------------------
    Outside Group-Above

For the "Rows" cell:

Insert Column
    Inside Group-Left
    Inside Group-Right
    ------------------
    Outside Group-Left
Insert Row
    Inside Group-Above
    Inside Group-Below
    ------------------
    Outside Group-Above
    Outside Group-Below

Besides using a cell as a starting point, you could also use the rows as a whole or the columns as a whole to add further structure, as shown in the next figure. Of course, you need to use the proper submenu option to arrive at a particular matrix structure. Clicking at the indicated points would let you choose the structure you want for your matrix. If you click at the location shown for the Tablix, you could choose to delete the whole matrix. The Tablix graphical arrangement gives you the maximum flexibility in extending the matrix in two dimensions.

List

The list data region repeats for each row of data. The list element provides a single container for the data, which can be used to generate what are called free-form reports. In this kind of report there is no rigid structure, such as a table, for the data. You can also place a list inside another list, or even a chart inside a list. You can drag a column from a dataset and drop it into the list.
You can work with the list using the properties of the Rectangle it contains, as well as its Tablix properties. As described earlier, the design interface is very flexible, and you can leverage all of the features provided by the Tablix structure, such as displaying details and adding groups, either independent or nested. The properties pages described earlier allow you to sort and filter grouped data. When you drop a List on the design surface, you will see just a single cell, as shown. You can change its dimensions to suit your needs. When you click on the List, you can access its handles as shown. When you add a List, there is one column and one row (just one cell). This can be extended in both directions by choosing the appropriate submenu items, which can be displayed by right-clicking on the handles as shown:

Drupal Site Configuration: Site Information, Triggers and File System

Packt
08 Sep 2010
9 min read
(For more resources on Drupal, see here.)

Not everything that is available in Drupal's Configuration section is discussed in this article. Some settings are very straightforward and really don't warrant more than perhaps a brief mention.

Before we start

It is sensible to make note of a few important things before getting our hands dirty. Make it second nature to check how the changes made to the settings affect the site. Quite often, settings you modify or features you add will not behave precisely as expected, and without a prudent approach to making changes, you can sometimes end up with a bit of a mess. Changes to the site's structure (for example, adding new modules) can affect what is and isn't available for configuration, so be aware that it may be necessary to revisit this section. Click on Configuration in the toolbar menu. You should see something like the following screenshot: A quick point to mention is that we aren't giving over much space to the final option—Regional and Language. This is because the settings here are very basic and should give you no trouble at all. There is also an online exercise available to help you with date types and formats if you are interested in customizing these. Let's begin!

Site information

This page contains a mixed bag of settings, some of which are pretty self-explanatory, while others will require us to think quite carefully about what we need to do. To start with, we are presented with a few text boxes that control things like the name of the site and the site slogan. Nothing too earth-shattering, although I should point out that different themes implement these settings differently, while some don't implement them at all. For example, adding a slogan in the default Garland theme prints it after the site name, as shown in the following screenshot: Whereas the Stark theme places the slogan beneath the site name: Let's assume that there is a page of content that should be displayed by default—before anyone views any of the other content. For example, if you wanted to display some sort of promotional information or an introduction page, you could tell Drupal to display that using this setting. Remember that you have to create the content for this post first, and then determine its path before you tell Drupal to use it. For example, we could reference a specific node with its node ID in node/ID format, but equally, a site's blogs could be displayed if you substitute blog for node/x. Once you are looking at the content intended for the front page, take note of the relative URL path and simply enter that into the text box provided. Recall that the relative URL path is the part of the page's address that comes after the standard domain, which is shared by the whole site. For example, setting node/2 works because Drupal maps this relative path to http://localhost/drupal/node/2. The first part of this address, http://localhost/drupal/, is the base URL, and everything after that is the relative URL path. Sometimes the front page is a slightly more complex beast, and it is likely that you will want to consider Panels to create a unique front page. In this case, Panels settings can override this setting to make a specific panel page the front page. The following settings allow you to broadly deal with two common site errors that may crop up during a site's normal course of operation—from the perspective of a site visitor.
In particular, you may wish to create a couple of customized error pages that will be displayed to users in the event of a "page not found" or "access denied" problem. Remember that there are already pretty concise pages supplied by default. However, if you wish to make any changes, then the process for creating an error page is exactly the same as creating any other normal page. Let's make a change very quickly. Click on Add new content in the Shortcuts menu and select Basic page. Add whatever content you want for, say, the Page not found! error: Don't worry about the host of options available on this page—we will talk about all of this later on. For now, simply click on the Save button and make note of the URL of the page when it is displayed. Then head back to the Site information page, add this URL to the Default 404 (not found) page dialog, and then click on the Save configuration button: If you navigate to a page that doesn't exist, for example, node/3333, you should receive the new error message as follows: In this example, we asked Drupal to find a node that does not exist yet, and so it displayed the Page not found! error message. Since Drupal can also provide content that is private or available only to certain users, it also needs the access denied error to explain to would-be users that they do not have sufficient permissions to view the requested page. This is not the same as not finding a page, of course, but you can create your own access denied page in exactly the same way. Finally, you will need to specify how often cron should run in the Automatically run cron drop-down at the bottom of the Site information page. Cron jobs are automated tasks (of any type—they could be search indexing, feed aggregation, and so on) that should run at specified intervals. Drupal uses them to keep itself up-to-date and ensure optimal operation. Drupal uses web page requests to initiate new cron runs once the specified interval has elapsed; if your website does not get visited regularly, cron itself cannot run regularly. Running cron every few hours is reasonable for the vast majority of sites. Setting it to run too frequently can create a huge load on the server, because each time cron is run, all sorts of scripts are updating data, performing tasks, and consuming server resources. By the same token, run cron too infrequently and your site's content can become outdated or, worse, important module, theme, and core updates can go unnoticed, among other things.

Actions and triggers

Quite often it is useful to have Drupal automatically perform a specified task or action in response to a specific event. An action, in the Drupal sense, is one of a number of tasks that the system can perform, and these usually relate to e-mailing people or acting upon user accounts or content. There are a number of simple actions available, as well as a few more advanced ones that can be set up by anyone with sufficient permissions. To configure actions, navigate to Actions in SYSTEM under the Configuration menu in the toolbar: Default simple actions cannot be modified, so we will ignore these for the moment and focus on creating a new, advanced action. Set up a new Send e-mail action by selecting it from the drop-down list and clicking on the Create button, as shown in the preceding screenshot.
This brings up the following page, which can be set according to how this specific action will be used: It should be clear that the intention of this e-mail is to notify the staff/administration of any new site members. The Label field is important in this respect, because this is how you will distinguish this action from the other ones that you may create in the future. Make the label as accurate, meaningful, and concise as possible to avoid any potential confusion. Also notice that there are several placeholder variables that can be inserted into the Recipient, Subject, and Message fields. In this instance, one has been used to inform the e-mail recipient of the new user name as part of the message. A click on the Save button adds this new action to the list, where it can be modified or deleted accordingly: So far so good—we have set up the action, but this in itself does absolutely nothing. An action cannot do anything unless there is a specific system event that can be triggered to set it off. These system events are, perspicaciously enough, called triggers. Drupal can look for any number of triggers and perform the actions that are associated with them—this is how actions and triggers work together. Triggers are not part of the topic of Drupal configuration; however, we will discuss them here for completeness, since actions and triggers are integrally linked. Triggers are not enabled by default, so head on over to the Modules section and enable the Triggers module. With the module enabled, there will now be a new Triggers link on the Actions page. Clicking on this brings up the following page: Triggers are divided into five different categories, each providing a range of triggers to which actions can be attached. Assigning a trigger is basically selecting an action to apply from the drop-down list of the relevant trigger and clicking on the Assign button. To continue with our example, select the USER tab from the top of the Triggers overlay and, in the TRIGGER: AFTER CREATING A NEW USER ACCOUNT box, select the newly defined action, as shown in the following screenshot: Click on the Assign button, and the newly assigned action will show up in the relevant trigger box: In the same way, a large number of actions can be automated depending on the system event (or trigger) that fires. To test this out, log off and register a new account—you will find that the New User Alert e-mail is dutifully sent out once the account has been registered (assuming your web server is able to send e-mail).

Showing your Google calendar on your Joomla! site using GCalendar

Packt
21 Oct 2010
3 min read
To use a Google calendar with GCalendar, you must make the Google calendar publicly available. To do so, select the Make this calendar public checkbox on the Share this calendar tab of the Settings screen.

Getting ready...

The GCalendar extension allows you to display your Google calendar inside your Joomla! site. Visit http://g4j.laoneo.net/content/extensions/download/cat_view/20-joomla-15x/21-gcalendar.html and download the latest version of the GCalendar suite. Once downloaded, extract the ZIP package and you will get installation files for one component, three modules, and two plugins. Install these from the Extensions | Install/Uninstall screen.

How to do it...

After installation, follow these steps: From the Joomla! administration panel, select Components | GCalendar. This shows you the Google calendar manager screen. Click on the Tools link of the Google calendar manager screen, and then click on the System check link. This checks the system requirements and connectivity with Google, and shows you the results. Resolve any issues raised by the system check. Then click on the GCalendars link, followed by clicking on Please login to connect to Google data. You will see the Google authorization page. Click on the Grant access button on the Google accounts page. You will see the list of calendars under Google Calendar. If you are not already logged in to Google, the Google login page will be shown first; enter the username and password for your Google account (or create one if you don't have an account) and click on the Sign In button. Select the calendars you want to use on your Joomla! site, and click on the Add button in the toolbar. The calendars are added and you will see them in the list. Now select Menus | Main Menu, and click on the New button on the Menu Item Manager: [mainmenu] screen. This will show you the Menu Item: [New] screen. Select Internal Link | GCalendar | GCalendar. It will show the Menu Item: [New] screen, as shown in the following screenshot: Type a Title for the menu, and select Yes in the Published field. From the Parameters (Basic) section, select a calendar, the default view, the day the week starts on, and the date format. You can also add text to display before and after the calendar. Then click on the Save button in the toolbar. Preview the site's frontend and click on the menu item. This shows the Google calendar. As you can see, you can change the view of the calendar by clicking on any of the view icons shown above the calendar. You can also go to a specific date or month.

Summary

Google Calendar is gaining popularity. If you are a user of Google Calendar, you already know how flexible it is. In this article, you saw how to display your Google calendar on your Joomla! site. In the next article, we will add a booking system for events.

Further resources on this subject: Adding an Event Calendar to your Joomla! Site using JEvents; Joomla! 1.5 Top Extensions: Adding a Booking System for Events; Joomla! 1.5 Top Extensions for Using Languages; Manually Translating Your Joomla! Site's Content into Your Desired Language
Creating a Layout Example in Inkscape

Packt
09 Nov 2010
6 min read
Inkscape 0.48 Essentials for Web Designers — Use the fascinating Inkscape graphics editor to create attractive layout designs, images, and icons for your website:

- The first book on the newly released Inkscape version 0.48, with an exclusive focus on web design
- Comprehensive coverage of all aspects of Inkscape required for web design
- Incorporate eye-catching designs, patterns, and other visual elements to spice up your web pages
- Learn how to create your own Inkscape templates in addition to using the built-in ones
- Written in a simple illustrative manner, which will appeal to web designers and experienced Inkscape users alike

Designing the background

In Inkscape, since your canvas is white, it looks like your web page might have a white background. It does not: without a background image, the background is currently transparent (0 percent opacity). So we need to start by determining which colors to use for the background. You can get really creative here, as you don't need to use just one color! Create a header background color, one for the main body, and then another for the footer area. Let's start by creating a simple background based on our Basic Layout layer.

Making the header background

To start, we're going to create a simple header background—a rectangle with a simple divider line and drop shadow. Here are the detailed steps to make this header background: With your new document open and the newly created guides viewable, first lock the Basic Layout layer (click the lock icon) and hide its elements (click the eye icon). Now create a Background layer. Draw a rectangle that fills the header and navigational areas. We're going to create only one background for both of these, as the navigation will be in the same color and area as the header. Choose a fill color and no stroke (border). If you decide you want to change either of these options, open the Fill and Stroke dialog box (from the main menu select Object and then Fill and Stroke, or use the keyboard shortcut Shift + Ctrl + F) and adjust the colors accordingly. Want to add in a gradient? Click the Gradient icon, and adjust how you want your gradient to appear. By default, Inkscape applies an alpha setting of 0 to the end stop of the gradient, which makes it fully transparent. This means that, in the above settings, the right side of the gradient would be transparent. Click Edit to change this setting. From the Linear gradient field, choose Add stop. Change the alpha opacity setting (A) to a solid color—either move the slider to the left side of the screen or change the value to 255. Next, change the solid color value. In this example, we used white and changed the R, G, B values to achieve the results. For this example, the gradient goes from a bit darker green on the left to a lighter shade on the right side. Next, let's add a simple drop shadow. From the main menu select Filters, Shadows and Glows, and then Drop Shadow. For this example, set the Blur radius to 10 px and the Opacity to 20%, with a vertical offset of 5 px, and click Apply. Close the drop shadow box and then save your file. Your header is complete! Now let's add to this to create the main content background. To change the gradient orientation, you can drag the outer two gradient stop nodes, indicated by the square and circle handles on the object. You can also add more gradient stops and edit their transparency (A) values and colors to adjust to your liking.

Building the main body background

For the main part of the web page sample, we're using a white box as a background with a similar drop shadow.
Here's how to add this to your Background layer: Draw a rectangle that fills the entire main content area. This includes the entire middle portion of your web page, and covers all the 'sections' between the header and the footer in the basic layout. The example above shows white as the fill color and no stroke (border). Let's put a drop shadow on this box as well. From the main menu select Filters, Shadows and Glows, and then Drop Shadow. Adjust the settings to be the same as the previous drop shadow so that they match (Blur radius of 10 px, Opacity of 20%, and a vertical offset of 5 px) and click Apply. Close the drop shadow box and then save your file. The main content background is complete. Lastly, we need to create the footer background.

Creating the footer background

Creating the footer background is the last step in this process. Very much like the header, we'll follow these steps: Duplicate the header background. Select the header and, from the main menu, choose Edit and then Duplicate. Drag the duplicate rectangle down to the footer area of the page and adjust the size so it fits within the defined footer area. Notice that, since the object was duplicated, all formatting—gradients and drop shadows—was maintained. Save your file. Now that your footer background is complete, so is the entire web page background. Let's move on to details.

Designing the header

Now that we have the entire background created, it's time to add details to the header area, like a logo and company name. Here are the steps to add these basic details to our example design: Before any more work is done, it makes sense to lock the background layer. We don't want any items shifting, or any elements being selected accidentally, when we are working on other layers. To lock the background layer, click the lock icon in the Background layer dialog box. Create a new layer and name it Logo. To create a new layer, you can use the Shift + Ctrl + N keyboard shortcut. Within the Logo layer, create and/or import the logo and place it on the header background. If you are importing a graphic that has already been created, it's as simple as clicking File and selecting Import. Find the logo file and use the selection tool to place it correctly in the header area. Lock the Logo layer, and then create a new layer and name it Title. Within this layer, use the Create and Edit Text tool to type in the business name, and then place it on the header background. Save your file. Next up is creating the navigation toolbar.

Making AJAX Calls using jQuery

Packt
19 Apr 2011
7 min read
AJAX (Asynchronous JavaScript and XML), a term coined by Jesse James Garrett of Adaptive Path, stands for a combination of different technologies that help to communicate seamlessly with the server without the need for a page refresh. AJAX applications involve the following technologies:

- JavaScript for running the core AJAX engine
- The XMLHttpRequest object to communicate with the server
- Web presentation using XHTML and CSS or XSLT
- The DOM to work with the HTML structure
- XML and JSON for data interchange

The XMLHttpRequest object is used for posting HTTP/HTTPS requests to the server. Most modern browsers have a built-in XMLHttpRequest object. JSON (JavaScript Object Notation) is a lightweight data interchange format and is increasingly used in AJAX applications today. It is basically a collection of name/value pairs. In classic web applications, the client submits data to the server for processing and the server sends back refreshed content to the client. This causes a visible page refresh, and the web user must wait for a page reload before further interaction with the web application. AJAX, however, eliminates the need for an explicit page refresh by communicating with the server behind the scenes. It uses the power of the XMLHttpRequest object to post a request to the server. Thus, the backend communication with the server is transparent to the end user. In addition, using AJAX, only the data that needs to be updated can be selectively refreshed on the page. The previous figure shows the traditional model for web applications (left) compared to the AJAX model (right), illustrating the basic difference between traditional and AJAX-enabled applications. In traditional web applications, the client sends requests directly to the server and waits to receive the corresponding response. In AJAX-based applications, this is replaced by a JavaScript call to the AJAX engine, which sends the request asynchronously to the server. As a result, web users' interaction with the application is not interrupted and users can continue to work with the application. The jQuery library provides many methods for working with AJAX. In this article, we will explore the use of the following methods:

- $.ajax(settings): This is a generic low-level function that helps to create any type of AJAX request. There are a number of configuration settings that can be applied using this function to customize an AJAX call. It helps to set the type of HTTP request (GET/POST), the URL, parameters, and the data type, as well as the callback functions to execute on successful/unsuccessful invocation of the AJAX call.
- $.ajaxSetup(options): This method helps to define default settings for making AJAX calls on the page. The setup is done one time, and all subsequent AJAX calls on the page are made using the default settings.

Getting started

Let's start by creating a new ASP.NET website in Visual Studio and naming it Chapter6. Save the jQuery library in a script folder js in the project. To enable jQuery on any web form, drag-and-drop the library to add the following to the page:

<script src="js/jquery-1.4.1.js" type="text/javascript"></script>

Now let's move on to the recipes, where we will see how jQuery can be used to make AJAX calls to ASP.NET code-behind. Basically, in this article, we will explore three possible approaches of communicating with the server:

- Using page methods
- Using web services
- Using HTTP handlers

So, let's get started.
Setting up AJAX with ASP.NET using jQuery

In this recipe, we will set up our ASP.NET web page to execute an AJAX call. We will also see how to set the default AJAX properties.

Getting ready

Create an HTML file Content.html in the project and add the following contents:

<html>
<head>
  <title>Content Page</title>
  <link href="css/content.css" rel="stylesheet" type="text/css" />
</head>
<body>
  <p>
    <table cellpadding="3" cellspacing="3" id="box-table-a">
      <tr><td>Title</td><td>Author</td><td>Genre</td></tr>
      <tr><td>Alchemist</td><td>Paulo Coelho</td><td>Fiction</td></tr>
      <tr><td>Heart of Darkness</td><td>Joseph Conrad</td><td>Classic</td></tr>
      <tr><td>David Copperfield</td><td>Charles Dickens</td><td>Classic</td></tr>
      <tr><td>Have a Little Faith</td><td>Mitch Albom</td><td>Fiction</td></tr>
    </table>
  </p>
</body>
</html>

Add a new web form Recipe1.aspx to the current project. Add a button control to the web form to trigger the AJAX request:

<asp:Button ID="btnSubmit" runat="server" Text="Click Here" />

Thus, the ASPX markup of the web form is as follows:

<form id="form1" runat="server">
  <div align="center">
    <fieldset style="width:400px;height:200px;">
      <div id="contentArea">
        <asp:Button ID="btnSubmit" runat="server" Text="Click Here" />
      </div>
    </fieldset>
  </div>
</form>

On page load, the form will appear as shown in the following screenshot: When the button is clicked, we will retrieve the contents of the HTML file using AJAX and display it on the page.

How to do it…

In the $(document).ready() function of the jQuery script block, call the ajaxSetup() method to set the default properties of all AJAX calls on the page:

$.ajaxSetup({

Turn off the cache so that the contents are not cached by the browser:

cache: false,

Set the data type of the response. In this case, since we are going to load the contents from the HTML file, the dataType is HTML:

dataType: "html",

Define the callback function if the AJAX request fails. The callback function takes three parameters: the XMLHttpRequest object, the error status, and an exception object:

error: function(xhr, status, error) {
  alert('An error occurred: ' + error);
},

Define the global timeout in milliseconds:

timeout: 30000,

Set the type of HTTP request (GET/POST):

type: "GET",

Define the callback function to be called before the AJAX request is initiated. This function can be used to modify the XMLHttpRequest object:

beforeSend: function() {
  console.log('In Ajax beforeSend function');
},

Define the function to be called when the AJAX request is completed:

complete: function() {
  console.log('In Ajax complete function');
}
});

Now, having set the default properties in the previous code block, we will invoke the actual AJAX call on clicking the button control on the form. Attach a handler for the click event of the button control:

$("#btnSubmit").click(function(e) {

Prevent default form submission:

e.preventDefault();

Initiate the AJAX call using the .ajax() method:

$.ajax({

Specify the URL to send the request to. In this case, we're sending the request to the HTML file (Content.html, created above):

url: "Content.html",

Define the callback function for successful execution of the AJAX call:

success: function(data) {

The HTML response from the server is received in the data parameter of the preceding callback function.
Clear the contents of the containing div area to remove the button control and append the received HTML response:

$("#contentArea").html("").append(data);
}
});
});

Thus, the complete jQuery solution is as follows:

<script language="javascript" type="text/javascript">
  $(document).ready(function() {
    $.ajaxSetup({
      cache: false,
      dataType: "html",
      error: function(xhr, status, error) {
        alert('An error occurred: ' + error);
      },
      timeout: 30000,
      type: "GET",
      beforeSend: function() {
        console.log('In Ajax beforeSend function');
      },
      complete: function() {
        console.log('In Ajax complete function');
      }
    });
    $("#btnSubmit").click(function(e) {
      e.preventDefault();
      $.ajax({
        url: "Content.html",
        success: function(data) {
          $("#contentArea").html("").append(data);
        }
      });
    });
  });
</script>

How it works…

Run the web form. Click on the button to initiate the AJAX request. You will see that the page content is updated without any visible page refresh as follows:
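The recipe above fetches a static HTML file, but the same $.ajax() call can equally be pointed at server-side code. As a rough sketch of the page-method approach mentioned at the start of this article (the Recipe2.aspx page and its GetMessage web method are hypothetical illustrations, not part of this recipe), such a call might look like this:

$.ajax({
  type: "POST",                     // page methods are invoked via POST
  url: "Recipe2.aspx/GetMessage",   // hypothetical static [WebMethod] on the page
  data: "{}",                       // empty JSON body, since the method takes no arguments
  contentType: "application/json; charset=utf-8",
  dataType: "json",
  success: function(result) {
    // ASP.NET wraps the page method's return value in a "d" property
    $("#contentArea").text(result.d);
  }
});

Note that the type, contentType, and dataType settings here deliberately override the GET/html defaults established by $.ajaxSetup() in the recipe.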

Choosing Lync 2013 Clients

Packt
25 Jul 2013
5 min read
(For more resources related to this topic, see here.)

What clients are available?

At the time of writing, the list of available clients includes the following:

- The full client, as part of Office 2013 Professional Plus
- The Lync 2013 app for Windows 8
- Lync 2013 for mobile devices
- The Lync Basic 2013 version

A plugin is needed to enable Lync features on a virtual desktop, and the full Lync 2013 client installation is required to allow Lync access to the user. Although they are not clients in the traditional sense of the word, our list must also include the following:

- The Microsoft Lync VDI 2013 plugin
- Lync Online (Office 365)
- Lync Web App
- Lync Phone Edition
- Legacy clients that are still supported (Lync 2010, Lync 2010 Attendant, and Lync 2010 Mobile)

Full client (Office 2013)

This is the most complete client available at the moment. It includes full support for voice, video, and IM (as in the previous versions), plus integration of the new features (for example, high-definition video, the gallery feature to see multiple video feeds at the same time, and chat room integration). In the following screenshot, we can see a tabbed conversation in Lync 2013: Its integration with Office implies that the group policies for Lync are now part of the Office group policy administrative templates. We have to download the Office 2013 templates from the Microsoft site and install the package in order to use them (some of the settings are shown in the following screenshot): Lync is available with the Professional Plus version of Office 2013 (and with some Office 365 subscriptions).

Lync 2013 app for Windows 8

The Lync 2013 app for Windows 8 (also called the Lync Windows Store app) has been designed and optimized for devices with a touchscreen (running Windows 8 or Windows RT). The app (as we can see in the following screenshot) is focused on images and pictures, so we have a tile for each contact we want in our favorites. The Lync Windows Store app supports contact management, conversations, and calls, but some features, such as Persistent Chat and the advanced management of Enterprise Voice, are still exclusive to the full client. Also, in conferencing, we will not be able to act as the presenter or manage other participants. The app is integrated with Windows 8, so we are able to use Search to look for Lync contacts (as shown in the following screenshot):

Lync 2013 for mobile devices

The Lync 2013 client for mobile devices is the solution Microsoft offers for the most common tablet and smartphone systems (excluding tablets running Windows 8 and Windows RT, which have their dedicated app). It is available for Windows Phone, iPad/iPhone, and Android. The older version of this client was basically an IM application, which somewhat limited interest in the mobile versions of Lync. The 2013 version includes support for VoIP and video (using Wi-Fi networks and cellular data networks), meetings, and voice mail. From an infrastructural point of view, enabling the new mobile client means applying the Lync 2013 Cumulative Update 1 (CU1) on our Front End and Edge servers and publishing a DNS record (lyncdiscover) on our public name servers. If we have had previous experience with Lync 2010 mobility, the difference is really noticeable. The lyncdiscover record must be pointed to the reverse proxy.
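As a minimal sketch, assuming a (hypothetical) SIP domain of contoso.com and a reverse proxy published as reverseproxy.contoso.com, the public DNS entry would look something like this in standard zone-file notation:

lyncdiscover.contoso.com.    IN    CNAME    reverseproxy.contoso.com.

An A record pointing lyncdiscover directly at the reverse proxy's public IP address works just as well; the only requirement is that the name resolves to the reverse proxy.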
The reverse proxy product used for the deployment must support Lync mobility, and its certificate needs to include the lyncdiscover public domain name.

Lync Basic 2013 version

Lync Basic 2013 is a downloadable client that provides basic functionality. It does not provide support for advanced call features, multiparty video or galleries, or skill-based searches. Lync Basic 2013 is dedicated to companies with Lync 2013 on-premises, and to Office 365 customers that do not have the full client included with their subscription. The client looks really similar to the full one, but the display name on top is Lync Basic, as we can see in the following screenshot:

Microsoft Lync VDI 2013 plugin

As we said before, the VDI plugin is not a client; it is software we need to install to enable Lync on virtual desktops based on the most widely used technologies, such as Microsoft RDS, VMware View, and XenDesktop. The main challenge of a VDI scenario is delivering the same features and quality we expect from a deployment on a physical machine. The plugin uses "media redirection", so that audio and video originate and terminate on the plugin running on the thin client. The user is able to connect conferencing/telephony hardware (for example, microphones, cams, and so on) to the local terminal and use the Lync 2013 client installed on the virtual desktop as if it were running locally. The plugin is the only Lync software installed at the end-user workplace. The details of the deployment (Deploying the Lync VDI Plug-in) are available at http://technet.microsoft.com/en-us/library/jj204683.aspx.

Resources for Article: Further resources on this subject: Innovation of Communication and Information Technologies [Article]; DPM Non-aware Windows Workload Protection [Article]; Installing Microsoft Dynamics NAV [Article]

Make Spacecraft Fly and Shoot with Special Effects using Blender 3D 2.49

Packt
18 Nov 2009
4 min read
Blender particles

In the latest versions of Blender 3D, the particle system received a huge upgrade, making it more complex and powerful than before. This upgrade, however, made it necessary to create more parameters and options in order for the system to work. What didn't change is the need for an object that works as an emitter of the particles. The shape and look of this object will be directly related to the type of effects we want to create. Before we discuss the effects that we will be creating, let's look at how particles work in Blender. To create any type of particle system, go to the Objects panel and find the Particles button. This is where we will set up and change our particles for a variety of effects. The first time we open this menu, nothing will be displayed. But if we select a mesh object and press the Add New button, this object will immediately turn into a new emitter. When a new emitter is created, we have to choose the type of behavior this emitter has in the particle system. In the top-left part of the menu, we will find a selector that lets us choose the type of interaction of the emitter. These are the three types of emitters:

- Emitter: This is the standard type, which is a single object that emits particles according to the parameters and rules that we set up in the particles controls.
- Hair: Here, we have a type of particle emitter that creates particles as thin lines for representing hair and fur. Since this is more related to characters, we won't use this type of emitter in this book.
- Reactor: With this emitter, we can create particle systems that interact with each other. It works by setting up a particle system that interferes with the motion and changes the trajectories of other particles.

In our projects, we will use only the emitter type. However, you can create indirect animations and make particles interact with each other. For instance, if we wanted to create a set of asteroids that block the path of our spacecraft, we could create this type of animation easily with a reactor particle system.

How particles work

To create and use a particle system, we will look at the most important features and parameters of each menu and create some pre-systems to use later in this article for the spacecraft. To fully understand how particles work, we have to become familiar with the forces and parameters that control the look and feel of particles. For each of those parameters and forces, we have a corresponding menu in Blender. Here are the corresponding parameters that control the particle system:

- Quantity: This is a basic feature of any particle system that allows us to set up how many particles will be in the system.
- Life: As a particle system is based on animation parameters, we have to know for how many frames the particle will be visible in the 3D world.
- Mesh emitting: Our emitters are all meshes, and we have to determine from which part of those 3D objects the particles will be emitted. We have several options to choose from, such as vertices or parts of the objects delimited by vertex groups.
- Motion: If we set up our particle system and don't give it enough force to make the particles move, nothing will happen to the system. So, even more important than setting up the appearance of the particles is choosing the right forces for the initial velocity of the particles.
- Physics and forces: Along with the forces that we use in the motion option, we will also apply some force fields and deflectors to particles to simulate and change the trajectories of the objects based on physical reactions.
- Visualization: A standard particle system has only small dots as particles, but we can change the way particles look in a variety of ways. To create flares and special effects such as the ones we need, we can use mesh objects that have Halo effects, and many more.
- Interaction: At the end of the particle's life, we can use several types of actions and behaviors to control the destiny of a particle. Should it spawn a new particle or simply die when it hits a special object? These are the things we have to consider before we begin setting up the animation.
Building tiny Web-applications in Ruby using Sinatra

Packt
03 Sep 2009
5 min read
What's Sinatra?

Sinatra is not a framework but a library, i.e., a set of classes that allows you to build almost any kind of web-based solution (no matter what the complexity) in a very simple manner, on top of the abstracted HTTP layer it implements from Rack. When you code in Sinatra you're bound only by HTTP and your Ruby knowledge. Sinatra doesn't force anything on you, which can lead to awesome or evil code, in equal measures. Sinatra apps are typically written in a single file. It starts up and shuts down nearly instantaneously. It doesn't use much memory and it serves requests very quickly. But it also offers nearly every major feature you expect from a full web framework: RESTful resources, templating (ERB, Haml/Sass, and Builder), mime types, file streaming, etags, development/production mode, and exception rendering. It's fully testable with your choice of test or spec framework. It's multithreaded by default, though you can pass an option to wrap actions in a mutex. You can add in a database by requiring ActiveRecord or DataMapper. And it uses Rack, running on Mongrel by default. Blake Mizerany, the creator of Sinatra, says that it is better to learn Sinatra before Ruby on Rails:

When you learn a large framework first, you're introduced to an abundance of ideas, constraints, and magic. Worst of all, they start you with a pattern. In the case of Rails, that's MVC. MVC doesn't fit most web-applications from the start or at all. You're doing yourself a disservice starting with it. Back into patterns, never start with them.

Installing Sinatra

The simplest way to obtain Sinatra is through Rubygems. Open a command window in Windows and type:

c:\> gem install sinatra

On Linux/OS X the command would be:

sudo gem install sinatra

Installing its Dependencies

Sinatra depends on the Rack gem, which gets installed along with Sinatra. Installing Mongrel (a fast HTTP library and server for Ruby that is intended for hosting Ruby web applications of any kind using plain HTTP - http://mongrel.rubyforge.org/) is quite simple. In the already open command window, type:

c:\> gem install mongrel

What are Routes?

The main feature of Sinatra is defining 'routes': an HTTP verb paired with a path that executes an arbitrary block of Ruby code. Something like:

verb 'path' do
  ... # return/render something
end

Sinatra's routes are designed to respond to the HTTP request methods (GET, POST, PUT, DELETE). In Sinatra, a route is an HTTP method paired with a URL matching pattern. These URL handlers (also called "routing") can be used to match anything from a static string (such as /hello) to a string with parameters (/hello/:name) or anything you can imagine using wildcards and regular expressions. Each route is associated with a block. Let us look at an example:

get '/' do
  .. show something ..
end

get '/hello/:name' do
  # The /hello portion matches that portion of the URL from the
  # request you made, and :name will absorb any other text you
  # give it and put it in the params hash under the key :name
end

post '/' do
  .. create something ..
end

put '/' do
  .. update something ..
end

delete '/' do
  .. delete something ..
end

Routes are matched in the order they are defined. When a new request comes in, the first route that matches the request is invoked, i.e., the handler (the code block) attached to that route gets executed. For this reason, you should put your most specific handlers on top and your most vague handlers at the bottom.

A tiny web-application

Here's an example of a simple Sinatra application.
Write a Ruby program myapp1.rb and store it in the folder c:\sinatra_programs. Though the name of the folder is sinatra_programs, we are going to have only one Sinatra program here. The program is:

# myapp1.rb
require 'sinatra'

Sinatra applications can be run directly:

ruby myapp1.rb [-h] [-x] [-e ENVIRONMENT] [-p PORT] [-s HANDLER]

The above options are:

-h # help
-p # set the port (default is 4567)
-e # set the environment (default is development)
-s # specify rack server/handler (default is thin)
-x # turn on the mutex lock (default off) – currently not used

The article at http://gist.github.com/54177 states that using require 'rubygems' in your application is wrong: it is an environmental issue and not an app issue. The article mentions that you might instead use:

ruby -rubygems myapp1.rb

Another way is to use RUBYOPT (refer to http://rubygems.org/read/chapter/3). By setting the RUBYOPT environment variable to the value rubygems, you tell Ruby to load RubyGems every time it starts up. This is similar to the -rubygems option above, but you only have to specify this once (rather than each time you run a Ruby script).
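As it stands, myapp1.rb defines no routes, so there is nothing to respond to a request yet. As a minimal sketch of a complete app (the route and its greeting are illustrative additions, not part of the original listing), you might write:

# myapp1.rb
require 'sinatra'

# A single route: GET /hello/<anything> returns a greeting.
get '/hello/:name' do
  "Hello, #{params[:name]}!"
end

After running ruby myapp1.rb, browsing to http://localhost:4567/hello/world on the default port 4567 should return the greeting.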

Using Drupal 7 for Module Development

Packt
14 Dec 2010
15 min read
Drupal 7 Module Development — Create your own Drupal 7 modules from scratch:

- Specifically written for Drupal 7 development
- Write your own Drupal modules, themes, and libraries
- Discover the powerful new tools introduced in Drupal 7
- Learn the programming secrets of six experienced Drupal developers
- Get practical with this book's project-based format

(For more resources on this subject, see here.)

The focus of this article by Matt Butcher, author of Drupal 7 Module Development, is module creation. We are going to begin coding in this article. Here are some of the important topics that we will cover:

- Starting a new module
- Creating .info files to provide Drupal with module information
- Creating .module files to store Drupal code
- Adding new blocks using the Block Subsystem
- Using common Drupal functions
- Formatting code according to the Drupal coding standards

Our goal: a module with a block

In this article we are going to build a simple module. The module will use the Block Subsystem to add a new custom block. The block that we add will simply display a list of all of the currently enabled modules on our Drupal installation. We are going to divide this task of building a new module into three parts:

- Create a new module folder and module files
- Work with the Block Subsystem
- Write automated tests using the SimpleTest framework included in Drupal

We are going to proceed in that order for the sake of simplicity. One might object that, following agile development processes, we ought to begin by writing our tests. This approach is called Test-driven Development (TDD), and is a justly popular methodology. Agile software development is a particular methodology designed to help teams of developers effectively and efficiently build software. While Drupal itself has not been developed using an agile process, it does facilitate many of the agile practices. To learn more about agile, visit http://agilemanifesto.org/. However, our goal here is not to exemplify a particular methodology, but to discover how to write modules. It is easier to learn module development by first writing the module, and then learning how to write unit tests. It is easier for two reasons:

- SimpleTest (in spite of its name) is the least simple part of this article. It will have double the code-weight of our actual module.
- We will need to become acquainted with the APIs we are going to use in development before we attempt to write tests that assume knowledge of those APIs.

In regular module development, though, you may certainly choose to follow the TDD approach of writing tests first, and then writing the module. Let's now move on to the first step of creating a new module.

Creating a new module

Creating Drupal modules is easy. How easy? Easy enough that over 5,000 modules have been developed, and many Drupal developers are even PHP novices! In fact, the code in this article is an illustration of how easy module coding can be. We are going to create our first module with only one directory and two small files.

Module names

It goes without saying that building a new module requires naming the module. However, there is one minor ambiguity that ought to be cleared up at the outset: a Drupal module has two names.

- A human-readable name: This name is designed to be read by humans, and should be one or a couple of words long. The words should be capitalized and separated by spaces. For example, one of the most popular Drupal modules has the human-readable name Views. A less-popular (but perhaps more creatively named) Drupal 6 module has the human-readable name Eldorado Superfly.
- A machine-readable name: This name is used internally by Drupal. It can be composed of lower-case and upper-case letters, digits, and the underscore character (using upper-case letters in machine names is frowned upon, though). No other characters are allowed. The machine names of the above two modules are views and eldorado_superfly, respectively.

By convention, the two names ought to be as similar as possible. Spaces should be replaced by underscores, and upper-case letters should generally be changed to lower-case. Because of this convention of similar naming, the two names can usually be used interchangeably, and most of the time it is not necessary to specifically declare which of the two names we are referring to. In cases where the difference needs to be made (as in the next section), the authors will be careful to make it.

Where does our module go?

One of the less intuitive aspects of Drupal development is the filesystem layout. Where do we put a new module? The obvious answer would be to put it in the /modules directory alongside all of the core modules. As obvious as this may seem, the /modules folder is not the right place for your modules. In fact, you should never change anything in that directory. It is reserved for core Drupal modules only, and will be overwritten during upgrades. The second, far less obvious place to put modules is in /sites/all/modules. This is the location where all unmodified add-on modules ought to go, and tools like Drush (a Drupal command-line tool) will download modules to this directory. In some sense, it is okay to put modules here; they will not be automatically overwritten during core upgrades. However, as of this writing, /sites/all/modules is not the recommended place to put custom modules unless you are running a multi-site configuration and the custom module needs to be accessible on all sites. The current recommendation is to put custom modules in the /sites/default/modules directory, which does not exist by default. This has a few advantages. One is that standard add-on modules are stored elsewhere, and this separation makes it easier for us to find our own code without sorting through clutter. There are other benefits (such as the loading order of module directories), but none will have a direct impact on us. We will always be putting our custom modules in /sites/default/modules. This follows Drupal best practices, and also makes it easy to find our modules as opposed to all of the other add-on modules. The one disadvantage of storing all custom modules in /sites/default/modules appears only under a specific set of circumstances. If you have Drupal configured to serve multiple sites off of one single instance, then the /sites/default folder is only used for the default site. What this means, in practice, is that modules stored there will not be loaded at all for other sites. In such cases, it is generally advised to move your custom modules into /sites/all/modules/custom.

Other module directories

Drupal does look in a few other places for modules. However, those places are reserved for special purposes.

Creating the module directory

Now that we know that our modules should go in /sites/default/modules, we can create a new module there.
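For those working from a shell, a minimal sketch of this step (assuming your current directory is the Drupal root) might look like this:

# Create the module directory and its two (initially empty) files.
mkdir -p sites/default/modules/first
touch sites/default/modules/first/first.info
touch sites/default/modules/first/first.module

The same result can, of course, be achieved with any file manager or IDE; the only requirement is the directory name and file names described next.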
Modules can be organized in a variety of ways, but the best practice is to create a module directory in /sites/default/modules, and then place at least two files inside the directory: a .info (pronounced "dot-info") file and a .module ("dot-module") file. The directory should be named with the machine-readable name of the module. Similarly, both the .info and .module files should use the machine-readable name. We are going to name our first module with the machine-readable name first, since it is our first module. Thus, we will create a new directory, /sites/default/modules/first, and then create a first.info file and a first.module file. Those are the only files we will need for our module. For permissions, make sure that your web server can read both the .info and .module files. It should not be able to write to either file, though. In some sense, the only file absolutely necessary for a module is the .info file located at a proper place in the system. However, since the .info file simply provides information about the module, no interesting module can be built with just this file. Next, we will write the contents of the .info file.

Writing the .info file

The purpose of the .info file is to provide Drupal with information about a module—information such as the human-readable name, what other modules this module requires, and what code files this module provides. A .info file is a plain text file in a format similar to the standard INI configuration file. A directive in the .info file is composed of a name, an equals sign, and a value:

name = value

By Drupal's coding conventions, there should always be one space on each side of the equals sign. Some directives use an array-like syntax to declare that one name has multiple values. The array-like format looks like this:

name[] = value1
name[] = value2

Note that there is no blank space between the opening square bracket and the closing square bracket. If a value spans more than one line, it should be enclosed in quotation marks. Any line that begins with a ; (semicolon) is treated as a comment, and is ignored by the Drupal INI parser. Drupal does not support INI-style section headers such as those found in the php.ini file. To begin, let's take a look at a complete first.info file for our first module:

;$Id$
name = First
description = A first module.
package = Drupal 7 Development
core = 7.x
files[] = first.module
;dependencies[] = autoload
;php = 5.2

This ten-line file is about as complex as a module's .info file ever gets. The first line is a standard. Every .info file should begin with ;$Id$. What is this? It is the placeholder for the version control system to store information about the file. When the file is checked into Drupal's CVS repository, the line will be automatically expanded to something like this:

;$Id: first.info,v 1.1 2009/03/18 20:27:12 mbutcher Exp $

This information indicates when the file was last checked into CVS, and who checked it in. CVS is going away, and so is $Id$. While Drupal has been developed in CVS from the early days through Drupal 7, it is now being migrated to a Git repository. Git does not use $Id$, so it is likely that between the release of Drupal 7 and the release of Drupal 8, $Id$ tags will be removed. You will see all PHP and .info files beginning with the $Id$ marker; once Drupal uses Git, those tags may go away. The next couple of lines of interest in first.info are these:

name = First
description = A first module.
package = Drupal 7 Development

The first two are required in every .info file.
The name directive is used to declare what the module's human-readable name is. The description provides a one- or two-sentence description of what this module provides or is used for. Among other places, this information is displayed on the module configuration section of the administration interface, under Modules, where the values of the name and description directives appear in their respective columns.

The third item, package, identifies which family (package) of modules this module is related to. Core modules, for example, all have the package Core, and are grouped together under that heading on the Modules page. Our module will be grouped under the package Drupal 7 Development to represent its relationship. As you may notice, package names are written as human-readable values. When choosing a human-readable module name, remember to adhere to the specifications mentioned earlier in this section.

The next directive is the core directive:

core = 7.x

This simply declares which main-line version of Drupal is required by the module. All Drupal 7 modules will have the line core = 7.x.

Along with the core version, a .info file can also specify what version of PHP it requires. By default, Drupal 7 requires PHP 5.2 or newer. However, if one were to use, say, closures (a feature introduced in PHP 5.3), then the following line would need to be added:

php = 5.3

Next, every .info file must declare which files in the module contain PHP functions, classes, or interfaces. This is done using the files[] directive. Our small initial module will only have one file, first.module, so we need only one files[] directive:

files[] = first.module

More complex modules will often have several files[] directives, each declaring a separate PHP source code file. JavaScript, CSS, image files, and PHP files (like templates) that do not contain functions the module needs to know about needn't be included in files[] directives. The point of the directive is simply to indicate which files Drupal should examine.

One directive that we will not use for this module, but which plays a very important role, is the dependencies[] directive. This is used to list the other modules that must be installed and active for this module to function correctly. Drupal will not allow a module to be enabled unless its dependencies have been satisfied. Drupal does not have a directive to indicate that another module is recommended or optional; it is the task of the developer to document this fact appropriately and make it known. There is currently no recommended best practice for providing such information.

Now we have created our first.info file. As soon as Drupal reads this file, the module will appear on our Modules page, grouped in the DRUPAL 7 DEVELOPMENT package, with the NAME and DESCRIPTION as assigned in the .info file. With our .info file completed, we can now move on and code our .module file.

Modules checked into Drupal's version control system will automatically have a version directive added to the .info file. This should typically not be altered.

Creating a module file

The .module file is a PHP file that conventionally contains all of the major hook implementations for a module. We will gain some practical knowledge of them here. A hook implementation is a function that follows a certain naming pattern in order to indicate to Drupal that it should be used as a callback for a particular event in the Drupal system.
For object-oriented programmers, it may be helpful to think of a hook as similar to the Observer design pattern. When Drupal encounters an event for which there is a hook (and there are hundreds of such events), Drupal will look through all of the modules for matching hook implementations. It will then execute each hook implementation, one after another. Once all hook implementations have been executed, Drupal will continue its processing.

In the past, all Drupal hook implementations had to reside in the .module file. Drupal 7's requirements are more lenient, but in most moderately sized modules, it is still preferable to store most hook implementations in the .module file. There are cases where hook implementations belong in other files; in such cases, the reasons for organizing the module in that way will be explained.

To begin, we will create a simple .module file that contains a single hook implementation: one that provides help information.

<?php
// $Id$

/**
 * @file
 * A module exemplifying Drupal coding practices and APIs.
 *
 * This module provides a block that lists all of the
 * installed modules. It illustrates coding standards,
 * practices, and API use for Drupal 7.
 */

/**
 * Implements hook_help().
 */
function first_help($path, $arg) {
  if ($path == 'admin/help#first') {
    return t('A demonstration module.');
  }
}

Before we get to the code itself, we will talk about a few stylistic items. To begin, notice that this file, like the .info file, contains an $Id$ marker that CVS will replace when the file is checked in. All PHP files should have this marker following a double-slash-style comment: // $Id$.

Next, the preceding code illustrates a few of the important coding standards for Drupal.

Source code standards

Drupal has a thorough and strictly enforced set of coding standards. All core code adheres to these standards, and most add-on modules do, too. (Those that don't generally receive bug reports for not conforming.) Before you begin coding, it is a good idea to familiarize yourself with the standards as documented here: http://drupal.org/coding-standards. The Coder module can evaluate your code and alert you to any infringement of the coding standards.

We will adhere to the Drupal coding standards and, in many cases, explain them as we go along. Still, the definitive source for the standards is the URL listed above, not our code here. We will not reiterate the coding standards in full, but several prominent standards deserve immediate mention; we will see them in action as we work through the code.

Indenting: All PHP and JavaScript files use two spaces to indent. Tabs are never used for code formatting.

The <?php ?> processor instruction: Files that are completely PHP should begin with <?php, but should omit the closing ?>. This is done for several reasons, most notably to prevent the inclusion of whitespace from breaking HTTP headers.

Comments: Drupal uses Doxygen-style (/** */) doc-blocks to comment functions, classes, interfaces, constants, files, and globals. All other comments should use the double-slash (//) comment. The pound sign (#) should not be used for commenting.

Spaces around operators: Most operators should have a whitespace character on each side.

Spacing in control structures: Control structures should have spaces after the name and before the curly brace. The bodies of all control structures should be surrounded by curly braces, even those of if statements with one-line bodies.
Functions: Functions should be named in lowercase letters, using underscores to separate words. Later we will see how class method names differ from this.

Variables: Variable names should be in all lowercase letters, using underscores to separate words. Member variables in objects are named differently.

As we work through examples, we will see these and other standards in action, starting with the short sketch below.
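To see several of these standards at work together, and to make the earlier description of hook dispatching concrete, here is a conceptual sketch. It is an illustration only, not Drupal's actual implementation (real Drupal 7 code performs this dispatch through APIs such as module_invoke_all()), and the function name is made up for this example.

// A conceptual illustration only: not Drupal's real dispatcher.
// Note the two-space indents, the spaces around operators, and the
// curly braces even on bodies containing a single statement.
function first_invoke_all_sketch($hook) {
  $results = array();
  foreach (module_list() as $module) {
    // For hook_help() and our module "first", this builds "first_help".
    $function = $module . '_' . $hook;
    if (function_exists($function)) {
      $results[] = $function();
    }
  }
  return $results;
}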

ColdFusion AJAX Programming

Packt
27 Oct 2009
9 min read
Binding

When it comes to ColdFusion AJAX programming, the two most commonly used features are CFAJAXProxy and binding. The binding feature allows us to bind or tie things together using a simpler technique than we would otherwise have needed to create; binding acts as a double-ended connector in some scenarios. You can set the bind to pull data from another ColdFusion tag on the form. These must be AJAX tags with binding abilities.

There are four forms of binding: on-page, CFC, JavaScript, and URL. Let's work through each style so that we will understand them well. We will start with on-page binding. Remember that the tag has to support binding; this is not a general ColdFusion feature, but within binding-aware tags we can use it wherever we desire.

On-Page Binding

We are going to bind a cfdiv to the value of a text input to demonstrate on-page binding. Refer to the following code.

ColdFusion AJAX elements work in a manner different from how AJAX is written traditionally. It is more customary to name our browser-side HTML elements with id attributes; this is not the case with the binding features. As we can see in our code example, we have used the name attribute. We should also remember that the names are case sensitive, since the binding is managed by JavaScript. When we run the code, we will notice that we must leave the input field before the browser registers that there has been a change in the value of the field. This is how the event model for the browser DOM works.

<cfform id="myForm" format="html">
  This is my edit box.<br />
  <cfinput type="text" name="myText">
</cfform>
<hr />
And this is the bound div container.<br />
<cfdiv bind="{myText}"></cfdiv>

Notice how we use curly brackets to bind the value of the myText input box. This inserts the contents into the div when the text box loses focus. This is an example of binding to in-page elements. If the binding we use is tied to a hidden window or tab, then the contents may not be updated.

CFC Binding

Now, we are going to bind our div to a CFC method. We will take the data that was previously being sent directly to the div and pass it out to the CFC instead. The CFC is going to repackage it and send it back to the browser; the binding will deliver the modified version of the content to the div. Refer to the following CFC code:

<cfcomponent output="false">
  <cffunction name="getDivContent" returntype="string" access="remote">
    <cfargument name="edit">
    <cfreturn "This is the content returned from the CFC for the div, the calling page variable is '<strong>#arguments.edit#</strong>'.">
  </cffunction>
</cfcomponent>

From the previous code, we can see that the CFC only accepts the argument and passes it back. It could even have returned an HTML segment containing something like a user picture. The following code shows the new page code modifications:

<cfform id="myForm" format="html">
  This is my edit box.<br />
  <cfinput type="text" name="myText">
</cfform>
<hr />
And this is the bound div container.<br />
<cfdiv bind="cfc:bindsource.getDivContent({myText})"></cfdiv>

The only change lies in how we bind the cfdiv element tag. Here, you can see that the bind expression starts with the cfc: prefix. Next, it calls getDivContent on bindsource, which is the name of a local CFC. This tells ColdFusion to wire up the browser page so that it will connect to the CFC, and things will work as we want. You can observe that inside the method call, we are passing the bound variable to the method. When the input field changes by losing focus, the browser sends a new request to the CFC and updates the div.
We need to have the same number of parameters going to the CFC as the number of arguments in our CFC method. We should also make sure that the method has its access attribute set to remote. Here we can see an example results page.

It is valid to pass the name of the CFC method argument along with the data value. This can prevent exceptions caused by not pairing the data in the same order as the method arguments. The last line of the previous code can be modified as follows:

<cfdiv bind="cfc:bindsource.getDivContent(edit:{myText})"></cfdiv>

JavaScript Binding

Now we will see how simply this power can be managed on the browser. We will create a standard JavaScript function and pass the same bound data field through the function. Whenever we update the text box and it loses focus, the contents of the div will be updated from the function on the page. It is suggested that we include all JavaScript from external files rather than putting it directly on the page, as is done here for brevity. Refer to the following code:

<cfform id="myForm" format="html">
  This is my edit box.<br />
  <cfinput type="text" name="myText">
</cfform>
<hr />
And this is the bound div container.<br />
<cfdiv bind="javascript:updateDiv({myText})"></cfdiv>
<script>
  updateDiv = function(myEdit){
    return 'This is the result that came from the JavaScript function with the edit box sending "<strong>' + myEdit + '</strong>"';
  }
</script>

Here is the result of placing the same text into our JavaScript example.

URL Binding

We can achieve the same results by calling a web address. We could actually call a static HTML page; here, we will call a .cfm page to see the results of changing the text box reflected back, as with the CFC and JavaScript bindings. Here is the code for our main page with the URL binding:

<cfform id="myForm" format="html">
  This is my edit box.<br />
  <cfinput type="text" name="myText">
</cfform>
<hr />
And this is the bound div container.<br />
<cfdiv bind="url:bindsource.cfm?myEdit={myText}"></cfdiv>

In the above code, we can see that the binding type is set to url. Earlier, we used the cfc binding type bound to a file named bindsource.cfc; now we bind through the URL to a .cfm file. The bound myText data will work in a manner similar to the other cases: it will be sent to the target, which in this case is a regular server-side page. The handler page requires only one line; in this example, our variables arrive as URL variables. Here is the handler page code:

<cfoutput>
  'This is the result that came from the server page with the edit box sending "<strong>#url.myEdit#</strong>"'
</cfoutput>

This tells us that if there is no prefix on the bind attribute of the <cfdiv> tag, then it will only work with on-page elements. If we add a prefix, then we can pass the data through a CFC, a URL, or a JavaScript function present on the same page. If we bind to a variable present on the same page, then whenever the bound element updates, the binding will be executed.

Bind with Event

One of the features of binding that we might overlook is binding based on an event. In the previous examples, we mentioned that the normal event trigger for binding took place when the bound field lost its focus. The following example shows a bind that occurs when a key is released:

<cfform id="myForm" format="html">
  This is my edit box.<br />
  <cfinput type="text" name="myText">
</cfform>
<hr />
And this is the bound div container.<br />
<cfdiv bind="{myText@keyup}"></cfdiv>

This is similar to our first example, with the only difference being that the contents of the div are updated as each key is pressed.
This works in a manner similar to the CFC, JavaScript, and URL bindings. We might also consider binding other elements on a click event, such as a radio button.

The following example shows another feature: we can pass any DOM attribute of a bound element by adding it after the element name, separated by a dot. It must be placed before the @ symbol if you are using a particular event. In this code, we give the input a class attribute so that we can pass the value of that attribute, and we change the bind attribute of the cfdiv element accordingly:

<cfform id="myForm" format="html">
  This is my edit box.<br />
  <cfinput type="text" name="myText" class="test">
</cfform>
<hr />
And this is the bound div container.<br />
<cfdiv bind="{myText.class@keyup}.{myText}"></cfdiv>

Here is a list of the events that we can bind:

@click
@keyup
@mousedown
@none

The @none event is used for grids and trees, so that changes don't trigger bind events.

Extra Binding Notes

If you have an ID on your cfform element, then you can refer to the form element by way of the containing form. The following example helps us to understand this better:

bind = "url:bindsource.cfm?myEdit={myForm:myText}"

The ColdFusion 8 documentation gives the following forms for specifying binding expressions (numbered here so that the table below can refer to them):

1. cfc:componentPath.functionName(parameters) - the componentPath value cannot use a mapping; it must be a dot-delimited path from the web root or from the directory that contains the page
2. javascript:functionName(parameters)
3. url:URL?parameters
4. URL?parameters
5. A string containing one or more instances of {bind parameter}, such as {firstname}.{lastname}@{domain}

The following table shows the supported formats for each attribute and the tags that use it:

Attribute     Tags                                         Supported formats
Autosuggest   cfinput type="text"                          1, 2, 3
Bind          cfdiv, cfinput, cftextarea                   1, 2, 3, 5
Bind          cfajaxproxy, cfgrid, cfselect,               1, 2, 3
              cfsprydataset, cftreeitem
onChange      cfgrid                                       1, 2, 3
Source        cflayoutarea, cfpod, cfwindow                4

Linux Shell Script: Logging Tasks

Packt
28 Jan 2011
7 min read
Linux Shell Scripting Cookbook

Collecting information about the operating environment, logged in users, the time for which the computer has been powered on, and any boot failures is very helpful. This recipe will go through a few commands used to gather information about a live machine.

Getting ready

This recipe will introduce the commands who, w, users, uptime, last, and lastb.

How to do it...

To obtain information about users currently logged in to the machine, use:

$ who
slynux   pts/0   2010-09-29 05:24 (slynuxs-macbook-pro.local)
slynux   tty7    2010-09-29 07:08 (:0)

Or:

$ w
 07:09:05 up  1:45,  2 users,  load average: 0.12, 0.06, 0.02
USER     TTY     FROM     LOGIN@   IDLE   JCPU  PCPU  WHAT
slynux   pts/0   slynuxs  05:24    0.00s  0.65s 0.11s sshd: slynux
slynux   tty7    :0       07:08    1:45m  3.28s 0.26s gnome-session

These provide information about logged in users, the pseudo TTY used by the users, the command that is currently executing from the pseudo terminal, and the IP address from which the users have logged in (if it is localhost, the hostname is shown). who and w format their output with slight differences; the w command provides more detail than who.

TTY is the device file associated with a text terminal. When a terminal is newly spawned by the user, a corresponding device is created in /dev/ (for example, /dev/pts/3). The device path for the current terminal can be found out by typing and executing the command tty.

In order to list the users currently logged in to the machine, use:

$ users
slynux slynux slynux hacker

If a user has opened multiple pseudo terminals, that many entries for the same user will be shown. In the above output, the user slynux has opened three pseudo terminals. The easiest way to print unique users is to use sort and uniq to filter as follows:

$ users | tr ' ' '\n' | sort | uniq
hacker
slynux

We have used tr to replace each space character with '\n'; the combination of sort and uniq then produces a single entry for each user.

In order to see how long the system has been powered on, use:

$ uptime
 21:44:33 up  3:17,  8 users,  load average: 0.09, 0.14, 0.09

The time that follows the word up indicates the time for which the system has been powered on. We can write a simple one-liner to extract only the uptime. The load average in uptime's output is a parameter that indicates system load.

In order to get information about previous boot and user login sessions, use:

$ last
slynux   tty7         :0               Tue Sep 28 18:27   still logged in
reboot   system boot  2.6.32-21-generi Tue Sep 28 18:10 - 21:46  (03:35)
slynux   pts/0        :0.0             Tue Sep 28 05:31 - crash  (12:39)

The last command will provide information about logged in sessions. It is actually a log of system logins that consists of information such as the tty from which a session was opened, the login time, the status, and so on. The last command uses the log file /var/log/wtmp for its input log data. It is also possible to explicitly specify the log file for the last command using the -f option.
For example:

$ last -f /var/log/wtmp

In order to obtain info about login sessions for a single user, use:

$ last USER

Get information about reboot sessions as follows:

$ last reboot
reboot   system boot  2.6.32-21-generi  Tue Sep 28 18:10 - 21:48  (03:37)
reboot   system boot  2.6.32-21-generi  Tue Sep 28 05:14 - 21:48  (16:33)

In order to get information about failed user login sessions, use:

# lastb
test     tty8    :0    Wed Dec 15 03:56 - 03:56  (00:00)
slynux   tty8    :0    Wed Dec 15 03:55 - 03:55  (00:00)

You should run lastb as the root user.

Logging access to files and directories

Logging of file and directory access is very helpful for keeping track of changes that are happening to files and folders. This recipe will describe how to log user accesses.

Getting ready

The inotifywait command can be used to gather information about file accesses. It doesn't come by default with every Linux distro; you have to install the inotify-tools package using a package manager. It also requires the Linux kernel to be compiled with inotify support. Most of the new GNU/Linux distributions come with inotify enabled in the kernel.

How to do it...

Let's walk through the shell script that monitors directory access:

#!/bin/bash
#Filename: watchdir.sh
#Description: Watch directory access
path=$1
#Provide path of directory or file as argument to script

inotifywait -m -r -e create,move,delete $path -q

A sample output is as follows:

$ ./watchdir.sh .
./ CREATE new
./ MOVED_FROM new
./ MOVED_TO news
./ DELETE news

How it works...

The previous script will log create, move, and delete events for files and folders in the given path. The -m option is given for monitoring the changes continuously, rather than exiting after an event happens. -r enables a recursive watch of the directories. -e specifies the list of events to be watched, and -q reduces the verbose messages and prints only the required ones. This output can be redirected to a log file. We can add or remove events from the list; the important events available are as follows:

access - a read occurred on the file
modify - the file contents were changed
attrib - metadata such as permissions or timestamps changed
create - a new file or directory was created
move - a file or directory was moved (reported as MOVED_FROM and MOVED_TO)
delete - a file or directory was deleted
open - the file was opened
close - the file was closed

Logfile management with logrotate

Logfiles are essential components of a Linux system's maintenance. Logfiles help to keep track of events happening on the different services on the system. This helps the sysadmin to debug issues and also provides statistics on events happening on the live machine. Management of logfiles is required because, as time passes, the size of a logfile gets bigger and bigger. Therefore, we use a technique called rotation to limit the size of the logfile: when the logfile reaches a size beyond the limit, its contents are stripped and the older entries are stored in an archive. Hence, older logs can be stored and kept for future reference. Let's see how to rotate logs and store them.

Getting ready

logrotate is a command every Linux system admin should know. It helps to restrict the size of a logfile to a given SIZE. In a logfile, the logger appends information to the end, so the most recent information appears at the bottom of the log file. logrotate will scan specific logfiles according to its configuration file. It will keep the last 100 kilobytes (for example, for a specified SIZE = 100k) of the logfile and move the rest of the data (the older log data) to a new file, logfile_name.1. When more entries accumulate in the logfile and it exceeds the SIZE again, the logfile is updated with the recent entries and a logfile_name.2 is created holding the older logs.
This process can easily be configured with logrotate. logrotate can also compress the older logs as logfile_name.1.gz, logfile_name.2.gz, and so on; whether older log files are to be compressed or not is an option in the logrotate configuration.

How to do it...

logrotate has its configuration directory at /etc/logrotate.d. If you list the contents of this directory, many other logfile configurations can be found. We can write our custom configuration for our logfile (say /var/log/program.log) as follows:

$ cat /etc/logrotate.d/program
/var/log/program.log {
  missingok
  notifempty
  size 30k
  compress
  weekly
  rotate 5
  create 0600 root root
}

Now the configuration is complete. /var/log/program.log in the configuration specifies the logfile path; old logs will be archived in the same directory path. Let's see what each of these parameters does:

missingok - ignore a missing logfile and return without raising an error
notifempty - only rotate the log if the logfile is not empty
size 30k - limit the size the logfile may grow to before rotation; it can be, for example, 1M for megabytes
compress - enable gzip compression for the older, rotated logs
weekly - the interval at which rotation is performed; it can be daily, weekly, monthly, or yearly
rotate 5 - the number of older copies of the logfile to keep; here the five most recent rotated logs are retained
create 0600 root root - create the fresh logfile after rotation with the given mode, owner, and group

These parameters are optional; we can specify only the required options in the logrotate configuration file. There are numerous options available with logrotate; please refer to the man pages (http://linux.die.net/man/8/logrotate) for more information on logrotate.


PHP Data Objects: Error Handling

Packt
22 Oct 2009
11 min read
In this article, we will extend our application so that we can edit existing records as well as add new records. As we will deal with user input supplied via web forms, we have to take care of its validation. Also, we may add error handling so that we can react to non-standard situations and present the user with a friendly message.

Before we proceed, let's briefly examine the sources of errors mentioned above and see what error handling strategy should be applied in each case. Our error handling strategy will use exceptions, so you should be familiar with them. If you are not, you can refer to Appendix A, which will introduce you to the new object-oriented features of PHP5. We have consciously chosen to use exceptions, even though PDO can be instructed not to use them, because there is one situation where they cannot be avoided: the PDO constructor always throws an exception when the database object cannot be created, so we may as well use exceptions as our main error-trapping method throughout the code.

Sources of Errors

To create an error handling strategy, we should first analyze where errors can happen. Errors can happen on every call to the database and, although this is rather unlikely, we will look at this scenario. Before doing so, let's check each of the possible error sources and define a strategy for dealing with them.

Failure of the Database Server

This can happen on a really busy server, which cannot handle any more incoming connections. For example, there may be a lengthy update running in the background. The outcome is that we are unable to get any data from the database, so we should do the following. If the PDO constructor fails, we present a page displaying a message which says that the user's request could not be fulfilled at this time and that they should try again later. Of course, we should also log this error because it may require immediate attention. (A good idea would be emailing the database administrator about the error.)

The problem with this error is that, while it usually manifests itself before a connection is established (in the call to the PDO constructor), there is a small risk that it can happen after the connection has been established (on a call to a method of the PDO or PDOStatement object while the database server is being shut down). In this case, our reaction will be the same: present the user with an error message asking them to try again later.

Improper Configuration of the Application

This error can only occur when we move the application across servers where database access details differ, for example, when uploading from a development server to a production server where the database setups are not the same. This is not an error that can happen during normal execution of the application, but care should be taken while uploading, as this may interrupt the site's operation. If this error occurs, we can display another error message like: "This site is under maintenance". In this scenario, the site maintainer should react immediately; without a corrected connection string, the application cannot operate normally.

Improper Validation of User Input

This is an error which is closely related to SQL injection vulnerability. Every developer of database-driven applications must undertake proper measures to validate and filter all user input. This error may lead to two major consequences: either the query will fail due to malformed SQL (so that nothing particularly bad happens), or an SQL injection may occur and application security may be compromised.
While their consequences differ, both of these problems can be prevented in the same way. Let's consider the following scenario. We accept some numeric value from a form and insert it into the database. To keep our example simple, assume that we want to update a book's year of publication. To achieve this, we can create a form that has two fields: a hidden field containing the book's ID, and a text field to enter the year. We will skip the implementation details here, and see how using a poorly designed script to process this form could lead to errors and put the whole system at risk.

The form processing script will examine two request variables: $_REQUEST['book'], which holds the book's ID, and $_REQUEST['year'], which holds the year of publication. If there is no validation of these values, the final code will look similar to this:

$book = $_REQUEST['book'];
$year = $_REQUEST['year'];
$sql = "UPDATE books SET year=$year WHERE id=$book";
$conn->query($sql);

Let's see what happens if the user leaves the year field empty. The final SQL would then look like:

UPDATE books SET year= WHERE id=1;

This SQL is malformed and will lead to a syntax error. Therefore, we should ensure that both variables are holding numeric values. If they don't, we should redisplay the form with an error message.

Now, let's see how an attacker might exploit this to delete the contents of the entire table. To achieve this, they could just enter the following into the year field:

2007; DELETE FROM books;

This turns a single query into three queries:

UPDATE books SET year=2007;
DELETE FROM books;
WHERE id=1;

Of course, the third query is malformed, but the first and second will execute, and the database server will report an error. To counter this problem, we could use simple validation to ensure that the year field contains four digits. However, if we have text fields which can contain arbitrary characters, the fields' values must be escaped prior to creating the SQL.

Inserting a Record with a Duplicate Primary Key or Unique Index Value

This problem may happen when the application is inserting a record with duplicate values for the primary key or a unique index. For example, in our database of authors and books, we might want to prevent the user from entering the same book twice by mistake. To do this, we can create a unique index on the ISBN column of the books table. As every book has a unique ISBN, any attempt to insert the same ISBN will generate an error. We can trap this error and react accordingly, by displaying an error message asking the user to correct the ISBN or cancel its addition.

Syntax Errors in SQL Statements

This error may occur if we haven't properly tested the application. A good application must not contain these errors, and it is the responsibility of the development team to test every possible situation and check that every SQL statement performs without syntax errors. If this type of error occurs, then we trap it with exceptions and display a fatal error message; the developers must correct the situation at once.

Now that we have learned a bit about the possible sources of errors, let's examine how PDO handles errors.

Types of Error Handling in PDO

By default, PDO uses the silent error handling mode. This means that any error that arises when calling methods of the PDO or PDOStatement classes goes unreported. With this mode, one would have to call PDO::errorInfo(), PDO::errorCode(), PDOStatement::errorInfo(), or PDOStatement::errorCode() after every operation to see whether an error really did occur.
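As a hypothetical sketch of what that looks like in practice (the query and the variable names here are ours, not the book's):

$stmt = $conn->query('SELECT * FROM books');
if ($stmt === false)
{
  // In silent mode, we must ask PDO ourselves what went wrong.
  // errorInfo() returns an array: SQLSTATE, driver error code, message.
  list($sqlState, $driverCode, $message) = $conn->errorInfo();
  // ...log the error or redisplay the form here...
}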
Note that this mode is similar to traditional database access: usually, the code calls mysql_errno() and mysql_error() (or the equivalent functions for other database systems) after calling functions that could cause an error, after connecting to a database, and after issuing a query.

Another mode is the warning mode. Here, PDO will act identically to traditional database access: any error that happens during communication with the database will raise an E_WARNING error. Depending on the configuration, an error message could be displayed or logged to a file.

Finally, PDO introduces a modern way of handling database connection errors: by using exceptions. Every failed call to any of the PDO or PDOStatement methods will throw an exception.

As we have previously noted, PDO uses the silent mode by default. To switch to a desired error handling mode, we have to specify it by calling the PDO::setAttribute() method. Each of the error handling modes is specified by the following constants, which are defined in the PDO class:

PDO::ERRMODE_SILENT - the silent strategy
PDO::ERRMODE_WARNING - the warning strategy
PDO::ERRMODE_EXCEPTION - use exceptions

To set the desired error handling mode, we have to set the PDO::ATTR_ERRMODE attribute in the following way:

$conn->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

To see how PDO throws an exception, edit the common.inc.php file by adding the above statement after line 46. If you want to test what will happen when PDO throws an exception, change the connection string to specify a nonexistent database. Now point your browser to the books listing page. You should see output similar to the following.

This is PHP's default reaction to uncaught exceptions: they are regarded as fatal errors and program execution stops. The error message reveals the class of the exception, PDOException, the error description, and some debug information, including the name and line number of the statement that threw the exception. Note that if you want to test SQLite, specifying a non-existent database may not work, as the database will get created if it does not exist already. To see that it does work for SQLite, change the $connStr variable on line 10 so that there is an illegal character in the database name:

$connStr = 'sqlite:/path/to/pdo*.db';

Refresh your browser and you should see something like this: a message similar to the previous example is displayed, specifying the cause and the location of the error in the source code.

Defining an Error Handling Function

If we know that a certain statement or block of code can throw an exception, we should wrap that code within a try…catch block to prevent the default error message being displayed, and present a user-friendly error page instead. But before we proceed, let's create a function that will render an error message and exit the application. As we will be calling it from different script files, the best place for this function is, of course, the common.inc.php file.

Our function, called showError(), will do the following:

Render a heading saying "Error".
Render the error message. We will escape the text with the htmlspecialchars() function and process it with the nl2br() function so that we can display multi-line messages. (This function will convert all line-break characters to <br /> tags.)
Call the showFooter() function to close the opening <body> and <html> tags.

The function will assume that the application has already called the showHeader() function. (Otherwise, we will end up with broken HTML.)
We will also have to modify the block that creates the connection object in common.inc.php to catch the possible exception. With all these changes, the new version of common.inc.php will look like this:

<?php
/**
 * This is a common include file
 * PDO Library Management example application
 * @author Dennis Popel
 */

// DB connection string and username/password
$connStr = 'mysql:host=localhost;dbname=pdo';
$user = 'root';
$pass = 'root';

/**
 * This function will render the header on every page,
 * including the opening html tag,
 * the head section and the opening body tag.
 * It should be called before any output of the
 */

/**
 * This function will 'close' the body and html
 * tags opened by the showHeader() function
 */
function showFooter()
{
  ?>
  </body>
  </html>
  <?php
}

/**
 * This function will display an error message, call the
 * showFooter() function and terminate the application
 * @param string $message the error message
 */
function showError($message)
{
  echo "<h2>Error</h2>";
  echo nl2br(htmlspecialchars($message));
  showFooter();
  exit();
}

// Create the connection object
try
{
  $conn = new PDO($connStr, $user, $pass);
  $conn->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
}
catch(PDOException $e)
{
  showHeader('Error');
  showError("Sorry, an error has occurred. Please try your request later\n"
    . $e->getMessage());
}

As you can see, the newly created function is pretty straightforward. The more interesting part is the try…catch block that we use to trap the exception. Now, with these modifications, we can test how a real exception will get processed. To do that, make sure your connection string is wrong (so that it specifies a wrong database name for MySQL or contains an invalid file name for SQLite). Point your browser to books.php and you should see the resulting error window.
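To tie the pieces together, here is a hypothetical form-processing fragment in the spirit of this application. The file name, table columns, and messages are our own illustration, not the book's code. It uses a prepared statement so that user input never reaches the SQL text directly, traps the unique-index violation for a duplicate ISBN separately, and falls back to showError() for anything else:

<?php
// add_book.php: a hypothetical example, not part of the book's code.
require_once 'common.inc.php';
showHeader('Add Book');
try
{
  // A prepared statement keeps user input out of the SQL text entirely.
  $stmt = $conn->prepare(
    'INSERT INTO books(title, isbn, year) VALUES(?, ?, ?)');
  $stmt->execute(array($_POST['title'], $_POST['isbn'], (int)$_POST['year']));
  echo 'The book has been added.';
}
catch(PDOException $e)
{
  // SQLSTATE 23000 signals an integrity constraint violation,
  // such as our unique index on the ISBN column.
  if ($e->getCode() == '23000')
  {
    echo 'A book with this ISBN already exists. Please correct the ISBN.';
  }
  else
  {
    showError("Sorry, an error has occurred. Please try your request later\n"
      . $e->getMessage());
  }
}
showFooter();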