
How-To Tutorials - Web Development


Master Virtual Desktop Image Creation

Packt
11 Sep 2013
11 min read
When designing your VMware Horizon View infrastructure, creating a Virtual Desktop master image is second only to infrastructure design in terms of importance. The reason is simple: as ubiquitous as Microsoft Windows is, it was never designed to run as a hosted Virtual Desktop. The good news is that with careful planning, and a thorough understanding of what your end users need, you can build a Windows desktop that serves all your needs while requiring the bare minimum of infrastructure resources.

A default installation of Windows contains many optional components and configuration settings that are either unsuitable for, or not needed in, a Virtual Desktop environment, and understanding their impact is critical to maintaining Virtual Desktop performance over time and during peak levels of use. Uninstalling unneeded components and disabling services or scheduled tasks that are not required will help reduce the amount of resources the Virtual Desktop requires, and ensure that the View infrastructure can properly support the planned number of desktops even as resources are oversubscribed.

Oversubscription is defined as assigning more resources than are physically available. This is most commonly done with processor resources in Virtual Desktop environments, where a single server processor core may be shared between multiple desktops. As the average desktop does not require 100 percent of its assigned resources at all times, we can share those resources between multiple desktops without affecting performance.

Why is desktop optimization important?

To date, Microsoft has only ever released versions of Windows designed to be installed on physical hardware. Microsoft is not unique in this regard; neither Linux nor Mac OS X offers an installation routine optimized for a virtualized hardware platform. While nothing stops you from using a default installation of any OS or software package in a virtualized environment, you may find it difficult to maintain consistent levels of performance in Virtual Desktop environments, where many of the resources are shared and in almost every case oversubscribed in some manner. In this section, we will examine a sample of the CPU and disk IO resources that can be recovered by optimizing the Virtual Desktop master image.

Due to the technological diversity that exists from one organization to the next, optimizing your Virtual Desktop master image is not an exact science. The optimization techniques used and their end results will likely vary from one organization to the next due to factors unrelated to View or vSphere. The information in this article will serve as a foundation for optimizing a Virtual Desktop master image, focusing primarily on the operating system.

Optimization results – desktop IOPS

Desktop optimization benefits one infrastructure component more than any other: storage. Until all-flash storage arrays achieve price parity with the traditional spinning-disk arrays many of us use today, reducing the per-desktop IOPS required will continue to be an important part of any View deployment. On a per-disk basis, a flash drive can accommodate more than 15 times the IOPS of an enterprise SAS or SCSI disk, or 30 times the IOPS of a traditional desktop SATA disk.
Organizations that choose an all-flash array may find that they have more than sufficient IOPS capacity for their Virtual Desktops, even without doing any optimization. The following graph shows the reduction in IOPS that occurred after performing the optimization techniques described later in this article. The optimized desktop generated 15 percent fewer IOPS during the user workload simulation. By itself that may not seem like a significant reduction, but when multiplied by hundreds or thousands of desktops the savings become more significant.

Optimization results – CPU utilization

View supports a maximum of 16 Virtual Desktops per physical CPU core. There is no guarantee that your View implementation will attain this high consolidation ratio, though, as desktop workloads vary from one type of user to another. The optimization techniques described in this article will help maximize the number of desktops you can run per server core. The following graph shows the reduction in vSphere host % Processor Time that occurred after performing the optimization techniques described later in this article:

% Processor Time is one of the metrics that can be used to measure server processor utilization within vSphere. The statistics in the preceding graph were captured using the vSphere ESXTOP command-line utility, which provides a number of performance statistics that the vCenter performance tabs do not offer, in a raw format that is more suited to independent analysis.

The optimized desktop required between 5 and 10 percent less processor time during the user workload simulation. As was the case with the IOPS reduction, the savings are significant when multiplied by large numbers of desktops.

Virtual Desktop hardware configuration

The Virtual Desktop hardware configuration should provide only what is required based on the desktop needs and the performance analysis. This section examines the different virtual machine configuration settings that you may wish to customize, and explains their purpose.

Disabling virtual machine logging

Every time a virtual machine is powered on, and while it is running, it logs diagnostic information within the datastore that hosts its VMDK file. For environments that have a large number of Virtual Desktops, this can generate a noticeable amount of storage I/O. The following steps outline how to disable virtual machine logging:

1. In the vCenter client, right-click on the desktop master image virtual machine and click on Edit Settings to open the Virtual Machine Properties window.
2. In the Virtual Machine Properties window, select the Options tab.
3. Under Settings, highlight General.
4. Clear Enable logging as shown in the following screenshot, which sets the logging = "FALSE" option in the virtual machine VMX file.

While disabling logging does reduce disk IO, it also removes log files that may be used for advanced troubleshooting or auditing purposes. Consider the implications of this change before placing the desktop into production.
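If you prefer to script this change rather than click through the vCenter client, the resulting VMX entry is the one named above:

    logging = "FALSE"

The following PowerCLI sketch applies the same setting; it is an illustration only, and the "Win7-Master" virtual machine name is a hypothetical placeholder, not part of the original steps. It assumes VMware PowerCLI and an existing vCenter connection:

    # Hypothetical VM name; requires VMware PowerCLI and an active vCenter connection.
    $vm = Get-VM -Name "Win7-Master"
    New-AdvancedSetting -Entity $vm -Name "logging" -Value "FALSE" -Force -Confirm:$false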
Removing unneeded devices

By default, a virtual machine contains several devices that may not be required in a Virtual Desktop environment. If these devices are not required, they should be removed to free up server resources. The following steps outline how to remove the unneeded devices:

1. In the vCenter client, right-click on the desktop master image virtual machine and click on Edit Settings to open the Virtual Machine Properties window.
2. In the Virtual Machine Properties window, under Hardware, highlight Floppy drive 1 as shown in the following screenshot and click on Remove.
3. Select the Options tab. Under Settings, highlight Boot Options.
4. Check the checkbox under the Force BIOS Setup section as shown in the following screenshot.
5. Click on OK to close the Virtual Machine Properties window.
6. Power on the virtual machine; it will boot into the PhoenixBIOS Setup Utility. The PhoenixBIOS Setup Utility menu defaults to the Main tab.
7. Use the down arrow key to move down to Legacy Diskette A, and then press the Space bar until the option changes to Disabled.
8. Use the right arrow key to move to the Advanced tab.
9. Use the down arrow key to select I/O Device Configuration and press Enter to open the I/O Device Configuration window.
10. Disable the serial ports, parallel port, and floppy disk controller as shown in the following screenshot. Use the up and down arrow keys to move between devices, and the Space bar to disable or enable each as required.
11. Press the F10 key to save the configuration and exit the PhoenixBIOS Setup Utility.

Do not remove the virtual CD-ROM device, as it is used by vSphere when performing an automated installation or upgrade of the VMware Tools software.

Customizing the Windows desktop OS cluster size

Microsoft Windows uses a default cluster size, also known as allocation unit size, of 4 KB when creating the boot volume during a new installation of Windows. The cluster size is the smallest amount of disk space that can be used to hold a file, which affects how many disk writes must be made to commit a file to disk. For example, when a file is 12 KB in size and the cluster size is 4 KB, it will take three write operations to write the file to disk.

The default 4 KB cluster size will work with any storage option that you choose to use with your environment, but that does not mean it is the best option. Storage vendors frequently do performance testing to determine which cluster size is optimal for their platforms, and it is possible that some of them will recommend that the Windows cluster size be changed to ensure optimal performance.

The following steps outline how to change the Windows cluster size during the installation process; the process is the same for both Windows 7 and Windows 8. In this example, we will be using an 8 KB cluster size, although any size can be used based on the recommendation from your storage vendor. The cluster size can only be changed during the Windows installation, not after. If your storage vendor recommends the 4 KB Windows cluster size, the default Windows settings are acceptable.

1. Boot from the Windows OS installer ISO image or physical CD and proceed through the install steps until the Where do you want to install Windows? dialog box appears.
2. Press Shift + F10 to bring up a command window.
3. In the command window, enter the following commands:

   diskpart
   select disk 0
   create partition primary size=100
   active
   format fs=ntfs label="System Reserve" quick
   create partition primary
   format fs=ntfs label=OS_8k unit=8192 quick
   assign
   exit

4. Click on Refresh to refresh the Where do you want to install Windows? window.
5. Select Drive 0 Partition 2: OS_8k, as shown in the following screenshot, and click on Next to begin the installation.

The System Reserve partition is used by Windows to store files critical to the boot process and will not be visible to the end user. These files must reside on a volume that uses a 4 KB cluster size, so we created a small partition solely for that purpose. Windows will automatically detect this partition and use it when performing the Windows installation. In the event that your storage vendor recommends a different cluster size than the one shown in the previous example, replace the 8192 in the sample command in step 3 with whatever value the vendor recommends, in bytes, without any punctuation.
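Once Windows is installed, you can confirm that the boot volume actually received the intended allocation unit size. This quick check uses the built-in fsutil tool from an elevated command prompt; the drive letter is an assumption for illustration:

    fsutil fsinfo ntfsinfo C:

Look for the Bytes Per Cluster value in the output; for the 8 KB example above it should read 8192.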
Windows OS pre-deployment tasks

The following tasks are unrelated to the other optimization tasks described in this article, but they should be completed prior to placing the desktop into production.

Installing VMware Tools

VMware Tools should be installed prior to the installation of the View Agent software. To ensure that the master image has the latest version of the VMware Tools software, apply the latest updates to the host vSphere server prior to installing the tools package on the desktop. The same applies if you are updating your VMware Tools software: the View Agent software should be reinstalled after the VMware Tools software is updated, to ensure that the appropriate View drivers are installed in place of the versions included with VMware Tools.

Cleaning up and defragmenting the desktop hard disk

To minimize the space required by the Virtual Desktop master image and ensure optimal performance, the Virtual Desktop hard disks should be cleaned of nonessential files and optimized prior to deployment into production. The following actions should be taken once the Virtual Desktop master image is ready for deployment:

1. Use the Windows Disk Cleanup utility to remove any unnecessary files.
2. Use the Windows Defragment utility to defragment the virtual hard disk.

If the desktop virtual hard disks are thinly provisioned, you may wish to shrink them after the defragmentation completes, as sketched below. This can be performed with utilities from your storage vendor if available, by using the vSphere vmkfstools utility, or by using the vSphere Storage vMotion feature to move the virtual machine to a different datastore. Visit your storage vendor's documentation or the VMware vSphere documentation (http://www.vmware.com/support/pubs/vsphere-esxi-vcenter-server-pubs.html) for instructions on how to shrink virtual hard disks or perform a Storage vMotion.
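As one possible approach, the vmkfstools step can be run from an ESXi shell once the desktop is powered off. The datastore path and disk name below are hypothetical, and the -K (punchzero) option assumes the blocks to be reclaimed were first zeroed inside the guest (for example, with a free-space zeroing tool):

    # Reclaim zeroed blocks from a thin-provisioned disk; run with the VM powered off.
    vmkfstools -K /vmfs/volumes/datastore1/Win7-Master/Win7-Master.vmdk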


Authoring an EnterpriseDB report with Report Builder 2.0

Packt
07 Oct 2009
4 min read
Overview

The following steps will be followed in authoring an intranet report using Postgres Plus as the backend database:

1. Creating an ODBC DSN to access the data on Postgres Plus
2. Creating a report with Report Builder 2.0
3. Configuring the data source for the control
4. Configuring the report layout to display data
5. Deploying the report on the Report Server

The categories table shown will be used to provide the data for the report. The categories table is in the PGNorthwind database on the EnterpriseDB Advanced Server 8.3 shown in the next figure.

Creating an ODBC DSN to access the data on Postgres Plus

1. Click on Start | Control Panel | Administrative Tools | Data Sources (ODBC) to bring up the ODBC Data Source Administrator window. Click on Add....
2. In the Create New Data Source window, scroll down to choose EnterpriseDB 8.3 under the list heading Name, as shown. Click Finish. The EnterpriseDB ODBC Driver page gets displayed as shown.
3. Accept the default name for the Data Source Name (DSN) or, if you prefer, change the name. Here the default is accepted. The Database, Server, User Name, Port, and Password should all be available to you.
4. If you click on the option button Datasource, you display a window with two pages, as shown. Make no changes to the pages and accept the defaults, but make sure you review them. Click OK and you will be back in the EnterpriseDB Driver window.
5. If you click on the option button Global, the Global Settings window gets displayed (not shown). These are logging options, as the page describes. Click Cancel to close the Global Settings window.
6. Click on the Test button and verify that the connection is successful.
7. Click on the Save button and save the DSN under the list heading User DSN. The DSN EnterpriseDB enters the list of DSNs created, as shown here.

Creating a report with Report Builder 2.0

It is assumed that Report Builder 2.0 is installed on the computer on which both EnterpriseDB's Advanced Server 8.3 and Reporting Services / SQL Server 2008 are installed. To proceed further, the SQL Server 2008 Database Engine and Reporting Services must be started and running.

1. Go to Start | All Programs | Microsoft SQL Server 2008 | Report Builder 2.0; this brings up the following display. Click on Report Builder 2.0. The Report Builder 2.0 gets displayed as shown.
2. Double-click the Table or Matrix icon. This displays the New Table or Matrix window as shown.

Configuring the data source for the control

1. Click on the New... button. This brings up the Data Source Properties window as shown.
2. Click on the drop-down handle for Select connection type:, scroll down, and click on ODBC.
3. Click on the Build... button. This brings up the Connection Properties window as shown.
4. Click on the option Use user or system data source name and then click on the drop-down handle. In the list of ODBC DSNs on this machine, click on EnterpriseDB as shown.
5. Enter credentials for the EnterpriseDB as shown. Test the connection with the Test Connection button. A Test Results message will be generated displaying success if the connection information provided is correct, as shown.
6. Click OK on the two open windows. This will bring you back to the Data Source Properties window, displaying the connection string.
7. Click on the navigational link Credentials on the left, which opens "Change the credentials used to connect to the data source". Note that the credentials you supplied earlier have arrived at this page.
If you need to be prompted for credentials, you may check that option; here no changes will be made. Click on the OK button.

In the New Table or Matrix window, DataSource1 (in this report) will be highlighted. Click on the Next button. This brings up the Design a query window as shown. In the top pane of the query window, type in the SQL statement as shown. You may also test the query by clicking on the (!) button; the query result will appear in the bottom pane as shown.
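The SQL statement itself is not reproduced in this excerpt. The following is a minimal sketch of the kind of query you might type into the top pane; the column names are assumptions based on a typical Northwind-style categories table, so adjust them to match your PGNorthwind schema:

    -- Hypothetical column names; verify them against your categories table.
    SELECT category_id, category_name, description
    FROM categories
    ORDER BY category_id;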


Cooking XML with OOP

Packt
23 Oct 2009
9 min read
Formation of XML

Let us look at the structure of a common XML document, in case you are totally new to XML. If you are already familiar with XML (which we greatly recommend for this article), then this section is not for you. Let's look at the following example, which represents a set of emails:

<?xml version="1.0" encoding="ISO-8859-1" ?>
<emails>
 <email>
  <from>nowhere@notadomain.tld</from>
  <to>unknown@unknown.tld</to>
  <subject>there is no subject</subject>
  <body>is it a body? oh ya</body>
 </email>
</emails>

So you see that XML documents have a small declaration at the top which details the character set of the document. This is useful if you are storing Unicode text. In XML, you must close every tag that you open (XML is stricter than HTML; you must follow the conventions). Let's look at another example where there are some special symbols in the data:

<?xml version="1.0" encoding="ISO-8859-1" ?>
<emails>
 <email>
  <from>nowhere@notadomain.tld</from>
  <to>unknown@unknown.tld</to>
  <subject>there is no subject</subject>
  <body><![CDATA[is it a body? oh ya, with some texts & symbols]]></body>
 </email>
</emails>

This means you have to enclose all strings containing special characters in CDATA. Again, each entity may have some attributes with it. For example, consider the following XML, where we describe the properties of a student:

<student age="17" class="11" title="Mr.">Ozniak</student>

In the above example, there are three attributes on this student tag—age, class, and title. Using PHP we can easily manipulate them too. In the coming sections we will learn how to parse XML documents and how to create XML documents on the fly.

Introduction to SimpleXML

In PHP4 there were two ways to parse XML documents, and these are also available in PHP5. One is parsing documents via SAX (which is a standard); the other is DOM. But it takes quite a long time to parse XML documents using SAX, and it also takes quite a long time to write the code. In PHP5 a new API has been introduced to easily parse XML documents, named the SimpleXML API. Using the SimpleXML API you can turn your XML documents into an easily traversable object structure, where each node is converted to an accessible form for easy parsing.

Parsing Documents

In this section we will learn how to parse basic XML documents using SimpleXML. Let's take a breath and start.

<?php
$str = <<< END
<emails>
 <email>
  <from>nowhere@notadomain.tld</from>
  <to>unknown@unknown.tld</to>
  <subject>there is no subject</subject>
  <body><![CDATA[is it a body? oh ya, with some texts & symbols]]></body>
 </email>
</emails>
END;
$sxml = simplexml_load_string($str);
print_r($sxml);
?>

The output is like this:

SimpleXMLElement Object
(
    [email] => SimpleXMLElement Object
        (
            [from] => nowhere@notadomain.tld
            [to] => unknown@unknown.tld
            [subject] => there is no subject
            [body] => SimpleXMLElement Object
                (
                )
        )
)

So now you can ask how to access each of these properties individually. You can access each of them like an object. For example, $sxml->email[0] returns the first email object. To access the from element under this email, you can use the following code:

echo $sxml->email[0]->from;

So each object, unless available more than once, can be accessed just by its name. Otherwise you have to access them like a collection. For example, if you have multiple email elements, you can access each of them using a foreach loop:

foreach ($sxml->email as $email)
    echo $email->from;

Accessing Attributes

As we saw in the previous example, XML nodes may have attributes.
Remember the example document with class, age, and title? Now you can easily access these attributes using the SimpleXML API. Let's see the following example:

<?php
$str = <<< END
<emails>
 <email type="mime">
  <from>nowhere@notadomain.tld</from>
  <to>unknown@unknown.tld</to>
  <subject>there is no subject</subject>
  <body><![CDATA[is it a body? oh ya, with some texts & symbols]]></body>
 </email>
</emails>
END;
$sxml = simplexml_load_string($str);
foreach ($sxml->email as $email)
    echo $email['type'];
?>

This will display the text mime in the output window. So if you look carefully, you will understand that each node is accessible like a property of an object, and all attributes are accessed like keys of an array. SimpleXML makes XML parsing really fun.

Parsing Flickr Feeds using SimpleXML

How about adding some milk and sugar to your coffee? So far we have learned what the SimpleXML API is and how to make use of it. It would be much better if we could see a practical example. In this example we will parse the Flickr feeds and display the pictures. Sounds cool? Let's do it. If you are interested in what the Flickr public photo feed looks like, here is the content. The feed data is collected from http://www.flickr.com/services/feeds/photos_public.gne:

<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<feed>
 <title>Everyone's photos</title>
 <link rel="self" href="http://www.flickr.com/services/feeds/photos_public.gne" />
 <link rel="alternate" type="text/html" href="http://www.flickr.com/photos/"/>
 <id>tag:flickr.com,2005:/photos/public</id>
 <icon>http://www.flickr.com/images/buddyicon.jpg</icon>
 <subtitle></subtitle>
 <updated>2007-07-18T12:44:52Z</updated>
 <generator uri="http://www.flickr.com/">Flickr</generator>
 <entry>
  <title>A-lounge 9.07_6</title>
  <link rel="alternate" type="text/html" href="http://www.flickr.com/photos/dimitranova/845455130/"/>
  <id>tag:flickr.com,2005:/photo/845455130</id>
  <published>2007-07-18T12:44:52Z</published>
  <updated>2007-07-18T12:44:52Z</updated>
  <dc:date.Taken>2007-07-09T14:22:55-08:00</dc:date.Taken>
  <content type="html">&lt;p&gt;&lt;a href=&quot;http://www.flickr.com/people/dimitranova/&quot;&gt;Dimitranova&lt;/a&gt; posted a photo:&lt;/p&gt; &lt;p&gt;&lt;a href=&quot;http://www.flickr.com/photos/dimitranova/845455130/&quot; title=&quot;A-lounge 9.07_6&quot;&gt;&lt;img src=&quot;http://farm2.static.flickr.com/1285/845455130_dce61d101f_m.jpg&quot; width=&quot;180&quot; height=&quot;240&quot; alt=&quot;A-lounge 9.07_6&quot; /&gt;&lt;/a&gt;&lt;/p&gt;</content>
  <author>
   <name>Dimitranova</name>
   <uri>http://www.flickr.com/people/dimitranova/</uri>
  </author>
  <link rel="license" type="text/html" href="deed.en-us" />
  <link rel="enclosure" type="image/jpeg" href="http://farm2.static.flickr.com/1285/845455130_7ef3a3415d_o.jpg" />
 </entry>
 <entry>
  <title>DSC00375</title>
  <link rel="alternate" type="text/html" href="http://www.flickr.com/photos/53395103@N00/845454986/"/>
  <id>tag:flickr.com,2005:/photo/845454986</id>
  <published>2007-07-18T12:44:50Z</published>
  ...
 </entry>
</feed>

Now we will extract the description from each entry and display it. Let's have some fun:

<?php
$content = file_get_contents("http://www.flickr.com/services/feeds/photos_public.gne");
$sx = simplexml_load_string($content);
foreach ($sx->entry as $entry)
{
    echo "<a href='{$entry->link['href']}'>" . $entry->title . "</a><br/>";
    echo $entry->content . "<br/>";
}
?>

This will create the following output. See how easy SimpleXML is?
The output of the above script is shown below.

Managing CDATA Sections using SimpleXML

As we said before, some symbols can't appear directly as the value of a node unless you enclose them in a CDATA tag. For example, take a look at the following example:

<?php
$str = <<<EOT
<data>
 <content>text & images </content>
</data>
EOT;
$s = simplexml_load_string($str);
?>

This will generate the following error:

Warning: simplexml_load_string() [function.simplexml-load-string]: Entity: line 2: parser error : xmlParseEntityRef: no name in C:\OOP with PHP5\Codes\ch8\cdata.php on line 10
Warning: simplexml_load_string() [function.simplexml-load-string]: <content>text & images </content> in C:\OOP with PHP5\Codes\ch8\cdata.php on line 10
Warning: simplexml_load_string() [function.simplexml-load-string]: ^ in C:\OOP with PHP5\Codes\ch8\cdata.php on line 10

To avoid this problem we have to enclose the content in a CDATA tag. Let's rewrite it like this:

<data>
 <content><![CDATA[text & images ]]></content>
</data>

Now it will work perfectly, and you don't have to do any extra work to manage this CDATA section:

<?php
$str = <<<EOT
<data>
 <content><![CDATA[text & images ]]></content>
</data>
EOT;
$s = simplexml_load_string($str);
echo $s->content; //prints "text & images"
?>

However, prior to PHP 5.1, you had to load this section as shown below:

$s = simplexml_load_string($str, null, LIBXML_NOCDATA);
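As a closing sketch, SimpleXML can also enumerate every attribute of a node without knowing its name in advance, via the standard attributes() method (not covered in the excerpt above). The markup here mirrors the earlier email examples; the extra priority attribute is added purely for illustration:

<?php
$str = <<< END
<emails>
 <email type="mime" priority="high">
  <from>nowhere@notadomain.tld</from>
  <body><![CDATA[text & images]]></body>
 </email>
</emails>
END;
$sxml = simplexml_load_string($str);
// attributes() yields each attribute as a name => value pair
foreach ($sxml->email[0]->attributes() as $name => $value) {
    echo $name . " = " . $value . "\n"; // prints: type = mime, priority = high
}
?>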


Drag-and-Drop with the YUI: Part 3

Packt
30 Nov 2009
6 min read
Visual Selection with the Slider Control

So far in this chapter we've focused on the functionality provided by the Drag-and-Drop utility. Let's shift our focus back to the interface controls section of the library and look at one of the components that is related very closely to drag-and-drop—the Slider control.

A slider can be defined using a very minimal set of HTML. All you need are two elements: the slider background and the slider thumb, with the thumb appearing as a child of the background element:

<div id="slider_bg" title="the slider background">
 <div id="slider_thumb" title="the slider thumb">
  <img src="images/slider_thumb.gif">
 </div>
</div>

These elements go together to form the basic Slider control, as shown below.

The Slider control works as a specific implementation of DragDrop, in that the slider thumb can be dragged along the slider background either vertically or horizontally. The DragDrop classes are extended to provide additional properties, methods, and events specific to Slider. One of the main concepts differentiating Slider from DragDrop is that with a basic slider, the thumb is constrained to just one axis of motion, either X or Y, depending on whether the Slider is horizontal or vertical respectively.

The Slider is another control that can be animated by including a reference to the Animation control in the head of the page. Including this means that when any part of the slider background is clicked, the slider thumb will gracefully slide to that point of the background rather than just moving there instantly.

The Constructor and Factory Methods

The constructor for the Slider control is always called in conjunction with one of the three factory methods, depending on which type of slider you want to display. To generate a horizontal slider, YAHOO.widget.Slider.getHorizSlider is called with the appropriate arguments. To generate a vertical slider, YAHOO.widget.Slider.getVertSlider is used instead. There is also another type of slider that can be created—the YAHOO.widget.Slider.getSliderRegion constructor and factory method combination creates a two-dimensional slider, the thumb of which can be moved both vertically and horizontally.

There is a range of arguments used with the different types of slider constructor. The first two arguments are the same for all of them: the first corresponds to the id of the background HTML element and the second to the id of the thumb element. The type of slider you are creating determines what the next two (or four, when using the SliderRegion) arguments relate to. With the horizontal or region slider, the third argument is the number of pixels that the thumb element can move left; with the vertical slider, it is the number of pixels the thumb can move up. The fourth argument is either the number of pixels the thumb can move right, or the number of pixels it can move down. When using the region slider, the fifth and sixth arguments are the number of pixels the thumb element can move up and down, so with this type of slider all four directions must be specified. With either the horizontal or vertical sliders, only two directions need to be accounted for. The final argument (argument five for the horizontal or vertical sliders, or argument seven for the region slider) is optional and refers to the number of pixels between each tick, also known as the tick size.
This is optional because you may not use ticks in your slider, thereby making the Slider control analogue rather than digital.

Class of Two

There are just two classes that make up the Slider control. The YAHOO.widget.Slider class is a subclass of the YAHOO.util.DragDrop class and inherits a whole bunch of its most powerful properties and methods, as well as defining a load more of its own natively. The YAHOO.widget.SliderThumb class is a subclass of the YAHOO.util.DD class and inherits properties and methods from this class (as well as defining a few of its own natively).

Some of the native properties defined by the Slider class and available for you to use include:

- animate—a boolean indicating whether the slider thumb should animate. Defaults to true if the Animation utility is included, false if not.
- animationDuration—a number specifying the duration of the animation in seconds. The default is 0.2.
- backgroundEnabled—a boolean indicating whether the slider thumb should automatically move to the part of the background that is clicked. Defaults to true.
- enableKeys—another boolean, which enables the home, end, and arrow keys on the visitor's keyboard to control the slider. Defaults to true, although the slider control must be clicked once with the mouse before this will work.
- keyIncrement—an integer specifying the number of pixels the slider thumb will move when an arrow key is pressed. Defaults to 25 pixels.

A large number of native methods are also defined in the class, but a good deal of them are used internally by the Slider control and will therefore never need to be called directly in your own code. There are a few of them that you may need at some point, however, including:

- .getThumb()—returns a reference to the slider thumb.
- .getValue()—returns an integer giving the number of pixels the slider thumb has moved from the start position.
- .getXValue()—an integer representing the number of pixels the slider has moved along the X axis from the start position.
- .getYValue()—an integer representing the number of pixels the slider has moved along the Y axis from the start position.
- .onAvailable()—executed when the slider becomes available in the DOM.
- .setRegionValue() and .setValue()—allow you to programmatically set the value of the slider's thumb.

More often than not, you'll find the custom events defined by the Slider control most beneficial in your implementations; a short sketch follows this section. You can capture the slider thumb being moved using the change event, or detect the beginning or end of a slider interaction by subscribing to slideStart or slideEnd respectively.

The YAHOO.widget.SliderThumb class is a much smaller class than the one we have just looked at, and all of its properties are private, meaning that you need not take much notice of them. The available methods are similar to those defined by the Slider class and, once again, are not something you need to concern yourself with in most basic implementations of the control.
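The following is a minimal sketch tying the factory method and the change event together. It assumes the slider_bg/slider_thumb markup shown earlier, a 200-pixel horizontal travel, 20-pixel ticks, and a hypothetical slider_value element for displaying the output:

// Horizontal slider: 0 px travel to the left, 200 px to the right, ticks every 20 px.
var slider = YAHOO.widget.Slider.getHorizSlider("slider_bg", "slider_thumb", 0, 200, 20);
slider.subscribe("change", function (offsetFromStart) {
    // offsetFromStart is the number of pixels the thumb has moved from its start position.
    document.getElementById("slider_value").innerHTML = offsetFromStart;
});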


Building an Ext JS Theme into Oracle APEX

Packt
15 Apr 2011
6 min read
Theme basics

Out of the box, APEX comes with twenty themes, each theme comprising a collection of templates used to define the layout and appearance of an entire application. An application can have many themes, but only one theme is active at any time; switching themes is done at design time only. You can create custom themes, which may be published either within a workspace by a Workspace Manager or for the whole APEX instance by an Internal Workspace Manager. Publishing a theme encourages consistency across applications.

A theme comprises nine template types: breadcrumb, button, calendar, label, list, page, popup list of values, region, and report. A theme must have at least one of each of the nine template types. Within each template type is a set of predefined classes and eight custom classes. For example, the label template has the following classes:

- Not Identified -
No Label
Optional Label
Optional Label with Help
Required Label
Required Label with Help
Custom 1 ... Custom 8

Programmers use these templates to construct the HTML pages that make up an application. Each page is declaratively defined using metadata to select the templates to be used for the presentation. The APEX engine dynamically renders an HTML page using the metadata, assembling the relevant templates and injecting dynamic data into placeholders within the templates. The HTML page is viewed when you request a page through a web browser. When you submit a page, the APEX engine performs page processing, once again using declaratively defined metadata to perform computations, validations, processes, and branching.

This is a typical Model-View-Controller (MVC) pattern, where the view is the HTML generated using the application templates. The APEX engine is the controller; it receives the GET or POST input and decides what to do with it, handing over to domain objects. The domain object model is encapsulated in the page definition and contains the business rules and functionality to carry out specific tasks.

Separation of concerns

The MVC pattern also promotes another good design principle—separation of concerns. APEX has been designed so that the APEX engine, the templates, and the application functionality can be optimized independently of each other. Clearly, the process of assembling and sequencing the steps necessary to render and process a page is important to the overall solution. By separating this out and letting Oracle deal with its complexities through the APEX engine, programmers can concentrate on providing business functionality. Equally, separating presentation templates from business logic allows each aspect to be maintained separately. This provides a number of advantages, including ease of design change, allowing templates to be modified either by different people or at different times to enhance the interface without breaking the application. An excellent example of this is the standard themes provided in APEX, which have been designed to be completely interchangeable. Switching standard themes is simply a matter of loading a theme from the repository and then switching the active theme; APEX then remaps components to the new active theme using the template class identifiers.

Standard themes

We will be building our own custom theme rather than using one of the twenty pre-built ones.
Nevertheless, it's worthwhile knowing what they provide, as we will build our custom theme by using one of them as a "starter". Looking at this image, we can see a preview of the standard APEX themes. Each theme provides a similar interface, so really each standard theme is just a visual variation on the others. The colors used are a little different, or the HTML layout is tweaked slightly, but in reality they are all much the same.

Theme 4 is used by APEX as the "starter" theme and contains one template for each template class for all the template types—a total of 69 templates. Theme 19 is also worth noting, as it's designed for mobile devices. Each of these themes is full of good HTML practices and shows how and where to use the substitution strings.

Creating a theme

When creating a theme, you can choose to copy one from the repository, create one from scratch, or create one from an export file. The repository and export-file options copy the entire theme, and you then start editing the templates to suit. Creating a theme from scratch creates a theme without any templates; you then need to define templates of each different type before you can switch from the active theme.

In my opinion, the easiest way to build a new theme is to take the approach that the application should always be in a working state, and the way to do this is to create a new, empty TEMPLATE application using a standard theme and build from there. From this working base, you can progressively convert the templates to use Ext functionality, building simple test pages as you go to verify the templates. These test pages also form part of your template documentation, allowing team members to examine and understand specific functionality in isolation.

Once a theme has templates for each of the nine template types, you can publish the theme into the workspace to be used by your business applications. The following screenshot shows a dialog named Create Workspace Theme from the APEX wizard. Notice that you can change the theme number when you publish a theme, providing a very simple mechanism for version-controlling your themes. A published theme can't be edited directly once it has been created, but using a TEMPLATE application you can republish it using a different theme number. Applications can have multiple themes, but only one active theme. By switching themes, applications can easily test a new version, safe in the knowledge that changing back to the earlier version is just a matter of switching back to the prior theme.

So before we go any further, create a new TEMPLATE application based on Theme 4, and let's begin the process of creating our Ext JS theme.

Building a Viewport Page template

Several of the examples provided with Ext feature the Viewport utility container, including the RSS Feed Viewer shown in the screenshot below. The Viewport automatically renders to the document body, sizing itself to the browser viewport and dividing the page into up to five distinct regions; the center region is mandatory, with the north, south, east, and west regions being optional. Viewport regions are configurable, and by setting a few simple attributes you quickly find yourself with a very interactive page, with expanding/collapsing regions and splitters to resize regions by clicking and dragging with the mouse. We are going to build a basic viewport for our APEX page template including all five regions.
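To make the target concrete, the following is a minimal sketch of such a five-region viewport in Ext JS (2.x/3.x style). The div ids are hypothetical, and in a real APEX page template the center region's markup would wrap the #BODY# substitution string rather than a static div:

Ext.onReady(function () {
    new Ext.Viewport({
        layout: 'border',
        items: [
            { region: 'north',  height: 60,  contentEl: 'app-north' },
            { region: 'west',   width: 200,  split: true, collapsible: true, title: 'Navigation' },
            { region: 'center', contentEl: 'app-body', autoScroll: true },
            { region: 'east',   width: 150,  split: true, collapsible: true, title: 'Tools' },
            { region: 'south',  height: 30,  contentEl: 'app-south' }
        ]
    });
});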


Foundations

Packt
20 Nov 2013
6 min read
Installation

If you do not have node installed, visit http://nodejs.org/download/. There is also an installation guide on the node GitHub repository wiki if you prefer not to, or cannot, use an installer: https://github.com/joyent/node/wiki/Installation.

Let's install Express globally:

npm install -g express

If you have downloaded the source code, install its dependencies by running this command:

npm install

Testing Express with Mocha and SuperTest

Now that we have Express installed and our package.json file in place, we can begin to drive out our application with a test-first approach. We will now install two modules to assist us: mocha and supertest.

Mocha is a testing framework for node; it's flexible, has good async support, and allows you to run tests in both a TDD and a BDD style. It can also be used on both the client and server side. Let's install Mocha with the following command:

npm install -g mocha --save-dev

SuperTest is an integration testing framework that will allow us to easily write tests against a RESTful HTTP server. Let's install SuperTest:

npm install supertest --save-dev

Continuous testing with Mocha

One of the great things about working with a dynamic language, and one of the things that has drawn me to node, is the ability to easily do Test-Driven Development and continuous testing. Simply run Mocha with the -w watch switch and Mocha will respond when changes to our codebase are made, automatically rerunning the tests:

mocha -w

Extracting routes

Express supports multiple options for application structure. Extracting elements of an Express application into separate files is one option; a good candidate for this is routes. Let's extract our route heartbeat into ./lib/routes/heartbeat.js; the following listing simply exports the route as a function called index:

exports.index = function(req, res){
  res.json(200, 'OK');
};

Let's make a change to our Express server: remove the anonymous function we pass to app.get for our route and replace it with a call to the function in the following listing. We import the route heartbeat and pass in a callback function, heartbeat.index:

var express = require('express')
  , http = require('http')
  , config = require('../configuration')
  , heartbeat = require('../routes/heartbeat')
  , app = express();

app.set('port', config.get('express:port'));
app.get('/heartbeat', heartbeat.index);
http.createServer(app).listen(app.get('port'));
module.exports = app;

404 handling middleware

In order to handle a 404 Not Found response, let's add a 404 not found middleware. Let's write a test, ./test/heartbeat.js; the content type returned should be JSON and the status code expected should be 404 Not Found:

describe('vision heartbeat api', function(){
  describe('when requesting resource /missing', function(){
    it('should respond with 404', function(done){
      request(app)
        .get('/missing')
        .expect('Content-Type', /json/)
        .expect(404, done);
    })
  });
});

Now, add the following middleware to ./lib/middleware/notFound.js. Here we export a function called index and call res.json, which returns a 404 status code and the message Not Found. The next parameter is not called, as our 404 middleware ends the request by returning a response.
Calling next would call the next middleware in our Express stack; we do not have any more middleware after this one. It's customary to add error middleware and 404 middleware as the last middleware in your server:

exports.index = function(req, res, next){
  res.json(404, 'Not Found.');
};

Now add the 404 not found middleware to ./lib/express/index.js:

var express = require('express')
  , http = require('http')
  , config = require('../configuration')
  , heartbeat = require('../routes/heartbeat')
  , notFound = require('../middleware/notFound')
  , app = express();

app.set('port', config.get('express:port'));
app.get('/heartbeat', heartbeat.index);
app.use(notFound.index);
http.createServer(app).listen(app.get('port'));
module.exports = app;

Logging middleware

Express comes with a logger middleware via Connect; it's very useful for debugging an Express application. Let's add it to our Express server ./lib/express/index.js:

var express = require('express')
  , http = require('http')
  , config = require('../configuration')
  , heartbeat = require('../routes/heartbeat')
  , notFound = require('../middleware/notFound')
  , app = express();

app.set('port', config.get('express:port'));
app.use(express.logger({ immediate: true, format: 'dev' }));
app.get('/heartbeat', heartbeat.index);
app.use(notFound.index);
http.createServer(app).listen(app.get('port'));
module.exports = app;

The immediate option will write a log line on request instead of on response. The dev option provides concise output colored by the response status. The logger middleware is placed high in the Express stack in order to log all requests.

Logging with Winston

We will now add logging to our application using Winston; let's install Winston:

npm install winston --save

The 404 middleware will need to log 404 not found, so let's create a simple logger module, ./lib/logger/index.js; the details of our logger will be configured with Nconf. We import Winston and the configuration modules. We define our Logger function, which constructs and returns a file logger—winston.transports.File—that we configure using values from our config. We default the logger's maximum size to 1 MB, with a maximum of three rotating files. We instantiate the Logger function, returning it as a singleton.

var winston = require('winston')
  , config = require('../configuration');

function Logger(){
  return winston.add(winston.transports.File, {
    filename: config.get('logger:filename'),
    maxsize: 1048576,
    maxFiles: 3,
    level: config.get('logger:level')
  });
}

module.exports = new Logger();

Let's add the logger configuration details to our config files ./config/development.json and ./config/test.json:

{
  "express": {
    "port": 3000
  },
  "logger": {
    "filename": "logs/run.log",
    "level": "silly"
  }
}

Let's alter the ./lib/middleware/notFound.js middleware to log errors. We import our logger and log an error message via the logger when a 404 Not Found response is thrown:

var logger = require("../logger");

exports.index = function(req, res, next){
  logger.error('Not Found');
  res.json(404, 'Not Found');
};

Summary

This article has shown in detail, with all the commands, how Node.js is installed along with Express. The testing of Express with Mocha and SuperTest is shown in detail, as is logging with the logger middleware and Winston.
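The excerpt shows the 404 test but not a matching test for the heartbeat route itself. The following is a minimal sketch of what ./test/heartbeat.js might also contain, assuming app is exported from ./lib/express as in the listings above and that mocha and supertest are installed as described:

var request = require('supertest')
  , app = require('../lib/express');

describe('vision heartbeat api', function(){
  describe('when requesting resource /heartbeat', function(){
    it('should respond with 200 OK', function(done){
      request(app)
        .get('/heartbeat')
        .expect('Content-Type', /json/)
        .expect(200, done);
    });
  });
});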

Themes in e107

Packt
22 Oct 2009
4 min read
What is a Theme?

Back in the mid 1970s, programming code became so immense that changing even the smallest part of a piece of code could have unpredictable effects in other areas of the program. This led to the development of a concept called modular programming, which essentially broke the program down into smaller, more manageable chunks that were called upon by the main program when needed. The term for modular programming these days is separation of concerns, or SoC, but no matter what you call it, this is its function.

In e107 a theme contains a set of modular code containing XHTML (eXtensible HyperText Markup Language) and CSS (Cascading Style Sheets). In its most basic explanation, XHTML allows us to take a text document and create links to other documents, and the eXtensible part of the language allows you to create your own tags describing data. Thus a program like e107 can use these tags to extract the information it needs. The data shouldn't show up on the screen like a computer printout; CSS is employed to define the layout of the various elements and tags such that they may be viewed with consistency by all browsers.

The screenshot below shows you the theme files that are available in e107 on the left, and the typical files and folders that make up a particular theme on the right. They say a picture is worth a thousand words, so I have used the PHP code that makes up the Reline theme, which is what we are using for our front end. Open up your FTP client and go to /public_html/e107_themes/reline/. Locate the file theme.php and use your FTP client's view/edit feature to open the file.

As you can see, there is a fair amount of work that goes into creating a theme. If you want to design your own themes, I strongly recommend that you be thoroughly familiar with PHP, XHTML, and CSS before making the attempt. We won't be tackling theme making in this article; that subject needs a different book entirely. However, if you have the knowledge and want to create your own theme, you can find information in the WIKI at http://wiki.e107.org/?title=Creating_a_theme_from_scratch. I wanted to show you that these themes take effort, so you will appreciate those who take the time to develop them, especially those who develop themes and share them at no charge. Remember to thank them, encourage them, and offer to send a little money if you like the theme and can use it. It is not a requirement, but it encourages more development. For more information on customizing your website, visit ThemesWiki.org.

Understanding the Theme Layout

The first thing to do is log in as administrator and select Admin Area | Menus. The screenshot below shows the correlation between the areas displayed on the administrator's menu items control page and those on the non-administrator page.

Psychology of Color

One of the biggest mistakes people make is to choose their theme based on their personal preferences for layout and colors. You can have the perfect layout and great material on your site and yet people will just not like the site. So you need to ask yourself, "Why do people not like my site?" or "Why aren't they buying from my site?" The answer is probably that your theme uses a color that is sending a very different message to your viewers' brains. This is a subject of protracted discussion, and there are college courses on it; professionally it is referred to as the psychology of color. Your best bet for online information on colors is http://www.pantone.com.
Selecting a Theme

Sometimes the default theme does not quite convey the style you want for your site. While functionality is always the primary consideration, that does not mean you have to abandon your sense of style just because you are using a CMS. There are three types of themes available for e107: the core themes, additional themes (located at http://www.e107themes.org), and custom themes.


Metadata in Oracle Universal Content Management

Packt
09 Aug 2010
5 min read
Let's begin by looking at the metadata.

Exploring metadata

In case you forgot, metadata fields are there to describe the actual data, such as the file name, shooting date, and camera name for a digital picture. There are two types of metadata fields: Standard and Extended (or Custom). Let's take a closer look at what they can do for us.

Standard metadata

Standard metadata is essential for the system to function. These are fields like content ID, revision ID, check-in date, and author. Let's take a quick look at all of them so you have a full picture.

Lab 2: Exploring standard metadata

1. Click on the Quick Search button on the top right. Yes, leave the search box blank. If you do that, you'll get all the content in the repository.
2. In the last column on the Search Results page, click on the i icon on any of the result rows. That brings up a Content Info screen.

From this screen there is no way to tell which fields are Standard and which are Extended. So how do you tell?

Explore the database

That's right. A Content Server uses a relational database, like Oracle or SQL Server, to store its metadata, so let's look there. If you are using SQL Server 2005 as your database, open SQL Server Management Studio; if not, bring up your SQL tool of choice. Check the list of columns in the table called Revisions (as shown in the following screenshot).

Most of the column names in Revisions are the standard metadata fields. Here's a list of the fields you will be using most often:

- dID: ID of the document revision. This number is globally unique. If you have a project plan with three revisions, each of the three will have a unique dID and all of them will have the same Content ID.
- dDocName: this is the actual Content ID.
- dDocType: content type of the document.

dDocName, or Content ID, is the unique identifier for a content revision set. dID is the unique identifier of each individual content revision within a set. Being able to identify a content revision set is very useful, as it shows and tracks (makes auditable) the changes of content items over time. Being able to identify each individual revision with dID is also very useful, as it lets us work with specific content revisions. This is one of the great advantages of the Content Server over other systems, which only store the changes between revisions. Full revision sets as well as individual revisions are managed objects, and each one can be accessed by its own unique URL.

Now run this SQL statement:

select * from Revisions;

This shows the actual documents in the system and their values for the standard meta fields (as shown in the following screenshot).
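Since the dID, dDocName, and dDocType columns are confirmed above, slightly richer queries against the Revisions table are easy to sketch. These are illustrative only; the content name in the second query is hypothetical, and you should adjust the syntax to your database's SQL dialect:

-- One row per content item, with its type and how many revisions it has accumulated.
SELECT dDocName, dDocType, COUNT(dID) AS revision_count
FROM Revisions
GROUP BY dDocName, dDocType
ORDER BY dDocName;

-- All revisions of a single (hypothetical) content item.
SELECT dID, dDocName, dDocType
FROM Revisions
WHERE dDocName = 'PROJECT_PLAN_001';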
And now let's look at the all-important Content Types.

Content Types

Content Type is a special kind of meta field. That's all. UCM puts a special emphasis on it, as this is the value that differentiates a project plan from a web page, and a team photo from a vendor invoice. You may even choose to change the way your check-in and content info forms look, based on the type of the document. Let's look at how UCM handles Content Types.

Lab 3: Exploring content types

1. In Content Server, go to Administration | Admin Applets.
2. Launch the Configuration Manager.
3. Select Options | Content Types... (as shown in the following screenshot). The Content Types dialog opens.

As you can see, out of the box Content Server has seven types—one for each imaginary department. This is a good way of segregating content. You can also go by the actual type of content. For instance, you can have one Content Type for Invoice and one for Project Plan. They will also have different meta fields: an Invoice will have a Contract Number and a Total Amount, while a Project Plan will have a project name and a manager's name. Now let me show you how to add content types.

How to add a Content Type

It's easy to add a new Content Type. Just click on Add..., and fill in the type name and the description. You can also select an icon for the new type. What if you need to upload a new icon? Just make it an 8-bit GIF file, 30x37 px, 96 dpi, and upload it to:

C:\oracle\ucm\server\weblayout\images\docgifs

If your install path is different, or you're not running on Windows, then make the appropriate corrections.

How to edit or delete a Content Type

The only thing to know about editing is that you cannot really change the type name; all you can update is the icon or the description. If you're ready to delete a type, make sure there is no content in the repository that's using it: either update all of that content or delete it first. How would you go about doing a mass update? I'll show you one way using Archiver (a full treatment of Archiver is out of the scope of this article). And now let's proceed to Custom Metadata.


How to Expand your Knowledge

Packt
17 Feb 2014
11 min read
One of the most frequently asked questions on help forums probably is, "How can I learn Google Apps Script?" The answer is almost always the same: learn JavaScript and follow the numerous tutorials available on the Internet. No doubt it is one possible way to learn, but it is also one of the most difficult. I shall express my opinion on that subject at the end of this article, but let us first summarize what we really need to be able to use Google Apps Script efficiently.

The first and most important thing we must have is a clear idea of what we want to achieve. This seems a bit silly, because we think, "Oh well, of course I know what I want; I just don't know how to do it!" As a matter of fact, this is often not the case. Let us take an example: a colleague asked me recently how he could count the time he was spending at school for meetings and other administrative tasks, not taking into account his hours as a teacher. This was supposed to be a simple problem, as everyone in our school has a personal calendar in which all the events that we are invited to are recorded. So he began to search for a way to collect every possible event from his calendar into a spreadsheet, and from there—since he can definitely use a spreadsheet—he intended to do some data filtering to get the result he wanted.

I told him to have a look at the Google Apps Script documentation and see what tools he had to pick up data from calendars and import them into a spreadsheet. A few days later, he came back to me complaining that he didn't find any appropriate methods to do what he needed. And, in a way, he was right; nowhere is such a workflow explained, and that is actually not surprising. One can't imagine compiling all the possible workflows into a single help resource; there are definitely too many different use cases, each of them needing a particular approach. We had a discussion where I told him to think about his research as a series of simple and accurate parts and steps before trying to get the whole process in one stroke. The following is what he told me another few days later:

"I knew nothing about this macro language, so I discovered that it is based on JavaScript with the addition of Google's own services that use a similar syntax, and that the whole thing is composed of functions calling each other and having parameters. Then I examined the calendar service and saw that it needs so-called date objects to choose a start and end date. Date object methods are pretty well explained on Mozilla's page, so once I got that I had an array of events, I thought: what the heck is an array of objects? You gave me the link to this w3schools site, so I took a look at their definition; that was enough for me to go further and discover that I could use a loop to handle each event separately. Google documentation shows all the methods to get event details; that part was easy, and now I have all my calendar events with dates, hours, description, title. All of it! I tell you."

I'm not going to transcribe all of our conversation—it finally took a couple of hours—but towards the end, he was describing the process so well that the actual writing of his script was almost a formality. With the help of the Content assist (autocomplete) feature of the script editor and a couple of browser tabs left open on JavaScript and Google documentation, he managed to write his script in one day.
Of course, the script was not perfect and was by no means optimized for speed, nor did it produce nice-looking results, but it worked and he had the data he was looking for. At that point, he could post his script on a help forum if something went wrong, or try to improve it in another version if he's a perfectionist, but that depends only on his will to go further or not. I would simply say one thing: you will learn what you need. If you don't need it, don't try to learn it, as you will forget it faster than you learned it. If you do, then be prepared to need something else right after; it is an endless journey!

JavaScript versus Google Apps Script

The following is stated on the overview of the Google Apps Script documentation page:

Google Apps Script is a scripting language based on JavaScript that lets you do new and cool things with Google Apps like Docs, Sheets, and Forms.

They should use a bigger typeface to make it more visible! The keyword here is based on JavaScript, because it does indeed use most of JavaScript Version 1.6 (with some portions of Version 1.7 and Version 1.8) and its general syntax. But it has so many other methods that knowing only JavaScript is clearly not sufficient to use it adequately. I would even say that you can learn it step-by-step when you need it, looking for information on a specific item each time you use a new type of object.

The following code uses the getTime method and then takes the integer part of the result:

function myAgeInHours() {
  var myBirthDate = new Date('1958/02/19 02:00:00').getTime();
  myBirthDate = parseInt(myBirthDate/3600000, 10);
  var today = parseInt(new Date().getTime()/3600000, 10);
  return today - myBirthDate;
}

We looked at the documentation about the Date object to find the getTime() method, and then found parseInt to get the integer part of the result. Well, I'm convinced that this approach is more efficient than spending hours on a site or in a book that shows all JavaScript information from A to Z. We have powerful search engines in our browsers, so let's use them; they always find the answer for us in less time than it takes to write the question.

Concerning methods specific to Google Apps Script, I think the approach should be different. The Google API documentation is pretty well organized and is full of code examples that clearly show us how to use almost every single method. If we start a project in a spreadsheet, it is a good idea to carefully read the section about spreadsheets (https://developers.google.com/apps-script/reference/spreadsheet) at least once and just check whether what it says makes any sense. For example, in the Sheet class, I found this description:

Returns the range with the top left cell at the given coordinates, and with the given number of rows.

If I understand what a range and coordinates are, then I probably know enough to be able to use that method, getRange(row, column, numRows), or a similar one. You want me to tell you the truth? I didn't know we could get a range this way, by simply defining the top-left cell and just the number of rows (only three parameters). I always use the next one in the list, whose description says:

Returns the range with the top left cell at the given coordinates with the given number of rows and columns.

So after all this time I have spent on dozens of spreadsheet scripts, there are still methods that I can't even imagine exist! That's actually a nice confirmation of what I was suggesting: one doesn't need to know everything to be able to use it, but it's always a good idea to read the docs from time to time.
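To see the difference between the two variants, here is a tiny sketch (the sheet contents are irrelevant; only the returned ranges matter):

function compareGetRangeVariants() {
  var sheet = SpreadsheetApp.getActiveSheet();
  // Three parameters: top-left cell at row 1, column 1, spanning 10 rows.
  // This yields a single-column range, A1:A10.
  var column = sheet.getRange(1, 1, 10);
  Logger.log(column.getA1Notation()); // logs "A1:A10"
  // Four parameters: the variant I always used adds a column count, A1:C10.
  var block = sheet.getRange(1, 1, 10, 3);
  Logger.log(block.getA1Notation()); // logs "A1:C10"
}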
Infinite resources

JavaScript is a very popular language; there are thousands of websites that show us examples and explain methods and functions. To these websites we must add all the forums and Q&A sites that come up when we search for something on Google (or any other search engine), and that abundance is actually an unforeseen difficulty. It happens quite often that we find false information or code snippets that simply don't work, either because they have typos in them or because they are so badly written that they work only in a very specific and peculiar situation. My personal solution is to use only a couple of websites and perform searches on their own search engines, avoiding all the sources I'm not sure of. Maybe I miss something at times, but at least the information I get is trustworthy.

Last but certainly not least, the help forum recommended by the Google Apps Script team, http://stackoverflow.com/questions/tagged/google-apps-script (with the google-apps-script tag), is the best resource available. With more than 5000 questions (as of January 2014), the forum probably has threads about every possible use case, and a good portion of them have answers as well. There are of course other interesting tags: JavaScript, Google docs, Google spreadsheets, and a few even more specific ones. I have rarely seen really bad answers—although it does happen sometimes—simply because so many people read these posts that they generally flag or comment on answers that contain wrong information. There are also people from Google who regularly keep an eye on it and clarify any ambiguous response.

Being a newbie is, by definition, temporary

When I began to use Google spreadsheets and scripts, I found the Google Group Help forum (which does not exist anymore) an invaluable source of information and help, so I asked dozens of questions—some of them very basic and naive—and always got answers. After a while, since I was spending hours on this forum reading about every post I found, I began to answer too. I was so proud of being able to answer a question! It was almost like passing an examination; I knew that one of the experts there was going to read what I wrote and evaluate my knowledge; quite stressful, but also satisfying when you don't fail!

So after a couple of months I gained my first level point (on the Google Group forum, there were no reputation points but levels, starting from 1 for newly arrived members up to TC (Top Contributor), whose level is unknown but is generally more than 15 or 20; anyway, that's not important). That little story is just a way to encourage any beginner to spend some time on this forum, consider every question as a challenge, and try to answer it. Of course, there is no need to publish your answer every time, as there is a chance you may get it all wrong, but just use this as an exercise that will give you more and more expertise. From time to time, you'll be able to be the first or best answerer and gain a few reputation points; consider it a game, just a fun game where all you can finally win is knowledge and all you can lose is your newbie status—not a bad deal after all!

Try to find your own best learning method

I'm certainly not pretending that I know the best learning method for anyone.
All the tips I presented in the previous section did work for me—and for a few other people I know—but there is no magic formula that would suit everyone. I know that each of us has a different background and follows a different path, but I wanted to say loud and clear that you don't need to be an IT graduate to begin with Google Apps Script, nor do you have to spend hours learning rules and conventions. Practice will make it easier every day, and motivation will give you enough energy to complete your projects, from simple ones to more ambitious ones.

Summary

This article has given an overview of the many resources available to improve your learning experience. There are certainly more that I don't know of but, as I have already mentioned a few times, we have powerful search engines in our browsers to help us find them. We also have to keep in mind that Google Apps Script as it is today will probably be quite different in a couple of years.

Resources for Article:

Further resources on this subject:

Google Apps: Surfing the Web [Article]
Developing apps with the Google Speech APIs [Article]
Data Modeling and Scalability in Google App [Article]

Shipping Modules in Magento: Part 1

Packt
29 Jan 2010
10 min read
What shipping modules do

Shipping modules are used to define the handling of the order before it goes through the payment method section of the order process. They take the order itself and decide how to go about charging and delivering it to the customer. Magento takes the order through each shipping module that is installed and active in the system. Each shipping module is then able to process the order currently in the cart and present any available options to the user, based on what it sees in the order.

For example: we have a shipping module in our system that provides free delivery to people located within the UK and ordering over £40. Let's presume that we are ordering £50 worth of goods and are located within the UK. When the order is sent through this module, it checks whether or not the user is in the UK and the order total is over £40. As these conditions are met by our order, we are presented with a free delivery option, among others, to choose during our order process.

This is a very simple version of what a shipping module can do. The full range of what can be done using shipping modules can be grasped by looking at the shipping modules on Magento Connect and seeing what is on offer. Here's a small range of what has been made public:

Royal Mail UK, EU, and Worldwide Shipping module — For calculation and handling of all of Royal Mail's shipping methods using weight and distance. This module sends key product information to Royal Mail via their API, authenticating with information that the Magento store administrator has input, and outputs pricing for various shipping options on the current order.

Regional Free Shipping module — Gives the Magento administrator the option to allow free shipping by region, instead of by country. This gives the store administrator the choice of breaking down free shipping further than by country. For example, this would be good for someone who runs a store based in Nottingham and wants to reward local customers in the East Midlands, as opposed to simply giving free shipping to the entire United Kingdom.

Basic Store Pickup Shipping module — Enables customers to choose between picking up the item themselves and having it delivered to them. This is advantageous for companies that use Magento and have a physical store presence with stock held.

Magento Connect can be accessed at the following URL: http://www.magentocommerce.com/magento-connect.

The three core types of shipping module can be summarized as follows:

Third-party API and/or web service integration. For integration with existing couriers that have web-based APIs or web services for organizing your shipping. Many existing services make an API available to us for integration into Magento. We should check Magento Connect for any existing modules that others have created.

Using customer data to provide unique calculations and opportunities to certain users. Anything that the customer enters while checking out can be used by this type of module.

Shipping methods that involve a physical store or location, providing additional options that complement others. We could theoretically build up a set of stores under the Magento store company's business, link them up with local computers at those locations, and use the shipping module to check stock at each location before suggesting the store as a pick-up point for the user.
How to begin with a shipping module

Here we'll be learning how to begin with a shipping module. This is the skeletal structure of a shipping module that we can use as a template for any shipping module we create. We will also be using it to create a very basic shipping module at the end of this article.

For the purpose of following this tutorial, we will be creating all files in /app/code/local/MagentoBook/ShippingModule/, which will be the base directory for all files created (from this point onwards). We must make sure that this directory and sub-directory are set up before continuing. This means that if a file is declared to be /hello/world.php, we place this on the end of our initial base address and it becomes /app/code/local/MagentoBook/ShippingModule/hello/world.php. Please start by creating the directory MagentoBook in /app/code/local/ and a sub-directory within that called ShippingModule, creating the directory structure /MagentoBook/ShippingModule/.

The configuration files

We create /app/code/local/MagentoBook/ShippingModule/etc/config.xml in our module's folder. Here, we'll place the following code, which will declare our new module for use, make it depend on Mage_Shipping being enabled, set it at version 0.1.0, and allow it to use the existing global database connection which is set up for our store.

<?xml version="1.0"?>
<config>
  <modules>
    <MagentoBook_ShippingModule>
      <version>0.1.0</version>
      <depends>
        <Mage_Shipping />
      </depends>
    </MagentoBook_ShippingModule>
  </modules>
  <global>
    <models>
      <shippingmodule>
        <class>MagentoBook_ShippingModule_Model</class>
      </shippingmodule>
    </models>
    <resources>
      <shippingmodule_setup>
        <setup>
          <module>MagentoBook_ShippingModule</module>
        </setup>
        <connection>
          <use>core_setup</use>
        </connection>
      </shippingmodule_setup>
    </resources>
  </global>
  <default>
    <carriers>
      <shippingmodule>
        <model>MagentoBook/carrier_ShippingModule</model>
      </shippingmodule>
    </carriers>
  </default>
</config>

Let's walk back through this code and go over what each individual section does. We start by defining the XML header tag for the file, to ensure that it is accepted as an XML file when read by the XML parsing class in the system:

<?xml version="1.0"?>

We define the <config> tag to ensure that everything within it is read as configuration variables, to be loaded into Magento's configuration for whatever we define within this tag:

<config>

We define the <modules> tag, so that we're setting configuration variables for the modules defined within this tag:

<modules>
  <MagentoBook_ShippingModule>

We set the module's version number to 0.1.0. This supplies Magento with versioning for the module, so that if we later update it and need to run statements in the update portion of the module, they can execute only above a certain version number:

<version>0.1.0</version>

We have to make sure that our module cannot be activated, or possibly run, without the Mage_Shipping core shipping handler and module activated. This is vital because, being a shipping module, our module would simply cause fatal errors without the parent Mage_Shipping module providing the helper functions needed internally:

<depends>
  <Mage_Shipping />
</depends>

Next, we close off our module declaration tag and modules tag:

</MagentoBook_ShippingModule>
</modules>

We set up our <global> tag to define global assets to Magento:
<global>

Next, we define the <models> tag to declare global models to the system, setting up our module's default model as one of those automatically loaded models:

<models>
  <shippingmodule>
    <class>MagentoBook_ShippingModule_Model</class>
  </shippingmodule>
</models>

Next, we define the <resources> tag, so that we can configure the database resources available to the module within the system:

<resources>

Defining the <resources> tag allows us to include a setup file with our module that accesses the database. This helps if we need to load in any variables (such as default table rate rules) for our module, or load additional data required locally by the module when calculating the shipping rates:

<shippingmodule_setup>
  <setup>
    <module>MagentoBook_ShippingModule</module>
  </setup>

Here, we'll use the core database connection (the default one), and ensure that we do not overwrite the database connection set up for this particular module:

<connection>
  <use>core_setup</use>
</connection>

We close off all tag pairs, besides <config>, that have been opened at this point:

</shippingmodule_setup>
</resources>
</global>

Finally, we end the configuration file with a declaration that our module is a shipping module and should be processed as one within the system. This registers the module so that it can actually display shipping methods to the user on checkout. Without this, nothing will be returned to the user from this module:

<default>
  <carriers>
    <shippingmodule>
      <model>MagentoBook/carrier_ShippingModule</model>
    </shippingmodule>
  </carriers>
</default>

We close the <config> tag to end the XML configuration file:

</config>

After we've done this, we need to declare our module to Magento by creating a configuration file in /app/etc/modules/MagentoBook_ShippingModule.xml. Next, we place the following code in our new configuration file, to allow this module to interact with Magento and be turned on/off under the System Configuration menu:

<?xml version="1.0"?>
<config>
  <modules>
    <MagentoBook_ShippingModule>
      <active>true</active>
      <codePool>local</codePool>
    </MagentoBook_ShippingModule>
  </modules>
</config>

We break this file down into its individual lines:

<?xml version="1.0"?>

The <config> wrapper tag defines the XML to be read as the configuration of something inside Magento:

<config>

The <modules> wrapping tag defines this as a module to Magento:

<modules>

The next tag declares that this is the configuration of a module entitled MagentoBook_ShippingModule, and applies the settings inside the tag to that module:

<MagentoBook_ShippingModule>

We make sure that it's active by default (this will be overwritten when activated/deactivated in the Magento administrative back-end):

<active>true</active>

The <codePool> tag keeps this module in our local modules directory:

<codePool>local</codePool>

The closing tags close the XML tags that we opened, ending with </config>:

    </MagentoBook_ShippingModule>
  </modules>
</config>

Now that we have the configuration set up to allow the module to be managed within Magento, with version control to allow for upgrades in the future, we can progress to the module itself. It also means that we can now turn our module on/off within the administration. To do this, we go to System | Configuration, then to Advanced under the Advanced heading on the left-hand side. Once here, we will be presented with Enable/Disable dropdowns for each module installed in the system.
We'll set the dropdown for our module to Disable until we have completed the adaptor model and administration setup. This will prevent the module from crashing the system while it is incomplete. We will re-activate the module once we're ready to see the output.
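As a preview of where this is heading, the adaptor model we will complete later is a carrier class matching the alias declared under <carriers>. The following is only a sketch of its likely shape, not finished code: the class and method names follow Magento's standard carrier conventions, and the free-UK-delivery-over-£40 check is the example rule from the start of this article.

class MagentoBook_ShippingModule_Model_Carrier_ShippingModule
    extends Mage_Shipping_Model_Carrier_Abstract
{
    protected $_code = 'shippingmodule';

    public function collectRates(Mage_Shipping_Model_Rate_Request $request)
    {
        $result = Mage::getModel('shipping/rate_result');

        // Offer free delivery for UK orders over £40.
        if ($request->getDestCountryId() == 'GB'
            && $request->getPackageValue() > 40) {
            $rate = Mage::getModel('shipping/rate_result_method');
            $rate->setCarrier($this->_code);
            $rate->setCarrierTitle('Book Shipping');
            $rate->setMethod('free_uk');
            $rate->setMethodTitle('Free UK Delivery');
            $rate->setPrice(0.00);
            $rate->setCost(0.00);
            $result->append($rate);
        }

        return $result;
    }
}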

Creating a Responsive Project

Packt
08 Apr 2015
14 min read
In today's ultra-connected world, a good portion of your students probably own multiple devices. Of course, they may want to take your eLearning course on all of their devices. They might want to start the course on their desktop computer at work, continue it on their phone while commuting back home, and finish it at night on their tablet. In other situations, students might only have a mobile phone available to take the course, and sometimes the topic being taught only makes sense on a mobile device. To address these needs, you want to deliver your course on multiple screens.

As of Captivate 6, you can publish your courses in HTML5, which makes them available on mobile devices that do not support the Flash technology. Now, Captivate 8 takes it one huge step further by introducing Responsive Projects. A Responsive Project is a project that you can optimize for the desktop, the tablet, and the mobile phone. It is like providing three different versions of the course in a single project.

In this article, by Damien Bruyndonckx, author of the book Mastering Adobe Captivate 8, you will be introduced to the key concepts and techniques used to create a responsive project in Captivate 8. While reading, keep the following two things in mind. First, everything you have learned so far can be applied to a responsive project. Second, creating a responsive project requires more experience than what a book can offer. I hope that this article will give you a solid understanding of the core concepts in order to jump-start your own discovery of Captivate 8 Responsive Projects.

(For more resources related to this topic, see here.)

About Responsive Projects

A Responsive Project is meant to be used on multiple devices, including tablets and smartphones that do not support the Flash technology. Therefore, it can be published only in HTML5. This means that all the restrictions of a traditional HTML5 project also apply to a Responsive Project. For example, you will not be able to add Text Animations or Rollover Objects in a Responsive Project, because these features are not supported in HTML5.

Responsive design is not limited to eLearning projects made in Captivate. It is actually used by web designers and developers around the world to create websites that have the ability to automatically adapt themselves to the screen they are viewed on. To do so, they need to detect the screen width that is available to their content and adapt accordingly.

Responsive Design by Ethan Marcotte
If you want to know more about responsive design, I strongly recommend a book by Ethan Marcotte in the A Book Apart collection. This is the founding book of responsive design. If you have some knowledge of HTML and CSS, this is a must-have resource in order to fully understand what responsive design is all about. More information on this book can be found at http://www.abookapart.com/products/responsive-web-design.

Viewport size versus screen size

At the heart of the responsive design approach is the width of the screen used by the student to consume the content. To be more exact, it is the width of the viewport that is detected—not the width of the screen. The viewport is the area that is actually available to the content. On a desktop or laptop computer, the difference between the screen width and the viewport width is very easy to understand. Let's do a simple experiment to grasp that concept hands-on:

1. Open your default web browser and make sure it is in fullscreen mode.
2. Browse to http://www.viewportsizes.com/mine.
3. The main information provided by this page is the size of your viewport. Because your web browser is currently in fullscreen mode, the viewport size should be close (but not quite the same) to the resolution of your screen.
4. Use your mouse to resize your browser window and see how the viewport size evolves.

As shown in the following screenshot, the size of the viewport changes as you resize your browser window, but the actual screen you use is always the same.

This viewport concept is also valid on a mobile device, even though it may be a bit subtler to grasp. The following screenshot shows the http://www.viewportsizes.com/mine web page as viewed in the Safari mobile browser on an iPad mini held in landscape (left) and in portrait (right). As you can see, the viewport size changes, but once again the actual screen used is always the same. Don't hesitate to perform these experiments on your own mobile devices and compare your results to mine.

Another thing that might affect the viewport size on a mobile device is the browser used. The following screenshot shows the viewport size of the same iPad mini held in portrait mode in Safari mobile (left) and in Chrome mobile (right). Note that the viewport size is slightly different in Chrome than in Safari. This is due to the interface elements of the browser (such as the address bar and the tabs) that use a variable portion of the screen real estate in each browser.

Understanding breakpoints

Before setting up your own Responsive Project, there is one more concept to explore. To discover this second concept, you will perform another simple experiment with your desktop or laptop computer:

1. Open the web browser of your desktop or laptop computer and maximize it to fullscreen size.
2. Browse to http://courses.dbr-training.eu/8/goingmobile. This is the online version of the Responsive Project that you will build in this article. When viewed on a desktop or laptop computer in fullscreen mode, you should see a version of the course optimized for larger screens.
3. Use your mouse to slowly scale your browser window down. Note how the size and the position of the elements are automatically recalculated as you resize the browser window.
4. At some point, you should see that the height of the slide changes and that another layout is applied. The point at which the layout changes is situated at a width of exactly 768 px. In other words, if the width of the browser (actually the viewport) is above 768 px, one layout is applied, but if the width of the viewport falls under 768 px, another layout is applied. You just discovered a breakpoint. The layout that is applied after the breakpoint (in other words, when the viewport width is lower than 768 px) is optimized for a tablet device held in portrait mode. Note that even though you are using a desktop or laptop computer, it is the tablet-optimized layout that is applied when the viewport width is at or under 768 px.
5. Keep scaling the browser window down and see how the position and the size of the elements of the slide are recalculated in real time as you resize the browser window.

This simple experiment should better explain what a breakpoint is and how these breakpoints work.
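If you are curious about what a breakpoint looks like under the hood, hand-coded responsive websites usually express them as CSS media queries. Captivate generates its own HTML5 and JavaScript, so the following fragment is only an analogy (the selector and rules are mine), not Captivate's actual output:

/* Layout used while the viewport is wider than 768 px. */
#slide { width: 1024px; }

/* At or below the 768 px breakpoint, another layout takes over. */
@media (max-width: 768px) {
  #slide { width: 768px; }
}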
Before moving on to the next section, let's take some time to summarize the important concepts uncovered in this section:

- The aim of responsive design is to provide an optimized viewing experience across a wide range of devices and form factors.
- To achieve this goal, responsive design uses fluid sizing and positioning techniques, responsive images, and breakpoints.
- Responsive design is not limited to eLearning courses made in Captivate, but is widely used in web and app design by thousands of designers around the world.
- A Captivate 8 Responsive Project can only be published in HTML5. The capabilities and restrictions of a standard HTML5 project also apply to a Responsive Project.
- A breakpoint defines the exact viewport width at which the layout breaks and another layout is applied.
- The breakpoints, and therefore the optimized layouts, are based on the width of the viewport and not on the detection of an actual device. This explains why the tablet-optimized layout is applied to the downsized browser window on a desktop computer.
- The viewport width and the screen width are two different things.

In the next section, you will start the creation of your very first Responsive Project. To learn more about these concepts, there is a video course on Responsive eLearning with Captivate 8 available on Adobe KnowHow. The course itself is for a fee, but there is a free 15-minute sample that walks you through these concepts using another approach. I suggest you take some time to watch it at https://www.adobeknowhow.com/courselanding/create-responsive-elearning-adobe-captivate-8.

Setting up a Responsive Project

It is now time to open Captivate and set up your first Responsive Project using the following steps:

1. Open Captivate or close every open file.
2. Switch to the New tab of the Welcome screen.
3. Double-click on the Responsive Project thumbnail. Alternatively, you can also use the File | New Project | Responsive Project menu item.

This action creates a new Responsive Project. Note that the choice between a Responsive Project and a regular Captivate project must be made up front, when creating the project. As of Captivate 8, it is not yet possible to take an existing non-responsive project and make it responsive after the fact.

The workspace of Captivate should be very similar to what you are used to, with the exception of an extra ruler that spans the top of the screen. This ruler contains three predefined breakpoints. As shown in the following screenshot, the first breakpoint is called the Primary breakpoint and is situated at 1024 pixels. Also note that the breakpoint ruler is green when the Primary breakpoint is selected. You will now discover the other two breakpoints using the following steps:

1. In the breakpoint ruler, click on the icon of a tablet to select the second breakpoint. The stage and all the elements it contains are resized. In the breakpoint ruler at the top of the stage, the second breakpoint is now selected. It is called the Tablet breakpoint and is situated at 768 pixels. Note the blue color associated with the Tablet breakpoint.
2. In the breakpoint ruler, click on the icon of a smartphone to select the third and last breakpoint. Once again, the stage and the elements it contains are resized. The third breakpoint is called the Mobile breakpoint and is situated at 360 pixels. The orange color is associated with this third breakpoint.

Adjusting the breakpoints

In some situations, the default locations of these three breakpoints work just fine. But in other situations, some adjustments are needed.
In this project, you want to target the regular screen of a desktop or laptop computer in the Primary view, an iPad mini held in portrait in the Tablet view, and an iPhone 4 held in portrait in the Mobile view. You will now adjust the breakpoints to fit these particular specifications by using the following steps:

1. Click on the Primary breakpoint in the breakpoints ruler to select it.
2. Use your mouse to move the breakpoint all the way to the left. Captivate should stop at a width of 1280 pixels. It is not possible to have a stage wider than 1280 pixels in a Responsive Project.
3. For this project, the default width of 1024 pixels is perfect, so move the Primary breakpoint back to the right until it is placed at 1024 pixels.
4. Return to your web browser and browse to http://www.viewportsizes.com. Once on the website, type iPad in the Filter field at the top of the page. The portrait width of an iPad mini is 768 pixels. In Captivate, the Tablet breakpoint is placed at 768 pixels by default, which is perfectly fine for the needs of this project.
5. Still on the http://www.viewportsizes.com website, type iPhone in the Filter field at the top of the page. The portrait width of an iPhone 4 is 320 pixels. In Captivate, the Mobile breakpoint is placed at 360 pixels by default. You will now move it to 320 pixels so that it matches the portrait width of an iPhone 4.
6. Return to Captivate and select the Mobile breakpoint.
7. Move the Mobile breakpoint to the right until it is placed at exactly 320 pixels. Note that the minimum width of the stage in the Mobile breakpoint is 320 pixels. In other words, the stage cannot be narrower than 320 pixels in a Responsive Project.

The viewport size of your device
Before moving on to the next section, take some time to inspect the http://viewportsizes.com site a little further. For example, type the names of the devices you own and compare their characteristics to the breakpoints of the current project. Will the project fit on your devices? How would you need to change the breakpoints so that the project perfectly fits your devices?

The breakpoints are now in place. But these breakpoints only take care of the width of the stage. In the next section, you will adjust the height of the stage in each breakpoint.

Adjusting the slide height

Captivate slides have a fixed height. This is the primary difference between a Captivate project and a regular responsive website, whose page height is infinite. In this section, you will adjust the height of the stage in all three breakpoints. The steps are as follows:
1. Still in Captivate, click on the desktop icon situated on the left side of the breakpoint switcher to return to the Primary view.
2. On the far right of the breakpoint ruler, select the View Device Height checkbox. As shown in the following screenshot, a yellow border now surrounds the stage in the Primary view, and the slide height is displayed in the top-left corner of the stage. For the Primary view, a slide height of 627 pixels is perfect. It matches the viewport size of an iPad held in landscape and provides a big enough area on a desktop or laptop computer.
3. Click on the Tablet breakpoint to select it.
4. Return to http://www.viewportsizes.com and type iPad in the filter field at the top of the page. According to the site, the height of an iPad is 1024 pixels.
5. Use your mouse to drag the yellow rectangle situated at the bottom of the stage down until the stage height is around 950 pixels. You may need to reduce the zoom magnification to perform this action comfortably. After this operation, the stage should look like the following screenshot (the zoom magnification has been reduced to 50 percent in the screenshot). With a height of 950 pixels, the Captivate slide can fit on an iPad screen and still account for the screen real estate consumed by the interface elements of the browser, such as the address bar and the tabs.
6. Still in the Tablet view, make sure the slide is the selected object and open the Properties panel. Note that, at the end of the Properties panel, the Slide Height property is currently unavailable.
7. Click on the chain icon (Unlink from Device height) next to the Slide Height property. By default, the slide height is linked to the device height. By clicking on the chain icon, you have broken the link between the slide height and the device (or viewport) height. This allows you to modify the height of the Captivate slide without modifying the height of the device.
8. Use the Properties panel to change the Slide Height to 1024 pixels. On the stage, note that the slide is now a little bit higher than the yellow rectangle. This means that this particular slide will generate a vertical scrollbar on a tablet device held in portrait. Scrolling is something you want to avoid as much as possible, so you will now re-enable the link between the device height and the Slide Height.
9. In the Properties panel, click on the chain icon next to the Slide Height property to enable the link. The slide height is automatically readjusted to the device height of 950 pixels.
10. Use the breakpoint ruler to select the Mobile breakpoint. By default, the device height in the Mobile breakpoint is set to 415 pixels. According to the http://www.viewportsizes.com website, the screen of an iPhone 4 has a height of 480 pixels. A slide height of 415 pixels is perfect to accommodate the slide itself plus the interface elements of the mobile browser.

Summary

In this article, you learned the key concepts and techniques used to create a responsive project in Captivate 8.

Resources for Article:

Further resources on this subject:

Publishing the project for mobile [article]
Getting Started with Adobe Premiere Pro CS6 Hotshot [article]
Creating Motion Through the Timeline [article]

Android Application Testing: Getting Started

Packt
24 Jun 2011
9 min read
We will avoid introductions to Android and the Open Handset Alliance (http://www.openhandsetalliance.com), as they are covered in many books already, and I am inclined to believe that if you are reading an article covering this more advanced topic, you have started with Android development before. However, we will be reviewing the main concepts behind testing and the techniques, frameworks, and tools available to deploy your testing strategy on Android.

Brief history

Initially, when Android was introduced at the end of 2007, there was very little support for testing in the platform, and for some of us, very accustomed to using testing as a component intimately coupled with the development process, it was time to start developing some frameworks and tools to permit this approach. At that time, Android had some rudimentary support for unit testing using JUnit (https://junit.org/junit5/), but it was not fully supported and even less documented.

In the process of writing my own library and tools, I discovered Phil Smith's Positron, an open source library and a very suitable alternative to support testing on Android, so I decided to extend his excellent work and bring some new and missing pieces to the table. Some aspects of test automation were not included, and I started a complementary project to fill that gap; it was consequently named Electron. And although positron is the anti-particle of the electron, and they annihilate if they collide, take for granted that that was not the idea, but more the conservation of energy and the generation of some visible light and waves.

Later on, Electron entered the first Android Development Challenge (ADC1) in early 2008, and though it obtained a rather good score in some categories, frameworks had no place in that competition. Should you be interested in the origin of testing on Android, you can find some articles and videos that were published on my personal blog (http://dtmilano.blogspot.co.uk/search/label/electron).

By that time, unit tests could be run on Eclipse. However, testing was not done on the real target but on a JVM on the local development computer. Google also provided application instrumentation code through the Instrumentation class. When running an application with instrumentation turned on, this class is instantiated for you before any of the application code, allowing you to monitor all of the interaction the system has with the application. An Instrumentation implementation is described to the system through an AndroidManifest.xml file.
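As a sketch of what that declaration looks like (the package name here is a placeholder, not from the article), the test application registers its runner like this:

<!-- Declared in the test project's AndroidManifest.xml -->
<instrumentation
    android:name="android.test.InstrumentationTestRunner"
    android:targetPackage="com.example.myapp"
    android:label="Tests for com.example.myapp" />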
Software bugs

It doesn't matter how hard you try, how much time you invest in design, or even how careful you are when programming: mistakes are inevitable and bugs will appear. Bugs and software development are intimately related. However, the term bugs to describe flaws, mistakes, or errors was used in hardware engineering many decades before computers were even invented. Notwithstanding the story about the term bug coined by Mark II operators at Harvard University, Thomas Edison wrote this in 1878 in a letter to Puskás Tivadar, showing the early adoption of the term:

"It has been just so in all of my inventions. The first step is an intuition, and comes with a burst, then difficulties arise — this thing gives out and [it is] then that 'Bugs' — as such little faults and difficulties are called — show themselves and months of intense watching, study and labor are requisite before commercial success or failure is certainly reached."

How bugs severely affect your projects

Bugs affect many aspects of your software development project, and it is clearly understood that the sooner in the process you find and squash them, the better. It doesn't matter if you are developing a simple application to publish on the Android Market, re-branding the Android experience for an operator, or creating a customized version of Android for a device manufacturer; bugs will delay your shipment and will cost you money.

From all of the software development methodologies and techniques, Test Driven Development, an agile component of the software development process, is likely the one that forces you to face your bugs earliest in the development process, and thus it is also likely that you will solve more problems up front. Furthermore, the increase in productivity can be clearly appreciated in a project where a software development team uses this technique, versus one that is, in the best of cases, writing tests at the end of the development cycle. If you have been involved in software development for the mobile industry, you will have reasons to believe that, with all the rush, this stage never occurs. It's funny, because usually this rush is to solve problems that could have been avoided.

In a study conducted by the National Institute of Standards and Technology (USA) in 2002, it was reported that software bugs cost the country's economy $59.5 billion annually. More than a third of this cost could be avoided if better software testing were performed. But please, don't misunderstand this message. There are no silver bullets in software development, and what will lead you to an increase in productivity and manageability of your project is the discipline of applying these methodologies and techniques to stay in control.

Why, what, how, and when to test

You should understand that early bug detection saves a huge amount of project resources and reduces software maintenance costs. This is the best-known reason to write software tests for your development project. Increased productivity will soon be evident. Additionally, writing the tests will give you a deeper understanding of the requirements and the problem to be solved. You will not be able to write tests for a piece of software you don't understand. This is also the reason behind the approach of writing tests to clearly understand legacy or third-party code, and having the infrastructure to confidently change or update it.

The more your code is covered by your tests, the higher your expectations of discovering the hidden bugs can be. If during this coverage analysis you find that some areas of your code are not exercised, additional tests should be added to cover this code as well. This technique requires a special instrumented Android build to collect probe data, and it must be disabled for any release code because the impact on performance could severely affect application behavior.

To fill in this gap, enter EMMA (http://emma.sourceforge.net/), an open source toolkit for measuring and reporting Java code coverage that can instrument classes for coverage offline. It supports various coverage types:

- class
- method
- line
- basic block

Coverage reports can also be obtained in different output formats. EMMA is supported to some degree by the Android framework, and it is possible to build an EMMA-instrumented version of Android.
An EMMA code coverage report can be displayed in the Eclipse editor, showing green lines where the code has been tested, provided the corresponding plugin is installed. Unfortunately, the plugin doesn't support Android tests yet, so right now you can use it for your JUnit tests only. The Android coverage analysis report is only available through HTML.

Tests should be automated, and you should run some or all of them every time you introduce a change or addition to your code, in order to ensure that all the conditions that were met before are still met, and that the new code satisfies the tests as expected. This leads us to the introduction of Continuous Integration, which relies on the automation of tests and build processes. If you don't use automated testing, it is practically impossible to adopt Continuous Integration as part of the development process, and it is very difficult to ensure that changes do not break existing code.

What to test

Strictly speaking, you should test every statement in your code, but in practice this depends on different criteria, and testing can be reduced to the main paths of execution or just some methods. Usually there's no need to test something that can't be broken; for example, it usually makes no sense to test getters and setters, as you probably won't be testing the Java compiler on your own code, and the compiler has already performed its tests. In addition to the functional areas you should test, there are some specific areas of Android applications that you should consider. We will be looking at these in the following sections.

Activity lifecycle events

You should test that your activities handle lifecycle events correctly. If your activity should save its state during onPause() or onDestroy() events and later be able to restore it in onCreate(Bundle savedInstanceState), you should be able to reproduce and test all these conditions and verify that the state was correctly saved and restored. Configuration-changed events should also be tested, as some of these events cause the current Activity to be recreated; you should test correct handling of the event and verify that the newly created Activity preserves the previous state. Configuration changes are triggered even by rotation events, so you should test your application's ability to handle these situations.
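The following is a sketch of such a lifecycle test using the android.test APIs of this era; MainActivity and its getCounter()/setCounter() methods are hypothetical stand-ins for whatever state your own Activity preserves:

import android.test.ActivityInstrumentationTestCase2;

public class MainActivityLifecycleTest
        extends ActivityInstrumentationTestCase2<MainActivity> {

    public MainActivityLifecycleTest() {
        super(MainActivity.class);
    }

    public void testStateSurvivesPauseAndResume() {
        MainActivity activity = getActivity();
        activity.setCounter(42); // hypothetical piece of state

        // Drive the lifecycle by hand through the Instrumentation object.
        getInstrumentation().callActivityOnPause(activity);
        getInstrumentation().callActivityOnResume(activity);

        assertEquals(42, activity.getCounter());
    }
}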
Database and filesystem operations

Database and filesystem operations should be tested to ensure that they are handled correctly. These operations should be tested in isolation at the lower system level, at a higher level through ContentProviders, or from the application itself. To test these components in isolation, Android provides some mock objects in the android.test.mock package.

Physical characteristics of the device

Well before delivering your application, you should be sure that all of the different devices it can run on are supported, or at least that you detect unsupported situations and take pertinent measures. Among other characteristics of the devices, you may find that you should test:

- Network capabilities
- Screen densities
- Screen resolutions
- Screen sizes
- Availability of sensors
- Keyboard and other input devices
- GPS
- External storage

In this respect, Android Virtual Devices play an important role, because it is practically impossible to have access to all of the devices with all of the possible combinations of features, but you can configure an AVD for almost every situation. However, as was mentioned before, leave your final tests for actual devices, where the real users will run the application, to understand its behavior.

Drupal 7 Fields/CCK: Using the Image Field Modules

Packt
11 Jul 2011
6 min read
Drupal 7 Fields/CCK Beginner's Guide: Explore Drupal 7 fields/CCK and master their use

Adding image fields to content types

We have learned how to add file fields to content types. In this section, we will learn how to add image fields to content types so that we can attach images to our content.

Time for action – adding an image field to the Recipe content type

In this section, we will add an image field to the Recipe content type. Follow these steps:

1. Click on the Structure link in the administration menu at the top of the page.
2. Click on the Content types link to go to the content types administration page.
3. Click on the manage fields link on the Recipe row, because we would like to add an image field to the Recipe content type.
4. Locate the Add new field section. In the Label field enter "Image", and in the Field name field enter "image". In the field type select list, select "image" as the field type; the field widget will automatically switch to Image. After the values are entered, click on Save.

What just happened?

We added an image field to the Recipe content type. The process of adding an image field to the Recipe content type is similar to the process of adding a file field, except that we selected image as the field type and Image as the field widget. We will configure the image field in the next section.

Configuring image field settings

We have already added the image field. In this section, we will configure the image field settings and understand how those settings affect the image output.

Time for action – configuring an image field for the Recipe content type

In this section, we will configure the image field settings in the Recipe content type. Follow these steps:

1. After clicking on the Save button, Drupal directs us to the next page, which provides the field settings for the image field. The Upload destination option is the same as in the file field settings, giving us the option to decide whether image files should be public or private. In our case, we select Public files. The last option is the Default image field. We will leave this option for now, and click on the Save field settings button to go to the next step.
2. This page contains all the settings for the image field. The most common field settings are the Label field, the Required field, and the Help text field. We will leave these fields at their defaults.
3. The Allowed file extensions section is similar to the one in the file field we have already learned about. We will use the default value in this field, so we don't need to enter anything.
4. The File directory section is also the same as the setting in the file field. Enter "image_files" in this field.
5. Enter "640" x "480" in the Maximum image resolution field and the Minimum image resolution field, and enter "2MB" in the Maximum upload size field.
6. Check the Enable Alt field and the Enable Title field checkboxes.
7. Select thumbnail in the Preview image style select list, and select Throbber in the Progress indicator section.
8. The bottom part of this page, the image field settings section, is the same as the previous page we just saved, so we don't need to re-enter the values. Click on the Save settings button at the bottom of the page to store all the values we entered on this page.
9. After clicking on the Save settings button, Drupal sends us back to the Manage fields administration page. Now the image field is added to the Recipe content type.

What just happened?

We have added and configured an image field for the Recipe content type. We left the default values in the Label field, the Required field, and the Help text field; they are the most common settings in fields. The Allowed file extensions section is similar to the one in the file field, and lets us enter the file extensions of the images that are allowed to be uploaded. The File directory field is the same as the one in the file field, giving us the option to save the uploaded files to a directory other than the default file directory location.

The Maximum image resolution field allows us to specify the maximum width and height of images that will be uploaded. If the uploaded image is bigger than the resolution we specified, Drupal will resize it to that size. If we do not specify a size, no restriction is applied to images. The Minimum image resolution field is the opposite: we specify the minimum width and height of images that are allowed to be uploaded. If we upload an image smaller than the minimum resolution, Drupal will display an error message and reject the upload.

The Enable Alt field and the Enable Title field checkboxes allow site administrators to enter the ALT and Title attributes of the img tag in XHTML, which can improve the accessibility and usability of a website when using images.

The Preview image style select list allows us to select which image style will be used to display the image while editing content. Currently it provides three image styles: thumbnail, medium, and large. The thumbnail image style is used by default. We will learn how to create a custom image style in the next section.

Have a go hero – adding an image field to the Cooking Tip content type

It's time for another challenge. We have added an image field to the Recipe content type. We can use the same method to add and configure an image field for the Cooking Tip content type. Apply the same steps used to create image fields for the Recipe content type, and try to understand the differences between the settings on the image field settings administration page.
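For readers who prefer code to clicking, the same field can also be created programmatically with Drupal 7's Field API, for example from a custom module's hook_install(). The following is only a sketch; the field name, bundle name, and module context are my assumptions, not part of this tutorial:

<?php
// Create the field itself (the equivalent of the Add new field step).
$field = array(
  'field_name' => 'field_image',
  'type' => 'image',
  'cardinality' => 1,
);
field_create_field($field);

// Attach an instance of it to the Recipe content type with the same
// settings we chose in the user interface above.
$instance = array(
  'field_name' => 'field_image',
  'entity_type' => 'node',
  'bundle' => 'recipe',
  'label' => 'Image',
  'settings' => array(
    'file_directory' => 'image_files',
    'max_resolution' => '640x480',
    'min_resolution' => '640x480',
    'max_filesize' => '2MB',
    'alt_field' => 1,
    'title_field' => 1,
  ),
  'widget' => array('type' => 'image_image'),
);
field_create_instance($instance);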

Guided Rules with the JBoss BRMS (Guvnor)

Packt
07 Oct 2009
7 min read
Passing information in and out

The main reason for the simplicity of our Hello World example was that it neither took in any information nor passed any information out—the rule always fired, and said the same thing. In real life, we need to pass information between our rules and the rest of the system. You may remember that in our tour of the Guvnor, we came across models that solved this problem of "How do we get information into and out of the rules?". Here is the spreadsheet that we will use as an example in this article.

If we want to duplicate this in our model/JavaBean, we need places to hold four key bits of sales-related information:

- Customer Name: String (that is, a bit of text)
- Sales: Number
- Date of Sale: Date
- Chocolate Only Customer: Boolean (that is, a Y/N type field)

We also need a description for this group of information, which is useful when we have many spreadsheets/models in our system (similar to the way this spreadsheet tab is called Sales). Note that one JavaBean (model) is equal to one line in the spreadsheet. Because we can have multiple copies of JavaBeans in memory, we are able to represent the many lines of information that we have in a spreadsheet. Later, we'll loop and add 10, 100, or 1000 lines (that is, JavaBeans) of information into Drools, for as many lines as we need. As we loop, adding them one at a time, the various rules will fire as a match is made.

Building the fact model

We will now build this model in Java using the Eclipse editor. We're going to do it step by step:

1. Open the Eclipse/JBoss IDE editor that you installed earlier. If prompted, use the default workspace (unless you have a good reason to put it somewhere else).
2. From the menu bar at the top of the screen, select File | New Project. Then choose Java Project from the dialog box that appears. You can either select this by starting to type "Java Project" into the wizard, or find it by expanding the various menus.
3. In the Create a new Java Project dialog that appears, give the project a name in the upper box. For our example, we'll call it SalesModel (one word, no spaces). Accept the other defaults (unless you have any other reason to change them). When you've finished entering the details, click on Finish.
4. You will be redirected to the main screen, with a new project (SalesModel) created. If you can't see the project, try opening either the Package or the Navigator tab.
5. When you can see the project name, right-click on it. From the menu, choose New | Package. The New Java Package dialog will be displayed. Enter the details to create a new package called org.sample, and then click on Finish.

If you are doing this via the navigator (or if you take a peek via Windows Explorer), you'll see that this creates a new folder org, and within it a subfolder called sample.

Now that we've created a set of folders to organize our JavaBeans, let's create the JavaBean itself by creating a class:

1. To create a new Java class, expand/select the org.sample package (folder) that we created in the previous step. Right-click on it and select New Class.
2. Fill in the dialog, and then click on Finish.

We will now be back in the main editor, with a newly created class called Sales.java. For the moment, there isn't much there—it's akin to two nested folders (a sample folder within one called org) and a new (but almost empty) file/spreadsheet called Sales.
package org.sample;

public class Sales {
}

By itself, this is not of much use. We need to tell Java about the information that we want our class (and hence the beans it creates) to hold. This is similar to adding new columns to a spreadsheet. Edit the Java class until it looks something like the code that follows (and take a quick look at the notes information box further down the page if you want to save a bit of typing). If you do it correctly, you should have no red marks in the editor (the red marks look a little like the spellchecking in Microsoft Word).

package org.sample;

import java.util.Date;

public class Sales {

    private String name;
    private long sales;
    private Date dateOfSale;
    private boolean chocolateOnlyCustomer;

    public String getName() {
        return name;
    }
    public void setName(String name) {
        this.name = name;
    }
    public long getSales() {
        return sales;
    }
    public void setSales(long sales) {
        this.sales = sales;
    }
    public Date getDateOfSale() {
        return dateOfSale;
    }
    public void setDateOfSale(Date dateOfSale) {
        this.dateOfSale = dateOfSale;
    }
    public boolean isChocolateOnlyCustomer() {
        return chocolateOnlyCustomer;
    }
    public void setChocolateOnlyCustomer(boolean chocolateOnlyCustomer) {
        this.chocolateOnlyCustomer = chocolateOnlyCustomer;
    }
}

Believe it or not, this piece of Java code is almost the same as the Excel spreadsheet we saw at the beginning of the article. If you want the exact details, let's go through what it means line by line:

- The braces ({ and }) are a bit like tabs. We use them to organize our code.
- package — This data holder will live in the subdirectory sample within the directory org.
- import — A list of any other data formats that we need (for example, dates). Text and number data formats are available automatically.
- public class Sales — This is the mould that we'll use to create a JavaBean. It's equivalent to a spreadsheet with a Sales tab.
- private String name — This creates a text (String) field and gives it a column heading of 'name'. The private bit means 'keep it hidden for the moment'. The next three lines do the same thing, but for sales (as a number/long), dateOfSale (as a date), and chocolateOnlyCustomer (a Boolean, that is, a Y/N field).
- The rest of the lines (for example, getName and setName) are how we control access to our private hidden fields. If you look closely, they follow a similar naming pattern.

The get and set lines in the previous code are known as accessor methods. They control access to hidden or private fields. They're more complicated than may seem necessary for our simple example, as Java has a lot more power than we're using at the moment. Luckily, Eclipse can auto-generate these for us: right-click on the word Sales in the editor, then select Source | Generate Getters and Setters from the context menu. You should be prompted for the accessor methods that you wish to create.
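Before exporting, it can help to picture the bean doing its job. The following fragment is my own sketch, not part of the article's steps: it fills one bean as the rules will eventually receive it, and ksession is assumed to be an already-created Drools knowledge session.

// Each Sales bean is one row of the spreadsheet; loop to add as many as needed.
Sales sale = new Sales();
sale.setName("Rachel");
sale.setSales(2000);
sale.setDateOfSale(new java.util.Date());
sale.setChocolateOnlyCustomer(true);

// ksession is assumed to be a Drools StatefulKnowledgeSession built from
// the rules we will author in the Guvnor.
ksession.insert(sale);   // make the fact visible to the rules
ksession.fireAllRules(); // matching rules fire, one bean at a time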
All being well, you should get an 'export successful' message when you click on the Finish button, and you will be able to use Windows Explorer to find the JAR file that you just created.
Tinkering Around in Django JavaScript Integration

Packt
18 Jan 2011
9 min read
Minor tweaks and bugfixes

Good tinkering can be a process that begins with tweaks and bugfixes, and snowballs from there. Let's begin with some of the smaller tweaks and bugfixes before tinkering further.

Setting a default name of "(Insert name here)"

Most of the fields on an Entity default to blank, which is in general appropriate. However, this means that there is a zero-width link for any search result which has not had a name set. If a user fills out the Entity's name before navigating away from that page, everything is fine, but it is a very suspicious assumption that all users will magically use our software in whatever fashion would be most convenient for our implementation. So, instead, we set a default name of "(Insert name here)" in the definition of an Entity, in models.py:

name = models.TextField(blank = True, default = u'(Insert name here)')

Eliminating Borg behavior

One variant on the classic Singleton pattern in Gang of Four is the Borg pattern, where arbitrarily many instances of a Borg class may exist, but they share the same dictionary, so that if you set an attribute on one of them, you set the attribute on all of them. At present we have a bug of that flavor: our views pull all available instances, rather than only the ones we want. We need to specify something different. We update the end of ajax_profile(), including a slot for time zones to be used later in this article, to:

return render_to_response(u'profile_internal.html',
    {
        u'entities': directory.models.Entity.objects.filter(
            is_invisible = False).order_by(u'name'),
        u'entity': entity,
        u'first_stati': directory.models.Status.objects.filter(
            entity = id).order_by(
            u'-datetime')[:directory.settings.INITIAL_STATI],
        u'gps': gps,
        u'gps_url': gps_url,
        u'id': int(id),
        u'emails': directory.models.Email.objects.filter(
            entity = entity, is_invisible = False),
        u'phones': directory.models.Phone.objects.filter(
            entity = entity, is_invisible = False),
        u'second_stati': directory.models.Status.objects.filter(
            entity = id).order_by(
            u'-datetime')[directory.settings.INITIAL_STATI:],
        u'tags': directory.models.Tag.objects.filter(
            entity = entity,
            is_invisible = False).order_by(u'text'),
        u'time_zones': directory.models.TIME_ZONE_CHOICES,
        u'urls': directory.models.URL.objects.filter(
            entity = entity, is_invisible = False),
    })

Likewise, we update homepage():

profile = template.render(Context(
    {
        u'entities': directory.models.Entity.objects.filter(
            is_invisible = False),
        u'entity': entity,
        u'first_stati': directory.models.Status.objects.filter(
            entity = id).order_by(
            u'-datetime')[:directory.settings.INITIAL_STATI],
        u'gps': gps,
        u'gps_url': gps_url,
        u'id': int(id),
        u'emails': directory.models.Email.objects.filter(
            entity = entity, is_invisible = False),
        u'phones': directory.models.Phone.objects.filter(
            entity = entity, is_invisible = False),
        u'query': urllib.quote(query),
        u'second_stati': directory.models.Status.objects.filter(
            entity = id).order_by(
            u'-datetime')[directory.settings.INITIAL_STATI:],
        u'time_zones': directory.models.TIME_ZONE_CHOICES,
        u'tags': directory.models.Tag.objects.filter(
            entity = entity,
            is_invisible = False).order_by(u'text'),
        u'urls': directory.models.URL.objects.filter(
            entity = entity, is_invisible = False),
    }))

Confusing jQuery's load() with html()

Where we had failed to load a profile in the main search.html template, we had a call to load(""). What we needed was:

else {
    $("#profile").html("");
}

$("#profile").load("") loads a copy of the current page into the div named profile.
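To make the difference concrete, here is a minimal jQuery sketch; the #profile selector comes from our templates, but the /entity/42/profile/ URL is only a made-up illustration:

// load() performs an Ajax GET against the given URL and injects the
// response into the element; with an empty URL, the current page itself
// is fetched and injected.
$("#profile").load("/entity/42/profile/");

// html() is a purely local DOM operation: it replaces the element's
// contents without making any network request at all.
$("#profile").html("");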
We can improve on this slightly, "blanking" the contents to include the default header:

else {
    $("#profile").html("<h2>People, etc.</h2>");
}

Preventing display of deleted instances

In our system, enabling undo means that there can be instances (Entities, Emails, URLs, and so on) which have been deleted but are still available for undo. We have implemented deletion by setting an is_invisible flag to True, and we also need to check this flag before displaying, to avoid puzzling behavior like a user deleting an Entity, being told Your change has been saved, and then seeing the Entity's profile displayed exactly as before. We accomplish this by specifying, for a QuerySet, .filter(is_invisible = False) where we might earlier have specified .all(), or by adding is_invisible = False to the conditions of a pre-existing filter; for instance:

def ajax_download_model(request, model):
    if directory.settings.SHOULD_DOWNLOAD_DIRECTORY:
        json_serializer = serializers.get_serializer(u'json')()
        response = HttpResponse(mimetype = u'application/json')
        if model == u'Entity':
            json_serializer.serialize(getattr(directory.models,
                model).objects.filter(
                is_invisible = False).order_by(u'name'),
                ensure_ascii = False, stream = response)
        else:
            json_serializer.serialize(getattr(directory.models,
                model).objects.filter(is_invisible = False),
                ensure_ascii = False, stream = response)
        return response
    else:
        return HttpResponse(u'This feature has been turned off.')

In the main view for the profile, we add a check at the beginning so that a (basically) blank result page is shown for an invisible Entity:

def ajax_profile(request, id):
    entity = directory.models.Entity.objects.filter(id = int(id))[0]
    if entity.is_invisible:
        return HttpResponse(u'<h2>People, etc.</h2>')

One nicety we provide is loading a profile on mouseover of its area of the search result page. This means that users can more quickly and easily scan through drilldown pages in search of the right match; however, there is a performance gotcha in simply specifying an onmouseover handler. If you specify an onmouseover for a containing div, you may get a separate event call each time the user hovers over an element contained in the div, easily getting 3+ calls if a user moves the mouse over to the link. That could be annoying to people on a VPN connection if it means that they are getting the network hits for numerous needless profile loads. To cut back on this, we define an initially null variable for the last profile moused over:

PHOTO_DIRECTORY.last_mouseover_profile = null;

Then we call the following function in the containing div element's onmouseover:

PHOTO_DIRECTORY.mouseover_profile = function(profile) {
    if (profile != PHOTO_DIRECTORY.last_mouseover_profile) {
        PHOTO_DIRECTORY.load_profile(profile);
        PHOTO_DIRECTORY.last_mouseover_profile = profile;
        PHOTO_DIRECTORY.register_editables();
    }
}

The relevant code from search_internal.html is as follows:

<div class="search_result"
    onmouseover="PHOTO_DIRECTORY.mouseover_profile({{ result.id }});"
    onclick="PHOTO_DIRECTORY.click_profile({{ result.id }});">

We usually, but not always, enable this mouseover functionality; not always, because it makes for annoying behavior if a person is trying to edit, does a drag select, mouses over the profile area, and a fresh, non-edited profile reloads over their work.
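One blunt but effective way to switch hover loading off as soon as an edit begins is simply to strip the inline handlers. Here is a sketch of the idea; the disable_mouseover_reloads name is hypothetical, while the search_result class comes from search_internal.html above. The Jeditable change in the next section takes the same approach for every div on the page:

// Sketch: stop hover events from reloading profiles once editing starts.
// Assumes the inline onmouseover attributes shown in search_internal.html.
PHOTO_DIRECTORY.disable_mouseover_reloads = function() {
    $("div.search_result").removeAttr("onmouseover");
};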
Here we edit the Jeditable plugin's source code and add a few lines; we also perform a check for whether the user is logged in, and offer a login form if not:

/* if element is empty add something clickable (if requested) */
if (!$.trim($(this).html())) {
    $(this).html(settings.placeholder);
}

$(this).bind(settings.event, function(e) {
    $("div").removeAttr("onmouseover");
    if (!PHOTO_DIRECTORY.check_login()) {
        PHOTO_DIRECTORY.offer_login();
    }
    /* abort if disabled for this element */
    if (true === $(this).data('disabled.editable')) {
        return;
    }

For Jeditable-enabled elements, we can override the placeholder for an empty element at method call, but the default placeholder is cleared when editing begins; overridden placeholders aren't. We override the placeholder with something that gives us a little more control and styling freedom:

// publicly accessible defaults
$.fn.editable.defaults = {
    name       : 'value',
    id         : 'id',
    type       : 'text',
    width      : 'auto',
    height     : 'auto',
    event      : 'click.editable',
    onblur     : 'cancel',
    loadtype   : 'GET',
    loadtext   : 'Loading...',
    placeholder: '<span class="placeholder">Click to add.</span>',
    loaddata   : {},
    submitdata : {},
    ajaxoptions: {}
};

All of this is added to the file jquery.jeditable.js.

We now have, as well as an @ajax_login_required decorator, an @ajax_permission_required decorator. We test for the not_permitted value it returns in the default postprocessor specified in $.ajaxSetup(), in the complete handler. Because Jeditable will place the returned data inline, we also refresh the profile. This occurs after the code that checks for an undoable edit and offers an undo option to the user:

complete: function(XMLHttpRequest, textStatus) {
    var data = XMLHttpRequest.responseText;
    var regular_expression = new RegExp("<!-" + "-# (\\d+) #-"
        + "->");
    if (data.match(regular_expression)) {
        var match = regular_expression.exec(data);
        PHOTO_DIRECTORY.undo_notification(
            "Your changes have been saved. " +
            "<a href='JavaScript:PHOTO_DIRECTORY.undo(" +
            match[1] + ")'>Undo</a>");
    } else if (data == '{"not_permitted": true}' ||
        data == "{'not_permitted': true}") {
        PHOTO_DIRECTORY.send_notification(
            "We are sorry, but we cannot allow you " +
            "to do that.");
        PHOTO_DIRECTORY.reload_profile();
    }
},

Note that we have tried to produce the least painful clear message we can: we avoid both saying "You shouldn't be doing that," and a terse, "bad movie computer"-style message of "Access denied" or "Permission denied." We also removed from that method the code to call offer_login() if a call came back unauthenticated. This looked good on paper, but our code was making Ajax calls soon enough that the user would get an immediate, unprovoked, modal login dialog on loading the page.

Adding a favicon.ico

In terms of minor tweaks, a visually distinct favicon.ico helps your tabs look different at a glance from other tabs (http://softpedia.com/ is one of many free sources of favicon.ico files, and the favicon generator at http://tools.dynamicdrive.com/favicon/ can take an image like your company logo as the basis for an icon). Save a good, simple favicon in static/favicon.ico. The icon may not show up immediately when you refresh, but a good favicon makes it slightly easier for visitors to manage your pages among the others that they have to deal with: it shows up in the address bar, bookmarks, and possibly other places.

This brings us to the end of the minor tweaks; let us look at two slightly larger additions to the directory.