
How-To Tutorials - Web Development

1802 Articles

Implementing Workflow in Alfresco 3

Packt
20 Oct 2009
5 min read
Workflow is the automation of a business process, during which documents are passed from one participant to another for action, according to a set of procedural rules. Every Content Management System implementation has its own workflow requirements. For some companies, workflow may be a simple approval process; for others, it may be a complex business process management system. Workflow provides ownership of, and control over, content and processes.

Introduction to the Alfresco workflow process

Alfresco includes two types of out-of-the-box workflow. The first is Simple Workflow, which is content-oriented; the other is Advanced Workflow, which is task-oriented.

The Simple Workflow process in Alfresco involves the movement of documents through various spaces. A content item is moved or copied to a new space, at which point a new workflow instance is attached, based on the workflow definition of the space. A workflow definition is unaware of other related workflow definitions.

The Advanced Workflow process is task-oriented: you create a task, attach the documents that are to be reviewed, and assign it to the appropriate reviewers. The same robust workflow capabilities are available in Document Management (DM), Records Management (RM), Web Content Management (WCM), and throughout Alfresco's applications, including Alfresco Share. You can use the out-of-the-box features provided by both types of workflow, or you can create your own custom advanced workflow to match the business processes of your organization.

Simple Workflow

Consider a purchase order that moves through various departments for authorization and eventual purchase. To implement Simple Workflow for this in Alfresco, you create a space for each department and allow documents to move through the department spaces. Each department space is secured, allowing only the users of that department to edit the document and move it to the next departmental space in the workflow process. The workflow process is flexible enough that you could introduce new approval steps into the operation without changing any code.

Out-of-the-box features

Simple Workflow is implemented as an aspect that can be attached to any document in a space through the use of business rules. Workflow can also be invoked on individual content items as actions. A simple workflow has two steps: one for approval and one for rejection. You can refer to the upcoming image, where workflow is defined for the documents in a space called Review Space. The users belonging to the Review Space can act upon a document. If they choose to Reject, the document moves to a space called Rejected Space. If they choose to Approve, the document moves to a space called Approved Space. You can define the names of the spaces, and the users on the spaces, according to your business requirements. The following figure gives a graphical view of the Approved Space and the Rejected Space.

Define and use Simple Workflow

The process to define and use Simple Workflow in Alfresco is as follows:

1. Identify spaces and set security on those spaces
2. Define your workflow process
3. Add workflow to the content in those spaces accordingly
4. Select the email template and the people to send email notifications to
5. Test the workflow process

Let us define and use a Simple Workflow process to review and approve the engineering documents on your intranet.
Go to the Company Home > Intranet > Engineering Department space and create a space named ProjectA using the existing Software Engineering Project space template.

Identify spaces and security

If you go to the Company Home > Intranet > Engineering Department > ProjectA > Documentation space, you will notice the following sub-spaces:

• Samples: This space is for storing sample project documents. Set the security on this space so that only the managers can edit the documents.
• Drafts: This space contains initial drafts and documents of ProjectA that are being edited. Set the security so that only a few selected users (such as Engineer1 and Engineer2, as shown in the upcoming image) can add or edit the documents in this space.
• Pending Approval: This space contains all of the documents that are under review. Set the security so that only the Project Manager of ProjectA can edit these documents.
• Published: This space contains all of the documents that are approved and visible to others. Nobody should edit the documents while they are in the Published space. If you need to edit a document, you need to Retract it to the Drafts space and follow the workflow process, as shown in the following image.

Defining the workflow process

Now that you have identified the spaces, the next step is to define your workflow process:

• We will add workflow to all of the documents in the Drafts space. When a user selects the Approve action, called Submit for Approval, on a document, the document moves from the Drafts space to the Pending Approval space.
• We will add workflow to all of the documents in the Pending Approval space. When a user selects the Approve action, called Approved, on a document, the document moves from the Pending Approval space to the Published space. Similarly, when a user selects the Reject action, called Re-submit, on a document, it moves from the Pending Approval space back to the Drafts space.
• We will add workflow to all of the documents in the Published space. When a user selects the Reject action, called Retract, on a document, it moves from the Published space back to the Drafts space.

You can name the review steps and workflow actions according to your business's requirements.


PHP Web 2.0 Mashup Projects: Your Own Video Jukebox: Part 1

Packt
19 Feb 2010
11 min read
Project Overview

• What: Mash up the web APIs from Last.fm and YouTube to create a video jukebox of songs
• Protocols used: REST (XML-RPC available)
• Data formats: XML, XSPF, RSS
• Tools featured: PEAR
• APIs used: Last.fm and YouTube

Now that we've had some experience using web services, it's time to fine-tune their use. XML-RPC, REST, and SOAP will be frequent companions when you use web services and create mashups. You will encounter a lot of different data formats, and interesting ways in which the PHP community has dealt with these formats. This is especially true because REST has become so popular. In REST, with no formalized response format, you will encounter return formats that vary from plain text to ad hoc XML to XML-based standards. The rest of our projects will focus on exposing us to some new formats, and we will look at how to handle them through PHP.

We will begin with a project to create our own personalized video jukebox. This mashup will pull music list feeds from the social music site Last.fm. We will parse artist names and song titles out of these feeds, and use that information to search for videos on YouTube, a user-contributed video site, using the YouTube web service. By basing the song selections on ever-changing feeds, our jukebox selection will not be static, and will change as our music taste evolves. As YouTube is a user-contributed site, we will see many interesting interpretations of our music, too. This jukebox will be personalized, dynamic, and quite interesting.

Both Last.fm and YouTube offer their web services through REST, and YouTube additionally offers an XML-RPC interface. As with previous APIs, XML is returned with each service call. Last.fm returns either plain text, an XML playlist format called XSPF (XML Shareable Playlist Format), or RSS (Really Simple Syndication). In the case of YouTube, the service returns a proprietary format. Previously, we wrote our own SAX-based XML parser to extract XML data. In this article, we will take a look at how PEAR, the PHP Extension and Application Repository, can do the XSPF parsing work for us on this project, and might help in other projects. Let's take a look at the various data formats we will be using, and then the web services themselves.

XSPF

One of XML's original goals was to allow industries to create their own markup languages to exchange data. Because anyone can create their own elements and schemas, as long as people agree on a format, XML can be used as the universal data transmission language for an industry. One of the earliest XML-based languages was ChemXML, a language used to transmit data within the chemical industry. Since then, many others have popped up.

XSPF was a complete grassroots project to create an open, non-proprietary music playlist format based on XML. Historically, playlists for software media players and music devices were designed to be used only on one machine or device, and their schemas were designed by the vendors themselves. XSPF's goal was to create a format that could be used in software, on devices, and across networks. XSPF is a very simple format, and is easy to understand. The project home page is at http://www.xspf.org. There, you will find a quick start guide that outlines a simple playlist, as well as the official specifications at http://www.xspf.org/specs.
Basically, a typical playlist has the following structure:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<playlist version="1" xmlns="http://xspf.org/ns/0/">
  <title>Shu Chow's Playlist</title>
  <date>2006-11-24T12:01:21Z</date>
  <trackList>
    <track>
      <title>Pure</title>
      <creator>Lightning Seeds</creator>
      <location>file:///Users/schow/Music/Pure.mp3</location>
    </track>
    <track>
      <title>Roadrunner</title>
      <creator>The Modern Lovers</creator>
      <location>file:///Users/schow/Music/Roadrunner.mp3</location>
    </track>
    <track>
      <title>The Bells</title>
      <creator>April Smith</creator>
      <location>file:///Users/schow/Music/The_Bells.mp3</location>
    </track>
  </trackList>
</playlist>
```

playlist is the parent element for the whole document. It requires one child element, trackList, but there can be several child elements that hold metadata for the playlist itself. In this example, the playlist has a title specified in the title element, and its creation date is specified in the date element. Underneath trackList are the individual tracks that make up the playlist. Each track is encapsulated by the track element. Information about the track, including the location of its file, is encapsulated in elements underneath track. In our example, each track has a title, an artist name, and a local file location. The official specification allows for more track information elements, such as track length and album information.

Here are the playlist child elements summarized (only trackList is required; the rest are optional):

• trackList: The parent of the individual track elements. This is the only required child element of playlist. It can be empty if the playlist has no songs.
• title: A human-readable title of the XSPF playlist.
• creator: The name of the playlist creator.
• annotation: Comments on the playlist.
• info: A URL to a page containing more information about the playlist.
• location: The URL of the playlist itself.
• identifier: The unique ID for the playlist. Must be a legal Uniform Resource Name (URN).
• image: A URL to an image representing the playlist.
• date: The creation (not the last-modified!) date of the playlist. Must be in XML Schema dateTime format, for example, "2004-02-27T03:30:00".
• license: If the playlist is under a license, the license is specified with this element.
• attribution: If the playlist is modified from another source, the attribution element gives credit back to the original source, if necessary.
• link: Allows non-XSPF resources to be included in the playlist.
• meta: Allows non-XSPF metadata to be included in the playlist.
• extension: Allows non-XSPF XML extensions to be included in the playlist.

A trackList element can hold an unlimited number of track elements, one per track, and track is the only allowed child of trackList. track's child elements give us information about each track. The following list summarizes the children of track, all of which are optional:

• location: The URL of the audio file for the track.
• identifier: The canonical ID for the track. Must be a legal URN.
• title: A human-readable title of the track; usually the song's name.
• creator: The name of the track's creator; usually the song's artist.
• annotation: Comments on the track.
• info: A URL to a page containing more information about the track.
• image: A URL to an image representing the track.
• album: The name of the album that the track belongs to.
• trackNum: The ordinal position of the track within the album.
• duration: The time to play the track, in milliseconds.
• link: Allows non-XSPF resources to be included in the track.
• meta: Allows non-XSPF metadata to be included in the track.
• extension: Allows non-XSPF XML extensions to be included in the track.

Note that XSPF is very simple and track-oriented. It was not designed to be a repository or database for songs, and there are not a lot of options for manipulating the list. XSPF is merely a shareable playlist format, and nothing more.

RSS

The simplest answer to "What is RSS?" is that it's an XML file used to publish frequently updated information, such as news items, blog entries, or links to podcast episodes. News sites like Slashdot.org and the New York Times provide their news items in RSS format; as new items are published, they are added to the RSS feed. Being XML-based, RSS makes it easy for third-party aggregator software to read news items: with one piece of software, I can grab feeds from various sources and read the news items in one location. Web applications can also read and parse RSS files. By offering an RSS feed for my blog, another site can grab the feed and keep track of my daily life. This is one way a small site can provide rudimentary web services with minimal investment.

The more honest answer is that RSS is a group of XML standards (used to publish frequently updated information like news items or blogs) that may have little compatibility with each other. Each version release also has a tale of conflict and strife behind it. We won't dwell on the politicking of RSS; we'll just look at the outcomes. The RSS world now has two main branches:

• The RSS 1.0 branch includes versions 0.90, 1.0, and 1.1. Its goal is to be extensible and flexible. The downside of these goals is that it is a complex standard.
• The RSS 2.0 branch includes versions 0.91, 0.92, and 2.0.x. Its goal is to be simple and easy to use. The drawback of this branch is that it may not be powerful enough for complex sites and feeds.

There are some basic skeletal similarities between the two formats. After the XML root element, metadata about the feed itself is provided in a top section. After the metadata, one or more items follow. These items can be news stories, blog entries, or podcast episodes, and they are the meat of an RSS feed. The following is an example RSS 1.1 file from XML.com:

```xml
<Channel xmlns="http://purl.org/net/rss1.1#"
         xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         rdf:about="http://www.xml.com/xml/news.rss">
  <title>XML.com</title>
  <link>http://xml.com/pub</link>
  <description>
    XML.com features a rich mix of information and services for the
    XML community.
  </description>
  <image rdf:parseType="Resource">
    <title>XML.com</title>
    <url>http://xml.com/universal/images/xml_tiny.gif</url>
  </image>
  <items rdf:parseType="Collection">
    <item rdf:about="http://www.xml.com/pub/a/2005/01/05/restful.html">
      <title>The Restful Web: Amazon's Simple Queue Service</title>
      <link>http://www.xml.com/pub/a/2005/01/05/restful.html</link>
      <description>
        In Joe Gregorio's latest Restful Web column, he explains that
        Amazon's Simple Queue Service, a web service offering a queue
        for reliable storage of transient messages, isn't as RESTful
        as it claims.
      </description>
    </item>
    <item rdf:about="http://www.xml.com/pub/a/2005/01/05/tr-xml.html">
      <title>Transforming XML: Extending XSLT with EXSLT</title>
      <link>http://www.xml.com/pub/a/2005/01/05/tr-xml.html</link>
      <description>
        In this month's Transforming XML column, Bob DuCharme reports
        happily that the promise of XSLT extensibility via EXSLT has
        become a reality.
      </description>
    </item>
  </items>
</Channel>
```

The root element of an RSS 1.1 file is an element named Channel. Immediately after the root element come elements that describe the publisher and the feed: the title, link, description, and image elements give us more information about the feed. The actual content is nested in the items element. Even if there are no items in the feed, the items element is required, although it will be empty. Usage of these elements can be summarized as follows (the first four are required):

• title: A human-readable title of the channel.
• link: A URL to the feed.
• description: A human-readable description of the feed.
• items: A parent element that wraps the individual item elements.
• image: An optional section housing information about an official image for the feed.
• others: Any other elements not in the RSS namespace can optionally be included here. The namespace must have been declared earlier, and the child elements must be prefixed.

If used, the image element needs its own child elements to hold information about the feed image. A title element is required and, while optional, a link element pointing to the actual URL of the image is extremely useful.

Each news, blog, or podcast entry is represented by an item element. In this RSS file, each item has a title, a link, and a description, each represented by the respective element. This file has two items in it before the items and Channel elements are closed off.
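To make the format concrete, here is a minimal sketch, not the PEAR-based parser this project goes on to use, of how the artist and song title could be pulled out of an XSPF playlist with PHP's built-in SimpleXML extension. The playlist.xspf file name is a hypothetical local copy of a feed like the sample above; the same registerXPathNamespace()/xpath() approach also works for namespaced RSS feeds.

```php
<?php
// Minimal sketch: list each track's artist and song title from an XSPF
// playlist. Not the PEAR-based approach used later in the project.
$playlist = simplexml_load_file('playlist.xspf'); // hypothetical local copy

// XSPF documents live in a default namespace, so register it for XPath
// queries and read child elements through it.
$ns = 'http://xspf.org/ns/0/';
$playlist->registerXPathNamespace('x', $ns);

foreach ($playlist->xpath('//x:trackList/x:track') as $track) {
    $fields = $track->children($ns);
    // creator holds the artist and title the song name -- the two values
    // the jukebox will later feed into its YouTube search
    printf("%s - %s\n", $fields->creator, $fields->title);
}
?>
```

For the sample playlist, this prints three lines such as "Lightning Seeds - Pure". A dedicated parser adds validation and convenience methods on top of this, which is where PEAR comes in.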


Joomla! 1.5 Top Extensions: Adding a Booking System for Events

Packt
25 Oct 2010
4 min read
Joomla! 1.5 Top Extensions Cookbook

• Over 80 great recipes for taking control of Joomla! extensions
• Set up and use the best extensions available for Joomla!
• Covers extensions for just about every use of Joomla!
• Packed with recipes to help you get the most out of Joomla! extensions
• Part of Packt's Cookbook series: each recipe is a carefully organized sequence of instructions to complete the task as efficiently as possible

Getting ready...

The Seminar for Joomla! component allows visitors to register for events. Download this component from http://seminar.vollmar.ws/downloads/com_Seminar_V1.3.0.zip, and install it from the Extensions | Install/Uninstall screen in the Joomla! administration panel.

How to do it...

Once installed, follow these steps to allow registration for events:

1. From the Joomla! administration panel, select Components | Seminar. This shows the Events screen, listing the available events.
2. To create event categories, click on Categories and then click on New. This shows you the Category: [New] screen.
3. On the Category: [New] screen, type in the Title and Alias, and select Yes in the Published radio box. Then select the Access Level, Image, and Image Position. Enter a description for the category. Click on Save in the toolbar. Repeat this step to add as many categories as you need.
4. Click on Events and then click on New. This will show you the New Event screen.
5. Type the Title of the event. Then select the Begin, End, and Closing Date, and the time of the event. Select Yes in the Display field if you want to show this information. Type a brief description of the event, and the place of the event. Select an organizer from the Organiser drop-down list. Then type the maximum number of participants in the Max. Particip. field. Also select what should happen when the number of participants exceeds this number.
6. When this is done, click on Additional Settings. In the Additional Settings section, you can add an additional description. In the Description box, you can also add descriptions aimed at particular groups of visitors; the syntax for adding messages for different groups is shown above the text box. Select an image for the event overview. Then type the tutor's name, the target group, and the fees per person.
7. Click on General Input Fields. In the General Input Fields section, you can include additional fields; to add them, follow the syntax provided.
8. When this is done, click on Files. This shows the screen for file uploads. Click on the Browse button and select the file to upload. Type a description of the file and select the group that can view it. When this is done, click on the Save button in the toolbar.
9. Select Menus | Main Menu, and then click on New. This will show you the Menu Item: [New] wizard. In the Select Menu Item Type section, select Seminar, provide a title for the menu item on the next screen, and then save the menu item.
10. Preview the site's frontend as a logged-in user, and click on the menu item for the seminar. This displays a screen showing your added event. The overview of the event shows the overview image, event title, category, reservation information, and an indicator for the booking status.
11. Click on the event title; it will show you the event details, as shown in the following screenshot.
12. Select the spaces to be booked. Then type your Name and Email address and click on the Book button. This books your space for the event.
You can view your booking by clicking the My Bookings tab, as shown in the following screenshot. Note that your participation status is indicated through a color-coded indicator.

There's more...

Most features of the Seminar for Joomla! component can be configured from the Settings screen. For example, you can configure who can book events, what happens when bookings exceed the number of seats, who can rate events from the frontend, which folder stores event images, the maximum file upload size, the file extensions allowed for uploads, and so on.

Summary

This article showed you how to allow visitors to register for an event.

Further resources on this subject:

• Adding an Event Calendar to your Joomla! Site using JEvents
• Showing your Google calendar on your Joomla! site using GCalendar
• Joomla! 1.5 Top Extensions for Using Languages
• Manually Translating Your Joomla! Site's Content into Your Desired Language


Building a Facebook Application: Part 1

Packt
14 Oct 2009
4 min read
A simple Facebook application

Well, with the Facebook side of things set up, it's now over to your server to build the application itself. The server could, of course, be:

• Your web host's server: this must support PHP, and it may be worthwhile to check whether you have any bandwidth limits.
• Your own server: obviously, you'll need Apache and PHP (and you'll also need a fixed IP address).

Getting the server ready for action

Once you have selected your server, you'll need to create a directory in which you'll build your application, for example:

```bash
mkdir -p /www/htdocs/f8/penguin_pi
cd /www/htdocs/f8/penguin_pi
```

If you haven't already done so, then this is the time to download and uncompress the Facebook Platform libraries:

```bash
wget http://developers.facebook.com/clientlibs/facebook-platform.tar.gz
tar -xvzf facebook-platform.tar.gz
```

We're not actually going to use all of the files in the library, so you can copy the ones that you're going to use into your application directory:

```bash
cp facebook-platform/client/facebook.php .
cp facebook-platform/client/facebookapi_php5_restlib.php .
```

And once that's done, you can delete the unwanted files:

```bash
rm -rf facebook-platform.tar.gz facebook-platform
```

Now, you're ready to start building your application.

Creating your first Facebook application

Well, you're nearly ready to start building your application. First, you'll need to create a file (let's call it appinclude.php) that initiates the application.

The application initiation code

We've got some code that needs to be executed every time our application is accessed, and that code is:

```php
<?php
require_once 'facebook.php';                       # Load the Facebook API

$appapikey = '322d68147c78d2621079317b778cfe10';   # Your API Key
$appsecret = '0a53919566eeb272d7b96a76369ed90c';   # Your Secret
$facebook = new Facebook($appapikey, $appsecret);  # A Facebook object
$user = $facebook->require_login();                # Get the current user

$appcallbackurl = 'http://213.123.183.16/f8/penguin_pi/'; # Callback URL

# Catch an invalid session_key
try {
    if (!$facebook->api_client->users_isAppAdded()) {
        $facebook->redirect($facebook->get_add_url());
    }
} catch (Exception $ex) {
    # If invalid, then redirect to a login prompt
    $facebook->set_user(null, null);
    $facebook->redirect($appcallbackurl);
}
?>
```

You'll notice that the code must include your API Key and your Secret (they were created when you set up your application in Facebook). The PHP file also handles any invalid sessions. Now, you're ready to start building your application, and your code must be written into a file named index.php.

The application code

Our application code needs to call the initiation file, and then we can do whatever we want:

```php
<?php
require_once 'appinclude.php';   # Your application initiation file

echo "<p>Hi $user, ";
echo "welcome to Pygoscelis P. Ellsworthy's Suspect Tracker</p>";
?>
```

Of course, now that you've written the code for the application, you'll want to see what it looks like.

Viewing the new application

Start by typing your Canvas Page URL into a browser (you'll need to type in your own, but in the case of Pygoscelis P. Ellsworthy's Suspect Tracker, this would be http://apps.facebook.com/penguin_pi/). You can then add the application just as you would add any other application, and at last, you can view your new application.

It is worth noting, however, that you won't yet be able to view your application on your profile. For the time being, you can only access the application by typing in your Canvas Page URL.
That being said, your profile will register the fact that you've added your application. That's the obligatory "Hello World" done. Let's look at how to further develop the application.
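As a taste of where that development might go, here is a minimal sketch (not the article's actual next part) that extends index.php using the old REST client bundled in facebookapi_php5_restlib.php, which exposed calls such as friends_get() and users_getInfo(). The field list and the "suspects" framing are illustrative assumptions:

```php
<?php
require_once 'appinclude.php';   # Same initiation file as before

echo "<p>Hi $user, ";
echo "welcome to Pygoscelis P. Ellsworthy's Suspect Tracker</p>";

# friends_get() returns the IDs of the current user's friends; we then look
# up each friend's name so our penguin has an initial list of suspects.
$friendIds = $facebook->api_client->friends_get();
$friends   = $facebook->api_client->users_getInfo($friendIds, array('name'));

echo '<p>Your ' . count($friendIds) . ' suspects:</p><ul>';
foreach ($friends as $friend) {
    echo '<li>' . $friend['name'] . '</li>';
}
echo '</ul>';
?>
```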


Themes, the look and feel, and logos

Packt
14 Aug 2013
9 min read
Themes

Themes are sets of predefined styles and can be used to personalize the look and feel of Confluence. Themes can be applied to the entire site and to individual spaces. Some themes add extra functionality to Confluence or change the layout significantly. Confluence 5 comes with two themes installed, and an administrator can install new themes as add-ons via the Administration Console.

Atlassian is planning on merging the Documentation Theme with the Default Theme. As this is not yet the case in Confluence 5, we will discuss them both, as they have some different features. Keep in mind that, at some point, the Documentation Theme will be removed from Confluence.

To change the global Confluence theme, perform the following steps:

1. Browse to the Administration Console (Administration | Confluence Admin).
2. Select Themes from the left-hand side menu. All the installed themes will be present.
3. Select the appropriate radio button to select a theme.
4. Choose Confirm to change the theme.

Space administrators can also decide on a different theme for their own spaces. Spaces with their own theme selection, and therefore not using the global look and feel, won't be affected if a Confluence Administrator changes the global default theme.

To change a space theme, perform the following steps:

1. Go to any page in the space.
2. Select Space Tools in the sidebar. (If you are not using the Default Theme, select Browse | Space Admin.)
3. Select Look and Feel, followed by Theme.
4. Select the theme you want to apply to the space.
5. Click on Confirm.

The Default Theme

As the name implies, this is the default theme shipped with Confluence. The Default Theme got a complete overhaul in Confluence 5 and looks as shown in the following screenshot.

The Default Theme provides every space with a sidebar containing useful links and navigation help for the current space. With the sidebar, you can quickly change from browsing pages to blog posts, or vice versa. The sidebar also allows important space content to be added as links for quicker access, and it displays the children of the current page for easy navigation.

You can collapse or expand the sidebar: click-and-drag the border, or use the keyboard shortcut [. If the sidebar is collapsed, you can still access the sidebar options.

Configuring the theme

The Default Theme doesn't have any global configuration available, but a space administrator can make some space-specific changes to the theme's sidebar.

Perform the following steps to change the space details:

1. Go to any page in the relevant space.
2. Select Configure sidebar in the space sidebar.
3. Click on the edit icon next to the space title. A pop-up will appear where you can change the space title and logo (as shown in the preceding screenshot, indicated as 1).
4. Click on Save to save the changes.
5. Click on the Done button to exit the configuration mode.

The main navigation items on the sidebar (pages and blog posts) can be hidden. This can come in handy when, for example, you don't allow users to add blog posts to the space. To show or hide the main navigation items, perform the following steps:

1. Go to any page in the relevant space.
2. Select Configure sidebar in the space sidebar.
3. Select the - or + icon beside a link to hide or show that link.
4. Click on the Done button to exit the configuration mode.

Space shortcuts are manually added links in the sidebar, pointing to important content within the space. A space administrator can manage these links.
To add a space shortcut, perform the following steps:

1. Go to any page in the relevant space.
2. Select Configure sidebar in the space sidebar.
3. Click on the Add Link button (indicated as 3 in the preceding screenshot). The Insert Link dialog will appear.
4. Search for and select the page you want to link to.
5. Click on Insert to add the link to the sidebar.
6. Click on the Done button to exit the configuration mode.

The Documentation Theme

The Documentation Theme is another bundled theme. It supplies a built-in table of contents for your space, a configurable header and footer, and a space-restricted search. The Documentation Theme's default look and feel is displayed in the following screenshot.

The sidebar of the Documentation Theme shows a tree of all the pages in your space. Clicking on the icon in front of a page title will expand the branch and show its children. The sidebar can be opened and closed using the [ shortcut, or the icon to the left of the search box in the Confluence header.

Configuring the theme

The Documentation Theme allows configuration of the sidebar contents, the page header and footer, and the option to restrict search results to only the current space. A Confluence Administrator can configure the theme globally, but a Space Administrator can overwrite this configuration for his or her own space. To configure the Documentation Theme for a space, the Space Administrator must explicitly select the Documentation Theme as the space theme.

The theme configuration of the Documentation Theme allows you to change the properties displayed in the following screenshot. How to get to this screen, and what the properties represent, is explained next.

To configure the Documentation Theme, perform the following steps:

1. As a Confluence Administrator:
   • Browse to the Administration Console (Administration | Confluence Admin).
   • Select Themes from the left-hand side menu.
   • Choose the Documentation Theme as the current theme.
   • Click on the Configure theme link.

   As a Space Administrator:
   • Go to any page in the space.
   • Select Browse | Space Admin.
   • Choose Themes from the left-hand side menu.
   • Make sure that the Documentation Theme is the current theme.
   • Click on the Configure theme link.

2. Select or deselect the Page Tree checkbox. This determines whether your space will display the default search box and page tree in the sidebar.
3. Select or deselect the Limit search results to the current space checkbox.
   • If you select the checkbox, the Confluence search in the top-left corner will only search the current space, and the sidebar will not contain a search box.
   • If you deselect the checkbox, the Confluence search in the top-left corner will search across the entire Confluence site, and the sidebar will contain a search box that is limited to searching the current space.
4. In the three textboxes, you can enter any text or wiki markup you like; for example, you could add some information to the sidebar, or a notification to every page. The following screenshot displays these areas:
   • Navigation: This will be displayed in the space sidebar.
   • Header: This will be displayed above the title on all the pages in the space.
   • Footer: This will be displayed after the comments on all the pages in the space.

Look and feel

The look and feel of Confluence can be customized at both the global level and the space level. Any changes made at the global level are applied as the default settings for all spaces. A Space Administrator can choose to use a different theme than the global look and feel.
When a Space Administrator selects a different theme, the global default settings and theme are no longer applied to that space. This also means that settings in such a space are not updated when the global settings are updated.

In this section, we will cover some basic look and feel changes, such as changing the logo and color scheme of your Confluence instance. It is also possible to change some of the Confluence layouts; this is covered in the Advanced customizing section.

Confluence logo

The Confluence logo is the logo displayed on the navigation bar in Confluence. It can easily be changed to your company logo. To change the global logo, perform the following steps:

1. Browse to the Administration Console (Administration | Confluence Admin).
2. Select Site Logo from the left-hand side menu.
3. Click on Choose File to select the file from your computer.
4. Decide whether to show only your company logo, or also the title of your Confluence installation. If you choose to also show the title, you can change it in the text field next to Site Title.
5. Click on Save.

As you might notice, Confluence also changes the color scheme of your installation: it suggests a color scheme based upon your logo. To revert this change, click on Undo, which is available directly after updating your logo.

Space logo

Every space can have its own logo, making it easy to identify certain topics or spaces. A Confluence Administrator can also set the default space logo, for newly created spaces or spaces without their own specified logo. The logo of a personal space cannot be changed; it will always use the user's avatar as its logo.

To set the default space logo, perform the following steps:

1. Browse to the Administration Console (Administration | Confluence Admin).
2. Select Default Space Logo from the left-hand side menu.
3. Click on Choose File to select the file from your computer. For the best result, make sure the image is about 48 x 48 pixels.
4. Click on Upload Logo to upload the default space logo.

As a Space Administrator, you can replace the default logo for your space. How this is done depends on the theme you are using.

To change the space logo with the Default Theme, perform the following steps:

1. Go to any page in the relevant space.
2. Click on Configure Sidebar in the sidebar.
3. Select the edit icon next to the space title.
4. Click on Choose File next to the logo, and select a file from your computer.
5. Confluence will display an image editor to indicate how your logo should be displayed, as shown in the next screenshot. Drag and resize the circle in the editor to the right position.
6. Click on Save to save the changes.
7. Click on the Done button to exit the configuration mode.

To change the space logo with the Documentation Theme, perform the following steps:

1. Go to any page in the relevant space.
2. Select Browse | Space Admin.
3. Select Change Space Logo from the left-hand side menu.
4. Select Choose File and select the logo from your computer.
5. Click on Upload Logo to save the new logo; Confluence will automatically resize and crop the logo for you.

Summary

We went through the different features for changing the look and feel of Confluence, so that we can add some company branding to our instance, or just to a space.

Further resources on this subject:

• Advanced JIRA 5.2 Features [Article]
• Getting Started with JIRA 4 [Article]
• JIRA: Programming Workflows [Article]


Kentico CMS 5 Website Development: Workflow Management

Packt
07 Oct 2010
3 min read
Kentico CMS 5 Website Development: Beginner's Guide

A clear, hands-on guide to building websites that get the most out of Kentico CMS 5's many powerful features:

• Create websites that meet real-life requirements using example sites built with easy-to-follow steps
• Learn from easy-to-use examples to build a dynamic website
• Learn best practices to make your site more discoverable
• Practice your Kentico CMS skills, from organizing your content to changing the site's look and feel
• Get going with example starter sites, such as a corporate site, an e-commerce site, and a community-driven website, to jumpstart your web development
• Written by Thom Robbins, the Web Evangelist for Kentico Software LLC

Workflow management

Workflow is a way to automate a business process for publishing content. Using workflow allows you to delegate portions of the business process to different users or groups for approval. Kentico CMS allows you to use workflow for all documents, including uploaded files. The workflow engine organizes the process of content creation, updates, and publishing. The following diagram shows an example document lifecycle with workflow. It's important to keep in mind that document versioning is tightly bound to workflow, and allows document comparison and version rollback.

Time for action – configuring workflow

The Kentico CMS workflow process is designed as a state machine. This means that workflows are event-driven. A workflow contains three or more states, with only one state active at any given time. Based on an event, a transition is made to another state. Once a transition is made to the final state, the workflow is completed. Within each workflow step, members of authorized roles are allowed to modify, approve, or reject a document. Now, let's configure a workflow for the News folder:

1. In CMS Site Manager, select the Development tab, select Workflows, and click New workflow, as shown in the following screenshot.
2. For the new workflow, enter the Display Name as Approval and the Code Name as Approval, as shown in the following screenshot.
3. Select the Steps tab and click on the New workflow step link, as shown in the following screenshot.

   Quick tip: The Edit, Published, and Archived steps are automatically created for every workflow and can't be deleted. These steps use the default system security.

4. Enter the required information and select OK.
5. Select the Roles tab and click Add roles, as shown in the following screenshot.
6. Select the CMS Desk Administrators role and select OK.
7. Select the Scope tab and click on the New workflow scope link, as shown in the following screenshot.

   What is a workflow scope? The workflow scope defines the folders, documents, and languages that are included in the workflow.

8. In the Starting alias path, click the Select button, as shown in the following screenshot.
9. Select News and then click the Select button, as shown in the following screenshot.
10. Select OK, as shown in the following screenshot, to save the workflow scope.
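The state-machine idea behind this recipe can be sketched in a few lines of code. The following is an illustrative sketch only, not Kentico code: the Edit, Published, and Archived steps come from the quick tip above, Approval is the step created in this recipe, and the event names are invented for the example.

```php
<?php
// Illustrative sketch of an event-driven workflow state machine -- not
// Kentico code. One state is active at a time; events fire transitions
// until the final state is reached and the workflow completes.
class DocumentWorkflow
{
    private $state = 'Edit';

    // event => [current state => next state]
    private $transitions = array(
        'submit'  => array('Edit'      => 'Approval'),
        'approve' => array('Approval'  => 'Published'),
        'reject'  => array('Approval'  => 'Edit'),
        'archive' => array('Published' => 'Archived'),
    );

    public function fire($event)
    {
        // Transition only if the event is valid for the active state
        if (isset($this->transitions[$event][$this->state])) {
            $this->state = $this->transitions[$event][$this->state];
        }
        return $this->state;
    }
}

$doc = new DocumentWorkflow();
echo $doc->fire('submit'), "\n";   // Approval
echo $doc->fire('approve'), "\n";  // Published
echo $doc->fire('archive'), "\n";  // Archived -- final state reached
?>
```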

OpenCart Themes: Using the jCarousel Plugin

Packt
20 Apr 2011
6 min read
OpenCart 1.4 Template Design Cookbook

Over 50 incredibly effective and quick recipes for building modern, eye-catching OpenCart templates.

Installing jQuery and jCarousel

In this recipe, we will download jQuery and jCarousel, and install them into our store. By default, OpenCart comes with jQuery installed. Carousels have become a popular means of displaying products, and there are many different types of carousels; we will use jCarousel for this recipe, but you can use your favourite one.

Getting started

We need jQuery for our dynamic content throughout the site. jQuery is added by default with OpenCart, but here we will also see how to download and install it, so that you have full control of the process. If jQuery is not present in your site, you can download it from http://docs.jquery.com/Downloading_jquery. We will download the latest jQuery JavaScript file.

How to do it

First, we need to download the required files to use jCarousel: the jQuery JavaScript file, and the jCarousel JavaScript and CSS files.

To check whether our site is jQuery-enabled or not, we can use the Web Developer Firefox add-on. Click on the Information tab of Web Developer; it displays many sub-menus, each with its own purpose. We select View JavaScript, and a new tab will open containing all the JavaScript for the web page (you can contract or expand the links). If our store already has jQuery, we will find it in this page; otherwise, it won't be listed, or it will show a 404 error.

Since our store is jQuery-enabled by default, we don't need to install jQuery. There are several releases available to download. After downloading jQuery, we place it under the catalog/view/javascript/jquery folder.

Now, we will download jCarousel. We can download it from http://sorgalla.com/projects/jCarousel/. Then, extract the compressed file. There will be many files in the extracted folder; under the lib folder, we have an uncompressed and a minified version of jCarousel. We create a folder named jcarousel under catalog/view/javascript/jquery, and then, in the jcarousel folder, we create another folder named js. We place either of the two files under catalog/view/javascript/jquery/jcarousel/js, and bring the skins folder under catalog/view/javascript/jquery/jcarousel.

Displaying products using jCarousel

We installed jQuery and jCarousel in the previous recipe. Here, we will see how we can display products with jCarousel. In this recipe, we are going to use jCarousel for the latest products, but you can use it for other modules as well.

Getting started

First, we open the header.tpl file under catalog/view/theme/shop/template/common. If we don't have jQuery in our site, then we need to add it here. See the following code block:

```html
<script type="text/javascript" src="catalog/view/javascript/jquery/jquery-1.3.2.min.js"></script>
```

How to do it

We will follow the steps below to add jCarousel to our store:

1. First, we need to add the links to the jCarousel JavaScript and CSS files. As we are using jCarousel for displaying the latest products, we place the links for those files in latest_home.tpl under catalog/view/theme/shop/template/module.
2. We add the following links:

```html
<script type="text/javascript" src="catalog/view/javascript/jquery/jcarousel/lib/jquery.jcarousel.js"></script>
<link rel="stylesheet" type="text/css" href="catalog/view/javascript/jquery/jcarousel/skins/tango/skin.css" />
```

3. Now, we modify the HTML element structure to place the products in jCarousel. The current structure is table-based; we will use a <ul> and <li> structure instead. We remove the following tag (the three dots stand for the inner code):

```html
<table class="list">
//...
</table>
```

   And write the following code in its place:

```html
<ul id="latestcarousel" class="jcarousel-skin-tango">
//...
</ul>
```

   Here, the latestcarousel ID is our carousel container ID. There are two skins available for jCarousel: one is tango and the other is ie7. Here, we are using tango.

4. We also remove the tr tag:

```html
<tr>
//...
</tr>
```

5. Now, remove the td tag and the following:

```html
<td style="width: 25%;"><?php if (isset($products[$j])) { ?>
//...
<?php } ?></td>
```

   And replace it with the following code:

```html
<li>
//...
</li>
```

6. Now, we will initialize jCarousel:

```html
<script type="text/javascript">
jQuery(document).ready(function() {
    jQuery('#latestcarousel').jcarousel();
});
</script>
```

7. If we view our changes in the browser, we will see that we need to adjust the height and width of the carousel. To change the width of the carousel, open the skin.css file under catalog/view/javascript/jquery/jcarousel/skins/tango. We change the following code:

```css
.jcarousel-skin-tango .jcarousel-container-horizontal {
    width: 245px;
    padding: 20px 40px;
}
```

   To the following:

```css
.jcarousel-skin-tango .jcarousel-container-horizontal {
    width: auto;
    padding: 20px 40px;
}
```

   Again, if we are going to use jCarousel in other places as well, it is not smart to change skin.css directly. Instead, we can override it in our theme CSS for a specific region, for example with #content .middle .jcarousel-skin-tango .jcarousel-container-horizontal. Here, we have just shown one instance of jCarousel usage. We changed the width to auto, so in the browser the carousel now stretches across the available width. There are styles for both vertical and horizontal carousels in the stylesheet; as we are using the horizontal carousel, we adjust the horizontal styles only.

8. The area showing the product images is small compared to the carousel width, so we set its width to auto as well:

```css
.jcarousel-skin-tango .jcarousel-clip-horizontal {
    width: auto;
    height: 75px;
}
```

   Now, the carousel displays products along its full width.

9. The carousel is not tall enough, so let's change the height in the same rule to 200px:

```css
.jcarousel-skin-tango .jcarousel-clip-horizontal {
    width: auto;
    height: 200px;
}
```

10. To enlarge the height of the product image display area, we change the height in the following rule to 200px as well:

```css
.jcarousel-skin-tango .jcarousel-item {
    width: 75px;
    height: 200px;
}
```

11. We need to adjust the margin property for our product images. We change the margin-right property to 50px, which makes space between the product images (refresh the browser to view the changes):

```css
.jcarousel-skin-tango .jcarousel-item-horizontal {
    margin-left: 0;
    margin-right: 50px;
}
```

12. Finally, we set options for the carousel. There are several options; you can see the available ones at http://sorgalla.com/projects/jcarousel/.
We used the following options, placed in latest_home.tpl, the page in which we are showing jCarousel:

```html
<script type="text/javascript">
jQuery(document).ready(function() {
    jQuery('#latestcarousel').jcarousel({
        scroll: 1,
        visible: 3,
        auto: 3,
        rtl: true,
        wrap: 'circular'
    });
});
</script>
```


Scaling influencers

Packt
11 Aug 2015
27 min read
In this article, written by Adam Boduch, author of the book JavaScript at Scale, the author explains that we don't scale our software systems just because we can. While it's common to tout scalability, these claims need to be put into practice. In order to do so, there has to be a reason for scalable software. If there's no need to scale, then it's much easier, not to mention cost-effective, to simply build a system that doesn't scale. Putting something that was built to handle a wide variety of scaling issues into a context where scale isn't warranted just feels clunky, especially to the end user. So we, as JavaScript developers and architects, need to acknowledge and understand the influences that necessitate scalability.

While it's true that not all JavaScript applications need to scale, it's difficult to say "we know this system isn't going to need to scale in any meaningful way, so let's not invest the time and effort to make it scalable." Unless we're developing a throw-away system, there are always going to be expectations of growth and success. At the opposite end of the spectrum, JavaScript applications aren't born as mature scalable systems. They grow up, accumulating scalable properties along the way. Scaling influencers are an effective tool for those of us working on JavaScript projects: we don't want to over-engineer something straight from inception, and we don't want to build something that's tied down by early decisions, limiting its ability to scale.

The need for scale

Scaling software is a reactive event. Thinking about scaling influencers helps us proactively prepare for these scaling events. In other systems, such as web application backends, these scaling events may be brief spikes, and are generally handled automatically. For example, when there's an increased load due to more users issuing more requests, the load balancer kicks in and distributes the load evenly across backend servers. In the extreme case, the system may automatically provision new backend resources when needed, and destroy them when they're no longer of use.

Scaling events in the frontend aren't like that. Rather, the scaling events that take place generally happen over longer periods of time, and are more complex. The unique aspect of JavaScript applications is that the only hardware resources available to them are those available to the browser in which they run. They get their data from the backend, and the backend may scale up perfectly fine, but that's not what we're concerned with. As our software grows, a necessary side-effect of doing something successfully is that we need to pay attention to the influencers of scale.

The preceding figure shows a top-down flow chart of scaling influencers, starting with users, who require that our software implements features. Depending on various aspects of the features, such as their size and how they relate to other features, this influences the team of developers working on those features. As we move down through the scaling influencers, the effect grows.

Growing user base

We're not building an application for just one user. If we were, there would be no need to scale our efforts. While what we build might be based on the requirements of one user representative, our software serves the needs of many users. We need to anticipate a growing user base as our application evolves.
There's no exact target user count, although, depending on the nature of our application, we may set goals for the number of active users, possibly by benchmarking similar applications using a tool such as http://www.alexa.com/. For example, if our application is exposed on the public internet, we want lots of registered users. On the other hand, we might target private installations, where the number of users joining the system grows a little more slowly. But even in the latter case, we still want the number of deployments to go up, increasing the total number of people using our software.

The number of users interacting with our frontend is the largest influencer of scale. With each user added, along with the various architectural perspectives, growth happens exponentially. Looked at from a top-down point of view, users call the shots. At the end of the day, our application exists to serve them. The better we're able to scale our JavaScript code, the more users we'll please.

Building new features

Perhaps the most obvious side-effect of successful software with a strong user base is the features necessary to keep those users happy. The feature set grows along with the users of the system. This is often overlooked by projects, despite the obviousness of new features. We know they're coming, yet little thought goes into how the endless stream of features going into our code impedes our ability to scale up our efforts.

This is especially tricky when the software is in its infancy. The organization developing the software will bend over backwards to reel in new users, and there's little consequence of doing so in the beginning because the side-effects are limited. There aren't a lot of mature features, there isn't a huge development team, and there's less chance of annoying existing users by breaking something that they've come to rely on. When these factors aren't there, it's easier for us to nimbly crank out features and dazzle existing and prospective users. But how do we force ourselves to be mindful of these early design decisions? How do we make sure that we don't unnecessarily limit our ability to scale the software up, in terms of supporting more features?

New feature development, as well as enhancing existing features, is an ongoing concern in scalable JavaScript architecture. It's not just the number of features listed in the marketing literature of our software that we need to be concerned about. There's also the complexity of a given feature, how much our features have in common with one another, and how many moving parts each of these features has. If the user is the first level when looking at JavaScript architecture from a top-down perspective, each feature is the next level, and from there, it expands out into enormous complexity.

It's not just individual users who make a given feature complex. Instead, it's a group of users that all need the same feature in order to use our software effectively. And from there, we have to start thinking about personas, or roles, and which features are available to which roles. The need for this type of organizational structure isn't made apparent until much later in the game, after we've made decisions that make it difficult to introduce role-based feature delivery. And depending on how our software is deployed, we may have to support a variety of unique use cases.
For example, if we have several large organizations as our customers, each with their own deployments, they'll likely have their own unique constraints on how users are structured. This is challenging, and our architecture needs to support the disparate needs of many organizations if we're going to scale.

Hiring more developers

Making these features a reality requires solid JavaScript developers who know what they're doing, and if we're lucky, we'll be able to hire a team of them. The team part doesn't happen automatically. There's a level of trust and respect that needs to be established before the team members begin to actively rely on one another to crank out some awesome code. Once that starts happening, we're in good shape.

Turning once again to the top-down perspective of our scaling influencers, the features we deliver can directly impact the health of our team. There's a balance that's essentially impossible to maintain, but we can at least get close. Too many features and not enough developers leads to a sense of perpetual inadequacy among team members; when there's no chance of delivering what's expected, there's not much sense in trying. On the other hand, if you have too many developers, and there's too much communication overhead due to a limited number of features, it's tough to define responsibilities. When there's no shared understanding of responsibilities, things start to break down.

It's actually easier to deal with having too few developers for the features we're trying to develop than having too many. When there's a large burden of feature development, it's a good opportunity to step back and ask, "What would we do differently if we had more developers?" This question usually gets skipped. We go hire more developers, and when they arrive, it's to everyone's surprise that there's no immediate improvement in feature throughput. This is why it's best to have an open development culture where there are no stupid questions and where responsibilities are defined.

There's no one correct team structure or development methodology. The development team needs to apply itself to the issues faced by the software we're trying to deliver. The biggest hurdle is, for sure, the number, size, and complexity of features. That's something we need to consider when forming our team initially, as well as when growing the team. This latter point is especially true, because the team structure we used way back when the software was new isn't going to fit what we face when the features scale up.

Architectural perspectives

The preceding section was a sampling of the factors that influence scale in JavaScript applications. Starting from the top, each of these influencers affects the influencer below it. The number and nature of our users is the first and foremost influencer, and it has a direct impact on the number and nature of the features we develop. Furthermore, the size and structure of the development team are influenced by these features. Our job is to take these influencers of scale and translate them into factors to consider from an architectural perspective: scaling influences the perspectives of our architecture, and our architecture, in turn, determines responses to scaling influencers. The process is iterative and never-ending throughout the lifetime of our software.

The browser is a unique environment

Scaling up in the traditional sense doesn't really work in a browser environment.
When backend services are overwhelmed by demand, it's common to "throw more hardware" at the problem. Easier said than done, of course, but it's a lot easier to scale up our data services these days, compared to 20 years ago. Today's software systems are designed with scalability in mind. It's helpful to our frontend application if the backend services are always available and always responsive, but that's just a small portion of the issues we face. We can't throw more hardware at the web browsers running our code; given that, the time and space complexities of our algorithms are important.

Desktop applications generally have a set of system requirements for running the software, such as OS version, minimum memory, minimum CPU, and so on. If we were to advertise requirements such as these in our JavaScript applications, our user base would shrink dramatically, and possibly generate some hate mail. The expectation that browser-based web applications be lean and fast is an emergent phenomenon. Perhaps that's due in part to the competition we face. There are a lot of bloated applications out there, and whether they're used in the browser or natively on the desktop, users know what bloat feels like, and generally run the other way:

JavaScript applications require many resources, all of different types; these are all fetched by the browser, on the application's behalf.

Adding to our trouble is the fact that we're using a platform that was designed as a means to download and display hypertext, to click on a link, and repeat. Now we're doing the same thing, except with full-sized applications. Multi-page applications are slowly being set aside in favor of single-page applications. That being said, the application is still treated as though it were a web page.

Despite all that, we're in the midst of big changes. The browser is a fully viable web platform, the JavaScript language is maturing, and there are numerous W3C specifications in progress that assist with treating our JavaScript more like an application and less like a document. Take a look at the following diagram:

A sampling of the technologies found in the growing web platform

We use architectural perspectives to assess any architectural design we come up with. It's a powerful technique to examine our design through a different lens. JavaScript architecture is no different, especially for architectures that scale. The difference between JavaScript architecture and architecture for other environments is that ours has unique perspectives. The browser environment requires that we think differently about how we design, build, and deploy applications. Anything that runs in the browser is transient by nature, and this changes software design practices that we've taken for granted over the years. Additionally, we spend more time coding our architectures than diagramming them. By the time we sketch anything out, it's been superseded by another specification or another tool.

Component design

At an architectural level, components are the main building blocks we work with. These may be very high-level components with several levels of abstraction. Or, they could be something exposed by a framework we're using, as many of these tools provide their own idea of "components". When we set out to build a JavaScript application with scale in mind, the composition of our components begins to take shape. How our components are composed is a huge limiting factor in how we scale, because they set the standard.
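For instance, here is a minimal sketch of one consistent component shape; the names, and the idea of a message channel passed in from outside, are illustrative assumptions rather than the API of any particular framework:

function createUserList(channel) {
    var state = { users: [] };
    return {
        // Every component exposes the same small surface...
        render: function (el) {
            el.innerHTML = state.users
                .map(function (u) { return '<li>' + u.name + '</li>'; })
                .join('');
        },
        // ...and talks to the rest of the system only through the
        // channel it was given, never by reaching into other components.
        listen: function () {
            channel.subscribe('users.loaded', function (users) {
                state.users = users;
            });
        }
    };
}

When every component follows the same shape, adding the hundredth one is no harder than adding the tenth.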
Components implement patterns for the sake of consistency, and it's important to get those patterns right:

Components have an internal structure. The complexity of this composition depends on the type of component under consideration.

As we'll see, the design of our various components is closely tied to the trade-offs we make in other perspectives. And that's a good thing, because it means that if we're paying attention to the scalable qualities we're after, we can go back and adjust the design of our components in order to meet those qualities.

Component communication

Components don't sit in the browser on their own. Components communicate with one another all the time. There's a wide variety of communication techniques at our disposal here. Component communication could be as simple as method invocation, or as complex as an asynchronous publish-subscribe event system. The approach we take with our architecture depends on our more specific goals. The challenge with components is that we often don't know what the ideal communication mechanism will be until after we've started implementing our application. We have to make sure that we can adjust the chosen communication path:

The component communication mechanism decouples components, enabling scalable structures.

Seldom will we implement our own communication mechanism for our components, not when so many tools exist that solve at least part of the problem for us. Most likely, we'll end up with a concoction of an existing communication tool and our own implementation specifics. What's important is that the component communication mechanism is its own perspective, which can be designed independently of the components themselves.

Load time

JavaScript applications are always loading something. The biggest challenge is the application itself, loading all the static resources it needs to run before the user is allowed to do anything. Then there's the application data. This needs to be loaded at some point, often on demand, and it contributes to the overall latency experienced by the user. Load time is an important perspective, because it hugely contributes to the overall perception of our product quality.

The initial load is the user's first impression, and this is where most components are initialized; it's tough to get the initial load to be fast without sacrificing performance in other areas.

There's a lot we can do here to offset the negative user experience of waiting for things to load. This includes utilizing web specifications that allow us to treat applications and the services they use as installable components in the web browser platform. Of course, these are all nascent ideas, but they are worth considering as they mature alongside our application.

Responsiveness

The second part of the performance perspective of our architecture is concerned with responsiveness. That is, after everything has loaded, how long does it take for us to respond to user input? Although this is a separate problem from that of loading resources from the backend, they're still closely related. Often, user actions trigger API requests, and the techniques we employ to handle these workflows impact user-perceived responsiveness. User-perceived responsiveness is affected by the time taken by our components to respond to DOM events; a lot can happen between the initial DOM event and when we finally notify the user by updating the DOM. Because of this necessary API interaction, user-perceived responsiveness is important.
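As a rough illustration of keeping the UI responsive around a slow API call, consider the following sketch; api.saveNote and the DOM elements are hypothetical stand-ins, not part of any specific library:

saveButton.addEventListener('click', function () {
    status.textContent = 'Saving...';   // acknowledge the click immediately
    saveButton.disabled = true;         // prevent duplicate requests
    api.saveNote(note).then(function () {
        status.textContent = 'Saved';
        saveButton.disabled = false;
    }, function () {
        status.textContent = 'Save failed';
        saveButton.disabled = false;
    });
});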
While we can't make the API go any faster, we can take steps to ensure that the user always has feedback from the UI, and that the feedback is immediate. Then, there's the responsiveness of simply navigating around the UI, using cached data that's already been loaded, for example. Every other architectural perspective is closely tied to the performance of our JavaScript code, and ultimately, to the user-perceived responsiveness. This perspective is a subtle sanity-check for the design of our components and their chosen communication paths.

Addressability

Just because we're building a single-page application doesn't mean we no longer care about addressable URIs. This is perhaps the crowning achievement of the web: unique identifiers that point to the resource we want. We paste them into our browser address bar and watch the magic happen. Our application most certainly has addressable resources; we just point to them differently. Instead of a URI that's parsed by the backend web server, where the page is constructed and sent back to the browser, it's our local JavaScript code that understands the URI:

Components listen to routers for route events and respond accordingly. A changing browser URI triggers these events.

Typically, these URIs will map to an API resource. When the user hits one of these URIs in our application, we'll translate the URI into another URI that's used to request backend data. The component we use to manage these application URIs is called a router, and there are lots of frameworks and libraries with a base implementation of a router. We'll likely use one of these.

The addressability perspective plays a major role in our architecture, because ensuring that the various aspects of our application have an addressable URI complicates our design. However, it can also make things easier if we're clever about it. We can have our components utilize the URIs in the same way a user utilizes links.

Configurability

Rarely does software do what you need it to straight out of the box. Highly configurable software systems are touted as being good software systems. Configuration in the frontend is a challenge because there are several dimensions of configuration, not to mention the issue of where we store these configuration options. Default values for configurable components are problematic too: where do they come from? For example, is there a default language setting that's set until the user changes it? As is often the case, different deployments of our frontend will require different default values for these settings:

Component configuration values can come from the backend server, or from the web browser. Defaults must reside somewhere.

Every configurable aspect of our software complicates its design, not to mention the performance overhead and potential bugs. So, configurability is a large issue, and it's worth the time spent up-front discussing with various stakeholders what they value in terms of configurability. Depending on the nature of our deployment, users may value portability with their configuration. This means that their values need to be stored in the backend, under their account settings. Obviously, decisions like these have backend design implications, and sometimes it's better to get away with approaches that don't require a modified backend service.

Making architectural trade-offs

There's a lot to consider from the various perspectives of our architecture if we're going to build something that scales.
We'll never get everything that we need out of every perspective simultaneously. This is why we make architectural trade-offs: we trade one aspect of our design for another, more desirable aspect.

Defining your constants

Before we start making trade-offs, it's important to state explicitly what cannot be traded. What aspects of our design are so crucial to achieving scale that they must remain constant? For instance, a constant might be the number of entities rendered on a given page, or a maximum level of function call indirection. There shouldn't be a ton of these architectural constants, but they do exist. It's best if we keep them narrow in scope and limited in number. If we have too many strict design principles that cannot be violated or otherwise changed to fit our needs, we won't be able to easily adapt to changing influencers of scale.

Does it make sense to have constant design principles that never change, given the unpredictability of scaling influencers? It does, but only once they emerge and are obvious. So a constant may not be an up-front principle, though we'll often have at least one or two up-front principles to follow. The discovery of these principles may result from the early refactoring of code or the later success of our software. In any case, the constants we use going forward must be made explicit and be agreed upon by all those involved.

Performance for ease of development

Performance bottlenecks need to be fixed, or avoided in the first place where possible. Some performance bottlenecks are obvious and have an observable impact on the user experience. These need to be fixed immediately, because they mean our code isn't scaling for some reason, and they might even point to a larger design issue.

Other performance issues are relatively small. These are generally noticed by developers running benchmarks against code, trying by all means necessary to improve the performance. This doesn't scale well, because these smaller performance bottlenecks that aren't observable by the end user are time-consuming to fix. If our application is of a reasonable size, with more than a few developers working on it, we're not going to be able to keep up with feature development if everyone's fixing minor performance problems. These micro-optimizations introduce specialized solutions into our code, and they're not exactly easy reading for other developers. On the other hand, if we let these minor inefficiencies go, we will manage to keep our code cleaner and thus easier to work with. Where possible, trade off optimized performance for better code quality. This improves our ability to scale from a number of perspectives.

Configurability for performance

It's nice to have generic components where nearly every aspect is configurable. However, this approach to component design comes at a performance cost. It's not noticeable at first, when there are few components, but as our software scales in feature count, the number of components grows, and so does the number of configuration options. Depending on the size of each component (its complexity, number of configuration options, and so forth), the potential for performance degradation increases exponentially. Take a look at the following diagram:

The component on the left has twice as many configuration options as the component on the right. It's also twice as difficult to use and maintain.

We can keep our configuration options around as long as there are no performance issues affecting our users.
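To make the trade-off concrete, compare these two hypothetical constructors; createGrid and createSimpleGrid are invented names, and every option in the first call is a branch that must be validated and checked on every render, whether or not anyone ever changes it:

var grid = createGrid({
    sortable: true,
    filterable: true,
    pageSize: 50,
    virtualScroll: false,
    rowTemplate: null,
    headerTemplate: null,
    locale: 'en'
    // ...and a dozen more options
});

// The narrower component is cheaper to run and easier to reason about.
var simpleGrid = createSimpleGrid({ pageSize: 50 });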
Just keep in mind that we may have to remove certain options in an effort to remove performance bottlenecks. It's unlikely that configurability is going to be our main source of performance issues. It's also easy to get carried away as we scale and add features. We'll find, retrospectively, that we created configuration options at design time that we thought would be helpful, but turned out to be nothing but overhead. Trade off configurability for performance when there's no tangible benefit to having the configuration option.

Performance for substitutability

A problem related to configurability is substitutability. Our user interface performs well, but as our user base grows and more features are added, we discover that certain components cannot be easily substituted with another. This can be a developmental problem, where we want to design a new component to replace something pre-existing. Or perhaps we need to substitute components at runtime.

Our ability to substitute components lies mostly with the component communication model. If the new component is able to send and receive messages and events the same way as the existing component, then it's a fairly straightforward substitution. However, not all aspects of our software are substitutable. In the interest of performance, there may not even be a component to replace. As we scale, we may need to refactor larger components into smaller components that are replaceable. By doing so, we're introducing a new level of indirection, and a performance hit. Trade off minor performance penalties to gain substitutability that aids in other aspects of scaling our architecture.

Ease of development for addressability

Assigning addressable URIs to resources in our application certainly makes implementing features more difficult. Do we actually need URIs for every resource exposed by our application? Probably not. For the sake of consistency, though, it would make sense to have URIs for almost every resource. If we don't have a router and URI generation scheme that's consistent and easy to follow, we're more likely to skip implementing URIs for certain resources.

It's almost always better to take on the added burden of assigning URIs to every resource in our application than to skip URIs for some resources, or worse still, to not support addressable resources at all. URIs make our application behave like the rest of the Web, the training ground for all our users. For example, perhaps URI generation and routes are a constant for anything in our application: a trade-off that cannot happen. Trade off ease of development for addressability in almost every case. The ease of development problem with regard to URIs can be tackled in more depth as the software matures.

Maintainability for performance

The ease with which features are developed in our software boils down to the development team and its scaling influencers. For example, we could face pressure to hire entry-level developers for budgetary reasons. How well this approach scales depends on our code. When we're concerned with performance, we're likely to introduce all kinds of intimidating code that relatively inexperienced developers will have trouble swallowing. Obviously, this impedes the ease of developing new features, and if development is difficult, it takes longer. That does not scale with respect to customer demand.

Developers don't always have to struggle with understanding the unorthodox approaches we've taken to tackle performance bottlenecks in specific areas of the code.
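One way to contain that struggle is to hide the unorthodox code behind a plain wrapper. The following is a sketch with invented names; the fast path assumes both input lists are sorted by a numeric id property:

// The readable public entry point; this is all most developers touch.
function diffLists(before, after) {
    return diffListsFast(before, after);
}

// Hand-tuned hot path: no closures, no intermediate arrays.
function diffListsFast(a, b) {
    var out = [], i = 0, j = 0;
    while (i < a.length && j < b.length) {
        if (a[i].id === b[j].id) { i++; j++; }
        else if (a[i].id < b[j].id) { out.push({ removed: a[i++] }); }
        else { out.push({ added: b[j++] }); }
    }
    while (i < a.length) { out.push({ removed: a[i++] }); }
    while (j < b.length) { out.push({ added: b[j++] }); }
    return out;
}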
We can certainly help the situation by writing quality code that's understandable, and maybe even documentation. But we won't get all of this for free; if we're to support the team as a whole as it scales, we need to pay a short-term productivity penalty for coaching and mentoring.

Trade off ease of development for performance in critical code paths that are heavily utilized and not modified often. We can't always escape the ugliness required for performance purposes, but if it's well-hidden, we'll gain the benefit of the more common code being comprehensible and self-explanatory. For example, low-level JavaScript libraries perform well and have a cohesive API that's easy to use. But if you look at some of the underlying code, it isn't pretty. That's our gain: having someone else maintain code that's ugly for performance reasons.

Our components on the left follow coding styles that are consistent and easy to read; they all utilize the high-performance library on the right, giving our application performance while isolating optimized code that's difficult to read and understand.

Less features for maintainability

When all else fails, we need to take a step back and look holistically at the feature set of our application. Can our architecture support them all? Is there a better alternative? Scrapping an architecture that we've sunk many hours into almost never makes sense, but it does happen. The majority of the time, however, we'll be asked to introduce a challenging set of features that violate one or more of our architectural constants. When that happens, we're disrupting stable features that already exist, or we're introducing something of poor quality into the application. Neither case is good, and it's worth the time, the headache, and the cursing to work with the stakeholders to figure out what has to go.

If we've taken the time to figure out our architecture by making trade-offs, we should have a sound argument for why our software can't support hundreds of features. When an architecture is full, we can't continue to scale. The key is understanding where that breaking threshold lies, so we can better understand and communicate it to stakeholders.

Leveraging frameworks

Frameworks exist to help us implement our architecture using a cohesive set of patterns. There's a lot of variety out there, and choosing a framework is a combination of personal taste and fitness based on our design. For example, one JavaScript application framework will do a lot for us out of the box, while another has even more features, many of which we don't need.

JavaScript application frameworks vary in size and sophistication. Some come with batteries included, and some tend toward mechanism over policy. None of these frameworks were specifically designed for our application. Any purported ability of a framework needs to be taken with a grain of salt. The features advertised by frameworks are applied to a general case, and a simple one at that. Applying them in the context of our architecture is something else entirely.

That being said, we can certainly use a given framework of our liking as input to the design process. If we really like the tool, and our team has experience using it, we can let it influence our design decisions. Just as long as we understand that the framework does not automatically respond to scaling influencers—that part is up to us. It's worth the time investigating which framework to use for our project, because choosing the wrong framework is a costly mistake.
The realization that we should have gone with something else usually comes after we've implemented lots of functionality. The end result is lots of rewriting, replanning, retraining, and re-documenting, not to mention the time lost on the first implementation. Choose your frameworks wisely, and be cautious about coupling yourself too tightly to them.

Summary

Scaling a JavaScript application isn't the same as scaling other types of applications. Although we can use JavaScript to create large-scale backend services, our concern is with scaling the applications our users interact with in the browser. And there are a number of influencers that guide our decision-making process in producing an architecture that scales.

We reviewed some of these influencers, and how they flow in a top-down fashion, creating challenges unique to frontend JavaScript development. We examined the effect of more users, more features, and more developers; we can see that there's a lot to think about. While the browser is becoming a powerful platform, onto which we're delivering our applications, it still has constraints not found on other platforms.

Designing and implementing a scalable JavaScript application requires having an architecture. What the software must ultimately do is just one input to that design. The scaling influencers are key as well. From there, we address different perspectives of the architecture under consideration. Things such as component composition and responsiveness come into play when we talk about scale. These are observable aspects of our architecture that are impacted by influencers of scale. As these scaling factors change over time, we use architectural perspectives as tools to modify our design, or the product, to align with scaling challenges.

Resources for Article:

Further resources on this subject:
Developing a JavaFX Application for iOS [article]
Deploying a Play application on CoreOS and Docker [article]
Developing Location-based Services with Neo4j [article]

Third Party Libraries

Packt
21 Apr 2015
21 min read
In this article by Nathan Rozentals, author of the book Mastering TypeScript, we will see that our TypeScript development environment would not amount to much if we were not able to reuse the myriad of existing JavaScript libraries, frameworks, and general goodness. However, in order to use a particular third party library with TypeScript, we will first need a matching definition file.

Soon after TypeScript was released, Boris Yankov set up a GitHub repository to house TypeScript definition files for third party JavaScript libraries. This repository, named DefinitelyTyped (https://github.com/borisyankov/DefinitelyTyped), quickly became very popular, and is currently the place to go for high-quality definition files. DefinitelyTyped currently has over 700 definition files, built up over time from hundreds of contributors from all over the world. If we were to measure the success of TypeScript within the JavaScript community, then the DefinitelyTyped repository would be a good indication of how well TypeScript has been adopted. Before you go ahead and try to write your own definition files, check the DefinitelyTyped repository to see if there is one already available.

In this article, we will have a closer look at using these definition files, and cover the following topics:

Choosing a JavaScript Framework
Using TypeScript with Backbone
Using TypeScript with Angular

Using third party libraries

In this section of the article, we will begin to explore some of the more popular third party JavaScript libraries, their declaration files, and how to write compatible TypeScript for each of these frameworks. We will compare Backbone and Angular, which are both frameworks for building rich client-side JavaScript applications. During our discussion, we will see that some frameworks are highly compliant with the TypeScript language and its features, some are partially compliant, and some have very low compliance.

Choosing a JavaScript framework

Choosing a JavaScript framework or library to develop Single Page Applications is a difficult and sometimes daunting task. It seems that there is a new framework appearing every other month, promising more and more functionality for less and less code. To help developers compare these frameworks, and make an informed choice, Addy Osmani wrote an excellent article named Journey Through the JavaScript MVC Jungle (http://www.smashingmagazine.com/2012/07/27/journey-through-the-javascript-mvc-jungle/).

In essence, his advice is simple: it's a personal choice, so try some frameworks out, and see what best fits your needs, your programming mindset, and your existing skill set. The TodoMVC project (http://todomvc.com), which Addy started, does an excellent job of implementing the same application in a number of MV* JavaScript frameworks. This really is a reference site for digging into a fully working application, and comparing for yourself the coding techniques and styles of different frameworks.

Again, depending on the JavaScript library that you are using within TypeScript, you may need to write your TypeScript code in a specific way. Bear this in mind when choosing a framework: if it is difficult to use with TypeScript, then you may be better off looking at another framework with better integration. If it is easy and natural to work with the framework in TypeScript, then your productivity and overall development experience will be much better.
We will look at some of the popular JavaScript libraries, along with their declaration files, and see how to write compatible TypeScript. The key thing to remember is that TypeScript generates JavaScript, so if you are battling to use a third party library, then crack open the generated JavaScript and see what the JavaScript code looks like that TypeScript is emitting. If the generated JavaScript matches the JavaScript code samples in the library's documentation, then you are on the right track. If not, then you may need to modify your TypeScript until the compiled JavaScript starts matching up with the samples.

When trying to write TypeScript code for a third party JavaScript framework, particularly if you are working off the JavaScript documentation, your initial foray may just be one of trial and error. Along the way, you may find that you need to write your TypeScript in a specific way in order to match this particular third party library. The rest of this article shows how these libraries require different ways of writing TypeScript.

Backbone

Backbone is a popular JavaScript library that gives structure to web applications by providing models, collections, and views, amongst other things. Backbone has been around since 2010, and has gained a very large following, with a wealth of commercial websites using the framework. According to Infoworld.com, Backbone has over 1,600 Backbone-related projects on GitHub that rate over 3 stars, meaning that it has a vast ecosystem of extensions and related libraries.

Let's take a quick look at Backbone written in TypeScript. To follow along with the code in your own project, you will need to install the following NuGet packages: backbone.js (currently at v1.1.2), and backbone.TypeScript.DefinitelyTyped (currently at version 1.2.3).

Using inheritance with Backbone

From the Backbone documentation, we find an example of creating a Backbone.Model in JavaScript as follows:

var Note = Backbone.Model.extend(
    {
        initialize: function() {
            alert("Note Model JavaScript initialize");
        },
        author: function () { },
        coordinates: function () { },
        allowedToEdit: function(account) {
            return true;
        }
    }
);

This code shows a typical usage of Backbone in JavaScript. We start by creating a variable named Note that extends (or derives from) Backbone.Model. This can be seen with the Backbone.Model.extend syntax. The Backbone extend function uses JavaScript object notation to define an object within the outer curly braces { … }. In the preceding code, this object has four functions: initialize, author, coordinates, and allowedToEdit.

According to the Backbone documentation, the initialize function will be called once a new instance of this class is created. The initialize function simply creates an alert to indicate that the function was called. The author and coordinates functions are blank at this stage, with only the allowedToEdit function actually doing something: return true.

If we were to simply copy and paste the above JavaScript into a TypeScript file, we would generate the following compile error:

Build: 'Backbone.Model.extend' is inaccessible.

When working with a third party library, and a definition file from DefinitelyTyped, our first port of call should be to see if the definition file may be in error. After all, the JavaScript documentation says that we should be able to use the extend method as shown, so why is this definition file causing an error?
If we open up the backbone.d.ts file, and then search to find the definition of the class Model, we will find the cause of the compilation error:

class Model extends ModelBase {
    /**
    * Do not use, prefer TypeScript's extend functionality.
    **/
    private static extend(
        properties: any, classProperties?: any): any;

This declaration file snippet shows some of the definition of the Backbone Model class. Here, we can see that the extend function is defined as private static, and as such, it will not be available outside the Model class itself. This, however, seems to contradict the JavaScript sample that we saw in the documentation. In the preceding comment on the extend function definition, we find the key to using Backbone in TypeScript: prefer TypeScript's extend functionality. This comment indicates that the declaration file for Backbone is built around TypeScript's extends keyword, thereby allowing us to use natural TypeScript inheritance syntax to create Backbone objects. The TypeScript equivalent to this code, therefore, must use the extends TypeScript keyword to derive a class from the base class Backbone.Model, as follows:

class Note extends Backbone.Model {
    initialize() {
        alert("Note model Typescript initialize");
    }
    author() { }
    coordinates() { }
    allowedToEdit(account) {
        return true;
    }
}

We are now creating a class definition named Note that extends the Backbone.Model base class. This class then has the functions initialize, author, coordinates, and allowedToEdit, similar to the previous JavaScript version. Our Backbone sample will now compile and run correctly. With either of these versions, we can create an instance of the Note object by including the following script within an HTML page:

<script type="text/javascript">
    $(document).ready( function () {
        var note = new Note();
    });
</script>

This JavaScript sample simply waits for the jQuery document.ready event to be fired, and then creates an instance of the Note class. As documented earlier, the initialize function will be called when an instance of the class is constructed, so we would see an alert box appear when we run this in a browser.

All of Backbone's core objects are designed with inheritance in mind. This means that creating new Backbone collections, views, and routers will use the same extends syntax in TypeScript. Backbone, therefore, is a very good fit for TypeScript, because we can use natural TypeScript syntax for inheritance to create new Backbone objects.

Using interfaces

As Backbone allows us to use TypeScript inheritance to create objects, we can just as easily use TypeScript interfaces with any of our Backbone objects as well. Extracting an interface for the Note class above would be as follows:

interface INoteInterface {
    initialize();
    author();
    coordinates();
    allowedToEdit(account: string);
}

We can now update our Note class definition to implement this interface as follows:

class Note extends Backbone.Model implements INoteInterface {
    // existing code
}

Our class definition now implements the INoteInterface TypeScript interface. This simple change protects our code from being modified inadvertently, and also opens up the ability to work with core Backbone objects in standard object-oriented design patterns. We could, if we needed to, apply the Factory Pattern to return a particular type of Backbone Model, or any other Backbone object for that matter.
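As a quick sketch of that idea, and noting that the NoteFactory class below is our own invention rather than part of Backbone or its declaration file, a simple factory might look like this:

class NoteFactory {
    static create(kind: string): Backbone.Model {
        switch (kind) {
            case "note":
                return new Note();
            default:
                throw new Error("Unknown model kind: " + kind);
        }
    }
}

var note = <Note> NoteFactory.create("note");

Because Note extends Backbone.Model, the factory can hand back any of our model classes through a single, strongly typed entry point.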
Using generic syntax

The declaration file for Backbone has also added generic syntax to some class definitions. This brings with it further strong typing benefits when writing TypeScript code for Backbone. Backbone collections (surprise, surprise) house a collection of Backbone models, allowing us to define collections in TypeScript as follows:

class NoteCollection extends Backbone.Collection<Note> {
    model = Note;
    //model: Note; // generates compile error
    //model: { new (): Note }; // ok
}

Here, we have a NoteCollection that derives from, or extends, a Backbone.Collection, but also uses generic syntax to constrain the collection to handle only objects of type Note. This means that any of the standard collection functions such as at() or pluck() will be strongly typed to return Note models, further enhancing our type safety and Intellisense.

Note the syntax used to assign a type to the internal model property of the collection class on the second line. We cannot use the standard TypeScript syntax model: Note, as this causes a compile-time error. We need to assign the model property to the class definition, as seen with the model = Note syntax, or we can use the { new(): Note } syntax as seen on the last line.

Using ECMAScript 5

Backbone also allows us to use ECMAScript 5 capabilities to define getters and setters for Backbone.Model classes, as follows:

interface ISimpleModel {
    Name: string;
    Id: number;
}

class SimpleModel extends Backbone.Model implements ISimpleModel {
    get Name() {
        return this.get('Name');
    }
    set Name(value: string) {
        this.set('Name', value);
    }
    get Id() {
        return this.get('Id');
    }
    set Id(value: number) {
        this.set('Id', value);
    }
}

In this snippet, we have defined an interface with two properties, named ISimpleModel. We then define a SimpleModel class that derives from Backbone.Model, and also implements the ISimpleModel interface. We then have ES5 getters and setters for our Name and Id properties. Backbone uses class attributes to store model values, so our getters and setters simply call the underlying get and set methods of Backbone.Model.

Backbone TypeScript compatibility

Backbone allows us to use all of TypeScript's language features within our code. We can use classes, interfaces, inheritance, generics, and even ECMAScript 5 properties. All of our classes also derive from base Backbone objects. This makes Backbone a highly compatible library for building web applications with TypeScript.

Angular

AngularJS (or just Angular) is also a very popular JavaScript framework, and is maintained by Google. Angular takes a completely different approach to building JavaScript SPAs, introducing an HTML syntax that the running Angular application understands. This provides the application with two-way data binding capabilities, which automatically synchronize models, views, and the HTML page. Angular also provides a mechanism for Dependency Injection (DI), and uses services to provide data to your views and models.
The example provided in the tutorial shows the following JavaScript:

var phonecatApp = angular.module('phonecatApp', []);

phonecatApp.controller('PhoneListCtrl', function ($scope) {
    $scope.phones = [
        {'name': 'Nexus S',
         'snippet': 'Fast just got faster with Nexus S.'},
        {'name': 'Motorola XOOM™ with Wi-Fi',
         'snippet': 'The Next, Next Generation tablet.'},
        {'name': 'MOTOROLA XOOM™',
         'snippet': 'The Next, Next Generation tablet.'}
    ];
});

This code snippet is typical of Angular JavaScript syntax. We start by creating a variable named phonecatApp, and register this as an Angular module by calling the module function on the angular global instance. The first argument to the module function is a global name for the Angular module, and the empty array is a placeholder for other modules that will be injected via Angular's Dependency Injection routines.

We then call the controller function on the newly created phonecatApp variable with two arguments. The first argument is the global name of the controller, and the second argument is a function that accepts a specially named Angular variable named $scope. Within this function, the code sets the phones object of the $scope variable to be an array of JSON objects, each with a name and snippet property.

If we continue reading through the tutorial, we find a unit test that shows how the PhoneListCtrl controller is used:

describe('PhoneListCtrl', function(){
    it('should create "phones" model with 3 phones', function() {
        var scope = {},
            ctrl = new PhoneListCtrl(scope);

        expect(scope.phones.length).toBe(3);
    });
});

The first two lines of this code snippet use a global function called describe, and within this function another function called it. These two functions are part of a unit testing framework named Jasmine. We declare a variable named scope to be an empty JavaScript object, and then a variable named ctrl that uses the new keyword to create an instance of our PhoneListCtrl class. The new PhoneListCtrl(scope) syntax shows that Angular is using the definition of the controller just like we would use a normal class in TypeScript.

Building the same object in TypeScript would allow us to use TypeScript classes, as follows:

var phonecatApp = angular.module('phonecatApp', []);

class PhoneListCtrl {
    constructor($scope) {
        $scope.phones = [
            { 'name': 'Nexus S',
              'snippet': 'Fast just got faster' },
            { 'name': 'Motorola',
              'snippet': 'Next generation tablet' },
            { 'name': 'Motorola Xoom',
              'snippet': 'Next, next generation tablet' }
        ];
    }
};

Our first line is the same as in our previous JavaScript sample. We then, however, use the TypeScript class syntax to create a class named PhoneListCtrl. By creating a TypeScript class, we can now use this class as shown in our Jasmine test code: ctrl = new PhoneListCtrl(scope).
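One detail the sample does not show, so treat this line as our own assumption rather than part of the original tutorial: defining the class by itself does not make Angular aware of it, so the class still needs to be registered with the module, along these lines:

phonecatApp.controller('PhoneListCtrl', PhoneListCtrl);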
The constructor function of our PhoneListCtrl class now acts as the anonymous function seen in the original JavaScript sample:

phonecatApp.controller('PhoneListCtrl', function ($scope) {
    // this function is replaced by the constructor
}

Angular classes and $scope

Let's expand our PhoneListCtrl class a little further, and have a look at what it would look like when completed:

class PhoneListCtrl {
    myScope: IScope;
    constructor($scope, $http: ng.IHttpService, Phone) {
        this.myScope = $scope;
        this.myScope.phones = Phone.query();
        $scope.orderProp = 'age';

        _.bindAll(this, 'GetPhonesSuccess');
    }
    GetPhonesSuccess(data: any) {
        this.myScope.phones = data;
    }
};

The first thing to note in this class is that we are defining a variable named myScope, and storing the $scope argument that is passed in via the constructor in this internal variable. This is again because of JavaScript's lexical scoping rules. Note the call to _.bindAll at the end of the constructor. This Underscore utility function will ensure that whenever the GetPhonesSuccess function is called, it will use the variable this in the context of the class instance, and not in the context of the calling code. The GetPhonesSuccess function uses the this.myScope variable within its implementation. This is why we needed to store the initial $scope argument in an internal variable.

Another thing we notice from this code is that the myScope variable is typed to an interface named IScope, which will need to be defined as follows:

interface IScope {
    phones: IPhone[];
}

interface IPhone {
    age: number;
    id: string;
    imageUrl: string;
    name: string;
    snippet: string;
};

This IScope interface just contains an array of objects of type IPhone (pardon the unfortunate name of this interface – it can hold Android phones as well). What this means is that we don't have a standard interface or TypeScript type to use when dealing with $scope objects. By its nature, the $scope argument will change its type depending on when and where the Angular runtime calls it, hence our need to define an IScope interface, and strongly type the myScope variable to this interface.

Another interesting thing to note on the constructor function of the PhoneListCtrl class is the type of the $http argument. It is set to be of type ng.IHttpService. This IHttpService interface is found in the declaration file for Angular. In order to use TypeScript with Angular variables such as $scope or $http, we need to find the matching interface within our declaration file before we can use any of the Angular functions available on these variables.

The last point to note in this constructor code is the final argument, named Phone. It does not have a TypeScript type assigned to it, and so automatically becomes of type any. Let's take a quick look at the implementation of this Phone service, which is as follows:

var phonecatServices =
    angular.module('phonecatServices', ['ngResource']);

phonecatServices.factory('Phone',
    [
        '$resource', ($resource) => {
            return $resource('phones/:phoneId.json', {}, {
                query: {
                    method: 'GET',
                    params: {
                        phoneId: 'phones'
                    },
                    isArray: true
                }
            });
        }
    ]
);

The first line of this code snippet again creates a global variable named phonecatServices, using the angular.module global function.
We then call the factory function available on the phonecatServices variable, in order to define our Phone resource. This factory function uses a string named 'Phone' to define the Phone resource, and then uses Angular's dependency injection syntax to inject a $resource object. Looking through this code, we can see that we cannot easily create standard TypeScript classes for Angular to use here. Nor can we use standard TypeScript interfaces or inheritance on this Angular service.

Angular TypeScript compatibility

When writing Angular code with TypeScript, we are able to use classes in certain instances, but must rely on the underlying Angular functions such as module and factory to define our objects in other cases. Also, when using standard Angular services, such as $http or $resource, we will need to specify the matching declaration file interface in order to use these services. We can therefore describe the Angular library as having medium compatibility with TypeScript.

Inheritance – Angular versus Backbone

Inheritance is a very powerful feature of object-oriented programming, and is also a fundamental concept when using JavaScript frameworks. Using a Backbone controller or an Angular controller within each framework relies on certain characteristics, or functions, being available. Each framework implements inheritance in a different way. As JavaScript does not have the concept of inheritance, each framework needs to find a way to implement it, so that the framework can allow us to extend base classes and their functionality.

In Backbone, this inheritance implementation is via the extend function of each Backbone object. The TypeScript extends keyword follows a similar implementation to Backbone, allowing the framework and language to dovetail with each other. Angular, on the other hand, uses its own implementation of inheritance, and defines functions on the angular global namespace to create classes (that is, angular.module). We can also sometimes use the instance of an application (that is, <appName>.controller) to create modules or controllers. We have found, though, that Angular uses controllers in a very similar way to TypeScript classes, and we can therefore simply create standard TypeScript classes that will work within an Angular application.

So far, we have only skimmed the surface of both the Angular TypeScript syntax and the Backbone TypeScript syntax. The point of this exercise was to try and understand how TypeScript can be used within each of these two third party frameworks. Be sure to visit http://todomvc.com, and have a look at the full source code for the Todo application written in TypeScript for both Angular and Backbone. They can be found on the Compile-to-JS tab in the example section. These running code samples, combined with the documentation on each of these sites, will prove to be an invaluable resource when trying to write TypeScript syntax with an external third party library such as Angular or Backbone.

Angular 2.0

The Microsoft TypeScript team and the Google Angular team have just completed a months-long partnership, and have announced that the upcoming release of Angular, named Angular 2.0, will be built using TypeScript. Originally, Angular 2.0 was going to use a new language named AtScript for Angular development. During the collaboration work between the Microsoft and Google teams, however, the features of AtScript that were needed for Angular 2.0 development have now been implemented within TypeScript.
This means that the Angular 2.0 library will be classed as highly compatible with TypeScript, once the Angular 2.0 library and the 1.5 edition of the TypeScript compiler are available.

Summary

In this article, we looked at two popular third party libraries, and discussed how to integrate them with TypeScript. We explored Backbone, which can be categorized as a highly compliant third party library, and Angular, which is a partially compliant library.

Resources for Article:

Further resources on this subject:
Optimizing JavaScript for iOS Hybrid Apps [article]
Introduction to TypeScript [article]
Getting Ready with CoffeeScript [article]

SilverStripe 2.4: Customizing the Layout

Packt
06 May 2011
10 min read
Templates and themes

All pages within SilverStripe are rendered using a template, which is basically an HTML/XHTML file with added control code. A template allows you to make conditional statements, create menus, access data stored in the database, and much more. Instead of using PHP for these enhancements, templates rely on a dedicated template engine. When the user accesses a page, the engine replaces the control code with PHP. This transformation is then evaluated by the web server to regular markup code, which every browser can display.

A theme consists of a collection of template, CSS, and image files. Design-specific JavaScript is also stored here. Together they form the layout of a page.

Switching between themes

You can have multiple themes and switch between them whenever you like. You're definitely encouraged to give everything described here a try. It helps you become an active developer instead of a passive user. So let's get things started!

Time for action - change the default theme

Simply perform the following steps in the CMS and you're all set:

Log into the CMS by appending /admin to your page's base URL. Provide the credentials you defined during the installation and you should be logged in.
Go to the general configuration page by clicking on the globe within the Page Tree section. The default text of this element is Your Site Name.
The Theme is set to (Use default theme) by default; change this to tutorial.
Click on the Save button.
Go to the base URL / and reload the page.

What just happened?

We've successfully changed the theme of our website. It's as easy as that. Note that you need to package your theme into a subfolder of themes/ to be able to switch between themes. While you can also put the contents of a theme directly into the mysite/ folder, keeping your system modularized is a good practice. Therefore we won't integrate the layout into the site-specific folder and will instead always create a dedicated, switchable theme.

Getting more themes

After unpacking a theme, copy it to the themes/ folder, change the theme in the CMS, and you're good to go. Using a prebuilt theme really is that simple. Next, we'll take a look at the facilities for building our own themes—specifically, the template engine that powers all themes in SilverStripe. By the end of this article and the next, you'll even be ready to upload and share your own themes with the rest of the community. New contributions are always welcome.

Template engine

As described earlier, the SilverStripe architecture consists of three layers, each serving a specific purpose. The one responsible for the layout is called View, which we'll cover in detail in this article.

Another template engine?

One might wonder why it is necessary to learn another template language, while there are already so many others available, and PHP can do all of it as well. The main reasons behind this are:

The template engine and the rest of the system fit perfectly together. After getting the hang of it, it makes the creation of templates and communication with the other layers faster and easier.
The available control code is very simple. Designers without a lot of programming knowledge can create feature-rich layouts.
It enforces good coding practice. Without the ability to use raw PHP, one cannot bypass the other layers and undermine the architecture. Only layout and rendering information should be used within templates.
Taking a look at BlackCandy

First of all, switch back to the BlackCandy theme, as we want to take a better look at it. Within your installation, navigate to the folder themes/blackcandy/ and you'll see three folders: css/, images/, and templates/. The images/ folder contains any image used in your theme; if you like, you can create subfolders to keep things organized. The remaining folders are a bit more complicated, so let's break them down.

CSS

At first this looks a bit messy—five files for the CSS; couldn't we use just a single one? We could, but many developers consider it good practice to split the functionality into different parts, making it easier to navigate later on. Sticking to this convention adds functionality as well. This is a common principle, called convention over configuration. If you do things the way they are expected by the system, you don't need to configure them specifically. Once you know the "way" of the system, you'll work much quicker than if you had to configure the same things over and over again.

editor.css

This file is automagically loaded by SilverStripe's what you see is what you get (WYSIWYG) editor. So if you apply the correct styling via this file, the content in the CMS' backend will look like the final output in the frontend. Additionally, you'll have all custom elements available under the Styles dropdown, so the content editors don't need to mess around with pure HTML. As this file is not automatically linked to the frontend, it's common practice to put all frontend styling information into typography.css and reference that file in editor.css:

@import "typography.css";

If you want to provide any styling information just for the CMS, you can put it below the @import.

layout.css, form.css, and typography.css

These files are automagically included in the frontend if they are available in your theme. While layout.css is used for setting the page's basic sections and layout, form.css deals with the form-related styling. typography.css covers the layout of content entered into the CMS, generally being imported by editor.css as we've just discussed. Elements you will include here are headers (<h1>, <h2>, and so on), text (for example <p>), lists, tables, and others you want to use in the CMS (aligning elements, <hr>, and so on).

ie6.css

This file isn't part of the SilverStripe convention, but is a useful idea you might want to adopt. You must include it manually, but it is still a good idea to stick to this naming schema: ie.css for styling elements in any version of Internet Explorer, ie6.css for Internet Explorer 6 specifics, and so on.

What about performance?

Cutting down on the number of files being loaded is an effective optimization technique for websites. We'll take a look at how to do that.

Templates

Now that we've discussed the styling, let's take a look at how to put the content together with the help of templates.

Learning the very basics

Before we continue, some basic facts and features of templates:

Templates must use the file extension .ss instead of .html or .php.
Templates can be organized alongside their PHP Controllers, so you can use the same powerful mechanism as in the other layers. We'll take a better look at what this means a little later.
Page controls consist of placeholders and control structures, which are both placed within the regular markup language.
Placeholders start with a $ and are processed and replaced by the template engine.
Control structures are written between opening <% and closing %>.
They look a bit like HTML tags and they are used the same way. Some consist of a single element, like HTML's <br>, whereas others consist of two or more.

Starting to use templates

Now that we've covered the basics, let's put them into practice.

Time for action - using site title and slogan

We shall use placeholders, taking a look at the code level and how to use them in the CMS:

Open the file themes/blackcandy/templates/Page.ss in your preferred editor. If the syntax highlighting is disabled due to the unknown file extension, set it to HTML.
Find the two placeholders $SiteConfig.Title and $SiteConfig.Tagline in the code.
Go to the general configuration page in the CMS, the one where we've changed the theme.
Edit the Site title and Site Tagline/Slogan to something more meaningful.
Save the page.
Go to the base URL and hit refresh.

If you view the page's source, you can see that the two placeholders have been replaced by the text we have just entered in the CMS. If you are wondering about the short doctype declaration: there is nothing missing. This is HTML5, which SilverStripe is already using in its default theme. If you prefer XHTML or an older HTML standard, you can freely change it. The CMS happily works with all of them.

What just happened?

The file we've just edited is the base template. It's used for every page (unless specifically overwritten). You can define your general layout once and don't have to repeat it again. You should always try to avoid repeating code or content. This is generally called don't repeat yourself (DRY) or duplication is evil (DIE). SilverStripe supports you very well in doing this.

The site title and slogan are globally available. You can use them on every page and they share the same content across the whole website. These placeholders are prefixed with SiteConfig. as well as a $ sign. In the CMS they are all available on the site's root element, the one with the globe. By default, there are just two, but we'll later see how to add more. Other placeholders are available on specific pages, or can be different on each page. We'll come to those next.

Layout

Looking again at the Page.ss we've just opened, you'll also see a $Layout placeholder. This is replaced by a file from the themes/blackcandy/templates/Layout folder. There are only two files available by default, and for every standard page Page.ss is used.

When a page is loaded, the template engine first finds the base templates/Page.ss (excluding the theme-specific part of the path, as this can vary). It's evaluated and the $Layout is replaced by templates/Layout/Page.ss, which is also evaluated for further SilverStripe controls. Ignore Page_results.ss for the moment; it's only used for search queries, which we'll cover later. We'll also add more page types, so the layout's Page.ss can then be replaced by a more specific template, while the base templates/Page.ss is always used.

Includes

Both Page.ss files include statements like <% include BreadCrumbs %>. These controls are replaced by files from the themes/blackcandy/templates/Includes/ folder. For example, the above include grabs the themes/blackcandy/templates/Includes/BreadCrumbs.ss file. Note that filenames are case sensitive. Otherwise you're free to select a meaningful name. Try sticking to one naming convention to make your own life easier later on, and also note that you mustn't include the .ss extension. If you're seeing a Filename cannot be empty error, make sure the included file really exists.
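Putting these pieces together, a stripped-down base template might look like the following sketch; the Header and Footer includes are made up for illustration, while the placeholders and include syntax are the parts we've just covered:

<html>
  <head>
    <title>$SiteConfig.Title</title>
  </head>
  <body>
    <% include Header %>
    <h1>$SiteConfig.Tagline</h1>
    $Layout
    <% include Footer %>
  </body>
</html>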
Have a go hero - using page name, navigation label, and metadata title

Now that we've explored all available files and how they are used, let's take a better look at the system. In the CMS backend, go to the Home page. Change the text currently entered into Page name, Navigation label (both in the Main tab), and Title (Metadata tab). Take a look at:

- Where on the page they are used
- In which file they are located (take a look at all three possible locations)
- What template placeholders represent them

You might notice $MetaTitle, $Title, and $MenuTitle in your template files (for the moment, ignore the appended .XML). We will explore these later on.
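If you want a head start on that exploration, the following fragment shows the typical division of labor between the three placeholders (a sketch of the common pattern, not necessarily BlackCandy's exact markup):

    <head>
        <!-- MetaTitle falls back to Title when no metadata title is set -->
        <title><% if MetaTitle %>$MetaTitle<% else %>$Title<% end_if %></title>
    </head>
    <body>
        <!-- Title is the page name shown in the content area -->
        <h2>$Title</h2>

        <!-- MenuTitle is the navigation label used in menus -->
        <% control Menu(1) %>
            <a href="$Link">$MenuTitle</a>
        <% end_control %>
    </body>

Here Menu(1) is the standard control for the top-level navigation; the $Link and $MenuTitle placeholders inside it refer to each menu entry in turn.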
article-image-catalyst-web-framework-building-your-own-model
Packt
23 Oct 2009
12 min read
Save for later

Catalyst Web Framework: Building Your Own Model

Packt
23 Oct 2009
12 min read
Extending a DBIx::Class Model

A common occurrence is a situation in which your application has free rein over most of the database, but needs to use a few stored procedure calls to get at certain pieces of data. In that case, you'll want to create a normal DBIC schema and then add methods for accessing the unusual data.

As an example, let's look back to the AddressBook application and imagine that for some reason we couldn't use DBIx::Class to access the user table, and instead needed to write the raw SQL to return an array containing everyone's username. In AddressBook::Model::AddressDB, we just need to write a subroutine to do our work, as follows:

    package AddressBook::Model::AddressDB;
    # other code in the package

    sub get_users {
        my $self    = shift;
        my $storage = $self->storage;
        return $storage->dbh_do(
            sub {
                my $self = shift;
                my $dbh  = shift;
                my $sth  = $dbh->prepare('SELECT username FROM user');
                $sth->execute();
                my @rows = @{$sth->fetchall_arrayref()};
                return map { $_->[0] } @rows;
            });
    }

Here's how the code works. On the first line, we get our DBIC::Schema object and then obtain the schema's storage object. The storage object is what DBIC uses to execute its generated SQL on the database, and is usually an instance of DBIx::Class::Storage::DBI. This class contains a method called dbh_do, which will execute a piece of code, passed to dbh_do as a coderef (or "anonymous subroutine"), and provide the code with a standard DBI database handle (usually called $dbh). dbh_do will make sure that the database handle is valid before it calls your code, so you don't need to worry about things like the database connection timing out. DBIC will reconnect if necessary and then call your code. dbh_do will also handle exceptions raised within your code in a standard way, so that errors can be caught normally.

The rest of the code deals with actually executing our query. When the database handle is ready, it's passed as the second argument to our coderef (the first is the storage object itself, in case you happen to need that). Once we have the database handle, the rest of the code is exactly the same as if we were using plain DBI instead of DBIx::Class. We first prepare our query (which need not be a SELECT; it could be EXEC or anything else), execute it, and finally process the result. The map statement converts the returned data to the form we expect it in: a list of names, instead of a list of rows each containing a single name.

Note that the return statement in the coderef returns to dbh_do, not to the caller of get_users. This means that you can execute dbh_do as many times as required and then further process the results before returning from the get_users subroutine.

Once you've written this subroutine, you can easily call it from elsewhere in your application:

    my @users = $c->model('AddressDB')->get_users;
    $c->response->body('All Users: ' . join(', ', @users));

Custom Methods Without Raw SQL

As the above example doesn't use any features of the database that DBIC doesn't explicitly expose in its resultset interface, let's see how we can implement the get_users function without using dbh_do. Although the preconditions of the example indicated that we couldn't use DBIC, it's good to compare the two approaches so you can decide which way to do things in your application.
Here's another way to implement the above example:

    sub get_users { # version 2
        my $self  = shift;
        my $users = $self->resultset('User');
        my @result;
        while (my $user = $users->next) {
            push @result, $user->username;
        }
        return @result;
    }

This looks like the usual DBIC manipulation that we're used to. (Usually we call $c->model('AddressDB::User') to get the "User" resultset, but under the hood this is the same as $c->model('AddressDB')->resultset('User'). In this example, $self is the same as $c->model('AddressDB').) The above code is cleaner and more portable (across database systems) than the dbh_do method, so it's best to prefer resultsets over dbh_do unless there's absolutely no other way to achieve the functionality you desire.

Calling Database Functions

Another common problem is the need to call database functions on tables that you're accessing with DBIC. Fortunately, DBIC provides syntax for this case, so we won't need to write any SQL manually and run it with dbh_do. All that's required is a second argument to search. For example, if we want to get the count of all users in the user table, we could write (in a controller) the following:

    my $users   = $c->model('AddressDB::User');
    my $counted = $users->search({}, { select => [ { COUNT => 'id' } ],
                                       as     => [ 'count' ] });
    my $count   = $counted->first->get_column('count');

(Note that search returns a new resultset rather than modifying $users in place, so we capture its return value.) This is the same as executing SELECT COUNT(id) FROM user, fetching the first row, and then setting $count to the first column of that row. Note that we didn't specify a WHERE clause, but if we wanted to, we could replace the first {} with the WHERE expression and then get the count of matching rows.

Here's a function that we can place in the User resultset class to get easy access to the user count:

    sub count_users_where {
        my $self      = shift;
        my $condition = shift;
        my $users = $self->search($condition,
            { select => [ { COUNT => 'id' } ],
              as     => [ 'count' ] });
        my $first = $users->first;
        return $first->get_column('count') if $first;
        return 0; # if there is no "first" row, return 0
    }

Now, we can write something like the following:

    my $jons = $c->model('AddressDB::User')->
        count_users_where([ username => { -like => '%jon%' } ]);

to get the number of jons in the database, without having to fetch every record and count them. If you only need to work with a single column, you can also use the DBIx::Class::ResultSetColumn interface.

Creating a Database Model from Scratch

In some cases, you'll have no use for any of DBIC's functionality. DBIC might not work with your database, or perhaps you're migrating a legacy application that has well-tested database queries that you don't want to rewrite. In this sort of situation, you can write the entire database model manually.

In the next example, we'll use Catalyst::Model::DBI to set up the basic DBI layer and then write methods (like we did above) to access the data in the model. As we have the AddressBook application working, we'll add a DBI model and write some queries against the AddressBook database. First, we need to create the model.
We'll call it AddressDBI:

    $ perl script/addressbook_create.pl model AddressDBI DBI DBI:SQLite:database

When you open the generated AddressBook::Model::AddressDBI file, you should see something like this:

    package AddressBook::Model::AddressDBI;
    use strict;
    use base 'Catalyst::Model::DBI';

    __PACKAGE__->config(
        dsn      => 'DBI:SQLite:database',
        user     => '',
        password => '',
        options  => {},
    );

    1; # magic true value required

Once you have this file, you can just start adding methods. The database handle will be available via $self->dbh, and the rest is up to you. Let's add a count_users function:

    sub count_users {
        my $self = shift;
        my $dbh  = $self->dbh;
        my $rows = $dbh->
            selectall_arrayref('SELECT COUNT(id) FROM user');
        return $rows->[0]->[0]; # first row, then the first column
    }

Let's also add a Test controller so that we can see if this method works. First, create the Test controller by running the following command:

    $ perl script/addressbook_create.pl controller Test

And then add a quick test action, as follows:

    sub count_users : Local {
        my ($self, $c) = @_;
        my $count = $c->model('AddressDBI')->count_users();
        $c->response->body("There are $count users.");
    }

You can quickly see the output of this action by running the following command:

    $ perl script/addressbook_test.pl /test/count_users
    There are 2 users.

The addressbook_test.pl script will work for any action, but it works best for test actions like this because the output is plain text and will fit on the screen. When you're testing actual actions in your application, it's usually easier to read the page when you view it in the browser.

That's all there is to it—just add methods to AddressDBI until you have everything you need. The only other thing you might want to do is to add the database configuration to your config file. It works almost the same way for DBI as it does for DBIC::Schema:

    ---
    name: AddressBook
    Model::AddressDBI:
        dsn: "DBI:SQLite:database"
        username: ~
        password: ~
        options:
            option1: something
            # and so on
    # the rest of your config file goes here

Implementing a Filesystem Model

In this final example, we'll build an entire model from scratch, without even the help of a model base class like Catalyst::Model::DBI. Before you do this for your own application, you should check the CPAN to see if anyone's done anything similar already. There are currently about fifty ready-to-use model base classes that abstract data sources like LDAP servers, RSS readers, shopping carts, search engines, Subversion, email folders, web services, and even YouTube. Expanding upon one of these classes will usually be easier than writing everything yourself.

For this example, we'll create a very simple blog application. To post to the blog, you just write some text and put it in a file whose name is the title you want on the post. We'll write a filesystem model from scratch to provide the application with the blog posts. Let's start by creating the app's skeleton:

    $ catalyst.pl Blog

After that, we'll create our Filesystem model:

    $ cd Blog
    $ perl script/blog_create.pl model Filesystem

We'll also use plain TT for the View:

    $ perl script/blog_create.pl view TT TT
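To give a sense of where this is heading: a from-scratch model is just a class extending Catalyst::Model with whatever methods you choose. The sketch below is not the book's implementation (the path config option and the list_posts and get_post method names are illustrative assumptions), but it shows the general shape such a Filesystem model can take:

    package Blog::Model::Filesystem;
    use strict;
    use warnings;
    use File::Spec;
    use base 'Catalyst::Model';

    # Hypothetical config option: the directory that holds the post files
    __PACKAGE__->config( path => 'posts' );

    # Return the titles of all posts (one file per post; filename = title)
    sub list_posts {
        my $self = shift;
        my $dir  = $self->{path};
        opendir my $dh, $dir or die "Cannot open $dir: $!";
        my @titles = grep { -f File::Spec->catfile($dir, $_) } readdir $dh;
        closedir $dh;
        return @titles;
    }

    # Return the body text of a single post
    sub get_post {
        my ($self, $title) = @_;
        my $file = File::Spec->catfile($self->{path}, $title);
        open my $fh, '<', $file or die "Cannot read $file: $!";
        local $/;           # slurp mode
        my $body = <$fh>;
        close $fh;
        return $body;
    }

    1;

A controller could then call $c->model('Filesystem')->list_posts to build an index page, exactly as it would with any other model.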
article-image-creating-design-ez-publish-4-templating-system-part-2
Packt
19 Nov 2009
8 min read
Save for later

Creating a Design with eZ Publish 4 Templating System: Part 2

Packt
19 Nov 2009
8 min read
eZ Webin

For this article, it is assumed that the eZ Webin package is installed as the frontend for our site. This package is very flexible and is usually used as a starting point for developing a new site. By default, it includes:

- A flexible layout
- Some useful custom content classes (blog, event, forum, article, and so on)
- Web 2.0 features, such as a tag cloud and comment functions
- Custom template operators

In our project, we will extend and override the eZ Webin template in order to create the Packtmedia Magazine site and add some features needed for the project. We will see this step by step as we get a better understanding of how eZ Publish works.

Overriding the standard page layout

The page layout is the main template and defines the style of the entire site. To create a page layout template, we need to create a file named pagelayout.tpl and place it inside the templates folder of our design extension.

As we said, we will work with eZ Webin. This extension doesn't use the standard page layout, but overrides it with its own custom behavior. We need to do the same, overriding the eZ Webin pagelayout.tpl. To override the template, we have to copy it into our design's extension folder, placed in extension/packtmedia/design/magazine/templates/. Now open a shell and execute this:

    # cd /var/www/packtmediaproject/extension
    # cp ezwebin/design/ezwebin/templates/pagelayout.tpl packtmedia/design/magazine/templates/

We will use this new pagelayout.tpl file to implement the wireframe that we developed in the previous sections.

Section for our project

eZ Publish includes features for creating a particular view in order to add content objects inside specified sections. For example, if we take a look at our wireframe, we need to assign a different layout for rendering the Issue archive folder and its subfolders. To do this, we have to create a new section in the administration panel and associate it with the entire Issue archive subtree. After that, we can use the fetch functions to select the correct view for that section.

Creating a new section

To create a new section, we have to open our browser and, from the site's backend, select the Setup tab from the top menu. We then need to navigate to the Sections link in the left-hand menu, and then click on the New section button. Next, we will create a new section called Archive and select the default Content structure value in the select menu.

Now, a new Archive link will appear in the Sections list. We have to click on the + button to the left of the Archive link, and then select the Issue archive node by selecting the relevant checkbox. After we have saved, click on the Select button. All of the Issue archive subfolders will be placed inside the Archive section.

We have to remember the ID of this section, which we'll use to create the override rules. In this case, the section ID number is 6, as seen in the first screenshot in the Creating a new section section.

Setting up the section permission access

By default, eZ Publish creates private sections that only an administrator can access. To make a section public, we need to give read permission to anonymous users. To set up the rules, we have to go back to the Setup tab on the top menu, and then click on the Roles and policies link on the left-hand menu. Here, we have to click on the Edit button on the right-hand side of the Anonymous link, and then click on the New policy button. Next, select the content option in the Module field, and then click on the Grant access to one function button.
Select the read option in the Function field, and then click on the Grant limited access button. Next, select the Anonymous option for the Section field. Click on the OK button, and then click on the OK button on the Edit <Anonymous> Role page. Now, the anonymous user can access the Archive section. In the next paragraph, we will use this section to create custom override rules.

Customizing the page layout

After copying the pagelayout.tpl template into the new path, we have to work on it in order to create the three columns inside the content layout of the eZ Webin template. To do this, first of all, we have to remove the leftmost sidebar, along with the secondary navigation menu, inside the Archive section that we have created.

Open the pagelayout.tpl file that you copied, in your favorite IDE, and take a look at the code. At line 62, we will find the following code:

    {if and( is_set( $content_info.class_identifier ),
             ezini( 'MenuSettings', 'HideLeftMenuClasses', 'menu.ini' )|contains( $content_info.class_identifier ) )}
    {set $pagestyle = 'nosidemenu noextrainfo'}

Here, eZ Webin hides the side menu if the content class belongs to the array returned by the ezini operator. We now need to extend the if statement and add a check on the section ID, by using the following code:

    {if or( and( is_set( $content_info.class_identifier ),
                 ezini( 'MenuSettings', 'HideLeftMenuClasses', 'menu.ini' )|contains( $content_info.class_identifier ) ),
            $module_result.section_id|eq( 6 ) )}
    {set $pagestyle = 'nosidemenu noextrainfo'}

As we can see, this code will now check whether the browsed section has an ID equal to 6 (that is, the Archive section ID that we previously created) and, if it has, will hide the unnecessary sidebar.

CSS editing

Luckily, the entire template code of eZ Webin is strongly semantic, and all of the elements have their own IDs and classes. Thanks to this, we can change a lot of things by simply working on the CSS. By default, the CMS uses six CSS files. These are:

- core.css: the global stylesheet where all of the standard tag styles for eZ Publish are defined; usually, this file is overridden by all of the others
- webstyletoolbar.css: the stylesheet imported for the frontend web toolbar that is used for editing the content
- pagelayout.css: where all of the default styles of the global page layout are defined
- content.css: where all the default styles of the content classes are defined
- site-colors.css: used to override pagelayout.css to skin a site differently
- classes-colors.css: used to override the default styles defined by the content.css file

To edit the CSS, we have to copy the original eZ Webin stylesheets from the /var/www/packtmediaproject/extension/ezwebin/design/ezwebin/stylesheets folder to our design directory, by executing the following commands:

    # cd /var/www/packtmediaproject/extension/
    # cp -rf ezwebin/design/ezwebin/stylesheets/* packtmedia/design/magazine/stylesheets/

Now, every time we want to change the stylesheet, we have to remember to edit the CSS files in the design/magazine/stylesheets/ directory of our extension.

Creating a new style package

In eZ Publish, as we did for the extension, it's possible to create a portable style package, so we can share and reuse our custom style on other sites. We can do this by navigating to the backend admin site and uploading the new stylesheet that we want to use.
First, we have to create our CSS files by using our favorite CSS editor; we have to remember that they will override the default styles, so we only need to add the lines that we want to change. After we create the new stylesheet files, we have to open the browser, click on the Setup tab, and then click on the Packages link in the left-hand sidebar.

The system will ask us where we want to create our new package. We will select the local repository and click on the Create new package button. eZ Publish will then ask us which kind of package we want to create. We have to select the Site style wizard, and then click on the Create new package button.

We can now choose a thumbnail for the style that we are uploading, or continue without one. After selecting the thumbnail, the wizard will ask us to choose the CSS files that we previously created. Select them, and then click on the Next button.

With the wizard, we will also upload one or more images, for example a new logo file, or other images related to the CSS. To not upload any files, we simply have to click on the Next button without selecting a file in the form. We have to remember that all of the images that we upload will be saved in a subfolder named images, which will be placed in the same directory as the stylesheet. This will be useful when we need to set the relative paths of the images used inside the CSS.

We can now add the package information and export it to our PC (if required). The new style package will automatically be installed in the eZ Publish package folder. It will be accessible from the Design tab, via the sidebar's Look and Feel link. If we select the package and clear the cache, it will automatically be applied to the frontend.

Summary

In this article, we learned the basics of the templating system of eZ Publish. We worked on template functions and operators, and also learned how to extend the default WYSIWYG editor of eZ Publish. Moreover, we created the site wireframe and learned how the design overriding feature works. We also created a new stylesheet package and applied it to our extension.
article-image-taxonomy-and-thesauri-drupal-6
Packt
08 Oct 2009
6 min read
Save for later

Taxonomy and Thesauri in Drupal 6

Packt
08 Oct 2009
6 min read
What and Why?

Taxonomy is described as the science of classification. In terms of how it applies to Drupal, it is the method by which content is organized using several distinct types of relationship between terms. Simple as that! This doesn't really encompass how useful it is, though. But before we move on to that, there is a bit of terminology to pick up first:

- Term: A term used to describe content (also known as a descriptor)
- Vocabulary: A grouping of related terms
- Thesaurus: A categorization of content that describes "is similar to" relationships
- Taxonomy: A categorization of content into a hierarchical structure
- Tagging: The process of associating a term (descriptor) with content
- Synonym: Can be thought of as another word for the current term

It may help to view the following diagram in order to properly grasp how these terms inter-relate. This serves to illustrate the fact that there are two main types of vocabulary. Each type consists of a set of terms, but the relationships between them are different, in that a taxonomy deals with a hierarchy of information, and a thesaurus deals with relationships between terms. The terms (shown as small boxes) and their relationships (shown as arrows) play a critical role in how content is organized.

One of the things that makes the Drupal taxonomy system so powerful is that it allows content to be categorized on the fly (as and when it is created). This unburdens administrators, because it is no longer necessary to moderate every bit of content coming into the site in order to put it into pre-determined categories. We'll discuss these methods in some detail in the coming sections, but it's also worth noting quickly that it is possible to tag a given node more than once. This means that content can belong to several vocabularies at once. This is very useful for cross-referencing purposes, because it highlights relationships between terms or vocabularies through the actual nodes.

Implementing Controlled Taxonomies in Drupal

The best way to talk about how to implement some form of categorization is to see it in action. There are quite a few settings to work with and consider in order to get things up and running. Let's assume that the demo site has enlisted a large number of specialists who will maintain their own blogs on the website, so that interested parties can keep tabs on what's news according to the people in the know.

Now, some people will be happy with visiting their blog of choice and reading over any new postings there. Some people, however, might want to be able to search for specific topics in order to see if there are correlations or disagreements between bloggers on certain subjects. As there is going to be a lot of content posted once the site has been up and running for a few months, we need some way to ensure that specific topics are easy to find, regardless of who has been discussing them on their blogs.

Introduction to Vocabularies

Let's quickly discuss how vocabularies are dealt with in the administration tool in order to work out how to go about making sure this requirement is satisfied. If you click on the Taxonomy link under Content management, you will be presented with a page listing the current vocabularies. Assuming you have created a forum before, you should have something like this:

Before we look at editing terms and vocabularies, let's take a look at how to create a vocabulary for ourselves.
Click on the add vocabulary tab to bring up the following page, which we can use to create a vocabulary manually:

By way of example, this vocabulary will deal with the topic of hunting. This vocabulary only applies to blog entries, because that is the only content (or node) type for which it is enabled—you can select as many or as few as you like, depending on how many content types it should apply to. Looking further down the page, there are several other options that we will discuss in more detail shortly. Clicking on Submit adds this vocabulary to the list, so that the main page now looks like this:

So far so good, but this will not be of much use to us as it stands! We need to add some terms (descriptors) in order to allow tagging to commence.

Dealing with Terms

Click on the add terms link for the Hunting vocabulary to bring up the following page:

The term Trapping has been added here, with a brief description of the term itself. We could, if we chose, associate the term Poaching with Trapping by making it a related term or synonym (of course, you would need to create this term first in order to make it a related term). Click on the Advanced options link to expose the additional features, as shown here:

In this case, the term Trapping is specified as being related to Poaching and, by way of example, gin traps is a synonym. Synonyms don't actually do anything useful at the moment, so don't pay too much mind to them yet, but there are modules that expose additional functionality based on related terms and synonyms, such as the Similar by Terms module.

The Parents option at the start of the Advanced options warrants a closer inspection, but as it relates more closely to the structure of hierarchies, we'll look at it in the section on hierarchies that's coming up. For now, add a few more terms to this vocabulary so that the list looks something like this:

It's now time to make use of these terms by posting some blog content.

Posting Content with Categories Enabled

Using any account with the requisite permissions to add blog content, attempt to post to the site. You should now be able to view the newly inserted Hunting category, as shown here:

Now comes the clever bit! Once this blog node has been posted, users can view the blog as normal, except that it now has its term displayed along with the post (bottom right):

Where does the descriptor link take us? Click on the term, in this case Canned hunting, and you will be taken to a page listing all of the content that has been tagged with this term. This should really have you quite excited, because with very little work, users can now find focused content without having to look that hard—this is what content management is all about!
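For the curious, that term page isn't magic; it's built on Drupal 6's taxonomy API, which you can call from your own modules as well. The following is a minimal sketch (the function my_module_list_tagged_titles is a hypothetical example, not part of Drupal) of how to fetch everything tagged with a given term ID programmatically:

    <?php
    /**
     * Hypothetical helper: return the term name and the titles of all
     * nodes tagged with the given taxonomy term ID (Drupal 6 API).
     */
    function my_module_list_tagged_titles($tid) {
      $term = taxonomy_get_term($tid);          // load the term object
      $titles = array();
      // taxonomy_select_nodes() runs the same query the term page uses
      $result = taxonomy_select_nodes(array($tid));
      while ($row = db_fetch_object($result)) {
        $node = node_load($row->nid);           // full node, if needed
        $titles[] = $node->title;
      }
      return array($term->name, $titles);
    }
    ?>

This is essentially what happens when a visitor clicks Canned hunting: Drupal resolves the term ID from the URL (taxonomy/term/tid) and lists every node carrying that term.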
article-image-getting-started-magento-development
Packt
15 Dec 2010
8 min read
Save for later

Getting Started with Magento Development

Packt
15 Dec 2010
8 min read
Magento is an open-source eCommerce stack with smart features such as layered navigation, auto-complete search, multiple language support, multiple stores, smart browsing, RSS product feeds, tagging and reviewing products, reviewing search terms, reviewing customer tags, a poll manager, currency exchange rates, Google sitemap integration, an abandoned shopping cart report, catalog-wide product comparisons, product wish lists, and even zooming the product image. It was created by Varien, based on the top-notch MVC framework Zend Framework, on March 31, 2008. As with other MVC applications, Magento keeps the display logic separated from the application's business logic. A dedicated team works on updating Magento regularly.

Magento developer is a very hands-on role for both novice and experienced software engineers who are interested in achieving high impact in a fast-paced development environment with an ambitious mission. Magento gives you a very handy way to deal with its features, faster than any other alternative, whether you know a little or a lot about PHP programming. Let's see how these things happen. We will make our development platform ready to cook some mouth-watering recipes for Magento in this article. If you are a newbie, this is the right place to start. If you are a pro, this is still the right place to start, as we have tried to follow some best practices for Magento development that you might not be aware of. Let's get our hands dirty with Magento development! Good luck!

Preparing the platform with a virtual host

Magento is built on the de facto PHP framework—Zend Framework. We need to set up our development environment properly to get the most out of it. In this recipe, we will set up a Fully Qualified Domain Name (FQDN) and a virtual host. We could use a domain such as http://localhost/magento, but in that case accessing the admin panel might be cumbersome; in most cases, you have to access it through the local IP address. Using a FQDN, you don't have to worry about that. A FQDN will also make the debugging process much easier.

Getting Ready

We need to have the following things installed before kicking off the setup of a virtual host, a.k.a. vhost:

- Apache2
- PHP5
- MySQL server 5.0

If we want to install the previously mentioned tools from the Ubuntu Linux CLI, we can easily do that by running the following commands. We will be using Linux commands based on Ubuntu.

The following command is a basic Apache installation command:

    sudo aptitude install apache2 apache2.2-common apache2-mpm-prefork apache2-utils libexpat1 ssl-cert

The following command is a basic PHP5 installation command:

    sudo aptitude install libapache2-mod-php5 php5 php5-common php5-curl php5-dev php5-gd php5-imagick php5-mcrypt php5-memcache php5-mhash php5-mysql php5-pspell php5-snmp php5-sqlite php5-xmlrpc php5-xsl

Enter the following command to begin with a simple MySQL installation:

    sudo aptitude install mysql-server mysql-client libmysqlclient15-dev

Note that we have installed the development libs and headers with the libmysqlclient15-dev package. You can leave that out, but it has been found that they are useful in many situations. Alternatively, we can use an all-in-one package, such as XAMPP, to get all the aforementioned tools in one click. The XAMPP package can be downloaded from http://www.apachefriends.org/en/xampp.html.

How to do it...
To test the domain without creating a DNS zone and record(s) on some Internet nameserver(s), let's modify the /etc/hosts file on our local computer to include some entries mapping magento.local.com, and so on, to the local machine's public IP address (in most cases, 127.0.0.1).

Open the /etc/hosts file with any text editor, or run the following command in a terminal:

    sudo nano /etc/hosts

Add the following line as a similar entry in the /etc/hosts file:

    127.0.0.1 magento.local.com

The location of the hosts file depends on the OS loaded. On a Windows OS, it should be c:\windows\system32\drivers\etc\hosts.

Now let's create the layout for our domain. Open a terminal and change your location to the WEBROOT directory by running the following command:

    cd /var/www/

If you are using XAMPP, it would be c:\xampp\htdocs.

Now, for our domain, we want to create a folder with a standard set of subfolders:

    mkdir -p magento.local.com/{public,private,log,cgi-bin,backup}

Let's create an index.html file for our domain:

    sudo nano /var/www/magento.local.com/public/index.html

It's time to put some content in our index file:

    <html>
        <head>
            <title>Welcome to magento.local.com</title>
        </head>
        <body>
            <h1>Welcome to magento.local.com</h1>
        </body>
    </html>

We've set up the basics, and now we're ready to add our own virtual hosts, so that we can start to serve our domain. Let's go ahead and create the vhost file for magento.local.com:

    sudo nano /etc/apache2/sites-available/magento.local.com

The contents should look like this:

    # Place any notes or comments you have here
    # It will make any customization easier to understand in the weeks to come
    # domain: magento.local.com
    # public: /var/www/magento.local.com/public
    <VirtualHost *:80>
        # Admin email, Server Name (domain name), and any aliases
        ServerAdmin webmaster@magento.local.com
        ServerName magento.local.com

        # Index file and Document Root (where the public files are located)
        DirectoryIndex index.php index.html
        DocumentRoot /var/www/magento.local.com/public

        # Custom log file locations
        LogLevel warn
        ErrorLog /var/www/magento.local.com/log/error.log
        CustomLog /var/www/magento.local.com/log/access.log combined
    </VirtualHost>

Now that we have the site available, we need to enable it:

    sudo a2ensite magento.local.com

It's also a good idea to reload Apache:

    sudo /etc/init.d/apache2 reload

With these changes made, we can now navigate to our site in a web browser on our local computer, as seen in the following screenshot:

Ta-da! We now have the contents of public/index.html being shown.

How it works...

The Getting Ready section describes the installation process of Apache, PHP, and MySQL from the Linux command line. If you have already installed those, you can skip it and start configuring the layout and vhost directly. The Magento source files will be put in the public directory; the other directories serve the purposes their names suggest. Usually, on a production server, we don't have to write the domain name into the hosts file manually as we did here, as a DNS zone exists for it. The rest of this recipe is the same as on a production server. The virtual host contents have some inline comments, starting with #, to describe their purpose.

Setting up Subversion/SVN

Subversion is a popular version control system initiated in 2000 by CollabNet Inc. Its goal is to be a mostly compatible successor to the widely used Concurrent Versions System (CVS).

Getting Ready

If you are a Windows user, please download the TortoiseSVN client from http://tortoisesvn.tigris.org/; if you are a Linux user, fire up your terminal.
How to do it...

Execute the following command in the terminal to install Subversion:

    sudo apt-get install subversion

Make sure you have an active Internet connection, as aptitude will install the package from the Ubuntu repository. If you are using Windows and intend to use TortoiseSVN, you may get a Windows binary from http://tortoisesvn.tigris.org. After downloading the TortoiseSVN installer file, double-click on it and follow the on-screen instructions.

Now let's add a new group called subversion to our system and add the current username to the subversion group:

    sudo addgroup subversion
    sudo adduser <your_username_here> subversion

If you don't have a clue about your current username, issue the command whoami in your terminal.

Once the Subversion installation is completed, let's create our repository for the Magento project. Now issue the following commands in the terminal:

    sudo mkdir /home/svn
    cd /home/svn
    sudo mkdir magento
    sudo chown -R www-data:subversion magento
    sudo chmod -R g+rws magento

Now initialize an SVN repository with the following command:

    sudo svnadmin create /home/svn/magento

In the case of TortoiseSVN, open the folder where you want to put your Magento repository, right-click, and select Create repository here... from the TortoiseSVN context menu. Enter a name for your repository; in this case, the repository name would be magento.

It's time to change your current working directory in the terminal:

    cd /var/www/magento.local.com

We will check out the Magento project by issuing the following command in the terminal:

    svn co file:///home/svn/magento public

How it works...

When we execute any apt-get command, aptitude takes care of the installation process. It will look for the package in the Ubuntu central repository—downloading it if necessary—and perform the necessary tasks. After completion of the installation process, we created a new group named subversion and added the username to it. Then we created the /home/svn directory to hold all repositories. After that, we created a new repository for our Magento project, named magento. Finally, we checked out the repository into our site location, in the public directory.
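One follow-up worth considering once the Magento source files are added to this working copy: Magento writes caches and sessions at runtime, and you normally don't want those churning through version control. The commands below are a common pattern rather than a required step (the directory names assume a standard Magento layout where runtime data lives under var/):

    # from the checked-out working copy
    cd /var/www/magento.local.com/public

    # tell Subversion to ignore Magento's runtime data inside var/
    svn propset svn:ignore "cache
    session
    log
    report" var

    # record the property change in the repository
    svn commit -m "Ignore Magento runtime directories"

With the svn:ignore property set, svn status stays readable and accidental commits of cache files are avoided.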
article-image-debugging-and-validation-wordpress-theme-design
Packt
07 Oct 2009
15 min read
Save for later

Debugging and Validation in WordPress Theme Design

Packt
07 Oct 2009
15 min read
As you work on and develop your own WordPress themes, you will no doubt discover that life will go much more smoothly if you debug and validate at each step of your theme development process. The full process will pretty much go like this: add some code, check that the page looks good in Firefox, validate, then check it in IE and any other browsers you and your site's audience use, validate again if necessary, add the next bit of code... repeat as necessary until your theme is complete.

Don't Forget About Those Other Browsers and Platforms

I'll mostly be talking about working in Firefox and then 'fixing' for IE. This is perhaps, unfairly, assuming you're working on Windows or a Mac and that the source of all your design woes will (of course) be Microsoft IE's fault. You must check your theme in all browsers and, if possible, on other platforms, especially the ones you know your audience uses the most.

I surf with Opera a lot and find that sometimes JavaScripts can 'hang' or slow that browser down, so I debug and double-check scripts for that browser. I'm a freelance designer and find that a lot of people who are also in the design field use a Mac (like me), and visit my sites using Safari, so I occasionally take advantage of this and write CSS that caters to the Safari browser. (Safari will interpret some neat CSS 3 properties that other browsers don't yet.) Generally, if you write valid markup and code that looks good in Firefox, it will look good in all the other browsers (including IE). Markup and code that goes awry in IE is usually easy to fix with a work-around.

Firefox is a tool, nothing more!

Firefox contains features and plug-ins that we'll be taking advantage of to help us streamline the theme development process and aid in the validation and debugging of our theme. Use it just like you use your HTML/code editor or your image editor. When you're not developing, you can use whatever browser you prefer.

Introduction to Debugging

Have a look at our initial work-flow chart. I was insistent that your work-flow pretty much be: edit -> check it -> then go back and edit some more. The main purpose of visually checking your theme in Firefox after adding each piece of code is so that you can see if it looks OK, and if not, immediately debug that piece of code. Running a validation check as you work just doubly ensures you're on the right track. So, your work-flow really ends up looking something more like this:

You want to work with nice, small pieces or 'chunks' of code. I tend to define a chunk in XHTML markup as no more than one div section, the internal markup, and any WordPress template tags it contains. When working with CSS, I try to only work with one id or class rule at a time. Sometimes, while working with CSS, I'll break this down even further and test after every property I add to a rule, until the rule looks as I intend and validates.

As soon as you see something that doesn't look right in your browser, you can check for validation and then fix it. The advantage of this work-flow is that you know exactly what needs to be fixed and what XHTML markup or PHP code is to blame. All the code that was looking fine and validating before, you can ignore. The recently added markup and code is also the freshest in your mind, so you're more likely to realize the solution needed to fix the problem.
If you add too many chunks of XHTML markup or several CSS rules before checking it in your browser, and then discover something has gone awry, you'll have twice as much sleuthing to do in order to discover which bit (or bits) of markup and code are to blame.

Again, your fail-safe is your backup. You should be regularly saving backups of your theme at good, stable stopping points. If you discover that you just can't figure out where the issue is, rolling back to your last stable stopping point and starting over might be your best bet for getting back on track.

Here, you'll primarily design for Firefox and then apply any required fixes, hacks, and workarounds to IE. You can do that for each piece of code you add to your theme. As you can see in the preceding figure, first check your theme in Firefox and, if there's a problem, fix it for Firefox first. Then, check it in IE and make any adjustments for that browser.

At this point, you guessed it, more than half of the debugging process will depend directly on your own eyeballs and aesthetics. If it looks the way you intended it to look and works the way you intended it to work, check that the code validates and move on. When one of those three things doesn't happen (it doesn't look right, work right, or validate), you have to stop and figure out why.

Troubleshooting Basics

Suffice it to say, it will usually be obvious when something is wrong with your WordPress theme. The most common reasons for things being 'off' are:

- Mis-named, mis-targeted, or inappropriately-sized images.
- Markup text or PHP code that affects or breaks the Document Object Model (DOM) due to being inappropriately placed or having syntax errors in it.
- WordPress PHP code copied over incorrectly, producing PHP error displays in your template, rather than content.
- CSS rules that use incorrect syntax or conflict with later CSS rules.

The first point is pretty obvious when it happens. You see no images, or worse, you might get those little ugly 'x'd' boxes in IE if they're called directly from the WordPress posts or pages. Fortunately, the solution is also obvious: you have to go in and make sure your images are named correctly if you're overwriting standard icons or images from another theme. You also might need to go through your CSS file and make sure the relative paths to the images are correct. For images that are not appearing correctly because they were mis-sized, you can go back to your image editor, fix them, and then re-export them, or you might be able to make adjustments in your CSS file to display a height and/or width that is more appropriate to the image you designed.

Don't forget about casing!

If by some chance you happen to be developing your theme with an installation of WordPress on a local Windows machine, do be careful with the upper and lower casing in your links and image paths. Chances are, the WordPress installation that your theme is going to be installed into is more likely to be on a Unix or Linux web server. For some darn reason, Windows (even if you're running Apache, not IIS) will let you reference and call files with only the correct spelling required. Linux, in addition to spelling, requires the upper and lower casing to be correct. You must be careful to duplicate exact casing when naming images that are going to be replaced and/or when referencing your own image names via CSS.
Otherwise, it will look fine in your local testing environment, but you'll end up with a pretty ugly theme when you upload it into your client's installation of WordPress for the first time (which is just plain embarrassing).

For the latter two points, one of the best ways to debug syntax errors that cause visual 'wonks' is not to have syntax errors in the first place (don't roll your eyes just yet). This is why, in the last figure of our expanded work-flow chart, we advocate that you not only visually check your design as it progresses in Firefox and IE, but also test for validation.

Why Validate?

Hey, I understand: it's easy to add some code, run a visual check in Firefox and IE, see that everything looks OK, and then flip right back to your HTML editor to add more code. After all, time is money and you'll just save that validation part until the very end. Besides, validation is just icing on the cake. Right?

The problem with debugging purely based on visual output is that all browsers (some more grievously than others) will try their best to help you out and properly interpret less-than-ideal markup. One piece of invalid markup might very well look OK initially, until you add more markup and the browser can't interpret your intentions between the two types of markup anymore. The browser will pick its own best option and display something guaranteed to be ugly.

You'll then go back and futz around with the last bit of code you added (because everything was fine until you added that last bit, so that must be the offending code), which may or may not fix the problem. The next bits of code might create other problems, and what's worse, you'll recognize a code chunk that you know should be valid! You're then frustrated, scratching your head as to why the last bit of code you added is making your theme 'wonky' when you know, without a doubt, it's perfectly fine code!

The worst-case scenario I tend to see of this type of visual-only debugging is that theme developers get desperate and start randomly making all sorts of odd hacks and tweaks to their markup and CSS to get it to look right. Miraculously, they often do get it to look right, but in only one browser. Most likely, they've inadvertently discovered what the first invalid syntax was and unwittingly applied it across all the rest of their markup and CSS, so that one browser started consistently interpreting the bad syntax! The theme designer then becomes convinced that the other browser is awful and that designing these non-WYSIWYG, dynamic themes is a pain.

Avoid all that frustration! Even if it looks great in both browsers, run the code through the W3C's XHTML and CSS validators. If something turns up invalid, no matter how small or pedantic the validator's suggestion might be (and they do seem pedantic at times), incorporate the suggested fix into your markup now, before you continue working. This will keep any small syntax errors from compounding future bits of markup and code into big visual 'uglies' that are hard to track down and troubleshoot.

PHP Template Tags

The next issue you'll most commonly run into is mistakes and typos that are created by copying and pasting your WordPress template tags and other PHP code incorrectly. The most common result you'll get from invalid PHP syntax is a 'Fatal Error.'
Fortunately, PHP does a decent job of trying to let you know the file name and the line of code where the offending syntax lives (yet another reason why I highly recommend an HTML editor that lets you view line numbers in the Code view). If you get a 'Fatal Error' in your template, your best bet is to open the file that is listed and go to that line in your editor. Once there, search for missing <?php ?> tags. Your template tags should also be followed by parentheses and a semicolon, like ();. If the template tag has parameters passed into it, make sure each parameter is surrounded by single quote marks, that is, template_tag_name('parameter_name', 'next_parameter');.

CSS Quick Fixes

Last, your CSS file might get fairly big, fairly quickly. It's easy to forget that you already made a rule and/or just accidentally create another rule with the same name. It's all about cascading, so whatever comes last overwrites what came first.

Double rules: it's an easy mistake to make, but validating using the W3C's CSS validator will point this out right away. However, this is not the case for double properties within rules! The W3C's CSS validator will not point out double properties if both properties use correct syntax. This is one of the reasons why the !important hack returns valid. (We'll discuss this hack just a little further down in this article, under To Hack or Not to Hack.)

Perhaps you found a site that has a nice CSS style or effect you like, and so you copied those CSS rules into your theme's style.css sheet. Just as with XHTML markup or PHP code, it's easy to introduce errors by miscopying the bits of CSS syntax. A small syntax error in a property towards the bottom of a rule may seem OK at first, but cause problems with properties added to the rule later. This can also affect the entire rule, or even the rule after it. Also, if you're copying CSS, be aware that older sites might be using deprecated CSS properties, which might be technically OK if they're using an older HTML DOCTYPE, but won't be OK for the XHTML DOCTYPE you're using. Again, validating your markup and CSS as you're developing will alert you to syntax errors, deprecated properties, and duplicate rules, which could compound and cause issues in your stylesheet down the line.

Advanced Troubleshooting

Take some time to understand the XHTML hierarchy. You'll start running into validation errors and CSS styling issues if you wrap a 'normal' (also known as a 'block') element inside an 'inline'-only element, such as putting a header tag inside an anchor tag (<a href, <a name, and so on) or wrapping a div tag inside a span tag.

Avoid triggering quirks mode in IE! This, if nothing else, is one of the most important reasons for using the W3C HTML validator. There's no real way to tell if IE is running in quirks mode. It doesn't seem to output that information anywhere (that I've found). However, if any part of your page or CSS isn't validating, it's a good way to trigger quirks mode in IE. The first way to avoid quirks mode is to make sure your DOCTYPE is valid and correct. If IE doesn't recognize the DOCTYPE (or if you have huge conflicts, like an XHTML DOCTYPE but all-caps HTML 4.0 tags in your markup), IE will default into quirks mode, and from there on out, who knows what you'll get in IE.

My theme stopped centering in IE!

The most obvious thing that happens when IE goes into quirks mode is that IE will stop centering your layout in the window properly if your CSS is using the margin: 0 auto; technique.
If this happens, immediately fix all the validation errors in your page. Another big, obvious item to note is if your div layers with borders and padding are sized differently between browsers. If IE is running in quirks mode, it will incorrectly render the box model, which is quite noticeable between Firefox and IE if you're using borders and padding in your divs. Another thing to keep track of is to make sure you don't have anything that will generate any text or code above your DOCTYPE. Firefox will read your page until it hits a valid DOCTYPE and then proceed from there, but IE will just break and go into quirks mode.

Fixing CSS Across Browsers

If you've been following our debug -> validate method described in this article, then for all intents and purposes, your layout should look pretty spot-on between both browsers.

Box Model Issues

In the event that there is a visual discrepancy between Firefox and IE, in most cases it's a box model issue arising because you're running in quirks mode in IE. Generally, box model hacks apply to pre-IE 6 browsers (IE 5.x), and apply to IE6 if it's running in quirks mode. Again, running in quirks mode is preferably avoided, thus eliminating most of these issues. If your markup and CSS are validating (which means you shouldn't be triggering quirks mode in IE, though I've had people 'swear' to me their page validated yet quirks mode was being activated), you might rather 'live with it' than try to sleuth out what's causing quirks mode to activate.

Basically, IE 5.x, and IE6 in quirks mode, don't properly interpret the box model standard and thus 'squish' your borders and padding inside your box's width, instead of adding them to the width as the W3C standard requires. However, IE does properly add margins! This means that if you've got a div set to 50 pixels wide, with a 5-pixel border, 5 pixels of padding, and 10 pixels of margin, then in Firefox your div is actually going to be rendered 70 pixels wide (50, plus 5 of border and 5 of padding on each side), with 10 pixels of margin around it, taking up a total space of 90 pixels. In IE quirks mode, your box is kept at 50 pixels wide (meaning it's probably taller than your Firefox div, because the text inside has to wrap at 30 pixels), yet it does have 10 pixels of margin around it. You can quickly see how even a one-pixel border, some padding, and a margin can start to make a big difference in layout between IE and Firefox!
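To make those numbers concrete, here is a minimal sketch of the rule in question (the .sidebar class name is just an illustrative example):

    /* Standards mode (Firefox, and IE with a valid DOCTYPE):
       rendered box = 50 (content) + 2*5 (padding) + 2*5 (border) = 70px wide,
       plus 2*10 of margin = 90px of horizontal space occupied. */

    /* IE quirks mode:
       the same 50px is treated as the outer box, so the content area
       shrinks to 50 - 2*5 - 2*5 = 30px and the text wraps much sooner. */

    .sidebar {
        width: 50px;
        padding: 5px;
        border: 5px solid #333;
        margin: 10px;
    }

If you keep IE out of quirks mode with a valid DOCTYPE and validating markup, both browsers apply the same standards-mode arithmetic and the discrepancy disappears.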