
How-To Tutorials - Web Development

1802 Articles

Making Content Findable in Drupal 6

Packt
20 Oct 2009
5 min read
What you will learn

In this article, you will learn about:

  • Using Taxonomy to link descriptive terms to Node Content
  • Tag clouds
  • Path aliases

What you will do

In this article, you will:

  • Create a Taxonomy
  • Enable the use of tags with Node Content
  • Define a custom URL
  • Activate site searching
  • Perform a search

Understanding Taxonomy

One way to find content on a site is by using a search function, but this can be a hit-or-miss approach: searching for an article on 'canines' won't return an article about dogs unless it actually contains the word 'canines'. Navigation provides a way to move around the site, but unless your site has only a small amount of content, the navigation can only be general in nature. Too much navigation is annoying; there are far too many sites with two or three sets of top navigation, plus left and bottom navigation. It's just too much to take in and still feel relaxed. Site maps offer additional navigation assistance, but they're usually not fun to read, and are more like a Table of Contents, where you have to know what you're looking for. So, what's the answer? Tags!

A Tag is simply a word or a phrase that is used as a descriptive link to content. In Drupal, a collective set of terms, from which terms or tags are associated with content, is called a Vocabulary. One or more Vocabularies comprise a Taxonomy. This is a good place to begin, so let's create a Vocabulary.

Activity 1: Creating a Taxonomy Vocabulary

In this activity, we will add two terms to our Vocabulary. We shall also learn how to assign a Taxonomy to Node Content that has already been created.

We begin in the Content management area of the admin menu. There, you should find the Taxonomy option listed, as shown in the following screenshot. Click on this option.

Taxonomy isn't listed in my admin menu: The Taxonomy module is not enabled by default. Check on the Modules page (Admin | Site building | Modules) and make sure that the module is enabled. For the most part, modules can be thought of as options that can be added to your Drupal site, although some of them are considered essential. Some modules come pre-installed with Drupal; among them, some are automatically activated, and some need to be activated manually. There are many modules that are not included with Drupal, but are freely available from the Drupal web site.

The next page gives us a lengthy description of the use of a taxonomy. At the top of the page are two options, List and Add vocabulary. We'll choose the latter. On the Add vocabulary page, we need to provide a Vocabulary name. We can create several vocabularies, each for a different use. For example, with this site, we could have a vocabulary for 'Music' and another for 'Meditation'. For now, we'll just create one vocabulary, and name it Tags, as suggested below, in the Vocabulary name box. In the Description box, we'll type: This vocabulary contains Tag terms. In the Help text box, we'll type: Enter one or more descriptive terms separated by commas.

Next is the [Node] Content types section. This lists the types of Node Content that are currently defined. Each has a checkbox alongside it. Selecting the checkbox indicates that the associated Node Content type can have Tags from this vocabulary assigned to it. Ultimately, it means that if a site visitor searches using a Tag, then this type of Node Content might be offered as a match. We will be selecting all of the checkboxes. If a new Node Content type that will use tags is created later, edit the vocabulary and select its checkbox.
The Settings section defines how we will use this vocabulary. In this case, we want to use it with tags, so we will select the Tags checkbox. The following screenshot shows the completed page. We'll then click on the Save button.

At this point, we have a vocabulary, as shown in the screenshot, but it doesn't contain anything. We need to add something to it so that we can use it. Let's click on the add terms link. On the Add term page, we're going to add two terms, one at a time. First, we'll type healing music into the Term name box. We'll purposely make the terms lower case, as it will look better in the display that we'll be creating soon. We'll click on the Save button, and then repeat the procedure for another term named meditation.

The method we used for adding terms is acceptable when creating new terms that have not yet been applied to anything. If the term does apply to existing Node Content, then a better way to add it is by editing that content. We'll edit the Page we created, entitled Soul Reading. Now that the site has the Taxonomy module enabled, a new field named Tags appears below Title. We're going to type soul reading into it. This is an AJAX field: if we start typing a tag that already exists, it will offer to complete the term.

AJAX (Asynchronous JavaScript + XML) is a method of using existing technologies to retrieve data in the background. What it means to a web site visitor is that data can be retrieved and presented on the page being viewed, without having to reload the page.

Now, we can Save our Node Content and return to the vocabulary that we created earlier. Click on the List tab at the top of the page. Our terms are listed, as shown in the following screenshot.
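Although this article works entirely in the administrative interface, the same Taxonomy data is available in code. Below is a minimal sketch, not from the article, of how a Drupal 6 node template might print the tags attached to a node as links; taxonomy_node_get_terms() and l() are core Drupal 6 functions, and $node is assumed to be the loaded node object.

    <?php
    // List the taxonomy terms (tags) attached to this node, linking each
    // one to its taxonomy/term/<tid> listing page.
    $terms = taxonomy_node_get_terms($node);
    foreach ($terms as $term) {
      print l($term->name, 'taxonomy/term/' . $term->tid) . ' ';
    }
    ?>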

Deploying Your DotNetNuke Portal

Packt
21 Oct 2009
7 min read
Acquiring a Domain Name

One of the most exciting parts of starting a website is acquiring a domain name. When selecting the perfect name, there are a few things that you need to keep in mind:

  • Keep it brief: The more letters a user has to type in to get to your site, the more difficult it is going to be for them to remember it. The name you select will help to brand your site; if it is catchy, people will remember it more readily.
  • Have alternative names in mind: As time goes on, great domain names are becoming fewer and fewer. Make sure you have a few alternatives to choose from. The first domain name you had in mind may already be taken, so having a backup plan will help when you decide to purchase a name.
  • Consider buying additional top-level domain names: Say you've already bought www.DanielsDoughnuts.com. You might want to purchase www.DanielsDoughnuts.net as well to protect your name.

Once you have decided on the name you want for your domain, you will need to register it. There are dozens of different sites that allow you to register your domain name, as well as search to see if it is available. Some of the better-known domain-registration sites are Register.com and NetworkSolutions.com. Both of these have been around a long time and have good reputations. You can also look into some of the discount registrars like BulkRegister (http://www.BulkRegister.com) or Enom (http://www.enom.com). After deciding on your domain name and having it registered, you will need to find a place to physically host your portal. Most registration services will also give you the ability to host your site with them, but it is best to search for a provider that fits your site's needs.

Finding a Hosting Provider

When deciding on a provider to host your portal, you will need to consider a few things:

  • Cost: This is, of course, one of the most important things to look at when choosing a provider. There are usually a few plans to select from. The basic plan usually allows you a certain amount of disk space for a very small price, but has you share the server with numerous other websites. Most providers also offer dedicated (you get the server all to yourself) and semi-dedicated (you share with a few others) plans. It is usually best to start with the basic plan and move up if the traffic on your site requires it.
  • Windows servers: The provider you select needs to be running Windows Server 2000/2003 with IIS (Internet Information Services). Some hosts run alternatives to Microsoft, like Linux and the Apache web server.
  • .NET framework: The provider's servers need to have the .NET framework version 1.1 installed. Most hosts have installed the framework on their servers, but not all. Make sure this is available, because DotNetNuke needs it to run.
  • Database availability: You will need database server availability to run DotNetNuke, and Microsoft SQL Server is the preferred back-end. It is possible to run your site off Microsoft Access or MySQL (with a purchased provider), but I would not suggest it. Access does not hold up well in a multi-user platform and will slow down considerably when your traffic increases. MySQL, while able to handle multiple users, does not have the module support, since most module developers target MS SQL.
  • FTP access: You will need a way to post your DotNetNuke portal files to your site, and the easiest way is to use FTP. Make sure that your host provides this option.
  • E-mail server: A great deal of the functionality associated with the DotNetNuke portal relies on being able to send out e-mails to users. Make sure that you will have the availability of an e-mail server.
  • Folder rights: The ASPNET or NetworkService account (depending on the server) will need to have full permissions to the root and subfolders for your DotNetNuke application to run correctly. Make sure that your host either provides you with the ability to set this or is willing to set it up for you. We will discuss the exact steps later in this article.

The good news is that you will have plenty of hosting providers to choose from, and it should not break the bank. Try to find one that fits all of your needs. There are even some hosts (www.WebHost4life.com) that will install DotNetNuke for you free of charge. They host many DotNetNuke sites and are familiar with the needs of the portal.

Preparing Your Local Site

Once you have your domain name and a provider to host your portal, you will need to get your local site ready to be uploaded to your remote server. This is not difficult, but make sure you cover all of the following steps for a smooth transition.

Modify the compilation debug setting in the web.config file: You will need to modify your web.config file to match the configuration of the server to which you will be sending your files. The first item that needs to be changed is the debug configuration. This should be set to false. You should also rebuild your application in release mode before uploading. This will remove the debug tokens, perform optimizations in the code, and help the site to run faster:

    <!-- set debugmode to false for running application -->
    <compilation debug="false" />

Modify the data-provider information in the web.config file: You will need to change the information for connecting to the database so that it now points to the server on your host. There are three things to look out for in this section (changes shown below). First, if you are using MS SQL, make sure SqlDataProvider is set up as the default provider. Second, change the connection string to reflect the database server address, the database name (if not DotNetNuke), as well as the user ID and password for the database that you received from your provider. Third, if you will be using an existing database to run the DotNetNuke portal, add an objectQualifier. This will append whatever you place in the quotation marks to the beginning of all of the tables and procedures that are created for your database.

    <data defaultProvider="SqlDataProvider">
      <providers>
        <clear/>
        <add name = "SqlDataProvider"
             type = "DotNetNuke.Data.SqlDataProvider, DotNetNuke.SqlDataProvider"
             connectionString = "Server=MyServerIP;Database=DotNetNuke;uid=myID;pwd=myPWD;"
             providerPath = "~\Providers\DataProviders\SqlDataProvider"
             objectQualifier = "DE"
             databaseOwner = "dbo"
             upgradeConnectionString = "" />
      </providers>
    </data>

Modify any custom changes in the web.config file: Since we set up YetAnotherForum for use on our site, we will need to make the modifications necessary to ensure that the forums connect to the hosted database.
Change the <connstr> element to point to the database on the server:

    <yafnet>
      <dataprovider>yaf.MsSql,yaf</dataprovider>
      <connstr>
        user id=myID;password=myPwd;data source=myServerIP;initial catalog=DotNetNuke;timeout=90
      </connstr>
      <root>/DotNetNuke/DesktopModules/YetAnotherForumDotNet/</root>
      <language>english.xml</language>
      <theme>standard.xml</theme>
      <uploaddir>/DotNetNuke/DesktopModules/yetanotherforum.net/upload/</uploaddir>
      <!--logtomail>email=;server=;user=;pass=;</logtomail-->
    </yafnet>

Add your new domain name to your portal alias: Since DotNetNuke has the ability to run multiple portals, we need to tell it which domain name is associated with our current portal. To do this, we need to sign on as host (not admin) and navigate to Admin | Site Settings on the main menu. If signed on as host, you will see a Portal Aliases section at the bottom of the page. Click on the Add New HTTP Alias link:
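As a brief usage note (not from the article): the alias you enter here is just the domain name itself, without the protocol prefix. Using this article's example domain, the entries would look like the following, with one alias per domain you registered:

    www.DanielsDoughnuts.com
    www.DanielsDoughnuts.net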

Adding Newsletters to a Web Site Using Drupal 6

Packt
22 Oct 2009
4 min read
Creating newsletters

A newsletter is a great way of keeping customers up-to-date without them needing to visit your web site. Customers appreciate well-designed newsletters because they allow the customer to keep tabs on their favorite places without needing to check every web site on a regular basis.

Creating a newsletter

Good Eatin' Goal: Create a new newsletter on the Good Eatin' site, which will contain relevant news about the restaurant, and will be delivered quarterly to subscribers.

Additional modules needed: Simplenews (http://drupal.org/project/simplenews).

Basic steps

Newsletters are containers for individual issues. For example, you could have a newsletter called Seasonal Dining Guide, which would have four issues per year (Summer, Fall, Winter, and Spring). A customer subscribes to the newsletter, and each issue is sent to them as it becomes available.

  1. Begin by installing and activating the Simplenews module, as shown below. At this point, we only need to enable the Simplenews module; the Simplenews action module can be left disabled.
  2. Next, select Content management and then Newsletters, from the Administer menu. Drupal will display an administration area divided into the following sections: a) Sent issues, b) Drafts, c) Newsletters, d) Subscriptions.
  3. Click on the Newsletters tab, and Drupal will display a page similar to the following. As you can see, a default newsletter with the name of our site has been automatically created for us. We can either edit this default newsletter or click on the Add newsletter link to create a new newsletter.
  4. Let's click the Add newsletter option to create our seasonal newsletter. Drupal will display a standard form where we can enter the name, description, and relative importance (weight) of the newsletter.
  5. Click Save to save the newsletter. It will now appear in the list of available newsletters.

If you want to modify the Sender information for the newsletter, to use a name or email address other than your site's defaults, you can either expand the Sender information section when adding the newsletter, or click Edit newsletter and modify the Sender information, as shown in the following screenshot.

Allowing users to sign-up for the newsletter

Good Eatin' Goal: Demonstrate how registered and unregistered users can sign-up for a newsletter, and configure the registration process.

Additional modules needed: Simplenews (http://drupal.org/project/simplenews).

Basic steps

To allow customers to sign-up for the newsletter, we will begin by adding a block to the page.

  1. Open the Block Manager by selecting Site building and then Blocks, from the Administer menu.
  2. Add the block for the newsletter that you want to allow customers to subscribe to, as shown in the following screenshot.
  3. We will now need to give users permission to subscribe to newsletters by selecting User management and then Permissions, from the Administer menu. We will give all users permission to subscribe to newsletters and to view newsletter links, as shown below.

If the customer does not have permission to subscribe to newsletters, then the block will appear as shown in the following screenshot. However, if the customer has permission to subscribe to newsletters and is logged in to the site, the block will appear as shown in the following screenshot. If the customer has permission to subscribe, but is not logged in, the block will appear as follows.

To subscribe to the newsletter, the customer will simply click on the Subscribe button.
Once they have subscribed, the Subscribe button will change to Unsubscribe, so that the user can easily opt out of the newsletter. If the user does not have an active account with the site, they will need to confirm that they want to subscribe.
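For completeness, here is a hedged sketch of doing the same thing in code: the Drupal 6 version of Simplenews exposes an API function for creating subscriptions from a custom module. The function name and signature below come from the Simplenews module rather than this article, so verify them against your installed version; $tid is assumed to be the taxonomy term ID of the newsletter, and the address is illustrative.

    <?php
    // Subscribe an email address to a newsletter; passing TRUE sends a
    // confirmation message first instead of subscribing immediately.
    simplenews_subscribe_user('customer@example.com', $tid, TRUE);
    ?>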

Date and Calendar Module in Drupal 5: Part 1

Packt
20 Oct 2009
5 min read
Recipe 33: Understanding Date formats

Drupal dates are typically stored in one of two ways. Core Drupal dates, including Node: Created Time and Node: Updated Time, are stored as Unix timestamps. Contributed module date fields can be stored as either a timestamp or a format known as ISO. Neither style is particularly friendly to human readers, so both field types are usually formatted before users see them. This recipe offers a tour of the places in Drupal where dates can be formatted, and information on how to customize the formats.

What's that Lucky Day? The Unix timestamp 1234567890 fell on Friday the 13th, in February, 2009. This timestamp marks 1,234,567,890 seconds since January 1, 1970. The same date/time combination would be stored in a date field in ISO format as 2009-02-13T23:31:30+00:00. ISO is an abbreviation for the International Organization for Standardization.

Opening two browser windows side-by-side will help you understand date formatting. In the left window, open YOURSITE.com/admin/settings/date-time to view the settings page for date and time. In the right window, open the API page of code that defines these system date and time settings, at http://api.drupal.org/api/function/system_date_time_settings/5. Compare each item in the $datemedium array, for instance, with the associated Medium date format drop-down.

Below is a list of codes for many commonly used date and time formats. A more comprehensive list appears at http://us.php.net/date.

  • a – am/pm
  • D – Day, Mon through Sun
  • d – Date, 01 to 31 (with leading zeroes)
  • F – Month, January through December (mnemonic: F = Full name)
  • g – Hours, 1 through 12
  • H – Hours, 00 through 23
  • i – Minutes, 00 to 59
  • j – Date, 1 to 31 (no leading zeroes)
  • l – Day, Sunday through Saturday
  • m – Month, 01 through 12
  • M – Month, Jan through Dec
  • s – Seconds, 00 through 59 (with leading zeroes)
  • S – Day-of-month suffix: st, nd, rd, or th. Works well with j
  • Y – Year, for example: 1999 or 2011

Explore the Drupal places where these codes may be used. The first four locations below are available in the Drupal administrative interface. The last three involve editing files on the server; these edits are completely optional.

  • CCK field setup – Custom input formats can be defined at admin/content/types/story/add_field (after the field widget is specified) and at admin/content/types/<CONTENT_TYPE>/fields/field_<FIELDNAME>, near the top and again near the bottom of the page.
  • Formatting fields in Views – At admin/build/views/<VIEW_NAME>/edit, CCK Date fields are set via the Options drop-down in the Fields fieldset; custom date formats for core fields, such as Node: Created Time, are set via handler and options form elements.
  • Default date and time settings – At admin/settings/date-time, set the default time zone, the Short, Medium, and Long date formats, and the first day of the week.
  • Post settings – This may be one of the harder-to-find settings in Drupal; it enables the post information (for example: Submitted by admin on Sun, 10/12/2008 - 4:55pm) to be turned off for specified content types. The setting is found at admin/build/themes/settings; the click trail is Administer | Site Building | Themes | Configure. (Click on the Configure tab at the top of the page; if you click on the Configure link in the Operations column, you will still need to click the Configure tab at the top to reach the global settings.)
  • Variable overrides in settings.php – You may override variables at the bottom of the /sites/default/settings.php file. Remove the appropriate pound signs to enable the $conf array, and add a setting as shown below. Note that this is a quick way to modify the post information format, which draws from the medium date variable:

    $conf = array(
    #  'site_name' => 'My Drupal site',
    #  'theme_default' => 'minnelli',
    #  'anonymous' => 'Visitor',
       'date_format_medium' => 'l F d, Y',
    );

  • *.tpl.php files – Examples:

    node-story.tpl.php:
    <?php print format_date($node->created, 'custom', 'F Y'); ?>

    comment.tpl.php:
    <?php echo t('On ') . format_date($comment->timestamp, 'custom', 'F jS, Y'); ?>
    <?php echo theme('username', $comment) . t(' says:'); ?>

  • template.php – Redefine $variables['submitted']. Example from the blommor01 theme:

    $vars['submitted'] = t('!user - <abbr class="created" title="!microdate">!date</abbr>', array(
      '!user' => theme('username', $vars['node']),
      '!date' => format_date($vars['node']->created),
      '!microdate' => format_date($vars['node']->created, 'custom', "Y-m-d\TH:i:sO"),
    ));

Recipe notes

Note that when using the PHP date codes, additional characters may be added, including commas, spaces, and letters. In the template.php example, a backslash is used to show that the letter 'T' should be printed literally, rather than treated as a format code. Below are more examples of added characters:

    F j, Y, g:i a   // August 27, 2010, 5:16 pm
    m.d.y           // 08.27.10

You may occasionally find that an online date converter comes in handy:

  • http://www.timestampconverterer.com/ (this URL includes the word "converter" followed by another "er")
  • http://www.coryking.com/date-converter.php
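As a worked example (not part of the recipe), here is the "Lucky Day" timestamp from earlier run through a custom format built from the codes listed above, assuming a site timezone of UTC:

    <?php
    // l = Friday, F jS = February 13th, Y = 2009, g:ia = 11:31pm
    print format_date(1234567890, 'custom', 'l, F jS, Y - g:ia');
    // Prints: Friday, February 13th, 2009 - 11:31pm
    ?>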

Magento: Designs and Themes

Packt
19 May 2012
13 min read
(For more resources on e-Commerce, see here.)

The Magento theme structure

The same holds true for themes. You can specify the look and feel of your stores at the Global, Website, or Store levels (themes can be applied for individual store views relating to a store) by assigning a specific theme. In Magento, a group of related themes is referred to as a design package. Design packages contain files that control various functional elements that are common among the themes within the package. By default, Magento Community installs two design packages:

  • Base package: A special package that contains all the default elements for a Magento installation (we will discuss this in more detail in a moment)
  • Default package: This contains the layout elements of the default store (look and feel)

Themes within a design package contain the various elements that determine the look and feel of the site: layout files, templates, CSS, images, and JavaScript. Each design package must have at least one default theme, but can contain other theme variants. You can include any number of theme variants within a design package and use them, for example, for seasonal purposes (that is, holidays, back-to-school, and so on). The following image shows the relationship between design packages and themes.

A design package and theme can be specified at the Global, Website, or Store levels. Most Magento users will use the same design package for a website and all descendant stores. Usually, related stores within a website business share very similar functional elements, as well as similar style features. This is not mandatory; you are free to specify a completely different design package and theme for each store view within your website hierarchy.

The theme structure

Magento divides themes into two groups of files: templating and skin. Templating files contain the HTML, PHTML, and PHP code that determines the functional aspects of the pages in your Magento website. Skin files are made up of the CSS, image, and JavaScript files that give your site its outward design. Ingeniously, Magento further separates these areas by putting them into different directories of your installation:

  • Templating files are stored in the app/design directory, where the extra security of this section protects the functional parts of your site design
  • Skin files are stored within the skin directory (at the root level of the installation), and can be granted a higher permission level, as these are the files that are delivered to a visitor's browser for rendering the page

Templating hierarchy

Frontend theme template files (the files used to produce your store's pages) are stored within three subdirectories:

  • layout: This contains the XML files that define the various areas of a page. These files also contain meta and encoding information.
  • template: This stores the PHTML files (HTML files that contain PHP code and are processed by the PHP server engine) used for constructing the visual structure of the page.
  • locale: Add files within this directory to provide additional language translations for site elements, such as labels and messages.

Magento has a distinct path for storing the templating files used for your website: app/design/frontend/[Design Package]/[Theme]/.
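To make the layout of these subdirectories concrete, here is a small sketch of the templating hierarchy for a hypothetical design package and theme; the names acme and default are illustrative, not part of a stock installation:

    app/
      design/
        frontend/
          acme/
            default/
              layout/     (XML page definitions)
              template/   (PHTML files)
              locale/     (additional language translations)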
Skin hierarchy

The skin files for a given design package and theme are subdivided into the following:

  • css: This stores the CSS stylesheets and, in some cases, related image files that are called by the CSS files (this is not an acceptable convention, but I have seen some designers do this)
  • images: This contains the JPG, PNG, and GIF files used in the display of your site
  • js: This contains the JavaScript files that are specific to a theme (JavaScript files used for core functionality are kept in the js directory at the root level)

The path for the frontend skin files is: skin/frontend/[Design Package]/[Theme]/.

The concept of theme fallback

A very important and brilliant aspect of Magento is what is called the Magento theme fallback model. Basically, this concept means that when building a page, Magento first looks to the assigned theme for a store. If the theme is missing any necessary templating or skin files, Magento then looks to the required default theme within the assigned design package. If the file is not found there, Magento finally looks into the default theme of the Base design package. For this reason, the Base design package is never to be altered or removed; it is the failsafe for your site. The following flowchart outlines the process by which Magento finds the necessary files for fulfilling a page rendering request.

This model also gives designers some tremendous assistance. When a new theme is created, it only has to contain those elements that are different from what is provided by the Base package. For example, if all parts of a desired site design are similar to the Base theme except for the graphic appearance of the site, a new theme can be created simply by adding new CSS and image files to the new theme (stored within the skin directory). Any new CSS files will need to be included in the local.xml file for your theme (we will discuss the local.xml file later in this article). If the design requires different layout structures, only the changed layout and template files need to be created; everything that remains the same need not be duplicated.

While previous versions of Magento were built with fallback mechanisms, only in the current versions has this become a true and complete fallback. In the earlier versions, the fallback was to the default theme within a package, not to the Base design package. Therefore, each default theme within a package had to contain all the files of the Base package. If Magento base files were updated in subsequent software versions, these changes had to be redistributed manually to each additional design package within a Magento installation. With Magento CE 1.4 and above, upgrades to the Base package automatically enhance all design packages. If you are careful not to alter the Base design package, then future upgrades to the core functionality of Magento will not break your installation. You will have access to the new improvements based on your custom design package or theme, making your installation virtually upgrade-proof. For the same reason, never install a custom theme inside the Base design package.
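As a minimal sketch of the local.xml mechanism mentioned above, a theme could pull in an extra stylesheet from its skin directory as shown below. The handle structure and the addCss action are standard Magento layout XML; the file name custom.css is an illustrative assumption, not from the article.

    <?xml version="1.0"?>
    <layout version="0.1.0">
        <default>
            <!-- adds skin/frontend/[Design Package]/[Theme]/css/custom.css
                 to the <head> of every page -->
            <reference name="head">
                <action method="addCss">
                    <stylesheet>css/custom.css</stylesheet>
                </action>
            </reference>
        </default>
    </layout>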
Default installation design packages and themes

In a new, clean Magento Community installation, you are provided with the following design packages and themes. Depending on your needs, you could add additional custom design packages, or custom themes within the default design package:

  • If you're going to install a group of related themes, you should probably create a new design package, containing a default theme as your fallback theme
  • On the other hand, if you're using only one or two themes based on the features of the default design package, you can install the themes within the default design package hierarchy

I like to make sure that whatever I customize can be undone, if necessary. It's difficult for me to make changes to the core, installed files; I prefer to work on duplicate copies, preserving the originals in case I need to revert back. After re-installing Magento for the umpteenth time because I had altered too many core files, I learned the hard way!

As Magento Community installs a basic variety of good theme variants from which to start, the first thing you should do before adding or altering theme components is to duplicate the default design package files, renaming the duplicate to an appropriate name, such as a description of your installation (for example, Acme or Sports). Any changes you make within this new design package will not alter the originally installed components, thereby allowing you to revert any or all of your themes to the originals. Your new theme hierarchy might now look like this:

When creating new packages, you also need to create new folders in the /skin directory to match your directory hierarchy in the /app/design directory. Likewise, if you decide to use one of the installed default themes as the basis for designing a new custom theme, duplicate and rename the theme to preserve the original as your fallback.

The new Blank theme

A fairly recent addition to the default installed themes is Blank. If your customization of your Magento stores is primarily one of colors and graphics, this is not a bad theme to use as a starting point. As the name implies, it has a pretty stark layout, as shown in the following screenshot. However, it does give you all the basic structures and components. Using images and CSS styles, you can go a long way toward creating a good-looking, functional website, as shown in the next screenshot for www.aviationlogs.com.

When duplicating any design package or theme, don't forget that each of them is defined by directories under /app/design/frontend/ and /skin/frontend/.

Installing third-party themes

In most cases, Magento beginners will explore the hundreds of available Magento themes created by third-party designers. There are many free ones available, but most are sold by dedicated designers.

Shopping for themes

One of the great good/bad aspects of Magento is the third-party themes. The architecture of the Magento theme model gives knowledgeable theme designers tremendous abilities to construct themes that are virtually upgrade-proof, while possessing powerful enhancements. Unfortunately, not all designers have either upgraded older themes properly or created new themes fully honoring the fallback model. If the older fallback model is still used for current Magento versions, upgrades to the Base package could adversely affect your theme. Therefore, as you review third-party themes, take time to investigate how the designers construct their themes. Most provide some type of site demo.
As you learn more about using themes, you'll find it easier to analyze third-party themes. Apart from a few free themes offered through the Magento website, most of them require that you install the necessary files manually, by FTP or SFTP to your server. Every third-party theme I have ever used has included some instructions on how to install the files to your server. However, allow me to offer the following helpful guidelines:

  • When using FTP/SFTP to upload theme files, use the merge function so that only additional files are added to each directory, instead of replacing entire directories. If you're not sure whether your FTP client provides merge capabilities, or not sure how to configure it for merge, you will need to open each directory in the theme and upload the individual files to the corresponding directories on your server.
  • If you have set your CSS and JavaScript files to merge, under System | Configuration | Developer, you should turn merging off while installing and modifying your theme.
  • After uploading themes or any component files (for example, templates, CSS, or images), clear the Magento caches under System | Cache Management in your backend.
  • Disable your Magento cache while you install and configure themes. While not critical, it will allow you to see changes immediately, instead of having to constantly clear the Magento cache. You can disable the cache under System | Cache Management in the backend.
  • If you wish to make any changes to a theme's individual file, make a duplicate of the original file before making your changes. That way, if something goes awry, you can always re-install the duplicated original.
  • If you have followed the earlier advice to duplicate the Default design package before customizing, instructions to install files within /app/design/frontend/default/ and /skin/frontend/default/ should be interpreted as /app/design/frontend/[your design package name]/ and /skin/frontend/[your design package name]/, respectively. As most new Magento users don't duplicate the Default design package, it's common for theme designers to instruct users to install new themes and files within the Default design package. (We know better now, don't we?)

Creating variants

Let's assume that we have created a new design package called outdoor_package. Within this design package, we duplicate the Blank theme and call it outdoor_theme. Our new design package file hierarchy, in both /app/design/ and /skin/frontend/, might resemble the following:

    app/
      design/
        frontend/
          default/
            blank/
            modern/
            iphone/
          outdoor_package/
            outdoor_theme/
    skin/
      frontend/
        default/
          blank/
          blue/
          french/
          german/
          modern/
          iphone/
        outdoor_package/
          outdoor_theme/

However, let's also take one more customization step here. Since Magento separates the template structure from the skin structure—the layout from the design, so to speak—we could create variations of a theme that are controlled simply by CSS and images, by creating more than one skin. We might want to have our English language store in a blue color scheme, but our French language store in a green color scheme. We could take the outdoor_theme skin directory and duplicate it, renaming the copies for the new colors:

    app/
      design/
        frontend/
          default/
            blank/
            modern/
            iphone/
          outdoor_package/
            outdoor_theme/
    skin/
      frontend/
        default/
          blank/
          blue/
          french/
          german/
          modern/
          iphone/
        outdoor_package/
          outdoor_blue/
          outdoor_green/

Before we continue, let's go over something that is especially relevant to what we just created.
For our outdoor theme, we created two skin variants: blue and green. However, what if the difference between the two is only one or two files? If we make changes to other files that would affect both color schemes, but which are otherwise the same for both, this would create more work to keep both color variations in sync, right?

Remember, with the Magento fallback method, if your site calls on a file, Magento first looks in the assigned theme, then in the default theme within the same design package, and finally within the Base design package. Therefore, in this example, you could use the default skin, under /skin/frontend/outdoor_package/default/, to contain all files common to both blue and green. Only include those files that will forever remain different within their respective skin directories.

Assigning themes

As mentioned earlier, you can assign design packages and themes at any level of the GWS (Global, Website, Store) hierarchy. As with any configuration, the choice depends on the level at which you wish to assign control. Global configurations affect the entire Magento installation. Website-level choices set the default for all subordinate store views, which can also have their own theme specifics, if desired.

Let's walk through the process of assigning a custom design package and themes. For the sake of this exercise, let's continue with our Outdoor theme, as described earlier. Refer to the following screenshot:

We're now going to assign our Outdoor theme to an Outdoor website and its store views. Our first task is to assign the design package and theme to the website as the default for all subordinate store views:

  1. Go to System | Configuration | General | Design in your Magento backend.
  2. In the Current Configuration Scope drop-down menu, choose Outdoor Products.
  3. As shown in the following screenshot, enter the name of your design package, template, layout, and skin. You will have to uncheck the boxes labeled Use Default beside each field you wish to use.
  4. Click on the Save Config button.

The reason you enter default in the fields, as shown in the previous screenshot, is to provide the fallback protection described earlier. Magento needs to know where to look for any files that may be missing from your theme.

Customizing Look and Feel of UAG

Packt
22 Feb 2012
8 min read
(For more resources on Microsoft Forefront UAG, see here.)

Honey, I wouldn't change a thing!

We'll save the flattery for our spouses, and start by examining some key areas of interest: what you might want, and are able, to change on a UAG implementation. Typically, the end user interface is comprised of the following:

  • The Endpoint Components Installation page
  • The Endpoint Detection page
  • The Login page
  • The Portal Frame
  • The Portal page
  • The Credentials Management page
  • The Error pages

There is also a Web Monitor, but it is typically only used by the administrator, so we won't delve into that. The UAG management console itself and the SSL-VPN/SSTP client-component user interface are also visual, but they are compiled code, so there's not much that can be done there. The elements of these pages that you might want to adjust are the graphics, layout, and text strings.

Altering a piece of HTML or editing a GIF in Photoshop to make it look different may sound trivial, but there's actually more to it than that, and the supportability of your changes should definitely be questioned at every turn. You wouldn't want your changes to disappear upon the next update to UAG, would you? Nor would you like the page to suddenly become all crooked because someone decided that he wants the RDP icon to have an animation from the Smurfs.

The UI pages

Anyone familiar with UAG will know of its folder structure and the many files that make up the code and logic that is applied throughout. For those less acquainted, however, we'll start with the two most important folders you need to know—InternalSite and PortalHomePage. InternalSite contains pages that are displayed to the user as part of the login and logout process, as well as various error pages. PortalHomePage contains the files that are a part of the portal itself, shown to the user after logging in.

The portal layout comes in three different flavors, depending on the client that is accessing it. The most common one is the Regular portal, which happens to be the most polished version of the three, shown to all computers. The second is the Premium portal, which is a scaled-down version designed for phones that have advanced graphic capabilities, such as Windows Mobile phones. The third is the Limited portal, which is a text-based version of the portal, shown to phones that have limited or no graphic capabilities, such as the Nokia S60 and N95 handsets.

Regardless of the type, the majority of devices connecting to UAG will present a user-agent string in their request, and it is this string that determines the type of layout that UAG will use to render its pages and content. UAG takes advantage of this by allowing the administrator to choose between the various formats that are made available, on a per-application basis. The results are pretty cool, and being able to cater for most known platforms and form factors provides users with the best possible experience. The following screenshot illustrates an application that is enabled for the Premium portal, and how the portal and login pages would look on both a premium device and on a limited device:

Customizing the login and admin pages

The login and admin pages themselves are simple ASP pages, which contain a lot of code, as well as some text and visual elements.
The main files in InternalSite that may be of interest to you are the following:

  • Login.asp
  • LogoffMsg.asp
  • InstallAndDetect.asp
  • Validate.asp
  • PostValidate.asp
  • InternalError.asp

In addition, UAG keeps other versions of some of the preceding files for ADFS, OTP, and OWA under similarly named folders. This means that if you have enabled the OWA theme on your portal, and you wish to customize it, you should work with the files under the /InternalSite/OWA folder. Of course, there are many other files that partake in the flow of each process, but the fact is there is little need to touch either the above files or the others, as most of the appearance is controlled by a CSS template and text strings stored elsewhere. Certain requirements may even involve making significant changes to the layout of the pages, and leave you with no other option but to edit the core ASP files themselves, but be careful, as this introduces risk and is not technically supported. It's likely that these pages will change with future updates to UAG, and that may cause a conflict with the older code in your files. The result of mixing old and new code is unpredictable, to say the least.

The general appearance of the various admin pages is controlled by the file /InternalSite/CSS/template.css. This file contains about 80 different style elements, including some of the 50 or so images displayed in the portal pages, such as the gradient background, the footer, and the command buttons, to name a few. The images themselves are stored in /InternalSite/Images. Both of these folders have an OWA subfolder, which contains the CSS and images for the OWA theme.

When editing the CSS, most of the style names will make sense, but if you are not sure, why not copy the relevant ASP file and the CSS to your computer, so you can take a closer look with a visual editor to better understand the structure? If you do this, be careful not to make any changes that may alter the code in a damaging way, as this is easily done and can waste a lot of valuable time. A very useful piece of advice for checking tweaked code is to consider the use of Internet Explorer's integrated developer tools. In case you haven't noticed, a simple press of F12 on the keyboard gives you everything you need to get debugging. IE 9 and higher versions even pack a nifty trace module that allows you to perform low-level inspection of client-server interaction, without the need for additional third-party tools.

We don't intend to devote this book to CSS, but one useful CSS declaration to be familiar with is display: none;, which can be used to hide any element it's applied to. For example, if you add this to the .button element, it will hide the Login button completely.

A common task is altering the part of the page where you see the Application and Network Access Portal text displayed. The text string itself can be edited using the master language files, which we will discuss shortly. The background of that part of the page, however, is built with the files headertopl.gif, headertopm.gif, and headertopr.gif. The original page design is classic HTML—it places headertopl on the left, headertopr on the right, and repeats headertopm in between to fill the space. If you need to change it, you could simply design a similar layout and put the replacement image files in /InternalSite/Images/CustomUpdate.
Alternatively, you might choose to customize the logo only, by copying the /InternalSite/Samples/logo.inc file into the /InternalSite/Inc/CustomUpdate folder, as this is where the HTML code that pertains to that area is located.

Another thing worth noting is that if you create a custom CSS file, it takes effect immediately, and there's no need to do an activation. Well, at least for the purposes of testing, anyway. The same applies to image file changes, but as a general rule you should always remember to activate when finished, as any new configurations or files will need to be pushed into the TMG storage. Arrays are no exception to this rule, and you should know that custom files are only propagated to array members during an activation, so in this scenario, you do need to activate after each change. During development, you may copy the custom files to each member node manually to save time between activations, or better still, simply stop NLB on all array members so that all client traffic is directed to the one you are working on.

An equally important point is that when you test changes to the code, the browser's cache or IIS itself may still retain files from a previous test or configuration. So, if changes you've made do not appear the first time around, start by clearing your browser's cache, and even reset IIS, before assuming you messed up the code.

Customizing the portal

As we said earlier, the pages that make up the portal and its various flavors are under the PortalHomePage folder. These are all ASP.NET files (.ASPX), and the scope for making any alterations here is very limited. However, the appearance is mostly controlled via the file /InternalSite/PortalHomePage/Standard.Master, which contains many visual parameters that you can change. For example, the DIV with ID content has a section pertaining to the side bar application list. You might customize the midTopSideBarCell width setting to make the bar wider or thinner. You can even hide it completely by adding style="display: none;" to the contentLeftSideBarCell table cell, as sketched below. As always, make sure you copy the master file to CustomUpdate, and do not touch the original file; as with the CSS files, any changes you make take effect immediately.

Additional things that you can do with the portal include removing or adding buttons on the portal toolbar. For example, you might add a button that points to a help page describing your applications, or to a procedure for contacting your internal technical support in case of a problem with the site.
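Here is a hedged sketch of that sidebar tweak, applied to a copy of Standard.Master placed in its CustomUpdate folder. The surrounding markup is abridged, and addressing contentLeftSideBarCell via an id attribute is an assumption here, so check the actual file before editing:

    <!-- in a copy of Standard.Master (never the original): hide the
         side bar application list entirely (assumed, abridged markup) -->
    <td id="contentLeftSideBarCell" style="display: none;">
        ...
    </td>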

So, what is Django?

Packt
26 Mar 2013
7 min read
(For more resources related to this topic, see here.)

I would like to introduce you to Django by using a definition straight from its official website:

Django is a high-level Python Web framework that encourages rapid development and clean, pragmatic design.

The first part of this definition makes it clear that Django is a software framework written in Python and designed to support the development of web applications by offering a series of solutions to common problems and an abstraction of common design patterns for web development. The second part already gives you a clear idea of the basic concepts on which Django is built, by highlighting its capabilities for rapid development without compromising the quality and maintainability of the code.

To get the job done in a fast and clean way, the Django stack is made up of a series of layers that have nearly no dependencies between them. This introduces great benefits, as it will drive you to code with almost no knowledge sharing between components, making future changes easy to apply and avoiding side effects on other components. All this identifies Django as a loosely coupled framework, and its structure is a consequence of the approach just described. It can be defined as a Model-Template-View (MTV) framework, a variation of the well-known architectural pattern called Model-View-Controller (MVC). The MTV structure can be explained in the following way:

  • Model: The application data
  • View: Which data is presented
  • Template: How the data is presented

As you can understand from the architectural structure of the framework, one of the most basic and important Django components is the Object-Relational Mapper (ORM), which lets you define your data models entirely in Python and offers a complete dynamic API to access your database.

The template engine also plays an important role in making the framework so great and easy to use—it is built to be designer-friendly. This means the templates are just HTML, and the template language doesn't add variable assignments or advanced logic, offering only "programming-esque" functionality such as looping. Another innovative concept in the Django template engine is the introduction of template inheritance. The possibility to extend a base template discourages redundancy and helps you to keep the information in one place.

The key to the success of a web framework is also to make it possible to easily plug third-party modules into it. Django uses this concept and comes—like Python—with "batteries included". It is built with a system for plugging in applications in an easy way, and the framework itself already includes a series of useful applications that you are free to use or not. One of the included applications that makes Django successful is the automatic admin interface, a complete, user-friendly, and production-ready web admin interface for your projects. It's easy to customize and extend, and is a great added value that helps you to speed up most common web projects.

In modern web application development, systems are often built for a global audience, and web frameworks have to take into account the need to provide support for internationalization and localization. Django has full support for the translation of text; the formatting of dates, times, and numbers; and time zones, and all this makes it possible to create multilingual web projects in a clear and easy way.
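Returning to the MTV pattern described above, here is a minimal sketch, not from the article, of how the three pieces line up in practice; the app, model, and template names are all illustrative:

    # models.py -- the Model: the application data
    from django.db import models

    class Article(models.Model):
        title = models.CharField(max_length=100)

    # views.py -- the View: which data is presented
    from django.shortcuts import render
    from myapp.models import Article

    def article_list(request):
        # hand every Article to the template, which decides how to show it
        return render(request, 'myapp/article_list.html',
                      {'articles': Article.objects.all()})

    # myapp/article_list.html -- the Template: how the data is presented
    # (plain HTML plus simple tags such as looping):
    #
    #   {% for article in articles %}
    #     <h2>{{ article.title }}</h2>
    #   {% endfor %}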
On top of all these great features, Django ships with a complete cache framework, which is a must-have in a web framework if we want to ensure great performance under high load. This component makes caching an easy task, offering support for different types of cache backends, from in-memory caching to the most famous, memcached.

There are several other reasons that make Django a great framework, and most of them can only really be understood by diving into the framework, so do not hesitate, and let's jump into Django.

Installation

Installing Django on your system is very easy. As it is just Python, you will only need a small effort to get it up and running on your machine. We will do it in two easy steps:

Step 1 – What do I need?

The only thing you need on your system to get Django running is obviously Python. At the time of writing this book, the latest version available is 1.5c1 (release candidate); it works on all Python versions from 2.6.5 to 2.7, and it also features experimental support for Version 3.2 and Version 3.3.

Get the right Python package for your system at http://www.python.org. If you are running Linux or Mac OS X, Python is probably already installed in your operating system. If you are using Windows, you will need to add the path of the Python installation folder (C:\Python27) to the environment variables.

You can verify that Python is installed by typing python in your shell. The expected result should look similar to the following output:

    Python 2.7.2 (default, Jun 20 2012, 16:23:33)
    [GCC 4.2.1 Compatible Apple Clang 4.0 (tags/Apple/clang-418.0.60)] on darwin
    Type "help", "copyright", "credits" or "license" for more information.
    >>>

Step 2 – Get and install Django

Now we will see two methods to install Django: through a Python package manager tool called pip, and the manual way. Feel free to use the one that you prefer. At the time of writing this book, Django Version 1.5 is in release candidate status; if this is still the case, jump to the manual installation step and download the 1.5 release candidate package in place of the last stable one.

Installing Django with pip

  1. Install pip; the easiest way is to get the installer from http://www.pip-installer.org.
  2. If you are using a Unix OS, execute the following command:

    $ sudo pip install Django

    If you are using Windows, you will need to start a shell with administrator privileges and run the following command:

    $ pip install Django

Installing Django manually

  1. Download the latest stable release from the official Django website, https://www.djangoproject.com/download/.
  2. Uncompress the downloaded file using the tool that you prefer.
  3. Change to the directory just created (cd Django-X.Y).
  4. If you are using a Unix OS, execute the following command:

    $ sudo python setup.py install

    If you are using Windows, you will need to start a shell with administrator privileges and run the command:

    $ python setup.py install

Verifying the Django installation

To verify that Django is installed on your system, open a shell and launch a Python console by typing python. In the Python console, try to import Django:

    >>> import django
    >>> django.get_version()
    '1.5c1'

And that's it! Now that Django is installed on your system, we can start to explore all its potential.

Summary

In this article we learned about what Django actually is, what you can do with it, and why it's so great. We also learned how to download and install Django with minimum fuss, and then set it up so that you can use it as soon as possible.
Resources for Article:

Further resources on this subject:

  • Creating an Administration Interface in Django [Article]
  • Creating an Administration Interface with Django 1.0 [Article]
  • Views, URLs, and Generic Views in Django 1.0 [Article]

Show/hide rows and Highlighting cells

Packt
09 Apr 2013
7 min read
(For more resources related to this topic, see here.)

Show/hide rows

Click a link to trigger hiding or displaying of table rows.

Getting ready

Once again, start off with an HTML table. This one is not quite as simple a table as in previous recipes. You'll need to create a few <td> tags that span the entire table, as well as provide some specific classes to certain elements.

How to do it...

Again, give the table an id attribute. Each of the rows that represent a department, specifically the rows that span the entire table, should have a class attribute value of dept:

    <table border="1" id="employeeTable">
      <thead>
        <tr>
          <th>Last Name</th>
          <th>First Name</th>
          <th>Phone</th>
        </tr>
      </thead>
      <tbody>
        <tr>
          <td colspan="3" class="dept">
          </td>
        </tr>

Each of the department names should be a link, where the <a> elements have a class of rowToggler:

    <a href="#" class="rowToggler">Accounting</a>

Each table row that contains employee data should have a class attribute value that corresponds to its department. Note that class names cannot contain spaces, so in the case of the Information Technology department, the class name should be InformationTechnology, without a space. The issue of the space will be addressed later.

    <tr class="Accounting">
      <td>Frang</td>
      <td>Corey</td>
      <td>555-1111</td>
    </tr>

The following script makes use of the class names to create a table whose rows can be easily hidden or shown by clicking a link:

    <script type="text/javascript">
    $( document ).ready( function() {
      $( "a.rowToggler" ).click( function( e ) {
        e.preventDefault();
        var dept = $( this ).text().replace( /\s/g, "" );
        $( "tr[class=" + dept + "]" ).toggle();
      });
    });
    </script>

With the jQuery implemented, departments are "collapsed", and will only reveal their employees when the link is clicked.

How it works...

The jQuery will "listen" for a click event on any <a> element that has a class of rowToggler. In this case, capture a reference to the event that triggered the action by passing e to the click handler function:

    $( "a.rowToggler" ).click( function( e )

In this case, e is simply a variable name. It can be any valid variable name, but e is a standard convention. The important thing is that jQuery has a reference to the event. Why? Because in this case, the event is that an <a> was clicked. The browser's default behavior is to follow a link, and this default behavior needs to be prevented. As luck would have it, jQuery has a built-in function called preventDefault(). The first line of the function makes use of this by way of the following:

    e.preventDefault();

Now that you've safely prevented the browser from leaving or reloading the page, set a variable with a value that corresponds to the name of the department that was just clicked:

    var dept = $( this ).text().replace( /\s/g, "" );

Most of the preceding line should look familiar. $( this ) is a reference to the element that was clicked, and text() is something you've already used: you're getting the text of the <a> tag that was clicked, which will be the name of the department. But there's one small issue. If the department name contains a space, such as "Information Technology", then this space needs to be removed:

    .replace( /\s/g, "" )

replace() is a standard JavaScript function that uses a regular expression to replace spaces with an empty string. This turns "Information Technology" into "InformationTechnology", which is a valid class name. The final step is to either show or hide any table row with a class that matches the department name that was clicked.
Highlighting cells

Use built-in jQuery traversal methods and selectors to parse the contents of each cell in a table and apply a particular style (for example, a yellow background or a red border) to all cells that meet a specified set of criteria.

Getting ready

Borrowing some data from Tiobe (http://www.tiobe.com/index.php/content/paperinfo/tpci/index.html), create a table of the top five programming languages for 2012. To make it "pop" a bit more, each <td> in the Ratings column that's over 10 percent will be highlighted in yellow, and each <td> in the Delta column that's less than zero will be highlighted in red. Each <td> in the Ratings column should have a class of ratings, and each <td> in the Delta column should have a class of delta. Additionally, set up two CSS classes for the highlights as follows:

.highlight { background-color: #FFFF00; } /* yellow */
.highlight-negative { background-color: #FF0000; } /* red */

Initially, the table should look as follows:

How to do it...

Once again, give the table an id attribute (but by now, you knew that), as shown in the following code snippet:

<table border="1" id="tiobeTable">
  <thead>
    <tr>
      <th>Position<br />Dec 2012</th>
      <th>Position<br />Dec 2011</th>
      <th>Programming Language</th>
      <th>Ratings<br />Dec 2012</th>
      <th>Delta<br />Dec 2011</th>
    </tr>
  </thead>

Apply the appropriate class names to the last two columns in each table row within the <tbody>, as shown in the following code snippet:

<tbody>
  <tr>
    <td>1</td>
    <td>2</td>
    <td>C</td>
    <td class="ratings">18.696%</td>
    <td class="delta">+1.64%</td>
  </tr>

With the table in place and properly marked up with the appropriate class names, write the script to apply the highlights as follows:

<script type="text/javascript">
$( document ).ready( function() {
  $( "#tiobeTable tbody tr td.ratings" ).each( function( index ) {
    if ( parseFloat( $( this ).text() ) > 10 ) {
      $( this ).addClass( "highlight" );
    }
  });
  $( "#tiobeTable tbody tr td.delta" ).each( function( index ) {
    if ( parseFloat( $( this ).text() ) < 0 ) {
      $( this ).addClass( "highlight-negative" );
    }
  });
});
</script>

Now, you will see a much more interesting table with multiple visual cues:

How it works...

Select the <td> elements within the tbody tag's table rows that have a class of ratings. For each iteration of the loop, test whether or not the value (text) of the <td> is greater than 10. Because the values in <td> contain non-numeric characters (in this case, % signs), we use JavaScript's parseFloat() to convert the text to actual numbers:

parseFloat( $( this ).text() )

Much of that should be a review. $( this ) is a reference to the element in question, text() retrieves the text from the element, and parseFloat() ensures that the value is numeric so that it can be accurately compared to the value 10.

If the condition is met, use addClass() to apply the highlight class to the <td>. Do the same thing for the Delta column. The only difference is in checking to see whether the text is less than zero; if it is, apply the class highlight-negative. The end result makes it much easier to identify specific data within the table.
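If you find yourself applying several threshold-based highlights like these, the two loops can be factored into one small helper. This is only a sketch; highlightCells is a made-up name, and the selectors and thresholds simply mirror the recipe above:

<script type="text/javascript">
// Generic helper: add `className` to every cell matching `selector`
// whose numeric value passes the `test` callback.
function highlightCells( selector, test, className ) {
  $( selector ).each( function() {
    var value = parseFloat( $( this ).text() );
    // skip cells whose text doesn't parse to a number at all
    if ( !isNaN( value ) && test( value ) ) {
      $( this ).addClass( className );
    }
  });
}

$( document ).ready( function() {
  highlightCells( "#tiobeTable td.ratings", function( v ) { return v > 10; }, "highlight" );
  highlightCells( "#tiobeTable td.delta", function( v ) { return v < 0; }, "highlight-negative" );
});
</script>

Adding a third rule (say, a bold style for ratings over 15 percent) then becomes a single extra call rather than another copy of the loop.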
Summary

In this article we covered two recipes: Show/hide rows and Highlighting cells.

Resources for Article:

Further resources on this subject:
Tips and Tricks for Working with jQuery and WordPress [Article]
Using jQuery Script for Creating Dynamic Table of Contents [Article]
Getting Started with jQuery [Article]


Customization

Packt
29 Aug 2013
18 min read
(For more resources related to this topic, see here.)

Now that you've got a working multisite installation, we can start to add some customizations. Customizations can come in a few different forms. You're probably aware of the customizations that can be made via WordPress plugins and custom WordPress themes. Another way we can customize a multisite installation is by creating a landing page that displays information about each blog in the multisite network, as well as information about the author of each individual blog.

I wrote a blog post shortly after WordPress 3.0 came out detailing how to set this landing page up. At the time, I was working for a local newspaper and we were setting up a blog network for some of our reporters to blog about politics (being in Iowa, politics are a pretty big deal here, especially around Caucus time). You can find the post at http://www.longren.org/how-to-wordpress-3-0-multi-site-blog-directory/ if you'd like to read it. There's also a blog-directory.zip file attached to the post that you can download and use as a starting point.

Before we get into creating the landing page, let's get the really simple stuff out of the way and briefly go over how themes and plugins are managed in WordPress multisite installations. We'll start with themes. Themes can be activated network-wide, which is really nice if you have a theme that you want every site in your blog network to use. You can also activate a theme for an individual blog instead of activating it for the entire network. This is helpful if one or two individual blogs need a totally unique theme that you don't want to be available to the other blogs.

Theme management

You can install themes on a multisite installation the same way you would with a regular WordPress install: just upload the theme folder to your wp-content/themes folder. Installing a theme is only part of the process, though; for individual blogs to use a theme, you'll need to activate it for the entire blog network or for specific blogs.

To activate a theme for an entire network, click on Themes and then click on Installed Themes in the Network Admin dashboard. Check the themes that you want to enable, select Network Enable in the Bulk Actions drop-down menu, and then click on the Apply button. That's all there is to activating a theme (or multiple themes) for an entire multisite network. The individual blog owners can then apply the theme just as you would in a regular, non-multisite WordPress installation.

To activate a theme for just one specific blog and not the entire network, locate the target blog using the Sites menu option in the Network Admin dashboard. After you've found it, put your mouse cursor over the blog URL or domain. You should see the action menu appear immediately under the blog URL or domain. The action menu includes options such as Edit, Dashboard, and Deactivate. Click on the Edit action menu item and then navigate to the Themes tab. To activate an individual theme, just click on Enable below the theme that you want to activate. Or, if you want to activate multiple themes for the blog, check all the themes you want through the checkboxes on the left-hand side of each theme in the list, select Enable in the Bulk Actions drop-down menu, and then click on the Apply button. An important thing to keep in mind is that themes that have been activated for the entire network won't be shown here. Now the blog administrator can apply the theme to their blog just as they normally would.
Plugin management

To install a plugin for network use, upload the plugin folder to wp-content/plugins/ as you normally would. Unlike themes, plugins cannot be activated on a per-site basis. As network administrator, you can add a plugin to the Plugins page for all sites, but you can't make a plugin available to one specific site; it's all or nothing.

You'll also want to make sure that you've enabled the Plugins page for the sites that need it. You can enable the Plugins page by visiting the Network Admin dashboard and then navigating to the Network Settings page. At the bottom of that page you should see a Menu Settings section where you can check a box next to Plugins to enable the Plugins page. Make sure to click on the Save Changes button at the bottom, or nothing will change. You can see the Menu Settings section in the following screenshot; that's where you'll want to enable the Plugins page.

Enabling the Plugins page

After you've ensured that the Plugins page is enabled, individual site administrators will be able to enable or disable plugins as they normally would. To enable a plugin for the entire network, go to the Network Admin dashboard, mouse over the Plugins menu item, and then click on Installed Plugins. This will look pretty familiar to you; it looks pretty much like the Installed Plugins page does on a typical WordPress single-site installation. The following screenshot shows the installed Plugins page:

Enable plugins for the entire network

You'll notice below each plugin there's some text that reads Network Activate. I bet you can guess what clicking that will do. Yes, clicking on the Network Activate link will activate that plugin for the entire network. That's all there is to the basic plugin setup in WordPress multisite.

There's another plugin feature that is often overlooked in WordPress multisite, and that's must-use plugins. These are plugins that are required for every blog or site on the network. Must-use plugins can be installed in the wp-content/mu-plugins/ folder, but they must be single-file plugins; files within folders won't be read. You can't deactivate or activate must-use plugins: if they exist in the mu-plugins folder, they're used. They're entirely hidden from the Plugins pages, so individual site administrators won't even see them or know they're there. I don't think must-use plugins are commonly used, but it's nice information to have just in case. Some plugins, especially domain mapping plugins, need to be installed in mu-plugins and need to be activated before the normal plugins.

Third-party plugins and plugins for plugin management

We should also discuss some of the plugins that are available for making the management of plugins and themes on WordPress multisite installations a bit easier. One of the most popular is called Multisite Plugin Manager, developed by Aaron Edwards of UglyRobot.com. The Multisite Plugin Manager plugin was previously known as WPMU Plugin Manager. The plugin can be obtained from the WordPress Plugin Directory at http://wordpress.org/plugins/multisite-plugin-manager/. Here's a quick rundown of some of the plugin's features:

Select which plugins specific sites have access to
Set certain plugins to auto-activate for new blogs or sites
Activate/deactivate a plugin on all network sites
Assign special plugin access permissions to specific network sites

Another plugin that you may find useful is called WordPress MU Domain Mapping. It allows you to easily map any blog or site to an external domain.
You can find this plugin in the WordPress Plugin Directory at http://wordpress.org/plugins/wordpress-mu-domain-mapping/.

There's one other plugin I want to mention; the only drawback is that it's not free. It's called WP Multisite Replicator, and you can probably guess what it does. This plugin allows you to set up a "template" blog or site and then replicate that site when adding new sites or blogs. The idea is that you create a blog or site that has all the features that other sites in your network will need. Then, you can easily replicate that site when creating a new site or blog. It will copy widgets, themes, and plugin settings to the new site or blog, which makes deploying new, identical sites extremely easy. It's not an expensive plugin, costing about $36 at the moment of writing, which is well worth it in my opinion if you're going to be creating lots of sites that share the same basic feature set. WP Multisite Replicator can be found at http://wpebooks.com/replicator/.

Creating a blog directory / landing page

Now that we've got the basic theme and plugin stuff taken care of, I think it's time to move on to creating a blog directory or a landing page, whichever you prefer to call it. From this point on I'll be referring to it as a blog directory. You can see a basic version of what we're going to make in the following screenshot. The users on my example multisite installation, at http://multisite.longren.org/, are Kayla and Sydney, my wife and daughter.

Blog directory example

As I mentioned earlier in this article, I wrote a post about creating this blog directory back when WordPress 3.0 was first released in 2010. I'll be using that post as the basis for most of what we'll do to create the blog directory, with some things changed around so this will integrate more nicely into whatever theme you're using on the main network site.

The first thing we need to do is to create a basic WordPress page template that we can apply to a newly created WordPress page. This template will contain the HTML structure for the blog directory and will dictate where the blog names will be shown and where the recent posts and blog description will be displayed. There's no reason that you need to stick with the following blog directory template specifically. You can take the code and add or remove various elements, such as the recent posts if you don't want to show them.

You'll want to implement this blog directory template as a child theme in WordPress. To do that, just make a new folder in wp-content/themes/. I typically name my child theme folders after their parent themes, so the child theme folder I made was wp-content/themes/twentythirteen-tyler/.
Once you've got the child theme folder created, make a new file called style.css and make sure it has the following code at the top:

/*
Theme Name: Twenty Thirteen Child Theme
Theme URI: http://yourdomain.com
Description: Child theme for the Twenty Thirteen theme
Author: Your name here
Author URI: http://example.com/about/
Template: twentythirteen
Version: 0.1.0
*/

/* ================ */
/* = The 1Kb Grid = */ /* 12 columns, 60 pixels each, with 20 pixel gutter */
/* ================ */
.grid_1 { width:60px; }
.grid_2 { width:140px; }
.grid_3 { width:220px; }
.grid_4 { width:300px; }
.grid_5 { width:380px; }
.grid_6 { width:460px; }
.grid_7 { width:540px; }
.grid_8 { width:620px; }
.grid_9 { width:700px; }
.grid_10 { width:780px; }
.grid_11 { width:860px; }
.grid_12 { width:940px; }

.column {
  margin: 0 10px;
  overflow: hidden;
  float: left;
  display: inline;
}
.row {
  width: 960px;
  margin: 0 auto;
  overflow: hidden;
}
.row .row {
  margin: 0 -10px;
  width: auto;
  display: inline-block;
}
.author_bio {
  border: 1px solid #e7e7e7;
  margin-top: 10px;
  padding-top: 10px;
  background: #ffffff url('images/sign.png') no-repeat right bottom;
  z-index: -99999;
}
small { font-size: 12px; }
.post_count {
  text-align: center;
  font-size: 10px;
  font-weight: bold;
  line-height: 15px;
  text-transform: uppercase;
  float: right;
  margin-top: -65px;
  margin-right: 20px;
}
.post_count a {
  color: #000;
}
#content a {
  text-decoration: none;
  -webkit-transition: text-shadow .1s linear;
  outline: none;
}
#content a:hover {
  color: #2DADDA;
  text-shadow: 0 0 6px #278EB3;
}

The preceding code adds the styling to your child theme and also tells WordPress the name of your child theme. You can set a custom theme name if you want by changing the Theme Name line to whatever you like. The only fields in that big comment block that are required are Theme Name and Template; Template should be set to the parent theme's folder name.

Now create another file in your child theme folder and name it blog-directory.php.
The remaining blocks of code need to go into that blog-directory.php file:

<?php
/**
* Template Name: Blog Directory
*
* A custom page template with a sidebar.
* Selectable from a dropdown menu on the add/edit page screen.
*
* @package WordPress
* @subpackage Twenty Thirteen
*/
?>
<?php get_header(); ?>
<div id="container" class="onecolumn">
<div id="content" role="main">
<?php the_post(); ?>
<div id="post-<?php the_ID(); ?>" <?php post_class(); ?>>
<?php if ( is_front_page() ) { ?>
<h2 class="entry-title"><?php the_title(); ?></h2>
<?php } else { ?>
<h1 class="entry-title"><?php the_title(); ?></h1>
<?php } ?>
<div class="entry-content">
<!-- start blog directory -->
<?php
// Get the authors from the database ordered randomly
global $wpdb;
$query = "SELECT ID, user_nicename from $wpdb->users WHERE ID != '1' ORDER BY 1 LIMIT 50";
$author_ids = $wpdb->get_results($query);
// Loop through each author
foreach($author_ids as $author) {
    // Get user data
    $curauth = get_userdata($author->ID);
    // Get link to author page
    $user_link = get_author_posts_url($curauth->ID);
    // Get blog details for the authors primary blog ID
    $blog_details = get_blog_details($curauth->primary_blog);
    $postText = "posts";
    if ($blog_details->post_count == "1") {
        $postText = "post";
    }
    $updatedOn = strftime("%m/%d/%Y at %l:%M %p", strtotime($blog_details->last_updated));
    if ($blog_details->post_count == "") {
        $blog_details->post_count = "0";
    }
    $posts = $wpdb->get_col( "SELECT ID FROM wp_".$curauth->primary_blog."_posts WHERE post_status='publish' AND post_type='post' AND post_author='$author->ID' ORDER BY ID DESC LIMIT 5");
    $postHTML = "";
    $i = 0;
    foreach($posts as $p) {
        $postdetail = get_blog_post($curauth->primary_blog, $p);
        if ($i == 0) {
            $updatedOn = strftime("%m/%d/%Y at %l:%M %p", strtotime($postdetail->post_date));
        }
        $postHTML .= "&#149; <a href=\"$postdetail->guid\">$postdetail->post_title</a><br />";
        $i++;
    }
?>

The preceding code sets up the template and queries the WordPress database for authors. In WordPress multisite, users who have the Author permission type have a blog on the network.
There's also code for grabbing posts from each of the network sites so that the recent posts from them can be displayed:

<div class="author_bio">
<div class="row">
<div class="column grid_2">
<a href="<?php echo $blog_details->siteurl; ?>"><?php echo get_avatar($curauth->user_email, '96', 'http://www.gravatar.com/avatar/ad516503a11cd5ca435acc9bb6523536'); ?></a>
</div>
<div class="column grid_6">
<a href="<?php echo $blog_details->siteurl; ?>" title="<?php echo $curauth->display_name; ?> - <?=$blog_details->blogname?>"><?php //echo $curauth->display_name; ?> <?=$curauth->display_name;?></a><br />
<small><strong>Updated <?=$updatedOn?></strong></small><br />
<?php echo $curauth->description; ?>
</div>
<div class="column grid_3">
<h3>Recent Posts</h3>
<?=$postHTML?>
</div>
</div>
<span class="post_count"><a href="<?php echo $blog_details->siteurl; ?>" title="<?php echo $curauth->display_name; ?>"><?=$blog_details->post_count?><br /><?=$postText?></a></span>
</div>
<?php } ?>
<!-- end blog directory -->
<?php wp_link_pages( array( 'before' => '<div class="page-link">' . __( 'Pages:', 'twentythirteen' ), 'after' => '</div>' ) ); ?>
<?php edit_post_link( __( 'Edit', 'twentythirteen' ), '<span class="edit-link">', '</span>' ); ?>
</div><!-- .entry-content -->
</div><!-- #post-<?php the_ID(); ?> -->
<?php comments_template( '', true ); ?>
</div><!-- #content -->
</div><!-- #container -->
<?php //get_sidebar(); ?>
<?php get_footer(); ?>

Once you've got your blog-directory.php template file created, we can actually get started by setting up the page to serve as our blog directory. You'll need to set the root site's theme to your child theme; do it just as you would on a non-multisite WordPress installation.

Before we go further, let's create a couple of network sites so we have something to see on our blog directory. Go to the Network Admin dashboard, mouse over the Sites menu option in the left-hand side menu, and then click on Add New. If you're using a directory network type, as I am, the value you enter for the Site Address field will be the path to the directory that site sits in. So, if you enter tyler as the Site Address value, the site can be reached at http://multisite.longren.org/tyler/. The settings that I used to set up multisite.longren.org/tyler/ can be seen in the following screenshot. You'll probably want to add a couple of sites just to get a good idea of what your blog directory page will look like.

Example individual site setup

Now we can set up the actual blog directory page. On the main dashboard (that is, /wp-admin/index.php), mouse over the Pages menu item on the left-hand side of the page and then click on Add New to create a new page. I usually name this page Home, as I use the blog directory as the first page that visitors see when visiting the site. From there, visitors can choose which blog they want to visit and are also shown a list of the most recent posts from each blog. There's no need to enter any content on the page, unless you want to. The important part is selecting the Blog Directory template. Before you publish your new Home / blog directory page, make sure that you select Blog Directory as the Template value in the Page Attributes section. An example Home / blog directory page can be seen in the following screenshot:

Example Home / blog directory page setup

Once you've got your page looking like the example shown in the previous screenshot, you can go ahead and publish that page. The Update button in the previous screenshot will say Publish if you've not yet published the page.
Next, you'll want to set the newly created Home / blog directory page as the front page for the site. To do this, mouse over the Settings menu option on the left-hand side of the page and then click on Reading. For the Front page displays value, check A static page (select below). Previously, Your latest posts was checked. Then, in the Front Page drop-down menu, select the Home page that we just created and click on the Save Changes button at the bottom of the page. I usually don't set anything for the Posts page drop-down menu because I never post to the "parent" site. If you do intend to make posts on the parent site, I'd suggest that you create a new blank page titled Posts and then select that page as your Posts page. The reading settings I use at multisite.longren.org are shown in the following screenshot:

Reading settings setup

After you've saved your reading settings, open up your parent site in your browser and you should see something similar to the earlier blog directory example screenshot. Again, there's no need for you to keep the exact setup that I've used in the example blog-directory.php file. You can give it any style/design that you want and rearrange the various pieces on the page as you prefer. You should probably have a decent working knowledge of HTML and CSS to accomplish this, however.

You should have a basic blog directory at this point. If you have any experience with PHP, HTML, and CSS, you can probably extend this basic code and do a whole lot more with it. The number of plugins available for WordPress is astounding, and they are generally of very good quality; I think Automattic has done great things for WordPress in general. No other CMS can claim to have anything like the number of plugins that WordPress does.

Summary

You should be able to effectively manage themes and plugins in a multisite installation now. If you set up the code, you've got a directory showcasing network member content and, more importantly, you now know how to set up and customize a WordPress child theme.

Resources for Article:

Further resources on this subject:
Customization using ADF Meta Data Services [Article]
Overview of Microsoft Dynamics CRM 2011 [Article]
Customizing an Avatar in Flash Multiplayer Virtual Worlds [Article]


So, what is Markdown?

Packt
02 Sep 2013
3 min read
(For more resources related to this topic, see here.)

Markdown is a lightweight markup language that simplifies the workflow of web writers. It was created in 2004 by John Gruber with contributions and feedback from Aaron Swartz. Markdown was described by John Gruber as:

"A text-to-HTML conversion tool for web writers. Markdown allows you to write using an easy-to-read, easy-to-write plain text format, then convert it to structurally valid XHTML (or HTML)."

Markdown is two different things:

A simple syntax to create documents in plain text
A software tool written in Perl that converts the plain text formatting to HTML

Markdown's formatting syntax was designed with simplicity and readability as design goals. We add rich formatting to plain text without feeling that we are writing in a markup language.

The main features of Markdown

Markdown is:

Easy to use: Markdown has an extremely simple syntax that you can learn quickly
Fast: Writing is much faster than with HTML; we can dramatically reduce the time we spend crafting HTML tags
Clean: We can clearly read and write documents that are always translated into HTML without mistakes or errors
Flexible: It is suitable for many things, such as writing on the Internet, e-mails, and creating presentations
Portable: Documents are just plain text; we can edit Markdown with any basic text editor in any operating system
Made for writers: Writers can focus on distraction-free writing

Here, we can see a quick comparison of the same document in HTML and in Markdown. This is the final result that we achieve in both cases:

The following code is written in HTML:

<h1>Markdown</h1>
<p>This is a <strong>simple</strong> example of Markdown.</p>
<h2>Features:</h2>
<ul>
<li>Simple</li>
<li>Fast</li>
<li>Portable</li>
</ul>
<p>Check the <a href="http://daringfireball.net/projects/markdown/">official website</a>.</p>

The following code is an equivalent document written in Markdown:

# Markdown

This is a **simple** example of Markdown.

## Features:

- Simple
- Fast
- Portable

Check the [official website].

[official website]: http://daringfireball.net/projects/markdown/

Summary

In this article, we learned the basics of Markdown and got to know its features. We also saw how convenient Markdown is, thus proving the fact that it's made for writers.

Resources for Article:

Further resources on this subject:
Generating Reports in Notebooks in RStudio [Article]
Database, Active Record, and Model Tricks [Article]
Formatting and Enhancing Your Moodle Materials: Part 1 [Article]
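As a closing illustration of the "text-to-HTML conversion tool" idea, here is a deliberately tiny JavaScript sketch of the concept. It is in no way the real Markdown converter (which is written in Perl and handles the full syntax); it covers only # and ## headings, **bold**, and paragraphs, and the function name miniMarkdown is invented for this example:

// Toy Markdown-to-HTML converter: headings, bold, and paragraphs only.
function miniMarkdown(src) {
  return src
    .split(/\n{2,}/)                      // blocks are separated by blank lines
    .map(function (block) {
      block = block.trim()
        .replace(/\*\*(.+?)\*\*/g, "<strong>$1</strong>");
      if (block.indexOf("## ") === 0) return "<h2>" + block.slice(3) + "</h2>";
      if (block.indexOf("# ") === 0) return "<h1>" + block.slice(2) + "</h1>";
      return "<p>" + block + "</p>";
    })
    .join("\n");
}

console.log(miniMarkdown("# Markdown\n\nThis is a **simple** example."));
// prints: <h1>Markdown</h1> followed by <p>This is a <strong>simple</strong> example.</p>

Even this toy version shows why the format works for writers: the plain-text source stays readable while the structural HTML is generated mechanically.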

Android Application Testing: Adding Functionality to the UI

Packt
27 Jun 2011
10 min read
Android Application Testing Guide: build intensively tested and bug-free Android applications.

The user interface is in place. Now we start adding some basic functionality. This functionality will include the code to handle the actual temperature conversion.

Temperature conversion

From the list of requirements from the previous article we can obtain this statement: when one temperature is entered in one field, the other one is automatically updated with the conversion.

Following our plan, we must implement this as a test to verify that the correct functionality is there. Our test would look something like this:

@UiThreadTest
public final void testFahrenheitToCelsiusConversion() {
  mCelsius.clear();
  mFahrenheit.clear();
  final double f = 32.5;
  mFahrenheit.requestFocus();
  mFahrenheit.setNumber(f);
  mCelsius.requestFocus();
  final double expectedC = TemperatureConverter.fahrenheitToCelsius(f);
  final double actualC = mCelsius.getNumber();
  final double delta = Math.abs(expectedC - actualC);
  final String msg = "" + f + "F -> " + expectedC + "C but was " + actualC + "C (delta " + delta + ")";
  assertTrue(msg, delta < 0.005);
}

Firstly, as we already know, to interact with the UI and change its values we should run the test on the UI thread, and thus it is annotated with @UiThreadTest.

Secondly, we are using a specialized class to replace EditText, providing some convenience methods like clear() or setNumber(). This will improve our application design.

Next, we invoke a converter, named TemperatureConverter, a utility class providing the different methods to convert between different temperature units and using different types for the temperature values.

Finally, as we will be truncating the results to provide them in a suitable format for the user interface, we should compare against a delta to assert the value of the conversion.

Creating the test as it is will force us to follow the planned path. Our first objective is to add the needed code to get the test to compile and then to satisfy the test's needs.

The EditNumber class

In our main project, not the test one, we should create the class EditNumber extending EditText, as we need to extend its functionality. We use Eclipse's help to create this class using File | New | Class or its shortcut in the toolbar. This screenshot shows the window that appears after using this shortcut:

The following table describes the most important fields and their meaning in the previous screen:

Source folder: The source folder for the newly-created class. In this case the default location is fine.
Package: The package where the new class is created. In this case the default package com.example.aatg.tc is fine too.
Name: The name of the class. In this case we use EditNumber.
Modifiers: Modifiers for the class. In this particular case we are creating a public class.
Superclass: The superclass for the newly-created type. We are creating a custom View and extending the behavior of EditText, so this is precisely the class we select for the supertype. Remember to use Browse... to find the correct package.
Which method stubs would you like to create?: These are the method stubs we want Eclipse to create for us. Selecting Constructors from superclass and Inherited abstract methods would be of great help. As we are creating a custom View, we should provide the constructors that are used in different situations, for example when the custom View is used inside an XML layout.
Do you want to add comments?:
Some comments are added automatically when this option is selected. You can configure Eclipse to personalize these comments.

Once the class is created, we need to change the type of the fields first in our test:

public class TemperatureConverterActivityTests extends ActivityInstrumentationTestCase2<TemperatureConverterActivity> {
  private TemperatureConverterActivity mActivity;
  private EditNumber mCelsius;
  private EditNumber mFahrenheit;
  private TextView mCelsiusLabel;
  private TextView mFahrenheitLabel;
  ...

Then change any cast that is present in the tests. Eclipse will help you do that. If everything goes well, there are still two problems we need to fix before being able to compile the test:

We still don't have the methods clear() and setNumber() in EditNumber
We don't have the TemperatureConverter utility class

To create the methods, we use Eclipse's helpful actions. Let's choose Create method clear() in type EditNumber. Do the same for setNumber() and getNumber().

Finally, we must create the TemperatureConverter class. Be sure to create it in the main project and not in the test project. Having done this, in our test select Create method fahrenheitToCelsius in type TemperatureConverter. This fixes our last problem and leads us to a test that we can now compile and run.

Surprisingly, or not, when we run the tests, they will fail with an exception:

09-06 13:22:36.927: INFO/TestRunner(348): java.lang.ClassCastException: android.widget.EditText
09-06 13:22:36.927: INFO/TestRunner(348): at com.example.aatg.tc.test.TemperatureConverterActivityTests.setUp(TemperatureConverterActivityTests.java:41)
09-06 13:22:36.927: INFO/TestRunner(348): at junit.framework.TestCase.runBare(TestCase.java:125)

That is because we updated all of our Java files to include our newly-created EditNumber class but forgot to change the XML, and this could only be detected at runtime. Let's proceed to update our UI definition:

<com.example.aatg.tc.EditNumber
  android:layout_height="wrap_content"
  android:id="@+id/celsius"
  android:layout_width="match_parent"
  android:layout_margin="@dimen/margin"
  android:gravity="right|center_vertical"
  android:saveEnabled="true" />

That is, we replace the original EditText by com.example.aatg.tc.EditNumber, which is a View extending the original EditText.

Now we run the tests again and we discover that all tests pass. But wait a minute, we haven't implemented any conversion or any handling of values in the new EditNumber class, and all tests passed with no problem. Yes, they passed because we don't have enough restrictions in our system and the ones in place simply cancel themselves out.

Before going further, let's analyze what just happened. Our test invoked the mFahrenheit.setNumber(f) method to set the temperature entered in the Fahrenheit field, but setNumber() is not implemented; it is an empty method as generated by Eclipse and does nothing at all. So the field remains empty.

Next, the value for expectedC (the expected temperature in Celsius) is calculated by invoking TemperatureConverter.fahrenheitToCelsius(f), but this is also an empty method as generated by Eclipse. In this case, because Eclipse knows about the return type, it returns a constant 0. So expectedC becomes 0.

Then the actual value for the conversion is obtained from the UI, in this case by invoking getNumber() from EditNumber. But once again this method was automatically generated by Eclipse and, to satisfy the restriction imposed by its signature, it must return a value, which Eclipse fills with 0.
The delta value is again 0, as calculated by Math.abs(expectedC - actualC). And finally our assertion assertTrue(msg, delta < 0.005) is true because delta equals 0, which satisfies the condition, and the test passes.

So, is our methodology flawed, as it cannot detect a simple situation like this? No, not at all. The problem here is that we don't have enough restrictions, and the ones in place are satisfied by the default values Eclipse uses to complete auto-generated methods. One alternative could be to throw exceptions from all of the auto-generated methods, something like RuntimeException("not yet implemented"), to detect their use when not implemented. But we will be adding enough restrictions to our system to easily trap this condition.

TemperatureConverter unit tests

It seems, from our previous experience, that the default conversion implemented by Eclipse always returns 0, so we need something more robust; otherwise the conversion would only return a valid result when the parameter takes the value 32F.

TemperatureConverter is a utility class not related to the Android infrastructure, so a standard unit test will be enough to test it. We create our tests using Eclipse's File | New | JUnit Test Case, filling in some appropriate values, and selecting the method to generate a test as shown in the next screenshot.

Firstly, we create the unit test by extending junit.framework.TestCase and selecting com.example.aatg.tc.TemperatureConverter as the class under test:

Then, by pressing the Next > button, we can obtain the list of methods we may want to test:

We have implemented only one method in TemperatureConverter, so it's the only one appearing in the list. Other classes implementing more methods will display all their options here. It's good to note that even if the test method is auto-generated by Eclipse, it won't pass. It will fail with the message Not yet implemented to remind us that something is missing. Let's start by changing this:

/**
 * Test method for {@link com.example.aatg.tc.TemperatureConverter#fahrenheitToCelsius(double)}.
 */
public final void testFahrenheitToCelsius() {
  for (double c: conversionTableDouble.keySet()) {
    final double f = conversionTableDouble.get(c);
    final double ca = TemperatureConverter.fahrenheitToCelsius(f);
    final double delta = Math.abs(ca - c);
    final String msg = "" + f + "F -> " + c + "C but is " + ca + " (delta " + delta + ")";
    assertTrue(msg, delta < 0.0001);
  }
}

Creating a conversion table with values for different temperature conversions that we know from other sources is a good way to drive this test:

private static final HashMap<Double, Double> conversionTableDouble = new HashMap<Double, Double>();
static {
  // initialize (c, f) pairs
  conversionTableDouble.put(0.0, 32.0);
  conversionTableDouble.put(100.0, 212.0);
  conversionTableDouble.put(-1.0, 30.20);
  conversionTableDouble.put(-100.0, -148.0);
  conversionTableDouble.put(32.0, 89.60);
  conversionTableDouble.put(-40.0, -40.0);
  conversionTableDouble.put(-273.0, -459.40);
}

We may just run this test to verify that it fails, giving us this trace:

junit.framework.AssertionFailedError: -40.0F -> -40.0C but is 0.0 (delta 40.0)
at com.example.aatg.tc.test.TemperatureConverterTests.testFahrenheitToCelsius(TemperatureConverterTests.java:62)
at java.lang.reflect.Method.invokeNative(Native Method)
at android.test.AndroidTestRunner.runTest(AndroidTestRunner.java:169)
at android.test.AndroidTestRunner.runTest(AndroidTestRunner.
java:154)
at android.test.InstrumentationTestRunner.onStart(InstrumentationTestRunner.java:520)
at android.app.Instrumentation$InstrumentationThread.run(Instrumentation.java:1447)

Well, this was something we were expecting, as our conversion always returns 0. Implementing our conversion, we discover that we need an ABSOLUTE_ZERO_F constant:

public class TemperatureConverter {
  public static final double ABSOLUTE_ZERO_C = -273.15d;
  public static final double ABSOLUTE_ZERO_F = -459.67d;
  private static final String ERROR_MESSAGE_BELOW_ZERO_FMT = "Invalid temperature: %.2f%c below absolute zero";

  public static double fahrenheitToCelsius(double f) {
    if (f < ABSOLUTE_ZERO_F) {
      throw new InvalidTemperatureException(String.format(ERROR_MESSAGE_BELOW_ZERO_FMT, f, 'F'));
    }
    return ((f - 32) / 1.8d);
  }
}

Absolute zero is the theoretical temperature at which entropy would reach its minimum value. To reach this absolute zero state, according to the laws of thermodynamics, the system would have to be isolated from the rest of the universe; thus it is an unreachable state. However, by international agreement, absolute zero is defined as 0K on the Kelvin scale, which corresponds to -273.15°C on the Celsius scale and -459.67°F on the Fahrenheit scale.

We are creating a custom exception, InvalidTemperatureException, to indicate a failure to provide a valid temperature to the conversion method. This exception is created simply by extending RuntimeException:

public class InvalidTemperatureException extends RuntimeException {
  public InvalidTemperatureException(String msg) {
    super(msg);
  }
}

Running the tests again, we now discover that the testFahrenheitToCelsiusConversion test fails while testFahrenheitToCelsius succeeds. This tells us that conversions are now correctly handled by the converter class, but there are still some problems with the UI handling of this conversion. A closer look at the failure trace reveals that something is still returning 0 when it shouldn't. This reminds us that we are still lacking a proper EditNumber implementation. Before proceeding to implement the mentioned methods, let's create the corresponding tests to verify that what we are implementing is correct.


Chef Infrastructure

Packt
05 Sep 2013
10 min read
(For more resources related to this topic, see here.)

First, let's talk about the terminology used in the Chef universe. A cookbook is a collection of recipes, codifying the actual resources that should be installed and configured on your node, along with the files and configuration templates needed. Once you've written your cookbooks, you need a way to deploy them to the nodes you want to provision. Chef offers multiple ways for this task. The most widely used way is a central Chef Server. You can either run your own or sign up for Opscode's Hosted Chef.

The Chef Server is the central registry where each node needs to get registered. The Chef Server distributes the cookbooks to the nodes based on their configuration settings. Knife is Chef's command-line tool, used to interact with the Chef Server. You use it for uploading cookbooks and managing other aspects of Chef. On your nodes, you need to install Chef Client, the part that retrieves the cookbooks from the Chef Server and executes them on the node.

In this article, we'll see the basic infrastructure components of your Chef setup at work and learn how to use the basic tools. Let's get started by having a look at how to use Git as a version control system for your cookbooks.

Using version control

Do you manually back up every file before you change it? And do you invent creative filename extensions like _me and _you when you try to collaborate on a file? If you answered yes to any of the preceding questions, it's time to rethink your process. A version control system (VCS) helps you stay sane when dealing with important files and collaborating on them.

Using version control is a fundamental part of any infrastructure automation. There are multiple solutions (some free, some paid) for managing source version control, including Git, SVN, Mercurial, and Perforce. Due to its popularity among the Chef community, we will be using Git. However, you could easily use any other version control system with Chef.

Getting ready

You'll need Git installed on your box. Either use your operating system's package manager (such as Apt on Ubuntu or Homebrew on OS X), or simply download the installer from www.git-scm.org.

Git is a distributed version control system. This means that you don't necessarily need a central host for storing your repositories. But in practice, using GitHub as your central repository has proven to be very helpful. In this article, I'll assume that you're using GitHub. Therefore, you need to go to github.com and create a (free) account to follow the instructions given in this article. Make sure that you upload your SSH key following the instructions at https://help.github.com/articles/generating-ssh-keys, so that you're able to use the SSH protocol to interact with your GitHub account.

As soon as you've created your GitHub account, you should create your repository by visiting https://github.com/new and using chef-repo as the repository name.

How to do it...

Before you can write any cookbooks, you need to set up your initial Git repository on your development box. Opscode provides an empty Chef repository to get you started. Let's see how you can set up your own Chef repository with Git using Opscode's skeleton.

Download Opscode's skeleton Chef repository as a tarball:

mma@laptop $ wget http://github.com/opscode/chef-repo/tarball/master
...TRUNCATED OUTPUT...
2013-07-05 20:54:24 (125 MB/s) - 'master' saved [9302/9302]

Extract the downloaded tarball:

mma@laptop $ tar xzvf master

Rename the directory.
Replace 2c42c6a with whatever your downloaded tarball contained in its name:

mma@laptop $ mv opscode-chef-repo-2c42c6a/ chef-repo

Change into your newly created Chef repository:

mma@laptop $ cd chef-repo/

Initialize a fresh Git repository:

mma@laptop:~/chef-repo $ git init .
Initialized empty Git repository in /Users/mma/work/chef-repo/.git/

Connect your local repository to your remote repository on github.com. Make sure to replace mmarschall with your own GitHub username:

mma@laptop:~/chef-repo $ git remote add origin git@github.com:mmarschall/chef-repo.git

Add and commit Opscode's default directory structure:

mma@laptop:~/chef-repo $ git add .
mma@laptop:~/chef-repo $ git commit -m "initial commit"
[master (root-commit) 6148b20] initial commit
10 files changed, 339 insertions(+), 0 deletions(-)
create mode 100644 .gitignore
...TRUNCATED OUTPUT...
create mode 100644 roles/README.md

Push your initialized repository to GitHub. This makes it available to all your co-workers to collaborate on it.

mma@laptop:~/chef-repo $ git push -u origin master
...TRUNCATED OUTPUT...
To git@github.com:mmarschall/chef-repo.git
* [new branch] master -> master

How it works...

You've downloaded a tarball containing Opscode's skeleton repository. Then, you initialized your chef-repo and connected it to your own repository on GitHub. After that, you added all the files from the tarball to your repository and committed them. This makes Git track your files and the changes you make later. As a last step, you pushed your repository to GitHub, so that your co-workers can use your code too.

There's more...

Let's assume you're working on the same chef-repo repository together with your co-workers. They cloned your repository, added a new cookbook called other_cookbook, committed their changes locally, and pushed their changes back to GitHub. Now it's time for you to get the new cookbook down to your own laptop.

Pull your co-workers' changes from GitHub. This will merge their changes into your local copy of the repository:

mma@laptop:~/chef-repo $ git pull
From github.com:mmarschall/chef-repo
 * branch master -> FETCH_HEAD
...TRUNCATED OUTPUT...
create mode 100644 cookbooks/other_cookbook/recipes/default.rb

In the case of any conflicting changes, Git will help you merge and resolve them.

Installing Chef on your workstation

If you want to use Chef, you'll need to install it on your local workstation first. You'll have to develop your configurations locally and use Chef to distribute them to your Chef Server. Opscode provides a fully packaged version which does not have any external prerequisites. This fully packaged Chef is called the Omnibus Installer. We'll see how to use it in this section.

Getting ready

Make sure you have curl installed on your box by following the instructions available at http://curl.haxx.se/download.html.

How to do it...

Let's see how to install Chef on your local workstation using Opscode's Omnibus Chef installer:

In your local shell, run the following command:

mma@laptop:~/chef-repo $ curl -L https://www.opscode.com/chef/install.sh | sudo bash
Downloading Chef...
...TRUNCATED OUTPUT...
Thank you for installing Chef!

Add the newly installed Ruby to your path:

mma@laptop:~ $ echo 'export PATH="/opt/chef/embedded/bin:$PATH"' >> ~/.bash_profile && source ~/.bash_profile

How it works...

The Omnibus Installer will download Ruby and all the required Ruby gems into /opt/chef/embedded.
By adding the /opt/chef/embedded/bin directory to your .bash_profile, the Chef command-line tools will be available in your shell.

There's more...

If you already have Ruby installed on your box, you can simply install the Chef Ruby gem by running:

mma@laptop:~ $ gem install chef

Using the Hosted Chef platform

If you want to get started with Chef right away (without the need to install your own Chef Server) or want a third party to give you a Service Level Agreement (SLA) for your Chef Server, you can sign up for Hosted Chef by Opscode. Opscode operates Chef as a cloud service. It's quick to set up and gives you full control, using users and groups to control the access permissions to your Chef setup. We'll configure Knife, Chef's command-line tool, to interact with Hosted Chef, so that you can start managing your nodes.

Getting ready

Before being able to use Hosted Chef, you need to sign up for the service. There is a free account for up to five nodes. Visit http://www.opscode.com/hosted-chef and register for a free trial or the free account. I registered as the user webops with an organization short-name of awo.

After registering your account, it is time to prepare your organization to be used with your chef-repo repository.

How to do it...

Carry out the following steps to interact with Hosted Chef:

Navigate to http://manage.opscode.com/organizations. After logging in, you can start downloading your validation keys and configuration file.

Select your organization to be able to see its contents using the web UI.

Regenerate the validation key for your organization and save it as <your-organization-short-name>.pem in the .chef directory inside your chef-repo repository.

Generate the Knife config and put the downloaded knife.rb into the .chef directory inside your chef-repo directory as well. Make sure you replace webops with the username you chose for Hosted Chef and awo with the short-name you chose for your organization:

current_dir = File.dirname(__FILE__)
log_level :info
log_location STDOUT
node_name "webops"
client_key "#{current_dir}/webops.pem"
validation_client_name "awo-validator"
validation_key "#{current_dir}/awo-validator.pem"
chef_server_url "https://api.opscode.com/organizations/awo"
cache_type 'BasicFile'
cache_options( :path => "#{ENV['HOME']}/.chef/checksums" )
cookbook_path ["#{current_dir}/../cookbooks"]

Use Knife to verify that you can connect to your Hosted Chef organization. It should only have your validator client so far. Instead of awo, you'll see your organization's short-name:

mma@laptop:~/chef-repo $ knife client list
awo-validator

How it works...

Hosted Chef uses two private keys (called validators): one for the organization and the other for every user. You need to tell Knife where it can find these two keys in your knife.rb file.

The following two lines of code in your knife.rb file tell Knife which organization to use and where to find its private key:

validation_client_name "awo-validator"
validation_key "#{current_dir}/awo-validator.pem"

The following line of code in your knife.rb file tells Knife where to find your user's private key:

client_key "#{current_dir}/webops.pem"

And the following line of code in your knife.rb file tells Knife that you're using Hosted Chef. You will find your organization name as the last part of the URL:

chef_server_url "https://api.opscode.com/organizations/awo"

Using the knife.rb file and your two validators, Knife can now connect to your organization hosted by Opscode.
You do not need your own self-hosted Chef Server, nor do you need to use Chef Solo, in this setup.

There's more...

This setup is good for you if you do not want to worry about running, scaling, and updating your own Chef Server and you're happy with saving all your configuration data in the cloud (under Opscode's control). If you need to keep all your configuration data within your own network boundaries, you might sign up for Private Chef, a fully supported and enterprise-ready version of Chef Server. If you don't need any advanced enterprise features like role-based access control or multi-tenancy, then the open source version of Chef Server might be just right for you.

Summary

In this article, we learned about key concepts such as cookbooks, roles, and environments, and how to use some basic tools such as Git, Knife, Chef Shell, Vagrant, and Berkshelf.

Resources for Article:

Further resources on this subject:
Automating the Audio Parameters – How it Works [Article]
Skype automation [Article]
Cross-browser-distributed testing [Article]


Our App and Tool Stack

Packt
04 Mar 2015
33 min read
In this article by Zachariah Moreno, author of the book AngularJS Deployment Essentials, you will learn how to do the following:

Minimize efforts and maximize results using a tool stack optimized for AngularJS development
Access the krakn app via GitHub
Scaffold an Angular app with Yeoman, Grunt, and Bower
Set up a local Node.js development server
Read through krakn's source code

Before NASA or SpaceX launches a vessel into the cosmos, there is a tremendous amount of planning and preparation involved. The guiding principle when planning any successful mission is to minimize efforts and resources while retaining the maximum return on the mission. Our principles for development and deployment are no exception to this axiom, and you will gain a firmer working knowledge of how to apply them in this article.

(For more resources related to this topic, see here.)

The right tools for the job

Web applications can be compared to buildings; without tools, neither would be a pleasure to build. This makes tools an indispensable factor in both development and construction. When tools are combined, they form a workflow that can be repeated across any project built with the same stack, facilitating the practices of design, development, and deployment. The argument can be made that it is just as paramount to document workflow as an application's source code or API.

Along with grouping tools into categories based on the phases of building applications, it is also useful to group tools based on the opinions of a respective project, in our case Angular, Ionic, and Firebase. I call tools grouped into opinionated workflows tool stacks. For example, the remainder of this article discusses the tool stack used to build the application that we will deploy across environments in this book. In contrast, if you were to build a Ruby on Rails application, the tool stack would be completely different because the project's opinions are different.

Our app is called krakn, and it functions as a real-time chat application built on top of the opinions of Angular, the Ionic Framework, and Firebase. You can find all of krakn's source code at https://github.com/zachmoreno/krakn.

Version control with Git and GitHub

Git is a command-line interface (CLI) developed by Linus Torvalds for use on the famed Linux kernel. Git is popular largely due to its distributed architecture, which makes corruption of a repository nearly impossible. Git's distributed architecture means that any remote repository has all of the same information as your local repository. It is useful to think of Git as a free insurance policy for my code.

You will need to install Git using the instructions provided at www.git-scm.com/ for your development workstation's operating system.

GitHub.com has played a notable role in Git's popularization, turning its functionality into a social network focused on open source code contributions. With a pricing model that incentivizes open source contributions and licensing for private repositories, GitHub elevated the use of Git to heights never seen before. If you don't already have an account on GitHub, now is the perfect time to visit github.com to provision a free account.

I mentioned earlier that krakn's code is available for forking at github.com/ZachMoreno/krakn. This means that any person with a GitHub account has the ability to view my version of krakn and clone a copy of their own for further modifications or contributions.
In GitHub's web application, forking manifests itself as a button located to the right of the repository's title, which in this case is XachMoreno/krakn. When you click on the button, you will see an animation that simulates the hardcore forking action. This results in a cloned repository under your account with a title to the tune of YourName/krakn.

Node.js

Node.js, commonly known as Node, is a community-driven server environment built on Google Chrome's V8 JavaScript runtime that is entirely event driven and facilitates a nonblocking I/O model. According to www.nodejs.org, it is best suited for:

"Data-intensive real-time applications that run across distributed devices."

So what does all this boil down to? Node empowers web developers to write JavaScript both on the client and the server, with bidirectional real-time I/O. The advent of Node has empowered developers to take their skills from the client to the server, evolving from frontend to full stack (like a caterpillar evolving into a butterfly). Not only do these skills facilitate a pay increase, they also advance the Web towards the same functionality as the traditional desktop or native application.

For our purposes, we use Node as a tool; a tool to build real-time applications in the fewest number of keystrokes, videos watched, and words read as possible. Node is, in fact, a modular tool through its extensible package interface, called Node Package Manager (NPM). You will use NPM as a means to install the remainder of our tool stack.

NPM

NPM is a means to install Node packages on your local or remote server, and it is how we will install the majority of the tools and software used in this book. This is achieved by running the $ npm install -g [PackageName] command in your command line or terminal. To search the full list of Node packages, visit www.npmjs.org or run $ npm search [Search Term] in your command line or terminal, as shown in the following screenshot:
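Before moving on to the workflow tools, here is what "JavaScript on the server" looks like at its smallest. This is a minimal sketch using only Node's built-in http module; the file name, port number, and message are arbitrary choices for the example:

// server.js: a minimal Node HTTP server using only the built-in http module
var http = require('http');

var server = http.createServer(function (request, response) {
  // every request receives the same plain-text reply
  response.writeHead(200, { 'Content-Type': 'text/plain' });
  response.end('Hello from Node\n');
});

// 3000 is an arbitrary port chosen for this sketch
server.listen(3000, function () {
  console.log('Server listening on http://localhost:3000');
});

Running $ node server.js and visiting localhost:3000 in a browser shows the event-driven model in action: the process sits idle until a request event arrives, then runs the callback.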
Yeoman's workflow

Yeoman is a CLI that acts as the glue holding your tools together into an opinionated workflow. Although the term opinionated might sound off-putting, you must first consider the wisdom and experience of the developers and community who maintain Yeoman. In this context, opinionated means little more than a collection of best practices, all aimed at improving your developer experience of building static websites, single page applications, and everything in between. Opinionated does not mean that you are locked into what someone else feels is best for you, nor does it mean that you must strictly adhere to the opinions or best practices included. Yeoman is general enough to help you build nearly anything for the Web, as well as improving your workflow while developing it. The tools that make up Yeoman's workflow are Yo, Grunt.js, Bower, and a few others that are more-or-less optional, but are probably worth your time.

Yo

Apart from having one of the hippest namespaces, Yo is a powerful code generator that is intelligent enough to scaffold most sites and applications. By default, instantiating a yo command assumes that you mean to scaffold something at a project level, but yo can also be scoped more granularly by means of sub-generators. For example, the command for instantiating a new vanilla Angular project is as follows:

$ yo angular radicalApp

Yo will not finish your request until you provide some further information about your desired Angular project. This is achieved by asking you a series of relevant questions, and based on your answers, yo will scaffold a familiar application folder/file structure, along with all the boilerplate code. Note that if you have worked with the angular-seed project, then the Angular application that yo generates will look very familiar to you.

Once you have an Angular app scaffolded, you can begin using sub-generator commands. The following command scaffolds a new route, radicalRoute, within radicalApp:

$ yo angular:route radicalRoute

The :route sub-generator is a very powerful command, as it automates all of the following key tasks (a sketch of the generated wiring appears at the end of this section):

It creates a new file, radicalApp/scripts/controllers/radicalRoute.js, that contains the controller logic for the radicalRoute view
It creates another new file, radicalApp/views/radicalRoute.html, that contains the associated view markup and directives
Lastly, it adds an additional route within radicalApp/scripts/app.js that connects the view to the controller

Additionally, the sub-generators for yo angular include the following:

:controller
:directive
:filter
:service
:provider
:factory
:value
:constant
:decorator
:view

All the sub-generators allow you to execute finer-detailed commands for scaffolding smaller components, compared to :route, which executes a combination of sub-generators.

Installing Yo

Within your workstation's terminal or command-line application, type the following command, followed by a return:

$ npm install -g yo

If you are a Linux or Mac user, you might want to prefix the command with sudo, as follows:

$ sudo npm install -g yo

Grunt

Grunt.js is a task runner that enhances your existing and/or Yeoman's workflow by automating repetitive tasks. Each time you generate a new project with yo, it creates a /Gruntfile.js file that wires up all of the curated tasks. You might have noticed that installing Yo also installs all of Yo's dependencies. Reading through /Gruntfile.js should incite a fair amount of awe, as it gives you a snapshot of what is going on under the hood of Yeoman's curated Grunt tasks and their dependencies.

Generating a vanilla Angular app produces a /Gruntfile.js file that is responsible for performing the following tasks:

It defines where Yo places Bower packages, which is covered in the next section
It defines the path where the grunt build command places the production-ready code
It initializes the watch task to run:
  JSHint when JavaScript files are saved
  Karma's test runner when JavaScript files are saved
  Compass when SCSS or SASS files are saved
  The saved /Gruntfile.js file
It initializes LiveReload when any HTML or CSS files are saved
It configures the grunt server command to run a Node.js server on localhost:9000, or to show test results on localhost:9001
It autoprefixes CSS rules on LiveReload and grunt build
It renames files for optimizing browser caching
It configures the grunt build command to minify images, SVG, HTML, and CSS files, and to safely minify Angular files

Let us pause for a moment to reflect on the amount of time it would take to find, learn, and implement each dependency into our existing workflow for each project we undertake. OK, we should now have a greater appreciation for Yeoman and its community. For the vast majority of the time, you will likely only use a few Grunt commands, which include the following:

$ grunt server
$ grunt test
$ grunt build
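Before moving on, it is worth seeing what the :route sub-generator from the Yo section actually wires up. The following is a sketch of the route definition it appends to radicalApp/scripts/app.js; the module name and controller casing are illustrative here, as the exact output varies between generator versions:

// radicalApp/scripts/app.js (sketch of generated wiring, names illustrative)
angular.module('radicalAppApp')
  .config(function ($routeProvider) {
    $routeProvider
      .when('/radicalRoute', {
        templateUrl: 'views/radicalRoute.html', // generated view markup
        controller: 'RadicalRouteCtrl'          // generated controller
      })
      .otherwise({
        redirectTo: '/'
      });
  });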
Bower

If Yo scaffolds our application's structure and files, and Grunt automates repetitive tasks for us, then what does Bower bring to the party?

Bower is web development's missing package manager. Its functionality parallels that of Ruby Gems for the Ruby on Rails MVC framework, but it is not limited to any single framework or technology stack. The explicit use of Bower is not required by the Yeoman workflow, but as I mentioned previously, the use of Bower is configured automatically for you in your project's /Gruntfile.js file.

How does managing packages improve our development workflow? With all of the time we've been spending in our command lines and terminals, it is handy to have the ability to automate the management of third-party dependencies within our application. This ability manifests itself in a few simple commands, the most ubiquitous being the following:

$ bower install [PackageName] --save

With this command, Bower will automate the following steps:

First, search its packages for the specified package name
Download the latest stable version of the package if found
Move the package to the location defined in your project's /Gruntfile.js file, typically a folder named /bower_components
Insert references to the package's files within your project's /index.html file: <link> elements for CSS files in the document's <head> element, and <script> elements for JavaScript files right above the document's closing </body> tag (in the Yeoman workflow, this injection is handled by a Grunt task)

This process is one that web developers are more than familiar with, because adding a JavaScript library or new dependency happens multiple times within every project. Bower speeds up our existing manual process through automation, and improves it by providing the latest stable version of a package and then notifying us of an update if one is available. This last part, "notifying us of an update if … available", is important because as a web developer advances from one project to the next, it is easy to overlook keeping dependencies as up to date as possible. This is achieved by running the following command:

$ bower update

This command returns all the available updates, if any, and will go through the same process of inserting new references where applicable. Bower.io includes all of the documentation on how to use Bower to its fullest potential, along with the ability to search through all of the available Bower packages. Searching for available Bower packages can also be achieved by running the following command:

$ bower search [SearchTerm]

If you cannot find the specific dependency you are searching for, and the project is on GitHub, consider contributing a bower.json file to the project's root and inviting the owner to register it by running the following command:

$ bower register [ThePackageName] [GitEndpoint]

Registration allows you to install your dependency by running the next command:

$ bower install [ThePackageName]
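For reference, a minimal bower.json for a project like krakn might look like the following sketch; the package names are real Bower packages, but the version ranges are illustrative rather than the exact ones krakn pins:

{
  "name": "krakn",
  "version": "0.0.1",
  "dependencies": {
    "angular": "~1.2.0",
    "firebase": "~1.0.0",
    "ionic": "~1.0.0"
  }
}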
The Ionic framework

The Ionic framework is a truly remarkable advancement in bridging the gap between web applications and native mobile applications. In some ways, Ionic parallels Yeoman, in that it assembles tools that were already available to developers into a neat package and structures a workflow around them, inherently improving our experience as developers. If Ionic is analogous to Yeoman, then what are the tools that make up Ionic's workflow? The tools that, when combined, make Ionic noteworthy are Apache Cordova, Angular, Ionic's suite of Angular directives, and Ionic's mobile UI framework.

Batarang

An invaluable piece of our Angular tool stack is the Google Chrome Developer Tools extension, Batarang, by Brian Ford. Batarang adds a third-party panel (on the right-hand side of Console) to DevTools that facilitates Angular-specific inspection while debugging. We can view data in the scopes of each model, analyze each expression's performance, and view a beautiful visualization of service dependencies, all from within Batarang. Because Angular augments the DOM with ng- attributes, Batarang also provides a Properties pane within the Elements panel, to inspect the models attached to a given element's scope. The extension is easy to install from either the Chrome Web Store or the project's GitHub repository, and inspection can be enabled by performing the following steps:

Firstly, open the Chrome Developer Tools.
You should then navigate to the AngularJS panel.
Finally, select the Enable checkbox on the far right tab.

Your active Chrome tab will then be reloaded automatically, and the AngularJS panel will begin populating the inspection data. In addition, you can leverage the Angular pane within the Elements panel to view Angular-specific properties at an elemental level, and observe the $scope variable from within the Console panel.

Sublime Text and Editor Integration

While developing any Angular app, it is helpful to augment our workflow further with Angular-specific syntax completion, snippets, go to definition, and quick panel search in the form of a Sublime Text package. Perform the following steps:

If you haven't installed Sublime Text already, you need to first install Package Control. Otherwise, continue with the next step.
Once installed, press command + Shift + P in Sublime.
Then, you need to select the Package Control: Install Package option.
Finally, type angularjs and press Enter on your keyboard.

In addition to support within Sublime, Angular enhancements exist for lots of popular editors, including WebStorm, Coda, and TextMate.

Krakn

As a quick refresher, krakn was constructed using all of the tools covered in this article: Git, GitHub, Node.js, NPM, Yeoman's workflow, Yo, Grunt, Bower, Batarang, and Sublime Text. The application builds on Angular, Firebase, the Ionic Framework, and a few other minor dependencies. The workflow I used to develop krakn went something like the following. Follow these steps to achieve the same thing. Note that you can skip the remainder of this section if you'd like to get straight to the deployment action, and feel free to rename things where necessary.

Setting up Git and GitHub

The workflow I followed while developing krakn begins with initializing our local Git repository and connecting it to our remote master repository on GitHub. In order to install and set up both, perform the following steps:

Firstly, install all the tool stack dependencies, and create a folder called krakn.
Following this, run $ git init, and create a README.md file.
You should then run $ git add README.md and commit README.md to the local master branch.
You then need to create a new remote repository on GitHub called ZachMoreno/krakn.
Following this, run the following command:
$ git remote add origin git@github.com:[YourGitHubUserName]/krakn.git
Conclude the setup by running $ git push -u origin master.

Scaffolding the app with Yo

Scaffolding our app couldn't be easier with the yo ionic generator. To do this, perform the following steps (a condensed recap of these commands follows):

Firstly, install Yo by running $ npm install -g yo.
After this, install generator-ionicjs by running $ npm install -g generator-ionicjs.
To conclude the scaffolding of your application, run the yo ionic command.
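Condensed into a single terminal session, the setup and scaffold phases amount to the following sketch; the commit message and username are placeholders, and the generator will still prompt you with its usual questions:

$ mkdir krakn && cd krakn
$ git init
$ echo "# krakn" > README.md
$ git add README.md
$ git commit -m "first commit"
$ git remote add origin git@github.com:[YourGitHubUserName]/krakn.git
$ git push -u origin master
$ npm install -g yo generator-ionicjs
$ yo ionic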
Development

After scaffolding the folder structure and boilerplate code, our workflow advances to the development phase, which is encompassed in the following steps:

1. To begin, run grunt server.
2. You are now in a position to make changes, for example, deletions or additions.
3. Once these are saved, LiveReload will automatically reload your browser.
4. You can then review the changes in the browser.
5. Repeat steps 2-4 until you are ready to advance to the predeployment phase.

Views, controllers, and routes

Being a simple chat application, krakn has only a handful of views/routes. They are login, chat, account, menu, and about. The menu view is present in all the other views in the form of an off-canvas menu.

The login view

The default view/route/controller is named login. The login view utilizes Firebase's Simple Login feature to authenticate users before proceeding to the rest of the application. Apart from logging into krakn, users can register a new account by entering their desired credentials. An interesting part of the login view is the use of the ng-show directive to toggle the second password field if the user selects the register button. However, the ng-model directive is the first step here, as it is used to pass the input text from the view to the controller and, ultimately, the Firebase Simple Login. Other than the Angular magic, this view uses the ion-view directive, grid, and buttons that are all core to Ionic.

Each view within an Ionic app is wrapped within an ion-view directive that contains a title attribute, as follows:

<ion-view title="Login">

The login view uses standard input elements that contain an ng-model attribute to bind each input's value back to the controller's $scope, as follows:

  <input type="text" placeholder="you@email.com" ng-model="data.email" />
  <input type="password" placeholder="embody strength" ng-model="data.pass" />
  <input type="password" placeholder="embody strength" ng-model="data.confirm" />

The Log In and Register buttons call their respective functions using the ng-click attribute, with the value set to the function's name, as follows:

  <button class="button button-block button-positive" ng-click="login()" ng-hide="createMode">Log In</button>

The Register and Cancel buttons set the value of $scope.createMode to true or false, to show or hide the correct buttons for either action:

  <button class="button button-block button-calm" ng-click="createMode = true" ng-hide="createMode">Register</button>
  <button class="button button-block button-calm" ng-show="createMode" ng-click="createAccount()">Create Account</button>
  <button class="button button-block button-assertive" ng-show="createMode" ng-click="createMode = false">Cancel</button>

$scope.err is displayed only when you want to show feedback to the user:

  <p ng-show="err" class="assertive text-center">{{err}}</p>
</ion-view>

The login controller is dependent on Firebase's loginService module and Angular's core $location module:

controller('LoginCtrl', ['$scope', 'loginService', '$location',
  function($scope, loginService, $location) {

Ionic's directives tend to create isolated scopes, so it is useful here to wrap our controller's variables within a $scope.data object to avoid issues within the isolated scope, as follows:

    $scope.data = {
      "email"      : null,
      "pass"       : null,
      "confirm"    : null,
      "createMode" : false
    }
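This wrapper pattern is Angular's well-known "dot rule" in action. The following minimal sketch, which is illustrative rather than taken from krakn's source, shows the problem it avoids when a directive introduces a child or isolated scope:

// With a primitive, a child scope gets its own shadow copy on first write,
// so the controller never sees the user's input:
$scope.email = null;            // ng-model="email" can silently shadow this

// With an object, child scopes share the reference, so writes propagate
// back to the controller's scope:
$scope.data = { email: null };  // ng-model="data.email" updates the original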
The login() function checks the credentials before authentication and sends feedback to the user if needed:

    $scope.login = function(cb) {
      $scope.err = null;
      if( !$scope.data.email ) {
        $scope.err = 'Please enter an email address';
      }
      else if( !$scope.data.pass ) {
        $scope.err = 'Please enter a password';
      }

If the credentials are sound, we send them to Firebase for authentication, and when we receive a success callback, we route the user to the chat view using $location.path(), as follows:

      else {
        loginService.login($scope.data.email, $scope.data.pass, function(err, user) {
          $scope.err = err ? err + '' : null;
          if( !err ) {
            cb && cb(user);
            $location.path('krakn/chat');
          }
        });
      }
    };

The createAccount() function works in much the same way as login(), except that it ensures that the user doesn't already exist before adding them to your Firebase and logging them in:

    $scope.createAccount = function() {
      $scope.err = null;
      if( assertValidLoginAttempt() ) {
        loginService.createAccount($scope.data.email, $scope.data.pass,
          function(err, user) {
            if( err ) {
              $scope.err = err ? err + '' : null;
            }
            else {
              // must be logged in before I can write to my profile
              $scope.login(function() {
                loginService.createProfile(user.uid, user.email);
                $location.path('krakn/account');
              });
            }
          });
      }
    };

The assertValidLoginAttempt() function is used to ensure that no errors are received through the account creation and authentication flows:

    function assertValidLoginAttempt() {
      if( !$scope.data.email ) {
        $scope.err = 'Please enter an email address';
      }
      else if( !$scope.data.pass ) {
        $scope.err = 'Please enter a password';
      }
      else if( $scope.data.pass !== $scope.data.confirm ) {
        $scope.err = 'Passwords do not match';
      }
      return !$scope.err;
    }
  }])

The chat view

Keeping vegan practices aside, the meat and potatoes of krakn's functionality lives within the chat view/controller/route. The design is similar to most SMS clients, with the input in the footer of the view and messages listed chronologically in the main content area. The ng-repeat directive is used to display a message every time a message is added to the messages collection in Firebase. If you submit a message successfully, unsuccessfully, or without any text, feedback is provided via the placeholder attribute of the message input.

There are two filters being utilized within the chat view: orderByPriority and timeAgo. The orderByPriority filter is defined within the firebase module and uses the Firebase object IDs to ensure that objects are always chronological. The timeAgo filter is an open source Angular module that I found; you can access it at JS Fiddle.

The ion-view directive is used once again to contain our chat view:

<ion-view title="Chat">

Our list of messages is composed using the ion-list and ion-item directives, in addition to a couple of key attributes. The ion-list directive gives us some nice interactive controls using the option-buttons and can-swipe attributes.
This results in each list item being swipable to the left, revealing our option-buttons, as follows:

  <ion-list option-buttons="itemButtons" can-swipe="true" ng-show="messages">

Our workhorse in the chat view is the trusty ng-repeat directive, responsible for persisting our data from Firebase to our service to our controller and into our view, and back again:

    <ion-item ng-repeat="message in messages | orderByPriority" item="item" can-swipe="true">

Then, we bind our data into vanilla HTML elements that have some custom styles applied to them:

      <h2 class="user">{{ message.user }}</h2>

The third-party timeago filter converts the time into something such as "5 min ago", similar to Instagram or Facebook:

      <small class="time">{{ message.receivedTime | timeago }}</small>
      <p class="message">{{ message.text }}</p>
    </ion-item>
  </ion-list>

A vanilla input element is used to accept chat messages from our users. The input data is bound to $scope.data.newMessage for sending data to Firebase, and $scope.feedback is used to keep our users informed:

  <input type="text" class="{{ feeling }}" placeholder="{{ feedback }}" ng-model="data.newMessage" />

When you click on the send/submit button, the addMessage() function sends the message to your Firebase and adds it to the list of chat messages, in real time:

  <button type="submit" id="chat-send" class="button button-small button-clear" ng-click="addMessage()"><span class="ion-android-send"></span></button>
</ion-view>

The ChatCtrl controller is dependent on a few more modules than our LoginCtrl, including syncData, $ionicScrollDelegate, $ionicLoading, and $rootScope:

controller('ChatCtrl', ['$scope', 'syncData', '$ionicScrollDelegate', '$ionicLoading', '$rootScope',
  function($scope, syncData, $ionicScrollDelegate, $ionicLoading, $rootScope) {

The userName variable is derived from the authenticated user's e-mail address (saved within the application's $rootScope) by splitting the e-mail and using everything before the @ symbol:

    var userEmail = $rootScope.auth.user.email,
        userName  = userEmail.split('@');

We avoid the isolated scope issue in the same fashion as we did in LoginCtrl:

    $scope.data = {
      newMessage : null,
      user       : userName[0]
    }

Our view will only contain the latest 20 messages that have been synced from Firebase:

    $scope.messages = syncData('messages', 20);

When a new message is saved/synced, it is added to the bottom of the ng-repeated list, so we use the $ionicScrollDelegate service to automatically scroll the new message into view on the display, as follows:

    $ionicScrollDelegate.scrollBottom(true);

Our default chat input placeholder text is something on your mind?:

    $scope.feedback = 'something on your mind?';
    // displays as class on chat input placeholder
    $scope.feeling = 'stable';

If we have a new message and a valid (shortened) username, then we can call the $add() function, which syncs the new message to Firebase and our view, as follows:

    $scope.addMessage = function() {
      if(  $scope.data.newMessage
        && $scope.data.user ) {
        // new data elements cannot be synced without adding them to FB Security Rules
        $scope.messages.$add({
          text         : $scope.data.newMessage,
          user         : $scope.data.user,
          receivedTime : Number(new Date())
        });
        // clean up
        $scope.data.newMessage = null;
On a successful sync, the feedback updates to say Done! What's next?, as shown in the following code snippet:

        $scope.feedback = 'Done! What\'s next?';
        $scope.feeling = 'stable';
      }
      else {
        $scope.feedback = 'Please write a message before sending';
        $scope.feeling = 'assertive';
      }
    };

    $ionicScrollDelegate.scrollBottom(true);
  }])

The account view

The account view allows logged-in users to view their current name and e-mail address, along with providing them with the ability to update their password and e-mail address. The input fields interact with Firebase in the same way as the chat view does, using the syncData method defined in the firebase module:

<ion-view title="Account" left-buttons="leftButtons">

The $scope.user object contains our logged-in user's account credentials, and we bind them into our view as follows:

  <p>{{ user.name }}</p>
  …
  <p>{{ user.email }}</p>

Basic account management functionality is provided within this view, so users can update their e-mail address and/or password if they choose to, using the following code snippet:

  <input type="password" ng-keypress="reset()" ng-model="oldpass"/>
  …
  <input type="password" ng-keypress="reset()" ng-model="newpass"/>
  …
  <input type="password" ng-keypress="reset()" ng-model="confirm"/>

Both the updatePassword() and updateEmail() functions work in much the same fashion as the createAccount() function within the LoginCtrl controller. They check whether the new e-mail address or password differs from the old one, and if all is well, they sync the changes to Firebase and back again:

  <button class="button button-block button-calm" ng-click="updatePassword()">update password</button>
  …
  <p class="error" ng-show="err">{{err}}</p>
  <p class="good" ng-show="msg">{{msg}}</p>
  …
  <input type="text" ng-keypress="reset()" ng-model="newemail"/>
  …
  <input type="password" ng-keypress="reset()" ng-model="pass"/>
  …
  <button class="button button-block button-calm" ng-click="updateEmail()">update email</button>
  …
  <p class="error" ng-show="emailerr">{{emailerr}}</p>
  <p class="good" ng-show="emailmsg">{{emailmsg}}</p>
  …
</ion-view>

The menu view

Within krakn/app/scripts/app.js, the menu route is defined as the only abstract state. Because of its abstract state, it can be presented in the app along with the other views by the ion-side-menus directive provided by Ionic. You might have noticed that only two menu options are available before signing into the application, and that the rest appear only after authenticating. This is achieved using the ng-show-auth directive on the chat, account, and log out menu items. The majority of the options for Ionic's directives are available through attributes, making them simple to use. For example, take a look at the animation="slide-left-right" attribute. You will find Ionic's use of custom attributes within its directives to be one of the ways that the Ionic Framework sets itself apart from other options within this space.
The ion-side-menu directive contains our menu list, similarly to the ion-view directive we previously covered, as follows:

<ion-side-menus>
  <ion-pane ion-side-menu-content>
    <ion-nav-bar class="bar-positive">

Our back button is displayed by including the ion-nav-back-button directive within the ion-nav-bar directive:

      <ion-nav-back-button class="button-clear"><i class="icon ion-chevron-left"></i> Back</ion-nav-back-button>
    </ion-nav-bar>

Animations within Ionic are exposed and used through the animation attribute, which is built atop the ngAnimate module. In this case, we are doing a simple animation that replicates the experience of a native mobile app:

    <ion-nav-view name="menuContent" animation="slide-left-right"></ion-nav-view>
  </ion-pane>

  <ion-side-menu side="left">
    <header class="bar bar-header bar-positive">
      <h1 class="title">Menu</h1>
    </header>
    <ion-content class="has-header">

A simple ion-list directive/element is used to display our navigation items in a vertical list. The ng-show-auth attribute handles the display of menu items before and after a user has authenticated. Before users log in, they can access the navigation, but only the About and Log In views are available until after successful authentication.

      <ion-list>
        <ion-item nav-clear menu-close href="#/app/chat" ng-show-auth="'login'">
          Chat
        </ion-item>

        <ion-item nav-clear menu-close href="#/app/about">
          About
        </ion-item>

        <ion-item nav-clear menu-close href="#/app/login" ng-show-auth="['logout','error']">
          Log In
        </ion-item>

The Log Out navigation item is only displayed once the user is logged in, and upon a click, it calls the logout() function in addition to navigating to the login view:

        <ion-item nav-clear menu-close href="#/app/login" ng-click="logout()" ng-show-auth="'login'">
          Log Out
        </ion-item>
      </ion-list>
    </ion-content>
  </ion-side-menu>
</ion-side-menus>

The MenuCtrl controller is the simplest controller in this application, as all it contains is the toggleMenu() and logout() functions:

controller("MenuCtrl", ['$scope', 'loginService', '$location', '$ionicScrollDelegate',
  function($scope, loginService, $location, $ionicScrollDelegate) {
    $scope.toggleMenu = function() {
      $scope.sideMenuController.toggleLeft();
    };

    $scope.logout = function() {
      loginService.logout();
      $scope.toggleMenu();
    };
  }])

The about view

The about view is 100 percent static, and its only real purpose is to present the credits for all the open source projects used in the application.

Global controller constants

All of krakn's controllers share only two dependencies: ionic and ngAnimate. Because Firebase's modules are registered within /app/scripts/app.js, they are available for consumption by all of the controllers without the need to declare them again. Therefore, the firebase module's syncData and loginService services are available to ChatCtrl and LoginCtrl for use.

The syncData service is how krakn utilizes the three-way data binding provided by Firebase (by way of the AngularFire library). For example, within the ChatCtrl controller, we use syncData( 'messages', 20 ) to bind the latest twenty messages within the messages collection to $scope for consumption by the chat view. Conversely, when a user clicks the submit button (ng-click), we write the data to the messages collection by use of the syncData.$add() method inside the $scope.addMessage() function:

$scope.addMessage = function() {
  if(...) {
    $scope.messages.$add({ ... });
  }
};
Models and services

The model for krakn is www.krakn.firebaseio.com. The services that consume krakn's Firebase API are as follows:

The firebase service in krakn/app/scripts/service.firebase.js
The login service in krakn/app/scripts/service.login.js
The changeEmail service in krakn/app/scripts/changeEmail.firebase.js

The firebase service defines the syncData service that is responsible for routing data bidirectionally between krakn/app/bower_components/angularfire.js and our controllers. Please note that the reason I have not mentioned angularfire.js until this point is that it is essentially an abstract data-translation layer between firebaseio.com and Angular applications that intend to consume data as a service.

Predeployment

Once the majority of an application's development phase has been completed, at least for the initial launch, it is important to run all of the code through a build process that optimizes file size through compression of images and minification of text files. This piece of the workflow was not overlooked by Yeoman, and is available through the use of the $ grunt build command. As mentioned in the section on Grunt, the /Gruntfile.js file defines where built code is placed once it is optimized for deployment. Yeoman's default location for built code is the /dist folder, which might or might not exist, depending on whether you have run the grunt build command before.

Summary

In this article, we discussed the tool stack and workflow used to build the app. Together, Git and Yeoman formed a solid foundation for building krakn. Git and GitHub provided us with distributed version control and a platform for sharing the application's source code with you and the world. Yeoman facilitated the remainder of the workflow: scaffolding with Yo, automation with Grunt, and package management with Bower. With our app fully scaffolded, we were able to build our interface with the directives provided by the Ionic Framework, and wire up the real-time data synchronization forged by our Firebase instance. With a few key tools, we were able to minimize our development time while maximizing our return.

Resources for Article:

Further resources on this subject:
Role of AngularJS? [article]
AngularJS Project [article]
Creating Our First Animation AngularJS [article]

Working with Data Application Components in SQL Server 2008 R2

Packt
19 Feb 2010
4 min read
(For more resources on Microsoft, see here.)

A Data Application Component (DAC) is an entity that integrates all of the data-tier objects used in authoring, deploying, and managing into a single unit, instead of working with them separately. Programmatically, DACs are represented by classes found in the Microsoft.SqlServer.Management.Dac namespace. DACs are stored in a DacStore and managed centrally. DACs can be authored and built using the SQL Server Data-Tier Application templates in VS2010 (now in Beta 2), or using SQL Server Management Studio. This article describes creating a DAC using the SQL Server 2008 R2 Nov-CTP (called the R2 server in this article), a new feature in this version.

Overview of the article

In order to work with this example, you need to download the SQL Server 2008 R2 Nov-CTP. The ease with which this installs depends on the Windows OS on your machine. I encountered problems installing it on my Windows XP SP3 machine, where only some of the files were installed. On Windows 7 Ultimate, it installed very easily. This article uses the R2 server installed on Windows 7 Ultimate. You can download the R2 server from this link after registering at the site. Download the x32 version, a 1.2 GB file.

In order to work with Data-Tier Applications in Visual Studio, you need to install Visual Studio 2010, now in Beta 2. If you have installed Beta 1, remove it (use Add/Remove Programs) before you install Beta 2. You would create a Database Project as shown in the next figure. In the following sections, we will look at how to extract a DAC from an existing database using tools in SSMS and the R2 server. This will be followed by deploying the DAC to a SQL Server 2008 instance (the pre-R2 version). In a future article, we will see how to create and work with DACs in Visual Studio.

Extracting a DAC

We will use the Extract Data-Tier Application wizard to create a DAC file:

1. Connect to the SQL Server 2008 server in Management Studio, as shown. We will create a DAC package that produces a DAC file on completing this task.
2. Right-click the Pubs database and click on Tasks | Extract Data-Tier Application.... You may also use any other database for this exercise. This brings up the wizard, as shown in the next figure. Read the notes on this window and review the database icons. Click Next.
3. The Set Properties page of the wizard gets displayed. The Application name will be the database name you started with; you can change it if you like. The package filename will reflect the application name. The version is set at 1.0.0.0, but you may specify any version you like; you can create different DACs with different version numbers. Click Next.
4. The program cranks up, and after validation the Validation & Summary page gets displayed, as shown. The file path of the package, the name of the package, and the DAC objects that went into the package are all shown here. All database objects (tables, views, stored procedures, and so on) are included in the package.
5. Click the Save Report button to save the packaging information to a file. This saves the HTML file ExtractDACSummary_HODENTEK3_pubs_20100125 to the SQL Server Management Studio folder. This report shows which objects were validated during the process, as shown. Click Next.
6. The Build the Package page opens up, and after the build process completes, you will be able to save the package, as shown in the next picture.

At the package location shown earlier, you will see the package object as shown.
This file can be unpacked to a destination, as well as opened with the Microsoft SQL Server DAC Package File Unpack wizard. These actions can be accessed by right-clicking on the package file.
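Once a DAC has been deployed to a server, its registration can also be inspected from T-SQL. The following query is a sketch against the msdb catalog view that, to the best of my knowledge, ships with SQL Server 2008 R2; verify the view and column names on your CTP build before relying on it:

-- List registered DAC instances on this server
-- (view and column names assumed for the R2 CTP; adjust as needed)
SELECT instance_name, type_name, type_version, date_created
FROM msdb.dbo.sysdac_instances;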


Getting Started with Adobe Premiere Pro CS6 Hotshot

Packt
11 Jul 2013
14 min read
(For more resources related to this topic, see here.)

Getting the story right!

This is basic housekeeping; ignoring it only makes your own editing life more frustrating. So take a deep breath, think of calm blue oceans, and begin by getting this project organized. First you need to set up the Timeline correctly, and then you will create a short storyboard of the interview; again, you will do this by focusing on the beginning, middle, and end of the story. Always start this way, as a good story needs these elements to make sense. For frame-accurate editing, it's advisable to use the keyboard as much as possible, although some actions will need to be performed with the mouse. Towards the end of this task you will cover some new ground as you add and expand Timeline tracks in preparation for the tasks ahead.

Prepare for Lift Off

Once you have completed all the preparations detailed in the Mission Checklist section, you are ready to go. Launch Premiere Pro CS6 in the usual way and then proceed to the first task.

Engage Thrusters

First you will open the project template, save it as a new file, and then create a three-clip sequence; the rough assembly of your story. Once done, perform the following steps:

1. When the Recent Projects splash screen appears, select Hotshots Template – Montage.
2. Wait for the project to finish loading, then save this as Hotshots – Interview Project.
3. Close any sequences open on the Timeline.
4. Select the Editing Optimized Workspace.
5. Select the Project panel and open the Video bin without creating a separate window.

If you would like Premiere Pro to always open a bin without creating a separate window, select Edit | Preferences | General from the menu. When the General Options window displays, locate the Bins option area and change the Double-Click option to Open in Place.

6. Import all eight video files into the Video folder inside the Project 3 folder.
7. Create a new sequence, picking any settings at random; you will correct this in the next step.
8. Rename the sequence as Project 3.
9. Match the Timeline settings with any clip from the Video bin, and then delete the clip from the Timeline.
10. Set the Project panel as the active panel and switch to List View if it is not already displayed.
11. Create the basic elements of a short story for this scene using only three of the available clips in the Video bin. To do this, hold down the Ctrl or command key and click on the clips named ahead. Make sure you click on them in the same order as they are presented here:

Intro_Shot.avi
Two_Shot.avi
Exit_Shot.avi

12. Ensure the Timeline indicator is at the start of the Timeline, and then click on the Automate to Sequence icon.
13. When the Automate To Sequence window appears, change Ordering to Selection Order and leave Placement as the default (Sequentially). Uncheck the Apply Default Audio Transition, Apply Default Video Transition, and Ignore Audio checkboxes. Click on OK or press Enter on the keyboard to complete this action.
14. Right-click on the Video 1 track and select Add Tracks from the context menu. When the Add Tracks window appears, set the number of video tracks to be added to 2 and the number of audio tracks to be added to 0. Click on OK or press Enter to confirm these changes.
15. Dial open the Audio 1 track (hint – small triangle next to Audio 1), then expand the Audio 1 track by placing the cursor at the bottom of the Audio 1 area, clicking on it, and dragging it downwards. Stop before the Master audio track disappears below the bottom of the Timeline panel.
The Master audio track is used to control the output of all the audio tracks present on the Timeline; this is especially useful when you come to prepare your Timeline for exporting to DVD. The Master audio track also allows you to view the left and right audio channels of your project. More details on the use of the Master audio track can be found in the Premiere Pro reference guide, which can be downloaded from http://helpx.adobe.com/pdf/premiere_pro_reference.pdf.

16. Make sure the Timeline panel is active and zoom in to show all the clips present (hint – press backslash).

You should end this section with a Timeline that looks something like the following screenshot. Save your project (press Ctrl + S or command + S) before moving on to the next task.

Objective Complete - Mini Debriefing

How did you do? Review the shortcuts listed next. Did you remember them all? In this task you should have automatically matched up the Timeline to the clips with one drag-and-drop, plus a delete. You should then have sent three clips from the Project panel to the Timeline using the Automate to Sequence function. Finally, you should have added two new video tracks and expanded the Audio 1 track. Keyboard shortcuts covered in this task are as follows:

\ (backslash): Zoom the Timeline to show all populated clips
Ctrl or command + double-click: Open a bin without creating a separate Project panel (also see the tip after step 5 in the Engage Thrusters section)
Ctrl or command + N: Create a new sequence
Ctrl or command + / (forward slash): Create a new bin in the Project panel
Ctrl or command + I: Open the Import window
Shift + 1: Set the Project panel as active
Shift + 3: Set the Timeline as active

Classified Intel

In this project, the Automate to Timeline function is used to create a rough assembly of three clips. These are placed on the Timeline in the order in which you clicked on them in the project bin. This is known as the selection order, and it allows the Automate to Timeline function to ignore the clips' relative locations in the project bin. This is not a practical workflow if you have too many clips in your Project panel (how would you remember the selection order of twenty clips?). However, for a small number of clips, this is a practical workflow to quickly and easily send a rough draft of your story to the Timeline in just a few clicks. If you remember nothing else from this book, always remember how to correctly use Automate To Timeline!

Extracting audio fat

Raw material from every interview ever filmed will have lulls, pauses, and some stuttering. People aren't perfect, and time spent trying to get lines and timing just right can lead to an unfortunate waste of filming time. As this performance is not live, you, the all-seeing editor, have the power to cut those distracting lulls and pauses, keeping the pace on beat and the audience's attention on track. In this task you will move through the Timeline, cutting out some of the audio fat using Premiere Pro's Extract function, and to keep this frame accurate, you will use as many keyboard shortcuts as possible.

Engage Thrusters

You will now use the Extract function to remove "dead" audio areas from the Timeline. Perform the following steps:

1. Set the Timeline panel as active, then play the Timeline back by pressing the L key once. Make a mental note of the silences that occur in the first clip (Intro_Shot.avi).
2. Return the Timeline indicator to the start of the Timeline using the Home key.
3. Zoom in on the Timeline by pressing the + (plus) key on the main keyboard area.
Do this until your Timeline looks something like the screenshot just after the following tip:

To zoom in and out of the Timeline, use the + (plus) and - (minus) keys in the main keyboard area, not the ones in the number pad area. Pressing the plus or minus key in the number pad area allows you to enter an exact number of frames into whichever tool is currently active.

4. You should be able to clearly see the first area of silence, starting at around 06;09 on the Timeline. Use the J, K, and L keyboard shortcuts to place the Timeline indicator at this point.
5. Press the I key to set an In point here, then move the Timeline indicator to the end of the silence (around 08;17), and press the O key to set an Out point.
6. Press the # (hash) key on your keyboard to remove the marked section of silence using Premiere Pro's Extract function.

Important information on Sync Locking tracks

The above step will only work if you have the Sync Lock icons toggled on for both the Video 1 and Audio 1 tracks. The Sync Lock icon controls which Timeline tracks will be altered when using a function such as Extract. For example, if the Sync Lock icon were toggled off for the Audio 1 track, then only the video would be extracted, which is counterproductive to what you are trying to achieve in this task! By default, each new project should open with the Sync Lock icon toggled on for all video and audio tracks that already exist on the Timeline, and for those added at a later point in the project. More information on Sync Lock can be found in the Premiere Pro reference guide (tinyurl.com/cz5fvh9).

7. Repeat steps 5 and 6 to remove silences from the following Timeline areas (you should judge these points for yourself rather than slavishly following the suggestions given next):
   i. Set an In point at 07;11 and an Out point at 08;10.
   ii. Press # (hash).
   iii. Set an In point at 11;05 and an Out point at 12;13.
   iv. Press # (hash).
8. Play back the Timeline to make sure you haven't extracted too much audio and clipped the end of a sentence. Use the Trim tool to restore the full sentence. You may have spotted other silences on the Timeline; for the moment, leave them alone. You will deal with these using other methods later in this project.
9. Save the project before moving on to the next section.

Objective Complete - Mini Debriefing

At the end of this section you should have successfully removed three areas of silence from the Intro_Shot.avi clip. You did this using the Extract function, an elegant way of removing unwanted areas from your clips. You may also have refreshed your working knowledge of the Trim tool. If this still feels a little alien to you, don't worry; you will have a chance to practice trimming skills later in this project.

Classified Intel

Extract is another cunningly simple function that does exactly what it says: it extracts a section of footage from the Timeline, and then closes the gap created by this action. In one step, it replicates a razor cut and ripple delete.

Creating a J-cut (away)

One of the most common video techniques used in interviews and documentaries (not to mention a number of films) is called a J-cut. This describes cutting away some of the video while leaving the audio beneath intact. The deleted video area is then replaced with alternative footage. This creates a voice-over effect that allows for a seamless transfer between the alternative viewpoints and the original speaker.
In this task you will create a J-cut by replacing the video at the start of Intro_Shot.avi, leaving the voice of the newsperson and replacing his image with cutaway shots of what he is describing. You will make full use of four-point edits.

Engage Thrusters

Create J-cuts and cutaway shots using workflows you should now be familiar with. Perform the following steps to do so:

1. Send the Cutaways_1.avi clip from the Project panel to the Source Monitor.
2. In the Source Monitor, create an In point at 00;00 and an Out point just before the shot changes (around 04;24).
3. Switch to the Timeline and send the Timeline indicator to the start of the Timeline (00;00). Create an In point here.
4. Use a keyboard shortcut of your choice to identify the point just before the newsperson mentions the "Local village shop" (roughly at 06;09). Create an Out point here.
5. You want to create a J-cut, which means protecting the audio track that is already on the Timeline. To do this, click once on the Audio 1 track header so that it turns dark gray.
6. Switch back to the Source Monitor and send the marked Cutaways_1.avi clip to the Timeline using the Overwrite function (hint – press the '.' (period) key).
7. When the Fit Clip window appears, select Change Clip Speed (Fit to Fill), and click on OK or press Enter on the keyboard. The village scene cutaway shot should now appear on Video 1, but Audio 1 should retain the newsperson's dialog. The inserted village scene clip will also have been slowed slightly to match what's being said by the newsperson.
8. Repeat steps 2 to 7 to place the Cutaways_1.avi shots of the village shop and the village church, plus a shot of the pub, on the Timeline to match the newsperson's dialog. The following are some suggestions on times, but try to do this step first without looking too closely at them:

For the village shop cutaway, set the Source Monitor In point at 05;00 and Out point at 09;24. Set the Timeline In point at 06;10 and Out point at 07;13. Switch back to the Source Monitor. Send the clip in the Overwrite mode and set Change Clip Speed to Fit to Fill.

For the village church cutaway, set the Source Monitor In point at 10;00 and Out point at 14;24. Set the Timeline In point at 07;14 and Out point at 09;03. Switch back to the Source Monitor. Send the clip in the Overwrite mode and set Change Clip Speed to Fit to Fill.

For the pub cutaway, send Reconstruction_1.avi to the Source Monitor. Set the Source Monitor In point at 04;11 and Out point at 04;17. Set the Timeline In point at 09;04 and Out point at 12;00. Switch back to the Source Monitor. Send the clip in the Overwrite mode and set Change Clip Speed to Fit to Fill.

The last cutaway shot here is part of the reconstruction reel and has been used because your camera person was unable (or forgot) to film a cutaway shot of the pub. This does sometimes happen, and then it's down to you, the editor in charge, to get the piece on air with as few errors as possible. To do this, you may find yourself scavenging footage from any of the other clips. In this case, you have used just seven frames of Reconstruction_1.avi, but using the Premiere Pro feature Fit to Fill, you are able to match the clip to the duration of the dialog, saving your camera person from a production-meeting dressing down!

9. Review your edit decisions and use the Trim tool or the Undo command to alter any edit points that you feel need adjustment.
As always, being an editor is about experimentation, so don't be afraid to try something out of the box; you never know where it might lead. Once you are happy with your edit decisions, render any clips on the Timeline that display a red line above them. You should end up with a Timeline that looks something like the following screenshot; save your project before moving on to the next section.

Objective Complete - Mini Debriefing

In this task you learned how to piece together cutaway shots to match the voice-over, creating an effective J-cut, as seen in the way the dialog seamlessly blends between the pub cutaway shot and the news reporter finishing his last sentence. You also learned how to scavenge source material from other reels in order to find the shot needed to match the dialog.

Classified Intel

The last set of time suggestions given in this task allows the pub cutaway shot to run over the top of the newsperson saying "And now, much to the surprise…". Whether or not this cutaway should run over the dialog is an editorial decision for you to make. It is simply a matter of taste, but you are the editor, and the final decision is yours!

In this article, we learned how to extract audio fat and create a J-cut.

Resources for Article:

Further resources on this subject:
Responsive Design with Media Queries [Article]
Creating a Custom HUD [Article]
Top features you'll want to know about [Article]