
How-To Tutorials - Web Development

jQuery UI Themes: States, Cues, Overlays and Shadows

Packt
05 Aug 2011
7 min read
jQuery UI Themes Beginner's Guide: Create new themes for your jQuery site with this step-by-step guide

States

jQuery UI widgets are always in one state or another, and these states play a role in themes: a widget in one state should look different from a widget in another state. These different appearances are controlled by CSS style properties within the theme.

States are especially prevalent in widgets that interact with mouse events. When a user hovers over a widget that listens for these events, the widget changes into a hover state. When the mouse leaves the widget, it returns to a default state. Even when nothing is happening with a widget, that is, when no events the widget is interested in are taking place, the widget is in a default state. The reason we need a default state is so that widgets can return to their default appearance. The appearance of these states is entirely controlled through the applied theme. In this section, we'll change the ThemeRoller settings for widget states.

Time for action - setting default state styles

Some widgets that interact with the mouse have a default state applied to them. We can adjust how this state changes the appearance of the widget using ThemeRoller settings:

1. Continuing with our theme design, expand the Clickable: default state section.
2. In the Background color & texture section, click on the texture selector in the middle. Select the inset hard texture.
3. In the Background color & texture section, set the background opacity to 65%.
4. Change the Border color setting value to #b0b0b0.
5. Change the Icon color setting value to #555555.

What just happened?

We've just changed the look and feel of the default widget state. We changed the background texture to match that of the header theme settings. Likewise, we changed the background opacity to 65%, also to match the header theme settings. The border color is now slightly darker - this looks better with the new default state background settings.
Finally, the icon color was updated to match the default state font color. Here is what the sample button looked like before we made our changes: Here is what the sample button looks like after we've updated our theme settings:

Time for action - setting hover state styles

The same widgets that may be in a default state, for instance, a button, may also be in a hover state. Widgets enter a hover state when a user moves the mouse pointer over the widget. We want our user interface to give some visual indication that the user has hovered over something they can click. It's time to give our theme some hover state styles:

1. Continuing with our theme design, expand the Clickable: hover state section.
2. In the Background color & texture section, click on the texture selector in the middle. Select the inset hard texture.
3. Change the Border color setting value to #787878.
4. Change the Icon color setting value to #212121.

What just happened?

When we hover over widgets that support the hover state, their appearance is now harmonized with our theme settings. The background texture was updated to match the texture of the default state styles. The border color is now slightly darker, which makes the widget really stand out when the user hovers over it. At the same time, it isn't so dark that it conflicts with the rest of the theme settings. Finally, we updated the icon color to match the font color. Here is what the sample button widget looked like before we changed the hover state settings: Here is what the sample button widget looks like after we updated the hover state theme settings:

Time for action - setting active state styles

Some jQuery UI widgets, the same widgets that can be in either a default or hover state, can also be in an active state. Widgets become active after a user clicks them. For instance, the currently selected tab in a tabs widget is in an active state.
We can control the appearance of active widgets through the ThemeRoller:

1. Continuing with our theme design, expand the Clickable: active state section.
2. In the Background color & texture section, change the color setting value on the left to #f9f9f9.
3. In the Background color & texture section, click the texture selector in the middle. Select the flat texture.
4. In the Background color & texture section, set the opacity setting value on the right-hand side to 100%.
5. Change the Border color setting value to #808080.
6. Change the Icon color setting value to #212121.

What just happened?

Widgets in the active state will now use our updated theme styles. We've changed the background color to something only marginally darker. The reason is that we are using the highlight soft texture in our content theme settings, which means the color gets lighter toward the top; the color at the top is what we're aiming for in the active state styles. The texture has been changed to flat. Flat textures, unlike the others, have no pattern: they're simply a color. Accordingly, we've changed the background opacity to 100%, because for these theme settings we're only interested in showing the color. The active state border is slightly darker, a visual cue to show that the widget is in fact active. Finally, like other adjustments we've made in our theme, the icon color now mirrors the text color.

Here is what the sample tabs widget looked like before we changed the active state theme settings: Here is what the sample tabs widget looks like after we've updated the active state theme settings. Notice that the selected tab's border stands out among the other tabs and how the selected tab blends better with the tab content.

Cues

In any web application, it is important to be able to notify users of events that have taken place. Perhaps an order was successfully processed, or a registration field was entered incorrectly. Both occurrences are worth letting the user know about.
These are types of cues. The jQuery UI theming framework defines two types of cues used to notify the user: highlights and errors. A highlight cue is informational, something that needs to be brought to the user's attention. An error is something exceptional that should not have happened. Both cue categories can be customized to meet the requirements of any theme. Keep in mind that cues are meant to aggressively grab the attention of the user, not to passively display information, so the theme styles applied to these elements really stand out. In this section we'll take a look at how to make this happen with the ThemeRoller.

Time for action - changing the highlight cue

A user of your jQuery UI application has just saved something. How do they know it was successful? Your application needs to inform them somehow: it needs to highlight the fact that something interesting just happened. To do this, your application will display a highlight cue. Let's add some highlight styles to our theme:

1. Continuing with our theme design, expand the Highlight section.
2. In the Background color & texture section, change the color setting value to #faf2c7.
3. In the Background color & texture section, change the opacity setting value to 85%.
4. Change the Border color setting value to #f8df49.
5. Change the Text color setting value to #212121.

What just happened?

The theme settings for any highlight cues we display for the user have been updated. The background color is a shade darker and the opacity has been increased by 10%. The border color is now significantly darker than the background color; the contrast between the background and border colors defined here is now better aligned with the background-border contrast defined in other theme sections. Finally, we've updated the text color to be the same as the text in other sections. This makes no noticeable difference, but it keeps the theme consistent.
Here is what the sample highlight cue looked like before we updated the theme settings: Here is what the sample highlight cue looks like after the theme setting changes:
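ThemeRoller only generates the stylesheet; a page displays a cue by giving an element the framework's state classes. As a minimal sketch of our own (not from the book), the class names ui-state-highlight, ui-state-error, ui-corner-all, ui-icon-info, and ui-icon-alert are jQuery UI's documented framework classes, while the helper function itself is hypothetical:

```javascript
// Hypothetical helper: builds cue markup using the jQuery UI CSS
// framework classes. The theme settings above (background #faf2c7,
// border #f8df49) apply to any element carrying ui-state-highlight.
function buildCue(type, message) {
  var stateClass = type === 'error' ? 'ui-state-error' : 'ui-state-highlight';
  var iconClass = type === 'error' ? 'ui-icon-alert' : 'ui-icon-info';
  return '<div class="' + stateClass + ' ui-corner-all" style="padding: 0.7em;">' +
         '<span class="ui-icon ' + iconClass + '" style="float: left; margin-right: 0.3em;"></span>' +
         message +
         '</div>';
}

// An informational highlight cue and an error cue share the same
// structure; only the state and icon classes differ.
var saved = buildCue('highlight', 'Your changes were saved.');
var failed = buildCue('error', 'The order could not be processed.');
```

Because the cue appearance comes entirely from the theme, re-rolling the theme in ThemeRoller restyles every cue without touching this markup.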

Customizing WordPress Settings for SEO

Packt
27 Apr 2011
12 min read
WordPress 3 Search Engine Optimization: Optimize your website for popularity with search engines

We will begin by setting up the goals for your Internet presence and determining how best to leverage WordPress' flexibility and power for maximum benefit. We'll examine how best to determine and reach out to the specific audience for your goods or services. Different Internet models require different strategies. For example, if your goal is instant e-commerce sales, you strategize differently than if your goal is a broad-based branding campaign. We'll also examine how to determine how competitive the existing search market is, and how to develop a plan to penetrate that market.

It's important to leverage WordPress' strengths. WordPress can effortlessly help you build large, broad-based sites. It can also improve the speed and ease with which you publish new content. It serves up simple, text-based navigation menus that search engines crawl and index easily. WordPress' tagging, pingback, and TrackBack features help other blogs and websites find and connect with your content. For these reasons, and quite a few more, WordPress is search ready.

In this article, we will look at what WordPress already does for your SEO. Of course, WordPress is designed as a blogging platform and a content management platform, not as a platform purely for ranking. We'll look at what WordPress doesn't accomplish innately and how to address that. Finally, we'll look at how WordPress communicates with search engines and blog update services. Following this article, we'll know how to plan out a new site or improve an existing one, how to gauge WordPress' innate strengths and supplant its weaknesses, and how WordPress sites get found by search engines and blog engines.
Setting goals for your business and website and getting inspiration

A dizzying variety of websites run on the WordPress platform, everything from The Wall Street Journal's blog to the photo sharing comedy site PeopleofWalMart.com. Not every reader will have purely commercial intent in creating his or her web presence. However, all webmasters want more traffic and more visibility for their sites. With that in mind, to increase the reach, visibility, and ranking of your website, you'll want to develop your website plan based on the type of audience you are trying to reach, the type of business you run, and what your business goals are.

Analyzing your audience

You will obviously want to analyze the nature of your audience. Your website's content, its design, its features, and even the pixel width of the viewable area will depend on your audience. Is your audience senior citizens? If so, your design will need to incorporate large fonts and you will want to keep your design to a pixel width of 800 or less. Senior citizens can have difficulty reading small text and many use older computers with 800 pixel monitors. And you can forget about integrated Twitter and Facebook feeds; most seniors aren't as tuned into those technologies as young people. You might simply alienate your target users by including features that aren't a fit for your audience. Does your audience include purchasers of web design services? If so, be prepared to dazzle them with up-to-date design and features. Similarly, if you intend to rely on building up your user base by developing viral content, you will want to incorporate social media sharing into your design. Give some thought to the type of users you want to reach, and design your site for your audience. This exercise will go a long way in helping you build a successful site.

Analyzing your visitors' screen sizes

Have you ever wondered about the monitor sizes of the viewers of your own site?
Google Analytics (www.google.com/analytics), the ubiquitous free analytics tool offered by Google, offers this capability. To use it, log on to your Google Analytics account (or sign up if you don't have one) and select the website whose statistics you wish to examine. On the left menu, select Visitors, and expand the Browser Capabilities menu entry. Then select Screen Resolutions. Google Analytics will offer up a table and chart of all the monitor resolutions used by your viewers.

Determining the goal of your website

The design, style, and features of your website should be dictated by your goal. If your site is to be a destination site for instant sales or sign-ups, you want a website design more in the style of a single-purpose landing page that focuses principally on conversions. A landing page is a special-purpose, conversion-focused page that appears when a potential customer clicks on an advertisement or enters through a search query. The page should display sales copy that is a logical extension of the advertisement or link, and should employ a very shallow conversion funnel. A conversion funnel tracks the series of steps that a user must undertake to get from the entry point on a website to the satisfaction of a purchase or other conversion event. Shallow conversion funnels have fewer steps and deeper conversion funnels have more steps.

When designing conversion-focused landing pages, consider whether you want to eliminate navigation choices entirely on the landing page. Logically, if you are trying to get a user to make an immediate purchase, what benefit is served by giving the user easy choices to click through to other pages? Individualized page-by-page navigation can get clunky with WordPress; you might want to ensure that your WordPress template can easily handle these demands. The previous screenshot shows the expert landing page for Netflix's DVD rental service.
Note the absence of navigational choices. There are other sophisticated conversion tools as well: a clear explanation of benefits, a free trial, arrows, and color guiding the reader to the conversion event.

If you sell a technical product or high-end consulting services, you rely heavily on the creation of content, and on the organization and presentation of that content, on your WordPress site. Creating a large amount of content covering broad topics in your niche will establish thought leadership that will help you draw in and retain new customers. In sites with large amounts of educational content, you'll want to make absolutely sure that your content is well organized and has easy-to-follow navigation.

If you will be relying on social media and other forms of viral marketing to build up your user base, you'll want to integrate social media plug-ins and widgets into your site. Plug-ins and widgets are third-party software tools that you install on your WordPress site to add new functionality. A popular sports site integrates the TweetMeme and Facebook Connect widgets. When users retweet or share an article, it means links, traffic, and sales. When compounded with a large amount of content, the effect can be very powerful.

Following the leaders

Once you have determined the essential framework and niche for your site, look for best-in-class websites for your inspiration. Trends in design and features are changing constantly. Aim for up-to-the-minute design and features: enlightened design sells more products and services, and sophisticated features will help you convert and engage your visitors more. Likewise, ease of functionality will keep visitors on your website longer and keep them coming back. For design inspiration, you can visit any one of the hundreds of website design gallery sites. These gallery sites showcase great designs in all website niches.
The following design gallery sites feature the latest and greatest trends in web design and features (note that all of these sites run on WordPress):

Urban Trash (www.urbantrash.net/cssgallery/): This gallery is truly one of the best and should be the first stop when seeking design inspiration.

CSS Elite (www.csselite.com): Another of the truly high-end CSS galleries. Many fine sites are featured here.

CSS Drive (www.cssdrive.com): CSS Drive is one of the elite class of directories, and it has many other design-related features as well.

For general inspiration on everything from website design to more specialized discussion of the best design for website elements such as sign-up boxes and footers, head to Smashing Magazine, especially its "inspiration" category (http://smashingmagazine.com/category/inspiration/).

Ready-made WordPress designs by leading designers are available for purchase off-the-shelf at ThemeForest.net. These templates are resold to others, so they won't be exclusive to your site. A heads-up: these top-end themes are full of advanced custom features, so they might require a little effort to get them to display exactly as you want.

For landing pages, get inspiration from retail monoliths in competitive search markets. DishNetwork and Netflix have excellent landing pages. Sears' home improvement division serves up sophisticated landing pages for services such as vinyl siding and replacement windows. With thousands of hits per day, you can bet these retail giants are testing and retesting their landing pages periodically. You can save yourself the trouble and budget of your early-stage testing by employing the lessons that these giants have already put into practice. For navigation, usability, and site layout clues for large content-based sites, look to blogging super-sites such as Blogs.wsj.com, POLITICO.com, Huffingtonpost.com, and Wikipedia.com.

Gauging competition in the search market

Ideally, before you launch your site, you will want to gauge the competitive marketplace. On the Web, you have two spheres of competition:

One sphere is traditional business competition: the competition for price, quality, and service in the consumer marketplace.

The other sphere is search competition: competition for clicks, page views, conversions, user sign-ups, new visitors, returning visitors, search placement, and all the other metrics that help drive a web-based business.

The obvious way to start gauging the search marketplace is to run some sample searches on the terms you believe your future customers might use when seeking out your products or services. The search leaders in your marketplace will be easy to spot; they will be in the first six positions in a Google search. While aiming for the first six positions, don't think in terms of the first page of a Google search. Studies show that the first five or six positions in a Google search yield 80 to 90 percent of the click-throughs. The first page of Google is a good milestone, but the highest positions on a search results page will yield significantly higher traffic than the bottom three positions on the same page.

Once you've identified the five or six websites that place highly in these searches, you want to analyze what they're doing right and what you'll need to do to compete with them. Here's how to gauge the competition:

Don't focus on the website in the number one position for a given search. That may be too lofty a goal for the short term. Look at the sites in positions 4, 5, and 6. These positions will be your initial goal. You'll need to match or outdo these websites to earn those positions.

First, you want to determine the Google PageRank of your competitors' sites.
PageRank is a generalized, but helpful, indicator of the quality and number of inbound links that your competitors' websites have earned. Install a browser plug-in that shows the PageRank of any site to which you browse. For Firefox, try the SearchStatus plug-in (available at http://www.quirk.biz/searchstatus/). For Chrome, use SEO Site Tools (available through the Google Chrome Extensions gallery at https://chrome.google.com/extensions). Both of these tools are free, and they'll display a wide array of important SEO factors for any site you visit.

How old are the domains of your competitors' websites? Older sites tend to outrank newer sites. If you are launching a new domain, you will most likely need to outpace your older competitors in other ways, such as building more links or employing more advanced on-page optimization. Site age is a factor that can't be overcome with brains or hard work (although you can purchase an older, existing domain in the aftermarket).

Look at the size and scale of competing websites. You'll need to at least approach the size of the smallest of your competitors to place well.

You will want to inspect your competitors' inbound links. Where are they getting their links from, and how many links have they accumulated? To obtain a list of backlinks for any website, visit Yahoo! Site Explorer at siteexplorer.search.yahoo.com. This free tool displays up to 1,000 links for any site. If you want to see more than 1,000 links, you'll need to purchase inbound link analysis software like SEO Spyglass from link-assistant.com. For most purposes, 1,000 links will give you a clear picture of where a site's links are coming from. Don't worry about high link counts, because low-value links in large numbers are easy to overcome; high-value links like .edu links, links from article content, and links from high-PageRank sites will take more effort to surmount.

You will want to examine the site's on-page optimization.
Are the webmasters utilizing effective title tags and meta tags? Are they using heading tags, and is their on-page text keyword-focused? If they aren't, you may be able to beat your competitors through more effective on-page optimization.

Don't forget to look at conversion. Is your competitor's site well designed to convert his or her visitors into customers? If not, you might edge out your competition with better conversion techniques.

When you analyze your competition, you are determining the standard you will need to meet or beat to earn competitive search placement. Don't be discouraged by well-placed competitors. 99 percent of all websites are not well optimized. As you learn more, you'll be surprised how many webmasters are not employing effective optimization. Your goal as you develop or improve your website will be to do just a little bit more than your competition. Google and the other search engines will be happy to return your content in search results in favor of others' if you meet a higher standard.
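As a rough illustration of the on-page signals discussed above, here is a sketch of our own (not from the book) that checks a page's HTML for a title tag and a meta description. It is regex-based and deliberately simplified; a real audit tool would use a proper HTML parser.

```javascript
// Hypothetical helper: a crude check for two of the on-page factors
// mentioned above (a non-empty title tag and a meta description).
// Regex-based for brevity; real crawlers parse the DOM.
function hasBasicOnPageSeo(html) {
  var hasTitle = /<title>[^<]+<\/title>/i.test(html);
  var hasMetaDescription = /<meta[^>]*name=["']description["'][^>]*>/i.test(html);
  return hasTitle && hasMetaDescription;
}

hasBasicOnPageSeo('<head><title>Vinyl Siding</title>' +
  '<meta name="description" content="Replacement siding offers"></head>'); // true
hasBasicOnPageSeo('<head><title></title></head>');                         // false
```

Running a check like this against the sites in positions 4, 5, and 6 quickly shows whether their rankings rest on strong on-page work or mainly on links and domain age.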

Alfresco 3: Web Scripts

Packt
21 Jul 2011
6 min read
Alfresco 3 Cookbook: Over 70 recipes for implementing the most important functionalities of Alfresco

Introduction

You all know about Web Services, which took the web development world by storm a few years ago. Web Services have been instrumental in constructing Web APIs (Application Programming Interfaces) and making web applications work in a Service-Oriented Architecture. In the new Web 2.0 world, however, many criticisms arose around traditional Web Services, and thus RESTful services came into the picture. REST (Representational State Transfer) attempts to expose APIs using HTTP or a similar protocol, with interfaces built on well-known, lightweight, standard methods such as GET, POST, PUT, DELETE, and so on.

Alfresco Web Scripts provide RESTful APIs for the repository services and functions. Traditionally, ECM systems have exposed their interfaces using RPC (Remote Procedure Call), but gradually it turned out that RPC-based APIs are not particularly suitable in the wide Internet arena, where multiple environments and technologies reside together and talk seamlessly. With Web Scripts, RESTful services overcome these problems, and integration with an ECM repository has never been so easy and secure. Alfresco Web Scripts were introduced in 2006 and have since been quite popular with the developer and system integrator community for implementing services on top of the Alfresco repository and for integrating Alfresco with other systems.

What is a Web Script?

A Web Script is simply a URI bound to a service using standard HTTP methods such as GET, POST, PUT, or DELETE. Web Scripts can be written using just the Alfresco JavaScript API and FreeMarker templates, or optionally the Java API, with or without a FreeMarker template. For example, the URL http://localhost:8080/alfresco/service/api/search/person.html?q=admin&p=1&c=10 will invoke the search service and return the output in HTML.
Internally, a script written using the JavaScript API (or Java API) performs the search, and a FreeMarker template renders the search output in structured HTML format. All Web Scripts are exposed as services and are generally prefixed with http://<<server-url>>/<<context-path>>/<<service-path>>. In a standard scenario, this is http://localhost:8080/alfresco/service.

Web Script architecture

Alfresco Web Scripts strictly follow the MVC architecture.

Controller: Written using the Alfresco Java or JavaScript API, you implement your business requirements for the Web Script in this layer. You also prepare the data model that is returned to the view layer. The controller code interacts with the repository via the APIs and other services and processes the business implementations.

View: Written using FreeMarker templates, you implement exactly what you want to return from your Web Script. For data Web Scripts you construct your JSON or XML data using the template; for presentation Web Scripts you build your output HTML. The view can be implemented using FreeMarker templates, or using Java-backed Web Script classes.

Model: Normally constructed in the controller layer (in Java or JavaScript), these values are automatically available in the view layer.

Types of Web Scripts

Depending on the purpose and output, Web Scripts can be categorized into two types:

Data Web Scripts: These Web Scripts mostly return data after processing business requirements. They are mostly used to retrieve, update, and create content in the repository, or to query the repository.

Presentation Web Scripts: When you want to build a user interface using Web Scripts, you use these. They mostly return HTML output and are typically used for creating dashlets in Alfresco Explorer or Alfresco Share, or for creating JSR-168 portlets.

Note that this categorization of Web Scripts is not technical; it is just a logical separation.
This means data Web Scripts and presentation Web Scripts are not technically dissimilar; only their usage and purpose differ.

Web Script files

Defining and creating a Web Script in Alfresco requires creating certain files in particular folders. These files are:

Web Script Descriptor: The descriptor is an XML file used to define the Web Script: the name of the script, the URL(s) on which the script can be invoked, the authentication mechanism of the script, and so on. The name of the descriptor file should be of the form <<service-id>>.<<http-method>>.desc.xml; for example, helloworld.get.desc.xml.

FreeMarker Template Response file(s) (optional): The FreeMarker template output file is the FTL file that renders the result of the Web Script. The names of the template files should be of the form <<service-id>>.<<http-method>>.<<response-format>>.ftl; for example, helloworld.get.html.ftl and helloworld.get.json.ftl.

Controller JavaScript file (optional): The controller JavaScript file is the business layer of your Web Script. The name of the JavaScript file should be of the form <<service-id>>.<<http-method>>.js; for example, helloworld.get.js.

Controller Java file (optional): You can write your business implementations in Java classes as well, instead of using the JavaScript API.

Configuration file (optional): You can optionally include a configuration XML file. The name of the file should be of the form <<service-id>>.<<http-method>>.config.xml; for example, helloworld.get.config.xml.

Resource Bundle file (optional): These are standard message bundle files that can be used for localizing Web Script responses. The names of the message files should be of the form <<service-id>>.<<http-method>>.properties; for example, helloworld.get.properties.

The naming conventions of Web Script files are fixed; they follow particular semantics.
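To make the file layout concrete, here is a sketch of the smallest possible GET web script, using the helloworld example names from the list above. The descriptor and FreeMarker view are shown as comments; the controller logic follows the standard Alfresco JavaScript pattern, but note that in a real repository the model object (and the args object used later) are injected by the web script runtime, so the stubs below exist only to keep the sketch self-contained and runnable.

```javascript
// helloworld.get.desc.xml (descriptor, shown as a comment):
//   <webscript>
//     <shortname>Hello World</shortname>
//     <url>/helloworld</url>
//     <authentication>guest</authentication>
//   </webscript>
//
// helloworld.get.html.ftl (FreeMarker view, shown as a comment):
//   <html><body>${greeting}</body></html>

// helloworld.get.js (controller). `args` and `model` are normally
// provided by Alfresco; stubbed here for a standalone sketch.
var args = { name: 'Alfresco' };
var model = {};

// Controller logic: put values on the model; the FTL view above
// renders model.greeting via ${greeting}.
model.greeting = 'Hello, ' + (args.name || 'world') + '!';
```

With the three files deployed to one of the web script folders discussed below, GET http://localhost:8080/alfresco/service/helloworld would return the rendered greeting.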
Alfresco, by default, provides quite a rich list of built-in Web Scripts, which can be found in the tomcat/webapps/alfresco/WEB-INF/classes/alfresco/templates/webscripts/org/alfresco folder. There are a few locations where you can store your own Web Scripts:

Classpath folder: tomcat/webapps/alfresco/WEB-INF/classes/alfresco/templates/webscripts

Classpath folder (extension): tomcat/webapps/alfresco/WEB-INF/classes/alfresco/extension/templates/webscripts

Repository folder: /Company Home/Data Dictionary/Web Scripts

Repository folder (extension): /Company Home/Data Dictionary/Web Scripts Extensions

It is not advised to keep your Web Scripts in the org/alfresco folder; this folder is reserved for Alfresco's default Web Scripts. Create your own folders instead, or better, create your Web Scripts in the extension folders.

Web Script parameters

You will of course need to pass parameters to your Web Script and execute your business implementations around them. You can pass parameters via the query string for GET Web Scripts. For example:

http://localhost:8080/alfresco/service/api/search/person.html?q=admin&p=1&c=10

In this script, we have passed three parameters: q (the search query), p (the page index), and c (the number of items per page). You can also pass parameters bound in HTML form data in the case of POST Web Scripts. One example of such a Web Script is uploading a file.
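The q, p, and c parameters above reach a GET web script's controller through the framework's args object. As a rough standalone sketch (the parsing function here is our own; Alfresco does this for you, so inside a real controller you would simply read args.q), this is essentially what happens to the query string:

```javascript
// Hypothetical sketch of query-string parsing. In a real Alfresco web
// script you read args.q, args.p, and args.c directly; this helper
// exists only to make the example runnable outside the repository.
function parseQueryString(queryString) {
  var args = {};
  queryString.split('&').forEach(function (pair) {
    var parts = pair.split('=');
    args[decodeURIComponent(parts[0])] = decodeURIComponent(parts[1] || '');
  });
  return args;
}

var args = parseQueryString('q=admin&p=1&c=10');
// args.q === 'admin', args.p === '1', args.c === '10'
```

Note that every value arrives as a string; a controller that pages through results would convert p and c to numbers before using them.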

Start Ad Serving with OpenX

Packt
29 Mar 2010
3 min read
Basic OpenX ad serving steps diagram

The following diagram shows the steps necessary to complete serving an advertisement on a website using OpenX Ad Server.

Sample Amazon.com banner serving flowchart

In this scenario, we will start by adding an Advertiser (Amazon). Then, we will create a Campaign (Amazon Toys & Games). We will add a Banner (Amazon Puzzle Games for Kids) to this campaign. Then, we will define our sample website in OpenX. We will create a zone (Toys & Games Zone) for this website. The next step is to link a banner to this zone. Finally, we will complete serving advertisements by embedding the zone code in a page on the website and visiting this page through a browser.

Time for action – adding Amazon.com as an advertiser

In this section, we will learn how to add Amazon.com as an advertiser. As you may have heard, Amazon runs a very popular affiliate program called Amazon Associates. You can earn commissions from each sale that results from the links and banners you place on your website using this program. Read more about the Amazon Associates program and register for free at http://affiliate-program.amazon.com. As the example here is a fictional one, you don't need to register for the Amazon affiliate program before starting. The example will help you understand how to add any advertiser in a similar way.

1. Log in to the OpenX authentication panel. Use the Username and Password that we created earlier. The login page looks like this:
2. Click on the Inventory tab in the top menu and then click on the Add new advertiser link.
3. We are now on the Add new advertiser page. Fill in the Name, Contact, and Email fields. You can type your own information for the Contact and Email fields. Leave the other fields untouched, with their default settings.
4. Click the Save Changes button to complete adding an advertiser.

What just happened?

We have learned how to add a new advertiser to OpenX.
We logged into the OpenX management screen as the administrator user and provided the basic required fields: Name, Contact, and Email.

Time for action – adding a campaign for Amazon.com

Now, let's add a simple campaign for Amazon.com:

1. Click on the Add new campaign link next to the Amazon advertiser on the Advertisers page.
2. Fill in the Name field as Amazon – Toys & Games and select the Contract (Exclusive) option under it.
3. Leave the Date, Pricing, and Priority in relation to other campaigns sections at their default settings.
4. Leave the Delivery capping per visitor and Miscellaneous sections untouched as well.
5. Click on the Save Changes button to complete adding the Amazon – Toys & Games campaign.

What just happened?

We have learned how to add a campaign for an advertiser using the minimum requirements. We used the Name and Campaign type fields and ignored the others, as we will cover them later.

Documenting our Application in Apache Struts 2 (part 1)

Packt
15 Oct 2009
12 min read
Documenting Java

Everybody knows the basics of documenting Java, so we won't go into much detail. We'll talk a bit about ways of writing code whose intention is clear, mention some Javadoc tricks we can use, and highlight some tools that can help keep our code clean. Clean code is one of the most important ways we can document our application. Anything we can do to increase readability will reduce confusion later (including our own).

Self-documenting code

We've all heard the myth of self-documenting code. In theory, code is always clear enough to be easily understood. In reality, this isn't always the case. However, we should try to write code that is as self-documenting as possible.

Keeping non-code artifacts in sync with the actual code is difficult. The only artifact that survives a project is the executable, which is created from code, not comments. This is one of the reasons for writing self-documenting code. (Annotations, XDoclet, and so on, make that somewhat less true.) There are little things we can do throughout our code to make it read as much like our intent as possible, and to make extraneous comments just that: extraneous.

Document why, not what

Over-commenting wastes everybody's time. Time is wasted in writing a comment, reading it, and keeping it in sync with the code, and, most importantly, a lot of time is wasted when a comment is not accurate. Ever seen this?

    a += 1; // increment a

This is the most useless comment in the world. Firstly, it's really obvious we're incrementing something, regardless of what that something is. If the person reading our code doesn't know what += is, then we have more serious problems than them not knowing that we're incrementing, say, an array index. Secondly, if a is an array index, we should probably either use a more common array index name or make it obvious that it's an array index. Using i and j is common for array indices, while idx or index is less common.
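To make the contrast concrete, here is a small sketch of the same point (shown in JavaScript rather than Java, purely for brevity; the function and its name are invented for illustration). The useful comment explains intent the code cannot express on its own:

```javascript
// Bad: the comment merely restates the code.
//   a += 1; // increment a
//
// Better: let the name carry the "what", and comment the "why".
function nextPageIndex(currentPage) {
  // advance past the current page of results before fetching more
  return currentPage + 1;
}

console.log(nextPageIndex(1)); // → 2
```

The point is not the arithmetic; it is that a reader can now see *why* the increment happens without a comment narrating the operator.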
It may make sense to be very explicit in variable naming under some circumstances. Generally, it's nice to avoid names such as indexOfOuterArrayOfFoobars. However, with a large loop body it might make sense to use something such as num or currentIndex, depending on the circumstances. With Java 1.5 and its support for collection iteration, it's often possible to do away with the index altogether, but not always.

Make your code read like the problem

Buzzphrases like Domain Specific Languages (DSLs) and Fluent Interfaces are often heard when discussing how to make our code look like our problem. We don't necessarily hear about them as much in the Java world because other languages support their creation in more "literate" ways. The recent interest in Ruby, Groovy, Scala, and other dynamic languages has brought the concept back into the mainstream.

A DSL, in essence, is a computer language targeted at a very specific problem. Java is an example of a general-purpose language. YACC and regular expressions are examples of DSLs, targeted at creating parsers and recognizing strings of interest, respectively. DSLs may be external, where the implementing language processes the DSL appropriately, or internal, where the DSL is written in the implementing language itself. An internal DSL can also be thought of as an API or library, but one that reads more like a "little language".

Fluent interfaces are slightly more difficult to define, but can be thought of as an internal DSL that "flows" when read aloud. This is a very informal definition, but it will work for our purposes. Java can actually be downright hostile to some common DSL and fluent techniques for various reasons, including the expectations of the JavaBean specification. However, it's still possible to use some of the techniques to good effect. One typical fluent API technique is simply returning the object instance from object methods.
For example, following the JavaBean specification, an object will have a setter for each of its properties. A User class might include the following:

    public class User {
        private String fname;
        private String lname;

        public void setFname(String fname) { this.fname = fname; }
        public void setLname(String lname) { this.lname = lname; }
    }

Using the class is as simple as we'd expect it to be:

    User u = new User();
    u.setFname("James");
    u.setLname("Gosling");

Naturally, we might also supply a constructor that accepts the same parameters. However, it's easy to think of a class with so many properties that a full constructor becomes impractical. The code also seems a bit wordy, but we're used to this in Java. Another way of creating the same functionality is to include setter methods that return the current instance. If we want to maintain JavaBean compatibility, and there are reasons to do so, we would still need to include normal setters, but we can also include "fluent" setters, as shown here:

    public User fname(String fname) {
        this.fname = fname;
        return this;
    }

    public User lname(String lname) {
        this.lname = lname;
        return this;
    }

This creates (what some people believe is) more readable code. It's certainly shorter:

    User u = new User().fname("James").lname("Gosling");

There is one potential "gotcha" with this technique. Moving initialization into methods has the potential to create an object in an invalid state, so depending on the object, this may not always be a usable solution for object initialization. Users of Hibernate will recognize the "fluent" style, where method chaining is used to create criteria.
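The return-this trick is not Java-specific. Here is the same chaining idea as a runnable JavaScript sketch (the underscored property names are invented for illustration, not part of the chapter's User class):

```javascript
// A minimal fluent builder: each setter returns the instance,
// which is what makes the call chain possible.
function User() {
  this._fname = null;
  this._lname = null;
}
User.prototype.fname = function (fname) {
  this._fname = fname;
  return this; // returning the instance enables chaining
};
User.prototype.lname = function (lname) {
  this._lname = lname;
  return this;
};

var u = new User().fname("James").lname("Gosling");
console.log(u._fname + " " + u._lname); // → James Gosling
```

The whole construction reads as one sentence, which is the "fluent" quality the text describes.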
Joshua Flanagan wrote a fluent regular expression interface, turning regular expressions (already a domain-specific language) into a series of chained method calls:

    Regex socialSecurityNumberCheck = new Regex(Pattern.With.AtBeginning
        .Digit.Repeat.Exactly(3)
        .Literal("-").Repeat.Optional
        .Digit.Repeat.Exactly(2)
        .Literal("-").Repeat.Optional
        .Digit.Repeat.Exactly(4)
        .AtEnd);

Whether or not this particular usage is an improvement is debatable, but it's certainly easier to read for the non-regex folks. Ultimately, the use of fluent interfaces can increase readability (by quite a bit in most cases), may introduce some extra work (or completely duplicate work, as in the case of setters, though code generation and/or IDE support can help mitigate that), and may occasionally be more verbose (but with the benefit of enhanced clarity and IDE completion support).

Contract-oriented programming

Aspect-oriented programming (AOP) is a way of encapsulating cross-cutting functionality outside of the mainline code. That's a mouthful, but essentially it means that we can remove common code that is found across our application and consolidate it in one place. The canonical examples are logging and transactions, but AOP can be used in other ways as well.

Design by Contract (DbC) is a software methodology that states our interfaces should define and enforce precise specifications regarding operation. "Design by Contract" is a registered trademark of Interactive Software Engineering Inc. Other terms include Programming by Contract (PbC) or Contract-Oriented Programming (COP).

How does COP help create self-documenting code? Consider the following portion of a stack implementation:

    public void push(final Object o) {
        stack.add(o);
    }

What happens if we attempt to push a null? Let's assume that for this implementation, we don't want to allow pushing a null onto the stack.
    /**
     * Pushes non-null objects on to the stack.
     */
    public void push(final Object o) {
        if (o == null) return;
        stack.add(o);
    }

Once again, this is simple enough. We'll add a comment to the Javadocs stating that null objects will not be pushed (and that the call will fail/return silently). This becomes the "contract" of the push method—captured in code and documented in Javadocs. The contract is specified twice—once in the code (the ultimate arbiter) and again in the documentation. However, the user of the class has no proof that the underlying implementation actually honors that contract. There's no guarantee that if we pass in a null, the method will return silently without pushing anything.

The implied contract can change. We might decide to allow pushing nulls. We might throw an IllegalArgumentException or a NullPointerException on a null argument. We're not required to add a throws clause to the method declaration when throwing runtime exceptions, which means further information may be lost in both the code and the documentation.

Eiffel has language-level support for COP with the require/do/ensure/end construct. It goes beyond the simple null check in the above code, actively encouraging detailed pre- and post-condition contracts. An implementation's push() method might check the remaining stack capacity before pushing, or throw exceptions for specific conditions. In pseudo-Eiffel, we'd represent the push() method in the following way:

    push (o: Object)
        require
            o /= Void
        do
            -- push
        end

A stack also has an implied contract. We assume (sometimes naively) that once we call the push method, the stack will contain whatever we pushed. The size of the stack will have increased by one, or whatever other conditions our stack implementation requires. Java, of course, doesn't have built-in contracts. However, it does contain a mechanism that can be used to get some of the benefits for a conceptually simple price.
The mechanism is not as complete, or as integrated, as Eiffel's version. However, it removes contract enforcement from the mainline code, and provides a way for both sides of the software to specify, accept, and document the contracts themselves. Removing the contract information from the mainline code keeps the implementation clean and makes the implementation code easier to understand. Having programmatic access to the contract means that the contract can be documented automatically, rather than having to maintain a disconnected chunk of Javadoc.

SpringContracts

SpringContracts is a beta-level Java COP implementation based on Spring's AOP facilities, using annotations to state pre- and post-contract conditions. It formalizes the nature of a contract, which can ease development. Let's consider our VowelDecider that was developed through TDD. We can also use COP to express its contract (particularly the entry condition). This is a method that doesn't alter state, so post-conditions don't apply here. Our implementation of VowelDecider ended up looking (more or less) like this:

    public boolean decide(final Object o) throws Exception {
        if ((o == null) || (!(o instanceof String))) {
            throw new IllegalArgumentException("Argument must be a non-null String.");
        }
        String s = (String) o;
        return s.matches(".*[aeiouy]+.*");
    }

Once we remove the original contract enforcement code, which was mixed with the mainline code, our SpringContracts @Precondition annotation looks like the following:

    @Precondition(
        condition = "arg1 != null && arg1.class.name == 'java.lang.String'",
        message = "Argument must be a non-null String")
    public boolean decide(Object o) throws Exception {
        String s = (String) o;
        return s.matches(".*[aeiouy]+.*");
    }

The precondition is that the argument must not be null and must be (precisely) a String. (Because of SpringContracts' expression language, we can't just say instanceof String, in case we want to allow String subclasses.)
We can unit-test this class in the same way we tested the TDD version. In fact, we can copy the tests directly. Running them should trigger test failures on the null and non-String argument tests, as we originally expected an IllegalArgumentException; we'll now get a contract violation exception from SpringContracts instead.

One difference here is that we need to initialize the Spring context in our test. One way to do this is with JUnit's @BeforeClass annotation, along with a method that loads the Spring configuration file from the classpath and instantiates the decider as a Spring bean. Our class setup now looks like this:

    @BeforeClass
    public static void setup() {
        appContext = new ClassPathXmlApplicationContext(
            "/com/packt/s2wad/applicationContext.xml");
        decider = (VowelDecider) appContext.getBean("vowelDecider");
    }

We also need to configure SpringContracts in our Spring configuration file. Those unfamiliar with Spring's (or AspectJ's) AOP will be a bit confused at first, but in the end it's reasonably straightforward, with a potential "gotcha" regarding how Spring does proxying:

    <aop:aspectj-autoproxy proxy-target-class="true"/>
    <aop:config>
        <aop:aspect ref="contractValidationAspect">
            <aop:pointcut id="contractValidatingMethods"
                expression="execution(* com.packt.s2wad.example.CopVowelDecider.*(..))"/>
            <aop:around pointcut-ref="contractValidatingMethods"
                method="validateMethodCall"/>
        </aop:aspect>
    </aop:config>
    <bean id="contractValidationAspect"
        class="org.springcontracts.dbc.interceptor.ContractValidationInterceptor"/>
    <bean id="vowelDecider"
        class="com.packt.s2wad.example.CopVowelDecider"/>

The SpringContracts documentation goes into this a bit more, and the Spring documentation contains a wealth of information about how AOP works in Spring. The main difference between this and the simplest AOP setup is that our autoproxy target must be a class, which requires CGLib. This could also potentially affect operation.
The only other modification is to change the exception we're expecting to SpringContracts' ContractViolationCollectionException, and our test starts passing. These pre- and post-condition annotations use the @Documented meta-annotation, so the SpringContracts COP annotations will appear in the Javadocs. It would also be possible to use various other means to extract and document contract information.

Getting into details

This mechanism, or its implementation, may not be a good fit for every situation. Runtime performance is a potential issue. As it's just some Spring magic, it can be turned off with a simple configuration change. However, if we do, we lose the value of always-on contract management. On the other hand, under certain circumstances, it may be enough to say that once the contracts are consistently honored under all of the test conditions, the system is correct enough to run without them. This view treats the contracts more as an acceptance test than as run-time checking. Indeed, there is an overlap between COP and unit testing as ways to keep code honest. As unit tests aren't run all the time, it may be reasonable to use COP as a temporary runtime unit test or acceptance test.
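To close, the core contract idea can be illustrated without any framework at all. The following sketch is in JavaScript rather than Java, purely to keep it short and runnable; the Stack class and its error messages are invented for illustration. Instead of the silent-return push() shown earlier, the precondition is enforced explicitly and fails fast:

```javascript
// A tiny stack whose push() states and enforces its own contract,
// rather than silently ignoring bad input.
function Stack() {
  this.items = [];
}

Stack.prototype.push = function (o) {
  // Precondition: o must not be null or undefined.
  if (o == null) {
    throw new Error("Contract violation: push() requires a non-null argument");
  }
  this.items.push(o);
  // Postcondition (implied): the stack grew by exactly one element.
  return this.items.length;
};

var s = new Stack();
s.push("a");        // fine; returns the new size, 1
// s.push(null);    // would throw a contract violation
```

The failure mode is now part of the method's observable behavior, so callers (and tests) can rely on it instead of trusting a Javadoc comment.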

jQuery refresher

Packt
30 Aug 2013
6 min read
(For more resources related to this topic, see here.)

If you haven't used jQuery in a while, that's okay; we'll get you up to speed very quickly. The first thing to realize is that the Document.Ready function is extremely important when using UI. Although page loading happens incredibly fast, we always want our DOM (the HTML content) to be loaded before our UI code gets applied. Otherwise we have nothing to apply it to! We want to place our code inside the Document.Ready function, and we will be writing it the shorthand way, as we did previously. Please remove the previous UI checking code from your header:

    $(function() {
        // Your code here is called only once the DOM is completely loaded
    });

Easy enough. Let's refresh on some jQuery selectors. We'll be using these a lot in our examples so we can manipulate our page. I'll write out a few DOM elements next and show how you can select them. I will apply hide() to them so we know what's been selected and hidden. Feel free to place the JavaScript portion in your header script tags and the HTML elements within your <body> tags, as follows:

Selecting elements by tag name:

    $('p').hide();

    <p>This is a paragraph</p>
    <p>And here is another</p>
    <p>All paragraphs will go hidden!</p>

Selecting classes:

    $('.edit').hide();

    <p>This is an intro paragraph</p>
    <p class="edit">But this will go hidden!</p>
    <p>Another paragraph</p>
    <p class="edit">This will also go hidden!</p>

Selecting IDs:

    $('#box').hide();

    <div id="box">Hide the Box</div>
    <div id="house">Just a random divider</div>

Those are the three basic selectors.
We can get more advanced and use the CSS3 selectors:

    $("input[type=submit]").hide();

    <form>
        <input type="text" name="name" />
        <input type="submit" />
    </form>

Lastly, you can chain your DOM tree to hide elements more specifically:

    $("table tr td.hidden").hide();

    <table>
        <tbody>
            <tr>
                <td>Data</td>
                <td class="hidden">Hide Me</td>
            </tr>
        </tbody>
    </table>

Step 3 – console.log is your best friend

I mentioned that developing with the console open is very helpful. When you need to know details about a JavaScript item—whether its typeof type or its value—a friend of yours is the console.log() method. Notice that it is always lowercase. It lets you print things to the console rather than somewhere on your page. For example, if I were having trouble figuring out what a value was returning to me, I would simply do the following:

    function add(a, b) {
        return a + b;
    }

    var total = add(5, 20);
    console.log(total);

This will give me the result I wanted to know, quickly and easily. Internet Explorer does not support console logging: it will stop your JavaScript from running once it hits a console.log method. Make sure to comment out or remove all console logs before releasing a live project, or else IE users will have a serious problem.

Step 4 – creating the slider widget

Let's get busy! Open your template file and let's create a DOM element to attach a slider widget to. To make it more interesting, we will also add an additional DIV to show a text value. Here is what I placed in my <body> tag:

    <div id="slider"></div>
    <div id="text"></div>

It doesn't have to be a <div> tag, but it's a good generic block-level element to use. Next, to attach a slider element, we place the following in our (empty) <script> tags:

    $(function() {
        var my_slider = $("#slider").slider();
    });

Refresh your page, and you will have a widget that can slide along a bar.
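The console.log technique from Step 3 is worth internalizing before we go further. A standalone sketch (runnable outside the page, variable names invented for illustration) shows why logging both the value and its typeof matters:

```javascript
// A value that "looks" numeric may actually be a string; logging both
// the value and its type makes the difference obvious at a glance.
var result = "42";
console.log(result);        // → 42
console.log(typeof result); // → string
console.log(result + 1);    // → 421  (string concatenation, not addition!)
```

Spotting a surprise like "421" in the console is often the fastest route to a type bug.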
If you don't see a slider, first check your browser's development tools console to see if there are any JavaScript errors. If you still don't see any, make sure you don't have a JavaScript blocker on! The reason we assign a variable to the slider is that, later on, we may want to reference its options, which you'll see next. You are not required to do this, but if you want to access the slider outside of its initial setup, you must give it a variable name. Our widget doesn't do much now, but it feels cool to finally make something, whatever it is!

Let's break down a few things we can customize. There are three categories:

- Options: These are defined in a JavaScript object ({}) and determine how you want your widget to behave when it's loaded. For example, you could set your slider to have minimum and maximum values.
- Events: These are always functions, and they are triggered when a user does something to your item.
- Methods: You can use methods to destroy a widget, get and set values from outside the widget, and even set different options from what you started with.

To play with these categories, the easiest start is to adjust the options. Let's do it by passing an empty object to our slider:

    var my_slider = $("#slider").slider({});

Then we'll create minimum and maximum values for our slider:

    var my_slider = $("#slider").slider({
        min: 1,
        max: 50
    });

Now our slider will accept and move along a bar with 50 values. There are many more options in the UI API, located at api.jquery.com under slider.
You'll find many other options we won't have time to cover, such as a step option to make the slider count every two digits:

    var my_slider = $("#slider").slider({
        min: 1,
        max: 50,
        step: 2
    });

If we want to attach this to the text field we created in the DOM, a good way to start is by assigning the minimum value to the DIV; this way we only have to change it once:

    var min = my_slider.slider('option', 'min');
    $("#text").html(min);

Next we want to update the text value every time the slider is moved. Easy enough; this will introduce us to our first event. Let's add it:

    var my_slider = $("#slider").slider({
        min: 1,
        max: 50,
        step: 2,
        change: function(event, ui) {
            $("#text").html(ui.value);
        }
    });

Summary

This article described the basis for all widgets: creating them and setting their options, events, and methods. That is the very simple pattern that handles everything for us.

Resources for Article:

Further resources on this subject:

- Using jQuery and jQueryUI Widget Factory plugins with RequireJS [Article]
- jQuery Animation: Tips and Tricks [Article]
- New Effects Added by jQuery UI [Article]

Event Delivery Network with Oracle SOA Suite 11g R1

Packt
18 Nov 2009
2 min read
Creating truly decoupled composite SOA applications requires a complete separation of the service consumer and the service provider. This is typically achieved through the use of asynchronous messaging. In an asynchronous messaging pattern, applications can operate in a "fire and forget" mode. This removes the need for an application to know the details of the application on the other side. Additionally, it also improves resource utilization, as applications are not holding onto resources until the interaction is complete.

On the other hand, this introduces the complexities of creating and managing message queues and topics. It requires that both the publisher of the message and the consumer use the same messaging technology. Each messaging system also has its own constraints on the types of programming languages and environments that can use the service. In a service-oriented world, this tight coupling to the implementation of the underlying messaging system is at odds with the fundamental requirement of implementation independence. What's needed is a level of abstraction that allows applications to generate an event using business terms and associate a business object with it in an implementation-independent form. Oracle SOA Suite 11g addresses this with the introduction of a new feature: the Event Delivery Network.

Introducing events

The Event Delivery Network (EDN) in Oracle SOA Suite 11g provides a declarative way to use a publish/subscribe model to generate and consume business events without worrying about the underlying messaging infrastructure. Developers only need to produce or consume events, without having to deal with any particular messaging API such as JMS, AQ, MQ, and so on. Consuming an event means expressing an interest in the occurrence of a specific situation, while producing an event means advertising this occurrence.
Using the same concepts that are used in Web Service Definition Language (WSDL), EDN uses an XML-based Event Definition Language (EDL), which allows you to define the event and its associated, strongly typed data. This definition is then registered with the SOA Infrastructure and is available for all composites to publish or subscribe to.

Services                               | Messaging                                  | EDN
WSDL: standard service interface model | JMS API: application programming interface | EDL: Event Definition Language
XSD: strong typing                     | Handful of raw types                       | XSD
Business-oriented                      | Developer-oriented                         | Business-oriented
Wealth of tools                        | Mostly coding tools                        | Fully declarative
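For context, an EDL definition is itself a small XML document. The following is only a rough, hypothetical sketch of the general shape such a definition can take—the namespaces, file name, and element names here are invented for illustration and should be checked against the product documentation:

```xml
<!-- Hypothetical EDL sketch: declares a business event whose payload
     is an XSD-typed order element. All names are illustrative only. -->
<definitions targetNamespace="http://example.com/events/order"
             xmlns="http://schemas.oracle.com/events/edl">
  <schema-import namespace="http://example.com/types/order"
                 location="OrderEvent.xsd"/>
  <event-definition name="NewOrderEvent">
    <content xmlns:ord="http://example.com/types/order"
             element="ord:order"/>
  </event-definition>
</definitions>
```

The key point is that the event name and its strongly typed payload are declared once, in business terms, with no reference to any queue, topic, or messaging API.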

Sharing Content in WordPress Top Plugins

Packt
16 Sep 2010
6 min read
(For more resources on WordPress, see here.)

TweetMeme

By Alex King (http://alexking.org/)

- Why it's awesome: Allows Twitter users to quickly share your blog post, and it tracks how often they do it
- Why it was picked: Super simple to install; no Twitter account is needed
- Manual Install URL: http://wordpress.org/extend/plugins/tweetmeme/
- Automatic Install search term: TweetMeme
- Geek level: Webmaster
- Configuration location: Top Navigation | TweetMeme
- Used in: Posts, pages

The TweetMeme button is the fastest way to let your readers share your blog posts to Twitter with a single click. In addition to this sharing tool, you can also sign up for TweetMeme's analytics services to track the effectiveness of each of your posts.

Setting up TweetMeme

You can access TweetMeme's settings from the top-level navigation, under TweetMeme | Settings. The following is a list of the most important settings to focus on and what they do:

- Display: Choose this if you want to display the TweetMeme button on pages, on the home page, and within your feed.
- Position: TweetMeme allows you to display the button in various positions around your blog posts—before, after, both before and after, shortcode, or manually.
- Type: TweetMeme offers two types of buttons—normal and compact. By default, the normal button is displayed.
- Source: Supply your Twitter username here, if you have one.
- URL Shortener: Often, WordPress blog post URLs are rather lengthy and can eat up many of the 140 characters Twitter allows. Bit.ly is recommended for URL shortening—it has proven to be a resilient company that won't be disappearing any time soon.
- TweetMeme API App ID & TweetMeme API App Key: TweetMeme offers detailed analytics when people retweet your blog posts. This service is not free, but a free trial is available to get your hands dirty.
To leverage TweetMeme analytics, you will need to create an account at http://my.tweetmeme.com, after which you will be able to find your API App ID and API App Key at http://my.tweetmeme.com/profile.

Wordbook

By Robert Tsai (http://www.tsaiberspace.net/)

- Why it's awesome: You can share your posts to a user's Facebook wall
- Why it was picked: Ease of setup, level of integration with Facebook
- Manual Install URL: http://wordpress.org/extend/plugins/wordbook/
- Automatic Install search term: Wordbook
- Geek level: Webmaster
- Configuration location: Settings | Wordbook
- Used in: Posts

Robert Tsai's Wordbook is an awesome way to get your blog posts listed on your Facebook profile or your Facebook pages. WordPress will post a snippet of your post to Facebook, along with a thumbnail of any images in the post. Once installed, you will see a red bar across your WordPress administrator section stating that Wordbook needs to be set up—click the Wordbook link to start the configuration. In order to publish your posts to Facebook, you will need to connect your blog to Facebook; to start this process, click the blue Facebook button. You will then be redirected to Facebook to authorize WordPress to communicate with your Facebook account. Now, each time you create a blog post, WordPress will create a new shared item in your Facebook news feed.

WP Download Manager

By Lester Chan (http://lesterchan.net/)

- Why it's awesome: Quickly add downloadable files and media to your blog
- Why it was picked: Lester's plugins are legendary, and they offer download tracking
- Manual Install URL: http://wordpress.org/extend/plugins/wp-downloadmanager/
- Automatic Install search term: WP Download Manager
- Geek level: Webmaster
- Configuration location: Top Navigation | Downloads
- Used in: Posts, pages

WP Download Manager offers your readers the ability to download content—one sure-fire way of increasing your blog traffic and the happiness of your readers.
WP Download Manager makes managing and tracking files downloaded from your blog a snap.

Adding a new download

Start by clicking Add File in the Downloads menu and follow these steps:

1. File: Select how you want to add a file:
   - Browse File shows a drop-down menu of any files in the wp-content/files directory on your web server.
   - Upload File allows you to select any file from your computer to upload. The majority of the time, this is the method you will be using.
   - Remote File allows you to grab a file from any web server using a URL.
2. File Name: Give your download a human-readable name.
3. File Description: Supply a little information that will help your readers know what the file is about.
4. File Category: Select the category for this file. You can create new categories from the Downloads | Download Options menu.
5. File Size: Unless you are using the Remote File method, leave this field blank. WP Download Manager offers automatic detection of file size, but sometimes it will not work properly on remote files.
6. File Date: If you need to back-date your file, you can use this field to do so. The majority of the time, however, you will leave this field alone.
7. Starting File Hits: If you want to make your download look more popular, you can pad the number of downloads.
8. Allowed to Download: If you want to limit downloads to readers who are subscribed and logged in, use this drop-down menu to control who can download. Your options are Everyone, Registered Users Only, Contributors, Authors, Editors, or Administrators.
9. Click Add File.

Inserting a download into a post

Now that we have a new file uploaded to the Download Manager, we need to make it accessible to our readers. The WP Download Manager plugin adds a new button to the WYSIWYG menu on posts and pages. One downside is that you have to know the ID of the file you want to include in your post or page. Once you click the Download button, provide the download ID in the pop-up window.
You can find the ID of your file by clicking Downloads | Manage Downloads; the first column contains the ID of the download.

Twiogle Twitter Commenter

By Twiogle (http://twiogle.com/)

- Why it's awesome: Quickly fills your blog's comments with somewhat relevant content
- Why it was picked: Easy to set up; works exactly as advertised
- Manual Install URL: http://wordpress.org/extend/plugins/twiogle-search/
- Automatic Install search term: Twiogle Commenter
- Geek level: Newbie
- Configuration location: Settings | Twiogle Twitter Commenter
- Used in: Posts, pages, widgets

Twiogle Twitter Commenter has a horrible name, but an awesome result. If your blog is lacking in comments and chatter, this plugin will automatically add new ones from recent tweets. It works its magic by taking the tags from your blog post and searching Twitter for tweets that use the same tags. To really take advantage of this plugin, you need to make sure you add tags to your blog posts; the more specific your tags are, the more relevant the imported comments will be. As with any service that automatically adds content to your blog, you need to keep an eye on it. It is entirely possible for your comments to get overrun with pointless Twitter chatter if you run this plugin for too long.
Packt
09 Aug 2010
4 min read

Oracle Universal Content Management: How to Set Up and Change Workflows

(For more resources on Oracle, see here.)

How to set up and change workflows

First things first. Let's start by looking at the tools that you will be using to set up and configure your workflows.

Discover the Workflow Admin application

Go to Administration | Admin Applets and launch Workflow Admin. The Workflow Admin application comes up (as shown in the following screenshot). There are three tabs:

- Workflows: This tab is used for administering Basic or Manual Workflows.
- Criteria: This tab deals with Automatic or Criteria Workflows—the type we will be using most often.
- Templates: This is the place where you can pre-assemble Workflow Templates—reusable pieces that you can use to create new basic workflows.

Let's create a simple automatic workflow. I call it automatic because content enters the workflow automatically when it is created or modified. If you will be using e-mail notifications, be sure to check your Internet Configuration screen in Admin Server. I'll walk you through the steps of using automatic workflows.

Lab 7: Using automatic workflows

Here's the process for creating a criteria workflow:

Creating a criteria workflow

Follow these steps:

1. Go to the Criteria tab and click on Add. The New Criteria Workflow dialog comes up (as shown in the following screenshot).
2. Fill in Workflow Name and Description.
3. Pick the Security Group. Only items with the same security group as the workflow can enter it. Let's use the security group we've created: select accounting.
4. We're creating a Criteria Workflow, so check the Has Criteria Definition box. Now you can specify the criteria that content must match to enter the workflow. For the sake of this lab, let's pick Account for the Field, and accounting/payable/current for the Value.

Please note that a content item must meet both conditions to enter the workflow: it must belong to the same security group as the workflow, and it must match the criteria of the workflow.
As soon as a new content item is created with a Security Group of accounting and a Content Account value of accounting/payable/current, it will enter our workflow. It will not enter the workflow if its metadata is simply updated to these values. It takes a new check-in for an item to enter a criteria workflow. If you need items to enter a workflow after a metadata update, consider the custom components available from Fishbowl Solutions (www.fishbowlsolutions.com).

You can use any metadata field and value pair as criteria for entering the workflow, but you can only have one condition. What if that's not enough? If you need to perform additional checks before you can accept the item in a workflow, keep your criteria really open and do your checks in the workflow itself. I'll show you how later in this article.

The next diagram illustrates how a content item flows through a criteria workflow. You may find it useful to refer back to it as you follow the steps in this lab.

OK. We have a workflow created, but there are two problems with it: it has no steps in it, and it is disabled. Let's begin by seeing how to add workflow steps.

Adding workflow steps

Here's how you add workflow steps:

1. Click on the Add button in the Steps section on the right (as shown in the following screenshot). The Add New Step dialog opens.
2. Fill in the step name and description (as shown in the following screenshot).
3. Click on the Add User button on the right and select approvers for this step. Also add yourself to the list of approvers so you can test the workflow.
4. Switch to the Exit Conditions tab (as shown in the following screenshot). You can change the number of approvers required to move the item to the next step. You can require all approvers to advance a step, or just any one, as shown in the screenshot. If you put zero in the text box, no approvers will be required at all; they will still receive a notification, but the item will go immediately to the next step.
When the current step is the last one, the workflow ends and the new revision is released into the system. What do I mean by that? Until the workflow is complete, revisions that are currently in a workflow will not come up in searches and will not show on the Web. You will still see them in the content info screen, but that's it.

Click OK to close the dialog. You now have a workflow with one step. Let's test it. But first, you need to enable the workflow.
Packt
15 Oct 2009
6 min read

User Security and Access Control in JBoss portals

Authentication

Authentication in JBoss Portal builds on the JEE security provided by the JBoss server. The JEE specification defines the roles and constraints under which certain URLs and components are protected. However, this might not always be sufficient for building enterprise applications or portals. Application server providers such as JBoss supplement the authentication and authorization features of the JEE specification with additional features such as role-to-group mapping and session logout.

Authentication in JBoss Portal can be divided into configuration files and portal server configuration. The jboss-portal.sar/portal-server.war file is the portal deployment on the JBoss application server. Since the portal server is like any JEE application deployed on an application server, all user authentication configuration goes into the WEB-INF/web.xml and WEB-INF/jboss-web.xml files.

The WEB-INF/web.xml entry defines the authentication mode, with the default being form-based authentication. This file is also used to define the login and error pages, as defined by the JEE specification.

The default security domain defined by the JBoss application server for JBoss Portal is java:/jaas/portal. The security domain maps the JEE security constructs to the operational domain. This is defined in a proprietary file, WEB-INF/jboss-web.xml.

The portal security domain authentication stack is defined in the jboss-portal.sar/conf/login-config.xml file, and is deployed along with the portal. login-config.xml houses the JAAS modules for authentication. Custom modules can be written and added here to support special scenarios. The server provides a defined set of JAAS login modules that can be used for various scenarios. For example, the IdentityLoginModule is used for authentication based on local portal data, SynchronizingLdapLoginModule for authentication using LDAP, and DBIdentityLoginModule for authentication using a database.
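To give a feel for the kind of configuration login-config.xml holds, an application-policy entry for the portal security domain might look like the sketch below. The module package name and the module options shown here are assumptions for illustration; the exact class names, flags, and options vary by JBoss Portal version, so treat this as a shape, not a drop-in configuration:

```xml
<!-- Illustrative sketch only: the module class name and options are assumptions. -->
<application-policy name="portal">
  <authentication>
    <!-- Authenticates against identity data stored in the local portal database -->
    <login-module code="org.jboss.portal.identity.auth.IdentityLoginModule"
                  flag="required">
      <!-- Unauthenticated requests are mapped to a guest identity -->
      <module-option name="unauthenticatedIdentity">guest</module-option>
    </login-module>
  </authentication>
</application-policy>
```

Note that the policy name (portal) is what the java:/jaas/portal security domain in WEB-INF/jboss-web.xml refers to.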
Within the jboss-portal.sar/portal-server.war application, all portal requests are routed through a single servlet called org.jboss.portal.server.servlet.PortalServlet. This servlet is defined twice in the configuration file WEB-INF/web.xml, to ensure that all possible request sources are covered:

- PortalServletWithPathMapping for path mappings
- PortalServletWithDefaultServletMapping for the default servlet mapping

The servlet is mapped four times, with variations that address combinations of secure SSL access and authenticated URLs:

- /*: Default access; with no security constraint, allows access to everybody
- /sec/*: All requests to a secure protocol are routed through this path, ensuring SSL transport
- /auth/*: Authenticated access; requires the user to be authenticated before accessing the content under this tree
- /authsec/*: Authenticated and secure access

The following snippet from web.xml shows the entries:

    <!-- Provide access to unauthenticated users -->
    <servlet-mapping>
        <servlet-name>PortalServletWithPathMapping</servlet-name>
        <url-pattern>/*</url-pattern>
    </servlet-mapping>

    <!-- Provide secure access to unauthenticated users -->
    <servlet-mapping>
        <servlet-name>PortalServletWithPathMapping</servlet-name>
        <url-pattern>/sec/*</url-pattern>
    </servlet-mapping>

    <!-- Provide access to authenticated users -->
    <servlet-mapping>
        <servlet-name>PortalServletWithPathMapping</servlet-name>
        <url-pattern>/auth/*</url-pattern>
    </servlet-mapping>

    <!-- Provide secure access to authenticated users -->
    <servlet-mapping>
        <servlet-name>PortalServletWithPathMapping</servlet-name>
        <url-pattern>/authsec/*</url-pattern>
    </servlet-mapping>

The URL patterns can be changed based on personal preference.

Authorization

Authorization is the process of determining whether an authenticated user has access to a particular resource. Similar to authentication, JBoss Portal provides built-in support for authorization through the Java Authorization Contract for Containers (JACC).
JACC is the JSR-115 specification for the authorization models of the Java 2 and JEE enterprise platforms. In the next few sections, we will look at how JBoss Portal facilitates authorization using JACC. However, before we go into the details of access controls and authorization configuration, let's quickly look at how roles are configured in JBoss Portal.

User and role management

A role is an authorization construct that denotes the group that a user of the portal belongs to. Typically, roles are used to determine the access rights to a given resource and the extent of those rights. We saw in an earlier section how to configure portal assets, such as portals, pages, and portlet instances, to restrict certain actions to specific roles. We used a role called SPECIAL_USER in our examples. However, we never really defined what this role means to JBoss Portal. Let's use the JBoss portal server console to register this role with the server.

Log in as admin, and then click on the Members tab. This takes us to the User Management and Role Management tabs. The User Management tab is used for creating new users. We will come back to this shortly, but for now, let's switch over to the Role Management tab and click on the Create role link at the bottom right of the page. We can now add our SPECIAL_USER role and provide a display name for it. Once we submit it, the role will be registered with the portal server.

As we will see later, every attempt by an authenticated user to access a resource that has security constraints through a specific role will be matched by the portal before granting or denying access to the resource. Users can be added to a role using the User Management tab. Each user has a role property assigned, and this can be edited to check all of the roles that we want the user to belong to. We can see that for the user User, we now have an option to add the user to the Special User role.
The portal permission

A permission object carries the relevant permission for a given entity. The org.jboss.portal.security.PortalPermission object is used to describe a permission for the portal. Like all the other entity-specific permission classes, it extends the java.security.Permission class, and any permission checked in the portal should extend PortalPermission as well. Two additional fields of significance are as follows:

- uri: A string that specifies the URI of the resource that is described by the permission
- collection: An object of class org.jboss.portal.security.PortalPermissionCollection, which is used when the permission acts as a container for other permissions

The authorization provider

The authorization provider is a generic interface of the type org.jboss.portal.security.spi.provider.AuthorizationDomain, and provides access to several services.

    public interface AuthorizationDomain
    {
        String getType();
        DomainConfigurator getConfigurator();
        PermissionRepository getPermissionRepository();
        PermissionFactory getPermissionFactory();
    }

Let us look at these classes in a bit more detail:

- org.jboss.portal.security.spi.provider.DomainConfigurator provides configuration access to an authorization domain. The authorization schema consists of bindings between URIs, roles, and permissions.
- org.jboss.portal.security.spi.provider.PermissionRepository provides runtime access to the authorization domain. It is used to retrieve the permissions for a specific role and URI, and is used at runtime by the framework to make security decisions.
- org.jboss.portal.security.spi.provider.PermissionFactory is a factory that instantiates permissions for the specific domain. It is used at runtime by the security framework to create permission objects of the appropriate type.
Packt
27 Jul 2011
4 min read

Alfresco 3: Writing and Executing Scripts

Alfresco 3 Cookbook

Over 70 recipes for implementing the most important functionalities of Alfresco

The reader can benefit from the previous article on Implementing Alfresco JavaScript API Functionalities.

Introduction

Alfresco, like any other enterprise open source framework, exposes a number of APIs, including the Alfresco SDK (Software Development Kit), a set of development tools for building applications on the framework, and the JavaScript API.

Available JavaScript APIs

The Alfresco JavaScript API exposes all important repository objects as JavaScript objects that can be used in a script file. The API follows the object-oriented programming model for well-known Alfresco concepts such as Nodes, Properties, Associations, and Aspects. The JavaScript API is capable of performing several essential functions for the script developer, such as:

- Create Node, Update Node: You can create, upload, or update files using these.
- Check In/Check Out: You can programmatically check out and check in your content.
- Access Rights Management and Permissioning: You can manage your content's security aspects.
- Transformation: You can transform your content using this; for example, to generate a PDF version of an MS Office document.
- Tagging: Tagging APIs will help you tag your content.
- Classifying: You can categorize or classify your content using this.
- People: Using these APIs, you can handle all user- and group-related operations in your script, such as creating a new user, changing the password of a user, and so on.
- Searching: One of the most important and powerful sets of APIs exposed. You can search your content using these APIs, performing Lucene-based or XPath-based search operations.
- Workflow: You can manage the tasks and workflows in your system using these APIs and services.
- Thumbnail: Exposes APIs to manage the thumbnail operations of various content items.
- Node operations: You use these APIs to perform several node-related functions, such as managing properties, managing aspects, copying, deleting, moving, and so on.

Thus, as you can see, most tasks can be accomplished in a JavaScript file using these APIs. However, one thing is important: you should not confuse these scripts with the usual JavaScript code you write for your HTML or JSP web pages. Those scripts are executed by your browser (that is, at the client side). The scripts you write using the Alfresco JavaScript API are not client-side JavaScript files; they do not get executed by your browser. Instead, they are executed on the server, and the browser has nothing to do with them. It is called a JavaScript API because the APIs are exposed using the ECMAScript model and syntax, and the programs you develop using these APIs are written in the JavaScript language.

The JavaScript API model

Alfresco has provided a number of objects in the JavaScript API, usually called Root Scope Objects. These objects are your entry point into the repository. Each root-level object refers to a particular entity or functional point in the repository. For example, the userhome object refers to the home space node of the current user. Each of these objects exposes a number of properties and functions, enabling the script writer to implement many different requirements. For example, the statement userhome.name will return the name of the root folder of the current user.

Some important and most frequently used root scope objects are:

- companyhome: Returns the company home script node object
- userhome: Returns the home folder node of the current user
- person: Represents the current user's person object
- space: Stands for the current space object
- document: Returns the currently selected document
- search: Offers fully functional search APIs
- people: Encapsulates all functionality related to users, groups, roles, permissions, and so on
- sites: Exposes the site service functionality
- actions: Provides invocation methods for registered actions
- workflow: Handles all functionality related to workflow implementation within the repository

Among these, companyhome, userhome, person, space, and document represent Alfresco Node objects and allow access to the properties and aspects of the corresponding node. Each node object provides a number of APIs, collectively termed the ScriptNode API. The others (search, people, sites, workflow, and actions) expose several methods that help you implement specific business requirements. For example, if you want to write a script that searches for documents and content, you would use the search API; if you want to create a new user, the people API will help you.
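To make the model concrete, here is a short server-side script of the kind these root scope objects enable. It must run inside the Alfresco repository (for example, via an Execute Script action or a web script), not in a browser, so it has no standalone test harness; the file name and query text are made-up illustrations:

```javascript
// Runs server-side inside Alfresco: root objects such as userhome,
// search, and logger are provided by the repository, not by a browser.

// Create a new text file in the current user's home space
// ("example-notes.txt" is a hypothetical name)
var doc = userhome.createFile("example-notes.txt");
doc.content = "Created by a server-side Alfresco script.";
doc.addAspect("cm:versionable"); // make the document versionable
doc.save();

// Search for content with a Lucene query (query text is illustrative)
var results = search.luceneSearch("TEXT:\"example\"");
for (var i = 0; i < results.length; i++) {
    logger.log("Found: " + results[i].name);
}
```

Everything here uses the root scope objects listed above: userhome for node creation, search for querying, and the ScriptNode API (content, addAspect, save) on the resulting nodes.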
Packt
22 Oct 2009
6 min read

Testing a HELP System in a Java Application

Introduction

As more and more features get added to your software, the Help system for the software becomes extensive. It could easily contain hundreds of HTML files plus a similar number of images. We could then face the problems listed below:

- There could be broken links in the help index.
- Some files may not be listed in the index, and therefore cannot be read by the customers.
- Some of the contextual help buttons could show the wrong topic.
- Some of the HTML files could contain broken links or incorrect image tags.
- Not all of the file titles would match their index entry.

Another problem can occur when the user does a free text search of the help system. The result of such a search is a list of files, each represented by its title. In our system, documents could have the title "untitled". In fact, the JavaHelp 2.0 System User's Guide contains the recommendation: "To avoid confusion, ensure that the <TITLE> tag corresponds to the title used in the table of contents."

Given that customers mostly use the Help system when they are already frustrated by our software, we should see to it that such errors do not exist in our help system. To do this, we will write a tool, HelpGenerator, that generates some of the boilerplate XML in the help system and checks the HTML and index files for the problems listed above. We will also build tools for displaying and testing the contextual help. We've re-engineered and improved these tools and present them in this article.

In this article we are assuming familiarity with the JavaHelp system. Documentation and sample code for JavaHelp can be found at: http://java.sun.com/products/javahelp.

Overview

A JavaHelp package consists of:

- A collection of HTML and image files containing the specific Help information to be displayed.
- A file defining the index of the Help topics.
Each index item in the file consists of the text of the index entry and a string representing the target of the HTML file to be displayed for that index entry, for example:

    <index version="1.0">
        <indexitem text="This is an example topic." target="Topic">
            <indexitem text="This is a sub-topic." target="SubTopic"/>
        </indexitem>
    </index>

- A file associating each target with its corresponding HTML file (or, more generally, a URL)—the map file. Each map entry consists of the target name and the URL it is mapped to, for example:

    <map version="1.0">
        <mapID target="Topic" url="Topic.html"/>
        <mapID target="SubTopic" url="SubTopic.html"/>
    </map>

- A HelpSet file (by default HelpSet.hs) which specifies the names of the index and map files and the folder containing the search database.

Our software will normally have a main menu item to activate the Help and, in addition, buttons or menu items on specific dialogs to activate a Help page for a particular topic, that is, "context-sensitive" Help.

What Tests Do We Need?

At an overall structural level, we need to check:

- For each target referred to in the index file, is there a corresponding entry in the map file? In the previous example, the index file refers to targets called Topic and SubTopic. Are there entries for these targets in the map file?
- For each URL referred to in the map file, is that URL reachable? In the example above, do the files Topic.html and SubTopic.html exist?
- Are there HTML files in our help package which are never referred to?
- If a Help button or menu item on some dialog or window is activated, does the Help facility show the expected topic?
- If the Help search facility has been activated, do the expected search results show? That is, has the search database been built on the latest versions of our Help pages?

At a lower level, we need to check the contents of each of the HTML files:

- Do the image tags in the files really point to images in our help system?
- Are there any broken links?
Finally, we need to check that the contents of the files and the indexes are consistent: does the title of each help page match its index entry?

To simplify these tests, we will follow a simple naming convention. The name of each HTML file should be in CamelCase format (conventional Java class name format) plus the .html extension, and we use this name, without the extension, as the target name. For example, the target named SubTopic will correspond to the file SubTopic.html. Furthermore, we assume that there is a single Java package containing all the required help files, namely the HTML files, the image files, the index file, and the map file. Finally, we assume a fixed location for the Help search database.

With this convention, we can now write a program that:

- Generates the list of available targets from the names of the HTML files.
- Checks that this list is consistent with the targets referred to in the index file.
- Checks that the index file is well-formed in that:
  - It is a valid XML document.
  - It has no blank index entries.
  - It has no duplicate index entries.
  - Each index entry refers to a unique target.
- Generates the map file, thereby guaranteeing that it will be consistent with the index file and the HTML files.

The class HelpGenerator in the package jet.testtools.help does all this and, if there are no inconsistencies found, generates the map file. If an inconsistency or other error is found, an assertion will be raised. HelpGenerator also performs the consistency checks at the level of individual HTML files. Let's look at some examples.

An HTML File That is Not Indexed

Here is a simple help system with just three HTML files. The index file, HelpIndex.xml, only lists two of the HTML files:

    <index version="1.0">
        <indexitem text="This is an example topic." target="ExampleTopic">
            <indexitem text="This is an example sub-topic." target="ExampleSubTopic"/>
        </indexitem>
    </index>

When we run HelpGenerator over this system (we'll see how to do this later in this article), we get an assertion with the error message: The Help file: TopicWithoutTarget.html was not referenced in the Index file: HelpIndex.xml.
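The structural checks described above are straightforward to implement once the naming convention is in place. The sketch below is not the HelpGenerator source from the article; it is a hypothetical, self-contained illustration of the target-to-file convention and the two set comparisons involved:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;
import java.util.TreeSet;

public class HelpConsistency {

    // Convention from the article: target "SubTopic" maps to file "SubTopic.html".
    static String targetToFile(String target) {
        return target + ".html";
    }

    // Index targets whose corresponding HTML file is missing.
    static Set<String> missingFiles(Set<String> indexTargets, Set<String> htmlFiles) {
        Set<String> missing = new TreeSet<>();
        for (String target : indexTargets) {
            if (!htmlFiles.contains(targetToFile(target))) {
                missing.add(target);
            }
        }
        return missing;
    }

    // HTML files that no index target refers to (like TopicWithoutTarget.html above).
    static Set<String> unreferencedFiles(Set<String> indexTargets, Set<String> htmlFiles) {
        Set<String> referenced = new HashSet<>();
        for (String target : indexTargets) {
            referenced.add(targetToFile(target));
        }
        Set<String> orphans = new TreeSet<>(htmlFiles);
        orphans.removeAll(referenced);
        return orphans;
    }

    public static void main(String[] args) {
        Set<String> targets = new HashSet<>(Arrays.asList("ExampleTopic", "ExampleSubTopic"));
        Set<String> files = new HashSet<>(Arrays.asList(
                "ExampleTopic.html", "ExampleSubTopic.html", "TopicWithoutTarget.html"));

        System.out.println(missingFiles(targets, files));      // []
        System.out.println(unreferencedFiles(targets, files)); // [TopicWithoutTarget.html]
    }
}
```

In a real tool the target and file sets would be read from HelpIndex.xml and the help package directory; here they are hard-coded to mirror the example above.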
Packt
18 Nov 2009
6 min read

Authentication with Zend_Auth in Zend Framework 1.8

Let's get started.

Authentication versus Authorization

Before we go any further, we need to look at what exactly authentication and authorization are, as they are often misunderstood.

Authorization is the process of allowing someone or something to actually do something. For example, if I go into a data centre, the security guards control my authorization within the data centre and would, for instance, not allow me access to the server room if I were just a visitor, but would if I worked there as a system admin.

Authentication is the process of confirming someone or something's identity. For example, when I go into the data centre, the security guards will ask me for my identity, which most probably would be a card with my name and photo on it. They use this to authenticate my identity.

These concepts are very important, so make sure you understand the difference. This is how I remember them:

Authorization: Can they do this?
Authentication: Are they who they say they are?

Authentication with Zend_Auth

To provide our authentication layer, we are going to use Zend_Auth. It provides an easy way to authenticate a request, obtain a result, and then store the identity from that authentication request.

Zend_Auth

Zend_Auth has three main areas: authentication adapters, authentication results, and identity persistence.

Authentication adapters

Authentication adapters work in a similar way to database adapters. We configure the adapter and then pass it to the Zend_Auth instance, which uses it to authenticate the request. The following concrete adapters are provided by default:

- HTTP Digest authentication
- HTTP Basic authentication
- Database Table authentication
- LDAP authentication
- OpenID authentication
- InfoCard authentication

All of these adapters implement Zend_Auth_Adapter_Interface, meaning we can create our own adapters by implementing this interface.
Authentication results

All authentication adapters return a Zend_Auth_Result instance, which stores the result of the authentication request. The stored data includes whether the authentication request was successful, an identity if the request was successful, and any failure messages if it was unsuccessful.

Identity persistence

The default persistence used is the PHP session. It uses Zend_Session_Namespace to store the identity information in the Zend_Auth namespace. There is one other type of storage available, named NonPersistent, which is used for HTTP authentication. We can also create our own storage by implementing Zend_Auth_Storage_Interface.

Authentication Service

We are going to create an Authentication Service that will handle authentication requests. We are using a service to keep the authentication logic away from our User Model. Let's create this class now in application/modules/storefront/services/Authentication.php:

    class Storefront_Service_Authentication
    {
        protected $_authAdapter;
        protected $_userModel;
        protected $_auth;

        public function __construct(Storefront_Model_User $userModel = null)
        {
            $this->_userModel = null === $userModel
                ? new Storefront_Model_User()
                : $userModel;
        }

        public function authenticate($credentials)
        {
            $adapter = $this->getAuthAdapter($credentials);
            $auth = $this->getAuth();
            $result = $auth->authenticate($adapter);

            if (!$result->isValid()) {
                return false;
            }

            $user = $this->_userModel->getUserByEmail($credentials['email']);
            $auth->getStorage()->write($user);

            return true;
        }

        public function getAuth()
        {
            if (null === $this->_auth) {
                $this->_auth = Zend_Auth::getInstance();
            }
            return $this->_auth;
        }

        public function getIdentity()
        {
            $auth = $this->getAuth();
            if ($auth->hasIdentity()) {
                return $auth->getIdentity();
            }
            return false;
        }

        public function clear()
        {
            $this->getAuth()->clearIdentity();
        }

        public function setAuthAdapter(Zend_Auth_Adapter_Interface $adapter)
        {
            $this->_authAdapter = $adapter;
        }

        public function getAuthAdapter($values)
        {
            if (null === $this->_authAdapter) {
                $authAdapter = new Zend_Auth_Adapter_DbTable(
                    Zend_Db_Table_Abstract::getDefaultAdapter(),
                    'user',
                    'email',
                    'passwd'
                );
                $this->setAuthAdapter($authAdapter);
                $this->_authAdapter->setIdentity($values['email']);
                $this->_authAdapter->setCredential($values['passwd']);
                $this->_authAdapter->setCredentialTreatment('SHA1(CONCAT(?,salt))');
            }
            return $this->_authAdapter;
        }
    }

The Authentication Service contains the following methods:

- __construct: Creates or sets the User Model instance
- authenticate: Processes the authentication request
- getAuth: Returns the Zend_Auth instance
- getIdentity: Returns the stored identity
- clear: Clears the identity (log out)
- setAuthAdapter: Sets the authentication adapter to use
- getAuthAdapter: Returns the authentication adapter

The Service is really separated into three areas: getting the Zend_Auth instance, configuring the adapter, and authenticating the request using Zend_Auth and the adapter.

To get the Zend_Auth instance, we have the getAuth() method. This method retrieves the singleton Zend_Auth instance and sets it on the $_auth property.
It is important to remember that Zend_Auth is a singleton class, meaning that there can only ever be one instance of it.

To configure the adapter, we have the getAuthAdapter() method. By default, we are going to use the Zend_Auth_Adapter_DbTable adapter to authenticate the request. However, we can also override this by setting another adapter using the setAuthAdapter() method. This is useful for alternative authentication strategies and for testing. The configuration of the DbTable adapter is important here, so let's have a look at that code:

    $authAdapter = new Zend_Auth_Adapter_DbTable(
        Zend_Db_Table_Abstract::getDefaultAdapter(),
        'user',
        'email',
        'passwd',
        'SHA1(CONCAT(?,salt))'
    );
    $this->setAuthAdapter($authAdapter);
    $this->_authAdapter->setIdentity($values['email']);
    $this->_authAdapter->setCredential($values['passwd']);

The Zend_Auth_Adapter_DbTable constructor accepts five parameters: the database adapter, the table name, the identity column, the credential column, and the credential treatment. For our adapter, we supply the default database adapter for our table classes using the getDefaultAdapter() method, the user table, the email column, the passwd column, and the encryption and salting SQL for the password. Once we have our configured adapter, we set the identity and credential properties. These will then be used during authentication.

To authenticate the request, we use the authenticate method:

    $adapter = $this->getAuthAdapter($credentials);
    $auth = $this->getAuth();
    $result = $auth->authenticate($adapter);

    if (!$result->isValid()) {
        return false;
    }

    $user = $this->_userModel->getUserByEmail($credentials['email']);
    $auth->getStorage()->write($user);

    return true;

Here we first get the configured adapter and the Zend_Auth instance, and then fetch the result using Zend_Auth's authenticate method, passing in the configured adapter. We then check that the authentication request was successful using the isValid() method.
At this point, we could also choose to handle different kinds of failures using the getCode() method. This will return one of the following constants:

    Zend_Auth_Result::SUCCESS
    Zend_Auth_Result::FAILURE
    Zend_Auth_Result::FAILURE_IDENTITY_NOT_FOUND
    Zend_Auth_Result::FAILURE_IDENTITY_AMBIGUOUS
    Zend_Auth_Result::FAILURE_CREDENTIAL_INVALID
    Zend_Auth_Result::FAILURE_UNCATEGORIZED

Using these, we could switch on the code and handle each error in a different way. However, for our purposes, this is not necessary. If the authentication request was successful, we retrieve a Storefront_Resource_User_Item instance from the User Model and write this object to Zend_Auth's persistence layer, by getting the storage instance with getStorage() and writing to it with write(). This will store the user in the session so that we can retrieve the user information throughout the session.

Our Authentication Service is now complete, and we can start using it to create a login system for the Storefront.
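To show how the service might be consumed, here is a hypothetical controller action. The controller context, form handling, and redirect target are not from the article and are only illustrative assumptions; only the Storefront_Service_Authentication calls mirror the API defined above. As a framework-bound fragment it is not runnable on its own:

```php
// Hypothetical login action; assumes a standard Zend Framework 1 action
// controller. The redirect target ('index', 'customer') is made up.
public function loginAction()
{
    $authService = new Storefront_Service_Authentication();

    if ($this->getRequest()->isPost()) {
        $credentials = array(
            'email'  => $this->getRequest()->getPost('email'),
            'passwd' => $this->getRequest()->getPost('passwd'),
        );

        if ($authService->authenticate($credentials)) {
            // The Storefront user item is now stored in the session
            $this->_helper->redirector('index', 'customer');
        }
    }

    // Fall through: render the login form again
}
```

Because the service hides Zend_Auth behind authenticate(), getIdentity(), and clear(), the controller never touches the adapter or the storage directly, which is exactly the separation the article is after.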
Packt
12 Apr 2013
11 min read

Improving Performance with Parallel Programming

Parallelizing processing with pmap

The easiest way to parallelize data is to take a loop we already have and handle each item in it in a thread. That is essentially what pmap does. If we replace a call to map with pmap, it takes each call to the function argument and executes it in a thread pool. pmap is not completely lazy, but it's not completely strict, either: it stays just ahead of the output consumed. So if the output is never used, it won't be fully realized. For this recipe, we'll calculate the Mandelbrot set. Each point in the output takes enough time that this is a good candidate to parallelize. We can just swap map for pmap and immediately see a speed-up.

How to do it...

The Mandelbrot set can be found by looking for points that don't quickly settle on a value after passing through the formula that defines the set. We need a function that takes a point and the maximum number of iterations to try, and returns the iteration that it escapes on. That just means that the value gets above 4.

    (defn get-escape-point
      [scaled-x scaled-y max-iterations]
      (loop [x 0, y 0, iteration 0]
        (let [x2 (* x x), y2 (* y y)]
          (if (and (< (+ x2 y2) 4)
                   (< iteration max-iterations))
            (recur (+ (- x2 y2) scaled-x)
                   (+ (* 2 x y) scaled-y)
                   (inc iteration))
            iteration))))

The scaled points are the pixel points in the output, scaled to relative positions in the Mandelbrot set. Here are the functions that handle the scaling. Along with a particular x-y coordinate in the output, they're given the range of the set and the number of pixels in each direction.

    (defn scale-to
      ([pixel maximum [lower upper]]
       (+ (* (/ pixel maximum)
             (Math/abs (- upper lower)))
          lower)))

    (defn scale-point
      ([pixel-x pixel-y max-x max-y set-range]
       [(scale-to pixel-x max-x (:x set-range))
        (scale-to pixel-y max-y (:y set-range))]))

The function output-points returns a sequence of x, y values for each of the pixels in the final output.
    (defn output-points
      ([max-x max-y]
       (let [range-y (range max-y)]
         (mapcat (fn [x] (map #(vector x %) range-y))
                 (range max-x)))))

For each output pixel, we need to scale it to a location in the range of the Mandelbrot set and then get the escape point for that location.

    (defn mandelbrot-pixel
      ([max-x max-y max-iterations set-range]
       (partial mandelbrot-pixel
                max-x max-y max-iterations set-range))
      ([max-x max-y max-iterations set-range [pixel-x pixel-y]]
       (let [[x y] (scale-point pixel-x pixel-y
                                max-x max-y set-range)]
         (get-escape-point x y max-iterations))))

At this point, we can simply map mandelbrot-pixel over the results of output-points. We'll also pass in the function to use (map or pmap).

    (defn mandelbrot
      ([mapper max-iterations max-x max-y set-range]
       (doall (mapper (mandelbrot-pixel max-x max-y
                                        max-iterations set-range)
                      (output-points max-x max-y)))))

Finally, we have to define the range that the Mandelbrot set covers.

    (def mandelbrot-range {:x [-2.5, 1.0], :y [-1.0, 1.0]})

How do these two compare? A lot depends on the parameters we pass them.

    user=> (def m (time (mandelbrot map 500 1000 1000 mandelbrot-range)))
    "Elapsed time: 28981.112 msecs"
    #'user/m
    user=> (def m (time (mandelbrot pmap 500 1000 1000 mandelbrot-range)))
    "Elapsed time: 34205.122 msecs"
    #'user/m
    user=> (def m (time (mandelbrot map 1000 1000 1000 mandelbrot-range)))
    "Elapsed time: 85308.706 msecs"
    #'user/m
    user=> (def m (time (mandelbrot pmap 1000 1000 1000 mandelbrot-range)))
    "Elapsed time: 49067.584 msecs"
    #'user/m

Refer to the following chart:

If we only iterate at most 500 times for each point, it's slightly faster to use map and work sequentially. However, if we iterate 1,000 times each, pmap is faster.

How it works...

This shows that parallelization is a balancing act. If each separate work item is small, the overhead of creating the threads, coordinating them, and passing data back and forth takes more time than doing the work itself.
However, when each thread has enough to do to make it worth it, we can get nice speed-ups just by using pmap. Behind the scenes, pmap takes each item and uses future to run it in a thread pool. It forces only a couple more items than you have processors, so it keeps your machine busy without generating more work or data than you need.

There's more...

For an in-depth, excellent discussion of the nuts and bolts of pmap, along with pointers about things to watch out for, see David Liebke's talk, From Concurrency to Parallelism (http://blip.tv/clojure/david-liebke-from-concurrency-to-parallelism-4663526).

See also

The Partitioning Monte Carlo simulations for better pmap performance recipe

Parallelizing processing with Incanter

One of Incanter's nice features is that it uses the Parallel Colt Java library (http://sourceforge.net/projects/parallelcolt/) to actually handle its processing, so when you use a lot of the matrix, statistical, or other functions, they're automatically executed on multiple threads. For this, we'll revisit the Virginia housing-unit census data and fit it to a linear regression.

Getting ready

We'll need to add Incanter to our list of dependencies in our Leiningen project.clj file:

    :dependencies [[org.clojure/clojure "1.5.0"]
                   [incanter "1.3.0"]]

We'll also need to pull those libraries into our REPL or script:

    (use '(incanter core datasets io optimize charts stats))

We can use the following filename:

    (def data-file "data/all_160_in_51.P35.csv")

How to do it...

For this recipe, we'll extract the data to analyze and perform the linear regression. We'll then graph the data afterwards. First, we'll read in the data and pull the population and housing unit columns into their own matrix.

    (def data
      (to-matrix
        (sel (read-dataset data-file :header true)
             :cols [:POP100 :HU100])))

From this matrix, we can bind the population and the housing unit data to their own names.
    (def population (sel data :cols 0))
    (def housing-units (sel data :cols 1))

Now that we have those, we can use Incanter to fit the data.

    (def lm (linear-model housing-units population))

Incanter makes it so easy, it's hard not to look at it.

    (def plot
      (scatter-plot population housing-units :legend true))
    (add-lines plot population (:fitted lm))
    (view plot)

Here we can see that the graph of housing units to population makes a very straight line:

How it works...

Under the covers, Incanter takes the data matrix and partitions it into chunks. It then spreads those over the available CPUs to speed up processing. Of course, we don't have to worry about this. That's part of what makes Incanter so powerful.

Partitioning Monte Carlo simulations for better pmap performance

In the Parallelizing processing with pmap recipe, we found that while using pmap is easy enough, knowing when to use it is more complicated. Processing each task in the collection has to take enough time to make the costs of threading, coordinating processing, and communicating the data worth it. Otherwise, the program will spend more time concerned with how (parallelization) and not enough time with what (the task). The way to get around this is to make sure that pmap has enough to do at each step that it parallelizes. The easiest way to do that is to partition the input collection into chunks and run pmap on groups of the input. For this recipe, we'll use Monte Carlo methods to approximate pi. We'll compare a serial version against a naïve parallel version against a version that uses parallelization and partitions.

Getting ready

We'll use Criterium to handle benchmarking, so we'll need to include it as a dependency in our Leiningen project.clj file, shown as follows:

    :dependencies [[org.clojure/clojure "1.5.0"]
                   [criterium "0.3.0"]]

We'll use these dependencies and the java.lang.Math class in our script or REPL.
    (use 'criterium.core)
    (import [java.lang Math])

How to do it...

To implement this, we'll define some core functions and then implement a Monte Carlo method for estimating pi that uses pmap. We need to define the functions necessary for the simulation. We'll have one that generates a random two-dimensional point that will fall somewhere in the unit square.

    (defn rand-point [] [(rand) (rand)])

Now, we need a function to return a point's distance from the origin.

    (defn center-dist [[x y]]
      (Math/sqrt (+ (* x x) (* y y))))

Next, we'll define a function that takes a number of points to process and creates that many random points. It will return the number of points that fall inside a circle.

    (defn count-in-circle [n]
      (->> (repeatedly n rand-point)
           (map center-dist)
           (filter #(<= % 1.0))
           count))

That simplifies our definition of the base (serial) version. This calls count-in-circle to get the proportion of random points in a unit square that fall inside a circle. It multiplies this by 4, which should approximate pi.

    (defn mc-pi [n]
      (* 4.0 (/ (count-in-circle n) n)))

We'll use a different approach for the simple pmap version. The function that we'll parallelize will take a point and return 1 if it's in the circle, or 0 if not. Then we can add those up to find the number in the circle.

    (defn in-circle-flag [p]
      (if (<= (center-dist p) 1.0) 1 0))

    (defn mc-pi-pmap [n]
      (let [in-circle (->> (repeatedly n rand-point)
                           (pmap in-circle-flag)
                           (reduce + 0))]
        (* 4.0 (/ in-circle n))))

For the version that chunks the input, we'll do something different again. Instead of creating the sequence of random points and partitioning that, we'll have a sequence that tells how large each partition should be and have pmap walk across that, calling count-in-circle. This means that creating the larger sequences is also parallelized.
    (defn mc-pi-part
      ([n] (mc-pi-part 512 n))
      ([chunk-size n]
       (let [step (int (Math/floor (float (/ n chunk-size))))
             remainder (mod n chunk-size)
             parts (lazy-seq
                     (cons remainder
                           (repeat step chunk-size)))
             in-circle (reduce + 0
                               (pmap count-in-circle parts))]
         (* 4.0 (/ in-circle n)))))

Now, how do these work? We'll bind our parameters to names, and then we'll run one set of benchmarks before we look at a table of all of them. We'll discuss the results in the next section.

    user=> (def chunk-size 4096)
    #'user/chunk-size
    user=> (def input-size 1000000)
    #'user/input-size
    user=> (quick-bench (mc-pi input-size))
    WARNING: Final GC required 4.001679309213317 % of runtime
    Evaluation count : 6 in 6 samples of 1 calls.
    Execution time mean : 634.387833 ms
    Execution time std-deviation : 33.222001 ms
    Execution time lower quantile : 606.122000 ms ( 2.5%)
    Execution time upper quantile : 677.273125 ms (97.5%)
    nil

Here's all the information in the form of a table:

    Function    Input Size  Chunk Size  Mean       Std Dev.   GC Time
    mc-pi       1,000,000   NA          634.39 ms  33.22 ms   4.00%
    mc-pi-pmap  1,000,000   NA          1.92 sec   888.52 ms  2.60%
    mc-pi-part  1,000,000   4,096       455.94 ms  4.19 ms    8.75%

Here's a chart with the same information:

How it works...

There are a couple of things we should talk about here. Primarily, we'll need to look at chunking the inputs for pmap, but we should also discuss Monte Carlo methods.

Estimating with Monte Carlo simulations

Monte Carlo simulations work by throwing random data at a problem that is fundamentally deterministic, but where it's practically infeasible to attempt a more straightforward solution. Calculating pi is one example of this. By randomly filling in points in a unit square, π/4 will be approximately the ratio of points that fall within a circle centered on 0, 0. The more random points that we use, the better the approximation. I should note that this makes a good demonstration of Monte Carlo methods, but it's a terrible way to calculate pi.
It tends to be both slower and less accurate than other methods. Although not good for this task, Monte Carlo methods have been used for designing heat shields, simulating pollution, ray tracing, financial option pricing, evaluating business or financial products, and many, many more things. For a more in-depth discussion, Wikipedia has a good introduction to Monte Carlo methods at http://en.wikipedia.org/wiki/Monte_Carlo_method.

Chunking data for pmap

The table we saw earlier makes it clear that partitioning helped: the partitioned version took just 72 percent of the time that the serial version did, while the naïve parallel version took more than three times longer. Based on the standard deviations, the results were also more consistent. The speed-up is because each thread is able to spend longer on each task. There is a performance penalty to spreading the work over multiple threads. Context switching (that is, switching between threads) costs time, and coordinating between threads does as well. But we expect to be able to make up that time, and more, by doing more things at once. However, if each task itself doesn't take long enough, then the benefit won't outweigh the costs. Chunking the input, effectively creating larger individual tasks for each thread, gets around this by giving each thread more to do, thereby spending less time context switching and coordinating, relative to the overall time spent running.
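The chunking idea used by mc-pi-part can be captured in a small reusable helper. The following is a sketch of our own (pmap-chunked is not part of Clojure or this recipe's code): it partitions any input into fixed-size chunks, processes each chunk as a single pmap task, and stitches the results back together.

```clojure
;; A generic chunked pmap. Each chunk becomes one pmap task, so each
;; thread gets enough work to outweigh the coordination overhead.
(defn pmap-chunked
  [chunk-size f coll]
  (->> coll
       (partition-all chunk-size)
       ;; doall forces the work inside the pmap task, instead of
       ;; leaving a lazy seq for the consumer to realize serially.
       (pmap #(doall (map f %)))
       (apply concat)))

;; For example, this processes a million items 4,096 at a time:
;; (pmap-chunked 4096 #(* % %) (range 1000000))
```

As with mc-pi-part, the right chunk size depends on how long f takes on one item; the goal is simply to make each task long enough that thread overhead stops dominating.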
Mastering the Newer Prezi Features
Packt
25 Jul 2012
10 min read
Templates

There will always be time constraints put on us when building any business presentation. Mostly these will be pretty unrealistic time constraints as well. If you do find yourself against the clock when building a Prezi, then why not give yourself a slight advantage and use one of Prezi's templates to get your design started. There are lots of templates you can choose from, and here's how to make the most out of them when the clock is ticking.

The templates

When you create any new Prezi online or in the desktop editor, you'll be presented with a choice of template as shown in the following screenshot: Before you decide which one to choose, you can explore them by simply selecting one and clicking the Preview button. You can see in the following screenshot that we've selected the Our Project template. Rolling your mouse over a template's thumbnail will show you some more details as well to help you choose. At the top of the screen, you'll see the options to either Start Editing or go Back to the templates screen. Before you make your choice, have a look around the template preview and check out all of the various objects available to you. Zoom in and out of certain areas that look interesting and use the arrows in the bottom right to go through the template's path and see how it flows. In the following screenshot, you can see that we've zoomed in to take a closer look at the assets included in this template: As you can see in the preceding screenshot, the Our Project template has some lovely assets included. The assets you'll be able to use in the template are images and sketches such as the Doodles that you can see in the top right of the screenshot. All of these assets can be moved around and used anywhere on your canvas. If you preview a template and decide it's the right one for you to use, then just click the Start Editing button to go into edit mode and begin building your Prezi.
Getting the most from templates

Once you go into edit mode, don't think that you're stuck with how everything is laid out. You can (and should) move things around to fit with the message you're trying to deliver to your audience.

Paths

The very first thing we'd suggest is clicking on the Paths button and taking a look at how the Prezi flows. The whole reason you're using a template is because you're pushed for time, but you should know how many frames you need and how many different areas you'll want to focus on in your presentation before you get started. If you do, then you can adjust the paths, add new path points, or delete some that are there already.

Assets

All of the templates, especially Our Project, will come with various assets included. Use them wherever you can. It'll save you lots of time searching for your own imagery if you can just move the existing assets around. As shown in the preceding screenshot, you are totally free to resize any asset in a template. Make the most of them and save yourself a whole heap of time.

Branding

The only downside of using templates is that they of course won't have any of your company colors, logo, or branding on them. This is easily fixed by using the Colors & Fonts | Theme Wizard found in the bubble menu. On the very first screen of the wizard, click the Replace Logo button to add your company logo. The logo must be a JPEG file no bigger than 250 pixels wide and 100 pixels high. Clicking the button will allow you to search for your logo and it will then be placed in the bottom left-hand corner of your Prezi at all times. On this screen, you can also change the background color of your entire canvas. On the next screen of the wizard, we recommend you switch to Manual mode by clicking the option in the bottom-left corner. In this screen, you can select the fonts to use in your Prezi.
At the present time, Prezi still has only a limited number of fonts, but we're confident you can find something close to the one your company uses. The reason we suggest switching to Manual mode is that you'll be able to use your corporate colors for the fonts you select, and also on the frames and shapes within the Prezi. You'll need to know the RGB color values specified in your corporate branding. By using this final step, you'll get all the benefits of having an already designed Prezi without getting told off by your marketing team for going against their strict branding guidelines.

Shapes

A very simple element of the Prezi bubble menu which gets overlooked a lot is the Insert | Shapes option. In this part of the article, we'll look at some things you may not have known about how shapes work within Prezi.

Shortcut for shapes

To quickly enter the Shapes menu when working in the Prezi canvas, just press the S key on your keyboard.

Get creative

In the first part of this chapter, we looked at the assets from a template called Our Project. Some of those assets were the line drawings shown below the male and female characters. When you see these "Doodles" as they're titled, you might think they've been drawn in some kind of graphics package and inserted into the Prezi canvas as you would anything else. On closer inspection in edit mode, you can see that each of the characters is actually made up of different lines from the Shapes menu. This is a great use of the line tool and we'd encourage you to try and create your own simple drawings wherever you can. These can then be reused over time, and will in turn save you lots of time searching for imagery via the Google image insert. Let's say that we want to add some more detail to the male character. Maybe we'll give him a more exciting hairstyle to replace the boring one that he has at the moment. First select the current hairline and delete it from the character's head.
Now select the line tool from the Shapes menu and let's give this guy a flat top straight from the '80s. One of our lines is too long on the right. To adjust it, simply double-click to enter edit mode and drag the points to the right position as shown in the following screenshot: So there we have a great example of how to quickly draw your own image on the Prezi canvas by just using lines. It's an excellent feature of Prezi and as you can see, it's given our character a stunning new look. It's a shame his girlfriend doesn't think so too!

Editing shapes

In step three of giving our character a new haircut, you saw the edit menu, which is accessed by a simple double-click. You can use the edit function on all items in the shapes menu apart from the Pencil and Highlighter tools. Any shape can be double-clicked to change its size and color as shown in the following screenshot. You can see that all of the shapes on the left have been copied and then edited to change their color and size. The edited versions on the right have all been double-clicked and one of the five extra available colors has been selected. The points of each shape have also been clicked on and dragged to change the dimensions of the shape. Holding the Shift key will not keep your shapes to scale. If you want to scale the shapes up or down, we recommend you use the transformation zebra by clicking the plus (+) or minus (-) signs.

Editing lines

When editing lines or arrows, you can change them from being straight to curved by dragging the center point in any direction. This is extremely useful when creating the line drawings we saw earlier. It's also useful to get arrows pointing at various objects on your canvas.

Highlighter

The highlighter tool from the shapes menu is extremely useful for pointing out key pieces of information, like in the interesting fact shown in the following screenshot: Just drag it across the text you'd like to highlight.
Once you've done that, the highlighter marks become objects in their own right, so you can use the transformation zebra to change their size or position as shown in the following screenshot:

Pencil

The pencil tool can be used to draw freehand sketches like the one shown in the following screenshot. If you hadn't guessed it yet, our drawing is supposed to represent a brain, which links to the interesting fact about ants. The pencil tool is great if you're good at sketching things out with your mouse. But if, like us, your art skills need a little more work, you might want to stick to using the lines and shapes to create imagery! To change the color of your highlighter or pencil drawings, you will need to go into the Theme Wizard and edit the RGB values. This will help you keep things within your corporate branding guidelines again.

Drawings and diagrams

Another useful new feature and a big time saver within the Prezi insert menu are drawings and diagrams. You can locate the drawings and diagrams templates by clicking the button in-between YouTube and File from the Insert menu. There are twelve templates to choose from and each has been given a name that best describes its purpose. Rolling over each thumbnail will show you a little more detail to help you choose the right one. Once you have chosen, double-click the thumbnail and then decide where to place your drawing on the canvas. You can see in the following screenshot that the drawing or diagram is grouped together and will not become active until you click the green tick. Once you make the drawing active, you can access all of its frames, text, and any other elements that are included. In the following screenshot, you can see that we've zoomed into a section of the tree diagram. You can see in the preceding screenshot that the diagram uses lines, circular frames, and text, which can all be edited in any way you like. This is the case for all of the diagrams and drawings available from the menu.
Using these diagrams and drawings gives you a great chance to explain concepts and ideas to your colleagues with ease. You can see from the preceding screenshot that there's a good range of useful drawings and diagrams that you're used to seeing in business presentations. You can easily create organograms, timelines for projects, or business processes and cycles, simply by using the templates available and inserting your own content and imagery. By using the Theme wizard explained earlier in this chapter, you can make sure your drawings and diagrams use your corporate colors.