Search icon CANCEL
Subscription
0
Cart icon
Your Cart (0 item)
Close icon
You have no products in your basket yet
Save more on your purchases! discount-offer-chevron-icon
Savings automatically calculated. No voucher code required.
Arrow left icon
Explore Products
Best Sellers
New Releases
Books
Videos
Audiobooks
Learning Hub
Newsletter Hub
Free Learning
Arrow right icon
timer SALE ENDS IN
0 Days
:
00 Hours
:
00 Minutes
:
00 Seconds

How-To Tutorials - Web Development

1802 Articles
Packt
16 Sep 2013
16 min read
Save for later

Linux Shell Scripting – various recipes to help you

Packt
16 Sep 2013
16 min read
(For more resources related to this topic, see here.) The shell scripting language is packed with all the essential problem-solving components for Unix/Linux systems. Text processing is one of the key areas where shell scripting is used, and there are beautiful utilities such as sed, awk, grep, and cut, which can be combined to solve problems related to text processing. Various utilities help to process a file in fine detail of a character, line, word, column, row, and so on, allowing us to manipulate a text file in many ways. Regular expressions are the core of pattern-matching techniques, and most of the text-processing utilities come with support for it. By using suitable regular expression strings, we can produce the desired output, such as filtering, stripping, replacing, and searching. Using regular expressions Regular expressions are the heart of text-processing techniques based on pattern matching. For fluency in writing text-processing tools, one must have a basic understanding of regular expressions. Using wild card techniques, the scope of matching text with patterns is very limited. Regular expressions are a form of tiny, highly-specialized programming language used to match text. A typical regular expression for matching an e-mail address might look like [a-z0-9_]+@[a-z0-9]+\.[a-z]+. If this looks weird, don't worry, it is really simple once you understand the concepts through this recipe. How to do it... Regular expressions are composed of text fragments and symbols, which have special meanings. Using these, we can construct any suitable regular expression string to match any text according to the context. As regex is a generic language to match texts, we are not introducing any tools in this recipe. Let's see a few examples of text matching: To match all words in a given text, we can write the regex as follows: ( ?[a-zA-Z]+ ?) ? is the notation for zero or one occurrence of the previous expression, which in this case is the space character. The [a-zA-Z]+ notation represents one or more alphabet characters (a-z and A-Z). To match an IP address, we can write the regex as follows: [0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3} Or: [[:digit:]]{1,3}\.[[:digit:]]{1,3}\.[[:digit:]]{1,3}\.[[:digit:]]{1,3} We know that an IP address is in the form 192.168.0.2. It is in the form of four integers (each from 0 to 255), separated by dots (for example, 192.168.0.2). [0-9] or [:digit:] represents a match for digits from 0 to 9. {1,3} matches one to three digits and \. matches the dot character (.). This regex will match an IP address in the text being processed. However, it doesn't check for the validity of the address. For example, an IP address of the form 123.300.1.1 will be matched by the regex despite being an invalid IP. This is because when parsing text streams, usually the aim is to only detect IPs. How it works... Let's first go through the basic components of regular expressions (regex): regex Description Example ^ This specifies the start of the line marker. ^tux matches a line that starts with tux. $ This specifies the end of the line marker. tux$ matches a line that ends with tux. . This matches any one character. Hack. matches Hack1, Hacki, but not Hack12 or Hackil; only one additional character matches. [] This matches any one of the characters enclosed in [chars]. coo[kl] matches cook or cool. [^] This matches any one of the characters except those that are enclosed in [^chars]. 9[^01] matches 92 and 93, but not 91 and 90. 
[-] This matches any character within the range specified in []. [1-5] matches any digits from 1 to 5. ? This means that the preceding item must match one or zero times. colou?r matches color or colour, but not colouur. + This means that the preceding item must match one or more times. Rollno-9+ matches Rollno-99 and Rollno-9, but not Rollno-. * This means that the preceding item must match zero or more times. co*l matches cl, col, and coool. () This treats the terms enclosed as one entity ma(tri)?x matches max or matrix. {n} This means that the preceding item must match n times. [0-9]{3} matches any three-digit number. [0-9]{3} can be expanded as [0-9][0-9][0-9]. {n,} This specifies the minimum number of times the preceding item should match. [0-9]{2,} matches any number that is two digits or longer. {n, m} This specifies the minimum and maximum number of times the preceding item should match. [0-9]{2,5} matches any number has two digits to five digits. | This specifies the alternation-one of the items on either of sides of | should match. Oct (1st | 2nd) matches Oct 1st or Oct 2nd. \ This is the escape character for escaping any of the special characters mentioned previously. a\.b matches a.b, but not ajb. It ignores the special meaning of . because of \. For more details on the regular expression components available, you can refer to the following URL: http://www.linuxforu.com/2011/04/sed-explained-part-1/ There's more... Let's see how the special meanings of certain characters are specified in the regular expressions. Treatment of special characters Regular expressions use some characters, such as $, ^, ., *, +, {, and }, as special characters. But, what if we want to use these characters as normal text characters? Let's see an example of a regex, a.txt. This will match the character a, followed by any character (due to the '.' character), which is then followed by the string txt . However, we want '.' to match a literal '.' instead of any character. In order to achieve this, we precede the character with a backward slash \ (doing this is called escaping the character). This indicates that the regex wants to match the literal character rather than its special meaning. Hence, the final regex becomes a\.txt. Visualizing regular expressions Regular expressions can be tough to understand at times, but for people who are good at understanding things with diagrams, there are utilities available to help in visualizing regex. Here is one such tool that you can use by browsing to http://www.regexper.com; it basically lets you enter a regular expression and creates a nice graph to help understand it. Here is a screenshot showing the regular expression we saw in the previous section: Searching and mining a text inside a file with grep Searching inside a file is an important use case in text processing. We may need to search through thousands of lines in a file to find out some required data, by using certain specifications. This recipe will help you learn how to locate data items of a given specification from a pool of data. How to do it... The grep command is the magic Unix utility for searching in text. It accepts regular expressions, and can produce output in various formats. Additionally, it has numerous interesting options. 
Let's see how to use them: To search for lines of text that contain the given pattern: $ grep pattern filenamethis is the line containing pattern Or: $ grep "pattern" filenamethis is the line containing pattern We can also read from stdin as follows: $ echo -e "this is a word\nnext line" | grep wordthis is a word Perform a search in multiple files by using a single grep invocation, as follows: $ grep "match_text" file1 file2 file3 ... We can highlight the word in the line by using the --color option as follows: $ grep word filename --color=autothis is the line containing word Usually, the grep command only interprets some of the special characters in match_text. To use the full set of regular expressions as input arguments, the -E option should be added, which means an extended regular expression. Or, we can use an extended regular expression enabled grep command, egrep. For example: $ grep -E "[a-z]+" filename Or: $ egrep "[a-z]+" filename In order to output only the matching portion of a text in a file, use the -o option as follows: $ echo this is a line. | egrep -o "[a-z]+\." line. In order to print all of the lines, except the line containing match_pattern, use: $ grep -v match_pattern file The -v option added to grep inverts the match results. Count the number of lines in which a matching string or regex match appears in a file or text, as follows: $ grep -c "text" filename 10 It should be noted that -c counts only the number of matching lines, not the number of times a match is made. For example: $ echo -e "1 2 3 4\nhello\n5 6" | egrep -c "[0-9]" 2 Even though there are six matching items, it prints 2, since there are only two matching lines. Multiple matches in a single line are counted only once. To count the number of matching items in a file, use the following trick: $ echo -e "1 2 3 4\nhello\n5 6" | egrep -o "[0-9]" | wc -l 6 Print the line number of the match string as follows: $ cat sample1.txt gnu is not unix linux is fun bash is art $ cat sample2.txt planetlinux $ grep linux -n sample1.txt 2:linux is fun or $ cat sample1.txt | grep linux -n If multiple files are used, it will also print the filename with the result as follows: $ grep linux -n sample1.txt sample2.txt sample1.txt:2:linux is fun sample2.txt:2:planetlinux Print the character or byte offset at which a pattern matches, as follows: $ echo gnu is not unix | grep -b -o "not" 7:not The character offset for a string in a line is a counter from 0, starting with the first character. In the preceding example, not is at the seventh offset position (that is, not starts from the seventh character in the line; that is, gnu is not unix). The -b option is always used with -o. To search over multiple files, and list which files contain the pattern, we use the following: $ grep -l linux sample1.txt sample2.txt sample1.txt sample2.txt The inverse of the -l argument is -L. The -L argument returns a list of non-matching files. There's more... We have seen the basic usages of the grep command, but that's not it; the grep command comes with even more features. Let's go through those. Recursively search many files To recursively search for a text over many directories of descendants, use the following command: $ grep "text" . -R -n In this command, "." specifies the current directory. The options -R and -r mean the same thing when used with grep. For example: $ cd src_dir $ grep "test_function()" . -R -n ./miscutils/test.c:16:test_function(); test_function() exists in line number 16 of miscutils/test.c. 
This is one of the most frequently used commands by developers. It is used to find files in the source code where a certain text exists. Ignoring case of pattern The -i argument helps match patterns to be evaluated, without considering the uppercase or lowercase. For example: $ echo hello world | grep -i "HELLO" hello grep by matching multiple patterns Usually, we specify single patterns for matching. However, we can use an argument -e to specify multiple patterns for matching, as follows: $ grep -e "pattern1" -e "pattern" This will print the lines that contain either of the patterns and output one line for each match. For example: $ echo this is a line of text | grep -e "this" -e "line" -o this line There is also another way to specify multiple patterns. We can use a pattern file for reading patterns. Write patterns to match line-by-line, and execute grep with a -f argument as follows: $ grep -f pattern_filesource_filename For example: $ cat pat_file hello cool $ echo hello this is cool | grep -f pat_file hello this is cool Including and excluding files in a grep search grep can include or exclude files in which to search. We can specify include files or exclude files by using wild card patterns. To search only for .c and .cpp files recursively in a directory by excluding all other file types, use the following command: $ grep "main()" . -r --include *.{c,cpp} Note, that some{string1,string2,string3} expands as somestring1 somestring2 somestring3. Exclude all README files in the search, as follows: $ grep "main()" . -r --exclude "README" To exclude directories, use the --exclude-dir option. To read a list of files to exclude from a file, use --exclude-from FILE. Using grep with xargs with zero-byte suffix The xargs command is often used to provide a list of file names as a command-line argument to another command. When filenames are used as command-line arguments, it is recommended to use a zero-byte terminator for the filenames instead of a space terminator. Some of the filenames can contain a space character, and it will be misinterpreted as a terminator, and a single filename may be broken into two file names (for example, New file.txt can be interpreted as two filenames New and file.txt). This problem can be avoided by using a zero-byte suffix. We use xargs so as to accept a stdin text from commands such as grep and find. Such commands can output text to stdout with a zero-byte suffix. In order to specify that the input terminator for filenames is zero byte (\0), we should use -0 with xargs. Create some test files as follows: $ echo "test" > file1 $ echo "cool" > file2 $ echo "test" > file3 In the following command sequence, grep outputs filenames with a zero-byte terminator (\0), because of the -Z option with grep. xargs -0 reads the input and separates filenames with a zero-byte terminator: $ grep "test" file* -lZ | xargs -0 rm Usually, -Z is used along with -l. Silent output for grep Sometimes, instead of actually looking at the matched strings, we are only interested in whether there was a match or not. For this, we can use the quiet option (-q), where the grep command does not write any output to the standard output. Instead, it runs the command and returns an exit status based on success or failure. We know that a command returns 0 on success, and non-zero on failure. Let's go through a script that makes use of grep in a quiet mode, for testing whether a match text appears in a file or not. 
#!/bin/bash #Filename: silent_grep.sh #Desc: Testing whether a file contain a text or not if [ $# -ne 2 ]; then echo "Usage: $0 match_text filename" exit 1 fi match_text=$1 filename=$2 grep -q "$match_text" $filename if [ $? -eq 0 ]; then echo "The text exists in the file" else echo "Text does not exist in the file" fi The silent_grep.sh script can be run as follows, by providing a match word (Student) and a file name (student_data.txt) as the command argument: $ ./silent_grep.sh Student student_data.txt The text exists in the file Printing lines before and after text matches Context-based printing is one of the nice features of grep. Suppose a matching line for a given match text is found, grep usually prints only the matching lines. But, we may need "n" lines after the matching line, or "n" lines before the matching line, or both. This can be performed by using context-line control in grep. Let's see how to do it. In order to print three lines after a match, use the -A option: $ seq 10 | grep 5 -A 3 5 6 7 8 In order to print three lines before the match, use the -B option: $ seq 10 | grep 5 -B 3 2 3 4 5 Print three lines after and before the match, and use the -C option as follows: $ seq 10 | grep 5 -C 3 2 3 4 5 6 7 8 If there are multiple matches, then each section is delimited by a line "--": $ echo -e "a\nb\nc\na\nb\nc" | grep a -A 1 a b -- a b Cutting a file column-wise with cut We may need to cut the text by a column rather than a row. Let's assume that we have a text file containing student reports with columns, such as Roll, Name, Mark, and Percentage. We need to extract only the name of the students to another file or any nth column in the file, or extract two or more columns. This recipe will illustrate how to perform this task. How to do it... cut is a small utility that often comes to our help for cutting in column fashion. It can also specify the delimiter that separates each column. In cut terminology, each column is known as a field . To extract particular fields or columns, use the following syntax: cut -f FIELD_LIST filename FIELD_LIST is a list of columns that are to be displayed. The list consists of column numbers delimited by commas. For example: $ cut -f 2,3 filename Here, the second and the third columns are displayed. cut can also read input text from stdin. Tab is the default delimiter for fields or columns. If lines without delimiters are found, they are also printed. To avoid printing lines that do not have delimiter characters, attach the -s option along with cut. An example of using the cut command for columns is as follows: $ cat student_data.txt No Name Mark Percent 1 Sarath 45 90 2 Alex 49 98 3 Anu 45 90 $ cut -f1 student_data.txt No 1 2 3 Extract multiple fields as follows: $ cut -f2,4 student_data.txt Name Percent Sarath 90 Alex 98 Anu 90 To print multiple columns, provide a list of column numbers separated by commas as arguments to -f. We can also complement the extracted fields by using the --complement option. Suppose you have many fields and you want to print all the columns except the third column, then use the following command: $ cut -f3 --complement student_data.txt No Name Percent 1 Sarath 90 2 Alex 98 3 Anu 90 To specify the delimiter character for the fields, use the -d option as follows: $ cat delimited_data.txt No;Name;Mark;Percent 1;Sarath;45;90 2;Alex;49;98 3;Anu;45;90 $ cut -f2 -d";" delimited_data.txt Name Sarath Alex Anu There's more The cut command has more options to specify the character sequences to be displayed as columns. 
Let's go through the additional options available with cut. Specifying the range of characters or bytes as fields Suppose that we don't rely on delimiters, but we need to extract fields in such a way that we need to define a range of characters (counting from 0 as the start of line) as a field. Such extractions are possible with cut. Let's see what notations are possible: N- from the Nth byte, character, or field, to the end of line N-M from the Nth to Mth (included) byte, character, or field -M from first to Mth (included) byte, character, or field We use the preceding notations to specify fields as a range of bytes or characters with the following options: -b for bytes -c for characters -f for defining fields For example: $ cat range_fields.txt abcdefghijklmnopqrstuvwxyz abcdefghijklmnopqrstuvwxyz abcdefghijklmnopqrstuvwxyz abcdefghijklmnopqrstuvwxy You can print the first to fifth characters as follows: $ cut -c1-5 range_fields.txt abcde abcde abcde abcde The first two characters can be printed as follows: $ cut range_fields.txt -c -2 ab ab ab ab Replace -c with -b to count in bytes. We can specify the output delimiter while using with -c, -f, and -b, as follows: --output-delimiter "delimiter string" When multiple fields are extracted with -b or -c, the --output-delimiter is a must. Otherwise, you cannot distinguish between fields if it is not provided. For example: $ cut range_fields.txt -c1-3,6-9 --output-delimiter "," abc,fghi abc,fghi abc,fghi abc,fghi
Read more
  • 0
  • 0
  • 2567

article-image-seam-data-validation
Packt
09 Oct 2009
8 min read
Save for later

Seam Data Validation

Packt
09 Oct 2009
8 min read
Data validation In order to perform consistent data validation, we would ideally want to perform all data validation within our data model. We want to perform data validation in our data model so that we can then keep all of the validation code in one place, which should then make it easier to keep it up-to-date if we ever change our minds about allowable data values. Seam makes extensive use of the Hibernate validation tools to perform validation of our domain model. The Hibernate validation tools grew from the Hibernate project (http://www.hibernate.org) to allow the validation of entities before they are persisted to the database. To use the Hibernate validation tools in an application, we need to add hibernate-validator.jar into the application's class path, after which we can use annotations to define the validation that we want to use for our data model. Let's look at a few validations that we can add to our sample Seam Calculator application. In order to implement data validation with Seam, we need to apply annotations either to the member variables in a class or to the getter of the member variables. It's good practice to always apply these annotations to the same place in a class. Hence, throughout this article, we will always apply our annotation to the getter methods within classes. In our sample application, we are allowing numeric values to be entered via edit boxes on a JSF form. To perform data validation against these inputs, there are a few annotations that can help us. Annotation Description @Min The @Min annotation allows a minimum value for a numeric variable to be specified. An error message to be displayed if the variable's value is less than the specified minimum can also be specified. The message parameter is optional. If it is not specified, then a sensible error message will be generated (similar to must be greater than or equal to ...). @Min(value=0, message="...") @Max The @Max annotation allows a maximum value for a numeric variable to be specified. An error message to be displayed if the variable's value is greater than the specified maximum can also be specified. The message parameter is optional. If it is not specified, then a sensible error message will be generated (similar to must be less than or equal to ...). @Max(Value=100, message="...") @Range The @Range annotation allows a numeric range-that is, both minimum and maximum values-to be specified for a variable. An error message to be displayed if the variable's value is outside the specified range can also be specified. The message parameter is optional. If it is not specified, then a sensible error message will be generated (similar to must be between ... and ...). @Range(min=0, max=10, message="...") At this point, you may be wondering why we need to have an @Range validator, when by combining the @Min and @Max validators, we can get a similar effect. If you want a different error message to be displayed when a variable is set above its maximum value as compared to the error message that is displayed when it is set below its minimum value, then the @Min and @Max annotations should be used. If you are happy with the same error message being displayed when a variable is set outside its minimum or maximum values, then the @Range validator should be used. Effectively, the @Min and @Max validators are providing a finer level of error message provision than the @Range validator. 
The following code sample shows how these annotations can be applied to a sample application, to add basic data validation to our user inputs. package com.davidsalter.seamcalculator; import java.io.Serializable; import org.jboss.seam.annotations.Name; import org.jboss.seam.faces.FacesMessages; import org.hibernate.validator.Max;import org.hibernate.validator.Min;import org.hibernate.validator.Range; @Name("calculator") public class Calculator implements Serializable { private double value1; private double value2; private double answer; @Min(value=0) @Max(value=100) public double getValue1() { return value1; } public void setValue1(double value1) { this.value1 = value1; } @Range(min=0, max=100) public double getValue2() { return value2; } public void setValue2(double value2) { this.value2 = value2; } public double getAnswer() { return answer; } ... } Displaying errors to the user In the previous section, we saw how to add data validation to our source code to stop invalid data from being entered into our domain model. Now that we have reached a level of data validation, we need to provide feedback to the user to inform them of any invalid data that they have entered. JSF applications have the concept of messages that can be displayed associated with different components. For example, if we have a form asking for a date of birth to be entered, we could display a message next to the entry edit box if an invalid date were entered. JSF maintains a collection of these error messages, and the simplest way of providing feedback to the user is to display a list of all of the error messages that were generated as a part of the previous operation. In order to obtain error messages within the JSF page, we need to tell JSF which components we want to be validated against the domain model. This is achieved by using the <s:validate/> or <s:validateAll/> tags. These are Seam-specific tags and are not a part of the standard JSF runtime. In order to use these tags, we need to add the following taglib reference to the top of the JSF page. <%@ taglib uri="http://jboss.com/products/seam/taglib" prefix="s" %> In order to use this tag library, we need to add a few additional JAR files into the WEB-INF/lib directory of our web application, namely: jboss-el.jar jboss-seam-ui.jar jsf-api.jar jsf-impl.jar This tag library allows us to validate all of the components (<s:validateAll/>) within a block of JSF code, or individual components (<s:validate/>) within a JSF page. To validate all components within a particular scope, wrap them all with the <s:validateAll/> tag as shown here: <h:form> <s:validateAll> <h:inputText value="..." /> <h:inputText value="..." /> </s:validateAll> </h:form> To validate individual components, embed the <s:validate/> tag within the component, as shown in the following code fragment. <h:form> <h:inputText value="..." > <s:validate/> </h:inputText> <h:inputText value="..." > <s:validate/> </h:inputText> </h:form> After specifying that we wish validation to occur against a specified set of controls, we can display error messages to the user. JSF maintains a collection of errors on a page, which can be displayed in its entirety to a user via the <h:messages/> tag. It can sometimes be useful to show a list of all of the errors on a page, but it isn't very useful to the user as it is impossible for them to say which error relates to which control on the form. 
Seam provides some additional support at this point to allow us to specify the formatting of a control to indicate error or warning messages to users. Seam provides three different JSF facets (<f:facet/>) to allow HTML to be specified both before and after the offending input, along with a CSS style for the HTML. Within these facets, the <s:message/> tag can be used to output the message itself. This tag could be applied either before or after the input box, as per requirements. Facet Description beforeInvalidField This facet allows HTML to be displayed before the input that is in error. This HTML could contain either text or images to notify the user that an error has occurred. <f:facet name="beforeInvalidField"> ... </f:facet> afterInvalidField This facet allows HTML to be displayed after the input that is in error. This HTML could contain either text or images to notify the user that an error has occurred. <f:facet name="afterInvalidField"> ... </f:facet> aroundInvalidField This facet allows the CSS style of the text surrounding the input that is in error to be specified. <f:facet name="aroundInvalidField"> ... </f:facet> In order to specify these facets for a particular field, the <s:decorate/>  tag must be specified outside the facet scope. <s:decorate> <f:facet name="aroundInvalidField"> <s:span styleClass="invalidInput"/> </f:facet> <f:facet name="beforeInvalidField"> <f:verbatim>**</f:verbatim> </f:facet> <f:facet name="afterInvalidField"> <s:message/> </f:facet> <h:inputText value="#{calculator.value1}" required="true" > <s:validate/> </h:inputText> </s:decorate> In the preceding code snippet, we can see that a CSS style called invalidInput is being applied to any error or warning information that is to be displayed regarding the <inputText/> field. An erroneous input field is being adorned with a double asterisk (**) preceding the edit box, and the error message specific to the inputText field after is displayed in the edit box.
Read more
  • 0
  • 0
  • 2566

article-image-gamified-websites-framework
Packt
07 Oct 2013
15 min read
Save for later

Gamified Websites: The Framework

Packt
07 Oct 2013
15 min read
(For more resources related to this topic, see here.) Business objectives Before we can go too far down the road on any journey, we first have to be clear about where we are trying to go. This is where business objectives come into the picture. Although games are about fun, and gamification is about generating positive emotion without losing sight of the business objectives, gamification is a serious business. Organizations spend millions of dollars every year on information technology. Consistent and steady investment in information technology is expected to bring a return on that investment in the way of improved business process flow. It's meant to help the organization run smoother and easier. Gamification is all about "improving" business processes. Organizations try to improve the process itself, wherever possible, whereas technology only facilitates the process. Therefore, gamification efforts will be scrutinized under similar microscope and success metrics that information technology efforts will. The fact that customers, employees, or stakeholders are having more fun with the organization's offering is not enough. It will have to meet a business objective. The place to start with defining business objectives is with the business process that the organization is looking to improve. In our case, the process we are planning to improve is e-learning. We are looking at the process of K-12 aged persons learning "thinking". How does that process look right now? Image source: http://www.moddb.com/groups/critical-thinkers-of-moddb/images/critical-thinking-skills-explained In a full-blown e-learning situation, we would be looking to gamify as much of this process as possible. For our purpose, we will focus on the areas of negotiation and cooperation. According to the Negotiate and Cooperate phase of the Critical Thinking Process, learners consider different perspectives and engage in discussions with others. This gives us a clear picture of what some of our objectives might be. They might be, among others: Increasing engagement in discussion with others Increasing the level of consideration of different perspectives Note that these objectives are measurable. We will be able to test whether the increases/improvements we are looking for are actually happening over time. With a set of measurable objectives, we can turn our attention to the next step, that is target behaviors, in our Gamification Design Framework. Target behaviors Now that we are clear about what we are trying to accomplish with our system, we will focus on the actions we are hoping to incentivize: our target behaviors. One of the big questions around gamification efforts is can it really cause behavioral change. Will employees, customers, and stakeholders simply go back to doing things the way they are used to once the game is over? Will they figure out a way to "cheat" the system? The only way to meet long-term organizational objectives in a systematic way is the application to not only cause change for the moment, but lasting change over time. Many gamification applications fail in long-term behavior change, and here's why. Psychologists have studied the behavior change life cycle at length. . The study revealed that people go through five distinct phases when changing a behavior. Each phase presents a different set of challenges. 
The five phases of the behavioral life cycle are as follows: Awareness: Before a person will take any action to change a behavior, he/she must first be aware of their current behavior and how it might need to change. Buy in: After a person becomes aware that they need to change, they must agree that they actually need to change and make the necessary commitment to do so. Learn: But what actually does a person need to do to change? It cannot be assumed that he/she knows how to change. They must learn the new behavior. Adopt: Now that he/she has learned the necessary skills, they have to actually implement them. They need to take the new action. Maintain: Finally, after adopting a new behavior, it can only become a lasting change with constant practice. Image source: http://www.accenture.com/us-en/blogs/technology-labs-blog/archive/2012/03/28/gamification-and-the-behavior-change-lifecycle.aspx) How can we use this understanding to establish our target behaviors? Keep in mind that our objectives are to increase interaction through discussion and increase consideration for other perspectives. According to our understanding of changing behavior around our objectives, we need our users to: Become aware of their discussion frequency with other users Become aware that other perspectives exist Commit to more discussions with other users Commit to considering other users' perspectives Learn how to have more discussions with other users Learn about other users' perspectives Have more discussions with other users Actually consider other users' perspectives Continue to have more discussions with other users on a consistent basis Continue to consider other users' perspectives over time This outlines the list of activities that needs to be performed for our systems to meet our objectives. Of course, some of our target behaviors will be clear. In other cases, it will require some creativity on our part to get users to take these actions. So what are some possible actions that we can have our users take to move them along the behavior change life cycle? Check their discussion thread count Review the Differing Point of View section Set a target discussion amount for a particular time period Set a target number of Differing Points of View to review Watch a video (or some instructional material) on how to use the discussion area Watch a video (or some instructional material) on the value of viewing other perspectives Participate in the discussion groups Read through other users' discussions posts Participate in the discussion groups over time Read through other users' perspectives over time Some of these target behaviors are relatively straightforward to implement. Others will require more thought. More importantly, we have now identified the target behaviors we want our users to take. This will guide the rest of our development efforts. Players Although the last few sections have been about the serious side of things, such as objectives and target behaviors, we still have gamification as the focal point. Hence, from this point on we will refer to our users as players. We must keep in mind that although we have defined the actions that we want our players to take, the strategies to motivate them to take that action vary from player to player. Gamification is definitely not a one-size-fits-all process. We will have to look at each of our target behaviors from the perspective of our players. We must take their motivations into consideration, unless our mechanics are pretty much trial and error. 
We will need an approach that's a little more structured. According to the Bartle's Player Motivations theory, players of any game system fall into one of the following four categories: Killers: These are people motivated to participate in a gaming scenario with the primary purpose of winning the game by "acting on" other players. This might include killing them, beating, and directly competing with other players in the game. Achievers: These, on the other hand, are motivated by taking clear actions against the system itself to win. They are less motivated by beating an opponent than by achieving things to win. Socializers: These have very different motivations for participating in a game. They are motivated more by interacting and engaging with other players. Explorers: Like socializers, explorers enjoy interaction and engagement, but less with other players than with the system itself. The following diagram outlines each player motivation type and what game mechanic might best keep them engaged. Image source: http://frankcaron.com/Flogger/?p=1732 As we define our activity loops, we need to make sure that we include each of the four types of players and their motivations. Activity loops Gamified systems, like other systems, are simply a series of actions. The player acts on the system and the system responds. We refer to how the user interacts with the system as activity loops. We will talk about two types of activity loops, engagement loops and progression loops, to describe our player interactions. Engagement loops describe how a player engages the system. They outline what a player does and how the system responds. Activity will be different for players depending on their motivations, so we must also take into consideration why the player is taking the action he is taking. A progression loop describes how the player engages the system as a whole. It outlines how he/she might progress through the game itself. Whereas engagement loops discuss what the player does on a detailed level, progression loops outline the movement of the player through the system. For example, when a person drives a car, he/she is interacting with the car almost constantly. This interaction is a set of engagement loops. All the while, the car is going somewhere. Where the car is going describes its progression loops. Activity loops tend to follow the Motivation, Action, Feedback pattern. The players are sufficiently motivated to take an action. When the players take the action and they get a feedback from the system, the feedback hopefully motivates the players enough to take another action. They take that action and get more feedback. In a perfect world, this cycle would continue indefinitely and the players would never stop playing our gamified system. Our goal is to get as close to this continuous activity loop as we possibly can. Progression loops We have spent the last few pages looking at the detailed interactions that a player will have with the system in our engagement loops. Now it's time to turn our attention to the other type of activity loop, the progression loop. Progression loops look at the system at a macro level. They describe the player's journey through the system. We usually think about levels, badges, and/or modes when we are thinking about progression loops We answer questions such as: where have you been, where are you now, and where are you going. This can all be summed up into codifying the player's mastery level. 
In our application, we will look at the journey from the vantage point of a novice, an expert, and a master. Upon joining the game, players will begin at novice level. At novice level we will focus on: Welcome On-boarding and getting the user acclimated to using the system Achievable goals In the Welcome stage, we will simply introduce the user to the game and encourage him/her to try it out. Upon on-boarding, we need to make the process as easy as possible and give back positive feedback as soon as possible. Once the user is on board, we will outline the easiest way to get involved and begin the journey. At the expert level, the player is engaging regularly in the game. However, other players would not consider this player a leader in the game. Our goal at this level is to present more difficult challenges. When the player reaches a challenge that is appearing too difficult, we can include surprise alternatives along the way to keep him/her motivated until they can break through the expert barrier to master level. The game and other players recognize masters. They should be prominently displayed within the game and might tend to want to help others at novice and expert levels. These options should become available at later stages in the game. Fun After we have done the work of identifying our objectives, defining target behaviors, scoping our players, and laying out the activities of our system, we can finally think about the area of the system where many novice game designers start: the fun. Other gamification practitioners will avoid, or at least disguise, the fun aspect of the gamification design process. It is important that we don't over or under emphasize the fun in the process. For example, chefs prepare an entire meal with spices, but they don't add all spices together. They use the spices in a balanced amount in their cooking to bring flavor to their dishes. Think of fun as an array of spices that we can apply to our activity loops. Marc Leblanc has categorized fun into eight distinct categories. We will attempt to sprinkle just enough of each, where appropriate, to accomplish the desired amount of fun. Keep in mind that what one player will experience as fun will not be the same for another. One size definitely does not fit all in this case. Sensation: A pleasurable experience Narrative: An unfolding story Challenge: An obstacle course Fantasy: Make believe Fellowship: A social framework Discovery: Exploring uncharted territory Expression: Player is given a platform Submission: Mindless activity So how can we sparingly introduce the above dimensions of fun in our system? Action to take Dimension of fun Check their discussion thread count Challenge Review a differing point of the View section Discovery Set a target discussion  amount for a particular time period Challenge Set a target number of "Differing Points of View" to review Challenge Watch a video (or some instructional material) on the how to use the discussion area Challenge Watch a video (or some instructional material) on the value of viewing other perspectives Challenge Participate in the discussion groups Fellowship Expression Read through other users' discussions posts Discovery Participate in the discussion groups over time Fellowship Expression Read through other users' perspectives over time Discovery Tools We are finally at the stage from where we can begin implementation. At this point, we can look at the various game elements (tools) to implement our gamified system. 
If we have followed the framework upto this point, the mechanics and elements should become apparent. We are not simply adding leader boards or a point system for the sake of it. We can tie all the tools we use back to our previous work. This will result in a Gamification Design Matrix for our application. But before we go there, let's stop and take a look at some tools we have at our disposal. There are a myriad of tools, mechanics, and strategies at our disposal. New ones are being designed everyday. Here are a few of the most common mechanics that we will encounter when designing our gamified system: Achievements: These are specific objectives that a player meets. Avatars: These are visual representations of a player's role, persona, or character in a game. Badges: These are visual elements used to recognize a particular accomplishment. They give players a sense of pride that they can show off to others. Boss fight: This is an exceptionally difficult challenge in a game scenario, usually at the end of a level to demonstrate enough skill level to move up to the next level. Leaderboards: These show rankings of players publicly. They recognize an accomplishment like a badge, but they are visible for all to see. We see this almost every day, in every way from sports team rankings to sales rep monthly results. Points: These are rather straightforward. Players accumulate points and take various actions in the system. Quests/Mission: These are specialized challenges in a game scenario having narrative and objective as characteristics. Reward: This is anything used to extrinsically motivate the user to take a particular action. Team: This is a group of players playing as a single unit. Virtual assets: These are elements in the game that have some value and can be acquired or used to acquire other assets, whether tangible or virtual. Now it's time to turn and take off our gamification design hat and put on our developer hat. Let's start by developing some initial mockups of what our final site might look like using the design we have outlined previously. Many people develop mockups using graphics tools such as Photoshop or Gimp. At this stage, we will be less detailed in our mockups and simply use pencil sketches or a mockup tool such as Balsamiq. Login screen This is a mock-up of the basic login screen in our application. Players are accustomed to a basic login and password scenario we provide here. Account creation screen First time players will have to create an account initially. This is the mock-up of our signup page. Main Player Screen This captures the main elements of our system when a player is fully engaged with the system. Main Player Post Response Screen We have outlined the key functionality of our gamified system via mock-ups. Mock-ups are a means of visually communicating to our team what we are building and why we are building it. Visual mock-ups also give us an opportunity to uncover issues in our design early in the process. Summary Most gamified applications will fail due to a poorly designed system. Hence, we have introduced a Gamification Design Framework to guide our development process. 
We know that our chances of developing a successful system increase tremendously if we: Define clear business objectives Establish target behaviors Understand our players Work through the activity loops Remember the fun Optimize the tools Resources for Article: Further resources on this subject: An Introduction to PHP-Nuke [Article] Installing phpMyAdmin [Article] Getting Started with jQuery [Article]
Read more
  • 0
  • 0
  • 2565

article-image-drupal-and-ubercart-2x-install-ready-made-drupal-theme
Packt
31 Mar 2010
5 min read
Save for later

Drupal and Ubercart 2.x: Install a Ready-made Drupal Theme

Packt
31 Mar 2010
5 min read
Install a ready-made Drupal theme We have to admit that Drupal was not famous for its plethora of available themes. Until recently, the Drupal community was focused on developing the backend, debugging the code, and creating new modules. The release of Drupal 6 made theming much easier and helped the theming community to grow. Now, there are not only thousands of Drupal themes, but also dozens of themes designed and customized especially for Ubercart. Basic principles when choosing a theme Choosing a theme for your online shop is not an easy task. Moreover, it can be even harder considering that you want to promote specific items from your catalog, you need to change first page items often, and you need to rapidly communicate offers and loyalty policies and other business-related stuff. Ubercart-specific themes mostly target the following special areas: Product catalog Shopping cart Product-specific views You should keep these layout regions in mind, while going through the following section on theme selection. Before you search for any kind of theme layout, provide your neurons with enough input to inspire you and help you decide. Perform a quick Google search for online shops in your target market to get some inspiration and track down sites that make you, as a customer, feel comfortable during product searching and navigation. If you decide to search for professional help, a list of existing sites will help you to communicate your preferences much more directly. What better place to search for inspiration and successful practices than Ubercart's live site repository! You will find good practices and see how mostly people like you (without any development background) have solved all the problems that might occur during your search for themes.http://www.ubercart.org/site Next we describe the main user interface components that you should keep in mind when deciding for your online shop: Number of columns: The number of columns depends on the block information you want to provide to your end customers. If you need widgets that display on every page, information about who bought what, and product or kit suggestions, go with three columns. You will find a plethora of two-column Drupal themes and many three-column Drupal themes, while some of them can alternate between two and three columns. Color scheme: From a design perspective, you should choose a color scheme that matches your company logo and business profile. For instance, if your store sells wooden toys, go with something more comic such as rounded corners, but if you are a consulting firm, you should go with something more professional. Many themes let you choose color schemes dynamically; however, always keep in mind that color is a rather easy modification from the CSS. You can get great color combination ideas from COLOURlovers online service (http://www.colourlovers.com/) that match your logo and business colors. Be careful though. If you choose a complex theme with rounded corners, lots of images, and multiple backgrounds, it may be difficult to modify it. Drupal version: Make sure the Drupal theme you choose is compatible with the version of Drupal you are running. Before using a Drupal theme, look up notes on the theme to see if there are any known bugs or problems with it. If you are not a programmer, you do not want a Drupal theme that has open issues. Extra features: Many Drupal themes expose a large set of configuration options to the end users. 
Various functionality such as post author's visibility or color scheme selection are welcome for managing the initial setup. Moreover, you can change appearance in non-invasive ways for your online marque. Regions available: We have discussed column layouts, but for the Drupal template engine to show its full capabilities and customization, you definitely need multiple regions. The more regions, the more choices you have for where to put blocks of content. Therefore, you can have space for customizing new affiliate ads for instance, or provide information about some special deals, or even configure your main online shop page, as we will see in the next section. Further customization and updates: When you choose your theme, don't just keep the functionality of version 1.0 in mind, but consider all of the future business plans for approaching your target market and raising sales figures. Make a three-year plan and try to visualize any future actions that should be taken into account from day one. Although you can change themes easily, you are better off choosing a more flexible theme ahead of time than having to change the theme as your website grows. Always bear in mind that the famous razor of Occam also applies to online shop theme design. Keep it simple and professional by choosing simple layouts, which allow ease of use for the end user and ease further customize designs and themes (changing colors, adding a custom image header, and so on). Before you start, clearly define your timeline, risks, total theme budget, and skills. Theming is usually 25-40% of the budget of an entire online shop project. Drupal's theming engine closely integrates with actual functionality and many features are encapsulated inside the theme itself. There are a number of different ways in which you can get yourself the best theme for your online store. We will go through all these approaches with useful comments on what options best suits your needs.
Read more
  • 0
  • 0
  • 2562

article-image-database-interaction-codeigniter-17
Packt
26 Apr 2010
4 min read
Save for later

Database Interaction with Codeigniter 1.7

Packt
26 Apr 2010
4 min read
(Read more interesting articles on CodeIgniter 1.7 Professional Development here.) Loading the library Loading the Database library is slightly different from loading other libraries. This is because it is large and resides in a different folder, unlike the other libraries. $this->load->database(); Performing simple queries Let's dive straight in by starting with the simple stuff. CodeIgniter gives us a function that we can pass a SQL Query to, and the query will be run on the database. Here's how it works: $this->db->query('PUT YOUR SQL HERE'); This function is incredibly simple to use; you simply use this function in place of any native PHP functions you would use to run queries. This function will return TRUE or FALSE for write queries, and will return a dataset for read queries. There is another function that you can use for very simple queries; this will only return TRUE or FALSE. It won't let you cache your query or run the query timer. In most cases you won't want to use this function. $this->db->simple_query('PUT YOUR SQL HERE'); The SQL code that you pass to these functions are database-dependent. Only Active Record queries are independent of any type of Database SQL. Returning values You can assign the function $this->db->query() to a variable. You can then run a number of helper functions on the variable in order to return the data in different formats. Take the following example: $query = $this->db->query('SELECT * FROM 'users''); Return a result object In this case, returning the result will return an array of objects, or an empty array if the query failed. You would usually use this function in a foreach loop. foreach($query->result() as $row){ echo $row->username; echo $row->email;} If your query does not return a result, the CodeIgniter User Guide encourages you to check for a failure before using this function. if($query->num_rows > 0){ foreach($query->result() as $row) { echo $row->username; echo $row->email; }} Returning a result array You are also able to return the result dataset as an array. Typically, you would use this function inside a foreach loop as well. foreach($query->result_array() as $row){ echo $row['username']; echo $row['email'];} Returning a row object If your query is only expected to return a single result, you should return the row by using the following function. The row is returned as an object. if($query->num_rows() > 0){$row = $query->row();echo $row->username;echo $row->email;} You can return a specific row by passing the row number as a digit in the first parameter. $query->row(2); Returning a row array You can return a row as an array, if you prefer. The function is used in the same way as the previous example. if($query->num_rows() > 0){ $row = $query->row_array(); echo $row['username']; echo $row['email'];} You can return a numbered row by passing the digit to the first parameter, also. $query->row_array(2); Result helper functions Besides the helper function that helps to return the dataset in different ways, there are some other more generalized helper functions. Number of rows returned Used in the same way as the other helper functions, this will return the total number of rows returned from a query. Take the following example: echo $query->num_rows(); Number of fields returned Just like the previous function, this will return the number of fields returned by your query. echo $query->num_fields(); Free result This function will remove the resource ID associated with your query, and free the associated memory. 
PHP will usually do this by default, although when using many queries you may wish to use this to free up memory space. $query->free_result();
Read more
  • 0
  • 0
  • 2561

article-image-cms-made-simple-16-learning-smarty-basics
Packt
04 Mar 2010
8 min read
Save for later

CMS Made Simple 1.6: Learning Smarty Basics

Packt
04 Mar 2010
8 min read
Working with Smarty Variables Smarty variables are much simpler than complex Smarty plugins. They are placeholders that contain plain information about the actual page ID, page alias, or position of the page in the hierarchy. Some Smarty variables that you are not aware of, are already defined in your template. You do not need to know or remember all of them if you know how you can figure out their names and values. Time for action – getting Smarty variables We are going to get the number of the page in the page hierarchy to integrate this information into the design of the page title. How do we figure out the name of the Smarty variable that contains this information? We can get it from the template as follows: In the admin console, click on Layout | Templates. Open the Business World template for edit and add the plugin {get_template_vars} just before the last tag, as shown in the following code snippet: <!DOCTYPE html> <html> <head> <title>{title} - {sitename}</title> {stylesheet} {metadata} <meta name="description" content="" /> </head> <body> ........... {get_template_vars} </div> </body> </html> Click on Apply and then click on the magnifying glass icon on the top-right corner of the admin console to see the result. It should now look like the following screenshot: What just happened? With the Smarty {get_template_vars} plugin, you displayed all Smarty variables available in your template. In the list of variables on each line, one variable is displayed with its name and its value separated by an equals sign. These values change from page to page. For example, the variable with the name friendly_position contains the position of the page in the page hierarchy. If you navigate to other pages, you will see that the value of this variable is different on every page. How do you add a variable in your template? Smarty variables are enclosed in curly brackets as well, but unlike the Smarty plugins, they have a dollar sign at the beginning. To use the variable friendly_position, you just need to add the following Smarty tag to your template: {$friendly_position} You can delete the {get_template_vars} plugin now. It is helpful for you to see which Smarty variables exist and what values are stored there. You can add this plugin again, when you need to look for another variable. Let us use the information we have learned about Smarty plugins and Smarty variables by combining them both to create a title of the page. Open the template Business World (Layout | Templates)for editing and change the title of the page between the body tags and before the tag {content} shown as follows: <h1><span>{$friendly_position}</span> {title}</h1> Then open Business World Style Sheet for editing (Layout | Stylesheets), and add a CSS style to format the title of the page: h1 span { color: #ffffff; background: #cccccc; padding: 0 5px;} The result of the above formating should look as shown in the following screenshot: You  can use any Smarty variable from the template, except for variables with the value Array(). We will look at these special variables in the following section. Controlling output with the IF function You can create numerous templates for your website and assign different templates to different pages. This is useful if you use layouts with a different number of columns. However, sometimes there is only a tiny difference between the templates, and it is not efficient to create a new template each time you need only slight changes. 
For example, imagine you would like to display the last editor of the page, as we did with the {last_modified_by}tag. It is a useful piece of information on most pages but we would like to hide it on the contact page. You do not need to create a new template where this tag is not added. For such slight changes, it is better to know how to control the output in the same template with an IF structure. Time for action – displaying tags in dependence of the page We  are going to hide the {last_modified_by} tag on the page Contact Us. However, it has to be still displayed on all other pages. Open the template Business World for editing (Layout | Templates). Add the Smarty IF code around the {last_modified_by…} tag, as shown in the following code snippet: <!DOCTYPE html> <html> <head> <title>{title} - {sitename}</title> {stylesheet} {metadata} <meta name="description" content="" /> </head> <body> <div id="container"> <div id="header"> businessWorld </div> <div id="top-navi"> {menu number_of_levels="1" template="minimal_menu.tpl"} </div> <div id="content"> <h1>{title}</h1> {content} {if $page_alias neq "contact-us"} <p>Last modified by {last_modified_by format= "fullname"}</p> {/if} </div> <div id="sidebar"> {menu start_level="2" template="minimal_menu.tpl"} </div> <div id="footer"> 2009 businessWorld </div> </div> </body> </html> Click on Apply and then click on the magnifying glass icon in the top-right corner of the admin console to see the result. What just happened? The IF code that you have added around the paragraph containing the last modification causes CMS to check the page alias of the displayed page. If the page alias is equal to "contact-us", then everything between the IF structure is not shown, otherwise the information about the last modification is displayed. You have seen from the previous section that CMS knows what page of our website is currently being displayed. This information is stored in the Smarty variable {$page_alias}. With the built-in IF function, you can compare the page alias of the actual page with the page alias of the page Contact Us. If the value of the variable {$page_alias} is NOT equal to contact-us, then everything between the IF tags is displayed. If the page alias is equal to contact-us, then nothing is displayed. In this way, you can control the output of the template depending on the page alias.   The abbreviation neq (meaning not equal) between the variable {$page_alias} and the value contact-us is called a Qualifier. Qualifiers are used to build a logical condition in the IF code. The result of the logical condition can be true or false. If the result of the IF condition is true (and it is true if the page alias IS NOT EQUAL to contact-us), then everything placed in between the IF tags is displayed. If the result of the IF condition is false (and it is only false if the page alias IS EQUAL to contact-us), then everything between the IF tags is suppressed. There are more qualifiers that can be used to build logical conditions in Smarty. Some of them are listed in the following table: The IF structure is a useful tool for handling slight changes in one template depending on the page name or the position in the hierarchy. In the preceding example, you saw that you can use every variable from the template to build a logical condition. Creating navigation template with Smarty loop You can also change the HTML markup of the navigation. Before you can learn this principle, you have to understand some Smarty basics. 
When we added the top navigation to the website, we used a standard template for the navigation, which displays the navigation as an unordered HTML list. Imagine that you need a footer navigation in which all the links from the top navigation are shown. You do not need an unordered HTML list in this case; you would just like to show all the links in one line, separated by a pipe (|), as follows: Our Company | Announcements | History | Team | Photo gallery. This means that you need completely different HTML markup for this kind of navigation. The great advantage of CMS Made Simple is the ability to display a template within a template. While the main template defines the whole layout of the page, the HTML markup of the navigation is saved in its own template. This navigation template is just a piece of HTML code that is inserted into the main template at the place where the {menu} tag is placed.
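The following is a minimal sketch of what such a pipe-separated footer menu template could look like. It assumes the variables that the Menu Manager module exposes to its templates ($count, $nodelist, and each node's url and menutext properties); the exact names can vary between CMS Made Simple versions, so treat this as an illustration rather than a drop-in template.

{* footer_menu.tpl - a one-line, pipe-separated menu *}
{if $count > 0}
  {foreach from=$nodelist item=node name=footer}
    <a href="{$node->url}">{$node->menutext}</a>{if !$smarty.foreach.footer.last} | {/if}
  {/foreach}
{/if}

Saved as its own navigation template, it would then be pulled into the main template with a tag such as {menu number_of_levels="1" template="footer_menu.tpl"}, just like the minimal_menu.tpl template used earlier.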
Seam and AJAX

Packt
22 Oct 2009
11 min read
What is AJAX? AJAX (Asynchronous JavaScript and XML) is a technique rather than a new technology for developing highly interactive web applications. Traditionally, when JavaScript is written, it uses the browser's XMLHttp DOM API class to make asynchronous calls to a server-side component, for example, servlets. The server-side component generates a resulting XML package and returns this to the client browser, which can then update the browser page without having to re-render the entire page. The result of using AJAX technologies (many different technologies can be used to develop AJAX functionality, for example, PHP, Microsoft .NET, Servlets, and Seam) is to provide an appearance similar to a desktop, for web applications. AJAX and the Seam Framework The Seam Framework provides built-in support for AJAX via its direct integration with libraries such as RichFaces and AJAX4JSF. Discussing the AJAX support of RichFaces and AJAX4JSF could fill an entire book, if not two books, so we'll discuss these technologies briefly, towards the end of this article, where we'll give an overview of how they can be used in a Seam application. However, Seam provides a separate technology called Seam Remoting that we'll discuss in detail in this article. Seam Remoting allows a method on Seam components to be executed directly from JavaScript code running within a browser, allowing us to easily build AJAX-style applications. Seam Remoting uses annotations and is conversation-aware, so that we still get all of the benefits of writing conversationally-aware components, except that we can now access them via JavaScript as well as through other view technologies, such as Facelets. Seam Remoting provides a ready-to-use framework, making AJAX applications easier to develop. For example, it provides debugging facilities and logging facilities similar to the ones that we use everyday when writing Java components. Configuring Seam applications for Seam Remoting To use Seam Remoting, we need to configure the Seam web application to support JavaScript code that is making asynchronous calls to the server back end. In a traditional servlet-based system this would require writing complex servlets that could read, parse, and return XML as part of an HTTP GET or POST request. With Seam Remoting, we don't need to worry about managing XML data and its transport mechanism. We don't even need to worry about writing servlets that can handle the communication for us—all of this is a part of the framework. To configure a web application to use Seam Remoting, all we need to do is declare the Seam Resource servlet within our application's WEB-INF/web.xml file. We do this as follows. <servlet> <servlet-name>Seam Resource Servlet</servlet-name> <servlet-class> org.jboss.seam.servlet.SeamResourceServlet </servlet-class></servlet><servlet-mapping> <servlet-name>Seam Resource Servlet</servlet-name> <url-pattern>/seam/resource/*</url-pattern></servlet-mapping> That's all we need to do to make a Seam web application work with Seam Remoting. To make things even easier, this configuration is automatically done when applications are created with SeamGen, so you would have to worry about this configuration only if you are using non-SeamGen created projects. Configuring Seam Remoting server side To declare that a Seam component can be used via Seam Remoting, the methods that are to be exposed need to be annotated with the @WebRemote annotation. 
For simple POJO components, this annotation is applied directly on the POJO itself, as shown in the following code snippet. @Name("helloWorld")public class HelloWorld implements HelloWorldAction { @WebRemote public String sayHello() { return "Hello world !!";} For Session Beans, the annotation must be applied on the Session Beans business interface rather than on the implementation class itself. A Session Bean interface would be declared as follows. import javax.ejb.Local;import org.jboss.seam.annotations.remoting.WebRemote;@Localpublic interface HelloWorldAction { @WebRemote public String sayHello(); @WebRemote public String sayHelloWithArgs(String name);} The implementation class is defined as follows: import javax.ejb.Stateless;import org.jboss.seam.annotations.Name;@Stateless@Name("helloWorld")public class HelloWorld implements HelloWorldAction { public String sayHello() { return "Hello world !!"; } public String sayHelloWithArgs(String name) { return "Hello "+name; }} Note that, to make a method available to Seam Remoting, all we need to do is to annotate the method with @WebRemote and then import the relevant class. As we can see in the preceding code, it doesn't matter how many parameters our methods take. Configuring Seam Remoting client side In the previous sections, we've seen that minimal configuration is required to enable Seam Remoting and to declare Seam components as Remoting-aware. Similarly in this section, we'll see that minimal work is required within a Facelets file to enable Remoting. The Seam Framework provides built-in JavaScript to enable Seam Remoting. To use this JavaScript, we first need to define it within a Facelets file in the following way: <script type="text/javascript" src="/HelloWorld/seam/resource/ remoting/resource/remote.js"></script><script type="text/javascript" src="/HelloWorld/seam/resource/ remoting/interface.js?helloWorld"> To include the relevant JavaScript into a Facelets page, we need to import the /seam/resource/remoting/resource/remote.js and /seam/resource/remoting/interface.js JavaScript files. These files are served via the Seam resource servlet that we defined earlier in this article. You can see that the interface.js file takes an argument defining the name of the Seam component that we will be accessing (this is the name of the component for which we have defined methods with the @WebRemote annotation). If we wish to use two or more different Seam components from a Remoting interface, we would specify their names as parameters to the interface.js file separated by using an "&", for example: <script type="text/javascript" src="/HelloWorld/seam/resource/ remoting/interface.js?helloWorld&mySecondComponent& myThirdComponent"> To specify that we will use Seam components from the web tier is straight-forward, however, the Seam tag library makes this even easier. Instead of specifying the JavaScript shown in the preceding examples, we can simply insert the <s:remote /> tag into Facelets, passing the name of the Seam component to use within the include parameter. <ui:compositiontemplate="layout/template.xhtml"> <ui:define name="body"> <h1>Hello World</h1> <s:remote include="helloWorld"/> To use the <s:remote /> tag, we need to import the Seam tag library, as shown in this example. When the web page is rendered, Seam will automatically generate the relevant JavaScript. 
If we are using the <s:remote /> tag and we want to invoke methods on multiple Seam components, we need to place the component names as comma-separated values within the include parameter of the tag instead, for example: <s:remote include="helloWorld, mySecondComponent, myThirdComponent" /> Invoking Seam components via Remoting Now that we have configured our web application, defined the services to be exposed from the server, and imported the JavaScript to perform the AJAX calls, we can execute our remote methods. To get an instance of a Seam component within JavaScript, we use the Seam.Component.getInstance() method. This method takes one parameter, which specifies the name of the Seam component that we wish to interact with. Seam.Component.getInstance("helloWorld") This method returns a reference to Seam Remoting JavaScript to allow our exposed @WebReference methods to be invoked. When invoking a method via JavaScript, we must specify any arguments to the method (possibly there will be none) and a callback function. The callback function will be invoked asynchronously when the server component's method has finished executing. Within the callback function we can perform any JavaScript processing (such as DOM processing) to give our required AJAX-style functionality. For example, to execute a simple Hello World client, passing no parameters to the server, we could define the following code within a Facelets file. <ui:define name="body"> <h1>Hello World</h1> <s:remote include="helloWorld"/> <p> <button onclick="javascript:sayHello()">Say Hello</button> </p> <p> <div id="helloResult"></div> </p> <script type="text/javascript"> function sayHello() { var callback = function(result) { document.getElementById("helloResult").innerHTML=result; }; Seam.Component.getInstance("helloWorld"). sayHello(callback); } </script></ui:define> Let's take a look at this code, one piece at a time, to see exactly what is happening. <s:remote include="helloWorld"/> <p> <button onclick="javascript:sayHello()">Say Hello</button> </p> In this part of the code, we have specified that we want to invoke methods on the helloWorld Seam component by using the <s:remote /> tag. We've then declared a button and specified that the sayHello() JavaScript method will be invoked when the button is clicked. <div id="helloResult"></div> Next we've defined an empty <div /> called helloResult. This <div /> will be populated via the JavaScript DOM API with the results from out server side method invocation. <script type="text/javascript"> function sayHello() { var callback = function(result) { document.getElementById("helloResult").innerHTML=result; }; Seam.Component.getInstance("helloWorld"). sayHello(callback); }</script> Next, we've defined our JavaScript function sayHello(), which is invoked when the button is clicked. This method declares a callback function that takes one parameter. The JavaScript DOM API uses this parameter to set the contents of the helloResult <div /> that we have defined earlier. So far, everything that we've done here has been simple JavaScript and hasn't used any Seam APIs. Finally, we invoke the Seam component using the Seam.Component.getInstance().sayHello() method, passing the callback function as the final parameter. When we open the page, the following flow of events occurs: The page is displayed with appropriate JavaScript created via the<s:remote /> tag. The user clicks on the button. The Seam JavaScript is invoked, which causes the sayHello() method on the helloWorld component to be invoked. 
The server side component completes execution, causing the JavaScript callback function to be invoked. The JavaScript DOM API uses the results from the server method to change the contents of the <div /> in the browser, without causing the entire page to be refreshed. This process shows how we've developed some AJAX functionality by writing a minimal amount of JavaScript and, more importantly, without dealing with XML or the JavaScript XMLHttp class. The preceding code shows how we can invoke server side methods without passing any parameters. This code can easily be expanded to pass parameters, as shown in the following code snippet:

<s:remote include="helloWorld"/>
<p>
  <button onclick="javascript:sayHelloWithArgs()">
    Say Hello with Args
  </button>
</p>
<p>
  <div id="helloResult"></div>
</p>
<script type="text/javascript">
  function sayHelloWithArgs() {
    var name = "David";
    var callback = function(result) {
      document.getElementById("helloResult").innerHTML = result;
    };
    Seam.Component.getInstance("helloWorld").sayHelloWithArgs(name, callback);
  }
</script>

The preceding code shows that the process for invoking remote methods with parameters is similar to the process for invoking remote methods with no parameters. The important point to note is that the callback function is specified as the last parameter. When our simple application is run, we get the result shown in the following screenshot. Clicking on either of the buttons on the page causes our AJAX code to be executed, and the text of the <div /> component to be changed. If we want a server side method invoked via Seam Remoting to take part in a Seam conversation, we can use the Seam.Remoting.getContext().setConversationId() method to set the conversation ID. This ID will then be used by the Seam Framework to ensure that the AJAX request is a part of the appropriate conversation.

Seam.Remoting.getContext().setConversationId(#{conversation.id});
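As a rough sketch of how these pieces fit together in a conversational call, the function below sets the conversation ID (rendered into the page by the same EL expression as above) before invoking the remote method, and switches on Seam Remoting's debug mode so that the request and response packets can be inspected in a pop-up log window. The setDebug() call and the context API are as documented for Seam 2.x; verify them against the Seam version you are actually using.

<script type="text/javascript">
  // Hypothetical helper: run the remote call as part of the current conversation
  function sayHelloInConversation() {
    Seam.Remoting.setDebug(true); // opens the remoting debug/log window
    Seam.Remoting.getContext().setConversationId('#{conversation.id}');
    var callback = function(result) {
      document.getElementById("helloResult").innerHTML = result;
    };
    Seam.Component.getInstance("helloWorld").sayHello(callback);
  }
</script>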
Building a Facebook Application: Part 2

Packt
16 Oct 2009
11 min read
Mock AJAX and your Facebook profile I'm sure that you've heard of AJAX (Asynchronous JavaScript and XML) with which you can build interactive web pages. Well, Facebook has Mock AJAX, and with this you can create interactive elements within a profile page. Mock AJAX has three attributes that you need to be aware of: clickwriteform: The form to be used to process any data. clickwriteid: The id of a component to be used to display our data. clickwriteurl: The URL of the application that will process the data. When using Mock AJAX, our application must do two things: Return the output of any processed data (and we can do that by using either echo or print). Define a form with which we'll enter any data, and a div to receive the processed data Using a form on your profile Since we want to make our application more interactive, one simple way is to add a form. So, for our first example we can add a function (or in this case a set of functions) to appinclude.php that will create a form containing a simple combo-box: function country_combo () {/*You use this function to display a combo-box containing a list of countries. It's in its own function so that we can use it in other forms without having to add any extra code*/$country_combo = <<<EndOfText<select name=sel_country><option>England</option><option>India</option></select>EndOfText;return $country_combo;}function country_form () {/*Like country_combo-box we can use this form where ever needed because we've encapsulated it in its own function */global $appcallbackurl;$country_form = "<form>";$country_form .= country_combo ();$country_form .= <<<EndOfText<input type="submit" clickrewriteurl="$appcallbackurl" clickrewriteid="info_display" value="View Country"/><div id="info_display" style="border-style: solid; border-color: black; border-width: 1px; padding: 5px;">No country selected</div></form>EndOfText;return $country_form;}function display_simple_form () {/*This function displays the country form with a nice subtitle (on the Profile page)*/global $facebook, $_REQUEST;#Return any processed dataif (isset($_REQUEST['sel_country'])) { echo $_REQUEST['sel_country'] . " selected"; exit;}#Define the form and the div$fbml_text = <<<EndOfText<fb:subtitle><fb:name useyou=false uid=$user firstnameonly=true possessive=true> </fb:name> Suspect List</fb:subtitle>EndOfText;$fbml_text .= country_form ();$facebook->api_client->profile_setFBML($fbml_text, $user);echo $fbml_text;} And, of course, you'll need to edit index.php: display_simple_form (); You'll notice from the code that we need to create a div with the id info_display, and that this is what we use for the clickrewriteid of the submit button. You'll also notice that we're using $appcallbackurl for the clickrewriteurl ($appcallbackurl is defined in appinclude.php). Now, it's just a matter of viewing the new FMBL (by clicking on the application URL in the left-navigation panel): If you select a country, and then click on View Country, you'll see: I'm sure that you can see where we're going with this. The next stage is to incorporate this form into our Suspect Tracker application. And the great thing now is that because of the functions that we've already added to appinclude.php, this is now a very easy job: function first_suspect_tracker () {global $facebook, $_REQUEST;if (isset($_REQUEST['sel_country'])) { $friend_details = get_friends_details_ by_country ($_REQUEST['sel_country']); foreach ($friend_details as $friend) { $div_text .= "<fb:name uid=" . $friend['uid'] . 
" firstnameonly=false></fb:name>, "; } echo $div_text; exit;}$fbml_text .= country_form ();$facebook->api_client->profile_setFBML($fbml_text, $user);$facebook->redirect($facebook->get_facebook_url() . '/profile.php');} You may also want to change the country_form function, so that the submit button reads View Suspects. And, of course, we'll also need to update index.php. Just to call our new function: <?phprequire_once 'appinclude.php';first_suspect_tracker ();?> This time, we'll see the list of friends in the selected country: or: OK, I know what you're thinking, this is fine if all of your friends are in England and India, but what if they're not? And you don't want to enter the list of countries manually, do you? And what happens if someone from a country not in the list becomes your friend? Obviously, the answer to all of these questions is to create the combo-box dynamically. Creating a dynamic combo-box I'm sure that from what we've done so far, you can work out how to extract a list of countries from Facebook: function country_list_sql () {/*We're going to be using this piece of SQL quite often so it deserves its own function*/global $user;$country_list_sql = <<<EndSQLSELECT hometown_location.countryFROM userWHERE uid IN (SELECT uid1FROM friendWHERE uid2=$user)EndSQL;return $country_list_sql;}function full_country_list () {/*With the SQL in a separate function this one is very short and simple*/global $facebook;$sql = country_list_sql ();$full_country_list = $facebook-> api_client->fql_query($sql);print_r ($full_country_list);} However, from the output, you can see that there's a problem with the data: If you look through the contents of the array, you'll notice that some of the countries are listed more than once—you can see this even more clearly if we simulate building the combo-box: function options_country_list () {global $facebook;$sql = country_list_sql ();$country_list = $facebook->api_client->fql_query($sql);foreach ($country_list as $country){ echo "option:" . $country['hometown_location']['country'] ."<br>";}} From which, we'd get the output: This is obviously not what we want in the combo-box. Fortunately, we can solve the problem by making use of the array_unique method, and we can also order the list by using the sort function: function filtered_country_list () {global $facebook;$sql = country_list_sql ();$country_list = $facebook->api_client->fql_query($sql);$combo_full = array();foreach ($country_list as $country){ array_push($combo_full, $country['hometown_location']['country']);}$combo_list = array_unique($combo_full);sort($combo_list);foreach ($combo_list as $combo){ echo "option:" . $combo ."<br>";}} And now, we can produce a usable combo-box: Once we've added our code to include the dynamic combo-box, we've got the workings for a complete application, and all we have to do is update the country_combo function: function country_combo () {/*The function now produces a combo-box derived from the friends' countries */global $facebook;$country_combo = "<select name=sel_country>";$sql = country_list_sql ();$country_list = $facebook->api_client->fql_query($sql);$combo_full = array();foreach ($country_list as $country){ array_push($combo_full, $country['hometown_location']['country']);}$combo_list = array_unique($combo_full);sort($combo_list);foreach ($combo_list as $combo){ $country_combo .= "<option>" . 
$combo ."</option>";}$country_combo .= "</select>";return $country_combo;} Of course, you'll need to reload the application via the left-hand navigation panel for the result: Limiting access to the form You may have spotted a little fly in the ointment at this point. Anyone who can view your profile will also be able to access your form and you may not want that (if they want a form of their own they should install the application!). However, FBML has a number of if (then) else statements, and one of them is <fb:if-is-own-profile>: <?phprequire_once 'appinclude.php';$fbml_text = <<<EndOfText<fb:if-is-own-profile>Hi <fb:name useyou=false uid=$user firstnameonly=true></fb:name>, welcome to your Facebook Profile page.<fb:else>Sorry, but this is not your Facebook Profile page - it belongs to <fb:name useyou=false uid=$user firstnameonly=false> </fb:name>,</fb:else></fb:if-is-own-profile>EndOfText;$facebook->api_client->profile_setFBML($fbml_text, $user);echo "Profile updated";?> So, in this example, if you were logged on to Facebook, you'd see the following on your profile page: But anyone else viewing your profile page would see: And remember that the FBML is cached when you run: $facebook->api_client->profile_setFBML($fbml_text, $user); Also, don't forget, it is not dynamic that is it's not run every time that you view your profile page. You couldn't, for example, produce the following for a user called Fred Bloggs: Sorry Fred, but this is not Your Facebook Profile page - it belongs to Mark Bain That said, you are now able to alter what's seen on the screen, according to who is logged on. Storing data—keeping files on your server From what we've looked at so far, you already know that you not only have, but need, files stored on your server (the API libraries and your application files). However, there are other instances when it is useful to store files there. Storing FBML on your server In all of the examples that we've worked on so far, you've seen how to use FBML mixed into your code. However, you may be wondering if it's possible to separate the two. After all, much of the FBML is static—the only reason that we include it in the code is so that we can produce an output. As well as there may be times when you want to change the FBML, but you don't want to have to change your code every time you do that (working on the principle that the more times you edit the code the more opportunity there is to mess it up). And, of course, there is a simple solution. Let's look at a typical form: <form><div id="info_display" style="border-style: solid; border-color: black; border-width: 1px; padding: 5px;"></div><input name=input_text><input type="submit" clickrewriteurl="http://213.123.183.16/f8/penguin_pi/" clickrewriteid="info_display" value="Write Result"></form> Rather than enclosing this in $fbml_text = <<<EndOfText ... EndOfText; as we have done before, you can save the FBML into a file on your server, in a subdirectory of your application. For example /www/htdocs/f8/penguin_pi/fbml/form_input_text.fbml. "Aha" I hear your say, "won't this invalidate the caching of FBML, and cause Facebook to access my server more often than it needs?" Well, no, it won't. It's just that we need to tell Facebook to update the cache from our FBML file. 
So, first we need to inform FBML that some external text needs to be included, by making use of the <fb:ref> tag, and then we need to tell Facebook to update the cache by using the fbml_refreshRefUrl method: function form_from_server () {global $facebook, $_REQUEST, $appcallbackurl, $user;$fbml_file = $appcallbackurl . "fbml/form_input_text.fbml";if (isset($_REQUEST['input_text'])) { echo $_REQUEST['input_text']; exit;}$fbml_text .= "<fb:ref url='" . $fbml_file . "' />";$facebook->api_client->profile_setFBML($fbml_text, $user);$facebook->api_client->fbml_refreshRefUrl($fbml_file);echo $fbml_text;} As far as your users are concerned, there is no difference. They'll just see another form on their profile page: Even if your users don't appreciate this leap forward, it will make a big difference to your coding—you're now able to isolate any static FBML from your PHP (if you want). And now, we can turn our attention to one of the key advantages of having your own server—your data. Storing data on your server So far, we've concentrated on how to extract data from Facebook and display it on the profile page. You've seen, for example, how to list all of your friends from a given country. However, that's not how Pygoscelis' list would work in reality. In reality, you should be able to select one of your friends and add them to your suspect list. We will, therefore, spend just a little time on looking at creating and using our own data. We're going to be saving our data in files, and so your first job must be to create a directory in which to save those files. Your new directory needs to be a subdirectory of the one containing your application. So, for example, on my Linux server I would do: cd /www/htdocs/f8/penguin_pi       #Move to the application directory mkdir data #Create a new directory chgrp www-data data                          #Change the group of the directory chmod g+w data                                  #Ensure that the group can write to data
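With the data directory in place, reading and writing the suspect list can be done with plain PHP file functions. The helpers below are only a sketch of one way to store a per-user list as a text file inside that directory; the function names are hypothetical and not part of the Facebook client library, so adapt them to fit appinclude.php.

function suspect_file_path ($user) {
  #Assumes the data directory created above sits next to this script
  return dirname(__FILE__) . "/data/suspects_" . $user . ".txt";
}

function add_suspect ($user, $suspect_uid) {
  #Append the suspect's uid on its own line
  file_put_contents (suspect_file_path ($user), $suspect_uid . "\n", FILE_APPEND);
}

function get_suspects ($user) {
  $path = suspect_file_path ($user);
  if (!file_exists ($path)) {
    return array();
  }
  #One uid per line, ignoring empty lines
  return array_filter (array_map ('trim', file ($path)));
}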
Enhancing the User Experience with PHP 5 Ecommerce: Part 2

Packt
29 Jan 2010
6 min read
Providing wish lists
Wish lists allow customers to maintain a list of products that they would like to purchase at some point, or that they would like others to purchase for them as a gift.
Creating the structure
To effectively maintain wish lists for customers, we need to keep a record of:
- The product the customer desires
- The quantity of the product
- If they are a logged-in customer, their user ID
- If they are not a logged-in customer, some way to identify their wish-list products for the duration of their visit to the site
- The date they added the products to their wish list
- The priority of the product in their wish list; that is, whether they really want the product, or it is something they wouldn't mind having
Let's translate that into a suitable database table that our framework can interact with:

Field | Type | Description
ID | Integer (Primary Key, Auto Increment) | A reference for the database
Product | Integer | The product the user wishes to purchase
Quantity | Integer | The number of them the user would like
User | Integer | The logged-in customer's user ID (0 for guests)
Date added | Datetime | The date they added the product to their wish list
Priority | Integer | Relative to other products in their wish list, how important this one is
Session ID | Varchar | The user's session ID (so they don't need to be logged in)
IP Address | Varchar | The user's IP address (so they don't need to be logged in)

By combining the session ID and IP address of the customer, along with the timestamp of when they added the product to their wish list, we can maintain a record of their wish list for the duration of their visit. Of course, they would need to register, or log in, before leaving the site for their wish list to be permanently saved. This also introduces an element of maintenance to this feature: once a customer who has not logged in closes their session, their wish-list data cannot be retrieved, so we would need to implement some garbage collection functions to prune this table. The following SQL represents this table:

CREATE TABLE `wish_list_products` (
  `ID` INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
  `product` INT NOT NULL,
  `quantity` INT NOT NULL,
  `user` INT NOT NULL,
  `dateadded` TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
  `priority` INT NOT NULL,
  `sessionID` VARCHAR( 50 ) NOT NULL,
  `IPAddress` VARCHAR( 50 ) NOT NULL,
  INDEX ( `product` )
) ENGINE = INNODB COMMENT = 'Wish list products';

ALTER TABLE `wish_list_products`
  ADD FOREIGN KEY ( `product` ) REFERENCES `book4`.`content` (`ID`)
  ON DELETE CASCADE ON UPDATE CASCADE;

Saving wishes
Now that we have a structure in place for storing wish-list products, we need a process to save them into the database. This involves a link or button placed on the product view, and either some modifications to our product controller or a wish-list controller to save the wish. As wish lists will have their own controller and model for viewing and managing the lists, we may as well add the functionality to the wish-list controller. So we will need:
- A controller
- A link in our product view
Wish-list controller
The controller needs to detect whether the user is logged in or not; if they are, it should add products to the user's wish list; otherwise, the product should be added to a session-based wish list, which lasts for the duration of the user's session. The controller also needs to detect whether the product is valid; we can do this by linking it up with the products model, and if it isn't a valid product, the customer should be informed. Let's look through a potential addProduct() method for our wish-list controller.
/** * Add a product to a user's wish list * @param String $productPath the product path * @return void */ We first check if the product is valid, by creating a new product model object, which informs us if the product is valid. private function addProduct( $productPath ){// check product path is a valid and active product$pathToRemove = 'wishlist/add/';$productPath = str_replace( $pathToRemove, '',$this->registry->getURLPath() );require_once( FRAMEWORK_PATH . 'models/products/model.php');$this->product = new Product( $this->registry, $productPath );if( $this->product->isValid(){// check if user is logged in or notif( $this->registry->getObject('authenticate')->loggedIn() == true ){//Assuming the user is logged in, we can also store their ID,// so the insert data is slightly different. Here we insert the// wish into the database.$wish = array();$pdata = $this->product->getData();$wish['product'] = $pdata['ID'];$wish['quantity'] = 1;$wish['user'] = $this->registry->getObject('authenticate')->getUserID();$this->registry->getObject('db')->insertRecords('wish_list_products', $wish );// inform the user$this->registry->getObject('template')->getPage()->addTag('message_heading', 'Product added to your wish list');$this->registry->getObject('template')->getPage()->addTag('message_heading', 'A ' . $pdata['name'].' has been added to your wish list');$this->registry->getObject('template')->buildFromTemplates('header.tpl.php', 'message.tpl.php','footer.tpl.php');} The customer isn't logged into the website, so we add the wish to the database, using session and IP address data to tie the wish to the customer. else{// insert the wish$wish = array();$wish['sessionID'] = session_id();$wish['user'] = 0;$wish['IPAddress'] = $_SERVER['REMOTE_ADDR'];$pdata = $this->product->getData();$wish['product'] = $pdata['ID'];$wish['quantity'] = 1;$this->registry->getObject('db')->insertRecords('wish_list_products', $wish );// inform the user$this->registry->getObject('template')->getPage()->addTag('message_heading','Product added to your wish list');$this->registry->getObject('template')->getPage()->addTag('message_heading', 'A ' . $pdata['name'].' has been added to your wish list');$this->registry->getObject('template')->buildFromTemplates('header.tpl.php', 'message.tpl.php','footer.tpl.php');}} The product wasn't valid, so we can't insert the wish, so we need to inform the customer of this. else{// we can't insert the wish, so inform the user$this->registry->getObject('template')->getPage()->addTag('message_heading', 'Invalid product');$this->registry->getObject('template')->getPage()->addTag('message_heading', 'Unfortunately, the product youtried to add to your wish list was invalid, and was notadded, please try again');$this->registry->getObject('template')->buildFromTemplates('header.tpl.php', 'message.tpl.php','footer.tpl.php');}} Add to wish list To actually add a product to our wish list, we need a simple link within our products view. This should be /wishlist/add/product-path. <p><a href="wishlist/add/{product_path}"title="Add {product_name} to your wishlist">Add to wishlist.</a></p> We could encase this link around a nice image if we wanted, making it more user friendly. When the user clicks on this link, the product will be added to their wish list and they will be informed of that.
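One loose end mentioned earlier is garbage collection for guest wish lists: rows tied only to a session ID and IP address become unreachable once the visitor's session ends. The snippet below is a minimal sketch of such a pruning step; it assumes the registry's database object can run a raw query (the method name used here, executeQuery(), is hypothetical, so substitute whatever your framework's database abstraction provides) and that guest rows are the ones stored with user = 0.

/**
 * Prune guest wish-list rows older than a day.
 * Sketch only: adjust the query method and the retention window to taste.
 */
private function pruneGuestWishes()
{
    $sql = "DELETE FROM wish_list_products
            WHERE user = 0
              AND dateadded < DATE_SUB(NOW(), INTERVAL 1 DAY)";
    $this->registry->getObject('db')->executeQuery($sql);
}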
Magento: Exploring Themes

Packt
16 Aug 2011
6 min read
  Magento 1.4 Themes Design Magento terminology Before you look at Magento themes, it's beneficial to know the difference between what Magento calls interfaces and what Magento calls themes, and the distinguishing factors of websites and stores. Magento websites and Magento stores To add to this, the terms websites and stores have a slightly different meaning in Magento than in general and in other systems. For example, if your business is called M2, you might have three Magento stores (managed through the same installation of Magento) called: Blue Store Red Store Yellow Store In this case, Magento refers to M2 as the website and the stores are Blue Store, Red Store, and Yellow Store. Each store then has one or more store views associated with it too. The simplest Magento website consists of a store and store view (usually of the same name): A slightly more complex Magento store may just have one store view for each store. This is a useful technique if you want to manage more than one store in the same Magento installation, with each store selling different products (for example, the Blue Store sells blue products and the Yellow Store sells yellow products). If a store were to make use of more than one Magento store view, it might be, to present customers with a bi-lingual website. For example, our Blue Store may have an English, French, and Japanese store view associated with it: Magento interfaces An interface consists of one or more Magento themes that comprise how your stores look and function for your customers. Interfaces can be assigned at two levels in Magento: At the website level At the store view level If you assign an interface at the website level of your Magento installation, all stores associated with the interface inherit the interface. For example, imagine your website is known as M2 in Magento and it contains three stores called: Blue Store Red Store Yellow Store If you assign an interface at the website level (that is, M2), then the subsequent stores, Blue Store, Red Store, and Yellow Store, inherit this interface: If you assigned the interface at the store view level of Magento, then each store view can retain a different interface: Magento packages A Magento package typically contains a base theme, which contains all of the templates, and other files that Magento needs to run successfully, and a custom theme. Let's take a typical example of a Magento store, M2. This may have two packages: the base package, located in the app/design/frontend/base/ directory and another package which itself consists of two themes: The base theme is in the app/design/frontend/base/ directory. The second package contains the custom theme's default theme in the app/design/frontend/ default/ directory, which acts as a base theme within the package. The custom theme itself, which is the non-default theme, is in the app/design/frontend/our-custom- theme/default/ and app/design/frontend/our-custom-theme/custom-theme/ directories. By default, Magento will look for a required file in the following order: Custom theme directory: app/design/frontend/our-custom-theme/ custom-theme/ Custom theme's default directory: app/design/frontend/our-custom-theme/ default/ Base directory: app/design/frontend/base/ Magento themes A Magento theme fits in to the Magento hierarchy in a number of positions: it can act as an interface or as a store view. 
There's more to discover about Magento themes yet, though there are two types of Magento theme: a base theme (this was called a default theme in Magento 1.3) and a non-default theme. Base theme A base theme provides all conceivable files that a Magento store requires to run without error, so that non-default themes built to customize a Magento store will not cause errors if a file does not exist within it. The base theme does not contain all of the CSS and images required to style your store, as you'll be doing this with our non-default theme. Don't change the base package! It is important that you do not edit any files in the base package and that you do not attempt to create a custom theme in the base package, as this will make upgrading Magento fully difficult. Make sure any custom themes you are working on are within their own design package; for example, your theme's files should be located at app/design/ frontend/your-package-name/default and skin/frontend/ your-package-name/default. Default themes A default theme in Magento 1.4 changes aspects of your store but does not need to include every file required by Magento as a base theme does, though it must just contain at least one file for at least one aspect of a theme (that is, locales, skins, templates, layout): Default themes in Magento 1.3 In Magento 1.3, the default theme acted the way the base theme did in Magento 1.4, providing every file that your Magento store required to operate. Non-default themes A non-default theme changes aspects of a Magento store but does not need to include every file required by Magento as the base theme does; it must just contain at least one file for at least one aspect of a theme (that is, locales, skins, templates, layout): In this way, non-default themes are similar to a default theme in Magento. Non-default themes can be used to alter your Magento store for different seasonal events such as Christmas, Easter, Eid, Passover, and other religious festivals, as well as events in your industry's corporate calendar such as annual exhibitions and conferences. Blocks in Magento Magento uses blocks to differentiate between the various components of its functionality, with the idea that this makes it easier for Magento developers and Magento theme designers to customize the functionality of Magento and the look and feel of Magento respectively. There are two types of blocks in Magento: Content blocks Structural blocks Content blocks A content block displays the generated XHTML provided by Magento for any given feature. Content blocks are used within Magento structural blocks. Examples of content blocks in Magento include the following: The search feature Product listings The mini cart Category listings Site navigation links Callouts (advertising blocks) The following diagram illustrates how a Magento store might have content blocks positioned within its structural blocks: Simply, content blocks are the what of a Magento theme: they define what type of content appears within any given page or view within Magento. Structural blocks In Magento, a structural block exists only to maintain a visual hierarchy to a page. Typical structural blocks in a Magento theme include: Header Primary area Left column Right column Footer
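The relationship between structural and content blocks is easiest to see in a layout XML file. The fragment below is a sketch of how a content block might be slotted into the right column structural block of a custom theme; the block name and template path are made up for illustration, but the <reference>/<block> pattern is the standard Magento 1.x layout mechanism.

<!-- app/design/frontend/your-package-name/default/layout/local.xml (illustrative path) -->
<layout version="0.1.0">
    <default>
        <!-- "right" is a structural block defined by the theme -->
        <reference name="right">
            <!-- this content block renders a template of our own -->
            <block type="core/template" name="promo.callout"
                   template="callouts/promo.phtml"/>
        </reference>
    </default>
</layout>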
Microsoft Silverlight 5: Working with Services

Packt
23 Apr 2012
11 min read
(For more resources on silverlight, see here.) Introduction Looking at the namespaces and classes in the Silverlight assemblies, it's easy to see that there are no ADO.NET-related classes available in Silverlight. Silverlight does not contain a DataReader, a DataSet, or any option to connect to a database directly. Thus, it's not possible to simply define a connection string for a database and let Silverlight applications connect with that database directly. The solution adds a layer on top of the database in the form of services. The services that talk directly to a database (or, more preferably, to a business and data access layer) can expose the data so that Silverlight can work with it. However, the data that is exposed in this way does not always have to come from a database. It can come from a third-party service, by reading a file, or be the result of an intensive calculation executed on the server. Silverlight has a wide range of options to connect with services. This is important as it's the main way of getting data into our applications. In this article, we'll look at the concepts of connecting with several types of services and external data. We'll start our journey by looking at how Silverlight connects and works with a regular service. We'll see the concepts that we use here recur for other types of service communications as well. One of these concepts is cross-domain service access. In other words, this means accessing a service on a domain that is different from the one where the Silverlight application is hosted. We'll see why Microsoft has implemented cross-domain restrictions in Silverlight and what we need to do to access externally hosted services. Next, we'll talk about working with the Windows Azure Platform. More specifically, we'll talk about how we can get our Silverlight application to get data from a SQL Azure database, how to communicate with a service in the cloud, and even how to host the Silverlight application in the cloud, using a hosted service or serving it from Azure Storage. Finally, we'll finish this chapter by looking at socket communication. This type of communication is rare and chances are that you'll never have to use it. However, if your application needs the fastest possible access to data, sockets may provide the answer. Connecting and reading from a standardized service Applies to Silverlight 3, 4 and 5 If we need data inside a Silverlight application, chances are that this data resides in a database or another data store on the server. Silverlight is a client-side technology, so when we need to connect to data sources, we need to rely on services. Silverlight has a broad spectrum of services to which it can connect. In this recipe, we'll look at the concepts of connecting with services, which are usually very similar for all types of services Silverlight can connect with. We'll start by creating an ASMX webservice—in other words, a regular web service. We'll then connect to this service from the Silverlight application and invoke and read its response after connecting to it. Getting ready In this recipe, we'll build the application from scratch. However, the completed code for this recipe can be found in the Chapter07/SilverlightJackpot_Read_Completed folder in the code bundle that is available on the Packt website. How to do it... We'll start to explore the usage of services with Silverlight using the following scenario. 
Imagine we are building a small game application in which a unique code belonging to a user needs to be checked to find out whether or not it is a winning code for some online lottery. The collection of winning codes is present on the server, perhaps in a database or an XML file. We'll create and invoke a service that will allow us to validate the user's code with the collection on the server. The following are the steps we need to follow: We'll build this application from scratch. Our first step is creating a new Silverlight application called SilverlightJackpot. As always, let Visual Studio create a hosting website for the Silverlight client by selecting the Host the Silverlight application in a new Web site checkbox in the New Silverlight Application dialog box. This will ensure that we have a website created for us, in which we can create the service as well. We need to start by creating a service. For the sake of simplicity, we'll create a basic ASMX web service. To do so, right-click on the project node in the SilverlightJackpot. Web project and select Add | New Item... in the menu. In the Add New Item dialog, select the Web Service item. We'll call the new service as JackpotService. Visual Studio creates an ASMX file (JackpotService.asmx) and a code-behind file (JackpotService.asmx.cs). To keep things simple, we'll mock the data retrieval by hardcoding the winning numbers. We'll do so by creating a new class called CodesRepository.cs in the web project. This class returns a list of winning codes. In real-world scenarios, this code would go out to a database and get the list of winning codes from there. The code in this class is very easy. The following is the code for this class: public class CodesRepository{ private List<string> winningCodes; public CodesRepository() { FillWinningCodes(); } private void FillWinningCodes() { if (winningCodes == null) { winningCodes = new List<string>(); winningCodes.Add("12345abc"); winningCodes.Add("azertyse"); winningCodes.Add("abcdefgh"); winningCodes.Add("helloall"); winningCodes.Add("ohnice11"); winningCodes.Add("yesigot1"); winningCodes.Add("superwin"); } } public List<string> WinningCodes { get { return winningCodes; } }} At this point, we need only one method in our JackpotService. This method should accept the code sent from the Silverlight application, check it with the list of winning codes, and return whether or not the user is lucky to have a winning code. Only the methods that are marked with the WebMethod attribute are made available over the service. The following is the code for our service: [WebService(Namespace = "http://tempuri.org/")][WebServiceBinding(ConformsTo = WsiProfiles.BasicProfile1_1)][System.ComponentModel.ToolboxItem(false)]public class JackpotService : System.Web.Services.WebService{ List<string> winningCodes; public JackpotService() { winningCodes = new CodesRepository().WinningCodes; } [WebMethod] public bool IsWinningCode(string code) { if(winningCodes.Contains(code)) return true; return false; }} Build the solution at this point to ensure that our service will compile and can be connected from the client side. Now that the service is ready and waiting to be invoked, let's focus on the Silverlight application. To make the service known to our application, we need to add a reference to it. This is done by right-clicking on the SilverlightJackpot project node, and selecting the Add Service Reference... item. In the dialog that appears, we have the option to enter the address of the service ourselves. 
However, we can click on the Discover button as the service lives in the same solution as the Silverlight application. Visual Studio will search the solution for the available services. If there are no errors, our freshly created service should show up in the list. Select it and rename the Namespace: as JackpotService, as shown in the following screenshot. Visual Studio will now create a proxy class: The UI for the application is kept quite simple. An image of the UI can be seen a little further ahead. It contains a TextBox, where the user can enter a code, a Button that will invoke a check, and a TextBlock that will display the result. This can be seen in the following code: <StackPanel> <TextBox x_Name="CodeTextBox" Width="100" Height="20"> </TextBox> <Button x_Name="CheckForWinButton" Content="Check if I'm a winner!" Click="CheckForWinButton_Click"> </Button> <TextBlock x_Name="ResultTextBlock"> </TextBlock></StackPanel> In the Click event handler, we'll create an instance of the proxy class that was created by Visual Studio as shown in the following code: private void CheckForWinButton_Click(object sender, RoutedEventArgs e){ JackpotService.JackpotServiceSoapClient client = new SilverlightJackpot.JackpotService.JackpotServiceSoapClient();} All service communications in Silverlight happen asynchronously. Therefore, we need to provide a callback method that will be invoked when the service returns: client.IsWinningCodeCompleted += new EventHandler <SilverlightJackpot.JackpotService. IsWinningCodeCompletedEventArgs> (client_IsWinningCodeCompleted); To actually invoke the service, we need to call the IsWinningCodeAsync method as shown in the following line of code. This method will make the actual call to the service. We pass in the value that the user entered: client.IsWinningCodeAsync(CodeTextBox.Text); Finally, in the callback method, we can work with the result of the service via the Result property of the IsWinningCodeCompletedEventArgs instance. Based on the value, we display another message as shown in the following code: void client_IsWinningCodeCompleted(object sender, SilverlightJackpot.JackpotService. IsWinningCodeCompletedEventArgs e){ bool result = e.Result; if (result) ResultTextBlock.Text = "You are a winner! Enter your data below and we will contact you!"; else ResultTextBlock.Text = "You lose... Better luck next time!";} We now have a fully working Silverlight application that uses a service for its data needs. The following screenshot shows the result from entering a valid code: How it works... As it stands, the current version of Silverlight does not have support for using a local database. Silverlight thus needs to rely on external services for getting external data. Even if we had local database support, we would still need to use services in many scenarios. The sample used in this recipe is a good example of data that would need to reside in a secure location (meaning on the server). In any case, we should never store the winning codes in a local database that would be downloaded to the client side. Silverlight has the necessary plumbing on board to connect with the most common types of services. Services such as ASMX, WCF, REST, RSS, and so on, don't pose a problem for Silverlight. While the implementation of connecting with different types of services differs, the concepts are similar. In this recipe, we used a plain old web service. Only the methods that are attributed with the WebMethodAttribute are made available over the service. 
This means that even if we create a public method on the service, it won't be available to clients if it's not marked as a WebMethod. In this case, we only create a single method called IsWinningCode, which retrieves a list of winning codes from a class called CodesRepository. In real-world applications, this data could be read from a database or an XML file. Thus, this service is the entry point to the data. For Silverlight to work with the service, we need to add a reference to it. When doing so, Visual Studio will create a proxy class. Visual Studio can do this for us because the service exposes a Web Service Description Language (WSDL) file. This file contains an overview of the methods supported by the service. A proxy can be considered a copy of the server-side service class, but without the implementations. Instead, each copied method contains a call to the actual service method. The proxy creation process carried out by Visual Studio is the same as adding a service reference in a regular .NET application. However, invoking the service is somewhat different. All communication with services in Silverlight is carried out asynchronously. If this wasn't the case, Silverlight would have had to wait for the service to return its result. In the meantime, the UI thread would be blocked and no interaction with the rest of the application would be possible. To support the asynchronous service call inside the proxy, the IsWinningCodeAsync method as well as the IsWinningCodeCompleted event is generated. The IsWinningCodeAsync method is used to make the actual call to the service. To get access to the results of a service call, we need to define a callback method. This is where the IsWinningCodeCompleted event comes in. Using this event, we define which method should be called when the service returns (in our case, the client_IsWinningCodeCompleted method). Inside this method, we have access to the results through the Result parameter, which is always of the same type as the return type of the service method. See also Apart from reading data, we also have to persist data. In the next recipe, Persisting data using a standardized service, we'll do exactly that.
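One detail the recipe glosses over is what happens when the asynchronous call fails, for example if the service is unreachable. The generated IsWinningCodeCompletedEventArgs derives from AsyncCompletedEventArgs, so the callback can check its Error property before touching Result; reading Result when an error occurred will rethrow the exception. The following is a small defensive variant of the callback, assuming the same generated proxy types as in the recipe.

void client_IsWinningCodeCompleted(object sender,
    SilverlightJackpot.JackpotService.IsWinningCodeCompletedEventArgs e)
{
    // If the service call failed, report it instead of reading e.Result
    if (e.Error != null)
    {
        ResultTextBlock.Text = "The code could not be checked: " + e.Error.Message;
        return;
    }

    ResultTextBlock.Text = e.Result
        ? "You are a winner! Enter your data below and we will contact you!"
        : "You lose... Better luck next time!";
}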
WordPress: Avoiding the Black Hat Techniques

Packt
26 Apr 2011
10 min read
  WordPress 3 Search Engine Optimization Optimize your website for popularity with search engines         Read more about this book       (For more resources on WordPress, see here.) Typical black hat techniques There is a wide range of black hat techniques fully available to all webmasters. Some techniques can improve rankings in short term, but generally not to the extent that legitimate web development would, if pursued with the same effort. The risk of black hat techniques is that they are routinely detected and punished. Black hat is never the way to go for a legitimate business, and pursuing black hat techniques can get your site (or sites) permanently banned and will also require you to build an entirely new website with an entirely new domain name. We will examine a few black hat techniques to help you avoid them. Hidden text on web pages Hidden text is the text that through either coding or coloring does not appear to users, but appears to search engines. Hidden text is a commonly-used technique, and would be better described as gray hat. It tends not to be severely punished when detected. One technique relies on the coloring of elements. When the color of a text element is set to the same color as the background (either through CSS or HTML coding), then the text disappears from human readers while still visible to search spiders. Unfortunately, for webmasters employing this technique, it's entirely detectible by Google. More easily detectible is the use of the CSS property display: none. In the language of CSS, this directs browsers to not display the text that is defined by that element. This technique is easily detectible by search engines. There is an obvious alternative to employing hidden text: Simply use your desired keywords in the text of your content and display the text to both users and search spiders. Spider detection, cloaking, redirection, and doorway pages Cloaking and spider detection are related techniques. Cloaking is a black hat SEO technique whereby the content presented to search engine spiders (via search spider detection) differs from the content presented to users. Who would employ such a technique? Cloaking is employed principally by sellers of products typically promoted by spam, such as pharmaceutics, adult sites, and gambling sites. Since legitimate search traffic is difficult to obtain in these niches, the purveyors of these products employ cloaking to gain visitors. Traditional cloaking relies upon spider detection. When a search spider visits a website, the headers accompanying a page view request identify the spider by names such as Goolgebot (Google's spider) or Slurp (Inktomi's spider). Conversely, an ordinary web browser (presumably with a human operator) will identify itself as Mozilla, Internet Explorer, or Safari, as the case may be. With simple JavaScript or with server configuration, it is quite easy to identify the requesting browser and deliver one version of a page to search spiders and another version of the page to human browsers. All you really need is to know the names of the spiders, which are publicly known. A variation of cloaking is a doorway page. A doorway page is a page through which human visitors are quickly redirected (through a meta refresh or JavaScript) to a destination page. Search spiders, however, index the doorway page, and not the destination page. Although the technique differs in execution, the effect is the same: Human visitors see one page, and the search engines see another. 
The potential harm from cloaking goes beyond search engine manipulation. More often than not, the true destination pages in a cloaking scheme are used for the transmission of malware, viruses, and Trojans. Because the search engines aren't necessarily reading the true destination pages, the malicious code isn't detected. Any type of cloaking, when reported or detected, is almost certain to result in a severe Google penalty, such as removal of the site from the search engine indexes.

Linking to bad neighborhoods and link farms

A bad neighborhood is a website, or a network of websites, that either earns inbound links through illegitimate means or employs other black hat on-page techniques such as cloaking and sneaky redirects. A link farm is a website that offers almost no content and exists solely to list links; link farms, in turn, link out to other websites to inflate the rankings of those sites. A wide range of black hat techniques can get a website labeled as a bad neighborhood.

A quick test you can employ to determine whether a site is a bad neighborhood is to enter the domain name as part of the specialized Google search query "site:the-website-domain.com" and see whether Google displays any pages of that website in its index. If Google returns no results, the website is either brand new or has been removed from Google's index—a possible indicator that it has been labeled a bad neighborhood. Another quick test is to check the site's PageRank and compare that figure to the number of inbound links pointing to the site. If a site has a large number of backlinks but a PageRank of zero, that tends to indicate its PageRank has been manually adjusted downwards due to a violation of Google's Webmaster Guidelines.

If both of the previous tests are positive or inconclusive, you would still be wise to give the site a "smell test". Here are some questions to ask when determining whether a site might be deemed a bad neighborhood:

Does the site offer meaningful content?
Did you detect any redirection while visiting the site?
Did you get any virus warning while visiting the site?
Is the site little more than lists of links, or text polluted with a high number of links?
Check the website's backlink profile. Are the links solely low-value inbound links?

If it isn't a site you would engage with when visiting, don't link to it.

Google Webmaster Guidelines

The Google Webmaster Guidelines are a set of written rules and prohibitions that outline recommended and forbidden website practices. You can find these guidelines at http://www.google.com/support/webmasters/bin/answer.py?hl=en&answer=35769, though you'll find it easier to search for "Google Webmaster Guidelines" and click on the top search result. You should read through the Google Webmaster Guidelines and refer to them occasionally. The guidelines are divided into design and content guidelines, technical guidelines, and quality guidelines.

Google Webmaster Guidelines in a nutshell

At their core, the Google Webmaster Guidelines call for sound technology underlying the websites in Google's index and for high-quality content, and they discourage manipulation of search results through deceptive techniques. All search engines have webmaster guidelines, but if you follow Google's dictates, you will not run afoul of any of the other search engines. Here, we'll discuss only Google's rules.
Google's design and content guidelines instruct that your site should have a clear navigational hierarchy with text links rather than image links. The guidelines specifically note that each page "should be reachable from at least one static text link". Because WordPress builds text-based, hierarchical navigation by default, your site will meet that rule naturally. The guidelines continue by instructing that your site should load quickly and display consistently among different browsers.

The warnings come in Google's quality guidelines; in this section, Google warns against a wide range of black hat techniques, such as the following:

Using hidden text or hidden links: elements that, through coloring, font size, or CSS display properties, are shown to the search engines but not shown to users.

The use of cloaking or "sneaky redirects": cloaking relies on a script that detects search engine spiders and displays one version of a website to users while displaying an alternate version to the search engines.

The use of repetitive, automated queries to Google: some unscrupulous software vendors (Google mentions one by name, WebPosition Gold, which is still on the market, luring unsuspecting webmasters) sell software and services that repeatedly query Google to determine website rankings. Google does allow such queries in some instances through its AJAX Search API key—but you need to apply for one and abide by the terms of its use.

The creation of multiple sites or pages that consist solely of duplicate content that appears on other web properties.

The posting or installation of scripts that behave maliciously towards users, such as viruses, Trojans, browser interceptors, or other badware.

Participation in link schemes: Google is quite public that it values inbound links as a measure of site quality, so it is ever vigilant in detecting and punishing illegitimate link programs.

Linking to bad neighborhoods: a bad neighborhood is a website that uses illegitimate, forbidden techniques to earn inbound links or traffic.

Stuffing keywords onto pages in order to fool search spiders: keyword stuffing is the oldest trick in the book. It is not only forbidden, but also highly ineffective at influencing search results and highly annoying to visitors.

When Google detects violations of its guidelines

Google, which is a nearly entirely automated system, is surprisingly capable of detecting violations of its guidelines. Google encourages user reporting of spam websites, cloaked pages, and hidden text (through the page at https://www.google.com/webmasters/tools/spamreport). It maintains an active antispam department that is fully engaged in ongoing improvement of both manual punishments for offending sites and algorithmic detection of violations.

When paid link abuses are detected, Google will nearly always punish the linking site, not necessarily the site receiving the link—even though the receiving site is the one earning a ranking benefit. At first glance, this may seem counter-intuitive, but there is a reason. If Google punished the site receiving a forbidden paid link, any site owner could knock down a competitor's website by buying a forbidden link pointing to the competitor and then reporting the link as spam. When an on-page black hat or gray hat element is detected, the penalty is imposed upon the offending site. The penalties range from a ranking adjustment to an outright ban from search engine results.
Generally, the penalty matches the crime: the more egregious penalties flow from the more egregious violations. We need to draw a distinction, however, between a Google ban, a penalty, and algorithmic filtering. Algorithmic filtering is simply an adjustment to the rankings or indexing of a site. If you publish content that duplicates other content on the Web and Google doesn't rank or index that page, that's not a penalty; it's simply the search engine algorithm operating properly. If all of your pages are removed from the search index, that is most likely a ban. If the highest ranking you can achieve is position 40 for any search phrase, that could potentially be a penalty, sometimes called a "-40 penalty". All search engines can impose discipline upon websites, but Google is the strictest and imposes far more penalties than the other search engines, so we will largely discuss Google here.

Filtering is not a penalty; it is an adjustment that can be remedied by undoing the condition that led to it. Filtering can occur for a variety of reasons, but it is often imposed following over-optimization. For example, if your backlink profile comprises links of which 80% use the same anchor text, you might trigger a filter. The effect of a penalty or filter is the same: decreased rankings and traffic. In the following section, we'll look at a wide variety of known Google filters and penalties, and learn how to address them.

phpList 2 E-mail Campaign Manager: Personalizing E-mail Body

Packt
26 Jul 2011
5 min read
Enhancing messages using built-in placeholders

Even for simple functionality, we generally want our phpList messages to contain at least a small amount of customization. For example, even the default footer, which phpList attaches to messages, contains three placeholders, customizing each message for each recipient:

--
If you do not want to receive any more newsletters, [UNSUBSCRIBE]
To update your preferences and to unsubscribe, visit [PREFERENCES]
Forward a Message to Someone [FORWARD]

The placeholders [UNSUBSCRIBE], [PREFERENCES], and [FORWARD] will be replaced with unique URLs per subscriber, allowing any subscriber to immediately unsubscribe, adjust their preferences, or forward a message to a friend simply by clicking on a link. There's a complete list of available placeholders documented on phpList's wiki page at http://docs.phplist.com/Placeholders. Here are some of the most frequently used ones:

[CONTENT]: Use this while creating standard message templates. You can design a styled template which is reused for every mailing, and the [CONTENT] placeholder will be replaced with the unique content of each particular message.

[EMAIL]: This is replaced by the user's e-mail address. It can be very helpful in the footer of an e-mail, so that subscribers know which e-mail address they used to sign up for list subscription.

[LISTS]: The lists to which a member is subscribed. Having this information attached to system confirmation messages makes it easy for subscribers to manage their own subscriptions. Note that this placeholder is only applicable in system messages and not in general list messages.

[UNSUBSCRIBEURL]: Almost certainly, you'll want to include some sort of "click here to unsubscribe" link in your messages, either as a prerequisite for sending bulk mail (perhaps imposed by your ISP) or to avoid users inadvertently reporting you for spamming.

[UNSUBSCRIBE]: This placeholder generates the entire hyperlink for you (including the link text, "unsubscribe"), whereas the [UNSUBSCRIBEURL] placeholder simply generates the URL. You would use the URL only if you wanted to link an image to the unsubscription page, as opposed to a simple text link, or if you wanted the HTML link text to be something other than "unsubscribe".

[USERTRACK]: This inserts an invisible tracker image into HTML messages, helping you to measure the effectiveness of your newsletter.

You might combine several of these placeholders to add a standard signature to your messages, as follows:

--
You ([EMAIL]) are receiving this message because you subscribed to one or more of our mailing lists. We only send messages to subscribers who have requested and confirmed their subscription (double-opt-in). You can adjust your list membership at any time by clicking on [PREFERENCES] or unsubscribe altogether by clicking on [UNSUBSCRIBE].
--

Placeholders in confirmation messages

Some placeholders (such as [LISTS]) are only applicable in confirmation messages (that is, "thank you for subscribing to the following lists...").
These placeholders allow you to customize the following messages:

Request to confirm: Sent initially to users when they subscribe, confirming their e-mail address and subscription request.

Confirmation of subscription: Sent to users to confirm that they've been successfully added to the requested lists (after they've confirmed their e-mail address).

Confirmation of preferences update: Sent to users to confirm their updates when they change their list subscriptions/preferences themselves.

Confirmation of unsubscription: Sent to users after they've unsubscribed, to confirm that their e-mail address will no longer receive messages from phpList.

Personalizing messages using member attributes

Apart from the built-in placeholders, you can also use any member attributes to further personalize your messages. Say you captured the following attributes from your new members:

First Name
Last Name
Hometown
Favorite Food

You could craft a personalized message as follows:

Dear [FIRST NAME],
Hello from your friends at the Funky Town Restaurant. We hope the [LAST NAME] family is well in the friendly town of [HOMETOWN]. If you're ever in the mood for a fresh [FAVORITE FOOD], please drop in - we'd be happy to have you!
...

This would appear to different subscribers as:

Dear Bart,
Hello from your friends at the Funky Town Restaurant. We hope the Simpson family is well in the friendly town of Springfield. If you're ever in the mood for a fresh pizza, please drop in - we'd be happy to have you!
...

Or:

Dear Clark,
Hello from your friends at the Funky Town Restaurant. We hope the Kent family is well in the friendly town of Smallville. If you're ever in the mood for a fresh Krypto-Burger, please drop in - we'd be happy to have you!
...

If a user doesn't have a value for a particular attribute placeholder, it will be replaced with a blank. For example, if user "Mary" hadn't entered any attributes, her message would look like:

Dear ,
Hello from your friends at the Funky Town Restaurant. We hope the  family is well in the friendly town of . If you're ever in the mood for a fresh , please drop in - we'd be happy to have you!
...

If the attributes on your subscription form are optional, try to structure your content in such a way that a blank placeholder substitution won't ruin the text. For example, the following text will look awkward with blank substitutions:

Your name is [FIRST NAME], your favorite food is [FAVORITE FOOD], and your last name is [LAST NAME]

Whereas the following text would at least "degrade gracefully":

Your name: [FIRST NAME]
Your favorite food: [FAVORITE FOOD]
Your last name: [LAST NAME]
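To tie these pieces together, here is a minimal sketch of a reusable HTML template built around the [CONTENT] placeholder described earlier. The inline styling is invented for this example, and where each placeholder may be used (template versus message) should be checked against the placeholder list on the phpList wiki:

<html>
  <body>
    <!-- [CONTENT] is replaced by the body of each individual campaign -->
    <div style="max-width: 600px; margin: 0 auto; font-family: Arial, sans-serif;">
      [CONTENT]
    </div>
    <hr />
    <!-- Standard footer: built-in placeholders personalize it per subscriber -->
    <p style="font-size: 11px; color: #666666;">
      You ([EMAIL]) are receiving this message because you subscribed to one of our lists.
      Update your preferences: [PREFERENCES] | Unsubscribe: [UNSUBSCRIBE]
    </p>
    [USERTRACK]
  </body>
</html>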

User Interface Design in ICEfaces 1.8: Part 2

Packt
30 Nov 2009
11 min read
Facelets templating

To implement the layout design, we use Facelets templating, which has officially been part of the JSF specification since release 2.0. This article will only look at certain parts of the Facelets technology, so we will not discuss how to configure a web project to use Facelets. You can study the source code examples of this article, or have a look at the developer documentation (https://facelets.dev.java.net/nonav/docs/dev/docbook.html) and the articles section of the Facelets wiki (http://wiki.java.net/bin/view/Projects/FaceletsArticles) for further details.

The page template

First of all, we define a page template that follows our mockup design. For this, we reuse the HelloWorld (Facelets) application. You can import the WAR file now if you did not create a Facelets project. To import a WAR file, use the menu File | Import | Web | WAR file. In the dialog box, click on the Browse button and select the corresponding WAR file. Click on the Finish button to start the import. The run configuration does not have to be set up again; you do not have to configure the Jetty server a second time. Instead, it can simply be selected as your target.

We start coding with a new XHTML file in the WebContent folder. Use the menu File | New | Other | Web | HTML Page and click on the Next button. Use page-template.xhtml for File name in the next dialog. Click on the Next button again and choose New ICEfaces Facelets .xhtml File (.xhtml). Click on the Finish button to create the file. The ICEfaces plugin creates this code:

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
  "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html >
  <head>
    <title>
      <ui:insert name="title">
        Default title
      </ui:insert>
    </title>
  </head>
  <body>
    <div id="header">
      <ui:include src="/header.xhtml">
        <ui:param name="param_name" value="param_value"/>
      </ui:include>
    </div>
    <div id="content">
      <ice:form>
      </ice:form>
    </div>
  </body>
</html>

The structure of the page is almost pure HTML. This is an advantage of using Facelets: the handling of pages is easier and can even be done with a standard HTML editor. The generated code, however, is not what we need. If you try to run it as is, you will get an error because the header.xhtml file is missing from the project. So, we delete the code between the <body> tags and add the basic structure for the templating. The changed code looks like this:

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
  "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html >
  <head>
    <title>
      <ui:insert name="title">
        Default title
      </ui:insert>
    </title>
  </head>
  <body>
    <table align="center" cellpadding="0" cellspacing="0">
      <tr><td><!-- header --></td></tr>
      <tr><td><!-- main navigation --></td></tr>
      <tr><td><!-- content --></td></tr>
      <tr><td><!-- footer --></td></tr>
    </table>
  </body>
</html>

We change the <body> part to a table structure. You may wonder why we use a <table> for the layout, and even the align attribute, when there are <div> tags and CSS. The answer is pragmatism. We do not follow the doctrine, because we want clean code and we want to keep things simple. If you look at the insufficient CSS support in the Internet Explorer family and the time wasted getting things running there, it makes no sense to do so. The CSS support in Internet Explorer is a good example of the violation of user expectations. We define four rows in the table to follow our layout design. You may have recognized that the <title> tag still has its <ui:insert> definition.
This is the Facelets tag we use to tell the templating where we want to insert our page-specific code. To separate the different insert areas from each other, each <ui:insert> has a name attribute. We substitute the comments with <ui:insert> definitions, so that the templating can do the replacements:

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
  "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html >
  <head>
    <title>
      <ui:insert name="title">
        Default title
      </ui:insert>
    </title>
  </head>
  <body>
    <table align="center" cellpadding="0" cellspacing="0">
      <tr><td><ui:insert name="header"/></td></tr>
      <tr><td><ui:insert name="mainNavigation"/></td></tr>
      <tr><td><ui:insert name="content"/></td></tr>
      <tr><td><ui:insert name="footer"/></td></tr>
    </table>
  </body>
</html>

The <ui:insert> tag allows us to set defaults that are used if we do not define a replacement. Everything defined between <ui:insert> and </ui:insert> will then be shown instead. We will use this to define standard behavior for a page that can be overridden, if necessary. Additionally, this allows us to give hints in the rendering output if something that should be defined in a page is missing. Here is the code showing both aspects:

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
  "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html >
  <head>
    <ice:outputStyle href="/xmlhttp/css/royale/royale.css" />
    <title>
      <ui:insert name="title">
        Please, define a title.
      </ui:insert>
    </title>
  </head>
  <body>
    <table align="center" cellpadding="0" cellspacing="0">
      <tr><td>
        <ui:insert name="header">
          <ice:graphicImage url="/logo.png" />
        </ui:insert>
      </td></tr>
      <tr><td>
        <ui:insert name="mainNavigation">
          <ice:form>
            <ice:menuBar noIcons="true">
              <ice:menuItem value="Menu 1"/>
              <ice:menuItem value="Menu 2"/>
              <ice:menuItem value="Menu 3"/>
            </ice:menuBar>
          </ice:form>
        </ui:insert>
      </td></tr>
      <tr><td>
        <ui:insert name="content">
          Please, define some content.
        </ui:insert>
      </td></tr>
      <tr><td>
        <ui:insert name="footer">
          <ice:outputText value="&#169; 2009 by The ICEcubes." />
        </ui:insert>
      </td></tr>
    </table>
  </body>
</html>

The header, the main navigation, and the footer now have defaults. For the page title and the page content, there are messages that ask for an explicit definition. The header references an image; add any image you like to WebContent and adapt the url attribute of the <ice:graphicImage> tag, if necessary. The example project for this article shows the ICEcube logo, the logo that is shown in the mockup above. The <ice:menuBar> tag has to be surrounded by an <ice:form> tag so that the JSF actions of the menu entries can be processed. Additionally, we need a reference to one of the ICEfaces default skins in the <head> tag to get a correct menu presentation; we use the Royale skin here. If you do not know what the Royale skin looks like, you can have a look at the ICEfaces Component Showcase (http://component-showcase.icefaces.org) and select it in the combo box at the top left. After your selection, all components present themselves in this skin.

Using the template

A production page template has a lot more to define and is also different in its structure. References to your own CSS, JavaScript, or favicon files are missing here, and the page template would soon become unmaintainable if we were to manage the pull-down menu this way. However, we are primarily looking at the basics here, so we keep the page template as it is for now.
Next, we adapt the existing ICEfacesPage1.xhtml to use the page template for its rendering. Here is the original code:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
  "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html >
  <head>
    <title>
      <ui:insert name="title">
        Default title
      </ui:insert>
    </title>
  </head>
  <body>
    <div id="header">
      <!--
      <ui:include src="/header.xhtml" >
        <ui:param name="param_name" value="param_value" />
      </ui:include>
      -->
    </div>
    <div id="content">
      <ice:form>
        <ice:outputText value="Hello World!"/>
        <!-- drop ICEfaces components here -->
      </ice:form>
    </div>
  </body>
</html>

We keep the Hello World! output and use the new page template to add some decoration to it. First of all, we need a reference to the page template so that the templating knows it has to manage the page. As the page template defines the page structure, we no longer need a <head> tag definition. You may recognize the <ui:insert> in the <title> tag above. This is indeed the code we normally use in a page template; Facelets rendered the content in between because it did not find a replacement tag. Theoretically, you are free to define such statements in any location of your code; however, this is not recommended. Facelets looks at the complete code base and matches pairs of corresponding name attribute definitions between <ui:insert name="..."> and <ui:define name="..."> tags. Here is the adapted code:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
  "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html >
  <body>
    <ui:composition template="/page-template.xhtml">
      <div id="content">
        <ice:form>
          <ice:outputText value="Hello World!"/>
        </ice:form>
      </div>
    </ui:composition>
  </body>
</html>

This code creates the following output: we can see our friendly reminders for the missing title and the missing content. The header, the main navigation, and the footer are rendered as expected. The structure of the template seems to be valid, although we recognize that a CSS file is necessary to define some space between the rows of our layout table. However, something is wrong. Any idea what it is? If you have a look at the hello-world.xhtml again, you can find our Hello World! output, but it cannot be found in the rendering result. As we use the page template, we have to tell the templating where something has to be rendered in the page; however, we did not do this for our Hello World! output. The following code defines the missing <ui:define> tag and skips the <div> and <ice:form> tags that are not really necessary here:

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
  "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html >
  <body>
    <ui:composition template="/page-template.xhtml">
      <ui:define name="title">
        Hello World on Facelets
      </ui:define>
      <ui:define name="content">
        <ice:outputText value="Hello World!"/>
      </ui:define>
    </ui:composition>
  </body>
</html>
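As noted above, a stylesheet is still needed to space out the rows of the layout table. A minimal sketch of such a rule follows; the id selector is an assumption, since the template in this article does not yet assign an id to the layout table, so you would add one (for example, id="layout") before this rule can apply:

/* Illustrative spacing between the header, navigation, content, and footer rows */
table#layout td {
  padding: 10px 15px;
}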


ASP.NET 3.5 CMS: Adding Security and Membership (Part 2)

Packt
16 Oct 2009
8 min read
Now that you understand the process behind forms authentication, we need to add it to our application. The process will be slightly different because we already have a database to use, but one without the ASP.NET membership schema. We'll add that schema to the database and then create some user accounts and membership roles to handle the security for our application. We'll also secure some of our content and add a menu to our Master Page to navigate between the pages of our Content Management System.

Preparing an existing SQL database

As we have an existing database, we can't create a new database for our membership and authentication system. Well, actually we could, but using a second database is problematic when we upload the application to a host, because many web hosting companies allow only a single database under the hosting plan. Besides, we can easily add the membership schema the same way we did earlier in the article with our empty database, using aspnet_regsql.exe. Previously we used the wizard; this time we'll use the command line. If you take a look at the database in SQL Server Management Studio Express now, before we execute the command to add the schema, you should see the few tables that were already created, as shown below:

The aspnet_regsql.exe tool

Using the executable from the command line is simple, as long as you know the command line arguments. The syntax and arguments for aspnet_regsql.exe are documented online at http://msdn.microsoft.com/en-us/library/x28wfk74.aspx. The arguments we will use are:

-S (the server name): .\SQLEXPRESS
-U (the database username): sa
-P (the database password): SimpleCMS
-d (the database name): SimpleCMS_Database
-A (the schema functions to install): all functions

Our command line will look like this (all on one line):

aspnet_regsql.exe -S .\SQLEXPRESS -U sa -P SimpleCMS -d SimpleCMS_Database -A all

To run the command line, go to Start | Run and enter cmd in the Run dialog box. Press Enter and you will be at a command prompt. Type cd C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727 and press Enter again, and you will be in the correct folder to find aspnet_regsql.exe. Note that you may need to change the path if your ASP.NET framework files are in a different location. Type the command line above and press Enter, and you should see that the command completed successfully, with a dialog similar to that below:

Now that we have executed the aspnet_regsql.exe command line, if you look at the database tables in SQL Server Management Studio Express, you should see the added tables for the users, membership, and roles we will use in our application.

User accounts

Earlier in the article, we created a single user account for accessing protected content. In a real-world environment, we would normally have many user accounts, far too many to add each account to each page we wanted to protect. Fortunately, the ASP.NET framework provides us with membership roles that we can place user accounts in, allowing us to define our access by role, not by user account. But first, we need some user accounts. Let's start by creating three accounts in our application: User1, User2, and Administrator. Open the SimpleCMS web site in Visual Web Developer 2008 Express. Use the downloadable code provided for Chapter 4 of this book; it has the web.config file modified in the same way as in the forms authentication demo earlier in the chapter.
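For reference, the role provider configured in the next step points at a connection string named SimpleCMS_DatabaseConnectionString. A minimal sketch of such a web.config entry is shown below; the attribute values are assumptions based on the aspnet_regsql.exe arguments above, so adjust them to your own environment:

<!-- Sketch only: connection string assumed by the membership and role providers -->
<connectionStrings>
  <add name="SimpleCMS_DatabaseConnectionString"
       connectionString="Data Source=.\SQLEXPRESS;Initial Catalog=SimpleCMS_Database;User ID=sa;Password=SimpleCMS"
       providerName="System.Data.SqlClient" />
</connectionStrings>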
Open the Web Site Administration Tool by clicking on Website and then ASP.NET Configuration. If you click on the Security tab, you will see that we have no users configured for this application. As you did earlier in the article, click on Create User and create the three users with the user names User1, User2, and Administrator. Use Password! as the password for each, and provide a valid email address for each (they can share the same email for testing). Also, provide a question and answer such as Favorite Color? and Blue; you can use the same question and answer for all three accounts if you wish. Each user entry should look something like the following:

If you return to the Security tab, you will notice that we now have three user accounts, but no roles for those accounts. Let's add them next.

Membership roles

ASP.NET membership roles provide the ability to group many individual accounts into a single role that grants access to a resource such as a page or application. Changing access for an individual user then becomes a simple task of assigning them to, or removing them from, the appropriate role. A single user account can belong to multiple roles, providing extremely granular access to application resources if your security demands are extensive.

To add roles to our application, we first need to enable roles. On the Security tab of the Web Site Administration Tool, under Roles, you should see a link to enable roles. Enabling roles consists of simply adding the following line to the web.config file in the system.web section:

<roleManager enabled="true" />

Similar to the membership provider we created earlier, roles require a role provider. We need to add this provider to the role manager, so edit the web.config roleManager section to read:

<roleManager enabled="true">
  <providers>
    <clear/>
    <add name="AspNetSqlRoleProvider"
         connectionStringName="SimpleCMS_DatabaseConnectionString"
         applicationName="/"
         type="System.Web.Security.SqlRoleProvider, System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" />
  </providers>
</roleManager>

This adds an AspNetSqlRoleProvider that uses our connection string to the SimpleCMS database. At this point we have no roles defined, so let's create a few. Open the Web Site Administration Tool; if it's already open, you may need to close and reopen it, because we modified the web.config file to add the role provider. Now, open the Security tab. In the Roles section, click on Create or manage roles. Let's create an administration role first; we'll need it to restrict some areas to administrative access only. Simply enter Administrator, click on Add Role, and you'll see the new role in the list. Add roles for Author, Editor, and Registered User in the same manner. The roles list should look something like the following figure when you finish:

Adding users to roles

Once we have users and roles created, we need to assign users to roles. To do this, use the Users section on the Security tab of the Web Site Administration Tool to manage users. You'll see a list of user accounts (in our case, all three of them), along with the ability to edit the user, delete the user, and edit the user's roles. Click on Edit roles next to the Administrator user and you'll see a checkbox list of the user roles this account can be added to. Any roles currently assigned to the user will be checked.
As there are currently none, check the Administrator role, and the Administrator user will be immediately added to the Administrator role, as shown below:

If you were to look at the database tables that hold the user accounts and roles, you would see something like this for the users:

Similarly, the roles would look like this:

You'll note that both the users and the roles contain an ApplicationID that defines which application these users and roles belong to, and that each user or role is identified by a UserID or RoleID. These are automatically created by the ASP.NET membership framework and are globally unique identifiers (GUIDs), which ensure that a specific user or role is unique across all possible applications and uses of this particular database store. You would also find in the database a table that identifies users in roles, looking something like this:

You'll notice that this is a joining table, used in a database where there is a many-to-many relationship: many users can belong to a role, and a user can belong to many roles, thus the use of this table. You'll also notice that the table uses the UserID and RoleID, making it very hard to determine which users are assigned to which roles simply by looking at the table directly. Fortunately, with the ASP.NET framework, you're isolated from having to work directly with the database, as well as relieved from having to create it and the code needed to access it.
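With the Administrator role in place, securing content becomes a matter of web.config authorization rules rather than per-user settings. A minimal sketch is shown below; the folder name Admin is an assumption used for illustration and is not taken from this article:

<!-- Sketch only: restrict an assumed "Admin" folder to members of the Administrator role -->
<location path="Admin">
  <system.web>
    <authorization>
      <allow roles="Administrator" />
      <deny users="*" />
    </authorization>
  </system.web>
</location>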