
How-To Tutorials - Web Development

1802 Articles

Designing the Facebook Clone and Creating Colony using Ruby

Packt
25 Aug 2010
14 min read
(For more resources on Ruby, see here.)

Main features

Online social networking services are complex applications with a large number of features. However, these features can be roughly grouped into a few common categories: user, community, content-sharing, and developer features.

User features are features that relate directly to the user, for example, the ability to create and share a profile and the ability to share status and activities. Community features are features that connect users with each other; an example is the friends list, which shows the number of friends a user has connected with in the social network. Content-sharing features are quite easy to understand: these are features that allow users to share self-created content with other users, for example, photo sharing or blogging. Social bookmarking features are those that allow users to share content they have discovered with other users, such as sharing links and tagging items with labels. Finally, developer features are features that allow external developers to access the services and data in the social networks.

While the social networking services on the market often try to differentiate themselves from each other to gain an edge over the competition, in this article we will be building a stereotypical online social networking service. We will choose only a few of the more common features in each category, except for developer features, which for practical reasons will not be implemented here. Let's look at the features we will implement in Colony, by category.

User

User features are features that relate directly to users:

- Users' activities on the system are broadcast to friends as an activity feed.
- Users can post brief status updates to all users.
- Users can add or remove friends by inviting them to link up. Friendship is two-way and must be approved by the recipient of the invitation.
Community

Community features connect users with each other:

- Users can post to a wall belonging to a user, group, or event. A wall is a place where any user can post and which can be viewed by all users.
- Users can send private messages to other users.
- Users can create events that represent an actual event in the real world. Events pull together users and content, and provide basic event management capabilities, such as RSVP.
- Users can form and join groups. Groups represent a grouping of like-minded people and pull together users and content. Groups are permanent.
- Users can comment on various types of shared and created content, including photos, pages, statuses, and activities. Comments are textual only.
- Users can indicate that they like most types of shared and created content, including photos, pages, statuses, and activities.

Content sharing

Content sharing features allow users to share content, either self-generated or discovered, with other users:

- Users can create albums and upload photos to them
- Users can create standalone pages belonging to them, or attached pages belonging to events and groups

Online social networking services grew from existing communications and community services, often evolving and incorporating features and capabilities from those services.

Designing the clone

Now that we have the list of features that we want to implement for Colony, let's start designing the clone.

Authentication, access control, and user management

Authentication is done through RPX, which means we delegate authentication to a third-party provider such as Google, Yahoo!, or Facebook. Access control, however, is still done by Colony, while user management functions are shared between the authentication provider and Colony. Access control in Colony is applied to all data, which prevents users from accessing data that they are not allowed to. This is done through control of the user account, to which all other data for a user belongs.
In most cases a user is not allowed access to any data that does not belong to him or her (that is, not shared with everyone). In some cases, though, access is implicit; for example, an event is accessible to be viewed only if you are the organizer of the event. As before, user management is a shared responsibility between the third-party provider and the clone. The provider handles password management and general security, while Colony stores a simple set of profile information for the user.

Status updates

Allowing users to send status updates about themselves is a major feature of all social networking services. This feature allows the user, a member of the social networking service, to announce and define his presence, as well as his state of mind, to his network. In Colony, only the user's friends can read the statuses. Remember that a user's friend is someone validated and approved by the user, and not just anyone off the street who happens to follow that user. Status updates belong to a single user but are viewable by all friends as part of the user's activity feed.

User activity feeds and news feeds

Activity feeds, activity streams, or life streams are continuous streams of information on a user's activities. Activity feeds go beyond just status updates; they are a digital trace of a user's activity in the social network, which includes his status updates. This includes public actions like posting to a wall, uploading photos, and commenting on content, but not private actions like sending messages to individuals. The user's activity feed is visible to all users who visit his user page. Activity feeds are a subset of news feeds; a news feed is an aggregate of the activity feeds of the user and his network. News feeds give an insight into the user's activities as well as the activities of his network. In the design of our clone, the user's activity feed is what you see when you visit a user page, for example http://colony.saush.com/user/sausheong, while the news feed is what you see when you first log in to Colony: the landing page. This design is common to many social networking services.

Friends list and inviting users to join

One of the reasons why social networking services are so wildly successful is the ability to reach out to old friends or colleagues, and also to see friends of your friends. To clone this feature we provide a standard friends list and an option to search for friends. Searching for friends allows you to find other users in the system by their nicknames or their full names. By viewing a user's page, we are able to see his friends, and therefore see his friends' user pages as well. Another critical feature in social networking services is the ability to invite friends and spread the word around. In Colony we tap into the capabilities of Facebook and invite friends who are already on Facebook to use Colony. While there is a certain amount of irony in this (using another social networking service to implement a feature of your social networking service), it makes a lot of practical sense, as Facebook is already one of the most popular social networking services on the planet. To implement this, we will use Facebook Connect. However, this means that if the user wants to reach out and get others to join him in Colony, he will need to log into Facebook to do so. As with most features, the implementation can be done in many ways, and Facebook Connect (or any other type of third-party integration, for that matter) is only one of them. Another popular strategy is to use web mail clients such as Yahoo! Mail or Gmail, and extract user contacts with the permission of the user. The e-mails extracted this way can be used as a mailing list to send to potential users. This is in fact a strategy used by Facebook.

Posting to the wall

A wall is a place where users can post messages. Walls are meant to be publicly read by all visitors.
In a way, a wall is like a virtual cork bulletin board on which users can pin messages to be read by anyone. Wall posts are meant to be short public messages; the Messages feature can be used to send private messages instead. A wall can belong to a user, an event, or a group, and each of these owning entities can have only one wall. This means any post sent to a user, event, or group is automatically placed on its one and only wall. A message on a wall is called a post, which in Colony is just a text message (Facebook's original implementation was text-only but was later extended to other types of media). Posts can be commented on and are not threaded. Posts are placed on the wall in reverse chronological order, so that the latest post remains at the top of the wall.

Sending messages

The messaging feature of Colony is a private messaging mechanism. Messages are sent by senders and received by recipients. Messages that are received by a user are placed in an inbox, while messages that the user sent are placed in a sent box. For Colony we will not be implementing folders, so these are the only two message folders that every user has. Messages sent to and received from users are threaded and ordered by time. We thread the messages in order to group messages sent back and forth as part of an ongoing conversation. Threaded messages are sorted in chronological order, with the last received message at the bottom of the message thread.

Attending events

Events can be thought of as locations in time where people come together for an activity. Social networking services often act as a nexus for a community, so organizing and attending events is a natural extension of the features of a social networking service. Events have a wall, a venue, and the date and time when the event is happening, and can have event-specific pages that allow users to customize and market their event. In Colony, we categorize users who attend events by their attendance status.
Confirmed users are users who have confirmed their attendance. Pending users are users who haven't yet decided whether to attend the event. Declined users are users who have declined to attend the event after they have been invited. Declining is explicit; there is an invisible group of users who fall into none of the above three types. Attracting users to events, or simply keeping them informed, is a critical part of making this or any feature successful. To do so, we suggest events to users and display the suggested events on the user's landing page. The suggestion algorithm is simple: we go through each of the user's friends, see which other events they have confirmed attending, and then suggest those events to the user. Besides suggestions, the other means of discovering events are through the activity feeds (whenever an event is created, it is logged as an activity and published on the activity feed) and through user pages, where the list of a user's events is also displayed. All events are public, as is content created within events, such as wall posts and pages.

Forming groups

Social networking services are made of people, and people have a tendency to form groups or categories based on common characteristics or interests. The idea of groups in Colony is to facilitate such grouping of people with a simple set of features. Conceptually, groups and events are very similar to each other, except that groups are not time-based like events and have no concept of attendance. Groups have members and a wall, and can have specific pages created by the group. Colony's capabilities for attracting users to groups are slightly weaker than for events: Colony only suggests groups on the groups page rather than on the landing page. However, groups also allow discovery through activity feeds and through user pages. Colony has only public groups, and there is no restriction on who can join them.
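The event suggestion algorithm described above is simple enough to sketch in a few lines. The following is an illustrative sketch only (written in JavaScript, although the clone itself is built in Ruby), and the friendsOf and confirmedEventsOf lookups are hypothetical stand-ins for whatever data access the real models provide:

```javascript
// Suggest events to a user: collect the events that the user's friends
// have confirmed attending, excluding events the user already attends.
function suggestEvents(user, friendsOf, confirmedEventsOf) {
  const already = new Set(confirmedEventsOf(user));
  const suggestions = new Set();
  for (const friend of friendsOf(user)) {
    for (const event of confirmedEventsOf(friend)) {
      if (!already.has(event)) suggestions.add(event);
    }
  }
  return [...suggestions];
}
```

Collecting the results in a set also deduplicates events that several friends have confirmed, so each event is suggested at most once.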
Commenting on and liking content

Two popular and common features in many consumer-focused web applications are reviews and ratings. Reviews and ratings allow users to provide reviews (or comments) or ratings on editorial or user-generated content. The stereotypical review-and-ratings feature is Amazon.com's book reviews and ratings, which allow users to write book reviews as well as rate a book from one to five stars. Colony's review feature is called comments. Comments are applicable to all user-generated content, such as status updates, wall posts, photos, and pages. Comments provide a means for users to review content and give critique or encouragement to the content creator. Colony's rating feature is simple and follows Facebook's popular rating feature, called likes. While many rating features provide a range of one to five stars for users to choose from, Colony (like Facebook) simply asks the user to indicate whether he likes the content. There is no dislike, though, so the fewer likes a piece of content has, the less popular it is. Colony's comment and like features are applicable to all user-generated content, such as statuses, photos, wall posts, activities, and pages.

Sharing photos

Photos are one of the most popular types of user-generated content shared online; with users uploading 3 billion photos a month on Facebook, it's an important feature to include in Colony. The basic concept of photo sharing in Colony is that each user can have one or more albums, and each album can have one or more photos. Photos can be commented on, liked, and annotated.

Blogging with pages

Colony's pages are a means of allowing users to create their own full-page content and attach it to their own accounts, an event, or a group. A user, event, or group can own one or more pages. Pages are meant to be user-generated content, so the entire content of the page is written by the user.
However, in order to keep the look and feel consistent throughout the site, each page is styled according to Colony's look and feel. To do this, we only allow users to enter Markdown, a lightweight markup language that takes many cues from existing conventions for marking up plain text in e-mail. Markdown converts its marked-up text input to valid, well-formed XHTML. We use it in Colony to let users write content easily, without worrying about layout or creating a consistent look and feel.

Technologies and platforms used

We use a number of technologies in this article, mainly revolving around the Ruby programming language and its various libraries. In addition to Ruby and its libraries, we also use mashups, which are described next.

Mashups

While the main features of the application are all provided for, we still depend on services from other providers. In this article we use four such external services: RPX for user web authentication, Gravatar for avatar services, Amazon Web Services S3 for photo storage, and Facebook Connect for reaching out to users on Facebook.

Facebook Connect

Facebook has a number of technologies and APIs used to interact and integrate with its platform, and Facebook Connect is one of them. Facebook Connect is a set of APIs that lets users bring their identity and information into the application itself. We use Facebook Connect to send out requests to a user's friends, inviting them to join our social network. Note that for the user invitation feature, once a user has logged in through Facebook with RPX, he is considered to have logged into Facebook Connect and can therefore send invitations immediately, without logging in again.

Summary

In this article, we described some of the essential features of Facebook, and we categorized those features into user, community, and content-sharing features. We then went into a high-level discussion of these features and how we implement them in our Facebook clone, Colony, and briefly covered the various technologies used in the clone. In the next article, we will build the Facebook clone using Ruby.

Further resources on this subject:

- Building the Facebook Clone using Ruby [article]
- URL Shorteners – Designing the TinyURL Clone with Ruby [article]


WebSphere MQ Sample Programs

Packt
25 Aug 2010
5 min read
(For more resources on IBM, see here.)

WebSphere MQ sample programs—server

There is a whole set of sample programs available to the WebSphere MQ administrator. We are interested in only a couple of them: the program to put a message onto a queue and the program to retrieve a message from a queue. The names of these sample programs depend on whether we are running WebSphere MQ as a server or as a client. There is a server sample program called amqsput, which puts messages onto a queue using the MQPUT call. This sample program comes as part of the WebSphere MQ samples installation package, not as part of the base package. The maximum length of message that we can put onto a queue using the amqsput sample program is 100 bytes. There is a corresponding server sample program to retrieve messages from a queue, called amqsget.

Use the sample programs only when testing that the queues are set up correctly—do NOT use them once Q Capture and Q Apply have been started. This is because Q replication uses dense numbering between Q Capture and Q Apply, and if we insert or retrieve a message, the dense numbering will not be maintained and Q Apply will stop. The usual response to this is to cold start Q Capture!

To put a message onto a queue (amqsput)

The amqsput utility can be invoked from the command line or from within a batch program. If we are invoking the utility from the command line, the format of the command is:

$ amqsput <Queue> <QM name> < <message>

We would issue this command from the system on which the Queue Manager sits. We have to specify the queue (<Queue>) that we want to put the message on, and the Queue Manager (<QM name>) that controls this queue. We then pipe (<) the message (<message>) into this.
An example of the command is:

$ amqsput CAPA.TO.APPB.SENDQ.REMOTE QMA < hello

We put these commands into an appropriately named batch file (say SYSA_QMA_TESTP_UNI_AB.BAT), which would contain the following:

Batch file—Windows example:

call "C:\Program Files\IBM\WebSphere MQ\bin\amqsput" CAPA.TO.APPB.SENDQ.REMOTE QMA < QMA_TEST1.TXT

Batch file—UNIX example:

"/opt/mqm/samp/bin/amqsput" CAPA.TO.APPB.SENDQ.REMOTE QMA < QMA_TEST1.TXT

Here, the QMA_TEST1.TXT file contains the message we want to send. Once we have put a message onto the Send Queue, we need to be able to retrieve it.

To retrieve a message from a queue (amqsget)

The amqsget utility can be invoked from the command line or from within a batch program. The utility takes 15 seconds to run. We need to specify the Receive Queue that we want to read from and the Queue Manager that the queue belongs to:

$ amqsget <Queue> <QM name>

An example of the command is shown here:

$ amqsget CAPA.TO.APPB.RECVQ QMB

If we have correctly set up all the queues, and the Listeners and Channels are running, then when we issue the preceding command, we should see the message we put onto the Send Queue.
We can put the amqsget command into a batch file, as shown next:

Batch file—Windows example:

@ECHO This takes 15 seconds to run
call "C:\Program Files\IBM\WebSphere MQ\bin\amqsget" CAPA.TO.APPB.RECVQ QMB
@ECHO You should see: test1

Batch file—UNIX example:

echo This takes 15 seconds to run
"/opt/mqm/samp/bin/amqsget" CAPA.TO.APPB.RECVQ QMB
echo You should see: test1

Using these examples and putting messages onto the queues in a unidirectional scenario, the "get" message batch file for QMA (SYSA_QMA_TESTG_UNI_BA.BAT) contains:

@ECHO This takes 15 seconds to run
call "C:\Program Files\IBM\WebSphere MQ\bin\amqsget" CAPA.ADMINQ QMA
@ECHO You should see: test2

From CLP-A, run the file as:

$ SYSA_QMA_TESTG_UNI_BA.BAT

The "get" message batch file for QMB (SYSB_QMB_TESTG_UNI_AB.BAT) contains:

@ECHO This takes 15 seconds to run
call "C:\Program Files\IBM\WebSphere MQ\bin\amqsget" CAPA.TO.APPB.RECVQ QMB
@ECHO You should see: test1

From CLP-B, run the file as:

$ SYSB_QMB_TESTG_UNI_AB.BAT

To browse a message on a queue

It is useful to be able to browse a queue, especially when setting up Event Publishing. There are three ways to browse the messages on a Local Queue: we can use the rfhutil utility or the amqsbcg sample program, both of which are WebSphere MQ entities, or we can use the asnqmfmt Q replication command.

Using the WebSphere MQ rfhutil utility: The rfhutil utility is part of a WebSphere MQ support pack available to download from the web—to find the current download website, simply type rfhutil into an internet search engine. The installation is very simple—unzip the file and run the rfhutil.exe file.

Using the WebSphere MQ amqsbcg sample program: The amqsbcg sample program displays messages, including message descriptors:

$ amqsbcg CAPA.TO.APPB.RECVQ QMB


IBM WebSphere MQ commands

Packt
25 Aug 2010
10 min read
(For more resources on IBM, see here.)

After reading this article, you will not be a WebSphere MQ expert, but you will have brought your knowledge of MQ to a level where you can have a sensible conversation with your site's MQ administrator about what the Q replication requirements are.

An introduction to MQ

In a nutshell, WebSphere MQ is an assured delivery mechanism, which consists of queues managed by Queue Managers. We can put messages onto, and retrieve messages from, queues, and the movement of messages between queues is facilitated by components called Channels and Transmission Queues. There are a number of fundamental points that we need to know about WebSphere MQ:

- All objects in WebSphere MQ are case sensitive
- We cannot read messages from a Remote Queue (only from a Local Queue)
- We can only put a message onto a Local Queue (not a Remote Queue)

It does not matter at this stage if you do not understand these points; all will become clear in the following sections. There are some standards regarding WebSphere MQ object names:

- Queue names, process names, and Queue Manager names can have a maximum length of 48 characters
- Channel names can have a maximum length of 20 characters
- The following characters are allowed in names: A-Z, a-z, 0-9, and the . / _ % symbols
- There is no implied structure in a name—dots are there for readability

Now let's move on to look at MQ queues in a little more detail.

MQ queues

MQ queues can be thought of as conduits to transport messages between Queue Managers. There are four different types of MQ queues and one related object. The four different types of queues are: Local Queue (QL), Remote Queue (QR), Transmission Queue (TQ), and Dead Letter Queue, and the related object is a Channel (CH). One of the fundamental processes of WebSphere MQ is the ability to move messages between Queue Managers.
Let's take a high-level look at how messages are moved, as shown in the following diagram: When the application Appl1 wants to send a message to application Appl2, it opens a queue, and the local Queue Manager (QM1) determines whether it is a Local Queue or a Remote Queue. When Appl1 issues an MQPUT command to put a message onto the queue, then if the queue is local, the Queue Manager puts the message directly onto that queue. If the queue is a Remote Queue, the Queue Manager puts the message onto a Transmission Queue. The Transmission Queue sends the message, using the Sender Channel on QM1, to the Receiver Channel on the remote Queue Manager (QM2). The Receiver Channel puts the message onto a Local Queue on QM2. Appl2 issues an MQGET command to retrieve the message from this queue.

Now let's move on to look at the queues used by Q replication, and in particular unidirectional replication, as shown in the following diagram. What we want to show here is the relationship between Remote Queues, Transmission Queues, Channels, and Local Queues. As an example, let's look at the path a message takes from Q Capture on QMA to Q Apply on QMB. Note that in this diagram the Listeners are not shown. Q Capture puts the message onto a remotely-defined queue on QMA (the local Queue Manager for Q Capture). This Remote Queue (CAPA.TO.APPB.SENDQ.REMOTE) is effectively a "place holder": it points to a Local Queue (CAPA.TO.APPB.RECVQ) on QMB and specifies the Transmission Queue (QMB.XMITQ) it should use to get there. The Transmission Queue has, as part of its definition, the Channel (QMA.TO.QMB) to use. The Channel QMA.TO.QMB has, as part of its definition, the IP address and listening port number of the remote Queue Manager (note that we do not name the remote Queue Manager in this definition—it is specified in the definition for the Remote Queue).
The definition for the unidirectional Replication Queue Map (the circled queue names) is:

SENDQ: CAPA.TO.APPB.SENDQ.REMOTE on the source
RECVQ: CAPA.TO.APPB.RECVQ on the target
ADMINQ: CAPA.ADMINQ.REMOTE on the target

Let's look at the Remote Queue definition for CAPA.TO.APPB.SENDQ.REMOTE, shown next. On the left-hand side are the definitions on QMA, which comprise the Remote Queue, the Transmission Queue, and the Channel definition. The definitions on QMB are on the right-hand side and comprise the Local Queue and the Receiver Channel. Let's break these definitions down to their core values to show the relationship between the different parameters. We define a Remote Queue by matching up the corresponding values in the definitions on the two Queue Managers. For definitions on QMA, QMA is the local system and QMB is the remote system; for definitions on QMB, QMB is the local system and QMA is the remote system. The values that have to correspond are: the Remote Queue Manager name, the name of the queue on the remote system, the Transmission Queue name, the port number that the remote system is listening on, the IP address of the Remote Queue Manager, the Local Queue Manager name, and the Channel name.

Queue names:
- QMB: Decide on the Local Queue name on QMB—CAPA.TO.APPB.RECVQ.
- QMA: Decide on the Remote Queue name on QMA—CAPA.TO.APPB.SENDQ.REMOTE.

Channels:
- QMB: Define a Receiver Channel on QMB, QMA.TO.QMB—make sure the channel type (CHLTYPE) is RCVR. The Channel names on QMA and QMB have to match: QMA.TO.QMB.
- QMA: Define a Sender Channel, which takes the messages from the Transmission Queue QMB.XMITQ and points to the IP address and listening port number of QMB. The Sender Channel name must be QMA.TO.QMB.

Let's move on from unidirectional replication to bidirectional replication. The bidirectional queue diagram is shown next; it is a cut-down version of the full diagram of the WebSphere MQ layer and just shows the queue names and types without the details.
The principles in bidirectional replication are the same as for unidirectional replication. There are two Replication Queue Maps—one going from QMA to QMB (as in unidirectional replication) and one going from QMB to QMA.

MQ queue naming standards

The naming of the WebSphere MQ queues is an important part of Q replication setup. It may be that your site already has a naming standard for MQ queues, but if it does not, then here are some thoughts on the subject (general WebSphere MQ naming restrictions were discussed in the An introduction to MQ section):

- Queues are related to Q Capture and Q Apply programs, so it is useful to reflect that fact in the names of the queues. A Q Capture needs a local Restart Queue, and we use the name CAPA.RESTARTQ.
- Each Queue Manager can have a Dead Letter Queue. We use the prefix DEAD.LETTER.QUEUE with a suffix of the Queue Manager name, giving DEAD.LETTER.QUEUE.QMA.
- Receive Queues are related to Send Queues: for every Send Queue, we need a Receive Queue. Our Send Queue names are made up of where they are coming from, Q Capture on QMA (CAPA), and where they are going to, Q Apply on QMB (APPB). We also want to record that it is a Send Queue and that it is a remote definition, so we end up with CAPA.TO.APPB.SENDQ.REMOTE. The corresponding Receive Queue will be called CAPA.TO.APPB.RECVQ.
- Transmission Queues should reflect the name of the "to" Queue Manager. Our Transmission Queue on QMA is called QMB.XMITQ, reflecting the Queue Manager that it is going to and that it is a Transmission Queue. Using this naming convention on QMB, the Transmission Queue is called QMA.XMITQ.
- Channels should reflect the names of the "from" and "to" Queue Managers. Our Sender Channel definition on QMA is QMA.TO.QMB, reflecting that it is a channel from QMA to QMB, and the Receiver Channel on QMB is also called QMA.TO.QMB. The Receiver Channel on QMA is called QMB.TO.QMA, for a Sender Channel of the same name on QMB.
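Because every queue and channel name above is derived mechanically from the Queue Manager, Q Capture, and Q Apply names, the convention can be captured in a small helper. The following is an illustrative sketch only (a hypothetical utility, not part of WebSphere MQ or the Q replication tooling); it also checks the MQ name limits mentioned earlier (48 characters for queue names, 20 for channel names):

```javascript
// Derive the Q replication MQ object names for one direction of
// replication, following the naming convention described above.
function replicationNames(fromQM, toQM, capture, apply) {
  return {
    sendQueue: `${capture}.TO.${apply}.SENDQ.REMOTE`, // remote definition on fromQM
    receiveQueue: `${capture}.TO.${apply}.RECVQ`,     // local queue on toQM
    transmissionQueue: `${toQM}.XMITQ`,               // on fromQM, named for the "to" QM
    senderChannel: `${fromQM}.TO.${toQM}`,            // sender and receiver names must match
    receiverChannel: `${fromQM}.TO.${toQM}`,
    deadLetterQueue: `DEAD.LETTER.QUEUE.${fromQM}`,
  };
}

// WebSphere MQ limits: queue names up to 48 characters, channel names up
// to 20, using only A-Z, a-z, 0-9, and the . / _ % symbols.
const validQueueName = (n) => n.length <= 48 && /^[A-Za-z0-9./_%]+$/.test(n);
const validChannelName = (n) => n.length <= 20 && /^[A-Za-z0-9./_%]+$/.test(n);
```

Calling replicationNames("QMA", "QMB", "CAPA", "APPB") reproduces the names used throughout this article, such as CAPA.TO.APPB.SENDQ.REMOTE and QMB.XMITQ.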
A Replication Queue Map definition requires a local Send Queue, a remote Receive Queue, and a remote Administration Queue. The Send Queue is the queue that Q Capture writes to, the Receive Queue is the queue that Q Apply reads from, and the Administration Queue is the queue that Q Apply uses to write messages back to Q Capture.

MQ queues required for different scenarios

This section lists the Local and Remote Queues and Channels that are needed for each type of replication scenario. Note that the queues and channels required for Event Publishing are a subset of those required for unidirectional replication, but creating extra queues and not using them is not a problem.

The queues and channels required for unidirectional replication (including replicating to a Stored Procedure) are:

QMA (7):
- 3 Local Queues: CAPA.ADMINQ, CAPA.RESTARTQ, DEAD.LETTER.QUEUE.QMA
- 1 Remote Queue: CAPA.TO.APPB.SENDQ.REMOTE
- 1 Transmission Queue: QMB.XMITQ
- 1 Sender Channel: QMA.TO.QMB
- 1 Receiver Channel: QMB.TO.QMA

QMB (7):
- 2 Local Queues: CAPA.TO.APPB.RECVQ, DEAD.LETTER.QUEUE.QMB
- 1 Remote Queue: CAPA.ADMINQ.REMOTE
- 1 Transmission Queue: QMA.XMITQ
- 1 Sender Channel: QMB.TO.QMA
- 1 Receiver Channel: QMA.TO.QMB
- 1 Model Queue: IBMQREP.SPILL.MODELQ

The queues required for Event Publishing are:

QMA:
- 3 Local Queues: CAPA.ADMINQ, CAPA.RESTARTQ, DEAD.LETTER.QUEUE.QMA
- 1 Remote Queue: CAPA.TO.APPB.SENDQ.REMOTE
- 1 Transmission Queue: QMB.XMITQ
- 1 Sender Channel: QMA.TO.QMB
- 1 Receiver Channel: QMB.TO.QMA

QMB:
- 2 Local Queues: CAPA.TO.APPB.RECVQ, DEAD.LETTER.QUEUE.QMB
- 1 Receiver Channel: QMA.TO.QMB

The queues and channels required for bidirectional/P2P two-way replication are:

QMA (10):
- 4 Local Queues: CAPA.ADMINQ, CAPA.RESTARTQ, DEAD.LETTER.QUEUE.QMA, CAPB.TO.APPA.RECVQ
- 2 Remote Queues: CAPA.TO.APPB.SENDQ.REMOTE, CAPB.ADMINQ.REMOTE
- 1 Transmission Queue: QMB.XMITQ
- 1 Sender Channel: QMA.TO.QMB
- 1 Receiver Channel: QMB.TO.QMA
- 1 Model Queue: IBMQREP.SPILL.MODELQ

QMB (10):
- 4 Local Queues: CAPB.ADMINQ, CAPB.RESTARTQ, DEAD.LETTER.QUEUE.QMB, CAPA.TO.APPB.RECVQ
- 2 Remote Queues: CAPB.TO.APPA.SENDQ.REMOTE, CAPA.ADMINQ.REMOTE
- 1 Transmission Queue: QMA.XMITQ
- 1 Sender Channel: QMB.TO.QMA
- 1 Receiver Channel: QMA.TO.QMB
- 1 Model Queue: IBMQREP.SPILL.MODELQ

The queues and channels required for P2P three-way replication are:

QMA (16):
- 5 Local Queues: CAPA.ADMINQ, CAPA.RESTARTQ, DEAD.LETTER.QUEUE.QMA, CAPB.TO.APPA.RECVQ, CAPC.TO.APPA.RECVQ
- 4 Remote Queues: CAPA.TO.APPB.SENDQ.REMOTE, CAPB.ADMINQ.REMOTE, CAPA.TO.APPC.SENDQ.REMOTE, CAPC.ADMINQ.REMOTE
- 2 Transmission Queues: QMB.XMITQ, QMC.XMITQ
- 2 Sender Channels: QMA.TO.QMB, QMA.TO.QMC
- 2 Receiver Channels: QMB.TO.QMA, QMC.TO.QMA
- 1 Model Queue: IBMQREP.SPILL.MODELQ

QMB (16):
- 5 Local Queues: CAPB.ADMINQ, CAPB.RESTARTQ, DEAD.LETTER.QUEUE.QMB, CAPA.TO.APPB.RECVQ, CAPC.TO.APPB.RECVQ
- 4 Remote Queues: CAPB.TO.APPA.SENDQ.REMOTE, CAPA.ADMINQ.REMOTE, CAPB.TO.APPC.SENDQ.REMOTE, CAPC.ADMINQ.REMOTE
- 2 Transmission Queues: QMA.XMITQ, QMC.XMITQ
- 2 Sender Channels: QMB.TO.QMA, QMB.TO.QMC
- 2 Receiver Channels: QMA.TO.QMB, QMC.TO.QMB
- 1 Model Queue: IBMQREP.SPILL.MODELQ

QMC (16):
- 5 Local Queues: CAPC.ADMINQ, CAPC.RESTARTQ, DEAD.LETTER.QUEUE.QMC, CAPA.TO.APPC.RECVQ, CAPB.TO.APPC.RECVQ
- 4 Remote Queues: CAPC.TO.APPA.SENDQ.REMOTE, CAPA.ADMINQ.REMOTE, CAPC.TO.APPB.SENDQ.REMOTE, CAPB.ADMINQ.REMOTE
- 2 Transmission Queues: QMA.XMITQ, QMB.XMITQ
- 2 Sender Channels: QMC.TO.QMA, QMC.TO.QMB
- 2 Receiver Channels: QMA.TO.QMC, QMB.TO.QMC
- 1 Model Queue: IBMQREP.SPILL.MODELQ
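The tables above follow a regular pattern, so for an N-way peer-to-peer topology the object inventory can be generated mechanically. The helper below is purely illustrative (it assumes the QMA/CAPA/APPA naming convention used throughout this article and is not a Q replication tool):

```javascript
// Enumerate the MQ objects each Queue Manager needs in a P2P topology,
// following the pattern in the tables above. Capture/apply program names
// are derived from the Queue Manager suffix (QMA -> CAPA/APPA).
function p2pObjects(qms) {
  const suffix = (qm) => qm.replace(/^QM/, "");
  return qms.map((qm) => {
    const me = suffix(qm);
    const others = qms.filter((o) => o !== qm).map(suffix);
    return {
      qm,
      localQueues: [
        `CAP${me}.ADMINQ`,
        `CAP${me}.RESTARTQ`,
        `DEAD.LETTER.QUEUE.${qm}`,
        ...others.map((o) => `CAP${o}.TO.APP${me}.RECVQ`),
      ],
      remoteQueues: others.flatMap((o) => [
        `CAP${me}.TO.APP${o}.SENDQ.REMOTE`,
        `CAP${o}.ADMINQ.REMOTE`,
      ]),
      transmissionQueues: others.map((o) => `QM${o}.XMITQ`),
      senderChannels: others.map((o) => `${qm}.TO.QM${o}`),
      receiverChannels: others.map((o) => `QM${o}.TO.${qm}`),
    };
  });
}
```

For three Queue Managers this reproduces the 16 objects per Queue Manager listed in the P2P three-way table: 5 Local Queues, 4 Remote Queues, 2 Transmission Queues, 2 Sender Channels, and 2 Receiver Channels, plus the Model Queue.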
Packt
25 Aug 2010
6 min read

Trapping Errors by Using Built-In Objects in JavaScript Testing

(For more resources on JavaScript, see here.) The Error object An Error is a generic exception, and it accepts an optional message that provides details of the exception. We can use the Error object by using the following syntax: new Error(message); // message can be a string or an integer Here's an example that shows the Error object in action. The source code for this example can be found in the file error-object.html. <html><head><script type="text/javascript">function factorial(x) { if(x == 0) { return 1; } else { return x * factorial(x-1); } } try { var a = prompt("Please enter a positive integer", ""); if(a < 0){ var error = new Error(1); alert(error.message); alert(error.name); throw error; } else if(isNaN(a)){ var error = new Error("it must be a number"); alert(error.message); alert(error.name); throw error; } var f = factorial(a); alert(a + "! = " + f); } catch (error) { if(error.message == 1) { alert("value cannot be negative"); } else if(error.message == "it must be a number") { alert("value must be a number"); } else throw error; } finally { alert("ok, all is done!"); } </script> </head> <body> </body> </html> You may have noticed that the structure of this code is similar to the previous examples, in which we demonstrated try, catch, finally, and throw. In this example, we have made use of what we have learned, and instead of throwing the error directly, we have used the Error object. I need you to focus on the code given above. Notice that we have used an integer and a string as the message argument for var error, namely new Error(1) and new Error("it must be a number"). Take note that we can make use of alert() to create a pop-up window to inform the user of the error that has occurred and the name of the error, which is Error, as it is an Error object. Similarly, we can make use of the message property to create program logic for the appropriate error message. 
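The same message-based dispatch can be exercised outside the browser as well. The following is a minimal, console-friendly sketch of the logic above (the helper names validate and check are mine, not from the example files); it returns the messages that the alert() calls would have displayed:

```javascript
// Sketch of the article's message-based error handling, with return
// values in place of alert() windows. The messages mirror the original.
function validate(a) {
  if (a < 0) {
    throw new Error(1); // integer message, as in the article
  } else if (isNaN(a)) {
    throw new Error("it must be a number");
  }
  return a;
}

function check(a) {
  try {
    validate(a);
    return "ok";
  } catch (error) {
    if (error.message == 1) {
      return "value cannot be negative";
    } else if (error.message == "it must be a number") {
      return "value must be a number";
    }
    throw error; // rethrow anything we did not create ourselves
  }
}
```

Note that error.message is always a string, so the integer passed to new Error(1) comes back as "1"; the loose == comparison is what makes the check against the number 1 succeed.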
It is important to see how the Error object works, as the built-in objects that we are going to learn about next work in a similar way.

The RangeError object

A RangeError occurs when a number is outside its appropriate range. The syntax is similar to what we have seen for the Error object. Here's the syntax for RangeError:

new RangeError(message); // message can either be a string or an integer

We'll start with a simple example to show how this works. Check out the following code, which can be found in the source code folder, in the file rangeerror.html:

<html>
<head>
<script type="text/javascript">
try {
  var anArray = new Array(-1); // an array length must be positive
}
catch (error) {
  alert(error.message);
  alert(error.name);
}
finally {
  alert("ok, all is done!");
}
</script>
</head>
<body>
</body>
</html>

When you run this example, you should see an alert window informing you that the array is of an invalid length, followed by another alert window telling you that the error is a RangeError, as this is a RangeError object. If you look at the code carefully, you will see that I have deliberately created this error by giving the array a negative length.

The ReferenceError object

A ReferenceError occurs when a variable, object, function, or array that you have referenced does not exist. The syntax is similar to what you have seen so far:

new ReferenceError(message); // message can either be a string or an integer

As this is pretty straightforward, I'll dive right into the next example. The code for the following example can be found in the source code folder, in the file referenceerror.html.
<html>
<head>
<script type="text/javascript">
try {
  x = y; // notice that y is not defined
}
catch (error) {
  alert(error);
  alert(error.message);
  alert(error.name);
}
finally {
  alert("ok, all is done!");
}
</script>
</head>
<body>
</body>
</html>

Take note that y is not defined, and we are expecting to catch this error in the catch block. Now try the previous example in your Firefox browser. You should receive four alert windows regarding the error, each giving you a different message:

ReferenceError: y is not defined
y is not defined
ReferenceError
ok, all is done!

If you are using Internet Explorer, you will receive slightly different messages:

[object Error]
message y is undefined
TypeError
ok, all is done!

The TypeError object

A TypeError is thrown when we try to access a value that is of the wrong type. The syntax is as follows:

new TypeError(message); // message can be a string or an integer and it is optional

An example of TypeError is as follows:

<html>
<head>
<script type="text/javascript">
try {
  y = 1;
  var test = function weird() {
    var foo = "weird string";
  };
  y = test.foo(); // foo is not a function
}
catch (error) {
  alert(error);
  alert(error.message);
  alert(error.name);
}
finally {
  alert("ok, all is done!");
}
</script>
</head>
<body>
</body>
</html>

If you try running this code in Firefox, you should receive an alert box stating that it is a TypeError. This is because test.foo() is not a function, and calling it results in a TypeError; JavaScript is capable of finding out what kind of error has been caught. Similarly, you can use the traditional method of throwing your own TypeError(), by uncommenting the code in the source file. The remaining built-in objects are less commonly used, so we'll move through their syntax quickly.
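Because each of these built-in objects is a distinct type, an alternative to comparing message strings is to test the caught error with instanceof. The following sketch is not from the article's example files, and it runs outside the browser too; it classifies the error types we have covered:

```javascript
// Run the supplied function and report which kind of error it raised.
function classify(fn) {
  try {
    fn();
    return "no error";
  } catch (e) {
    if (e instanceof RangeError) return "RangeError";
    if (e instanceof ReferenceError) return "ReferenceError";
    if (e instanceof TypeError) return "TypeError";
    return e.name; // any other built-in or custom error
  }
}
```

For example, classify(function () { new Array(-1); }) reports a RangeError, while calling a property that is not a function reports a TypeError.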
Packt
25 Aug 2010
6 min read

Basics of Exception Handling Mechanism in JavaScript Testing

(For more resources on JavaScript, see here.) Issues with combining scripts Consider the real-life situation where we typically use external JavaScript: what happens, and what kind of issues can we expect, if we use more than one external JavaScript file? We'll cover all of this in the subsections below. We'll start with the first issue: combining event handlers. JavaScript helps to bring life to our web page by adding interactivity. Event handlers are the heartbeat of interactivity. For example, we click on a button and a pop-up window appears, or we move our cursor over an HTML div element and the element changes color to provide visual feedback. To see how we can combine event handlers, consider the following example, which can be found in the source code folder in the files combine-event-handlers.html and combine-event-handlers.js. In combine-event-handlers.html, we have: <html> <head> <title>Event handlers</title> <script type="text/javascript" src="combine-event-handlers.js"></script> </head> <body> <div id="one" onclick="changeOne(this);"><p>Testing One</p></div> <div id="two" onclick="changeTwo(this);"><p>Testing Two</p></div> <div id="three" onclick="changeThree(this);"><p>Testing Three</p></div> </body></html> Notice that each of the div elements is handled by a different function, namely changeOne(), changeTwo(), and changeThree() respectively.
The event handlers are found in combine-event-handlers.js: function changeOne(element) { var id = element.id; var obj = document.getElementById(id); obj.innerHTML = ""; obj.innerHTML = "<h1>One is changed!</h1>"; return true;}function changeTwo(element) { var id = element.id; var obj = document.getElementById(id); obj.innerHTML = ""; obj.innerHTML = "<h1>Two is changed!</h1>"; return true;}function changeThree(element) { var id = element.id; var obj = document.getElementById(id); obj.innerHTML = ""; obj.innerHTML = "<h1>Three is changed!</h1>"; return true;} You might want to go ahead and test the program. As you click on the text, the content changes based on what is defined in the functions. However, we can rewrite the code such that all of the events are handled by one function. We can rewrite combine-event-handlers.js as follows: function combine(element) { var id = element.id; var obj = document.getElementById(id); if(id == "one"){ obj.innerHTML = ""; obj.innerHTML = "<h1>One is changed!</h1>"; return true;}else if(id == "two"){ obj.innerHTML = ""; obj.innerHTML = "<h1>Two is changed!</h1>"; return true;}else if(id == "three"){ obj.innerHTML = ""; obj.innerHTML = "<h1>Three is changed!</h1>"; return true;}else{ ; // do nothing }} When we use if else statements to check the id of the div elements that we are working on, and change the HTML contents accordingly, we will save quite a few lines of code. Take note that we have renamed the function to combine(). Because we have made some changes to the JavaScript code, we'll need to make the corresponding changes to our HTML. 
So combine-event-handlers.html will be rewritten as follows: <html> <head> <title>Event handlers</title> <script type="text/javascript" src="combine-event-handlers.js"></script> </head> <body> <div id="one" onclick="combine(this);"><p>Testing One</p></div> <div id="two" onclick="combine(this);"><p>Testing Two</p></div> <div id="three" onclick="combine(this);"><p>Testing Three</p></div> </body></html> Notice that the div elements are now all handled by the same function, combine(). These rewritten examples can be found in combine-event-handlers-combined.html and combine-event-handlers-combined.js. Naming clashes Removing name clashes is the next issue that we need to deal with. Similar to the issue of combining event handlers, naming clashes occur when two or more variables, functions, events, or other objects have the same name. Even though these variables or objects can be contained in different files, name clashes prevent our JavaScript program from running properly. Consider the following code snippets: In nameclash.html, we have the following code: <html> <head> <title>testing</title> <script type="text/javascript" src="nameclash1.js"></script> </head> <body> <div id="test" onclick="change(this);"><p>Testing</p></div> </body></html> In nameclash1.js, we have the following code: function change(element) { var id = element.id; var obj = document.getElementById(id); obj.innerHTML = ""; obj.innerHTML = "<h1>This is changed!</h1>"; return true;} If you run this code by opening the file in your browser and clicking on the text Testing, the HTML contents will be changed as expected. However, if we add <script type="text/javascript" src="nameclash2.js"></script> after the <title></title> tag, and the content of nameclash2.js is as follows: function change(element) { alert("so what?!");} then we will not be able to execute the code properly. We will see the alert box instead of the HTML contents being changed.
If we switch the order of the two external JavaScript files, then the HTML contents of the div elements will be changed and we will not see the alert box. With such naming clashes our program becomes unpredictable; the solution is to use unique names for your functions, classes, and events. If you have a relatively large program, it is advisable to use namespaces, a common strategy in several JavaScript libraries such as YUI and jQuery.
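As a small sketch of the namespace idea (this code is not part of the article's example files, and the names NameClash1 and NameClash2 are made up), each script attaches its function to its own global object, so the two identically named change functions no longer collide:

```javascript
// Each "file" contributes a single namespace object; both change()
// functions can now coexist without clashing.
var NameClash1 = {
  change: function () {
    return "<h1>This is changed!</h1>";
  }
};

var NameClash2 = {
  change: function () {
    return "so what?!";
  }
};
```

In the HTML, the onclick handler would then call NameClash1.change(this) rather than the bare change(this), so there is no ambiguity about which function runs.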
Packt
25 Aug 2010
8 min read

Building the Facebook Clone using Ruby

(For more resources on Ruby, see here.) This is the largest clone and has many components. Some of the less interesting parts of the code are not listed or described here. To get access to the full source code please go to http://github.com/sausheong/saushengine. Configuring the clone We use a few external APIs in Colony so we need to configure our access to these APIs. In a Colony all these API keys and settings are stored in a Ruby file called config.rb as below. S3_CONFIG = {}S3_CONFIG['AWS_ACCESS_KEY'] = '<AWS ACCESS KEY>'S3_CONFIG['AWS_SECRET_KEY'] = '<AWS SECRET KEY>'RPX_API_KEY = '<RPX API KEY>' Modeling the data You will find a large number of classes and relationships in this article. The following diagram shows how the clone is modeled: User The first class we look at is the User class. There are more relationships with other classes and the relationship with other users follows that of a friends model rather than a followers model. class User include DataMapper::Resource property :id, Serial property :email, String, :length => 255 property :nickname, String, :length => 255 property :formatted_name, String, :length => 255 property :sex, String, :length => 6 property :relationship_status, String property :provider, String, :length => 255 property :identifier, String, :length => 255 property :photo_url, String, :length => 255 property :location, String, :length => 255 property :description, String, :length => 255 property :interests, Text property :education, Text has n, :relationships has n, :followers, :through => :relationships, :class_name => 'User', :child_key => [:user_id] has n, :follows, :through => :relationships, :class_name => 'User', :remote_name => :user, :child_key => [:follower_id] has n, :statuses belongs_to :wall has n, :groups, :through => Resource has n, :sent_messages, :class_name => 'Message', :child_key => [:user_id] has n, :received_messages, :class_name => 'Message', :child_key => [:recipient_id] has n, :confirms has n, 
:confirmed_events, :through => :confirms, :class_name => 'Event', :child_key => [:user_id], :date.gte => Date.today has n, :pendings has n, :pending_events, :through => :pendings, :class_name => 'Event', :child_key => [:user_id], :date.gte => Date.today has n, :requests has n, :albums has n, :photos, :through => :albums has n, :comments has n, :activities has n, :pages validates_is_unique :nickname, :message => "Someone else has taken up this nickname, try something else!" after :create, :create_s3_bucket after :create, :create_wall def add_friend(user) Relationship.create(:user => user, :follower => self) end def friends (followers + follows).uniq end def self.find(identifier) u = first(:identifier => identifier) u = new(:identifier => identifier) if u.nil? return u end def feed feed = [] + activities friends.each do |friend| feed += friend.activities end return feed.sort {|x,y| y.created_at <=> x.created_at} end def possessive_pronoun sex.downcase == 'male' ? 'his' : 'her' end def pronoun sex.downcase == 'male' ? 'he' : 'she' end def create_s3_bucket S3.create_bucket("fc.#{id}") end def create_wall self.wall = Wall.create self.save end def all_events confirmed_events + pending_events end def friend_events events = [] friends.each do |friend| events += friend.confirmed_events end return events.sort {|x,y| y.time <=> x.time} end def friend_groups groups = [] friends.each do |friend| groups += friend.groups end groups - self.groups endend As mentioned in the design section above, the data used in Colony is user-centric. All data in Colony eventually links up to a user. 
A user has following relationships with other models: A user has none, one, or more status updates A user is associated with a wall A user belongs to none, one, or more groups A user has none, one, or more sent and received messages A user has none, one, or more confirmed and pending attendances at events A user has none, one, or more user invitations A user has none, one, or more albums and in each album there are none, one, or more photos A user makes none, one, or more comments A user has none, one, or more pages A user has none, one, or more activities Finally of course, a user has one or more friends Once a user is created, there are two actions we need to take. Firstly, we need to create an Amazon S3 bucket for this user, to store his photos. after :create, :create_s3_bucketdef create_s3_bucket S3.create_bucket("fc.#{id}")end We also need to create a wall for the user where he or his friends can post to. after :create, :create_walldef create_wall self.wall = Wall.create self.saveend Adding a friend means creating a relationship between the user and the friend. def add_friend(user) Relationship.create(:user => user, :follower => self) end Colony treats the following relationship as a friends relationship. The question here is who will initiate the request to join? This is why when we ask the User object to give us its friends, it will add both followers and follows together and return a unique array representing all the user's friends. def friends (followers + follows).uniqend In the Relationship class, each time a new relationship is created, an Activity object is also created to indicate that both users are now friends. 
class Relationship include DataMapper::Resource property :user_id, Integer, :key => true property :follower_id, Integer, :key => true belongs_to :user, :child_key => [:user_id] belongs_to :follower, :class_name => 'User', :child_key => [:follower_id] after :save, :add_activity def add_activity Activity.create(:user => user, :activity_type => 'relationship', :text => "<a href='/user/#{user.nickname}'>#{user.formatted_name}</a> and <a href='/user/#{follower.nickname}'>#{follower.formatted_name}</a> are now friends.") end end Finally we get the user's news feed by taking the user's activities and going through each of the user's friends, their activities as well. def feed feed = [] + activities friends.each do |friend| feed += friend.activities end return feed.sort {|x,y| y.created_at <=> x.created_at}end Request We use a simple mechanism for users to invite other users to be their friends. The mechanism goes like this: Alice identifies another Bob whom she wants to befriend and sends him an invitation This creates a Request class which is then attached to Bob When Bob approves the request to be a friend, Alice is added as a friend (which is essentially making Alice follow Bob, since the definition of a friend in Colony is either a follower or follows another user) class Request include DataMapper::Resource property :id, Serial property :text, Text property :created_at, DateTime belongs_to :from, :class_name => User, :child_key => [:from_id] belongs_to :user def approve self.user.add_friend(self.from) endend Message Messages in Colony are private messages that are sent between users of Colony. As a result, messages sent or received are not tracked as activities in the user's activity feed. 
class Message include DataMapper::Resource property :id, Serial property :subject, String property :text, Text property :created_at, DateTime property :read, Boolean, :default => false property :thread, Integer belongs_to :sender, :class_name => 'User', :child_key => [:user_id] belongs_to :recipient, :class_name => 'User', :child_key => [:recipient_id] end A message must have a sender and a recipient, both of which are users. has n, :sent_messages, :class_name => 'Message', :child_key => [:user_id]has n, :received_messages, :class_name => 'Message', :child_key => [:recipient_id] The read property tells us if the message has been read by the recipient, while the thread property tells us how to group messages together for display. Album An activity is logged, each time an album is created. class Album include DataMapper::Resource property :id, Serial property :name, String, :length => 255 property :description, Text property :created_at, DateTime belongs_to :user has n, :photos belongs_to :cover_photo, :class_name => 'Photo', :child_key => [:cover_photo_id] after :save, :add_activity def add_activity Activity.create(:user => user, :activity_type => 'album', :text => "<a href='/user/#{user.nickname}'>#{user.formatted_name}</a> created a new album <a href='/album/#{self.id}'>#{self.name}</a>") end end
Packt
23 Aug 2010
3 min read

Upgrading OpenCart

This article is suggested reading even for an experienced user. It will show us the problems that might occur while upgrading, so we can avoid them. Making backups of the current OpenCart system One thing we should certainly do is back up our files and database before starting any upgrade process. This will allow us to restore the OpenCart system if the upgrade fails and we cannot resolve the cause. Time for action – backing up OpenCart files and database In this section, we will learn how to back up the necessary files and database of the current OpenCart system before starting the upgrade process. We will start with backing up the database. We have two ways to achieve this. The first method is easier and uses the built-in OpenCart module in the administration panel. We need to open the System | Backup / Restore menu. In this screen, we should make sure that all modules are selected. If not, click on the Select All link first. Then, we will need to click on the Backup button. A backup.sql file will be generated for us automatically. We will save the file on our local computer. The second method of backing up the OpenCart database is through the Backup Wizard in the cPanel administration panel, which most hosting services provide as a standard management tool for their clients. If you have already applied the first method, you can skip the following section. Still, it is useful to learn about the alternative Backup Wizard tool in cPanel. Let's open the cPanel screen that our hosting service provides. Click on the Backup Wizard item under the Files section. On the next screen, click on the Backup button. We will click on the MySQL Databases button on the Select Partial Backup menu. We will right-click on our OpenCart database backup file and save it on our local computer by clicking on Save Link As. Let's return to the cPanel home screen and open File Manager under the Files menu.
Let's browse into the web directory where our OpenCart store files are stored. Right-click on the directory and then Compress it. We will compress the whole OpenCart directory as a Zip Archive file. As we can see from the following screenshot, the compressed store.zip file resides on the web server. We can also optionally download the file to our local computer. What just happened? We have backed up our OpenCart database using cPanel. After this, we also backed up our OpenCart files as a compressed archive file using File Manager in cPanel.
Packt
23 Aug 2010
4 min read

Implementing Panels in Drupal 6

(For more resources on Drupal, see here.) Introduction This is an article centered on building website looks with Panels. We will create a custom front page for a website and a node edit form with a Panel. We will also see the process of generating a node override within Panels and take a look at Mini panels. Making a new front page using Panels and Views (for dynamic content display) We will now create a recipe with Views and Panels to make a custom front page. Getting ready We will need to install the Views module, if we have not done so already. As the Views and Panels projects are both led by Earl Miles (merlinofchaos), the UIs are very similar. We will not discuss Views in detail as it is outside the scope of this article. To use this recipe, a basic knowledge of Views is required. We will use our recipe of flexible layout as our start for this recipe. To make a better layout and a custom website, I recommend using adaptive themes (http://adaptivethemes.com/). Here, in this recipe, I have used that theme. The adaptive theme is a starter theme and it includes several custom panels. There is also built-in support for skins, which helps to make theming a lot easier. We will be using adaptive themes in this recipe for demonstration and will change our administration view from Garland to the adaptive theme. The adaptive themes add extra layouts as shown in the following screenshot. How to do it... Go to the Flexible layout recipe we created, or create your own layout using the same recipe. Now, we will create Views to be included in our Panels layout. Assuming that Views is installed, go to Site building | Views. Add a View. In the View name, add storyblock1, and add a description of your choice. Select the Row style as Node. Put in Items to display as 3. In the Style, we can select Unformatted or Grid depending on how you want to display the nodes. I will use Grid. In the Sort criteria, put in Node: Post date asc and Node: Type asc.
In Filters, we just want the posts that are promoted to the front page. Do a live preview. We will need to add a Block display for the default view, so that the View is available as a block that we can select in our Panels. We could also use the view's default output directly as a Panel pane, but using a block display gives us the "read more" links by default; with the direct view output, we would have to create them ourselves. Say we create a block, storyblock1, as shown in the following screenshot: Now we need to go to the flexible layout we created earlier. Go to the Content tab. Earlier, we had displayed a static block; now we will display a dynamic View. Disable the earlier panes in the first static row. Select the gear icon in the first static row and select Add content | Miscellaneous. The custom view block will be listed here, as shown in the following screenshot. Select it. Save and preview. We have now integrated the dynamic View in one of our Panel panes. Let's add sample content to each region now. You can select whatever content you want on your front page, as shown in the following screenshot: Go to Site configuration | Site information. Change the default home page to the created Panels page. Your home page is now the custom Panel page. How it works... In this recipe, we implemented Panel panes with views and blocks to make a separate custom page and a separate display for the existing content in the website.

Packt
20 Aug 2010
11 min read

Adding Features to your Joomla! Form using ChronoForms

(For more resources on ChronoForms, see here.) Introduction We have so far mostly worked with fairly standard forms where the user is shown some inputs, enters some data, and the results are e-mailed and/or saved to a database table. Many forms are just like this, and some have other features added. These features can be of many different kinds and the recipes in this article are correspondingly a mixture. Some, like Adding a validated checkbox, change the way the form works. Others, like Signing up to a newsletter service change what happens after the form is submitted. While you can use these recipes as they are presented, they are just as useful as suggestions for ways to use ChronoForms to solve a wide range of user interactions on your site. Adding a validated checkbox Checkboxes are less often used on forms than most of the other elements and they have some slightly unusual behavior that we need to manage. ChronoForms will do a little to help us, but not everything that we need. In this recipe, we'll look at one of the most common applications—a stand alone checkbox that the user is asked to click to ensure that they've accepted some terms and conditions. We want to make sure that the form is not submitted unless the box is checked. Getting ready We'll just add one more element to our basic newsletter form. It's probably going to be best to recreate a new version of the form using the Form Wizard to make sure that we have a clean starting point. How to do it... In the Form Wizard, create a new form with two TextBox elements. In the Properties box, add the Labels "Name" and "Email" and the Field Names "name" and "email" respectively. Now drag in a CheckBox element. You'll see that ChronoForms inserts the element with three checkboxes and we only need one. In the Properties box remove the default values and type in "I agree". While you are there change the label to "Terms and Conditions". 
Lastly, we want to make sure that this box is checked, so check the Validation | One Required checkbox and add "please confirm your agreement" in the Validation Message box. Apply the changes to the Properties. To complete the form, add the Button element, then save your form, publish it, and view it in your browser. To test, click the Submit button without entering anything. You should find that the form does not submit and an error message is displayed. How it works... The only special thing to notice here is that the validation we used was validate-one-required and not the more familiar required. Checkbox arrays, radio button groups, and select drop-downs will not work with the required option, as they always have a value set, at least from the perspective of the JavaScript that is running the validation. There's more... Validating the checkbox server-side If the checkbox is really important to us, then we may want to confirm that it has been checked using the server-side validation box. We want to check whether the box is checked and, if not, create an error message. However, there is a little problem: an unchecked checkbox doesn't return anything at all; there is simply no entry for it in the form results array. Joomla! has some functionality that will help us out though; the JRequest::getVar() function that we use to get the form results allows us to set a default value. If nothing is found in the form results, then the default value will be used instead. So we can add this code block to the server-side validation box: <?php $agree = JRequest::getString('check0[]', 'empty', 'post'); if ( $agree == 'empty' ) { return 'Please check the box to confirm your agreement'; } ?> Note: To test this, we need to remove the validate-one-required class from the input in the Form HTML. Now when we submit the empty form, we see the ChronoForms error message. Notice that the input name in the code snippet is check0[].
ChronoForms doesn't give you the option of setting the name of a checkbox element in the Form Wizard | Properties box. It assigns a check0, check1, and so on value for you. (You can edit this in the Form Editor if you like.) And because checkboxes often come in arrays of several linked boxes with the same name, ChronoForms also adds the [] to create an array name. If this isn't done then only the value of the last checked box will be returned. Locking the Submit button until the box is checked If we want to make the point about terms and conditions even more strongly then we can add some JavaScript to the form to disable the Submit button until the box is checked. We need to make one change to the Form HTML to make this task a little easier. ChronoForms does not add ID attributes to the Submit button input; so open the form in the Form Editor, find the line near the end of the Form HTML and alter it to read: <input value="Submit" name="submit" id='submit' type="submit" /> Now add the following snippet into the Form JavaScript box: // stop the code executing // until the page is loaded in the browser window.addEvent('load', function() { // function to enable and disable the submit button function agree() { if ( $('check00').checked == true ) { $('submit').disabled = false; } else { $('submit').disabled = true; } }; // disable the submit button on load $('submit').disabled = true; //execute the function when the checkbox is clicked $('check00').addEvent('click', agree); }); Apply or save the form and view it in your browser. Now as you tick or untick the checkbox, the submit button will be enabled and disabled. This is a simple example of adding a custom script to a form to add a useful feature. If you are reasonably competent in JavaScript, you will find that there is quite a lot more that you can do. There are different styles of laying out both JavaScript and PHP and sometimes fierce debates about where line breaks and spaces should go. 
We've adopted a style here that is hopefully fairly clear, reasonably compact, and more or less the same for both JavaScript and PHP. If it's not the style you are accustomed to, then we're sorry.

Adding an "other" box to a drop-down

Drop-downs are a valuable way of offering a list of choices for your user to select from. And sometimes it just isn't possible to make the list complete; there's always another option that someone will want to add. So we add an "Other" option to the drop-down. But that tells us nothing, so we also need an input to tell us what "other" means here.

Getting ready

We'll just add one more element to our basic newsletter form. We haven't used a drop-down before, but it is very similar to the checkbox element from the previous recipe.

How to do it...

Use the Form Wizard to create a form with two TextBox elements, a DropDown element, and a Button element. The changes to make in the DropDown element are:

Add "I heard from" in the Label
Change the Field Name to "hearabout"
Add some options to the Options box: "Google", "Newspaper", "Friend", and "Other"

Leave the Add Choose Option box checked and leave Choose Option in the Choose Option Text box. Apply the Properties box. Make any other changes you need to the form elements; then save the form, publish it, and view it in your browser.

Notice that, as well as the four options we added, the Choose Option entry is at the top of the list. That comes from the checkbox and text field that we left with their default values. It's important to have a "null" option like this in a drop-down for two reasons. First, so that it is obvious to a user that no choice has been made; otherwise it's very easy for them to leave the first option showing, and this value (Google in this case) will be returned by default. Second, so that we can validate with select-one-required if necessary. The "null" option has no value set and so can be detected by the validation script.
Now we just need one more text box to collect details if Other is selected. Open the form in the Wizard Edit and add one more TextBox element after the DropDown element. Give it the Label "please add details" and the name "other".

Even though we set the name to "other", ChronoForms will have left the input ID attribute as text_4 or something similar. Open the form in the Form Editor and change the ID to "other" as well. The same is true of the drop-down: the ID there is select_2; change that to "hearabout".

Now we need a script snippet to enable and disable the "other" text box when the Other option is selected in the drop-down. Here's the code to put in the Form JavaScript box:

window.addEvent('domready', function() {
  $('hearabout').addEvent('change', function() {
    if ($('hearabout').value == 'Other' ) {
      $('other').disabled = false;
    } else {
      $('other').disabled = true;
    }
  });
  $('other').disabled = true;
});

This is very similar to the code in the last recipe, except that it's been condensed a little more by merging the function directly into the addEvent(). When you view the form, you will see that the text box for "please add details" is grayed out and blocked until you select Other in the drop-down.

Make sure that you don't make the "please add details" input required. It's an easy mistake to make, but it stops the form working correctly, as you then have to select Other in the drop-down to be able to submit it.

How it works...

Once again, this is a little JavaScript that checks for changes in one part of the form in order to alter the display of another part of the form.

There's more...

Hiding the whole input

It looks a little untidy to have the disabled box showing on the form when it is not required. Let's change the script a little to hide and unhide the input instead of disabling and enabling it. To make this work, we need a way of recognizing the input together with its label. We could deal with both separately, but let's make our lives simpler.
In the Form Editor, open the Form HTML box and look near the end for the "other" input block:

<div class="form_item">
  <div class="form_element cf_textbox">
    <label class="cf_label" style="width: 150px;">please add details</label>
    <input class="cf_inputbox" maxlength="150" size="30" title="" id="other" name="other" type="text" />
  </div>
  <div class="cfclear">&nbsp;</div>
</div>

That <div class="form_element cf_textbox"> looks like just what we need, so let's add an ID attribute to make it visible to the JavaScript:

<div class="form_element cf_textbox" id="other_input">

Now we'll modify our script snippet to use this:

window.addEvent('domready', function() {
  $('hearabout').addEvent('change', function() {
    if ($('hearabout').value == 'Other' ) {
      $('other_input').setStyle('display', 'block');
    } else {
      $('other_input').setStyle('display', 'none');
    }
  });
  // initialise the display
  if ($('hearabout').value == 'Other' ) {
    $('other_input').setStyle('display', 'block');
  } else {
    $('other_input').setStyle('display', 'none');
  }
});

Apply or save the form and view it in your browser. Now the input is invisible (see the following screenshot labeled 1) until you select Other from the drop-down (see the following screenshot labeled 2). The disadvantage of this approach is that the form can appear to "jump around" as extra fields appear. You can overcome this with a little thought, for example by leaving an empty space.

See also

In some of the scripts here we are using shortcuts from the MooTools JavaScript framework. Version 1.1 of MooTools is installed with Joomla! 1.5 and is usually loaded by ChronoForms. You can find the documentation for MooTools v1.1 at http://docs111.mootools.net/

Version 1.1 is not the latest version of MooTools, and many of the more recent MooTools scripts will not run with the earlier version. Joomla! 1.6 is expected to use the latest release.
Getting Started with Drupal 6 Panels
Packt, 20 Aug 2010
(For more resources on Drupal, see here.)

Introduction

Drupal Panels are distinct pieces of rectangular content that together create a custom layout of the page, presenting the content as a structured web page. Panels is a freely-distributed, open source module developed for Drupal 6. With Panels, you can display various content in a customizable grid layout on one page. Each page created by Panels can include a unique structure and content. Using the drag-and-drop user interface, you select a design for the layout and position various kinds of content (or add custom content) within that layout.

Panels integrates with other Drupal modules like Views and CCK. Permissions, deciding which users can view which elements, are also integrated into Panels. You can even override system pages such as the display of keywords (taxonomy) and individual content pages (nodes). In the next section, we will see what Panels can actually do, as defined on drupal.org: http://drupal.org/project/panels.

Basically, Panels will help you to arrange a large amount of content on a single page. While Panels can be used to arrange a lot of content on a single page, it is equally useful for small amounts of related content and/or teasers.

Panels supports styles, which control how individual content panes, regions within a panel, and the entire panel will be rendered. While Panels ships with a few styles, styles can also be provided as plugins by modules, as well as by themes.

The user interface is nice for visually designing a layout, but a real HTML guru doesn't want the somewhat weighty HTML that this will create. Modules and themes can provide custom layouts that can fit a designer's exacting specifications, but still allow the site builders to place content wherever they like.

Panels includes a pluggable caching mechanism: a single cache type is included, the 'simple' cache, which is time-based.
Since most sites have very specific caching needs based upon their content and traffic patterns, this system was designed to let sites devise their own triggers for cache clearing and implement plugins that will work with Panels. A cache mechanism can be defined for each pane or region within the Panels page.

Simple caching is a time-based cache. This is a hard limit, and once cached, content will remain that way until the time limit expires. If "arguments" is selected, the content will be cached per individual argument to the entire display; if "contexts" is selected, the content will be cached per unique context in the pane or display; if "neither", there will be only one cache for the pane. Panels can also be cached as a whole, meaning the entire output of the Panels page can be cached, or individual content panes that are heavy, like large views, can be cached.

Panels can be integrated with the Drupal module Organic Groups, through the og_panels module, to allow individual groups to have their own customized layouts. Panels also integrates with Views to allow administrators to add any view as content. We will discuss module integration in the coming recipes.

Shown in the previous screenshot is one of the example sites that uses Panels 3 for its home page (http://concernfast.org). The home page is built using a custom Panels 3 layout with a couple of dedicated content types that are used to build nodes to drop into the various Panels areas. The case study can be found at: http://drupal.org/node/629860.

Panels arranges your site content into an easy navigational pattern, which can be clearly seen in the following screenshot. There are several terms often used within Panels that administrators should become familiar with, as we will be using them throughout the recipes. The common terms in Panels are:

Panels page: The page that will display your Panels. This could be the front page of a site, a news page, and so on.
These pages are given a path just like any other node.

Panels: A container for content. A panel can have several pieces of content within it, and can be styled.

Pane: A unit of content in a panel. This can be a node, a view, arbitrary HTML code, and so on. Panes can be shifted up and down within a panel and moved from one panel to another.

Layout: Provides a pre-defined collection of panels that you can select from. A layout might have two columns; a header, footer, and three columns in the middle; or even seven panels stacked like bricks.

Setting up Ctools and Panels

We will now set up Ctools, which is required for Panels. "Chaos tools" is a centralized library used by the most powerful modules of Drupal: Panels and Views. Most functions in Panels are inherited from the Chaos tools library.

Getting ready

Download the Panels module from the Drupal website: http://drupal.org/project/Panels

You will need Ctools as a dependency module, which can be downloaded from: http://drupal.org/project/ctools

How to do it...

Upload both the files, Ctools and Panels, into /sites/all/modules. It is always a best practice to keep the installed modules separate from the "core" (the files that install with Drupal) in the /sites/all/modules folder. This makes it easy to upgrade the modules at a later stage, when your site becomes complex and has many modules.

Go to the modules page in admin (Admin | Site Building | Modules) and enable Ctools, then enable Panels.

Go to permissions (Admin | User Management | Permissions) and give site builders permission to use Panels.

Enable the Page manager module in the Chaos tools suite. This module enables the page manager for Panels. To integrate views with Panels, enable the Views content panes module too. We will discuss more about views later on.

Enable Panels and set the permissions.
You will need to enable Panel nodes, the Panel module, and Mini panels too (as shown in the following screenshot), as we will use them in our advanced recipes. Go to administer by module in Site building | Modules; here you will find the Panels user interface.

There's more...

The Chaos tools suite includes the following tools, which form the base of the Panels module. You do not need to go into their details to use Panels, but it is good to know what the suite includes. This is the powerhouse that makes Panels the most efficient tool for designing complex layouts:

Plugins: tools to make it easy for modules to let other modules implement plugins from .inc files.
Exportables: tools to make it easier for modules to have objects that live in the database or live in code, such as 'default views'.
AJAX responder: tools to make it easier for the server to handle AJAX requests and tell the client what to do with them.
Form tools: tools to make it easier for forms to deal with AJAX.
Object caching: a tool to make it easier to edit an object across multiple page requests and cache the editing work.
Contexts: the notion of wrapping objects in a unified wrapper, and an API to create and accept these contexts as input.
Modal dialog: a tool to make it simple to put a form in a modal dialog.
Dependent: a simple form widget to make form items appear and disappear based upon the selections in another item.
Content: pluggable content types used as panes in Panels and other modules, like Dashboard.
Form wizard: an API to make multi-step forms much easier.
CSS tools: tools to cache and sanitize CSS easily, to make user-input CSS safe.

How it works...

Now we have our Panels UI ready to generate layouts. We will discuss each of these in the following recipes. The Panels dashboard will help you to generate layouts for Drupal with ease.
Sessions and Users in PHP 5 CMS
Packt, 17 Aug 2010
(For more resources on PHP, see here.)

The problem

Dealing with sessions can be confusing, and is also a source of security loopholes. So we want our CMS framework to provide basic mechanisms that are robust, and easy to use by more application-oriented software. To achieve these aims, we need to consider:

The need for sessions and how they work
The pitfalls that can introduce vulnerabilities
Efficiency and scalability considerations

Discussion and considerations

To see what is required for our session handling, we shall first review the need for sessions and consider how they work in a PHP environment. Then we shall consider the vulnerabilities that can arise through session handling. Web crawlers, whether gathering data for search engines or pursuing more nefarious activities, can place a heavy and unnecessary load on session handling, so we shall look at ways to avoid this load. Finally, the question of how best to store session data is studied.

Why sessions?

The need for continuity was mentioned when we first discussed users, but it is worth reviewing the requirement in a little more detail. If Tim Berners-Lee and his colleagues had known all the developments that would eventually occur in the internet world, maybe the Web would have been designed differently. In particular, the basic web transport protocol, HTTP, might not have treated each request in isolation. But that is hindsight, and the Web was originally designed to present information in a computer-independent way. Simple password schemes were sufficient to control access to specific pages. Nowadays, we need to cater for complex user management, or to handle things like shopping carts, and for these we need continuity. Many people have recognized this, and introduced the idea of sessions. The basic idea is that a session is a series of requests from an individual website visitor, and the session provides access to enduring information that is available throughout the session.
The shopping cart is an obvious example of information being retained across the requests that make up a session. PHP has its own implementation of sessions, and there is no point reinventing the wheel, so PHP sessions are the obvious tool for us to use to provide continuity.

How sessions work

There have been three main choices for handling continuity:

Adding extra information to the URI
Using cookies
Using hidden fields in the form sent to the browser

All of them can be used at times. Which of them is most suitable for handling sessions? PHP uses either of the first two alternatives. Web software often makes use of hidden variables, but they do not offer a neat way to provide an unobtrusive, general mechanism for maintaining continuity. In fact, whenever hidden variables are used, it is worth considering whether session data would be a better alternative. For reasons discussed in detail later, we shall consider only the use of cookies, and reject the URI alternative.

There was a time when there were lots of scary stories about cookies, and people were inclined to block them. While there will always be security issues associated with web browsing, the situation has changed, and the majority of sites now rely on cookies. It is generally considered acceptable for a site to demand the use of cookies for operations such as user login, or for shopping carts and purchase checkout.

The PHP cookie-based session mechanism can seem obscure, so it is worth explaining how it works. First we need to review the working of cookies. A cookie is simply a named piece of data, usually limited to around 4,000 bytes, which is stored by the browser in order to help the web server retain information about a user. More strictly, the connection is with the browser, not the user. Any cookie is tied to a specific website, and optionally to a particular part of the website, indicated by a path.
It also has a lifetime that can be specified explicitly as a duration; a zero duration means that the cookie will be kept for as long as the browser is kept open, and then discarded. The browser does nothing with cookies, except to save them and then return them to the server along with requests. Every cookie that relates to the particular website will be sent if either the cookie is for the site as a whole, or the optional path matches the path to which the request is being sent. So cookies are entirely the responsibility of the server, but the browser helps by storing and returning them. Note that, since cookies are only ever sent back to the site that originated them, there are constraints on access to information about other sites that were visited using the same browser.

In a PHP program, cookies can be written by calling the setcookie function, or implicitly through session handling. The name of the cookie is a string, and the value to be stored is also a string, although the serialize function can be used to turn more structured data into a string for storage as a cookie. Take care to keep cookies within the size limit. PHP makes the cookies that have been sent back by the browser available in the $_COOKIE superglobal, keyed by their names.

Apart from any cookies explicitly written by code, PHP may also write a session cookie. It will do so either as a result of calls to session handling functions, or because the system has been configured to automatically start or resume a session for each request. By default, session cookies do not use the option of setting an expiry time, but can be deleted when the browser is closed down. Commonly, browsers keep this type of cookie in memory, so that it is automatically lost on shutdown. Before looking at what PHP is doing with the session cookie, let's note that there is an important general consideration for writing cookies.
In the construction of messages between the server and the browser, cookies are part of the header. That means rules about headers must be obeyed. Headers must be sent before anything else, and once anything else has been sent, it is not permitted to send more headers. So, in the case of server to browser communication, the moment any part of the XHTML has been written by the PHP program, it is too late to send a header, and therefore too late to write a cookie. For this reason, a PHP session is best started early in the processing.

The only purpose PHP has in writing a session cookie is to allocate a unique key to the session, and retrieve it again on the next request. So the session cookie is given an identifying name, and its value is the session's unique key. The session key is usually called the session ID, and is used by PHP to pick out the correct set of persistent values that belong to the session. By default, the session name is PHPSESSID, but it can, in most circumstances, be changed by calling the PHP function session_name prior to starting the session.

Starting, or more often restarting, a session is done by calling session_start, after which the session ID can be obtained by calling session_id. In a simple situation, you do not need the session ID, as PHP places any existing session data in another superglobal, $_SESSION. In fact, we will have a use for the session ID, as you will soon see. The $_SESSION superglobal is available once session_start has been called, and the PHP program can store whatever data it chooses in it. It is an array, initially empty, and naturally the subscripts need to be chosen carefully in a complex system to avoid any clashes. The neat part of the PHP session is that, provided it is restarted each time with session_start, the $_SESSION superglobal will retain any values assigned during the handling of previous requests. The data is thus preserved until the program decides to remove it.
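The round trip described above can be sketched in a few lines; the cart data here is purely illustrative and not part of the CMS framework:

```php
<?php
// Must be called before any output is sent, because the session
// cookie travels in the HTTP headers.
session_start();

// $_SESSION now holds whatever earlier requests in this session stored.
if (!isset($_SESSION['cart'])) {
    $_SESSION['cart'] = array();   // first request: create an empty cart
}

// Anything assigned here is serialized when the request ends (or when
// session_write_close() is called) and restored by the next session_start().
$_SESSION['cart'][] = 'product-42';
$_SESSION['visits'] = isset($_SESSION['visits']) ? $_SESSION['visits'] + 1 : 1;
```

On a second request carrying the same session cookie, the cart built up earlier reappears in $_SESSION untouched, and stays there until the code removes it, for example with unset($_SESSION['cart']).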
The only exception to this would be if the session expired, but in a default configuration, sessions do not expire automatically. Later in this article, we will look at ways to deliberately kill sessions after a determinate period of inactivity. As it is only the session ID that is stored in the cookie, rules about the timing of output do not apply to $_SESSION, which can be read or written at any time after session_start has been called.

PHP stores the contents of $_SESSION at the end of processing, or on request using the PHP function session_write_close. By default, PHP puts the data in a temporary file whose name includes the session ID. Whenever the session data is stored, PHP retrieves it again at the next session_start. Session data does not have to be stored in temporary files, and PHP permits the program to provide its own handling routines. We will look at a scheme for storing the session data in a database later in the article.

Avoiding session vulnerabilities

So far, the option to pass the session ID as part of the URI instead of as a cookie has not been considered. Looking at security will show why. The main security issue with sessions is that a cracker may find out the session ID for a user, and then hijack that user's session. Session handling should do its best to guard against that happening. PHP can pass the session ID as part of the URI. This makes it especially vulnerable to disclosure, since URIs can be stored in all kinds of places that may not be as inaccessible as we would like. As a result, secure systems avoid the URI option. It is also undesirable to find links appearing in search engines that include a session ID as part of the URI. These two points are enough to rule out the URI option for passing the session ID. It can be prevented by the following PHP calls:

ini_set('session.use_cookies', 1);
ini_set('session.use_only_cookies', 1);

These calls force PHP to use cookies for session handling, an option that is now considered acceptable.
The extent to which the site will function without cookies depends on what a visitor can do with no continuity of data: user login will not stick, and anything like a shopping cart will not be remembered.

It is best to avoid the default name of PHPSESSID for the session cookie, since that is something a cracker could look for in the network traffic. One step that can be taken is to create a session name that is the MD5 hash of various items of internal information. This makes it harder, though not impossible, to sniff messages to find out a session ID, since it is no longer obvious what to seek: the well-known name of PHPSESSID is not used. It is important for the session ID to be unpredictable, but we rely on PHP to achieve that. It is also desirable that the ID be long, since otherwise it might be possible for an attacker to try out all possible values within the life of a session. PHP uses 32 hexadecimal digits, which is a reasonable defense for most purposes.

The other main vulnerability, apart from session hijacking, is called session fixation. This is typically implemented by a cracker setting up a link that takes the user to your site with a session already established, and known to the cracker. An important security step employed by robust systems is to change the session ID at significant points. So, although a session may be created as soon as a visitor arrives at the site, the session ID is changed at login. This technique is used by Amazon, among others, so that people can browse for items and build up a shopping cart, but on purchase a fresh login is required. Doing this reduces the available window for a cracker to obtain, and use, the session ID. It also blocks session fixation, since the original session is abandoned at critical points. It is also advisable to change the ID on logout, so that although the session is continued, its data is lost and the ID is not the same.
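Pulling these measures together, a hardened session bootstrap might look like the following sketch. The inputs to the hashed session name and the two function names are illustrative assumptions, not part of the framework's actual API:

```php
<?php
// Cookies only: never accept or emit a session ID in the URI.
ini_set('session.use_cookies', 1);
ini_set('session.use_only_cookies', 1);

// Replace the well-known PHPSESSID cookie name with an MD5 hash of
// internal information, so sniffed traffic is harder to interpret.
// The hash inputs here are placeholders; a real site would choose its own.
$host = isset($_SERVER['HTTP_HOST']) ? $_SERVER['HTTP_HOST'] : 'cli';
session_name('S' . md5('some-internal-secret' . $host));
session_start();

// At significant points, issue a fresh ID and discard the old one;
// this narrows the hijacking window and defeats session fixation.
function handle_login() {
    session_regenerate_id(true);   // true discards the old session data file
}

function handle_logout() {
    $_SESSION = array();           // drop all session data
    session_regenerate_id(true);   // continue with a fresh, anonymous session
}
```

The 'S' prefix on the session name guards against the unlikely case of an all-digit hash, since PHP rejects purely numeric session names.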
It is highly desirable to provide logout as an option, but this needs to be supplemented by time limits on inactive sessions. A significant part of session handling is devoted to keeping enough information to be able to expire sessions that have not been used for some time. It also makes sense to revoke a session that seems to have been used for any suspicious activity. Ideally, the session ID is never transmitted unencrypted, but achieving this requires the use of SSL, and is not always practical. It should certainly be considered for high security applications.

Search engine bots

One aspect of website building is, perhaps unexpectedly, the importance of handling the bots that crawl the web. They are often gathering data for search engines, although some have more dubious goals, such as trawling for e-mail addresses to add to spam lists. The load they place on a site can be substantial. Sometimes, search engines account for half or more of the bandwidth being used by a site, which certainly seems excessive. If no action is taken, these bots can consume significant resources, often for very little advantage to the site owner. They can also distort information about the site, such as when the number of current visitors is displayed but includes bots in the counts.

Matters are made worse by the fact that bots will normally fail to handle cookies. After all, they are not browsers and have no need to implement support for cookies. This means that every request by a bot is separate from every other, as our standard mechanism for linking requests together will not work. If the system starts a new session, it will have to do this for every new request from a bot. There will never be a logout from the bot to terminate the session, so each bot-related session will last for the time set for automatic expiry. Clearly it is inadvisable to bar bots, since most sites are anxious to gain search engine exposure.
But it is possible to build session handling so as to limit the workload created by visitors who do not permit cookies, which will mostly be bots. When we move into implementation techniques, the mechanisms will be demonstrated.

Session data and scalability

We could simply let PHP take care of session data. It does that by writing a serialized version of any data placed into $_SESSION into a file in a temporary directory. Each session has its own file. But PHP also allows us to implement our own session data handling mechanism. There are a couple of good reasons for using that facility, and storing the information in the database. One is that we can analyze and manage the data better, and especially limit the overhead of dealing with search engine bots. The other is that by storing session data in the database, we make it feasible for the site to be run across multiple servers. There may well be other issues before that can be achieved, but providing session continuity is an essential requirement if load sharing is to be fully effective. Storing session data in a database is a reliable solution to this issue.

Arguments against storing session data in a database include questions about the overhead involved, constraints on database performance, or the possibility of a single point of failure. While these are real issues, they can certainly be mitigated. Most database engines, including MySQL, have many options for building scalable and robust systems. If necessary, the database can be spread across multiple computers linked by a high speed network, although this should never be done unless it is really needed. Design of such a system is outside the scope of this article, but the key point is that the arguments against storing session data in a database are not particularly strong.
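PHP's hook for replacing the file-based storage is session_set_save_handler(). The sketch below keeps sessions in an assumed database table (sessions: id, data, updated), which is illustrative only; an in-memory SQLite database stands in for the site's real MySQL connection, and REPLACE INTO happens to be accepted by both engines:

```php
<?php
class DbSessionStore
{
    private $db;

    public function __construct(PDO $db) { $this->db = $db; }

    public function open($savePath, $sessionName) { return true; }
    public function close() { return true; }

    public function read($id)
    {
        $stmt = $this->db->prepare('SELECT data FROM sessions WHERE id = ?');
        $stmt->execute(array($id));
        $row = $stmt->fetch(PDO::FETCH_NUM);
        return $row ? $row[0] : '';    // must return a string, even when empty
    }

    public function write($id, $data)
    {
        $stmt = $this->db->prepare(
            'REPLACE INTO sessions (id, data, updated) VALUES (?, ?, ?)');
        return $stmt->execute(array($id, $data, time()));
    }

    public function destroy($id)
    {
        $stmt = $this->db->prepare('DELETE FROM sessions WHERE id = ?');
        return $stmt->execute(array($id));
    }

    public function gc($maxLifetime)
    {
        // expire sessions that have been inactive for too long
        $stmt = $this->db->prepare('DELETE FROM sessions WHERE updated < ?');
        return $stmt->execute(array(time() - $maxLifetime));
    }
}

// In-memory SQLite stands in for the site database in this sketch.
$db = new PDO('sqlite::memory:');
$db->exec('CREATE TABLE sessions (id TEXT PRIMARY KEY, data TEXT, updated INTEGER)');

$store = new DbSessionStore($db);
session_set_save_handler(
    array($store, 'open'), array($store, 'close'), array($store, 'read'),
    array($store, 'write'), array($store, 'destroy'), array($store, 'gc'));

session_start();
$_SESSION['demo'] = 'stored in the database';

// Close explicitly: at shutdown, $store may already have been destroyed,
// so a database-backed handler should write the session out itself.
session_write_close();
```

Note that the gc() callback gives us exactly the place to implement the deliberate expiry of inactive and bot-created sessions discussed above.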

Implementing a Microsoft .NET Application using the Alfresco Web Services
Packt, 17 Aug 2010
(For more resources on Alfresco, see here.)

For the first step, you will see how to set up the .NET project in the development environment. Then, when we take a look at the sample code, we will learn how to perform the following operations from your .NET application:

How to authenticate users
How to search contents
How to manipulate contents
How to manage child associations

Setting up the project

In order to execute the samples included with this article, you need to download and install the following software components in your Windows operating system:

Microsoft .NET Framework 3.5
Web Services Enhancements (WSE) 3.0 for Microsoft .NET
SharpDevelop 3.2 IDE

The Microsoft .NET Framework 3.5 is the main framework used to compile the application, and you can download it using the following URL: http://www.microsoft.com/downloads/details.aspx?familyid=333325fd-ae52-4e35-b531-508d977d32a6&displaylang=en.

Before importing the code into the development environment, you need to download and install the Web Services Enhancements (WSE) 3.0, which you can find at this address: http://www.microsoft.com/downloads/details.aspx?FamilyID=018a09fd-3a74-43c5-8ec1-8d789091255d.

You can find more information about the Microsoft .NET framework on the official site at the following URL: http://www.microsoft.com/net/. From this page, you can access the latest news and the Developer Center, where you can find the official forum and the developer community.

SharpDevelop 3.2 IDE is an open source IDE for C# and VB.NET, and you can download it using the following URL: http://www.icsharpcode.net/OpenSource/SD/Download/#SharpDevelop3x.
Once you have installed all the mentioned software components, you can import the sample project into SharpDevelop IDE in the following way:

Click on File | Open | Project/Solution.
Browse and select this file in the root folder of the samples: AwsSamples.sln

Now you should see a similar project tree in your development environment.

More information about SharpDevelop IDE can be found on the official site at the following address: http://www.icsharpcode.net/opensource/sd/. From this page, you can download different versions of the product; which SharpDevelop IDE version you choose depends on the .NET version you would like to use. You can also visit the official forum to interact with the community of developers.

Also, notice that all the source code included with this article was implemented by extending an existing open source project named dotnet. The dotnet project is available in the Alfresco Forge community, and it is downloadable from the following address: http://forge.alfresco.com/projects/dotnet/.

Testing the .NET sample client

Once you have set up the .NET solution in SharpDevelop, as explained in the previous section, you can execute all the tests to verify that the client is working correctly. We have provided a batch file named build.bat to allow you to build and run all the integration tests. You can find this batch file in the root folder of the sample code.

Notice that you need to use a different version of msbuild for each different version of the .NET framework. If you want to compile using the .NET Framework 3.5, you need to set the following path in your environment:

set PATH=%PATH%;%WinDir%\Microsoft.NET\Framework\v3.5

Otherwise, you have to set .NET Framework 2.0 using the following path:

set PATH=%PATH%;%WinDir%\Microsoft.NET\Framework\v2.0.50727

We are going to assume that Alfresco is running correctly and that it is listening on host localhost and on port 8080.
Once executed, the build.bat program should start compiling and executing all the integration tests included in this article. After a few seconds have elapsed, you should see the following output in the command line:

.........
****************** Running tests ******************
NUnit version 2.5.5.10112
Copyright (C) 2002-2009 Charlie Poole.
Copyright (C) 2002-2004 James W. Newkirk, Michael C. Two, Alexei A. Vorontsov.
Copyright (C) 2000-2002 Philip Craig.
All Rights Reserved.

Runtime Environment -
  OS Version: Microsoft Windows NT 5.1.2600 Service Pack 2
  CLR Version: 2.0.50727.3053 ( Net 2.0 )
ProcessModel: Default  DomainUsage: Single
Execution Runtime: net-2.0
............
Tests run: 12, Errors: 0, Failures: 0, Inconclusive: 0, Time: 14.170376 seconds
  Not run: 0, Invalid: 0, Ignored: 0, Skipped: 0
********* Done *********

As you can see from the project tree, you have some of the following packages:

Search
Crud
Association

The Search package shows you how to perform queries against the repository. The Crud package contains samples related to all the CRUD operations that show you how to perform basic operations; namely, how to create/get/update/remove nodes in the repository. The Association package shows you how to create and remove association instances among nodes.

Searching the repository

Once you have authenticated a user, you can start to execute queries against the repository. In the following sample code, we will see how to perform a query using the RepositoryService of Alfresco:

RepositoryService repositoryService = WebServiceFactory.getRepositoryService();

Then we need to create a store where we would like to search contents:

Store spacesStore = new Store(StoreEnum.workspace, "SpacesStore");

Now we need to create a Lucene query.
In this sample, we want to search the Company Home space, and this means that we have to execute the following query:

String luceneQuery = "PATH:\"/app:company_home\"";

In the next step, we need to use the query method available from the RepositoryService. In this way, we can execute the Lucene query and we can get all the results from the repository:

Query query = new Query(Constants.QUERY_LANG_LUCENE, luceneQuery);
QueryResult queryResult = repositoryService.query(spacesStore, query, false);

You can retrieve all the results from the queryResult object, iterating the ResultSetRow object in the following way:

ResultSet resultSet = queryResult.resultSet;
ResultSetRow[] results = resultSet.rows;

// your custom list
IList<CustomResultVO> customResultList = new List<CustomResultVO>();

// retrieve results from the resultSet
foreach (ResultSetRow resultRow in results)
{
    ResultSetRowNode nodeResult = resultRow.node;

    // create your custom value object
    CustomResultVO customResultVo = new CustomResultVO();
    customResultVo.Id = nodeResult.id;
    customResultVo.Type = nodeResult.type;

    // retrieve properties from the current node
    foreach (NamedValue namedValue in resultRow.columns)
    {
        if (Constants.PROP_NAME.Equals(namedValue.name))
        {
            customResultVo.Name = namedValue.value;
        }
        else if (Constants.PROP_DESCRIPTION.Equals(namedValue.name))
        {
            customResultVo.Description = namedValue.value;
        }
    }

    // add the current result to your custom list
    customResultList.Add(customResultVo);
}

In the last sample, we iterated all the results and we created a new custom list with our custom value object CustomResultVO. More information about how to build Lucene queries can be found at this URL: http://wiki.alfresco.com/wiki/Search.

Performing operations

We can perform various operations on the repository. They are documented as follows:

Authentication

For each operation, you need to authenticate users before performing all the required operations on nodes.
The class that provides the authentication feature is named AuthenticationUtils, and it allows you to invoke the startSession and endSession methods:

String username = "johndoe";
String password = "secret";
AuthenticationUtils.startSession(username, password);
try
{
}
finally
{
    AuthenticationUtils.endSession();
}

Remember that the startSession method requires the user credentials: the username as the first argument and the password as the second. Notice that the default endpoint address of the Alfresco instance is as follows:

http://localhost:8080/alfresco

If you need to change the endpoint address, you can use the WebServiceFactory class, invoking the setEndpointAddress method to set the new location of the Alfresco repository.
Database Considerations for PHP 5 CMS

Packt
17 Aug 2010
10 min read
(For more resources on PHP, see here.)

The problem

Building methods that:

Handle common patterns of data manipulation securely and efficiently
Help ease the database changes needed as data requirements evolve
Provide powerful data objects at low cost

Discussion and considerations

Relational databases provide an effective and readily available means to store data. Once established, they normally behave consistently and reliably, making them easier to use than file systems. And clearly a database can do much more than a simple file system!

Efficiency can quickly become an issue, both in relation to how often requests are made to a database, and how long queries take. One way to offset the cost of database queries is to use a cache at some stage in the processing. Whatever the framework does, a major factor will always be the care developers of extensions take over the design of table structures and software; the construction of SQL can also make a big difference. Examples included here have been assiduously optimized so far as the author is capable, although suggestions for further improvement are always welcome!

Web applications are typically much less mature than more traditional data processing systems. This stems from factors such as speed of development and deployment. Also, techniques that are effective for programs that run for a relatively long time do not make sense for the brief processing that is applied to a single website request. For example, although PHP allows persistent database connections, thereby reducing the cost of making a fresh connection for each request, it is generally considered unwise to use this option because it is liable to create large numbers of dormant processes, and slow down database operations excessively. Likewise, prepared statements have advantages for performance and possibly security but are more laborious to implement. So, the advantages are diluted in a situation where a statement cannot be used more than once.
Perhaps, even more than performance, security is an issue for web development, and there are well known routes for attacking databases. They need to be carefully blocked. The primary goal of a framework is to make further development easy. Writing web software frequently involves the same patterns of database access, and a framework can help a lot by implementing methods at a higher level than the basic PHP database access functions.

In an ideal world, an object-oriented system is developed entirely on the basis of OO principles. But if no attention is paid to how the objects will be stored, problems arise. An object database has obvious appeal, but for a variety of reasons, such databases are not widely used. Web applications have to be pragmatic, and so the aim pursued here is the creation of database designs that occasionally ignore strict relational principles, and objects that are sometimes simpler than idealized designs might suggest. The benefit of making these compromises is that it becomes practical to achieve a useful correspondence between database rows and PHP objects.

It is possible that PHP Data Objects (PDO) will become very important in this area, but it is a relatively new development. Use of PDO is likely to pick up gradually as it becomes more commonly found in typical web hosting, and as developers get to understand what it can offer. For the time being, the safest approach seems to be for the framework to provide classes on which effective data objects can be built. A great deal can be achieved using this technique.

Database dependency

Lest this section create too much disappointment, let me say at the outset that this article does not provide any help with achieving database independence. The best that can be done here is to explain why not, and what can be done to limit dependency. Nowadays, the most popular kind of database employs the relational model.
All relational database systems implement the same theoretical principles, and even use more or less the same structured query language. People use products from different vendors for an immense variety of reasons, some better than others. For web development, MySQL is very widely available, although PostgreSQL is another highly regarded database system that is available without cost. There are a number of well-known proprietary systems, and existing databases often contain valuable information, which motivates attempts to link them into CMS implementations. In this situation, there are frequent requests for web software to become database independent. There are, sadly, practical obstacles towards achieving this.

It is conceptually simple to provide the mechanics of access to a variety of different database systems, although the work involved is laborious. The result can be cumbersome, too. But the biggest problem is that SQL statements are inclined to vary across different systems. It is easy in theory to assert that only the common core of SQL that works on all database systems should be used. The serious obstacle here is that very few developers are knowledgeable about what comprises the common core. ANSI SQL might be thought to provide a system neutral language, but then not all of ANSI SQL is implemented by every system.

So, the fact is that developers become expert in one particular database system, or at best a handful. Skilled developers are conscious of the standardization issue, and where there is a choice, they will prefer to write according to standards.
For example, it is better to write:

SELECT username, userid, count(userid) AS number
FROM aliro_session AS s
INNER JOIN aliro_session_data AS d ON s.session_id = d.session_id
WHERE isadmin = 0
GROUP BY userid

rather than:

SELECT username, userid, count(userid) AS number
FROM aliro_session AS s, aliro_session_data AS d
WHERE s.session_id = d.session_id AND isadmin = 0
GROUP BY userid

This is because it makes the nature of the query clearer, and also because it is less vulnerable to detailed syntax variations across database systems.

Use of extensions that are only available in some database systems is a major problem for query standardization. Again, it is easy while theorizing to deplore the use of non-standard extensions. In practice, some of them are so tempting that few developers resist them. An older MySQL extension was the REPLACE command, which would either insert or update data depending on whether a matching key was already present in the database. This is now discouraged on the grounds that it achieved its result by deleting any matching data before doing an insertion. This can have adverse effects on linked foreign keys, but the newer option of the INSERT ... ON DUPLICATE KEY UPDATE construction provides a very neat, efficient way to handle the case where data needs to go into the database allowing for what is already there. It is more efficient in every way than trying to read before choosing between INSERT and UPDATE, and it also avoids the issue of needing a transaction.

Similarly, there is no standard way to obtain a slice of a result set, for example starting with the eleventh item, and comprising the next ten items. Yet this is exactly the operation that is needed to efficiently populate the second page of a list of items, ten per page. The MySQL LIMIT extension, which accepts a start offset as well as a row count, is ideal for this purpose. Because of these practical issues, independence of database systems remains a desirable goal that is rarely fully achieved.
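The insert-or-update pattern just described can be sketched as a small helper that builds the SQL string. The table and column names below are hypothetical, and escaping of values is assumed to be handled by the caller's database layer; this is an illustration of the technique, not the framework's actual API.

```php
<?php
// Build an "insert or update" statement using MySQL's
// INSERT ... ON DUPLICATE KEY UPDATE extension.
// $insert maps columns to the values for the VALUES clause;
// $update maps columns to the expressions applied when the key
// already exists, so a counter can be bumped in place.
function buildUpsertSql($table, array $insert, array $update)
{
    $columns = implode(', ', array_keys($insert));
    $values = implode(', ', array_values($insert));
    $updates = array();
    foreach ($update as $column => $expression) {
        $updates[] = "$column = $expression";
    }
    return "INSERT INTO $table ($columns) VALUES ($values)"
        . " ON DUPLICATE KEY UPDATE " . implode(', ', $updates);
}

// Example: one statement either creates the counter row or bumps it,
// with no read-before-write and no transaction needed.
$sql = buildUpsertSql('hit_counter',
    array('page_id' => '42', 'hits' => '1'),
    array('hits' => 'hits + 1'));
echo $sql, "\n";
```

With a helper of this kind, the read-then-choose-between-INSERT-and-UPDATE round trip described above collapses into a single query execution.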
The most practical policy seems to be to avoid dependencies where this is possible at reasonable cost.

The role of the database

We already noted that a database can be thought of as uncontrolled global data, assuming the database connection is generally available. So there should be policies on database access to prevent this becoming a liability. One policy adopted by Aliro is to use two distinct databases. The "core" database is reserved for tables that are needed by the basic framework of the CMS. Other tables, including those created by extensions to the CMS framework, use the "general" database.

Although it is difficult to enforce restrictions, one policy that is immediately attractive is that the core database should never be accessed by extensions. How data is stored is an implementation matter for the various classes that make up the framework, and a selection of public methods should make up the public interface. Confining access to those public methods that constitute the API for the framework leaves open the possibility of development of the internal mechanisms with little or no change to the API. If the framework does not provide the information needed by extensions, then its API needs further development. The solution should not be direct access to the core database.

Much the same applies to the general database, except that it may contain tables that are intended to be part of an API. By and large, extensions should restrict their database operations to their own tables, and provide object methods to implement interfaces across extensions. This is especially so for write operations, but should usually apply to all database operations.

Level of database abstraction

There have been some clues earlier in this article, but it is worth squarely addressing the question of how far the CMS database classes should go in insulating other classes from the database.
All of the discussions here are based on the idea that currently the best available style of development is object oriented. But we have already decided that using a true object database is not usually a practical option for web development. The next option to consider is building a layer to provide an object-relational transformation, so that outside of the database classes, nobody needs to deal with purely relational concepts or with SQL. An example of a framework that does this is Propel, which can be found at http://propel.phpdb.org/trac/.

While developments of this kind are interesting and attractive in principle, I am not convinced that they provide an acceptable level of performance and flexibility for current CMS developments. There can be severe overheads on object-relational operations, and manual intervention is likely to be necessary if high performance is a goal. For that reason, it seems that for some while yet, CMS developments will be based on more direct use of a relational database.

Another complicating factor is the limitations of PHP in respect of static methods, which are obliged to operate within the environment of the class in which they are declared, irrespective of the class that was invoked in the call. This constraint is lifted in PHP 5.3, but at the time of writing, reliance on PHP 5.3 would be premature, as it is software that has not yet found its way into most stable software distributions. With more flexibility in the use of static methods and properties, it would be possible to create a better framework of database-related properties.
Given what is currently practical, and given experience of what is actually useful in the development of applications to run within a CMS framework, the realistic goals are as follows:

To create a database object that connects, possibly through a choice of different connectors, to a particular database and provides the ability to run SQL queries
To enable the creation of objects that correspond to database rows and have the ability to load themselves with data or to store themselves in the database

Some operations, such as the update of a single row, are best achieved through the use of a database row object. Others, such as deletion, are often applied to a number of rows, chosen from a list by the user, and are best effected through a SQL query.

You can obtain powerful code for achieving the automatic creation of HTML by downloading the full Aliro project. Unfortunately, experience in use has been disappointing. Often, so much customization of the automated code is required that the gains are nullified, and the automation becomes just an overhead. This topic is therefore given little emphasis.
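A database row object of the kind described in the second goal can be sketched as follows. The class name, the bind and fields methods, and the table naming convention are illustrative assumptions rather than the framework's real API; actual persistence would go through the framework's database object.

```php
<?php
// Minimal sketch of a database row object: it binds itself from an
// associative row (as returned by the database layer) and can emit
// its fields again for storage.
#[\AllowDynamicProperties]
class DatabaseRow
{
    protected $tableName;

    public function __construct($tableName)
    {
        $this->tableName = $tableName;
    }

    // Populate public properties from a fetched row
    public function bind(array $row)
    {
        foreach ($row as $field => $value) {
            $this->$field = $value;
        }
    }

    // Gather the data properties back into field => value pairs,
    // ready to be turned into an INSERT or UPDATE by a database object
    public function fields()
    {
        $fields = get_object_vars($this);
        unset($fields['tableName']);
        return $fields;
    }
}

$user = new DatabaseRow('aliro_users');
$user->bind(array('id' => 7, 'username' => 'admin'));
echo $user->username, "\n"; // prints "admin"
```

The appeal of this compromise is exactly the correspondence discussed above: one object per row, cheap to build, with the SQL kept inside the database classes.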
NetBeans Platform 6.9: Advanced Aspects of Window System

Packt
17 Aug 2010
5 min read
(For more resources on NetBeans, see here.)

Creating custom modes

You can get quite far with the standard modes provided by the NetBeans Platform. Still, sometimes you may need to provide a custom mode, to provide a new position for the TopComponents within the application. A custom mode is created declaratively in XML files, rather than programmatically in Java code. In the following example, you create two new modes that are positioned side by side in the lower part of the application using a specific location relative to each other.

Create a new module named CustomModes, with Code Name Base com.netbeansrcp.custommodes, within the existing WindowSystemExamples application.
Right-click the module project and choose New | Other to open the New File dialog. Then choose Other | Empty File, as shown in the following screenshot:
Type mode1.wsmode as the new filename and file extension, as shown in the following screenshot. Click Finish.
Define the content of the new mode1.wsmode as follows:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE mode PUBLIC
          "-//NetBeans//DTD Mode Properties 2.3//EN"
          "http://www.netbeans.org/dtds/mode-properties2_3.dtd">
<mode version="2.3">
    <name unique="mode1" />
    <kind type="view" />
    <state type="joined" />
    <constraints>
        <path orientation="vertical" number="20" weight="0.2"/>
        <path orientation="horizontal" number="20" weight="0.5"/>
    </constraints>
</mode>

Create another file to define the second mode and name it mode2.wsmode. Add this content to the new file:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE mode PUBLIC
          "-//NetBeans//DTD Mode Properties 2.3//EN"
          "http://www.netbeans.org/dtds/mode-properties2_3.dtd">
<mode version="2.3">
    <name unique="mode2" />
    <kind type="view" />
    <state type="joined" />
    <constraints>
        <path orientation="vertical" number="20" weight="0.2"/>
        <path orientation="horizontal" number="40" weight="0.5"/>
    </constraints>
</mode>

Via the two wsmode files described above, you have defined two custom modes.
The first mode has the unique name mode1, with the second named mode2. Both are created for normal TopComponents (view instead of editor) that are integrated into the main window, rather than being undocked by default (joined instead of separated). The constraints elements in the files are comparable to GridBagLayout, with a relative horizontal and vertical position, as well as a relative horizontal and vertical weight. You place mode1 in position 20/20 with a weighting of 0.5/0.2, while mode2 is placed in position 20/40 with the same weighting. If all the other defined modes have TopComponents opened within them, the TopComponents in the two new modes should lie side by side, right above the status bar, taking up 20% of the available vertical space, with the horizontal space shared between them.

Let us now create two new TopComponents and register them in the layer.xml file so that they will be displayed in your new modes. Do this by using the New Window wizard twice in the CustomModes module, first creating a window called Class Name Prefix Red and then a window with Class Name Prefix Blue.

What should I set the window position to?
In the wizard, in both cases, it does not matter what you set to be the window position, as you are going to change that setting manually afterwards.

Let both of them open automatically when the application starts. In the Design mode of both TopComponents, add a JPanel to each of the TopComponents. Change the background property of the panel in the RedTopComponent to red and in the BlueTopComponent to blue.
Edit the layer.xml of the CustomModes module, registering the two .wsmode files and ensuring that the two new TopComponents open in the new modes:

<folder name="Windows2">
    <folder name="Components">
        <file name="BlueTopComponent.settings" url="BlueTopComponentSettings.xml"/>
        <file name="RedTopComponent.settings" url="RedTopComponentSettings.xml"/>
    </folder>
    <folder name="Modes">
        <file name="mode1.wsmode" url="mode1.wsmode"/>
        <file name="mode2.wsmode" url="mode2.wsmode"/>
        <folder name="mode1">
            <file name="RedTopComponent.wstcref" url="RedTopComponentWstcref.xml"/>
        </folder>
        <folder name="mode2">
            <file name="BlueTopComponent.wstcref" url="BlueTopComponentWstcref.xml"/>
        </folder>
    </folder>
</folder>

As before, perform a Clean and Build on the application project node and then start the application again. It should look as shown in the following screenshot:

In summary, you defined two new modes in XML files and registered them in the module's layer.xml file. To confirm that the modes work correctly, you use the layer.xml file to register two new TopComponents so that they open by default into the new modes. As a result, you now know how to extend the default layout of a NetBeans Platform application with new modes.

Error Handling in PHP 5 CMS

Packt
17 Aug 2010
10 min read
(For more resources on PHP, see here.)

The problem

Errors will happen whether we like it or not. Ideally the framework can help in their discovery, recording, and handling by:

Trapping different kinds of errors
Making a record of errors with sufficient detail to aid analysis
Supporting a structure that mitigates the effect of errors

Discussion

There are three main kinds of errors that can arise. Many possible situations can crop up within PHP code that count as errors, such as an attempt to use a method on a variable that turns out not to be an object, or is an object but does not implement the specified method. The database will sometimes report errors, such as an attempt to retrieve information from a non-existent table, or to ask for a field that has not been defined for a table. And the logic of applications can often lead to situations that can only be described as errors. What resources do we have to handle these error situations?

PHP error handling

If nothing else is done, PHP has its own error handler. But developers are free to build their own handlers. So that is the first item on our to do list. Consistently with our generally object oriented approach, the natural thing to do is to build an error recording class, and then to tell PHP that one of its methods is to be called whenever PHP detects an error. Once that is done, the error handler must deal with whatever PHP passes, as it has taken over full responsibility for error handling.

It has been a common practice to suppress the lowest levels of PHP error such as notices and warnings, but this is not really a good idea. Even these relatively unimportant messages can reveal more serious problems. It is not difficult to write code to avoid them, so that if a warning or notice does arise, it will indicate something unexpected and therefore worth investigation. For example, the PHP foreach statement expects to work on something iterable and will generate a warning if it is given, say, a null value.
But this is easily avoided by making sure that methods which return arrays always return an array, even if it is an array of zero items, rather than a null value. Failing that, the foreach can be protected by a preceding test. So it is safest to assume that a low level error may be a symptom of a bigger problem, and have our error handler record every error that is passed to it. The database is the obvious place to put the error, and the handler receives enough information to make it possible to save only the latest occurrence of the same error, thus avoiding a bloated table of many more or less identical errors.

The other important mechanism offered by PHP is new to version 5 and is the try, catch, and throw construct. A section of code can be put within a try and followed by one or more catch specifications that define what is to be done if a particular kind of problem arises. The problems are triggered by using throw. This is a valuable mechanism for errors that need to break the flow of program execution, and is particularly helpful for dealing with database errors. It also has the advantage that the try sections can be nested, so if a large area of code, such as an entire component, is covered by a try, it is still possible to write a try of narrower scope within that code.

In general, it is better to be cautious about giving information about errors to users. For one thing, ordinary users are simply irritated by technically oriented error messages that mean nothing to them. Equally important is the issue of cracking, and the need to avoid displaying any weaknesses too clearly. It is bad enough that an error has occurred, without giving away details of what is going wrong. So a design assumption for error handling should be that the detail of errors is recorded for later analysis, but that only a very simple indication of the presence of an error is given to the user with a message that it has been noted for rectification.
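The try, catch, and throw construct mentioned above can be sketched with a custom exception of the kind a database layer might throw. The DatabaseException class name and the runQuery stub below are illustrative assumptions, not the framework's actual API.

```php
<?php
// Sketch of a database error exception: the database layer throws,
// and an outer try in the framework catches, so application code
// need not check every query.
class DatabaseException extends Exception
{
    public $sql;

    public function __construct($message, $sql)
    {
        parent::__construct($message);
        $this->sql = $sql; // keep the failing query for the log
    }
}

function runQuery($sql)
{
    // A real implementation would call the database here; this stub
    // simply simulates a failure for illustration.
    throw new DatabaseException('Table does not exist', $sql);
}

try {
    runQuery('SELECT * FROM no_such_table');
    $outcome = 'ok';
} catch (DatabaseException $e) {
    // The framework would record $e->getMessage() and $e->sql here,
    // and show the user only a simple notice
    $outcome = 'logged: ' . $e->sql;
}
echo $outcome, "\n";
```

Because try sections nest, an application that wants finer control can wrap its own try around individual queries while the framework's outer catch still handles everything else.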
Database errors

Errors in database operations are a particular problem for developers. Within the actual database handling code, it would be negligent to ignore the error indications that are available through the PHP interfaces to database systems. Yet within applications, it is hard to know what to do with such errors. SQL is very flexible, and a developer has no reason to expect any errors, so in the nature of things, any error that does arise is unexpected, and therefore difficult to handle. Furthermore, if there have to be several lines of error handling code every time the database is accessed, then the overhead in code size and loss of clarity is considerable.

The best solution therefore seems to be to utilize the PHP try, catch, and throw structure. A special database error exception can be created by writing a suitable class, and the database handling code will then deal with an error situation by "throwing" a new error with an exception of that class. The CMS framework can have a default try and catch in place around most of its operation, so that individual applications within the CMS are not obliged to take any action. But if an application developer wants to handle database errors, it is always possible to do so by coding a nested try and catch within the application.

One thing that must still be remembered by developers is that SQL easily allows some kinds of error situation to go unnoticed. For example, a DELETE or UPDATE SQL statement will not generate any error if nothing is deleted or updated. It is up to the developer to check how many rows, if any, were affected. This may not be worth doing, but issues of this kind need to be kept in mind when considering how software will work. A good error handling framework makes it easier for a developer to choose between different checking options.

Application errors

Even without there being a PHP or database error, an application may decide that an error situation has arisen.
For some reason, normal processing is impossible, and the user cannot be expected to solve the problem. There are two main choices that will fit with the error handling framework we are considering. One is to use the PHP trigger_error statement. It raises a user error, and allows an error message to be specified. The error that is created will be trapped and passed to the error handler, since we have decided to have our own handler. This mechanism is best used for wholly unexpected errors that nonetheless could arise out of the logic of the application.

The other choice is to use a complete try, catch, and throw structure within the application. This is most useful when there are a number of fatal errors that can arise, and are somewhat expected. The CMS extension installer uses this approach to deal with the various possible fatal errors that can occur during an attempt to install an extension. They are mostly related to errors in the XML packaging file, or in problems with accessing the file system. These are errors that need to be reported to help the user in resolving the problem, but they also involve abandoning the installation process. Whenever a situation of this kind arises, try, catch, and throw is a good way to deal with it.

Exploring PHP—Error handling

PHP provides quite a lot of control over error handling in its configuration. One question to be decided is whether to allow PHP to send any errors to the browser. This is determined by setting the value of display_errors in the php.ini configuration file. It is also possible to determine whether errors will be logged by setting log_errors and to decide where they should be logged by setting error_log. (Often there are several copies of this file, and it is important to find the one that is actually used by the system.) The case against sending errors is that it may give away information useful to crackers. Or it may look bad to users.
On the other hand, it makes development and bug fixing harder if errors have to be looked up in a log file rather than being visible on the screen. And if errors are not sent to the screen, then in the event of a fatal error, the user will simply see a blank screen. This is not a good outcome either. Although the general advice is that errors should not be displayed on production systems, I am still rather inclined to show them. It seems to me that an error message, even if it is a technical one that is meaningless to the user, is rather better than a totally blank screen. The information given is only a bare description of the error, with the name and line number for the file having the error. It is unlikely to be a great deal of use to a cracker, especially since the PHP script just terminates on a fatal error, not leaving any clear opportunity for intrusion. You should make your own decision on which approach is preferable.

Without any special action in the PHP code, an error will be reported by PHP giving details of where it occurred. Providing our own error handler by using the PHP set_error_handler function gives us far more flexibility to decide what information will be recorded and what will be shown to the user. A limitation on this is that PHP will still immediately terminate on a fatal error, such as attempting a method on something that is not an object. Termination also occurs whenever a parsing error is found, that is to say when the PHP program code is badly formed. It is not possible to have control transferred to a user provided error handler, which is an unfortunate limitation on what can be achieved.

However, an error handler can take advantage of knowledge of the framework to capture relevant information. Quite apart from special information on the framework, the handler can make use of the useful PHP debug_backtrace function to find out the route that was followed before the error was reached.
This will give information about what called the current code. It can then be used again to find what called that, and so on until no further trace information is available. A trace greatly increases the value of error reporting as it makes it much easier to find out the route that led to the error.

When an error is trapped using PHP's try and catch, it is best to trace the route to the error at the point the exception is thrown. Otherwise, the error trace will only show the chain of events from the exception to the error handler. There are a number of other PHP options that can further refine how errors are handled, but those just described form the primary tool box that we need for building a solid framework.
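Pulling these pieces together, a handler registered with set_error_handler can capture a debug_backtrace for each error it records, as in this minimal sketch. The ErrorRecorder class name is hypothetical, and a real handler would save the details to the errors table rather than hold them in memory.

```php
<?php
// Minimal sketch of an error handler that records every error it is
// passed, together with the route that led to it.
class ErrorRecorder
{
    public $recorded = array();

    public function handle($errno, $errstr, $errfile, $errline)
    {
        // Summarize the call chain that preceded the error
        $route = array();
        foreach (debug_backtrace() as $call) {
            if (isset($call['function'])) {
                $route[] = $call['function'];
            }
        }
        $this->recorded[] = array(
            'level'   => $errno,
            'message' => $errstr,
            'where'   => $errfile . ':' . $errline,
            'route'   => $route,
        );
        return true; // tell PHP the error has been dealt with
    }
}

$recorder = new ErrorRecorder();
set_error_handler(array($recorder, 'handle'));

function riskyOperation()
{
    // A user error of the kind an application might raise
    trigger_error('Unexpected application state', E_USER_WARNING);
}
riskyOperation();

restore_error_handler();
echo $recorder->recorded[0]['message'], "\n";
```

Note that, as discussed above, a handler like this sees notices, warnings, and user errors, but PHP still terminates on its own for fatal and parse errors.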