
How-To Tutorials - Web Development

1802 Articles

Why Ajax is a different type of software

Packt
27 Jan 2011
14 min read
Ajax is not a piece of software in the way we think about JavaScript or CSS being a piece of software. It's actually a lot more like an overlaid function. But what does that mean, exactly?

Why Ajax is a bit like human speech

Human speech is an overlaid function. What is meant by this is reflected in the answer to a question: "What part of the human body has the basic job of speech?" The tongue, for one answer, is used in speech, but it also tastes food and helps us swallow. The lungs and diaphragm, for another answer, perform the essential task of breathing. The brain cannot be overlooked, but it also does a great many other jobs. All of these parts of the body do something more essential than speech and, for that matter, all of them can be found in animals that cannot talk. Speech is overlaid on organs that are there in the first place for something other than speech.

Something similar is true of Ajax, which is not a technology in itself, but something overlaid on top of other technologies. Ajax, some people say, stands for Asynchronous JavaScript and XML, but that was a retroactive expansion. JavaScript was introduced almost a decade before people began seriously talking about Ajax. Not only is it technically possible to use Ajax without JavaScript (one can substitute VBScript at the expense of browser compatibility), but there are quite a few substantial reasons to use JavaScript Object Notation (JSON) in lieu of heavy-on-the-wire eXtensible Markup Language (XML). Performing the overlaid function of Ajax with JSON replacing XML is just as eligible to be considered full-fledged Ajax as a solution incorporating XML.

Ajax helps the client talk to the server

Ajax is a way of using client-side technologies to talk with a server and perform partial page updates. Updates may be to all or part of the page, or simply to data handled behind the scenes. It is an alternative to the older paradigm of replacing the whole page with a new page loaded when someone clicks on a link or submits a form. Partial page updates, in Ajax, are associated with Web 2.0, while whole page updates are associated with Web 1.0; it is important to note, however, that "Web 2.0" and "Ajax" are not interchangeable. Web 2.0 includes more decentralized control and contributions besides Ajax, and for some objectives it may make perfect sense to develop an e-commerce site that uses Ajax but does not open the door to the same kind of community contributions as Web 2.0.

Some of the key features common in Web 2.0 include:

- Partial page updates, with JavaScript communicating with a server and rendering to a page
- An emphasis on user-centered design
- Enabling community participation to update the website
- Enabling information sharing as core to what this communication allows

The concept of "partial page updates" may not sound very big, but part of its significance may be seen in an unintended effect. The original expectation was that partial page updates would make web applications more responsive: if submitting a form changed only a small area of a page, using Ajax to load just that change would be faster than reloading the entire page for every minor change. That much was true, but once programmers began exploring, what they used Ajax for was not simply minor page updates, but building client-side applications that took on challenges more like those one would expect of a desktop program, and the more interesting Ajax applications usually became slower.
Again, this was not because you could not fetch part of the page and update it faster, but because programmers were trying to do things on the client side that simply were not possible under the older way of doing things, and were pushing the envelope on the concept of a web application and what web applications can do.

Which technologies does Ajax overlay?

Now let us look at some of the technologies where Ajax may be said to be overlaid.

JavaScript

JavaScript deserves pride of place, and while it is possible to use VBScript for Internet Explorer as much more than a proof of concept, for now if you are doing Ajax, it will almost certainly be Ajax running JavaScript as its engine. Your application will have JavaScript working with XMLHttpRequest; JavaScript working with HTML, XHTML, or HTML5; JavaScript working with the DOM; JavaScript working with CSS; JavaScript working with XML or JSON; and perhaps JavaScript working with other things.

While addressing a group of Django developers or Pythonistas, it would seem appropriate to open with, "I share your enthusiasm." On the other hand, while addressing a group of JavaScript programmers, in a few ways it is more appropriate to say, "I feel your pain." JavaScript is a language that has been discovered as a gem, but its warts were enough for it to be largely unappreciated for a long time. "Ajax is the gateway drug to JavaScript," as it has been said; however, JavaScript needs a gateway drug before people get hooked on it. JavaScript is an excellent language and a terrible language rolled into one.

Before discussing some of the strengths of JavaScript (and the language does have some truly deep strengths), I would like to say "I feel your pain" and discuss two quite distinct types of pain in the JavaScript language.

The first source of pain is some of the language decisions in JavaScript. The Wikipedia article says it was designed to resemble Java but be easier for non-programmers, a decision reminiscent of SQL and COBOL. The Java programmer who finds the C-family idiom of for(i = 0; i < 100; ++i) available will be astonished to find that functions clobber each other's assignments to i unless the variables are explicitly declared local to the function with var. There is more pain where that came from. The following two functions will not perform the naively expected mathematical calculation correctly; the assignments to i and result will clobber each other:

function outer() {
    result = 0;
    for(i = 0; i < 100; ++i) {
        result += inner(i);
    }
    return result;
}

function inner(limit) {
    result = 0;
    for(i = 0; i < limit; ++i) {
        result += i;
    }
    return result;
}

The second source of pain is quite different. It is a pain of inconsistent implementation: the pain of "write once, debug everywhere." Strictly speaking, this is not JavaScript's fault; browsers are inconsistent. And it need not be a pain in the server-side use of JavaScript or other non-browser uses. However, it comes along for the ride for people who wish to use JavaScript to do Ajax. Cross-browser testing is a foundational practice in web development of any stripe; a good web page with semantic markup and good CSS styling that is developed on Firefox will usually look sane on Internet Explorer (or vice versa), even if not quite pixel-perfect. But program directly for the JavaScript implementation on one version of a browser, and you stand rather sharp odds of your application not working at all on another browser.
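To make that second pain concrete, the following is a minimal sketch (not taken from this article's later examples) of the kind of conditional boilerplate historically needed just to obtain an XMLHttpRequest object across browsers; the exact ActiveX ProgID fallbacks listed here are assumptions that vary by environment, and a library will normally hide all of this.

// A hedged sketch: obtain an XMLHttpRequest object across old and new browsers.
function createXHR() {
    if (window.XMLHttpRequest) {
        // Standards-compliant browsers (and Internet Explorer 7 and later)
        return new XMLHttpRequest();
    } else if (window.ActiveXObject) {
        // Older Internet Explorer exposed the object only through ActiveX ProgIDs.
        var progIds = ["Msxml2.XMLHTTP.6.0", "Msxml2.XMLHTTP", "Microsoft.XMLHTTP"];
        for (var i = 0; i < progIds.length; i++) {
            try {
                return new ActiveXObject(progIds[i]);
            } catch (e) {
                // This ProgID is not available; try the next one.
            }
        }
    }
    return null; // No known way to create an XMLHttpRequest in this environment.
}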
The most important object by far for Ajax is XMLHttpRequest. Not only may you have to do different things to get an XMLHttpRequest object in different browsers, or sometimes in different (common) versions of the same browser but, even when you have code that will get an XMLHttpRequest object, the objects you get can be incompatible, so that code that works on one will show strange bugs on another. Just because you have done the work of getting an XMLHttpRequest object in all of the major browsers, it doesn't mean you're home free.

Before discussing some of the strengths of the JavaScript language itself, it is worth pointing out that a good library significantly reduces the second source of pain. Almost any sane library will provide a single, consistent way to get XMLHttpRequest functionality, and consistent behavior for the access it provides. In other words, one of the services provided by a good JavaScript library is much more uniform behavior, so that you are programming for only one model, or as close to one as the library can manage, and not, for instance, pasting in conditional boilerplate code to do simple things that are handled differently by different browser versions, often rendering surprisingly different interpretations of JavaScript. Many of the things we will see done well as we explore jQuery are also done well in other libraries.

We previously said that JavaScript is an excellent language and a terrible language rolled into one; what is to be said in favor of JavaScript? The list of faults above is hardly all that is wrong with JavaScript, and saying that libraries can dull the pain is not itself a great compliment. But in fact, something much stronger can be said for JavaScript: if you can figure out why Python is a good language, you can figure out why JavaScript is a good language.

I remember, when I was chasing pointer errors in what became 60,000 lines of C, teasing a fellow student for using Perl instead of a real language. It was clear in my mind that there were interpreted scripting languages, such as the bash scripting that I used for minor convenience scripts, and then there were real languages, which were compiled to machine code. I was sure that a real language was identified with being compiled, among other things, and that power in a language was the sort of thing C traded in. (I wonder why he didn't ask me whether I wasn't a real programmer because I didn't spend half my time chasing pointer errors.) Within the past year or so I've been asked if "Python is a real programming language or is just used for scripting," and something similar to the attitude shift I needed to appreciate Perl and Python is needed to properly appreciate JavaScript.

The name "JavaScript" is unfortunate; like calling Python an "Assembler Kit", it's a way to ask people not to see its real strengths. (Someone looking for tools for working on an assembler would be rather disgusted to buy an "Assembler Kit" and find Python inside. People looking for Java's strengths in JavaScript will almost certainly be disappointed.) JavaScript code may look like Java in an editor, but the resemblance is a façade; besides Mocha, which had been renamed LiveScript, being renamed to JavaScript just when Netscape was announcing Java support in web browsers, it has been described as being descended from NewtonScript, Self, Smalltalk, and Lisp, as well as being influenced by Scheme, Perl, Python, C, and Java. What's under the Java façade is pretty interesting.
And, in the sense of the simplifying "façade" design pattern, JavaScript was marketed in a way almost guaranteed not to communicate its strengths to programmers. It was marketed as something that nontechnical people could add snippets of, in order to achieve minor, and usually annoying, effects on their web pages. It may not have been a toy language, but it sure was dressed up like one.

Python may not have functions clobbering each other's variables (at least not unless they are explicitly declared global), but Python and JavaScript are both multiparadigm languages that support object-oriented programming, and their versions of "object-oriented" have a lot in common, particularly as compared to (for instance) Java. In Java, an object's class defines its methods and the types of its fields, and this much is set in stone. In Python, an object's class defines what an object starts off as, but methods and fields can be attached and detached at will. In JavaScript, classes as such do not exist (unless simulated by a library such as Prototype), but an object can inherit from another object, giving it a prototype and by implication a prototype chain, and like Python it is dynamic in that fields can be attached and detached at will.

In Java, the instanceof keyword is important, as are class casts, associated with strong, static typing. Python doesn't have casts, and its isinstance() function is seen by some as a mistake. The concern is that Python, like JavaScript, is a duck-typing language: "If it looks like a duck, and it quacks like a duck, it's a duck!" In a duck-typing language, if you write a program that polls weather data, and there is a ForecastFromScreenscraper object that is several years old and screenscrapes an HTML page, you should be able to write a ForecastFromRSS object that gets the same information much more cleanly from an RSS feed, and use it as a drop-in replacement as long as you have the interface right. That is different from Java; code written for a ForecastFromScreenscraper object would break immediately if you handed it a ForecastFromRSS object. Now, in fairness to Java, the "best practices" Java way to do it would probably separate out an IForecast interface, which would be implemented by both ForecastFromScreenscraper and later ForecastFromRSS, and Java has ways of allowing drop-in replacements if they have been explicitly foreseen and planned for. However, in duck-typed languages, the reality goes beyond the fact that if the people in charge designed things carefully and used an interface for a particular role played by an object, you can make a drop-in replacement. In a duck-typed language, you can make a drop-in replacement for things that the original developers never imagined you would want to replace. (A small sketch of this idea appears at the end of this section.)

JavaScript's reputation is changing. More and more people are recognizing that there is more to the language than design flaws. More and more people are looking past the fact that JavaScript is packaged like Java, like packaging a hammer to give the impression that it is basically like a wrench. More and more people are looking past the silly "toy language" Halloween costume that JavaScript was stuffed into as a kid. One of the ways good programmers grow is by learning new languages, and JavaScript is not just the gateway to mainstream Ajax; it is an interesting language in itself.
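As promised, here is a small, hypothetical sketch of that duck-typing point. The object names ForecastFromScreenscraper and ForecastFromRSS come from the discussion above, but the getForecast() method and the surrounding code are illustrative assumptions, not an API from any real library.

// Hypothetical duck-typing sketch: any object with a getForecast() method will do.
var ForecastFromScreenscraper = {
    getForecast: function() {
        // Imagine this scraping weather data out of an HTML page.
        return "72F, partly cloudy (scraped from HTML)";
    }
};

var ForecastFromRSS = {
    getForecast: function() {
        // Imagine this reading the same data, much more cleanly, from an RSS feed.
        return "72F, partly cloudy (from RSS)";
    }
};

function displayWeather(forecastSource) {
    // No instanceof check and no declared interface: if it quacks, use it.
    alert(forecastSource.getForecast());
}

// Either object is a drop-in replacement for the other.
displayWeather(ForecastFromScreenscraper);
displayWeather(ForecastFromRSS);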
With that much stated, we will be making a carefully chosen, selective use of JavaScript, and not a language lover's exploration of the JavaScript language overall. Much of our work will be with the jQuery library; if you have only programmed a little "bare JavaScript", discovering jQuery is a bit like discovering Python, in terms of a tool that cuts like a hot knife through butter. It takes learning, but it yields power and interesting results soon, as well as having some room to grow.

What is XMLHttpRequest in relation to Ajax?

The XMLHttpRequest object is the reason why the kind of games that can be implemented with Ajax technologies do not stop at clones of Tetris and other games that do not know or care if they are attached to a network. They include massive multiplayer online role-playing games where the network is the computer. Without something like XMLHttpRequest, "Ajax chess" would probably mean a game of chess against a chess engine running in your browser's JavaScript engine; with XMLHttpRequest, "Ajax chess" is more likely chess against another human player connected via the network. The XMLHttpRequest object is the object that lets Gmail, Google Maps, Bing Maps, Facebook, and many less famous Ajax applications deliver on Sun's promise: the network is the computer.

There are differences and some incompatibilities between different versions of XMLHttpRequest, and efforts are underway to advance "level-2-compliant" XMLHttpRequest implementations, featuring everything that is expected of an XMLHttpRequest object today and providing further functionality in addition, somewhat in the spirit of level 2 or level 3 CSS compliance. We will not be looking at level 2 efforts, but we will look at the baseline of what is expected as standard in most XMLHttpRequest objects.

The basic way that an XMLHttpRequest object is used is that the object is created or reused (the preferred practice usually being to reuse rather than create and discard a large number), a callback event handler is specified, the connection is opened, the data is sent, and then, when the network operation completes, the callback handler retrieves the response from the XMLHttpRequest and takes an appropriate action. A bare-bones XMLHttpRequest object can be expected to have the following methods and properties.
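As a rough illustration of the create/configure/open/send/handle cycle just described, here is a minimal sketch; the URL, the element ID, and the bare status check are placeholder assumptions, and production code would add error handling and typically reuse the object across requests.

// Minimal sketch of the basic XMLHttpRequest round trip described above.
var xhr = new XMLHttpRequest();

// Specify the callback event handler.
xhr.onreadystatechange = function() {
    // readyState 4 means the network operation has completed.
    if (xhr.readyState === 4 && xhr.status === 200) {
        // Retrieve the response and take an appropriate action.
        document.getElementById("output").innerHTML = xhr.responseText; // placeholder element ID
    }
};

// Open the connection asynchronously and send the request.
xhr.open("GET", "/fragment.html", true); // placeholder URL
xhr.send(null);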


Managing Records in Alfresco 3

Packt
25 Jan 2011
12 min read
Alfresco 3 Records Management: comply with regulations and secure your organization's records with Alfresco Records Management.

- Successfully implement your records program using Alfresco Records Management, fully certified for DoD-5015.2 compliance
- The first and only book to focus exclusively on Alfresco Records Management
- Step-by-step instructions describe how to identify records, organize records, and manage records to comply with regulatory requirements
- Learn in detail about the software internals to get a jump-start on performing customizations

Records Details

Much of the description in this article focuses on record features that are found on the Record Details page. An abbreviated set of metadata and available actions for the record is shown on the row for the record in the File Plan. The Details page for a record is a composite screen that contains a complete listing of all information for a record, including links to all possible actions and operations that can be performed on it. We can get to the Details page for a record by clicking on the link to it from the File Plan page.

The Record Details page provides a summary of all available information known about a record and has links to all possible actions that can be taken on it. This is the central screen from which a record can be managed. The Details screen is divided into three main columns. The first column on the screen provides a preview of the content for the record. The middle column lists the record Metadata, and the right-most column shows a list of Actions that can be taken on the record. There are other areas lower down on the page with additional functionality, including a way for the user to manually trigger events in the steps of the disposition, to get URL links to file content for the record, and to create relationship links to other records in the File Plan.

Alfresco Flash previewer

The web preview component in the left column of the Record Details page defines a region in which the content of the record can be visually previewed. It is a bit of an exaggeration to call the preview component a Universal Viewer, but it does come close to that. The viewer is capable of viewing a number of different common file formats, and it can be extended to support the viewing of additional file formats. Natively, the viewer is capable of viewing both Flash SWF files and image formats like JPEG, PNG, or GIF. Microsoft Office, OpenOffice, and PDF files are also configured out-of-the-box to be previewed with the viewer by first converting the files to PDF and then to Flash.

The use of an embedded viewer in Share means that client machines don't have to have a viewing application installed to be able to view the file contents of a record. For example, a client machine running an older version of Microsoft Word may not have the capability to open a record saved in the newer Word DOCX format, but within Share, using the viewer, that client would be able to preview and read the contents of the DOCX file.

The top of the viewer has a header area that displays the icon of a record alongside the name of the record being viewed. Below that, there is a toolbar with controls for the viewing of the file. At the left of the toolbar, there are controls to change the zoom level. Small increments for zoom in and zoom out are controlled by clicking on the "+" and "-" buttons. The zoom setting can also be controlled by the slider or by specifying a zoom percentage or display factor like Fit Width from the drop-down menu.
For multi-page documents, there are controls to go to the next or previous pages and to jump to a specific page. The Fullscreen button enlarges the view and displays it using the entire screen. Maximize enlarges the view to display it within the browser window. Image panning and positioning within the viewer can be done by using the scrollbar or by left-clicking and dragging the image with the mouse. A print option is available from an item on the right-click menu.

Record Metadata

The centre column of the Record Details page displays the metadata for the record. There are a lot of metadata properties that are stored with each record. To make it easier to locate specific properties, there is a grouping of the metadata, and each group has a label. The first metadata group is Identification and Status. It contains the Name, Title, and Description of the record. It shows the Unique Record Identifier for the record, and the unique identifier for the record Category to which the record belongs. Additional metadata items track whether the record has been Declared, when it was Declared, and who Declared it.

The General group for metadata tracks the Mimetype and the Size of the file content, as well as who Created or last made any modifications to the record. Additional metadata for the record is listed under groups like Record, Security, Vital Record Information, and Disposition. The Record group contains the metadata fields Location, Media Type, and Format, all of which are especially useful for managing non-electronic records.

Record actions

In the right-most column of the Record Details page, there is a list of Actions that are available to perform on the record. The list displayed is dynamic and changes based on the state of the record. For example, options like Declare as Record or Undo Cutoff are only displayed when the record is in a state where that action is possible.

Download action

The Download action does just that. Clicking on this action will cause the file content for the record to be downloaded to the user's desktop.

Edit Metadata

This action displays the Edit form matching the content type for the record. For example, if the record has a content type of cm:content, the Edit form associated with the type cm:content will be displayed to allow the editing of the metadata. Items identified with asterisks are required fields. Certain fields contain data that is not meant to change and are grayed out and non-selectable.

Copy record

Clicking on the Copy to action will pop up a repository directory browser that allows a copy of the record to be filed to any Folder within the File Plan. The name of the new record will start with the words "Copy of" and end with the name of the record being copied. Only a single copy of a record can be placed in a Folder without first changing the name of the first copy. It isn't possible to have two records in the same Folder with the same name.

Move record

Clicking on the Move to action pops up a dialog to browse to the new Folder to which the record will be moved. The record is removed from the original location and moved to the new location.

File record

Clicking on the File to action pops up a dialog to identify a new Folder in which the record will be filed. A reference to the record is placed in the new Folder. After this operation, the record will basically be in two locations. Deleting the record from either of the locations causes the record to be removed from both of them.
After filing the record, a clip status icon is displayed at the upper-left, next to the checkbox for selection. The status indicates that one record is filed in multiple Folders of the File Plan.

Delete record

Clicking on the Delete action permanently removes the item from the File Plan. Note that this action differs from Destroy, which removes only the file content from a record as part of the final step of a disposition schedule.

Audit log

At any point in the lifecycle of a record, an audit log is available that shows a detailed history of all activities for the record. The record audit log can help to answer questions that may come up, such as which users have been involved with the record and when specific lifecycle events for the record have occurred. The audit log also provides information that can confirm whether activities in the records system are both effective and compliant with record policies.

The View Audit Log action creates and pops up a dialog containing a detailed historical report for the record. The report includes very detailed and granular information about every change that has ever been made to the record. Each entry in the audit log includes a timestamp for when the change was made, the user that made the change, and the type of change or event that occurred. If the event involved the change of any metadata, the original values and the changed values for the metadata are noted in the report. By clicking on the File as Record button on the dialog, the audit report for the record itself can be captured as a record that can then be filed within the File Plan. The report is saved in HTML file format. Clicking on the Export button at the top of the dialog enables the audit report to be downloaded in HTML format.

The Audit log discussed here provides very granular information about any changes that have occurred to a specific record. Alfresco also provides a tool included with the Records Management Console, also called Audit, which can create a very detailed report showing all activities and actions that have occurred throughout the records system.

Links

Below the Actions component is a panel containing the Share component. This is a standard component that is also used in the Share Document Library. The component lists three URL links in fields that can be easily copied from and pasted to. The URLs allow record content and metadata to be easily shared with others. The first link in the component is the Download File URL. Referencing this link causes the content for the record to be downloaded as a file. The second link is the Document URL. It is similar to the first link, but if the browser is capable of viewing the file format type, the content will be displayed in the browser; otherwise it is downloaded as a file. The third link is the This Page URL. This is the URL to the record details page. Trying to access any of these three URLs will require the user to first authenticate himself/herself before access to any content will be allowed.

Events

Below the Flash preview panel on the Details page for the record is an area that displays any Events that are currently available to be manually triggered for this record. Remember that each step of a disposition schedule is actionable after either the expiration of a time deadline or the manual triggering of an event. Events are triggered manually by a user clicking on a button to indicate that an event has occurred.
The location of the event trigger buttons differs depending on how the disposition in the record Category was applied. If the disposition was applied at the Folder level, the manual event trigger buttons will be available on the Details page for the Folder. If the disposition was applied at the record level, the event trigger buttons are available on the Record Details page. The buttons that we see on this page are the ones available from the disposition being applied at the record level.

The event buttons that apply to a particular state will be grouped together based on whether or not the event has been marked as completed. After an event is marked complete, it is moved to the Completed group. If there are multiple possible events, it takes only a single one of them to complete in order to make the action available. Some actions, like cutoff, will be executed by the system. Other actions, like destruction, require a user to intervene, but will become available from the Share user interface.

References

Often it is useful to create references or relationships between records. A reference is a link that relates one record to another. Clicking on the link will retrieve and view the related record. In the lower right of the Details page, there is a component for tracking references from this record to other records in the File Plan. It is especially useful for tracking, for instance, reference links to superseded or obsolete versions of the current record.

To attach references, click on the Manage button on the References component. Then, from the next screen, select New Reference. A screen containing a standard Alfresco form will then be displayed. From this screen, it is possible to name the reference, pick another record to reference, and mark the type of reference. Available reference types include:

- SupersededBy / Supersedes
- ObsoletedBy / Obsoletes
- Supporting Documentation / Supported Documentation
- VersionedBy / Versions
- Rendition
- Cross-Reference

After creating the reference, you will then see the new reference show up in the list.

How does it work?

We've now looked at the functionality of the details page for records and the Series, Category, and Folder containers. In this "How does it work?" section, we'll investigate in greater detail how some of the internals of the record Details page work.


Workflow and Automation for Records in Alfresco 3

Packt
25 Jan 2011
13 min read
Alfresco 3 Records Management: comply with regulations and secure your organization's records with Alfresco Records Management.

- Successfully implement your records program using Alfresco Records Management, fully certified for DoD-5015.2 compliance
- The first and only book to focus exclusively on Alfresco Records Management
- Step-by-step instructions describe how to identify records, organize records, and manage records to comply with regulatory requirements
- Learn in detail about the software internals to get a jump-start on performing customizations

Current limitations in Records Management

The 3.3 release of Alfresco Records Management comes with some limitations on how rules and workflow can be used. Records Management requirements, and specifically the requirements for complying with the DoD 5015.2 specification, made it necessary for Alfresco developers, at least for this first release of Records Management, to make design decisions that involved limiting some of the standard Alfresco capabilities within Records Management. The idea that records are things that need to be locked down and made unalterable is at odds with some of the capabilities of rules and workflow. Proper integration of workflow and rules with Records Management requires that a number of scenarios be carefully worked through. Because of that, as of the Alfresco 3.3 release, both workflow and rules are not yet available for use in Records Management.

Another area that is lacking in the 3.3 release is the availability of client-side JavaScript APIs for automating Records Management functions. The implementation of Records Management exposes many webscripts for performing records functions, but that same records functionality hasn't been exposed via a client-side JavaScript API. It is expected that the capabilities of Records Management working alongside rules, workflow, and APIs will likely improve in future releases of Alfresco.

While the topic of this article is workflow and automation, the limitations we've just mentioned don't mean that there isn't anything left to discuss. Remember, Alfresco Records Management co-exists with the rest of Share, and rules and workflow are alive and well in standard Share. Also remember that, prior to being filed and declared as records, many records start out their lives as standard documents that require collaboration and versioning before they are finalized and turned into records. It is in these scenarios, prior to becoming a record, where the use of workflow often makes the most sense. With that in mind, let's look at the capabilities of rules and workflow within Alfresco Share and how these features can be used side-by-side with Records Management today, and also get a feel for how future releases of Alfresco Records Management might be able to more directly apply these capabilities.

Alfresco rules

The Alfresco rules engine provides an interface for implementing simple business rules for managing the processing and flow of content within an organization. Creating rules is easy to do. Alfresco rules were first available in the Alfresco Explorer client. While rules in the Explorer client were never really hard to apply, the new rules interface in Share makes the creation and use of rules even easier. Rules can be attached to a folder and triggered when content is moved into it, moved out of it, or updated.
The triggering of a rule causes an action to be run, which then operates on the folder or the contents within it. Filters can also be applied to the rules to limit the conditions under which the trigger will fire. A trigger can be set up to run one of the many pre-defined actions, or it can be customized to run a user-defined script.

Rules are not available for Share Records Management Folders. You may find that it is possible to bypass this limitation by either navigating to records folders using the Repository browser in Share or by using the JSF Explorer client to get to the Records Management folder. From those interfaces, it is possible to create rules for record folders, but it's not a good idea. Resist the temptation. It is very easy to accidentally corrupt records data by applying rules directly to records folders.

Defining a rule

While rules can't be applied directly to the folders of Records Management, it is possible to apply rules to folders outside of Records Management, which can then push and file documents into records folders. We'll see that once a document is moved into a records folder, rules can be used to update its metadata and even declare it as a record. To apply rules to a folder outside of the Records Management site, we select the Manage Rules action available for the folder. On the next screen, we click on the Create Rules link, and we then see a page for creating the definition of the rules for the folder.

A rule is defined by three main pieces of information:

- The trigger event
- Filters to limit the items that are processed
- The action that is performed

Triggers for rules

Three different types of events can trigger a rule:

- Creating or moving items into the folder
- Updating items in the folder
- Deleting or moving items from the folder

Filters for rules

By default, when an event occurs, the rule that is triggered applies the rule action to all items involved in the event. Filters can be applied that will make rules more restrictive, limiting the items that will be processed by the rule. Filters are a collection of conditional expressions that are built using metadata properties associated with the items. There are actually two conditions. The first condition is a list of criteria for different metadata properties, all of which must be met. Similarly, the second condition is a list of criteria for metadata properties, none of which must hold. For example, in the screenshot shown below, there is a filter defined that applies a rule only if the document name begins with FTK_, if it is a Microsoft Word file, and if it does not contain the word Alfresco in the Description property.

By clicking on the + and – buttons to the right of each entry, new criteria can be added and existing criteria can be removed from each of the two sets. To help specify the filter criteria from the many possible properties available in Alfresco, a property browser lets the user navigate through properties that can be used when specifying the criteria. The browser shows all available properties associated with aspect and type names.

Actions for rules

Actions are the operations that a rule runs when triggered. There is a fairly extensive library of actions that come standard with Alfresco and that can be used in the definition of a rule. Many of the actions available can be configured with parameters. This means that a lot of capability can be easily customized into a rule without needing to do any programming.
If the standard actions aren't sufficient to perform the desired task, an alternative is to write a custom server-side JavaScript that can be attached as the action to a rule and run when the rule is triggered. The complete Alfresco JavaScript API is available to be used when writing the custom action. Actions that are available for assignment as rule actions are shown in the next table. It is interesting to note that there are many Records Management functions in the list that are available as possible actions, even though rules themselves are not available to be applied directly to records folders.

Actions can accept parameters. For example, the Move and Copy actions allow users to select the target destination parameter for the action. Using a pop-up in the Rules Manager, the user can find a destination location by navigating through the folder structure. In the screenshot below, we see another example where the Send email action pops up a form to help configure an e-mail notification that will be sent out when the action is run.

Multiple rules per folder

The Rules Manager supports the assignment of multiple rules to a single folder. A drag-and-drop interface allows individual rules to be moved into the desired order of execution. When an event occurs on a folder, each rule attached to the folder is checked sequentially to see if there is a match. When the criteria for a rule match, the rule is run. By creating multiple rules with the same firing criteria, it's possible to arrange rules for sequential processing, allowing fairly complex operations to be performed. The screenshot below shows the user interface that lets the user set the processing order for the rules.

Actions like move and copy allow the rules on one folder to pass documents on to other folders, which in turn may also have rules assigned to them that can perform additional processing. In this way, rules on Alfresco folders are effectively a "poor man's workflow". They are powerful enough to handle a large number of business automation requirements, although at a fairly simple level. More complex workflows with many conditionals and loops need to be modeled using workflow tools like jBPM, which we will discuss later. The next figure shows an example of how rules can be sequentially ordered.

Auto-declaration example

Now let's look at an example where we file documents into a transit folder, from which they are then automatically processed, moved into the Records Management site, and declared as records. To do this, we'll create a transit folder and attach two rules to it for performing the processing. The first rule will run a custom script that applies record aspects to the document, completes mandatory records metadata, and then moves the document into a Folder under Records Management, effectively filing it. The second rule then declares the newly filed document as a record. The rules for this example will be applied to a folder called "Transit Folder", which is located within the document library of a standard Share site.

Creating the auto-filing script

Let's look at the first of the two rules, the one that uses a custom script for the action. It is this script that does most of the work in the example.
We'll break up the script into two parts and discuss each part individually:

// Find the file name, minus the namespace prefix (assume cm:content)
var fPieces = document.qnamePath.split('/');
fName = fPieces[fPieces.length-1].substr(3);

// Remember the ScriptNode object for the parent folder being filed to
var parentOrigNode = document.parent;

// Get today's date. We use it later to fill in metadata.
var d = new Date();

// Find the ScriptNode for the destination to which we will file -- hardcoded here.
// More complex logic could be used to categorize the incoming data and file it into different locations.
var destLocation = "Administrative/General Correspondence/2011_01 Correspondence";
var filePlan = companyhome.childByNamePath("Sites/rm/documentlibrary");
var recordFolder = filePlan.childByNamePath(destLocation);

// Add aspects needed to turn this document into a record
document.addAspect("rma:filePlanComponent");
document.addAspect("rma:record");

// Complete mandatory metadata that will be needed to declare as a record
document.properties["rma:originator"] = document.properties["cm:creator"];
document.properties["rma:originatingOrganization"] = "Formtek, Inc";
document.properties["rma:publicationDate"] = d;
document.properties["rma:dateFiled"] = d;

// Build the unique record identifier -- based on the node-dbid value
var idStr = '' + document.properties["sys:node-dbid"];

// Pad the string with zeros to be 10 characters in length
while (idStr.length < 10)
{
    idStr = '0' + idStr;
}
document.properties["rma:identifier"] = d.getFullYear() + '-' + idStr;

document.save();
document.move(recordFolder);

At the top of the script, the filename of the item entering the folder is extracted from the document.qnamePath string that contains the complete filename. document is the variable passed into the script that refers to the document object created with information about the new file that is moved into the folder. The destination location, a folder in Records Management, is hardcoded here. A more sophisticated script could file incoming documents into multiple folders based on a variety of criteria.

We add the aspects rma:filePlanComponent and rma:record to the document to prepare it for becoming a record and then complete the metadata properties that are mandatory for being able to declare the document as a record.

We're bypassing some code in Records Management that normally would assign the unique record identifier to the document. Normally, when a document is filed into a folder, the unique record identifier is automatically generated within the Alfresco core Java code. Because of that, we need to reconstruct the string and assign the property in the script. We'll follow Alfresco's convention for building the unique record identifier by appending a 10-digit zero-padded integer to the year. Alfresco already has a unique object ID for every object, which is used when the record identifier is constructed. The unique ID is called sys:node-dbid. Note that any unique string could be used for the unique record identifier, but we'll go with Alfresco's convention.

Finally, the script saves the changes to the document, and the document is filed into the Records Management folder. At this point, the document is an undeclared record in the Records Management system. We could stop here with this script, but let's go one step further. Let's place a stub document in this same folder that will act as a placeholder to alert users as to where the documents that they filed have been moved.
The second part of the same script handles the creation of a stub file:

// Leave a marker to track the document
var stubFileName = fName + '_stub.txt';

// Create the new document
var props = new Array();
props["cm:title"] = ' Stub';
props["cm:description"] = ' (Stub Reference to record in RM)';
var stubDoc = parentOrigNode.createNode(stubFileName, "cm:content", props);
stubDoc.content = "This file is now under records management control:\n " +
    recordFolder.displayPath + '/' + fName;

// Make a reference to the original document, now a record
stubDoc.addAspect("cm:referencing");
stubDoc.createAssociation(document, "cm:references");
stubDoc.save();

The document name we use for the stub file is the same as the incoming filename with the suffix _stub.txt appended to it. The script then creates a new node of type cm:content in the transit directory where the user originally uploaded the file. The cm:title and cm:description properties are completed for the new node, and text content is added to the document. The content contains the path to where the original file has been moved. Finally, the cm:referencing aspect is added to the document to allow a reference association to be made between the stub document and the original document that is now under Records Management. The stub document with these new properties is then saved.

Installing the script

In order for the script to be available for use in a rule, it must first be installed under the Data Dictionary area in the repository. To add it, we navigate to the folder Data Dictionary / Scripts in the repository browser within Share. The repository can be accessed from the Repository link across the top of the Share page. To install the script, we simply copy the script file to this folder. We also need to complete the title for the new document because the title is the string that will be used later to identify it. We will name this script Move to Records Management.


How to Write a Widget in WordPress 3

Packt
24 Jan 2011
7 min read
WordPress 3 Complete: create your own complete website or blog from scratch with WordPress.

- Learn everything you need for creating your own feature-rich website or blog from scratch
- Clear and practical explanations of all aspects of WordPress
- In-depth coverage of installation, themes, plugins, and syndication
- Explore WordPress as a fully functional content management system
- Clear, easy-to-follow, concise; rich with examples and screenshots

Recent posts from a Category Widget

In this section, we will see how to write a widget that displays recent posts from a particular category in the sidebar. The user will be able to choose how many recent posts to show and whether or not to show an RSS feed link. It will look like the following screenshot:

Let's get started!

Naming the widget

Widgets, like plugins, need to have a unique name. Again, I suggest you search the Web for the name you want to use in order to be sure of its uniqueness. Because of the widget class, you don't need to worry so much about uniqueness in your function and variable names, since the widget class unique-ifies them for you. I've given this widget the filename ahs_postfromcat_widget.php.

As for the introduction, this comment code is the same as what you use for the plugin. For this widget, the introductory comment is this:

/*
Plugin Name: April's List Posts Cat Widget
Plugin URI: http://springthistle.com/wordpress/plugin_postfromcat
Description: Allows you to add a widget with some number of most recent posts from a particular category
Author: April Hodge Silver
Version: 1.0
Author URI: http://springthistle.com
*/

Widget structure

When building a widget using the widget class, your widget needs to have the following structure:

class UNIQUE_WIDGET_NAME extends WP_Widget {
    function UNIQUE_WIDGET_NAME() {
        $widget_ops = array();
        $control_ops = array();
        $this->WP_Widget();
    }
    function form ($instance) {
        // prints the form on the widgets page
    }
    function update ($new_instance, $old_instance) {
        // used when the user saves their widget options
    }
    function widget ($args, $instance) {
        // used when the sidebar calls in the widget
    }
}
// initiate the widget
// register the widget

Of course, we need an actual unique widget name. I'm going to use Posts_From_Category. Now, let's flesh out this code one section at a time.

Widget initiation function

Let's start with the widget initiation function. Blank, it looks like this:

function Posts_From_Category() {
    $widget_ops = array();
    $control_ops = array();
    $this->WP_Widget();
}

In this function, which has the same name as the class itself and is therefore the constructor, we initialize various things that the WP_Widget class is expecting. The first two variables, to which you can give any name you want, are just a handy way to set the two array variables expected by the third line of code. Let's take a look at these three lines of code:

- The $widget_ops variable is where you can set the class name, which is given to the widget div itself, and the description, which is shown in the WP Admin on the widgets page.
- The $control_ops variable is where you can set options for the control box in the WP Admin on the widget page, like the width and height of the widget and the ID prefix used for the names and IDs of the items inside.
- When you call the parent class' constructor, WP_Widget(), you'll tell it the widget's unique ID, the widget's display title, and pass along the two arrays you created.
For this widget, my code now looks like this:

function Posts_From_Category() {
    $widget_ops = array(
        'classname' => 'postsfromcat',
        'description' => 'Allows you to display a list of recent posts within a particular category.');
    $control_ops = array(
        'width' => 250,
        'height' => 250,
        'id_base' => 'postsfromcat-widget');
    $this->WP_Widget('postsfromcat-widget', 'Posts from a Category', $widget_ops, $control_ops);
}

Widget form function

This function has to be named form(). You may not rename it if you want the widget class to know what its purpose is. You also need to have an argument in there, which I'm calling $instance, that the class also expects. This is where current widget settings are stored. This function needs to have all of the functionality to create the form that users will see when adding the widget to a sidebar. Let's look at some abbreviated code and then explore what it's doing:

<?php function form ($instance) {
    $defaults = array('numberposts' => '5', 'catid' => '1', 'title' => '', 'rss' => '');
    $instance = wp_parse_args( (array) $instance, $defaults ); ?>
    <p>
        <label for="<?php echo $this->get_field_id('title'); ?>">Title:</label>
        <input type="text" name="<?php echo $this->get_field_name('title') ?>"
            id="<?php echo $this->get_field_id('title') ?>"
            value="<?php echo $instance['title'] ?>" size="20">
    </p>
    <p>
        <label for="<?php echo $this->get_field_id('catid'); ?>">Category ID:</label>
        <?php wp_dropdown_categories('hide_empty=0&hierarchical=1&id='.$this->get_field_id('catid').'&name='.$this->get_field_name('catid').'&selected='.$instance['catid']); ?>
    </p>
    <p>
        <label for="<?php echo $this->get_field_id('numberposts'); ?>">Number of posts:</label>
        <select id="<?php echo $this->get_field_id('numberposts'); ?>" name="<?php echo $this->get_field_name('numberposts'); ?>">
            <?php for ($i=1; $i<=20; $i++) {
                echo '<option value="'.$i.'"';
                if ($i==$instance['numberposts']) echo ' selected="selected"';
                echo '>'.$i.'</option>';
            } ?>
        </select>
    </p>
    <p>
        <input type="checkbox" id="<?php echo $this->get_field_id('rss'); ?>" name="<?php echo $this->get_field_name('rss'); ?>" <?php if ($instance['rss']) echo 'checked="checked"' ?> />
        <label for="<?php echo $this->get_field_id('rss'); ?>">Show RSS feed link?</label>
    </p>
<?php } ?>

First, I set some defaults, which in this case is just the number of posts, which I think it would be nice to set to 5. You can set other defaults in this array as well. Then you use a WordPress function named wp_parse_args(), which creates an $instance array that your form will use. What's in it depends on what defaults you've set and what settings the user has already saved.

Then you create the form fields. Note that for each form field, I make use of the built-in functions that will create unique names and IDs and input existing values:

- $this->get_field_id() creates a unique ID based on the widget instance (remember, you can create more than one instance of this widget).
- $this->get_field_name() creates a unique name based on the widget instance.

The $instance array is where you will find the current values for the widget, whether they are defaults or user-saved data. All the other code in there is just regular PHP and HTML. Note that if you give the user the ability to set a title and name that field "title", WordPress will show it on the widget form when it's minimized. The widget form this will create will look like this:


Integrating Facebook with Magento

Packt
21 Jan 2011
4 min read
Magento 1.4 Themes Design: customize the appearance of your Magento 1.4 e-commerce store with Magento's powerful theming engine.

- Install and configure Magento 1.4 and learn the fundamental principles behind Magento themes
- Customize the appearance of your Magento 1.4 e-commerce store with Magento's powerful theming engine by changing Magento templates, skin files, and layout files
- Change the basics of your Magento theme, from the logo of your store to the color scheme of your theme
- Integrate popular social media aspects such as Twitter and Facebook into your Magento store

Facebook (http://www.facebook.com) is a social networking website that allows people to add each other as 'friends' and to send messages and share content. As with Twitter, there are two options for integrating Facebook with your Magento store:

- Adding a 'Like' button to your store's product pages to allow your customers to show their appreciation for individual products on your store.
- Integrating a widget of the latest news from your store's Facebook profile.

Adding a 'Like' button to your Magento store's product pages

The Facebook 'Like' button allows Facebook users to show that they approve of a particular web page, and you can put this to use on your Magento store.

Getting the 'Like' button markup

To get the markup required for your store's 'Like' button, go to the Facebook Developers website at http://developers.facebook.com/docs/reference/plugins/like. Fill in the form below the description text with relevant values, leaving the URL to like field as URLTOLIKE for now, and setting the Width to 200. Click on the Get Code button at the bottom of the form and then copy the code that is presented in the iframe field. The generated markup should look like the following:

<iframe src="http://www.facebook.com/plugins/like.php?href=URLTOLIKE&amp;layout=standard&amp;show_faces=true&amp;width=200&amp;action=like&amp;colorscheme=light&amp;height=80"
    scrolling="no" frameborder="0"
    style="border:none; overflow:hidden; width:200px; height:80px;"
    allowTransparency="true">
</iframe>

You now need to replace the URLTOLIKE in the previous markup with the URL of the current page in your Magento store. The PHP required to do this in Magento looks like the following:

<?php $currentUrl = $this->helper('core/url')->getCurrentUrl(); ?>

The new 'Like' button markup for your Magento store should now look like the following:

<iframe src="http://www.facebook.com/plugins/like.php?href=<?php echo $this->helper('core/url')->getCurrentUrl(); ?>&amp;layout=standard&amp;show_faces=true&amp;width=200&amp;action=like&amp;colorscheme=light&amp;height=80"
    scrolling="no" frameborder="0"
    style="border:none; overflow:hidden; width:200px; height:80px;"
    allowTransparency="true">
</iframe>

Open your theme's view.phtml file in the /app/design/frontend/default/m2/template/catalog/product directory and locate the lines that read:

<div class="std"><?php echo $_helper->productAttribute($_product, nl2br($_product->getShortDescription()), 'short_description') ?>
</div></div>

Insert the code generated by Facebook here, so that it now reads the following:

<div class="std"><?php echo $_helper->productAttribute($_product, nl2br($_product->getShortDescription()), 'short_description') ?>
</div>
<iframe src="http://www.facebook.com/plugins/like.php?href=<?php echo $this->helper('core/url')->getCurrentUrl();?>&amp;layout=standard&amp;show_faces=true&amp;width=200&amp;action=like&amp;colorscheme=light&amp;height=80"
    scrolling="no" frameborder="0"
    style="border:none; overflow:hidden; width:200px; height:80px;"
    allowTransparency="true">
</iframe>
</div>

Save and upload this file back to your Magento installation and then visit a product page within your store to see the button appear below the brief description of the product. That's it, your product pages can now be liked on Facebook!


Working with Entities in Google Web Toolkit 2

Packt
19 Jan 2011
9 min read
  Google Web Toolkit 2 Application Development Cookbook Over 70 simple but incredibly effective practical recipes to develop web applications using GWT with JPA , MySQL and i Report Create impressive, complex browser-based web applications with GWT 2 Learn the most effective ways to create reports with parameters, variables, and subreports using iReport Create Swing-like web-based GUIs using the Ext GWT class library Develop applications using browser quirks, Javascript,HTML scriplets from scratch Part of Packt's Cookbook series: Each recipe is a carefully organized sequence of instructions to complete the task as efficiently as possible         Read more about this book       (For more resources on GWT, see here.) Finding an entity In this recipe, we are going to write the code to find an entity. From the client side, the ID of the entity will be passed to the server; the server will find the entity in the database using the JPA controller class, and then return the entity to the client in order to display it. How to do it... Declare the following method in the GWTService interface: public BranchDTO findBranch(int branchId); Declare the asynchronous version of the above method in GWTServiceAsync interface public void findBranch(int branchId, AsyncCallback<BranchDTO> asyncCallback); Implement this method in GWTServiceImpl class @Override public BranchDTO findBranch(int branchId) { Branch branch=branchJpaController.findBranch(branchId); BranchDTO branchDTO=null; if(branch!=null) { branchDTO=new BranchDTO(); branchDTO.setBranchId(branch.getBranchId()); branchDTO.setName(branch.getName()); branchDTO.setLocation(branch.getLocation()); } return branchDTO; } Create a callback instance in client side (BranchForm in this case) to call this method as shown in the following code: final AsyncCallback<BranchDTO> callbackFind = new AsyncCallback<BranchDTO>() { @Override public void onFailure(Throwable caught) { MessageBox messageBox = new MessageBox(); messageBox.setMessage("An error occured! Cannot complete the operation"); messageBox.show(); clear(); } @Override public void onSuccess(BranchDTO result) { branchDTO=result; if(result!=null) { branchIdField.setValue(""+branchDTO.getBranchId()); nameField.setValue(branchDTO.getName()); locationField.setValue(branchDTO.getLocation()); } else { MessageBox messageBox = new MessageBox(); messageBox.setMessage("No such Branch found"); messageBox.show(); clear(); } } }; Write the event-handling code for the find button as follows: findButton.addSelectionListener(new SelectionListener<ButtonEvent>() { @Override public void componentSelected(ButtonEvent ce) { MessageBox inputBox = MessageBox.prompt("Input", "Enter the Branch ID"); inputBox.addCallback(new Listener<MessageBoxEvent>() { public void handleEvent(MessageBoxEvent be) { int branchId = Integer.parseInt(be.getValue()); ((GWTServiceAsync)GWT.create(GWTService.class)). findBranch(branchId,callbackFind); } }); } }); How it works... Here, the steps for calling the RPC method are the same as we had done for the add/save operation. The only difference is the type of the result we have received from the server. We have passed the int branch ID and have received the complete BrachDTO object, from which the values are shown in the branch form. Updating an entity In this recipe, we are going to write the code to update an entity. The client will transfer the DTO of updated object, and the server will update the entity in the database using the JPA controller class. How to do it... 
Declare the following method in the GWTService interface: public boolean updateBranch(BranchDTO branchDTO); Declare the asynchronous version of this method in the GWTServiceAsync interface: public void updateBranch(BranchDTO branchDTO, AsyncCallback<java.lang.Boolean> asyncCallback); Implement the method in the GWTServiceImpl class: @Override public boolean updateBranch(BranchDTO branchDTO) { boolean updated=false; try { branchJpaController.edit(new Branch(branchDTO)); updated=true; } catch (IllegalOrphanException ex) { Logger.getLogger(GWTServiceImpl.class.getName()). log(Level.SEVERE, null, ex); } catch (NonexistentEntityException ex) { Logger.getLogger(GWTServiceImpl.class.getName()). log(Level.SEVERE, null, ex); } catch (Exception ex) { Logger.getLogger(GWTServiceImpl.class.getName()). log(Level.SEVERE, null, ex); } return updated; } Create a callback instance for this method in the client side (BranchForm in this case, if it is not created yet): final AsyncCallback<Boolean> callback = new AsyncCallback<Boolean>() { MessageBox messageBox = new MessageBox(); @Override public void onFailure(Throwable caught) { messageBox.setMessage("An error occured! Cannot complete the operation"); messageBox.show(); } @Override public void onSuccess(Boolean result) { if (result) { messageBox.setMessage("Operation completed successfully"); } else { messageBox.setMessage("An error occured! Cannot complete the operation"); } messageBox.show(); } }; Write the event handle code for the update button: updateButton.addSelectionListener(new SelectionListener<ButtonEvent>() { @Override public void componentSelected(ButtonEvent ce) { branchDTO.setName(nameField.getValue()); branchDTO.setLocation(locationField.getValue()); ((GWTServiceAsync)GWT.create(GWTService.class)). updateBranch(branchDTO,callback); clear(); } }); How it works... This operation is also almost the same as the add operation shown previously. The difference here is the method of controller class. The method edit of the controller class is used to update an entity. Deleting an entity In this recipe, we are going to write the code to delete an entity. The client will transfer the ID of the object, and the server will delete the entity from the database using the JPA controller class. How to do it... Declare the following method in the GWTService interface public boolean deleteBranch(int branchId); Declare the asynchronous version of this method in GWTServiceAsync interface public void deleteBranch(int branchId, AsyncCallback<java.lang.Boolean> asyncCallback); Implement the method in GWTServiceImpl class @Override public boolean deleteBranch(int branchId) { boolean deleted=false; try { branchJpaController.destroy(branchId); deleted=true; } catch (IllegalOrphanException ex) { Logger.getLogger(GWTServiceImpl.class.getName()). log(Level.SEVERE, null, ex); } catch (NonexistentEntityException ex) { Logger.getLogger(GWTServiceImpl.class.getName()). log(Level.SEVERE, null, ex); } return deleted; } Create a callback instance for this method in the client side (BranchForm in this case, if it is not created yet): final AsyncCallback<Boolean> callback = new AsyncCallback<Boolean>() { MessageBox messageBox = new MessageBox(); @Override public void onFailure(Throwable caught) { messageBox.setMessage("An error occured! Cannot complete the operation"); messageBox.show(); } @Override public void onSuccess(Boolean result) { if (result) { messageBox.setMessage("Operation completed successfully"); } else { messageBox.setMessage("An error occured! 
Cannot complete the operation"); } messageBox.show(); } }; Write the event handling code for the delete button: deleteButton.addSelectionListener(new SelectionListener<ButtonEvent>() { @Override public void componentSelected(ButtonEvent ce) { ((GWTServiceAsync)GWT.create(GWTService.class)). deleteBranch(branchDTO.getBranchId(),callback); clear(); } }); Managing a list for RPC Sometimes, we need to transfer a list of objects as java.util.List (or a collection) back and forth between the server and the client. We already know from the preceding recipes that the JPA entity class objects are not transferable directly using RPC. Because of the same reason, any list of the JPA entity class is not transferable directly. To transfer java.util.List using RPC, the list must contain objects from DTO classes only. In this recipe, we will see how we can manage a list for RPC. In our scenario, we can consider two classes—Customer and Sales. The association between these two classes is that one customer makes zero or more sales and one sale is made by one customer. Because of such an association, the customer class contains a list of sales, and the sales class contains a single instance of customer class. For example, we want to transfer the full customer object with the list of sales made by this customer. Let's see how we can make that possible. How to do it... Create DTO classes for Customer and Sales (CustomerDTO and SalesDTO, respectively). In the following table, the required changes in data types are shown for the entity and DTO class attributes. The list in the DTO class contains objects of only the DTO class; on the other hand, the list of the entity class contains objects of entity class. Define the following constructor in the Customer entity class: public Customer(CustomerDTO customerDTO) { setCustomerNo(customerDTO.getCustomerNo()); setName(customerDTO.getName()); setAddress(customerDTO.getAddress()); setContactNo(customerDTO.getContactNo()); List<SalesDTO> salesDTOList=customerDTO.getSalesList(); salesList = new ArrayList<Sales>(); for(int i=0;i<salesDTOList.size();i++) { SalesDTO salesDTO=salesDTOList.get(i); Sales sales=new Sales(salesDTO); salesList.add(sales); } } Define the following constructor in the Sales entity class: public Sales(SalesDTO salesDTO) { setSalesNo(salesDTO.getSalesNo()); setSalesDate(salesDTO.getSalesDate()); setCustomer(new Customer(salesDTO.getCustomer())); // there's more but not relevant for this recipe } How it works... Now in the server side, the entity classes, Customer and Sales, will be used, and in the client side, CustomerDTO and SalesDTO, will be used. Constructors with DTO class type argument are defined for the mapping between entity class and DTO class. But here, the addition is the loop used for creating the list. From the CustomerDTO class, we get a list of SalesDTO. The loop gets one SalesDTO from the list, converts it to Sales, and adds it in the Sales list—that's all. Authenticating a user through username and password In this recipe, we are going to create the necessary methods to authenticate a user through a login process. Getting ready Create the DTO class for the entity class Users. How to do it... 
Declare the following method in the GWTService interface: public UsersDTO login(String username,String password); Declare the following method in the GWTServiceAsync interface: public void login(String username, String password, AsyncCallback<UsersDTO> asyncCallback); Implement the method in the GWTServiceImpl class: @Override public UsersDTO login(String username, String password) { UsersDTO userDTO = null; UsersJpaController usersJpaController = new UsersJpaController(); Users user = (Users) usersJpaController.findUsers(username); if (user != null) { if (user.getPassword().equals(password)) { userDTO=new UsersDTO(); userDTO.setUserName(user.getUserName()); userDTO.setPassword(user.getPassword()); EmployeeDTO employeeDTO= new EmployeeDTO(user.getEmployee().getEmployeeId()); employeeDTO.setName(user.getEmployee().getName()); userDTO.setEmployeeDTO(employeeDTO); } } return userDTO; } How it works... A username and password are passed to the method. An object of the UsersJpaController class is created to find the Users object based on the given username. If the find method returns null, it means that no such user exists. Otherwise, the password of the Users object is compared with the given password. If both the passwords match, a UsersDTO object is constructed and returned. The client will call this method during the login process. If the client gets null, the client should handle it accordingly, as the username/password is not correct. If it is not null, the user is authenticated. Summary In this article we how we can manage entities in GWT RPC. Specifically, we covered the following: Finding an entity Updating an entity Deleting an entity Managing a list for RPC Authenticating a user through username and password Further resources on this subject: Google Web Toolkit 2: Creating Page Layout [Article] Communicating with Server using Google Web Toolkit RPC [Article] Password Strength Checker in Google Web Toolkit and AJAX [Article] Google Web Toolkit GWT Java AJAX Programming [Book] Google Web Toolkit 2 Application Development Cookbook [Book]

Communicating with Server using Google Web Toolkit RPC

Packt
19 Jan 2011
5 min read
  Google Web Toolkit 2 Application Development Cookbook Over 70 simple but incredibly effective practical recipes to develop web applications using GWT with JPA , MySQL and i Report Create impressive, complex browser-based web applications with GWT 2 Learn the most effective ways to create reports with parameters, variables, and subreports using iReport Create Swing-like web-based GUIs using the Ext GWT class library Develop applications using browser quirks, Javascript,HTML scriplets from scratch Part of Packt's Cookbook series: Each recipe is a carefully organized sequence of instructions to complete the task as efficiently as possible  The Graphical User Interface (GUI) resides in the client side of the application. This article introduces the communication between the server and the client, where the client (GUI) will send a request to the server, and the server will respond accordingly. In GWT, the interaction between the server and the client is made through the RPC mechanism. RPC stands for Remote Procedure Call. The concept is that there are some methods in the server side, which are called by the client at a remote location. The client calls the methods by passing the necessary arguments, and the server processes them, and then returns back the result to the client. GWT RPC allows the server and the client to pass Java objects back and forth. RPC has the following steps: Defining the GWTService interface: Not all the methods of the server are called by the client. The methods which are called remotely by the client are defined in an interface, which is called GWTService. Defining the GWTServiceAsync interface: Based on the GWTService interface, another interface is defined, which is actually an asynchronous version of the GWTService interface. By calling the asynchronous method, the caller (the client) is not blocked until the method completes the operation. Implementing the GWTService interface: A class is created where the abstract method of the GWTService interface is overridden. Calling the methods: The client calls the remote method to get the server response. Creating DTO classes In this application, the server and the client will pass Java objects back and forth for the operation. For example, the BranchForm will request the server to persist a Branch object, where the Branch object is created and passed to server by the client, and the server persists the object in the server database. In another example, the client will pass the Branch ID (as an int), the server will find the particular Branch information, and then send the Branch object to the client to be displayed in the branch form. So, both the server and client need to send or receive Java objects. We have already created the JPA entity classes and the JPA controller classes to manage the entity using the Entity Manager. But the JPA class objects are not transferable over the network using the RPC. JPA classes will just be used by the server on the server side. For the client side (to send and receive objects), DTO classes are used. DTO stands for Data Transfer Object. DTO is simply a transfer object which encapsulates the business data and transfers it across the network. Getting ready Create a package com.packtpub.client.dto, and create all the DTO classes in this package. How to do it... The steps required to complete the task are as follows: Create a class BranchDTO that implements the Serializable interface: public class BranchDTO implements Serializable Declare the attributes. 
You can copy the attribute declaration from the entity classes. But in this case, do not include the annotations: private Integer branchId; private String name; private String location Define the constructors, as shown in the following code: public BranchDTO(Integer branchId, String name, String location) { this.branchId = branchId; this.name = name; this.location = location; } public BranchDTO(Integer branchId, String name) { this.branchId = branchId; this.name = name; } public BranchDTO(Integer branchId) { this.branchId = branchId; } public BranchDTO() { } To generate the constructors automatically in NetBeans, right-click on the code, select Insert Code | Constructor, and then click on Generate after selecting the attribute(s). Define the getter and setter: public Integer getBranchId() { return branchId; } public void setBranchId(Integer branchId) { this.branchId = branchId; } public String getLocation() { return location; } public void setLocation(String location) { this.location = location; } public String getName() { return name; } public void setName(String name) { this.name = name; } To generate the setter and getter automatically in NetBeans, right-click on the code, select Insert Code | Getter and Setter…, and then click on Generate after selecting the attribute(s). Mapping entity classes and DTOs In RPC, the client will send and receive DTOs, but the server needs pure JPA objects to be used by the Entity Manager. That's why, we need to transform from DTO to JPA entity class and vice versa. In this recipe, we will learn how to map the entity class and DTO. Getting ready Create the entity and DTO classes. How to do it... Open the Branch entity class and define a constructor with a parameter of type BranchDTO. The constructor gets the properties from the DTO and sets them in its own properties: public Branch(BranchDTO branchDTO) { setBranchId(branchDTO.getBranchId()); setName(branchDTO.getName()); setLocation(branchDTO.getLocation()); } This constructor will be used to create the Branch entity class object from the BranchDTO object. In the same way, the BranchDTO object is constructed from the entity class object, but in this case, the constructor is not defined. Instead, it is done where it is required to construct DTO from the entity class. There's more... Some third-party libraries are available for automatically mapping entity class and DTO, such as Dozer and Gilead. For details, you may visit http://dozer.sourceforge.net/ and http://noon.gilead.free.fr/gilead/. Creating the GWT RPC Service In this recipe, we are going to create the GWTService interface, which will contain an abstract method to add a Branch object to the database. Getting ready Create the Branch entity class and the DTO class.  
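The remaining steps of this recipe are not included in this excerpt, but following the same pattern used in the find, update, and delete recipes shown earlier (a synchronous GWTService method, its asynchronous twin, and a server-side implementation that delegates to the JPA controller), a minimal sketch of the add operation might look like the following. The addBranch method name and the servlet path are assumptions for illustration rather than the book's exact code:

// GWTService.java - the synchronous interface
import com.google.gwt.user.client.rpc.RemoteService;
import com.google.gwt.user.client.rpc.RemoteServiceRelativePath;

// The path value is an assumption; it must match the servlet mapping in web.xml
@RemoteServiceRelativePath("GWTService")
public interface GWTService extends RemoteService {
    public boolean addBranch(BranchDTO branchDTO);
}

// GWTServiceAsync.java - the asynchronous twin used by the client
import com.google.gwt.user.client.rpc.AsyncCallback;

public interface GWTServiceAsync {
    public void addBranch(BranchDTO branchDTO, AsyncCallback<Boolean> asyncCallback);
}

// GWTServiceImpl.java - server-side implementation backed by the JPA controller
import com.google.gwt.user.server.rpc.RemoteServiceServlet;
import java.util.logging.Level;
import java.util.logging.Logger;

public class GWTServiceImpl extends RemoteServiceServlet implements GWTService {

    private BranchJpaController branchJpaController = new BranchJpaController();

    @Override
    public boolean addBranch(BranchDTO branchDTO) {
        boolean added = false;
        try {
            // The Branch(BranchDTO) constructor defined above maps the DTO onto the entity
            branchJpaController.create(new Branch(branchDTO));
            added = true;
        } catch (Exception ex) {
            Logger.getLogger(GWTServiceImpl.class.getName()).log(Level.SEVERE, null, ex);
        }
        return added;
    }
}

On the client, the call then follows the same shape as the other recipes: create the service proxy with GWT.create(GWTService.class), pass a BranchDTO built from the form fields, and handle the Boolean result in an AsyncCallback.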


Tinkering Around in Django JavaScript Integration

Packt
18 Jan 2011
9 min read
Minor tweaks and bugfixes Good tinkering can be a process that begins with tweaks and bugfixes, and snowballs from there. Let's begin with some of the smaller tweaks and bugfixes before tinkering further. Setting a default name of "(Insert name here)" Most of the fields on an Entity default to blank, which is in general appropriate. However, this means that there is a zero-width link for any search result which has not had a name set. If a user fills out the Entity's name before navigating away from that page, everything is fine, but it is a very suspicious assumption that all users will magically use our software in whatever fashion would be most convenient for our implementation. So, instead, we set a default name of "(Insert name here)" in the definition of an Entity, in models.py: name = models.TextField(blank = True, default = u'(Insert name here)') Eliminating Borg behavior One variant on the classic Singleton pattern in Gang of Four is the Borg pattern, where arbitrarily many instances of a Borg class may exist, but they share the same dictionary, so that if you set an attribute on one of them, you set the attribute on all of them. At present we have a bug, which is that our views pull all available instances. We need to specify something different. We update the end of ajax_profile(), including a slot for time zones to be used later in this article, to: return render_to_response(u'profile_internal.html', { u'entities': directory.models.Entity.objects.filter( is_invisible = False).order_by(u'name'), u'entity': entity, u'first_stati': directory.models.Status.objects.filter( entity = id).order_by( u'-datetime')[:directory.settings.INITIAL_STATI], u'gps': gps, u'gps_url': gps_url, u'id': int(id), u'emails': directory.models.Email.objects.filter( entity = entity, is_invisible = False), u'phones': directory.models.Phone.objects.filter( entity = entity, is_invisible = False), u'second_stati': directory.models.Status.objects.filter( entity = id).order_by( u'-datetime')[directory.settings.INITIAL_STATI:], u'tags': directory.models.Tag.objects.filter(entity = entity, is_invisible = False).order_by(u'text'), u'time_zones': directory.models.TIME_ZONE_CHOICES, u'urls': directory.models.URL.objects.filter(entity = entity, is_invisible = False), }) Likewise, we update homepage(): profile = template.render(Context( { u'entities': directory.models.Entity.objects.filter( is_invisible = False), u'entity': entity, u'first_stati': directory.models.Status.objects.filter( entity = id).order_by( u'-datetime')[:directory.settings.INITIAL_STATI], u'gps': gps, u'gps_url': gps_url, u'id': int(id), u'emails': directory.models.Email.objects.filter( entity = entity, is_invisible = False), u'phones': directory.models.Phone.objects.filter( entity = entity, is_invisible = False), u'query': urllib.quote(query), u'second_stati':directory.models.Status.objects.filter( entity = id).order_by( u'-datetime')[directory.settings.INITIAL_STATI:], u'time_zones': directory.models.TIME_ZONE_CHOICES, u'tags': directory.models.Tag.objects.filter( entity = entity, is_invisible = False).order_by(u'text'), u'urls': directory.models.URL.objects.filter( entity = entity, is_invisible = False), })) Confusing jQuery's load() with html() If we have failed to load a profile in the main search.html template, we had a call to load(""). What we needed was: else { $("#profile").html(""); } $("#profile").load("") loads a copy of the current page into the div named profile. 
We can improve on this slightly to "blank" contents that include the default header:

else {
    $("#profile").html("<h2>People, etc.</h2>");
}

Preventing display of deleted instances

In our system, enabling undo means that there can be instances (Entities, Emails, URLs, and so on) which have been deleted but are still available for undo. We have implemented deletion by setting an is_invisible flag to True, and we also need to check before displaying to avoid puzzling behavior like a user deleting an Entity, being told Your change has been saved, and then seeing the Entity's profile displayed exactly as before. We accomplish this by specifying, for a QuerySet, .filter(is_invisible = False) where we might earlier have specified .all(), or by adding is_invisible = False to the conditions of a pre-existing filter; for instance:

def ajax_download_model(request, model):
    if directory.settings.SHOULD_DOWNLOAD_DIRECTORY:
        json_serializer = serializers.get_serializer(u'json')()
        response = HttpResponse(mimetype = u'application/json')
        if model == u'Entity':
            json_serializer.serialize(getattr(directory.models,
                model).objects.filter(
                is_invisible = False).order_by(u'name'),
                ensure_ascii = False, stream = response)
        else:
            json_serializer.serialize(getattr(directory.models,
                model).objects.filter(is_invisible = False),
                ensure_ascii = False, stream = response)
        return response
    else:
        return HttpResponse(u'This feature has been turned off.')

In the main view for the profile, we add a check in the beginning so that a (basically) blank result page is shown:

def ajax_profile(request, id):
    entity = directory.models.Entity.objects.filter(id = int(id))[0]
    if entity.is_invisible:
        return HttpResponse(u'<h2>People, etc.</h2>')

One nicety we provide is usually loading a profile on mouseover for its area of the search result page. This means that users can more quickly and easily scan through drilldown pages in search of the right match; however, there is a performance gotcha in simply specifying an onmouseover handler. If you specify an onmouseover for a containing div, you may get a separate event call every time the user hovers over an element contained in the div, easily getting 3+ calls if a user moves the mouse over to the link. That could be annoying to people on a VPN connection if it means that they are getting the network hits for numerous needless profile loads. To cut back on this, we define an initially null variable for the last profile moused over:

PHOTO_DIRECTORY.last_mouseover_profile = null;

Then we call the following function in the containing div element's onmouseover:

PHOTO_DIRECTORY.mouseover_profile = function(profile) {
    if (profile != PHOTO_DIRECTORY.last_mouseover_profile) {
        PHOTO_DIRECTORY.load_profile(profile);
        PHOTO_DIRECTORY.last_mouseover_profile = profile;
        PHOTO_DIRECTORY.register_editables();
    }
}

The relevant code from search_internal.html is as follows:

<div class="search_result"
    onmouseover="PHOTO_DIRECTORY.mouseover_profile({{ result.id }});"
    onclick="PHOTO_DIRECTORY.click_profile({{ result.id }});">

We usually, but not always, enable this mouseover functionality; not always, because it works out to annoying behavior if a person is trying to edit, does a drag select, mouses over the profile area, and reloads a fresh, non-edited profile.
Here we edit the Jeditable plugin's source code and add a few lines; we also perform a second check to see whether the user is logged in, and offer a login form if they are not:

/* if element is empty add something clickable (if requested) */
if (!$.trim($(this).html())) {
    $(this).html(settings.placeholder);
}

$(this).bind(settings.event, function(e) {
    $("div").removeAttr("onmouseover");
    if (!PHOTO_DIRECTORY.check_login()) {
        PHOTO_DIRECTORY.offer_login();
    }
    /* abort if disabled for this element */
    if (true === $(this).data('disabled.editable')) {
        return;
    }

For Jeditable-enabled elements, we can override the placeholder for an empty element at method call, but the default placeholder is cleared when editing begins; overridden placeholders aren't. We override the placeholder with something that gives us a little more control and styling freedom:

// publicly accessible defaults
$.fn.editable.defaults = {
    name       : 'value',
    id         : 'id',
    type       : 'text',
    width      : 'auto',
    height     : 'auto',
    event      : 'click.editable',
    onblur     : 'cancel',
    loadtype   : 'GET',
    loadtext   : 'Loading...',
    placeholder: '<span class="placeholder"> Click to add.</span>',
    loaddata   : {},
    submitdata : {},
    ajaxoptions: {}
};

All of this is added to the file jquery.jeditable.js. We now have, as well as an @ajax_login_required decorator, an @ajax_permission_required decorator. We test for the not_permitted flag it returns in the default postprocessor specified in $.ajaxSetup() for the complete handler. Because Jeditable will place the returned data inline, we also refresh the profile. This occurs after the code to check for an undoable edit and offer an undo option to the user.

complete: function(XMLHttpRequest, textStatus) {
    var data = XMLHttpRequest.responseText;
    var regular_expression = new RegExp("<!-" + "-# (\\d+) #-" + "->");
    if (data.match(regular_expression)) {
        var match = regular_expression.exec(data);
        PHOTO_DIRECTORY.undo_notification(
            "Your changes have been saved. " +
            "<a href='JavaScript:PHOTO_DIRECTORY.undo(" +
            match[1] + ")'>Undo</a>");
    } else if (data == '{"not_permitted": true}' ||
        data == "{'not_permitted': true}") {
        PHOTO_DIRECTORY.send_notification(
            "We are sorry, but we cannot allow you " +
            "to do that.");
        PHOTO_DIRECTORY.reload_profile();
    }
},

Note that we have tried to produce the least painful yet clear message we can: we avoid both saying "You shouldn't be doing that," and a terse, "bad movie computer"-style message of "Access denied" or "Permission denied." We also removed from that method code to call offer_login() if a call came back not authenticated. This looked good on paper, but our code was making Ajax calls soon enough that the user would get an immediate, unprovoked, modal login dialog on loading the page.

Adding a favicon.ico

In terms of minor tweaks, a visually distinct favicon.ico (http://softpedia.com/ is one of many free sources of favicon.ico files, or the favicon generator at http://tools.dynamicdrive.com/favicon/ which can take an image like your company logo as the basis for an icon) helps your tabs look different at a glance from other tabs. Save a good, simple favicon in static/favicon.ico. The icon may not show up immediately when you refresh, but a good favicon makes it slightly easier for visitors to manage your pages among others that they have to deal with. It shows up in the address bar, bookmarks, and possibly other places. This brings us to the end of the minor tweaks; let us look at two slightly larger additions to the directory.


Getting Started with the Alfresco Records Management Module

Packt
18 Jan 2011
7 min read
  Alfresco 3 Records Management Comply with regulations and secure your organization’s records with Alfresco Records Management. Successfully implement your records program using Alfresco Records Management, fully certified for DoD-5015.2 compliance The first and only book to focus exclusively on Alfresco Records Management Step-by-step instructions describe how to identify records, organize records, and manage records to comply with regulatory requirements Learn in detail about the software internals to get a jump-start on performing customizations Appendix     The Alfresco stack Alfresco software was designed for enterprise, and as such, supports a variety of different stack elements. Supported Alfresco stack elements include some of the most widely used operating systems, relational databases, and application servers. The core infrastructure of Alfresco is built on Java. This core provides the flexibility for the server to run on a variety of operating systems, like Microsoft Windows, Linux, Mac OS, and Sun Solaris. The use of Hibernate allows Alfresco to map objects and data from Java into almost any relational database. The databases that the Enterprise version of Alfresco software is certified to work with include Oracle, Microsoft SQL Server, MySQL, PostgresSQL, and DB2. Alfresco also runs on a variety of Application Servers that include Tomcat, JBoss, WebLogic, and WebSphere. Other relational databases and application servers may work as well, although they have not been explicitly tested and are also not supported. Details of which Alfresco stack elements are supported can be found on the Alfresco website: http://www.alfresco.com/services/subscription/supported-platforms/3-x/. Depending on the target deployment environment, different elements of the Alfresco stack may be favored over others. The exact configuration details for setting up the various stack element options is not discussed in this book. You can find ample discussion and details on the Alfresco wiki on how to configure, set up, and change the different stack elements. The version-specific installation and setup guides provided by Alfresco also contain very detailed information. The example description and screenshots given in this article are based on the Windows operating system. The details may differ for other operating systems, but you will find that the basic steps are very similar. Additional information on the internals of Alfresco software can be found on the Alfresco wiki at http://wiki.alfresco.com/wiki/Main_Page. Alfresco software As a first step to getting Alfresco Records Management up and running, we need to first acquire the software. Whether you plan to use either the Enterprise or the Community version of Alfresco, you should note that the Records Management module was not available until late 2009. The Records Management module was first certified with the 3.2 release of Alfresco Share. The first Enterprise version of Alfresco that supported Records Management was version 3.2R, which was released in February 2010. Make sure the software versions are compatible It is important to note that there was an early version of Records Management that was built for the Alfresco JSF-based Explorer client. That version was not certified for DoD 5015.2 compliance and is no longer supported by Alfresco. In fact, the Alfresco Explorer version of Records Management is not compatible with the Share version of Records Management, and trying to use the two implementations together can result in corrupt data. 
It is also important to make sure that the version of the Records Management module that you use matches the version of the base Alfresco Share software. For example, trying to use the Enterprise version of Records Management on a Community install of Alfresco will lead to problems, even if the version numbers are the same. The 3.3 Enterprise version of Records Management, as another example, is also not fully compatible with the 3.2R Enterprise version of Alfresco software. Downloading the Alfresco software The easiest way to get Alfresco Records Management up and running is by doing a fresh install of the latest available Alfresco software. Alfresco Community The Community version of Alfresco is a great place to get started. Especially if you are just interested in evaluating if Alfresco software meets your needs, and with no license fees to worry about, there's really nothing to lose in going this route. Since Alfresco Community software is constantly in the "in development" state and is not as rigorously tested, it tends to not be as stable as the Enterprise version. But, in terms of the Records Management module for the 3.2+ version releases of the software, the Community implementation is feature-complete. This means that the same Records Management features in the Enterprise version are also found in the Community version. The caveat with using the Community version is that support is only available from the Alfresco community, should you run across a problem. The Enterprise release also includes support from the Alfresco support team and may have bug fixes or patches not yet available for the community release. Also of note is the fact that there are other repository features beyond those of Records Management features, especially in the area of scalability, which are available only with the Enterprise release. Building from source code It is possible to get the most recent version of the Alfresco Community software by getting a snapshot copy of the source code from the publicly accessible Alfresco Subversion source code repository. A version of the software can be built from a snapshot of the source code taken from there. But unless you are anxiously waiting for a new Alfresco feature or bug fix and need to get your hands immediately on a build with that new code included as part of it, for most people, building from source is probably not the route to go. Building from source code can be time consuming and error prone. The final software version that you build can often be very buggy or unstable due to code that has been checked-in prematurely or changes that might be in the process of being merged into the Community release, but which weren't completely checked-in at the time you updated your snapshot of the code base. If you do decide that you'd like to try to build Alfresco software from source code, details on how to get set up to do that can be found on the Alfresco wiki: http://wiki.alfresco.com/wiki/Alfresco_SVN_Development_Environment. Download a Community version snapshot build Builds of snapshots of the Alfresco Community source code are periodically taken and made available for download. Using a pre-built Community version of Alfresco software saves you much hassle and headaches from not having to do the build from scratch. While not thoroughly tested, the snapshot Community builds have been tested sufficiently so that they tend to be stable enough to see most of the functionality available for the release, although not everything may be working completely. 
Links to the most recent Alfresco Community version builds can be found on the Alfresco wiki: http://wiki.alfresco.com/wiki/Download_Community_Edition. Alfresco Enterprise The alternative to using Alfresco open source Community software is the Enterprise version of Alfresco. For most organizations, the fully certified Enterprise version of Alfresco software is the recommended choice. The Enterprise version of Alfresco software has been thoroughly tested and is fully supported. Alfresco customers and partners have access to the most recent Enterprise software from the Alfresco Network site: http://network.alfresco.com/. Trial copies of Alfresco Enterprise software can be downloaded from the Alfresco site: http://www.alfresco.com/try/. Time-limited access to on-demand instances of Alfresco software are also available and are a great way to get a good understanding of how Alfresco software works.


Facebook: Accessing Graph API

Packt
18 Jan 2011
8 min read
Facebook Graph API Development with Flash Build social Flash applications fully integrated with the Facebook Graph API Build your own interactive applications and games that integrate with Facebook Add social features to your AS3 projects without having to build a new social network from scratch Learn how to retrieve information from Facebook's database A hands-on guide with step-by-step instructions and clear explanation that encourages experimentation and play

Accessing the Graph API through a Browser

We'll dive right in by taking a look at how the Graph API represents the information from a public Page. When I talk about a Page with a capital P, I don't just mean any web page within the Facebook site; I'm referring to a specific type of page, also known as a public profile. Every Facebook user has their own personal profile; you can see yours by logging in to Facebook and clicking on the "Profile" link in the navigation bar at the top of the site. Public profiles look similar, but are designed to be used by businesses, bands, products, organizations, and public figures, as a way of having a presence on Facebook. This means that many people have both a personal profile and a public profile. For example, Mark Zuckerberg, the CEO of Facebook, has a personal profile at http://www.facebook.com/zuck and a public profile (a Page) at http://www.facebook.com/markzuckerberg. This way, he can use his personal profile to keep in touch with his friends and family, while using his public profile to connect with his fans and supporters. There is a second type of Page: a Community Page. Again, these look very similar to personal profiles; the difference is that these are based on topics, experiences, and causes, rather than entities. Also, they automatically retrieve information about the topic from Wikipedia, where relevant, and contain a live feed of wall posts talking about the topic. All this can feel a little confusing – don't worry about it! Once you start using it, it all makes sense.

Time for action – loading a Page

Browse to http://www.facebook.com/PacktPub to load Packt Publishing's Facebook Page. You'll see a list of recent wall posts, an Info tab, some photo albums (mostly containing book covers), a profile picture, and a list of fans and links. That's how website users view the information. How will our code "see" it? Take a look at how the Graph API represents Packt Publishing's Page by pointing your web browser at https://graph.facebook.com/PacktPub. This is called a Graph URL – note that it's the same URL as the Page itself, but with a secure https connection, and using the graph subdomain, rather than www. What you'll see is as follows:

{
  "id": "204603129458",
  "name": "Packt Publishing",
  "picture": "http://profile.ak.fbcdn.net/hprofile-ak-snc4/hs302.ash1/23274_204603129458_7460_s.jpg",
  "link": "http://www.facebook.com/PacktPub",
  "category": "Products_other",
  "username": "PacktPub",
  "company_overview": "Packt is a modern, IT focused book publisher, specializing in producing cutting-edge books for communities of developers, administrators, and newbies alike.\n\nPackt published its first book, Mastering phpMyAdmin for MySQL Management in April 2004.",
  "fan_count": 412
}

What just happened? You just fetched the Graph API's representation of the Packt Publishing Page in your browser. The Graph API is designed to be easy to pick up – practically self-documenting – and you can see that it's a success in that respect.
It's pretty clear that the previous data is a list of fields and their values. The one field that's perhaps not clear is id; this number is what Facebook uses internally to refer to the Page. This means Pages can have two IDs: the numeric one assigned automatically by Facebook, and an alphanumeric one chosen by the Page's owner. The two IDs are equivalent: if you browse to https://graph.facebook.com/204603129458, you'll see exactly the same data as if you browse to https://graph.facebook.com/PacktPub.

Have a go hero – exploring other objects

Of course, the Packt Publishing Page is not the only Page you can explore with the Graph API in your browser. Find some other Pages through the Facebook website in your browser, then, using the https://graph.facebook.com/id format, take a look at their Graph API representations. Do they have more information, or less? Next, move on to other types of Facebook objects: personal profiles, events, groups. For personal profiles, the id may be alphanumeric (if the person has signed up for a custom Facebook Username at http://www.facebook.com/username/), but in general the id will be numeric, and auto-assigned by Facebook when the user signed up. For certain types of objects (like photo albums), the value of id will not be obvious from the URL within the Facebook website. In some cases, you'll get an error message, like:

{
  "error": {
    "type": "OAuthAccessTokenException",
    "message": "An access token is required to request this resource."
  }
}

Accessing the Graph API through AS3

Now that you've got an idea of how easy it is to access and read Facebook data in a browser, we'll see how to fetch it in AS3.

Time for action – retrieving a Page's information in AS3

Set up the project. Check that the project compiles with no errors (there may be a few warnings, depending on your IDE). You should see a 640 x 480 px SWF, all white, with just three buttons in the top-left corner: Zoom In, Zoom Out, and Reset View. This project is the basis for a Rich Internet Application (RIA) that will be able to explore all of the information on Facebook using the Graph API. All the code for the UI is in place, just waiting for some Graph data to render. Our job is to write code to retrieve the data and pass it on to the renderers. I'm not going to break down the entire project and explain what every class does. What you need to know at the moment is that a single instance of the controllers.CustomGraphContainerController class is created when the project is initialized, and it is responsible for directing the flow of data to and from Facebook. It inherits some useful methods for this purpose from the controllers.GCController class; we'll make use of these later on. Open the CustomGraphContainerController class in your IDE. It can be found in src/controllers/CustomGraphContainerController.as, and should look like the listing below:

package controllers {
    import ui.GraphControlContainer;

    public class CustomGraphContainerController extends GCController {
        public function CustomGraphContainerController(a_graphControlContainer:GraphControlContainer) {
            super(a_graphControlContainer);
        }
    }
}

The first thing we'll do is grab the Graph API's representation of Packt Publishing's Page via a Graph URL, like we did using the web browser. For this we can use a URLLoader. The URLLoader and URLRequest classes are used together to download data from a URL. The data can be text, binary data, or URL-encoded variables.
The download is triggered by passing a URLRequest object, whose url property contains the requested URL, to the load() method of a URLLoader. Once the required data has finished downloading, the URLLoader dispatches a COMPLETE event. The data can then be retrieved from its data property. Modify CustomGraphContainerController.as like so (the highlighted lines are new):

package controllers {
    import flash.events.Event;
    import flash.net.URLLoader;
    import flash.net.URLRequest;
    import ui.GraphControlContainer;

    public class CustomGraphContainerController extends GCController {
        public function CustomGraphContainerController(a_graphControlContainer:GraphControlContainer) {
            super(a_graphControlContainer);
            var loader:URLLoader = new URLLoader();
            var request:URLRequest = new URLRequest();
            //Specify which Graph URL to load
            request.url = "https://graph.facebook.com/PacktPub";
            loader.addEventListener(Event.COMPLETE, onGraphDataLoadComplete);
            //Start the actual loading process
            loader.load(request);
        }

        private function onGraphDataLoadComplete(a_event:Event):void {
            var loader:URLLoader = a_event.target as URLLoader;
            //obtain whatever data was loaded, and trace it
            var graphData:String = loader.data;
            trace(graphData);
        }
    }
}

All we're doing here is downloading whatever information is at https://graph.facebook.com/PacktPub and tracing it to the output window. Test your project, and take a look at your output window. You should see the following data:

{"id":"204603129458","name":"Packt Publishing","picture":"http://profile.ak.fbcdn.net/hprofile-ak-snc4/hs302.ash1/23274_204603129458_7460_s.jpg","link":"http://www.facebook.com/PacktPub","category":"Products_other","username":"PacktPub","company_overview":"Packt is a modern, IT focused book publisher, specializing in producing cutting-edge books for communities of developers, administrators, and newbies alike.\n\nPackt published its first book, Mastering phpMyAdmin for MySQL Management in April 2004.","fan_count":412}

If you get an error, check that your code matches the previously mentioned code. If you see nothing in your output window, make sure that you are connected to the Internet. If you still don't see anything, it's possible that your security settings prevent you from accessing the Internet via Flash, so check those.
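At this point the response is still just a raw JSON string. If you want to work with it as an object right away, one option — a sketch only, which assumes the open source as3corelib library is linked into the project (the book may take a different route later) — is to decode it in the COMPLETE handler:

// At the top of the class file:
import com.adobe.serialization.json.JSON;

private function onGraphDataLoadComplete(a_event:Event):void {
    var loader:URLLoader = a_event.target as URLLoader;
    var graphData:String = loader.data;

    // Decode the JSON text into a plain AS3 object so fields can be read directly
    var page:Object = JSON.decode(graphData);
    trace(page.name);      // Packt Publishing
    trace(page.fan_count); // 412 at the time of writing
}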

Roles and Responsibilities for Records Management Implementation in Alfresco 3

Packt
17 Jan 2011
10 min read
  Alfresco 3 Records Management Comply with regulations and secure your organization’s records with Alfresco Records Management. Successfully implement your records program using Alfresco Records Management, fully certified for DoD-5015.2 compliance The first and only book to focus exclusively on Alfresco Records Management Step-by-step instructions describe how to identify records, organize records, and manage records to comply with regulatory requirements Learn in detail about the software internals to get a jump-start on performing customizations Appendix         Read more about this book       (For more resources on this subject, see here.) The steering committee To succeed, our Records Management program needs continued commitment from all levels of the organization. A good way to cultivate that commitment is by establishing a steering committee for the records program. From a high level, the steering committee will direct the program, set priorities for it, and assist in making decisions. The steering committee will provide the leadership to ensure that the program is adequately funded, staffed, properly prioritized with business objectives, and successfully implemented. Committee members should know the organization well and be in a position to be both able and willing to make decisions. Once the program is implemented, the steering committee should not be dissolved; it still will play an important function. It will continue to meet and oversee the Records Management program to make sure that it is properly maintained and updated. The Records Management system is not something that can simply be turned on and forgotten. The steering committee should meet regularly, track the progress of the implementation, keep abreast of changes in regulatory controls, and be proactive in addressing the needs of the Records Management program. Key stakeholders The Records Management steering committee should include executives and senior management from core business units such as Compliance, Legal, Finance, IT, Risk Management, Human Resources, and any other groups that will be affected by Records Management. Each of these groups will represent the needs and responsibilities of their respective groups. They will provide input relative to policies and procedures. The groups will work together to develop a priority-sequenced implementation plan that all can agree upon. Creating a committee that is heavily weighted with company executives will visibly demonstrate that our company is strongly committed to the program and it ensures that we will have the right people on board when it is time to make decisions, and that will keep the program on track. The steering committee should also include representatives from Records Management, IT, and users. Alternatively, representatives from these groups can be appointed and, if not members of the steering committee, they should report directly to the steering committee on a regular basis: The Program Contact The Program Contact is the chair of the steering committee. This role is typically held by someone in senior management and is often someone from the technology side of the business, such as the Director of IT. The Program Contact signs off with the final approval on technology deliverables and budget items. The Program Sponsor A key member of the records steering committee is the Program Sponsor or Project Champion. 
This role is typically held by a senior executive who will be able to represent the records initiative within the organization's executive team. The Sponsor will be able to establish the priority of the records program, relative to other organizational initiatives and be able to persuade the executive team and others in the company of the importance of the records management initiative. Corporate Records Manager Another key role of the steering committee is the Corporate Records Manager. This role acts as the senior champion for the records program and is responsible for defining the procedures and policies around Records Management. The person in this role will promote the rollout of and the use of the records program. They will work with each of the participating departments or groups, cultivating local champions for Records Management within each of those groups. The Corporate Records Manager must effectively communicate with business units to explain the program to all staff members and work with the various business units to collect user feedback so that those ideas can be incorporated into the planning process. The Corporate Records Manager will try to minimize any adverse user impact or disruption. Project Manager The Project Manager typically is on the steering committee or reports directly to it. The Project Manager plans and tracks the implementation of work on the program and ensures that program milestones are met. The person in this role manages both, the details of the system setup and implementation. This Project Manager also manages the staff time spent working on the program tasks. Business Analyst The Business Analyst analyzes business processes and records, and from these, creates a design and plan for the records program implementation. The Business Analyst works closely with the Corporate Records Manager to develop records procedures and provides support for the system during rollout. Systems Administrator The Systems Administrator leads the technical team for supporting the records application. The Systems Administrator specifies and puts into place the hardware required for the records program, the storage space, memory, and CPU capabilities. The person in this role monitors the system performance and backs up the system regularly. The Systems Administrator leads the team to apply software upgrades and to perform system troubleshooting. The Network Administrator The Network Administrator ensures that the network infrastructure is in place for the records program to support the appropriate bandwidth for the server and client workstations that will access the application. The Network Administrator works closely with the Systems Administrator. The Technical Analyst The Technical Analyst is responsible for analyzing the configuration of the records program. The Technical Analyst needs to work closely with the Business Analyst and Corporate Records Manager. The person in this role will specify the classification and structure used for the records program File Plan. They will also specify the classes of documents stored as records in the records application and the associated metadata for those documents. The Records Assistant The Records Assistant assists in the configuration of the records application. Tasks that the Records Assistant will perform include data entry and creating the folder structure hierarchy of the File Plan within the records application based on the specification created by the Technical Analyst. 
The Records Developer The Records Developer is a software engineer that is assigned to support the implementation of the records program, based on requirements derived by the Business Analyst. The Records Developer may need to edit and update configuration files, often using technologies like XML. The Records Developer may also need to make customizations to the user interface of the application. The Trainer The Trainer will work with end users to ensure that they understand the system and their responsibilities in interacting with it. The trainer typically creates training materials and provides training seminars to users. The Technical Support Specialist The Technical Support Specialist provides support to users on the functioning of the Records Management application. This person is typically an advanced user and is trained to be able to provide guidance in interacting with the application. But more than just the Records Management application, the support specialist should also be well versed in and be able to assist users and answer their questions about records processes and procedures, as well as concepts like retention and disposition of documents. The Technical Support Specialist will, very often, be faced with requests or questions that are really enhancement requests. The support specialist needs to have a good understanding of the scope of the records implementation and be able to distinguish an enhancement request from a defect or bug report. Enhancements should be collected and routed back through the Project Manager and, depending on the nature of the request or problem, possibly even to the Corporate Records Manager or the Steering Committee. Similarly, application defects or bugs that are found should be reported back through to the Project Manager. Bug reports will be prioritized by the Project Manager, as appropriate, assigned to the Technical Developers, or reported to the Systems Integrator or to Alfresco. The Users The Users are the staff members who will use the Records Management application as part of their job. Users are often the key to the success or failure of a records program. Unfortunately, users are one aspect of the project that is often overlooked. Obviously, it is important that the records application be well designed and meet the objectives and requirements set out for it. But if users complain and can't accept it, then the program will be doomed to failure. Users will often be asked to make changes to processes that they have become very comfortable with. Frequent and early communication with users is a must in order to ultimately gain their acceptance and participation. Prior to and during the implementation of the records system, users should receive status updates and explanations from the Corporate Records Manager and also from the Records Manager lead in their business unit. It is important that frequent communications be made with users to ensure their opinions and ideas are heard, and also so that they will learn to be able to most effectively use the records system. Once the application is ready, or better yet, well before the application goes live, users should attend training sessions on proper records-handling behavior; they should experience hands-on training with the application; and they should also be instructed in how best to communicate with the Technical Support Specialist, should they ever have questions or encounter any problems. 
Alfresco, Consultants, and Systems Integrators

Alfresco is the software vendor for Alfresco Records Management, but Alfresco typically does not work directly with customers. We could go it alone, but more likely we'll choose to work directly with one of Alfresco's System Integration partners or consultants in planning for and setting up our system. Depending on the size of our organization and the skills available within it, the Systems Integrator can take on as much or as little of the burden of getting our Records Management program up and running. Almost any of the Technical Team roles discussed in this section, like those of the Analyst and Developer, and even the role of the Project Manager, can be performed by a Systems Integrator. A list of certified Alfresco Integrators can be found on the Alfresco site: http://www.alfresco.com/partners/search.jsp?t=si

A Systems Integrator can bring to our project an important breadth of experience that can help save time and ensure that our project will go smoothly. Alfresco Systems Integration partners know their stuff. They are required to be certified in Alfresco technology and they have worked with Alfresco extensively. They are familiar with best practices and have picked up numerous implementation tips and tricks from working on similar projects with other clients.

Replication in MySQL Admin

Packt
17 Jan 2011
10 min read
Replication is an interesting feature of MySQL that can be used for a variety of purposes. It can help to balance server load across multiple machines, ease backups, provide a workaround for the lack of fulltext search capabilities in InnoDB, and much more. The basic idea behind replication is to reflect the contents of one database server (this can include all databases, only some of them, or even just a few tables) to more than one instance. Usually, those instances will be running on separate machines, even though this is not technically necessary.

Traditionally, MySQL replication is based on the surprisingly simple idea of repeating, on other machines, the execution of all data-modifying statements (that is, everything except SELECT) that were issued against a single master machine. Provided all secondary slave machines had identical data contents when the replication process began, they should automatically remain in sync. This is called Statement Based Replication (SBR). With MySQL 5.1, Row Based Replication (RBR) was added as an alternative method for replication, targeting some of the deficiencies SBR brings with it. While at first glance it may seem superior (and more reliable), it is not a silver bullet: the pain points of RBR are simply different from those of SBR. Even though there are certain use cases for RBR, all recipes in this chapter will be using Statement Based Replication.

While MySQL makes replication generally easy to use, it is still important to understand what happens internally to be able to know the limitations and consequences of the actions and decisions you will have to make. We assume you already have a basic understanding of replication in general, but we will still go into a few important details.

Statement Based Replication

SBR is based on a simple but effective principle: if two or more machines have the same set of data to begin with, they will remain identical if all of them execute the exact same SQL statements in the same order. Executing all statements manually on multiple machines would be extremely tedious and impractical. SBR automates this process. In simple terms, it takes care of sending all the SQL statements that change data on one server (the master) to any number of additional instances (the slaves) over the network. The slaves receiving this stream of modification statements execute them automatically, thereby effectively reproducing the changes the master machine made to its data originally. That way they will keep their local data files in sync with the master's.

One thing worth noting here is that the network connection between the master and its slave(s) need not be permanent. In case the link between a slave and its master fails, the slave will remember up to which point it had read the data last time and will continue from there once the network becomes available again. In order to minimize the dependency on the network link, the slaves will retrieve the binary logs (binlogs) from the master as quickly as they can, storing them on their local disk in files called relay logs. This way, the connection, which might be some sort of dial-up link, can be terminated much sooner, while the statements from the local relay log are executed asynchronously. The relay log is just a copy of the master's binlog. The following image shows the overall architecture:

Filtering

In the image you can see that each slave may have its individual configuration on whether it executes all the statements coming in from the master, or just a selection of those.
This can be helpful when you have some slaves dedicated to special tasks, where they might not need all the information from the master. All of the binary logs still have to be sent to each slave, even though it might then decide to throw away most of them. Depending on the size of the binlogs, the number of slaves, and the bandwidth of the connections in between, this can be a heavy burden on the network, especially if you are replicating via wide area networks.

Even though the general idea of transferring SQL statements over the wire is rather simple, there are lots of things that can go wrong, especially because MySQL offers some configuration options that are quite counter-intuitive and lead to hard-to-find problems. For us, this has become a best practice: "Only use qualified statements and replicate-*-table configuration options for intuitively predictable replication!"

What this means is that the only filtering rules that produce intuitive results are those based on the replicate-do-table and replicate-ignore-table configuration options. This includes their variants with wildcards, but specifically excludes the all-database options like replicate-do-db and replicate-ignore-db. These directives are applied on the slave side to all incoming relay logs. The master-side binlog-do-* and binlog-ignore-* configuration directives influence which statements are sent to the binlog and which are not. We strongly recommend against using them, because apart from hard-to-predict results, they will make the binlogs unsuitable for server backup and restore. They are often of limited use anyway, as they do not allow individual configurations per slave but apply to all of them.

Setting up automatically updated slaves of a server based on a SQL dump

In this recipe, we will show you how to prepare a dump file of a MySQL master server and use it to set up one or more replication slaves. These will automatically be updated with changes made on the master server over the network.

Getting ready

You will need a running MySQL master database server that will act as the replication master and at least one more server to act as a replication slave. This needs to be a separate MySQL instance with its own data directory and configuration. It can reside on the same machine if you just want to try this out. In practice, a second machine is recommended because this technique's very goal is to distribute data across multiple pieces of hardware, not place an even higher burden on a single one.

For production systems, you should pick a time to do this when there is a lighter load on the master machine, often during the night when there are fewer users accessing the system. Taking the SQL dump uses some extra resources, but unless your server is maxed out already, the performance impact is usually not a serious problem. Exactly how long the dump will take depends mostly on the amount of data and the speed of the I/O subsystem.

You will need an administrative operating system account on the master and the slave servers to edit the MySQL server configuration files on both of them. Moreover, an administrative MySQL database user is required to set up replication. We will just replicate a single database called sakila in this example.

Replicating more than one database
In case you want to replicate more than one schema, just add their names to the commands shown below. To replicate all of them, just leave out any database name from the command line.

How to do it...
1. At the operating system level, connect to the master machine and open the MySQL configuration file with a text editor. Usually it is called my.ini on Windows and my.cnf on other operating systems.
2. On the master machine, make sure the following entries are present and add them to the [mysqld] section if not already there:
   server-id=1000
   log-bin=master-bin
   If one or both entries already exist, do not change them but simply note their values. The log-bin setting need not have a value, but can stand alone as well.
3. Restart the master server if you needed to modify the configuration.
4. Create a user account on the master that can be used by the slaves to connect:
   master> grant replication slave on *.* to 'repl'@'%' identified by 'slavepass';
5. Using the mysqldump tool included in the default MySQL install, create the initial copy to set up the slave(s):
   $ mysqldump -uUSER -pPASS --master-data --single-transaction sakila > sakila_master.sql
6. Transfer the sakila_master.sql dump file to each slave you want to set up, for example, by using an external drive or network copy.
7. On the slave, make sure the following entries are present and add them to the [mysqld] section if not present:
   server-id=1001
   replicate-wild-do-table=sakila.%
   When adding more than one slave, make sure the server-id setting is unique among the master and all slaves.
8. Restart the slave server.
9. Connect to the slave server and issue the following commands (assuming the data dump was stored in the /tmp directory):
   slave> create database sakila;
   slave> use sakila;
   slave> source /tmp/sakila_master.sql;
   slave> CHANGE MASTER TO master_host='master.example.com', master_port=3306, master_user='repl', master_password='slavepass';
   slave> START SLAVE;
10. Verify the slave is running with:
    slave> SHOW SLAVE STATUS\G
    *************************** 1. row ***************************
    ...
    Slave_IO_Running: Yes
    Slave_SQL_Running: Yes
    ...

How it works...

Some of the instructions discussed in the previous section are there to make sure that both master and slave are configured with different server-id settings. This is of paramount importance for a successful replication setup. If you fail to provide unique server-id values to all your server instances, you might see strange replication errors that are hard to debug. Moreover, the master must be configured to write binlogs, a record of all statements manipulating data (this is what the slaves will receive).

Before taking a full content dump of the sakila demo database, we create a user account for the slaves to use. This needs the REPLICATION SLAVE privilege.

Then a data dump is created with the mysqldump command line tool. Notice the provided parameters --master-data and --single-transaction. The former is needed to have mysqldump include information about the precise moment the dump was created in the resulting output. The latter parameter is important when using InnoDB tables, because only then will the dump be created based on a transactional snapshot of the data. Without it, statements changing data while the tool was running could lead to an inconsistent dump.

The output of the command is redirected to the /tmp/sakila_master.sql file. As the sakila database is not very big, you should not see any problems. However, if you apply this recipe to larger databases, make sure you send the data to a volume with sufficient free disk space; the SQL dump can become quite large.
To save space here, you may optionally pipe the output through gzip or bzip2, at the cost of a higher CPU load on both the master and the slaves, because they will need to unpack the dump before they can load it, of course (a short example of this is shown at the end of this recipe).

If you open the uncompressed dump file with an editor, you will see a line with a CHANGE MASTER TO statement. This is what --master-data is for. Once the file is imported on a slave, it will know at which point in time (well, rather at which binlog position) this dump was taken. Everything that happened on the master after that needs to be replicated.

Finally, we configure the slave to connect using the credentials set up on the master before, and then start the replication. Notice that the CHANGE MASTER TO statement used for this does not include the information about the log positions or file names, because that was already taken from the dump file just read in. From here on, the slave will record all SQL statements sent from the master, store them in its relay logs, and then execute them against the local data set.

This recipe is very important because the following recipes are based on it! So, in case you have not fully understood the above steps yet, we recommend you go through them again before trying out more complicated setups.
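As a concrete illustration of the compression tip mentioned above, here is a minimal sketch of creating and loading a gzip-compressed dump. The user name, password, and file names are placeholders, and the sakila database must already have been created on the slave, as in the recipe:

   $ mysqldump -uUSER -pPASS --master-data --single-transaction sakila | gzip > /tmp/sakila_master.sql.gz
   (transfer the compressed file to the slave, then unpack and load it there)
   $ gunzip -c /tmp/sakila_master.sql.gz | mysql -uUSER -pPASS sakila

The CHANGE MASTER TO ... and START SLAVE; steps from the recipe still apply afterwards; the compressed file simply replaces the plain-text sakila_master.sql dump.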

Introduction to Successful Records Management Implementation in Alfresco 3

Packt
14 Jan 2011
15 min read
Alfresco 3 Records Management
Comply with regulations and secure your organization's records with Alfresco Records Management.
- Successfully implement your records program using Alfresco Records Management, fully certified for DoD-5015.2 compliance
- The first and only book to focus exclusively on Alfresco Records Management
- Step-by-step instructions describe how to identify records, organize records, and manage records to comply with regulatory requirements
- Learn in detail about the software internals to get a jump-start on performing customizations

A preliminary investigation will also give us good information about the types of records we have and roughly how many records we're talking about. We'll also dig deeper into the area of Authority Documents, and we'll determine exactly what our obligations are as an organization in complying with them. The data that we collect in the preliminary investigation will provide the basis for us to make a Business Case that we can present to the executives in the organization. It will outline the benefits and advantages of implementing a records system. We will also need to put in place, and communicate organization-wide, a formal policy that explains concisely the goals of the records program and what it means to the organization.

The information covered in this article is important and easily overlooked when starting a Records Management program. We will discuss:
- The Preliminary Investigation
- Authority Documents
- The Steering Committee and Roles in the Records Management Program
- Making the Business Case for Records Management
- Project Management

Best practices and standards

In this article, we will focus on discussing Records Management best practices. Best practices are the processes, methods, and activities that, when applied correctly, can achieve the most repeatable, effective, and efficient results. While an important function of standards is to ensure consistency and interoperability, standards also often provide a good source of information on how to achieve best practice. Much of our discussion here draws heavily on the methodology described in the DIRKS and ISO-15489 standards, which describe Records Management best practices. Before getting into a description of best practices though, let's look at how these two particular standards came into being and how they relate to other Records Management standards, like the DoD 5015.2 standard.

Origins of Records Management

Somewhat surprisingly, standards have only existed in Records Management for about the past fifteen years. But that's not to say that, prior to today's standards, there wasn't a body of knowledge and written guidelines that existed as best practices for managing records.

Diplomatics

Actually, the concept of managing records can be traced back a long way. In the Middle Ages in Europe, important written documents from court transactions were recognized as records, and even then there were issues around establishing the authenticity of records to guard against forgery. From those early concerns around authenticity, the science of document analysis called diplomatics came into being in the late 1600s and became particularly important in Europe with the rise of government bureaucracies in the 1800s. While diplomatics started out as something closer to forensic handwriting analysis than Records Management, it gradually established principles that are still important to Records Management today, such as reliability and authenticity.
Diplomatics even emphasized the importance of aligning rules for managing records with business processes, and it treated all records the same, regardless of the media they are stored on.

Records Management in the United States

Records Management is something that has come into being very slowly in the United States. In fact, Records Management in the United States is really a twentieth-century development. It wasn't even until 1930 that 90 percent of all births and deaths in the United States were recorded. The United States National Archives was first established in 1934 to manage only the federal government's historical records, but the National Archives quickly became involved in the management of all federal current records. In 1941, a records administration program was created for federal agencies to transfer their historical records to the National Archives. In 1943, the Records Disposal Act authorized the first use of record disposition schedules. In 1946, all agencies in the executive branch of government were ordered, as part of Executive Order 9784, to implement Records Management programs. It wasn't until 1949, with the publication of a pamphlet called Public Records Administration written by an archivist at the National Archives, that the idea of Records Management began to be seen as an activity separate and distinct from the long-term archival of records for preservation.

Prior to the 1950s in the United States, most businesses did not have a formalized program for records management. However, that slowly began to change as the federal government provided itself as an example of how records should be managed. The 1950 Federal Records Act formalized Records Management in the United States. The Act included ideas about the creation, maintenance, and disposition of records. Perhaps somewhat similar to the dramatic growth in electronic documents that we are seeing today, the 1950s saw a huge increase in the number of paper records that needed to be managed. The growth in the volume of records and the requirements and responsibilities imposed by the Federal Records Act led to the creation of regional records centers in the United States, and those centers slowly became models for records managers outside of government.

The second Hoover Commission was tasked with developing recommendations for paperwork management and published a document entitled Guide to Record Retention Requirements in 1955. While not officially sanctioned as a standard, this document in many ways served the same purpose. The guide was popular, has been republished frequently since then, and has served as an often-used reference by both government and non-government organizations. As late as 1994, a revised version of the guide was printed by the Office of the Federal Register. That same year, in 1955, ARMA International, the international organization for records managers, was founded. ARMA continues through today to provide a forum for records and information managers, both inside and outside the government, to share information about best practices in the area of Records Management.

From the 1950s, companies and non-government organizations were becoming more involved with records management policies, and the US federal government continued to drive much of the evolution of Records Management within the United States.
In 1976, the Federal Records Act was amended, and sections were added that emphasized paperwork reduction and the importance of documenting the recordkeeping process. The concept of the record lifecycle was also described in the amendments to the Act. In 1985, the National Archives was renamed NARA, the National Archives and Records Administration, finally acknowledging in the name the role the agency plays in managing records as well as in the long-term archival and preservation of documents.

However, it wasn't until the 1990s that standards around Records Management began to take shape. In 1993, a government task force in the United States that included NARA, the US Army, and the US Air Force began to devise processes for managing records that would include the management of both paper and electronic documents. The recommendations of that task force ultimately led to the DoD-5015.2 standard that was first released in 1997.

Australia's AS-4390 and DIRKS

In parallel to what was happening in the United States, standards for Records Management were also advancing in Australia.

AS-4390

Standards Australia issued AS-4390 in 1996, a document that defined the scope of Records Management with recommendations for implementation in both the public and private sectors in Australia. This was the first standard issued by any nation, but much of the language in the standard was very specific, making it usable really only within Australia. AS-4390 approached the management of records as a "continuum model" and addressed the "whole extent of the records' existence".

DIRKS

In 2000, the National Archives of Australia published DIRKS (Designing and Implementing Recordkeeping Systems), a methodology for implementing AS-4390. The Australian National Archives developed, tested, and successfully implemented the approach, summarizing the methodology for managing records into an eight-step process. The eight steps of the DIRKS methodology include:

Organization assessment:
1. Preliminary Investigation
2. Analysis of business activity
3. Identification of records requirements

Assess areas for improvement:
4. Assessment of the existing system
5. Strategies for recordkeeping

Design, implement, and review the changes:
6. Design the recordkeeping system
7. Implement the recordkeeping system
8. Post-implementation review

An international Records Management standard

These two standards, AS-4390 and DIRKS, have had a tremendous influence not only within Australia, but also internationally. In 2001, ISO-15489 was published as an international standard for best practices in Records Management. Part one of the standard was based on AS-4390, and part two was based on the guidelines as laid out in DIRKS. The same eight-step methodology of DIRKS is used in the part two guidelines of ISO-15489.

The DIRKS manual can be freely downloaded from the National Archives of Australia: http://www.naa.gov.au/recordsmanagement/publications/dirks-manual.aspx

The ISO-15489 document can be purchased from ISO: http://www.iso.org/iso/iso_catalogue/catalogue_tc/catalogue_detail.htm?csnumber=31908 and http://www.iso.org/iso/iso_catalogue/catalogue_tc/catalogue_detail.htm?csnumber=35845

ISO-15489 has been a success in terms of international acceptance. 148 countries are members of ISO, and many of the participating countries have embraced the use of ISO-15489. Some countries where ISO-15489 is actively applied include Australia, China, the UK, France, Germany, the Netherlands, and Jamaica.
Both ARMA International and AIIM now also promote the importance of the ISO-15489 standard. Much of the appeal of the ISO-15489 standard is the fact that it is fairly generic. Because it describes the recordkeeping process at a very high level, it avoids contentious details that may be specific to any particular Records Management implementation. Consider, for example, the eight steps of the DIRKS process, as listed above, and replace the words "record" and "recordkeeping" with the name of some other type of enterprise software or project, like "ERP". The steps and associated recommendations from DIRKS are equally applicable. In fact, we recognize clear parallels between the steps presented in the DIRKS methodology and methodologies used for Project Management. Later in this article, we will look at similarities between Records Management and Project Management methodologies like PMBOK and Agile.

Does ISO-15489 overlap with standards like DoD-5015.2 and MoReq?

ISO-15489 differs considerably in approach from other Records Management standards, like the DoD-5015.2 standard and the MoReq standard, which was developed in Europe. While ISO-15489 outlines basic principles of Records Management and describes best practices, these latter two standards are very prescriptive in terms of detailing the specifics of how to implement a Records Management system. They are essentially functional requirement documents for computer systems. MoReq (Model Requirements for the Management of Electronic Records) was initiated by the DLM Forum and funded by the European Commission. MoReq was first published in 2001 as MoReq1 and was then extensively updated and republished as MoReq2 in 2008. In 2010, an effort was undertaken to update the specification under the new name MoReq2010. The MoReq2 standard has been translated into 12 languages and is referenced frequently when building Records Management systems in Europe today.

Other international standards for Records Management

A number of other standards exist internationally. In Australia, for example, the Public Record Office Victoria has published a standard known as the Victorian Electronic Records Strategy (VERS) to address the problem of ensuring that electronic records can be preserved for long periods of time and still remain accessible and readable.

The preliminary investigation

Before we start getting our hands dirty with the sticky details of designing and implementing our records system, let's first get a big-picture idea of how Records Management currently fits into our organization and then define our vision for the future of Records Management in our organization. To do that, let's make a preliminary investigation of the records that our organization deals with. In the preliminary investigation, we'll make a survey of the records in our organization to find out how they are currently being handled. The results of the survey will provide important input into building the Business Case for moving forward with building a new Records Management system for our organization. With the results of the preliminary investigation, we will be able to create an information map or diagram of where records currently are within our organization and which groups of the organization those records are relevant to.
With that information, we will be able to create a very high-level charter for the records program, provide data to be used when building the Business Case for Records Management, and then have sufficient information to calculate a rough estimate of the cost and effort needed for the program scope.

Before executing the preliminary investigation, a detailed plan of attack for the investigation should be made. While the primary goal of the investigation is to gather information, a secondary goal should be to do it in a way that minimizes any disruption to staff members. To perform the investigation, we will need assistance from the various business units in the organization. Before starting, a 'heads up' should be sent out to the managers of the different business units involved so that they will understand the nature of the investigation and when it will be carried out, and so they'll know roughly the amount of time that both they and their unit will need to make available to assist in the investigation. It would also be useful to hold a briefing meeting with staff members from the business units where we expect to find most of the records.

The records survey

Central to the preliminary investigation is the records survey, which is taken across the organization. A records survey attempts to identify the location and record types for both the electronic and non-electronic records used in the organization.

Physical surveys versus questionnaires

The records survey is usually carried out either as a physical survey or as one managed remotely via questionnaires. In a physical survey, members of the records management team visit each business unit and, working together with staff members from that unit, make a detailed inventory. During the survey, all physical storage locations, such as cabinets, closets, desks, and boxes, are inspected. Staff members are asked where they store their files, which business applications they use, and which network drives they have access to.

The alternative to the physical survey is to send questionnaires to each of the business units and ask them to complete the forms on their own. Inspections similar to those of the physical survey would be made, but the business unit is not supported by a records management team member.

Which of the two approaches we use will depend on the organization. Of course, a hybrid approach, where a combination of both physical surveys and questionnaires is used, would work too. Physical in-person surveys tend to provide more accurate and complete inventories, but they are also typically more expensive and time-consuming to perform. Questionnaires, while cheaper, rely on each of the individual business units to complete the information on their own, which means that the reporting and investigation styles used by the different units might not be uniform. There is also the problem that some business units may not be sufficiently motivated to complete the questionnaires in a timely manner.

Preparing for the survey: Review existing documentation

Before we begin the survey, we should check to see if there already exists any background documentation that describes how records are currently being handled within the organization. Documentation has a habit of getting out of date quickly. Documentation can also be deceiving because sometimes it is written but never implemented, or implemented in ways that deviate dramatically from the originally written description.
So if we're actually lucky enough to find any documentation, we'll need to also validate how accurate that information really is. These are some examples of documents which may already exist and which can provide clues about how some organizational records are being handled today:

- The organization's disaster recovery plan
- Previous records surveys or studies
- The organization's record management policy statement
- Internal and external audit reports that involve consideration of records
- Organizational reports like risk assessment and cost-benefit analyses

Other types of documents may also exist, which can be good indicators for where records, particularly paper records, might be getting stored. These include:

- Blueprints, maps, and building plans that show the location of furniture and equipment
- Contracts with storage companies or organizations that provide records or backup services
- Equipment and supply inventories that may indicate computer hardware
- Lists of databases, enterprise application software, and shared drives

It may take some footwork and digging to find out exactly where and how records in the organization are currently being stored. Physical records could be getting stored in numerous places throughout office and storage areas. Electronic records might be currently saved on shared drives, local desktops, or other document repositories.

The main actions of the records survey can be summarized by the LEAD acronym:

- Locate the places where records are being stored
- Examine the records and their contents
- Ask questions about the records to understand their significance
- Document the information about the records

Integrating Twitter with Magento

Packt
14 Jan 2011
2 min read
Integrating your Magento website with Twitter is a useful way to stay connected with your customers. You'll need a Twitter account (or, more specifically, an account for your business), but once that's in place it's actually pretty easy.

Adding a 'Follow Us On Twitter' button to your Magento store

One of the simpler ways to integrate your store's Twitter feed with Magento is to add a 'Follow Us On Twitter' button to your store's design.

Generating the markup from the Twitter website

Go to the Twitter Goodies website. Select the Follow Buttons option and then select Looking for Follow us on Twitter buttons? towards the bottom of the screen. The buttons will now change to the FOLLOW US ON Twitter buttons. Select the style of button you'd like to make use of on your Magento store and then select the generated HTML that is provided in the pop-up that is displayed.

The generated HTML for the M2 Store's Twitter account (with the username of M2MagentoStore) looks like the following:

   <a href="http://www.twitter.com/M2MagentoStore">
     <img src="http://twitter-badges.s3.amazonaws.com/follow_us-a.png" alt="Follow M2MagentoStore on Twitter"/>
   </a>

Adding a static block in Magento for your Twitter button

Now you will need to create a new static block in the Magento CMS feature: navigate to CMS | Static Blocks in your Magento store's administration panel and click on Add New Block. As you did when creating a static block for the supplier logos used in your store's footer, complete the form to create the new static block. Add the Follow Us On Twitter button to the Content field by disabling the Rich Text Editor with the Show/Hide Editor button and pasting in the markup you generated previously. You don't need to upload an image to your store through Magento's CMS here, as the Twitter buttons are hosted elsewhere. Note that the Identifier field reads follow-twitter: you will need this for the layout changes you are about to make!
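To give an idea of the kind of layout change that identifier is needed for, here is a minimal sketch of a Magento 1.x layout update that renders the static block in the store footer. The file path, the block name follow.twitter, and the choice of the footer reference are assumptions for this example rather than details from the article:

   <!-- Hypothetical layout update, for example in app/design/frontend/default/yourtheme/layout/local.xml -->
   <layout>
     <default>
       <reference name="footer">
         <!-- cms/block renders a CMS static block; block_id must match the Identifier "follow-twitter" -->
         <block type="cms/block" name="follow.twitter">
           <action method="setBlockId"><block_id>follow-twitter</block_id></action>
         </block>
       </reference>
     </default>
   </layout>

Whether the button actually appears depends on your theme's footer template outputting this child block (for example, via getChildHtml()), so adjust the reference and template to suit your design.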

Promoting efficient communication with Moodle

Packt
11 Jan 2011
8 min read
A key component of any quality educational program is its ability to facilitate communication among all of the parties involved in the program. Communication, and the subsequent relaying of information and knowledge between instructional faculty, administrators, students, and support personnel, must be concise, efficient, and, when so desired, as transparent as possible.

Using Moodle as a hub for internal information distribution, collaboration, and communication

Moodle's ability to facilitate information flow and communication among users within the system, who are registered users such as students and teachers, is a capability that has been a core function of Moodle since its inception. The module most often used to facilitate communication and information flow is the forum, and we will thus focus primarily on creative uses of forums for communication within an educational program.

Facilitating intra- or inter-departmental or program communication, collaboration, and information flow

Many educational programs comprise sub-units such as departments or programs. These units usually consist of students, teachers, and administrators who interact with one another at varying levels in terms of the type of communication, its frequency, and content. The following example will demonstrate how a sub-unit (the reading program within our language program example) might set up a communication system, using a meta course in Moodle, that accomplishes the following:

- Allows the program to disseminate information to all students, teachers, and administrators involved in the program. The system must, of course, allow for settings enabling dissemination to only selected groups or to the entire group, if so desired.
- Establishes a forum for communication between and among teachers, students, and administrators. Again, this system must be fine-tunable such that communication can be limited to specific parties within the program.

The example will also demonstrate, indirectly, how a meta course could be set up to facilitate communication and collaboration between individuals from different programs or sub-units. In such a case, the meta course would function as an inter-departmental communication and collaboration system.

Time for action – setting up the meta course

To set up a communication system that can be finely tuned to allow specific groups of users to interact with each other, follow these steps:

We are going to set up a communication system using a meta course. Log in to your site as admin and click on the Show all courses link found at the bottom of your MyCourses block on the front page of your site.
At the bottom of the subsequent Course Categories screen, click on the Add a new course button.
Change the category from Miscellaneous to Reading and enter a Full name and Short name, such as Reading Program and ReadProg.
Enter a short description explaining that the course is to function as a communication area for the reading program.
Use the drop-down menu next to the meta course heading, shown in the following screenshot, to select Yes in order to make this course a meta course.
Change the Start date as you see fit.
You don't need to add an Enrollment key under the Availability heading to prevent users who are not eligible to enter the course, because the enrollment for meta courses is taken from child courses. If you've gotten into the habit of entering enrollment keys just to be safe, however, doing so here won't cause any problems.
Change the group setting, found under the Groups heading, to Separate.
Do not force this setting, however, in order to allow it to be set on an individual activity basis. This will allow us to set up forums that are only accessible to teachers and/or administrators. Other forums can be set up to allow only student and teacher access, for example.
Click on the Save changes button found at the bottom of the screen. On the next screen, which will be the Child courses screen, search for all reading courses by entering Reading in the search field.
After clicking on the Search button to initiate the search, you will see all of the reading courses, including the meta course we have just created. Add all of the courses, except the meta course, as shown in the following screenshot.
Use the short name link found in the breadcrumb path at the top-left of the window, shown in the following screenshot, to navigate to the course after you have added all of the reading child courses.

What just happened?

We just created a meta course and included all of the reading courses as child courses of the meta course. This means that all of the users enrolled in the reading child courses have been automatically enrolled in the meta course with the same roles that they have in the child courses. It should be noted here that enrollments in meta courses are controlled via the enrollments in each of the child courses. If you wish to unenroll a user from a meta course, he or she must be unenrolled from the respective child course. In the next step, we'll create the groups within the meta course that will allow us to create targeted forums.

Time for action – creating a group inside the meta course

We are now going to create groups within our meta course in order to allow us to specify which users will be allowed to participate in, and view, the forums we set up later. This will allow us to control which sets of users have access to the information and communication that will be contained in each forum. Follow these steps to set up the forums:

Log in to your Moodle site as admin and navigate to the meta course we just created. It will be located under the Reading heading in the MyCourses block and titled Reading Program, if you followed the steps outlined earlier in this article.
Click on the Groups link found inside the Administration block. The subsequent screen will be titled ReadingProg Groups. The ReadingProg portion of the title comes from the short name of our course.
From this screen, click on the Create group button.
Title the group Teachers and write a short description for the group. Ignore the enrollment key option, as enrollments for meta courses are controlled by the child course enrollments. Leave the picture field blank unless you would like to designate a picture for this group.
Click on the Save changes button to create the group.
You will now see the ReadingProg Groups screen again, and it will now contain the Teachers group we just created. Click once on the group name to enable the Add/remove users button.
Click on the Add/remove users button to open the Add/remove users window. From this window, enter the word Teacher in the search field and click on the Search button.
Select all of the teachers by clicking once on the first teacher, then scrolling to the last teacher and, while holding down the Shift key on your keyboard, clicking on the last teacher. This will highlight all of the teachers in the list.
Click on the Add button to add the selected teachers to the Existing members list on the left.
Click on the Back to groups button to return to the ReadingProg Groups screen. The Teachers group will now appear as Teachers(20) and, when selected, the list of teachers will appear in the Members of: list found on the right side of the screen, as shown in the following screenshot.
Next, navigate to the front page of your site and, from the Site Administration block, click on the Miscellaneous heading link and then on the Experimental link.
Scroll down to the Enable groupings setting and click the tickbox to enable this setting. This setting enables you to group multiple groups together and also to make activities exclusively available to specific groupings. We'll need this capability when we set up the forums later. For a more detailed explanation of the groupings feature, visit the associated Moodle Docs page at: http://docs.moodle.org/en/Groupings.

What just happened?

We just created a group, within our Reading Program meta course, for all of the teachers enrolled in the course. Because the enrollments for a meta course are pulled from the child courses associated with a meta course, the teachers are all teachers who are teaching reading courses in our program. Later in this article, we'll see how we can use this group when we set up forums that we only want our teachers to have access to.