
How-To Tutorials - CMS and E-Commerce

830 Articles

Implementing a Calendar Control in the Yahoo User Interface (YUI)

Packt
23 Oct 2009
10 min read
The Basic Calendar Class

The most basic type of calendar is the single-panel Calendar, which is created with the YAHOO.widget.Calendar class. To display a calendar, an HTML element is required to act as a container for it. The screenshot shows a basic Calendar control. The constructor can then be called specifying, at the very least, the id of the container element as an argument. You can also specify the id of the Calendar object as an argument, as well as an optional third argument that accepts an object literal containing various configuration properties. The configuration object is defined within curly braces in the constructor and contains a range of configuration keys that can be used to control different Calendar attributes, such as its title, a comma-delimited range of pre-selected dates, or a close button shown on the calendar.

A large number of methods are defined in the basic Calendar class; some of these are private methods used internally by the Calendar object, which you normally wouldn't need to call yourself. Some of the more useful public methods include:

- Initialization methods, including init, initEvents, and initStyles, which initialize either the calendar itself or its built-in custom events and style constants.
- A method for determining whether a date is outside of the current month: isDateOOM.
- Navigation methods, such as nextMonth, nextYear, previousMonth, and previousYear, which can be used to programmatically change the month or year displayed in the current panel.
- Operational methods, such as addMonths, addYears, subtractMonths, and subtractYears, which change the month and year shown in the current panel by the specified number of months or years.
- The render method, which draws the calendar on the page and is called for every implementation of a calendar after it has been configured. Without this method, no Calendar appears on the page.
- Two reset methods: reset, which resets the Calendar to the month and year originally selected, and resetRenderers, which resets the render stack of the calendar.
- Selection methods that select or deselect dates, such as deselect, deselectAll, deselectCell, select, and selectCell.

As you can see, there are many methods you can call to take advantage of the advanced features of the calendar control.

The CalendarGroup Class

In addition to the basic calendar, you can also create a grouped calendar that displays two or more month panels at once, using the YAHOO.widget.CalendarGroup class. The control automatically adjusts the Calendar's UI so that the navigation arrows are displayed only on the first and last calendar panels, and so that each panel has its own heading indicating which month it refers to. The CalendarGroup class also contains built-in functionality for automatically updating the calendar panels on display. If you have a two-panel calendar displaying, for example, January and February, clicking the right navigation arrow moves February to the left panel so that March displays as the right-hand panel. All of this is automatic; nothing needs to be configured by you. There are fewer methods in this class; some of those found in the basic Calendar class can also be found here, such as the navigation methods, selection methods, and some of the render methods.
Native methods found only in the CalendarGroup class include:

- The subscribing methods sub and unsub, which subscribe or unsubscribe to custom events of each child calendar.
- Child functions, such as the callChildFunction and setChildFunction methods, which call and set functions within all child calendars in the calendar group.

Implementing a Calendar

To complete this example, the only tool you'll need other than the Yahoo User Interface (YUI) library is a basic text editor. Native support for the YUI is provided by some web authoring software packages, most notably Aptana, an open-source application that has been dubbed a 'Dreamweaver killer'. However, I always find that writing code manually while learning something is much more beneficial.

It is very quick and easy to add the calendar, as the basic default implementations require very little configuration. It can be especially useful in forms where the visitor must enter a date. Checking that a date has been entered correctly and in the correct format takes valuable processing time, but using the YUI calendar means that dates are always exactly as you expect them to be. So far we've spent most of this article looking at the theoretical issues surrounding the library; I don't know about you, but I think it's definitely time to get on with some actual coding!

The Initial HTML Page

Our first example page contains a simple text field and an image which, once clicked, will display the Calendar control on the page, allowing a date to be selected and added to the input. Begin with the following basic HTML page:

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">
<html lang="en">
  <head>
    <meta http-equiv="content-type" content="text/html;charset=utf-8">
    <title>YUI Calendar Control Example</title>
    <link rel="stylesheet" type="text/css" href="yui/build/calendar/assets/skins/sam/calendar.css">
    <script type="text/javascript" src="yui/build/yahoo-dom-event/yahoo-dom-event.js"></script>
    <script type="text/javascript" src="yui/build/calendar/calendar-min.js"></script>
    <style type="text/css">
      input {margin:0px 10px 0px 10px;}
    </style>
  </head>
  <body class="yui-skin-sam">
    <div>
      <label>Please enter your date of birth:</label>
      <input type="text" name="dobfield" id="dobfield">
      <img id="calico" src="icons/cal.png" alt="Open the Calendar control">
    </div>
    <div id="mycal"></div>
  </body>
</html>

We begin with a valid DOCTYPE declaration, a must in any web page. For validity, we also add the lang attribute to the opening <html> tag and, for good measure, enforce the utf-8 character set. Nothing so far is YUI-specific, but coding this way every time is a good habit.

We link to the stylesheet used to control the appearance of the calendar control, handled in this example by the sam skin within the <link> tag. Accordingly, we also need to add the appropriate class name to the <body> tag. Following this, we link to the required library files with <script> tags; the calendar control is relatively simple and requires just the YAHOO, Document Object Model (DOM), and Event components (using the aggregated yahoo-dom-event.js file for efficiency), as well as the underlying source file calendar-min.js. A brief <style> tag finishes the <head> section of the page with some CSS relevant to this particular example, and the <body> of the page at this stage contains just two <div> elements: the first holds a <label>, the text field, and a calendar icon (which can be used to launch the control), while the second holds the calendar control itself.
When viewed in a browser, the page at this point should appear like this: The calendar icon used in this example was taken, with gratitude, from Mark Carson at http://markcarson.com.

Beginning the Scripting

We want the calendar to appear when the icon next to the text field is clicked, rather than being displayed on page load, so the first thing we need to do is set a listener for the click event on the image. Directly before the closing </body> tag, add the following code:

<script type="text/javascript">
  //create the namespace object for this example
  YAHOO.namespace("yuibook.calendar");
  //define the launchCal function which creates the calendar
  YAHOO.yuibook.calendar.launchCal = function() {
  }
  //create calendar on page load
  YAHOO.util.Event.onDOMReady(YAHOO.yuibook.calendar.launchCal);
</script>

Let's look at each line of the above code. We first use the .namespace() method of the YAHOO utility to set up the namespace object used for this example. Next we define the launchCal function, which will hold all of the code that generates the calendar control. Then we use the .onDOMReady() method of the Event utility to execute the launchCal function when the DOM is in a usable state. We'll be looking at the DOM utility in much greater detail later in the book.

Now we can add the extremely brief code that's required to actually produce the Calendar. Within the braces of our function, add the following code:

//create the calendar object, specifying the container
var myCal = new YAHOO.widget.Calendar("mycal");
//draw the calendar on screen
myCal.render();
//hide it again straight away
myCal.hide();

This is all that we need to create the Calendar; we simply define myCal as a new Calendar object, specifying the underlying container HTML element as an argument of the constructor. Once we have a Calendar object, we can call the .render() method on it to create the calendar and display it on the page. No arguments are required for this method. Since we want the calendar to be displayed only when its icon is clicked, we hide it from view straight away.

To display the calendar when its icon is clicked, we'll need one more anonymous function. Add the following code beneath the .hide() method:

//define the showCal function which shows the calendar
var showCal = function() {
  //show the calendar
  myCal.show();
}

Now we can attach a listener that detects the click event on the calendar icon:

//attach listener for click event on calendar icon
YAHOO.util.Event.addListener("calico", "click", showCal);

Save the file that we've just created as calendar.html or similar in your yuisite directory. If you view it in your browser now and click the Calendar icon, you should see this: The calendar is automatically configured to display the current date, although this is something that can be changed using the configuration object mentioned earlier. If you use a DOM explorer to view the current DOM of a page with an open calendar on it, you'll see that a basic Calendar control is rendered as a table with eight rows and seven columns. The first row contains the images used to navigate between previous and forthcoming months and the title of the current month and year. The next row holds the two-letter representations of the days of the week, and the rest of the rows hold the squares representing the individual days of the current month.
The screenshot on the next page shows some of the DOM representation of the Calendar control used in our example page. Now that we can call up the Calendar control by clicking on our Calendar icon, we need to customize it slightly. Unless the person completing the form is very young, they will need to navigate through a large number of calendar pages in order to find their date of birth. This is where the Calendar Navigator interface comes into play. We can easily enable this feature using a configuration object passed into the Calendar constructor. Alter your code so that it appears as follows:

//create the calendar object, using container & config object
myCal = new YAHOO.widget.Calendar("mycal", {navigator:true});

Clicking on the Month or Year label will now open an interface that allows your visitors to navigate directly to any given month and year. The configuration object can be used to set a range of calendar configuration properties, including the original month and year displayed by the Calendar, the minimum and maximum dates available to the calendar, a title for the calendar, a close button, and various other properties.
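This excerpt stops short of wiring the selected date back into the text field, so here is a minimal sketch of that last step. It assumes the myCal object and the dobfield input from the example above; selectEvent and its [year, month, day] argument format are part of the YUI 2 Calendar API, but treat the details here as illustrative rather than definitive.

//hedged sketch: copy the selected date into the text field, then hide
//the calendar; assumes the myCal object and dobfield id used above
var handleSelect = function(type, args, obj) {
  var date = args[0][0];                       //first selected date as [year, month, day]
  document.getElementById("dobfield").value =
    date[1] + "/" + date[2] + "/" + date[0];   //mm/dd/yyyy
  myCal.hide();                                //put the calendar away after selection
};
//subscribe to the Calendar's built-in custom select event
myCal.selectEvent.subscribe(handleSelect, myCal, true);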


Users, Roles, and Pages in DotNetNuke 5

Packt
14 Apr 2010
7 min read
One of the most important aspects of running a DotNetNuke portal is figuring out how to administer it. From adding modules to working with users, it may take a while before you have mastered all the administration tasks associated with running a portal. DotNetNuke uses a few terms that may be unfamiliar at first. For example, the term "portal" is used interchangeably with the more common term "site". Similarly, the terms "tab" and "page" may be used interchangeably within the context of DotNetNuke. Knowing that they are interchangeable will help you understand the topics that we will discuss.

User accounts

If you are used to working within a network environment, or have worked with different portals in the past, then you are probably comfortable with the term "users". If you are not familiar with this term, think of a website that you have registered with in the past. When the registration was completed, a user account was provided to you. This account lets you return to the site and log in as a recognized user. Everything that takes place on your portal revolves around users and user accounts. Whether users are required to register in order to use your services (such as on a member site) or only a few user accounts are required to manage the content, functionality, and layout of your site, you will need to understand how to create and manage user accounts. Let's start with a general description of a user, and then you will see how to create and manage your users. In order to work through the examples, you will need to bring up your portal and sign in with an administrator account, such as the one created during the initial portal installation.

Who is a user?

The simplest definition of a user is an individual who consumes the services that your portal provides. However, a user can take on many different roles: from a visitor who is just browsing (unregistered user) or a person who registers to gain access to your services (registered user), to a specialized member of the site such as a content editor, to the facilitator (administrator or host) who is responsible for the content, functionality, and design of your portal. The difference between an administrator account and a host (or superuser) account will be explained in detail later. For now, we will focus on the administrator account that is associated with a single portal. Everything in DotNetNuke revolves around the user, so before we can do anything else, we need to learn a little about user accounts.

Creating user accounts

Before you create user accounts, you should decide and configure how users will be able to register on the site. You can choose from the following four registration types:

- None
- Private
- Public (default)
- Verified

To set the registration type for your portal, go to the Site Settings link found in the ADMIN menu, as shown in the following screenshot. The User Registration section can be found under Advanced Settings | Security Settings, as shown in the next screenshot. Many sections within DotNetNuke may be collapsed or hidden by default. To view these sections, you will need to expand them by clicking the '+' in front of them.

The type of registration you use depends on how you will be using your portal. The following list gives a brief explanation of each User Registration type:

None: Setting the user registration to None removes the Register link from your portal. In this mode, users can only be added by the administrator or host users. Thanks to the new features in DotNetNuke 5, users who have been given the proper permissions can also manage user accounts, which allows the administrator or host users to delegate account management to other individuals with the proper access. If you plan to have all sections of your site available to everyone (including unregistered users), then selecting None as your registration option is a good choice.

Private: If you select Private, the Register link will reappear. After users fill out the registration form, they will be informed that their request for registration will be reviewed by the administrator, who will decide whom to give access to the site. Unless an administrator approves (authorizes, in DNN terms) a user, that user will not be able to log in.

Public: Public is the default registration type for a DotNetNuke portal. When this is selected, users will be able to register for your site by entering the required information. Once the registration form is filled out, they will be given access to the site and can log in without requiring any action on the part of an administrator.

Verified: If you select Verified as your registration option, users will be sent an e-mail with a verification code once they fill out the required information. This ensures that the e-mail address they entered in the registration process is valid. The first time they sign in, they will be prompted for the verification code; alternatively, they can click on the Verification link in the e-mail sent to them. After they have been verified, they will need to type in only their login name and password to gain access to the site. Please note that proper SMTP configuration is required to ensure delivery of the verification e-mails.

Setting required registration fields

The administrator has the ability to decide what information the user will be required to enter when registering. If you are logged in as an administrator, you can accomplish this through a combination of User Settings and Profile Properties. A user who is given the appropriate permissions can also modify these settings; however, this requires a few additional configuration steps that we will not cover in this section. To manage the Profile Properties for your site, select the ADMIN | User Accounts link. The User Accounts section appears as shown in the following screenshot. In this screen, select Manage Profile Properties, either by selecting the link at the bottom of the module container or by selecting the link in the Action menu (the menu that pops up when you point at the drop-down button adjacent to the module title, User Accounts; more information on the Action menu is provided later). When you select this link, you will be redirected to a screen (see the next screenshot) that displays a list of the currently configured Profile Properties. You can manage some attributes of the Profile Properties from within this screen. For instance, you can delete a property by clicking on the 'X' icon in the second column. Alternatively, you can change the display order of the properties by clicking on one of the Dn or Up icons in the third and fourth columns. If you change the order this way, make sure you click the Apply Changes button at the bottom of the page to save your changes. Your changes will be lost if you leave this screen without clicking the Apply Changes button.
If you want even more control, you can edit a single property by clicking on the pencil icon in the first column. You will see this icon extensively in the DNN user interface; it allows you to edit or modify the item it is associated with. You can also add a new property by selecting the Add New Profile Property action from the Action menu. In either case, you will be redirected to the Add New Property Details page, as shown in the following screenshot, where you can enter information about the property. Note that if you are editing an existing property, the Property Name field cannot be changed, so make sure you get it right the first time. If you need to change the Property Name, you will need to delete and recreate the property. Most of these fields are self-explanatory, but we will describe a few of them.


Introduction to Successful Records Management Implementation in Alfresco 3

Packt
14 Jan 2011
15 min read
Alfresco 3 Records Management: Comply with regulations and secure your organization's records with Alfresco Records Management.

- Successfully implement your records program using Alfresco Records Management, fully certified for DoD-5015.2 compliance
- The first and only book to focus exclusively on Alfresco Records Management
- Step-by-step instructions describe how to identify records, organize records, and manage records to comply with regulatory requirements
- Learn in detail about the software internals to get a jump-start on performing customizations

A preliminary investigation will give us good information about the types of records we have and roughly how many records we're talking about. We'll also dig deeper into the area of Authority Documents and determine exactly what our obligations are as an organization in complying with them. The data that we collect in the preliminary investigation will provide the basis for a Business Case that we can present to the executives in the organization, outlining the benefits and advantages of implementing a records system. We will also need to put in place, and communicate organization-wide, a formal policy that concisely explains the goals of the records program and what it means to the organization. The information covered in this article is important and easily overlooked when starting a Records Management program. We will discuss:

- The Preliminary Investigation
- Authority Documents
- The Steering Committee and Roles in the Records Management Program
- Making the Business Case for Records Management
- Project Management

Best practices and standards

In this article, we will focus on discussing Records Management best practices. Best practices are the processes, methods, and activities that, when applied correctly, can achieve the most repeatable, effective, and efficient results. While an important function of standards is to ensure consistency and interoperability, standards also often provide a good source of information on how to achieve best practice. Much of our discussion here draws heavily on the methodology described in the DIRKS and ISO-15489 standards, which describe Records Management best practices. Before getting into a description of best practices, though, let's look at how these two particular standards came into being and how they relate to other Records Management standards, like the DoD-5015.2 standard.

Origins of Records Management

Somewhat surprisingly, standards have existed in Records Management for only about the past fifteen years. But that's not to say that prior to today's standards there wasn't a body of knowledge and written guidelines that served as best practices for managing records.

Diplomatics

Actually, the concept of managing records can be traced back a long way. In the Middle Ages in Europe, important written documents from court transactions were recognized as records, and even then there were issues around establishing the authenticity of records to guard against forgery. From those early concerns around authenticity, the science of document analysis called diplomatics came into being in the late 1600s, and it became particularly important in Europe with the rise of government bureaucracies in the 1800s. While diplomatics started out as something closer to forensic handwriting analysis than Records Management, it gradually established principles that are still important to Records Management today, such as reliability and authenticity.
Diplomatics even emphasized the importance of aligning rules for managing records with business processes, and it treated all records the same, regardless of the media they are stored on.

Records Management in the United States

Records Management came into being very slowly in the United States; in fact, it is really a twentieth-century development. It wasn't until 1930 that 90 percent of all births and deaths in the United States were recorded. The United States National Archives was first established in 1934 to manage only the federal government's historical records, but the National Archives quickly became involved in the management of all federal current records. In 1941, a records administration program was created for federal agencies to transfer their historical records to the National Archives. In 1943, the Records Disposal Act authorized the first use of record disposition schedules. In 1946, all agencies in the executive branch of government were ordered, as part of Executive Order 9784, to implement Records Management programs. It wasn't until 1949, with the publication of a pamphlet called Public Records Administration written by an archivist at the National Archives, that the idea of Records Management began to be seen as an activity separate and distinct from the long-term archival of records for preservation.

Prior to the 1950s in the United States, most businesses did not have a formalized program for records management. However, that slowly began to change as the federal government provided itself as an example of how records should be managed. The 1950 Federal Records Act formalized Records Management in the United States; the Act included ideas about the creation, maintenance, and disposition of records. Perhaps somewhat similar to the dramatic growth in electronic documents that we are seeing today, the 1950s saw a huge increase in the number of paper records that needed to be managed. The growth in the volume of records, and the requirements and responsibilities imposed by the Federal Records Act, led to the creation of regional records centers in the United States, and those centers slowly became models for records managers outside of government. In 1955, the second Hoover Commission was tasked with developing recommendations for paperwork management and published a document entitled Guide to Record Retention Requirements. While not officially sanctioned as a standard, this document in many ways served the same purpose. The guide was popular, has been republished frequently since then, and has served as an often-used reference by both government and non-government organizations. As late as 1994, a revised version of the guide was printed by the Office of the Federal Register. That same year, 1955, also saw the founding of ARMA International, the international organization for records managers. ARMA continues to this day to provide a forum for records and information managers, both inside and outside the government, to share information about best practices in the area of Records Management. From the 1950s onward, companies and non-government organizations became more involved with records management policies, and the US federal government continued to drive much of the evolution of Records Management within the United States.
In 1976, the Federal Records Act was amended, and sections were added that emphasized paperwork reduction and the importance of documenting the recordkeeping process. The concept of the record lifecycle was also described in the amendments to the Act. In 1985, the National Archives was renamed NARA, the National Archives and Records Administration, finally acknowledging in its name the role the agency plays in managing records as well as in the long-term archival and preservation of documents. However, it wasn't until the 1990s that standards around Records Management began to take shape. In 1993, a government task force in the United States that included NARA, the US Army, and the US Air Force began to devise processes for managing records that would cover the management of both paper and electronic documents. The recommendations of that task force ultimately led to the DoD-5015.2 standard, first released in 1997.

Australia's AS-4390 and DIRKS

In parallel to what was happening in the United States, standards for Records Management were also advancing in Australia.

AS-4390

Standards Australia issued AS-4390 in 1996, a document that defined the scope of Records Management with recommendations for implementation in both the public and private sectors in Australia. This was the first such standard issued by any nation, but much of the language in the standard was very specific, making it usable really only within Australia. AS-4390 approached the management of records as a "continuum model" and addressed the "whole extent of the records' existence".

DIRKS

In 2000, the National Archives of Australia published DIRKS (Design and Implementation of Recordkeeping Systems), a methodology for implementing AS-4390. The Australian National Archives developed, tested, and successfully implemented the approach, summarizing the methodology for managing records into an eight-step process. The eight steps of the DIRKS methodology are:

Organization assessment:
- Preliminary investigation
- Analysis of business activity
- Identification of records requirements

Assess areas for improvement:
- Assessment of the existing system
- Strategies for recordkeeping

Design, implement, and review the changes:
- Design the recordkeeping system
- Implement the recordkeeping system
- Post-implementation review

An international Records Management standard

These two standards, AS-4390 and DIRKS, have had a tremendous influence not only within Australia, but also internationally. In 2001, ISO-15489 was published as an international standard for Records Management best practices. Part one of the standard was based on AS-4390, and part two was based on the guidelines laid out in DIRKS; the same eight-step methodology of DIRKS is used in the part two guidelines of ISO-15489. The DIRKS manual can be freely downloaded from the National Archives of Australia: http://www.naa.gov.au/recordsmanagement/publications/dirks-manual.aspx. The ISO-15489 documents can be purchased from ISO: http://www.iso.org/iso/iso_catalogue/catalogue_tc/catalogue_detail.htm?csnumber=31908 and http://www.iso.org/iso/iso_catalogue/catalogue_tc/catalogue_detail.htm?csnumber=35845. ISO-15489 has been a success in terms of international acceptance. 148 countries are members of ISO, and many of the participating countries have embraced the use of ISO-15489. Some countries where ISO-15489 is actively applied include Australia, China, the UK, France, Germany, the Netherlands, and Jamaica.
Both ARMA International and AIIM now also promote the importance of the ISO-15489 standard. Much of the appeal of ISO-15489 lies in the fact that it is fairly generic. Because it describes the recordkeeping process at a very high level, it avoids contentious details that may be specific to any particular Records Management implementation. Consider, for example, the eight steps of the DIRKS process listed above, and replace the words "record" and "recordkeeping" with the name of some other type of enterprise software or project, like "ERP". The steps and associated recommendations from DIRKS are equally applicable. In fact, we can recognize clear parallels between the steps presented in the DIRKS methodology and methodologies used for Project Management. Later in this article, we will look at similarities between Records Management and Project Management methodologies like PMBOK and Agile.

Does ISO-15489 overlap with standards like DoD-5015.2 and MoReq?

ISO-15489 differs considerably in approach from other Records Management standards, like the DoD-5015.2 standard and the MoReq standard that was developed in Europe. While ISO-15489 outlines basic principles of Records Management and describes best practices, these latter two standards are very prescriptive in detailing the specifics of how to implement a Records Management system; they are essentially functional requirements documents for computer systems. MoReq (Model Requirements for the Management of Electronic Records) was initiated by the DLM Forum and funded by the European Commission. MoReq was first published in 2001 as MoReq1 and was then extensively updated and republished as MoReq2 in 2008. In 2010, an effort was undertaken to update the specification under the new name MoReq2010. The MoReq2 standard has been translated into 12 languages and is referenced frequently when building Records Management systems in Europe today.

Other international standards for Records Management

A number of other standards exist internationally. In Australia, for example, the Public Record Office has published a standard known as the Victorian Electronic Records Strategy (VERS) to address the problem of ensuring that electronic records can be preserved for long periods of time and still remain accessible and readable.

The preliminary investigation

Before we start getting our hands dirty with the sticky details of designing and implementing our records system, let's first get a big-picture idea of how Records Management currently fits into our organization, and then define our vision for its future. To do that, let's make a preliminary investigation of the records that our organization deals with. In the preliminary investigation, we'll survey the records in our organization to find out how they are currently being handled. The results of the survey will provide important input into building the Business Case for moving forward with a new Records Management system for our organization. With the results of the preliminary investigation, we will be able to create an information map, or diagram, of where records currently reside within our organization and which groups of the organization those records are relevant to.
With that information, we will be able to create a very high-level charter for the records program, provide data to be used when building the Business Case for Records Management, and have sufficient information to calculate a rough estimate of the cost and effort needed for the program scope. Before executing the preliminary investigation, a detailed plan of attack for the investigation should be made. While the primary goal of the investigation is to gather information, a secondary goal should be to do it in a way that minimizes any disruption to staff members. To perform the investigation, we will need assistance from the various business units in the organization. Before starting, a 'heads up' should be sent to the managers of the business units involved so that they understand the nature of the investigation and when it will be carried out, and know roughly the amount of time that both they and their unit will need to make available to assist. It would also be useful to hold a briefing meeting with staff members from the business units where we expect to find most of the records.

The records survey

Central to the preliminary investigation is the records survey, which is taken across the organization. A records survey attempts to identify the location and record types of both the electronic and non-electronic records used in the organization.

Physical surveys versus questionnaires

The records survey is usually either carried out physically or managed remotely via questionnaires. In a physical survey, members of the records management team visit each business unit and, working together with staff members from that unit, make a detailed inventory. During the survey, all physical storage locations, such as cabinets, closets, desks, and boxes, are inspected. Staff members are asked where they store their files, which business applications they use, and which network drives they have access to. The alternative to the physical survey is to send questionnaires to each of the business units and ask them to complete the forms on their own. Inspections similar to those of the physical survey would be made, but the business unit is not supported by a records management team member. Which of the two approaches we use will depend on the organization. Of course, a hybrid approach, combining physical surveys and questionnaires, would work too. Physical, in-person surveys tend to provide more accurate and complete inventories, but they are also typically more expensive and time-consuming to perform. Questionnaires, while cheaper, rely on each of the individual business units to complete the information on their own, which means that the reporting and investigation styles used by the different units might not be uniform. There is also the problem that some business units may not be sufficiently motivated to complete the questionnaires in a timely manner.

Preparing for the survey: Review existing documentation

Before we begin the survey, we should check whether there is any existing background documentation that describes how records are currently being handled within the organization. Documentation has a habit of getting out of date quickly. Documentation can also be deceiving because sometimes it is written but never implemented, or implemented in ways that deviate dramatically from the originally written description.
So if we're actually lucky enough to find any documentation, we'll also need to validate how accurate that information really is. These are some examples of documents that may already exist and that can provide clues about how some organizational records are being handled today:

- The organization's disaster recovery plan
- Previous records surveys or studies
- The organization's records management policy statement
- Internal and external audit reports that involve consideration of records
- Organizational reports such as risk assessments and cost-benefit analyses

Other types of documents may also exist that can be good indicators of where records, particularly paper records, might be getting stored. These include:

- Blueprints, maps, and building plans that show the location of furniture and equipment
- Contracts with storage companies or organizations that provide records or backup services
- Equipment and supply inventories that may indicate computer hardware
- Lists of databases, enterprise application software, and shared drives

It may take some footwork and digging to find out exactly where and how records in the organization are currently being stored. Physical records could be stored in numerous places throughout office and storage areas. Electronic records might be saved on shared drives, local desktops, or other document repositories. The main actions of the records survey can be summarized by the LEAD acronym:

- Locate the places where records are being stored
- Examine the records and their contents
- Ask questions about the records to understand their significance
- Document the information about the records


Introducing Magento extension development

Packt
10 Oct 2013
13 min read
Creating Magento extensions can be an extremely challenging and time-consuming task, depending on several factors such as your knowledge of Magento internals, your overall development skills, and the complexity of the extension functionality itself. Having a deep insight into Magento internals, its structure, and the accompanying tips and tricks will provide you with a strong foundation for clean and unobtrusive Magento extension development. The word unobtrusive should be a constant thought throughout your entire development process. The reason is simple: given the massiveness of the Magento platform, it is far too easy to build extensions that clash with other third-party extensions. This is usually a beginner's flaw, which we will hopefully avoid once we have finished reading this article. The examples listed in this article are targeted at Magento Community Edition 1.7.0.2, the latest stable release at the time of this writing. Throughout this article we will reference our URL examples as if they are executing on the magento.loc domain. You are free to set your local Apache virtual host and hosts file to any domain you prefer, as long as you keep this in mind. If you're hearing about virtual host terminology for the first time, please refer to the Apache Virtual Host documentation.

Here is a quick summary of each of those files and folders:

- .htaccess: This file is a directory-level configuration file supported by several web servers, most notably the Apache web server. It controls mod_rewrite for fancy URLs and sets configuration server variables (such as memory limit) and the PHP maximum execution time.
- .htaccess.sample: This is basically a .htaccess template file used for creating new stores within subfolders.
- api.php: This is primarily used for the Magento REST API, but can be used for SOAP and XML-RPC API server functionality as well.
- app: This is where you will find the Magento core code files for the backend and the frontend. This folder is basically the heart of the Magento platform. Later on, we will dive into this folder in more detail, given that this is the folder that you, as an extension developer, will spend most of your time in.
- cron.php: This file, when triggered via URL or via console PHP, will trigger certain Magento cron job logic.
- cron.sh: This file is a Unix shell script version of cron.php.
- downloader: This folder is used by the Magento Connect Manager, the functionality you access from the Magento administration area by navigating to System | Magento Connect | Magento Connect Manager.
- errors: This folder hosts a slightly separate piece of Magento functionality, the one that jumps in with error handling when your Magento store gets an exception during code execution.
- favicon.ico: This is your standard 16 x 16 px website icon.
- get.php: This file hosts a feature that allows core media files to be stored in and served from the database. With the Database File Storage system in place, Magento redirects requests for media files to get.php.
- includes: This folder is used by the Mage_Compiler extension, whose functionality can be accessed via the Magento administration area under System | Tools | Compilation. The idea behind the Magento compiler feature is that you end up with a PHP system that pulls all of its classes from one folder, thus giving it a massive performance boost.
- index.php: This is the main entry point to your application, the main loader file for Magento, and the file that initializes everything. Every request for every Magento page goes through this file.
- index.php.sample: This file is just a backup copy of the index.php file.
- js: This folder holds the core Magento JavaScript libraries, such as Prototype, scriptaculous.js, ExtJS, and a few others, some of which are from Magento itself.
- lib: This folder holds the core Magento PHP libraries, such as 3DSecure, Google Checkout, phpseclib, Zend, and a few others, some of which are from Magento itself.
- LICENSE*: These are the Magento licence files in various formats (LICENSE_AFL.txt, LICENSE.html, and LICENSE.txt).
- mage: This is the Magento Connect command-line tool. It allows you to add/remove channels, install and uninstall packages (extensions), and perform various other package-related tasks.
- media: This folder contains all of the media files, mostly just images from various products, categories, and CMS pages.
- php.ini.sample: This file is a sample php.ini file for PHP CGI/FastCGI installations. Sample files are not actually used by the Magento application.
- pkginfo: This folder contains text files that largely operate as debug files to inform us about changes when extensions are upgraded in any way.
- RELEASE_NOTES.txt: This file contains the release notes and changes for various Magento versions, starting from version 1.4.0.0.
- shell: This folder contains several PHP-based shell tools, such as the compiler, indexer, and logger.
- skin: This folder contains various CSS and JavaScript files specific to individual Magento themes. Files in this folder and its subfolders go hand in hand with files in the app/design folder, as these two locations together result in one fully featured Magento theme or package.
- var: This folder contains sessions, logs, reports, configuration cache, lock files for application processes, and possibly various other files distributed among individual subfolders. During development, you can freely select all the subfolders and delete them, as Magento will recreate them all on the next page request. As a Magento extension developer, you might find yourself looking into the var/log and var/report folders every now and then.

Code pools

The code folder is a placeholder for what is called a codePool in Magento. Usually, there are three code pools in Magento, that is, three subfolders: community, core, and local. The formula for your extension code location should be something like app/code/community/YourNamespace/YourModuleName/ or app/code/local/YourNamespace/YourModuleName/. There is a simple rule for choosing between the community and local codePool:

- Choose the community codePool for extensions that you plan to share across projects, or possibly upload to Magento Connect
- Choose the local codePool for extensions that are specific to the project you are working on and won't be shared with the public

For example, let's imagine that our company name is Foggyline and the extension we are building is called Happy Hour. As we wish to share our extension with the community, we put it into a folder such as app/code/community/Foggyline/HappyHour/.

The theme system

In order to successfully build extensions that visually manifest themselves to the user, either on the backend or the frontend, we need to get familiar with the theme system. The theme system comprises two distributed parts: one found under the app/design folder and the other under the root skin folder. Files found under the app/design folder are PHP template files and XML layout configuration files.
Within the PHP template files you can find a mix of HTML, PHP, and some JavaScript. There is one important thing to know about Magento themes: they have a fallback mechanism. For example, if someone in the administration interface sets the configuration to use a theme called hello from the default package, and that theme is missing, say, the app/design/frontend/default/hello/template/catalog/product/view.phtml file in its structure, Magento will use app/design/frontend/default/default/template/catalog/product/view.phtml from the default theme; and if that file is missing as well, Magento will fall back to the base package for the app/design/frontend/base/default/template/catalog/product/view.phtml file. All your layout and view files should go under the app/design/frontend/default/default directory. Secondly, you should never overwrite an existing .xml layout or .phtml template file within the app/design/frontend/default/default directory; rather, create your own. For example, imagine you are building a product image switcher extension, and you conclude that you need to make some modifications to the app/design/frontend/default/default/template/catalog/product/view/media.phtml file. A more valid approach would be to create a proper XML layout update file with handles rewriting the media.phtml usage to, let's say, media_product_image_switcher.phtml.

The model, resource, and collection

A model represents the data, for the better part, and to a certain extent the business logic of your application. Models in Magento take the Object Relational Mapping (ORM) approach, thus having the developer deal strictly with objects while their data is automatically persisted to the database. If you are hearing about ORM for the first time, please take some time to familiarize yourself with the concept. Theoretically, you could write and execute raw SQL queries in Magento. However, doing so is not advised, especially if you plan on distributing your extensions. There are two types of models in Magento:

- Basic Data Model: This is the simpler model type, somewhat like an Active Record pattern-based model. If you're hearing about Active Record for the first time, please take some time to familiarize yourself with the concept.
- EAV (Entity-Attribute-Value) Data Model: This is a complex model type, which enables you to dynamically create new attributes on an entity. As the EAV Data Model is significantly more complex than the Basic Data Model, and the Basic Data Model will suffice most of the time, we will focus on the Basic Data Model and everything important surrounding it.

Each data model you plan to persist to the database, meaning models that represent an entity, needs four files in order to work fully:

- The model file: This extends the Mage_Core_Model_Abstract class. It represents a single entity, its properties (fields), and possible business logic within it.
- The model resource file: This extends the Mage_Core_Model_Resource_Db_Abstract class. This is your connection to the database; think of it as the thing that saves your entity's properties (fields) to the database.
- The model collection file: This extends the Mage_Core_Model_Resource_Db_Collection_Abstract class. This is your collection of several entities, a collection that can be filtered, sorted, and manipulated.
- The installation script file: In its simplest definition, this is the PHP file through which you, in an object-oriented way, create your database table(s).
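To make that four-file structure more concrete, here is a minimal, hedged sketch of the first three classes for the hypothetical Foggyline_HappyHour extension named earlier. The entity name event, the alias foggyline_happyhour/event, and the event_id primary key are illustrative assumptions; the real aliases would be declared in the extension's config.xml, and the base classes shown are the ones named in the list above.

<?php
// Hedged sketch, not a complete extension: the Basic Data Model class.
// The alias 'foggyline_happyhour/event' is an assumed example name.
class Foggyline_HappyHour_Model_Event extends Mage_Core_Model_Abstract
{
    protected function _construct()
    {
        // Bind this model to its resource model via the config alias
        $this->_init('foggyline_happyhour/event');
    }
}

// The matching resource model, which handles persistence to the database
class Foggyline_HappyHour_Model_Resource_Event
    extends Mage_Core_Model_Resource_Db_Abstract
{
    protected function _construct()
    {
        // Table alias and primary key column (both assumed names)
        $this->_init('foggyline_happyhour/event', 'event_id');
    }
}

// The collection, used to load, filter, and sort many entities at once
class Foggyline_HappyHour_Model_Resource_Event_Collection
    extends Mage_Core_Model_Resource_Db_Collection_Abstract
{
    protected function _construct()
    {
        $this->_init('foggyline_happyhour/event');
    }
}

With the config.xml wiring in place, a call such as Mage::getModel('foggyline_happyhour/event') would then return an instance of the model class.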
The default Magento installation comes with several built-in shipping methods available: Flat Rate, Table Rates, Free Shipping, UPS, USPS, FedEx, and DHL. For some merchants this is more than enough; for others, you are free to build an additional custom shipping extension with support for one or more shipping methods. Be careful about the terminology here: a shipping method resides within a shipping extension, and a single extension can define one or more shipping methods. In this article we will learn how to create our own shipping method.

Shipping methods

There are two, unofficially divided, types of shipping methods:

- Static, where shipping cost rates are based on a predefined set of rules. For example, you can create a shipping method called 5+ and make it available for selection during checkout only if the customer has added more than five products to the cart.
- Dynamic, where shipping cost rates are retrieved from various shipping providers. For example, you have a web service called ABC Shipping that exposes a SOAP web service API which accepts the products' weight, length, height, width, and the shipping address, and returns the calculated shipping cost, which you can then show to your customer.

Experienced developers would probably expect one or more PHP interfaces to handle the implementation of new shipping methods. The same goes for Magento: implementing a new shipping method is done via an interface and proper configuration (a hedged carrier sketch appears at the end of this article).

The default Magento installation also comes with several built-in payment methods: PayPal, Saved CC, Check/Money Order, Zero Subtotal Checkout, Bank Transfer Payment, Cash On Delivery payment, Purchase Order, and Authorize.Net. For some merchants this is more than enough. Various additional payment extensions can be found on Magento Connect. For those that do not yet exist, you are free to build an additional custom payment extension with support for one or more payment methods. Building a payment extension is usually a non-trivial task that requires a lot of focus.

Payment methods

There are several unofficially divided types of payment method implementations, such as redirect payment, hosted (on-site) payment, and an embedded iframe. Two of them stand out as the most commonly used:

- Redirect payment: During the checkout, once the customer reaches the final ORDER REVIEW step, he/she clicks on the Place Order button. Magento then redirects the customer to a specific payment provider website, where the customer is supposed to provide the credit card information and execute the actual payment. What's specific about this is that prior to redirection, Magento needs to create the order in the system, and it does so by assigning this new order a Pending status. Later, if the customer provides valid credit card information on the payment provider website, the customer gets redirected back to the Magento success page. The main concept to grasp here is that the customer might simply close the payment provider website and never return to your store, leaving your order indefinitely in Pending status. The great thing about these redirect-type payment method providers (gateways) is that they are relatively easy to implement in Magento.
- Hosted (on-site) payment: Unlike redirect payment, there is no redirection here; everything is handled on the Magento store. During the checkout, once the customer reaches the Payment Information step, he/she is presented with a form for providing the credit card information. Then, when he/she clicks on the Place Order button in the final ORDER REVIEW checkout step, Magento internally calls the appropriate payment provider web service, passing it the billing information. Depending on the web service response, Magento then internally sets the order status to Processing or some other status. For example, this payment provider web service might be a standard SOAP service with a few methods, such as orderSubmit. Additionally, we don't even have to use a real payment provider; we can just make a "dummy" payment implementation like the built-in Check/Money Order payment. You will often find that most merchants prefer this type of payment method, as they believe that redirecting the customer to a third-party site might negatively affect their sale. Obviously, with this payment method there is more overhead for you as a developer in handling the implementation. On top of that, there are security concerns around handling credit card data on the Magento side, in which case PCI compliance is obligatory. If this is your first time hearing about PCI compliance, please take some time to familiarize yourself with it. This type of payment method is slightly more challenging to implement than the redirect payment method.

Magento Connect

Magento Connect is one of the world's largest eCommerce application marketplaces, where you can find various extensions to customize and enhance your Magento store. It allows Magento community members and partners to share their open source or commercial contributions for Magento with the community. Publishing your extension to Magento Connect is a three-step process:

- Packaging your extension
- Creating an extension profile
- Uploading the extension package

We will talk more about this later in the article. Only community members and partners have the ability to publish their contributions. Becoming a community member is simple: just register as a user on the official Magento website, https://www.magentocommerce.com. A member account is a requirement for the further packaging and publishing of your extension.
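As promised in the shipping-method discussion above, here is a minimal, hedged sketch of what a static carrier modeled on the 5+ example could look like. Magento 1 carriers extend Mage_Shipping_Model_Carrier_Abstract and implement Mage_Shipping_Model_Carrier_Interface; the class name, the fiveplus carrier code, and the zero-price rule are illustrative assumptions, and the carrier would still need to be declared in the extension's config.xml and system.xml.

<?php
// Hedged sketch of a static carrier: offers a single "5+" method only
// when the cart holds more than five items (names are illustrative).
class Foggyline_HappyHour_Model_Carrier_FivePlus
    extends Mage_Shipping_Model_Carrier_Abstract
    implements Mage_Shipping_Model_Carrier_Interface
{
    protected $_code = 'fiveplus';

    public function collectRates(Mage_Shipping_Model_Rate_Request $request)
    {
        // Static rule: hide the method entirely for small carts
        if ($request->getPackageQty() <= 5) {
            return false;
        }

        $result = Mage::getModel('shipping/rate_result');

        $method = Mage::getModel('shipping/rate_result_method');
        $method->setCarrier($this->_code);
        $method->setCarrierTitle($this->getConfigData('title'));
        $method->setMethod($this->_code);
        $method->setMethodTitle('5+ items shipping');
        $method->setPrice(0);   // assumed flat/free rate for illustration
        $method->setCost(0);

        $result->append($method);
        return $result;
    }

    public function getAllowedMethods()
    {
        return array($this->_code => '5+ items shipping');
    }
}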


Integrating Twitter with Magento

Packt
14 Jan 2011
2 min read
Integrating your Magento website with Twitter is a useful way to stay connected with your customers. You'll need a Twitter account (or, more specifically, an account for your business), but once that's in place it's actually pretty easy.

Adding a 'Follow Us On Twitter' button to your Magento store

One of the simpler ways to integrate your store's Twitter feed with Magento is to add a 'Follow Us On Twitter' button to your store's design.

Generating the markup from the Twitter website

Go to the Twitter Goodies website. Select the Follow Buttons option and then select Looking for Follow us on Twitter buttons? towards the bottom of the screen. The buttons will now change to the FOLLOW US ON Twitter buttons. Select the style of button you'd like to use on your Magento store, and then select the generated HTML provided in the pop-up that is displayed. The generated HTML for the M2 Store's Twitter account (with the username M2MagentoStore) looks like the following:

<a href="http://www.twitter.com/M2MagentoStore">
  <img src="http://twitter-badges.s3.amazonaws.com/follow_us-a.png"
       alt="Follow M2MagentoStore on Twitter"/>
</a>

Adding a static block in Magento for your Twitter button

Now you will need to create a new static block using the Magento CMS feature: navigate to CMS | Static Blocks in your Magento store's administration panel and click on Add New Block. As you did when creating a static block for the supplier logos used in your store's footer, complete the form to create the new static block. Add the Follow Us On Twitter button to the Content field by disabling the Rich Text Editor with the Show/Hide Editor button and pasting in the markup you generated previously. You don't need to upload an image to your store through Magento's CMS here, as the Twitter buttons are hosted elsewhere. Note that the Identifier field reads follow-twitter; you will need this for the layout changes you are about to make!
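The excerpt ends before showing those layout changes, but for context, a static block with that identifier is typically pulled into a template through a layout update. The following is a hedged sketch assuming a theme-level local.xml and a footer placement; the block name and reference target are illustrative, while cms/block with setBlockId is the standard Magento 1 mechanism for rendering a static block.

<!-- Hedged sketch of a layout update (e.g. app/design/frontend/.../layout/local.xml);
     block_id matches the follow-twitter Identifier noted above -->
<layout>
    <default>
        <reference name="footer">
            <block type="cms/block" name="follow.twitter">
                <action method="setBlockId"><block_id>follow-twitter</block_id></action>
            </block>
        </reference>
    </default>
</layout>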

Checkbox Persistence in Tabular Forms (Reports)

Packt
17 Jun 2010
7 min read
One of the problems we face with Tabular Forms is that pagination doesn't submit the current view of the Tabular Form (Report) page, and if we are using Partial Page Refresh (PPR), it doesn't even reload the entire page. As such, Session State is not saved before we move to the next/previous view. Without saving Session State, all the changes we might have made to the current form view are lost upon using pagination. This problematic behavior is most notable when we are using a checkboxes column in our Tabular Form (Report). We can mark specific checkboxes in the current Tabular Form (Report) view, but if we paginate to another view and then return, the marked checkboxes will be cleared (no Session State, no history to rely on). In some cases, it can be very useful to save the marked checkboxes while paginating through the Tabular Form (Report).

Joel Kallman, from the APEX development team, blogged about this issue (http://joelkallman.blogspot.com/2008/03/preserving-checked-checkboxes-in-report.html) and offered a simple solution, which uses AJAX and APEX collections. Using APEX collections means that the marked checkboxes will be preserved for the duration of a specific user's current APEX session. If that's what you need, Joel's solution is very good, as it utilizes built-in APEX resources in an optimal way. However, sometimes the current APEX session is not persistent enough. In one of my applications I needed more lasting persistence, which could be used across APEX users and sessions. So, I took Joel's idea and modified it a bit. Instead of using APEX collections, I decided to save the checked checkboxes into a database table. The database table, of course, can support unlimited persistence across users.

Report on CUSTOMERS

We are going to use a simple report on the CUSTOMERS table, where the first column is a checkboxes column. The following is a screenshot of the report region:

We are going to use AJAX to preserve the status of the checkboxes in the following scenarios:

Using the checkbox in the header of the first column to check or clear all the checkboxes in the first column of the current report view
Individual row checking or clearing of a checkbox

The first column—the checkboxes column—represents the CUST_ID column of the CUSTOMERS table, and we are going to implement persistence by saving the values of this column, for all the checked rows, in a table called CUSTOMERS_VIP. This table includes only one column:

CREATE TABLE "CUSTOMERS_VIP" (
  "CUST_ID" NUMBER(7,0) NOT NULL ENABLE,
  CONSTRAINT "CUSTOMERS_VIP_PK" PRIMARY KEY ("CUST_ID") ENABLE
)

Bear in mind: in this particular example we are talking about persistence across APEX users and sessions. If, however, you need to maintain specific user-level persistence, as happens natively when using APEX collections, you can add a second column to the table to hold the APP_USER of the user. In this case, you'll need to amend the appropriate WHERE clauses and INSERT statements to include and reflect the second column.
The report SQL query

The following is the SQL code used for the report:

SELECT apex_item.checkbox(10, l.cust_id, 'onclick=updateCB(this);', r.cust_id) as cust_id,
       l.cust_name,
       l.cust_address1,
       l.cust_address2,
       l.cust_city,
       l.cust_zip_code,
       (select r1.sname from states r1 where l.cust_state = r1.code) state,
       (select r2.cname from countries r2 where l.cust_country = r2.code) country
FROM customers l, customers_vip r
WHERE r.cust_id (+) = l.cust_id
ORDER BY cust_name

The segments of the SELECT statement we are most interested in are the APEX_ITEM.CHECKBOX call and the outer join to CUSTOMERS_VIP. The APEX_ITEM.CHECKBOX function creates a checkboxes column in the report. Its third parameter—p_attributes—allows us to define HTML attributes within the checkbox <input> tag. We are using this parameter to attach an onclick event to every checkbox in the column. The event fires a JavaScript function—updateCB(this)—which takes the current checkbox object as a parameter and initiates an AJAX process. The fourth parameter of the APEX_ITEM.CHECKBOX function—p_checked_values—allows us to determine the initial status of the checkbox. If the value of this parameter is equal to the value of the checkbox (determined by the second parameter—p_value), the checkbox will be checked. This parameter is the heart of the solution. Its value is taken from the CUSTOMERS_VIP table, using an outer join with the value of the checkbox. The outcome is that every time the CUSTOMERS_VIP table contains a CUST_ID value equal to the current checkbox value, that checkbox will be checked.

The report headers

In the Report Attributes tab we can set the report headers using the Custom option. We are going to use this option to set friendlier report headers, but mostly to define the first column header—a checkbox that allows us to toggle the status of all the column checkboxes. The full HTML code we are using for the header of the first column is:

<input type="checkbox" id="CB" onclick="toggleAll(this,10);" title="Mark/Clear All">

We are actually creating a checkbox, with an ID of CB and an onclick event that fires the JavaScript function toggleAll(this,10). The first parameter of this function is a reference to the checkbox object, and the second one is the first parameter—p_idx—of the APEX_ITEM.CHECKBOX function we are using to create the checkbox column.

The AJAX client-side JavaScript functions

So far, we have mentioned two JavaScript functions that initiate an AJAX call. The first—updateCB()—initiates an AJAX call that updates the CUSTOMERS_VIP table according to the status of a single (row) checkbox. The second one—toggleAll()—initiates an AJAX call that updates the CUSTOMERS_VIP table according to the status of the entire checkboxes column. Let's review these functions.

The updateCB() JavaScript function

The following is the code of this function:

function updateCB(pItem){
  var get = new htmldb_Get(null, $v('pFlowId'),
                           'APPLICATION_PROCESS=update_CB', $v('pFlowStepId'));
  get.addParam('x01', pItem.value);
  get.addParam('x02', pItem.checked);
  get.GetAsync(function(){return;});
  get = null;
}

The function accepts, as a parameter, a reference to an object—this—that points to the checkbox we just clicked. We are using this reference to set the temporary item x01 to the value of the checkbox and x02 to its status (checked/unchecked). As these are the AJAX-related temporary items, we use the addParam() method to set them. These items will be available to us in the on-demand PL/SQL process update_CB, which implements the server-side logic of this AJAX call.
We specified this process in the third parameter of the htmldb_Get constructor function—'APPLICATION_PROCESS=update_CB'. In this example, we are using the name 'get' for the variable referencing the new instance of the htmldb_Get object. The use of this name is very common in many AJAX examples, especially on the OTN APEX forum and its related examples. As we'll see when we review the server-side logic of this AJAX call, all it does is update—insert or delete—the content of the CUSTOMERS_VIP table. As such, it doesn't have an immediate effect on the client side, and we don't need to wait for its result. This is a classic case for using an asynchronous AJAX call. We do so by using the GetAsync() method. In this specific case, as the client side doesn't need to process any server response, we can use an empty function as the GetAsync() parameter.
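For completeness, here is a rough sketch of what the companion toggleAll() function could look like. This is not the book's actual implementation: the on-demand process name toggle_CB, the colon-delimited x01 payload, and the derivation of the checkbox item name are illustrative assumptions layered on the same htmldb_Get pattern used by updateCB():

function toggleAll(pItem, pIdx){
  // APEX renders tabular form items as f01..f50, so p_idx 10 becomes "f10";
  // single-digit indexes need zero padding (for example, 5 -> "f05")
  var name = (pIdx < 10 ? 'f0' + pIdx : 'f' + pIdx);
  var checkboxes = document.getElementsByName(name);
  var values = [];
  for (var i = 0; i < checkboxes.length; i++) {
    checkboxes[i].checked = pItem.checked;  // mirror the header checkbox
    values.push(checkboxes[i].value);
  }
  // One asynchronous call sends the whole column to a (hypothetical)
  // on-demand process named toggle_CB
  var get = new htmldb_Get(null, $v('pFlowId'),
                           'APPLICATION_PROCESS=toggle_CB', $v('pFlowStepId'));
  get.addParam('x01', values.join(':'));  // colon-delimited CUST_ID list
  get.addParam('x02', pItem.checked);     // checked or cleared
  get.GetAsync(function(){return;});
  get = null;
}

The matching on-demand process would then loop over the colon-delimited list and insert into, or delete from, CUSTOMERS_VIP accordingly.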

Creating a New iOS Social Project

Packt
08 Oct 2013
8 min read
In this article, by Giuseppe Macri, author of Integrating Facebook iOS SDK with Your Application, we start our coding journey. We are going to build our social application from the ground up. In this article we will learn about:

Creating a Facebook App ID: It is a key used with our APIs to communicate with the Facebook Platform.
Downloading the Facebook SDK: The iOS SDK can be downloaded from two different channels. We will look into both of them.
Creating a new XCode project: I will give a brief introduction on how to create a new XCode project and a description of the IDE environment.
Importing the Facebook iOS SDK into our XCode project: I will go through the import of the Facebook SDK into our XCode project step by step.
Getting familiar with Storyboard to build a better interface: This is a brief introduction to the Apple tool used to build our application interface.

Creating a Facebook App ID

In order to communicate with the Facebook Platform using their SDK, we need an identifier for our application. This identifier, also known as the Facebook App ID, will give access to the Platform; at the same time, we will be able to collect a lot of information about its usage, impressions, and ads. To obtain a Facebook App ID, we need a Facebook account. If you don't have one, you can create a Facebook account via the following page at https://www.facebook.com:

The previous screenshot shows the new Facebook account sign-up form. Fill out all the fields and you will be able to access the Facebook Developer Portal. Once we are logged into Facebook, we need to visit the Developer Portal. You can find it at https://developers.facebook.com/. I already mentioned the important role of the Developer Portal in developing our social application.

The previous screenshot shows the Facebook Developer Portal. The main section, the top part, is dedicated to the current SDKs. On the top blue bar, click on the Apps link, and it will redirect us to the Facebook App Dashboard.

The previous screenshot shows the Facebook App Dashboard. To the left, we have a list of apps; in the center of the page, we can see the details of the currently selected app from our list. The page shows the application's settings and analytics (Insights). In order to create a new Facebook App ID, click on Create New App on the top-right part of the App Dashboard.

The previous screenshot shows the first step in creating a Facebook App ID. When providing the App Name, be sure the name does not already exist or violate any copyright laws; otherwise, Facebook will remove your app. App Namespace is something we need if we want to define custom objects and/or actions in the Open Graph structure. The App Namespace topic is not part of this book. Web hosting is really useful when creating a social web application. Facebook, in partnership with other providers, can create web hosting for us if needed. This part is not going to be discussed in this book; therefore, do not check this option for your application. Once all the information is provided, we can move on to the next step. Please fill out the form and move forward to the next one. At the top of the page, we can see both App ID and App Secret. These are the most important pieces of information about our new social application. App ID is a piece of information that we can share, unlike App Secret. At the center of our new Facebook Application page, we have basic information fields.
Do not worry about Namespace, App Domains, and Hosting URL; these fields are for web applications. Sandbox Mode only allows developers to use the current application. Developers are specified through the Developer Roles link on the left sidebar. Moving down, select the type of app. For our goal, select Native iOS App. You can select multiple types and create multiplatform social applications. Once you have checked Native iOS App, you will be prompted with the following form:

The only field we need to provide for now is the Bundle ID. The Bundle ID relates to the XCode settings; be sure that the Facebook Bundle ID matches our XCode social app bundle identifier. The format for the bundle identifier is always something like com.MyCompany.MyApp. iPhone/iPad App Store IDs are the App Store identifiers of your application, if you have published your app in the App Store. If you didn't provide any of them, you will receive a warning message after you save your changes; however, don't worry, our new App ID is now ready to be used. Save your changes and get ready to start our developing journey.

Downloading the Facebook iOS SDK

The iOS Facebook SDK can be downloaded through two different channels:

Facebook Developer Portal: For downloading the installation package
GitHub: For downloading the SDK source code

Using the Facebook Developer Portal, we can download the iOS SDK as an installation package. Visit https://developers.facebook.com/ios/ as shown in the following screenshot and click on Download the SDK to download the installation package. The package, once installed, will create a new FacebookSDK folder within our Documents folder.

The previous screenshot shows the content of the iOS SDK installation package. Here, we can see four elements:

FacebookSDK.framework: This is the framework that we will import into our XCode social project
LICENSE: It contains information about licensing and usage of the framework
README: It contains all the necessary information about the framework installation
Samples: It contains a useful set of sample projects that use the iOS framework's features

With the installation package, we only have the compiled files to use, with no original source code. It is possible to download the source code using the GitHub channel. To clone the git repo, you will need a Git client, either Terminal or GUI. The iOS SDK framework git repo is located at https://github.com/facebook/facebook-ios-sdk.git. I prefer the Terminal client, which I am using in the following command:

git clone https://github.com/facebook/facebook-ios-sdk.git

After we have cloned the repo, the target folder will look like the following screenshot:

The previous picture shows the content of the iOS SDK GitHub repo. Two new elements are present in this repo: src and scripts. src contains the framework source code that needs to be compiled. The scripts folder has all the necessary scripts needed to compile the source code. Using the GitHub version allows us to keep the framework in our social application always up to date, but for the scope of this book, we will be using the installation package.

Creating a new XCode project

We created a Facebook App ID and downloaded the iOS Facebook SDK. It's time for us to start our social application using XCode. XCode will prompt the welcome dialog if Show this window when XCode launches is enabled. Choose the Create a new XCode project option. If the welcome dialog is disabled, navigate to File | New | Project….
Choosing the type of project to work with is the next step, as shown in the following screenshot: the bar to the left defines whether the project is targeting a desktop or a mobile device. Navigate to iOS | Application and choose the Single View Application project type.

The previous screenshot shows our new project's details. Provide the following information for your new project:

Product Name: This is the name of our application.
Organization Name: I strongly recommend filling out this part even if you don't belong to an organization, because this field will be part of our bundle identifier.
Company Identifier: It is still optional, but we should definitely fill it out to respect the best-practice format for the bundle identifier.
Class Prefix: This prefix will be prepended to every class we are going to create in our project.
Devices: We can select the target device of our application; in this case, it is an iPhone, but we could also have chosen iPad or Universal.
Use Storyboards: We are going to use storyboards to create the user interface for our application.
Use Automatic Reference Counting: This feature enables automatic memory management of Objective-C objects; the compiler inserts the reference-counting calls for us.
Include Unit Tests: If it is selected, XCode will also create a separate project target to unit-test our app; this is not part of this book.

Save the new project. I strongly recommend checking the Create a local git repository for this project option in order to keep track of changes. Once the project is under version control, we can also decide to use GitHub as the remote host to store our source code.

JasperReports 3.6: Creating a Simple, One-page TOC for Your Report

Packt
30 Jun 2010
3 min read
Getting ready

Refer to the installPostgreSQL.txt file included in the source code download (chap5) to install and run PostgreSQL, which should be up and running before you proceed. The source code also includes a file named copySampleDataIntoPGS.txt, which helps you create a database named jasperdb6 and copy sample data for this recipe into the database.

How to do it...

Open the SimpleTOCReport.jrxml file from the Task2 folder of the source code. The Designer tab of iReport shows a report containing data in the Title, Column Header, Customer Group Header 1, Product Group Header 1, Detail 1, and Product Group Footer 1 sections, as shown in the following screenshot:

Switch to the Preview tab and you will see invoices for each customer grouped by product names. Switch back to the Designer tab. Right-click on the Variables node in the Report Inspector window on the left side of your report. From the pop-up menu that appears, select the Add Variable option. A new variable named variable1 will be added at the end of the variables list. While variable1 is selected, find the Name property in the Properties window below the Palette of components and change its value to FirstRecordOfANewGroup. The name of the variable1 variable will change to FirstRecordOfANewGroup. Select the Variable Class property and change its value to java.lang.Integer. Select the Calculation property and change its value to Count. Select the Reset type property and change its value to Group. Select the Reset group property and change its value to Customer. Select the Variable Expression property and click the button beside it. A Variable Expression window with no default expression will open, as shown in the next screenshot:

Select Variables in the first column of the lower half of the Variable Expression window. Then double-click the FirstRecordOfANewGroup variable in the second column. A new expression $V{FirstRecordOfANewGroup} will appear in the Variable Expression window, as shown in the next screenshot. Press the OK button. Right-click on the Variables node in the Report Inspector window. A pop-up menu will appear. Select the Add Variable option. A new variable named variable1 will be added at the end of the variables list. While variable1 is selected, find the Name property in the Properties window below the Palette of components and change its value to TOC. The name of the variable1 variable will change to TOC. Select the Variable Class property and change its value to java.lang.String.

Using Web Pages in UPK 3.5

Packt
16 Nov 2009
12 min read
Using Web Pages in the Concept pane

The most common use of Web Pages is to provide context information for Topics. Look at the following image of the Outline for our example course: you will see that the upper-right section of the Outline window contains a pane labeled Concept. If you want any information to be displayed in this pane, then you need to create a Web Page and attach it to the content object in the Outline.

Version Difference: Although the Concept pane always appears in the Outline view, if it is empty, then it does not appear in the Player. This is, thankfully, a new feature in UPK 3.5. In previous versions, the Concept pane always appeared in the Player, often appearing as a blank frame where developers couldn't be bothered to provide any concepts.

To create a new Web Page and attach it to a content object, carry out the following steps:

Open the Outline containing the content object to which you want to attach the Web Page, in the Outline Editor. Click on the content object to select it. Although in this example we are attaching a Web Page to the Concept pane for a Topic, Modules and Sections also have Concept panes, so you can attach Web Pages to these as well. Click on the Create new web page button in the Concept pane. The Save As dialog box is displayed. Navigate to the folder in which you want to save the Web Page (we will use an Assets sub-folder for all of our Web Pages), enter a name for the Web Page (it makes sense to use a reference to the content object to which the Web Page relates), and then click on the Save button. The Web Page Editor is opened on the Developer screen, as shown in the next screenshot:

Enter the information that you want to appear in the Concept pane in the editor (as has already been done in the previous example). You can set the font face and size; set text to bold, italics, and underlined; change the text color; and change the background color (again, as has been done in the earlier example). You can also change the paragraph alignment, and format numbered and bulleted lists. Once you have entered the required text, click on the Save button to save your changes, and then close the Web Page Editor tabbed page. You are returned to the Outline Editor. Now, the Concept pane shows the contents of the Web Page, as shown in the next screenshot:

Version Difference: In UPK 3.5 (and OnDemand 9.1) you can only attach a single Web Page to the Concept pane. This is a change from OnDemand 8.7, where you could attach multiple Infoblocks and they would be displayed sequentially. (Note that if you convert content from OnDemand 8.7 where there are multiple Infoblocks in a single Concept pane, then all of the attached Infoblocks will be converted to a single Web Page in UPK 3.5.)

The above steps explain how to attach a Web Page to the Concept pane for an exercise from the Outline. Although this is done from the Outline, the Web Page is attached to the Topic content object and not to the outline element. If you subsequently insert the same Topic in another Outline, the same Web Page will be used in the Concept pane of the new Outline. You can also attach a Web Page to the Concept pane for an exercise from within the Topic Editor. This is a useful option if you want to create a concept Web Page but have not yet built an Outline to house the Topic. To do this, follow these steps: Open the Topic in the Topic Editor. Select menu option View | Concept Properties. The Concept Properties dialog box is displayed.
This is very similar to the Concept pane seen in the Overview; it contains the same buttons. Create and save the Web Page as described above. When you have finished, and the Web Page is displayed in the Concept Properties dialog box, click on the OK button.

Using images in Web Pages

As stated above, a Web Page can contain an image. This can be instead of, or in addition to, any text (although if you only wanted to include a single image in the Web Page, you could always use a Package, as explained later in this article). Images are a nice way of adding interest to a Web Page (and therefore to your training), or of providing additional information that is better explained graphically (after all, a picture is worth a thousand words). However, if you are using images in the Concept pane, then you should consider the overall size of the image and the likely width of the Concept pane, bearing in mind that the trainee may run the Player in a smaller window than the one you design for. For our sample exercise, given that we are providing simulations for the SAP system, we will include a small SAP logo in the Web Page that appears in the Concept pane for our course Module. For the sake of variety, we will do this from the Library, and not from the Outline Editor. To add an image to a Web Page, carry out the steps described below:

In the Library, locate the folder containing the Web Page to which you want to add the image. Double-click on the Web Page to open it in the Web Page Editor. As before, this is opened in a separate tab in the main UPK Developer window, as can be seen in the next screenshot. Within the Web Page, position the cursor at the place in the text where you want the image to appear. Select menu option Insert | Image. The Insert Image dialog box is displayed, as shown in the next screenshot:

In the Link to bar on the leftmost side of the dialog box, select the location of the image file that you want to insert into the Web Page. You can insert an image that you have already imported into your Library (for example, in a Package), an image that is located on your computer or any attached drive (option Image on My Computer), or an image from the Internet (option URL). For our sample exercise, we will insert an image from our computer. In the rightmost side of the dialog box, navigate to and select the image file that you want to insert into the Web Page. Click on OK. The image is inserted, as shown in the following screenshot:

Save the Web Page, and close the Web Page Editor if you have finished editing this Web Page. (We have not; we want to adjust the image, as explained below.) Of course, this doesn't look too pretty. Thankfully, we can do something about this, because UPK provides some rudimentary features for adjusting images in Web Pages. To adjust the size or position of an image in a Web Page, carry out the following steps:

With the Web Page open in the Web Page Editor, right-click on the image that you want to adjust, and then select Image Properties from the context menu. The Image Properties dialog box is displayed, as shown in the next screenshot: In the Alternative Text field, enter a short text description of the image. This will be used as the ToolTip in some web browsers. Under Size, select whether you want to use the original image size or not, and specify a new height and width if not. Under Appearance, select a border width and indicate where the image should be aligned on the Web Page. Your choices are: Top, Middle, Bottom, Left, and Right.
The first three of these (the vertical options) control the position of the image relative to the line of text in which it is located. The last two (the horizontal options) determine whether the image is left-aligned or right-aligned within the overall Web Page. Although these are two independent things (vertical and horizontal), you can only select one, so if you want the image to be right-aligned and halfway down the page, you can't do it. Click on OK to confirm your changes. Save and close the Web Page.

For our sample exercise, we resized the image, set it to be right-aligned, and added a 1pt border around it (because it looked odd with a white background against the blue of the Web Page, without a border). A better option would be to use an image with a transparent background; in this example we have used an image with a solid background just for the purposes of illustration. These settings are as shown in the previous screenshot. This gives us a final Web Page as shown in the next screenshot. Note that this screenshot is taken from the Player, so that you can see how images are handled in the Player. Note that the text flows around the image. You will see that there is a little more space to the right of the image than there is above it. This is to leave room for a scrollbar.

Creating independent Web Pages

In the previous section, we looked at how to use a Web Page to add information to the Concept pane of a content object. In this section, we will look at how to use Web Pages to provide information in other areas. Observant readers will have noticed that a Web Page is in fact an independent content object itself. When you created a Web Page to attach to a Concept pane, you edited this Web Page in its own tabbed editor and saved it to its own folder (our Assets folder). Hopefully, you also noticed that in addition to the Create new web page button, the Concept pane also has a Create link button that can be used to attach an existing Web Page to the Concept pane. It should, therefore, come as no surprise to learn that Web Pages can be created independently of the Concept pane. In fact, the Concept pane is only one of several uses of Web Pages. To create a Web Page that is independent of a Concept pane (or anything else), carry out these steps:

In the Library, navigate to the folder in which you want to store the Web Page. In our example, we are saving all of the Web Pages for a module in a sub-folder called Assets within the course folder. Select menu option File | New | Web Page. A Web Page Editor tabbed page is opened up on the Developer screen. The content and use of this is exactly as described above in the explanation of how to create a Web Page from within an Outline. Enter the required information into the Web Page, and format it as required. We have already covered most of the available options above. Once you have made your changes, click on the Save button. You will be prompted to select the destination folder (which will default to the folder selected in Step 1, although you can change this) and a file name. Refer to the description of the Save As dialog box above for additional help if necessary. Close the Web Page Editor. You have now created a new Web Page. Now let's look at how to use it.

Using Web Pages in Topics

If you recall our long-running exercise on maintaining your SAP user profile, you will remember that we ask the user to enter their last name and their first name.
These terms may be confusing in some countries—especially in countries where the "family name" actually comes before the "given name"—so we want to provide some extra explanation of some common name formats in different countries, and how these translate into first name and last name. We'll provide this information in a Web Page, and then link to this Web Page at the relevant place(s) within our Topic. First, we need to create the Web Page. How to do this is explained in the section Creating independent Web Pages, above. For our exercise, our created Web Page is as follows:

There are two ways in which you can link to a Web Page from a Topic. These are explained separately, below.

Linking via a hyperlink

With a hyperlink, the Web Page is linked from a word or phrase within the Bubble Text of a Frame. (Note that it is only possible to do this for Custom Text, because you can't select the Template Text to hyperlink from.) To create a hyperlink to a Web Page from within a Frame in a Topic, carry out the steps described below:

Open up the Topic in the Topic Editor. Navigate to the Frame from which you want to provide the hyperlink. In our exercise, we will link from the Explanation Frame describing the Last name field (this is Frame 5B). In the Bubble Properties pane, select the text that you want to form the hyperlink (that is, the text that the user will click on to display the Web Page). Click on the Bubble text link button in the Bubble Properties pane. The Bubble Text Link Properties dialog box is displayed. Click on the Create link button to create a link to an existing Web Page (you could also click on the Create new web page button to create a new Web Page if you have not yet created it). The Insert Hyperlink dialog box is displayed, as shown in the next screenshot:

Make sure that the Document in Library option is selected in the Link to: bar. In the Look in: field, navigate to the folder containing the Web Page (the Assets folder, in our example). In the file list, click on the Web Page to which you want to create a hyperlink. Click on the Open button. Back in the Bubble Text Link Properties dialog box, click on OK. This hyperlink will now appear as follows, in the Player:

Note that there is no ToolTip for this hyperlink. There was no opportunity to enter one in the steps above, so UPK doesn't know what to use.

Version Difference: In OnDemand 8.7 the Infoblock name was used as the ToolTip, but this is not the case from OnDemand 9.x onwards.

Dynamic Menus in WordPress

Packt
07 Dec 2009
5 min read
This is the nice thing about WordPress—it's all "dynamic". Once you install WordPress and design a great theme for it, anyone with the right level of administrative capability can log into the Administration Panel and add, edit, or delete content and menu items. But generally, when people ask for "dynamic menus", what they really want are those appearing and disappearing drop-down menus which, I believe, they like because they quickly give a site a very "busy" feel. I must add my own disclaimer: I don't like dropdowns. Before you get on my case, I will say it's not that they're "wrong" or "bad"; they just don't meet my own aesthetic standards and I personally find them non-user-friendly. I'd prefer to see a menu system that, if subsections are required, displays them somewhere consistently on the page, either by having a vertical navigation expand to display subsections underneath, or by showing additional subsections in a set location on the page if a horizontal menu is used. I like to be able to look around and say, "OK, I'm in the New Items | Cool Drink section and I can also check out Red Dinks and Retro Dinks within this section". Having to constantly go back up to the menu and drop down the options to remind myself of what's available and what my next move might be is annoying. Still haven't convinced you not to use drop-downs? OK, read on.

Drop-down menus

So you're going to use dropdowns. Again, it's not "wrong"; however, I would strongly caution you to help your client take a look at their target users before implementing them. If there's a good chance that most users are going to use the latest browsers that support the current JavaScript, CSS, and Flash standards, and everyone has great mobility and is "mouse-ready", then there's really no problem in going for it. If it becomes apparent that any percentage of the site's target users will be using older browsers or have disabilities that prevent them from using a mouse and will limit them to tabbing through content, you must consider not using drop-down menus. I was especially negative about drop-down menus as, until recently, they required bulky JavaScripting or the use of Flash, which does not make for clean, semantic, and SEO-friendly (or accessible) XHTML. Enter the Suckerfish method developed by Patrick Griffiths and Dan Webb. This method is wonderful because it takes valid, semantically accurate, unordered lists (WordPress' favorite!) and, using almost pure CSS, creates dropdowns. The drop-down menus are not tab-accessible, but they will simply display as a single, clear unordered list to older browsers that don't support the required CSS. IE6, as per usual, poses a problem or two for us, so there is some minimal DOM JavaScripting needed to compensate and achieve the correct effect in that browser. If you haven't heard of or worked with the Suckerfish method, I recommend you go online (right now!) and read Dan and Patrick's article in detail (http://alistapart.com/articles/dropdowns). More recently, Patrick and Dan have revisited this method with "Son of Suckerfish", which offers multiple levels and an even further pared-down DOM script. Check it out at http://www.htmldog.com/articles/suckerfish/dropdowns/. I also suggest you play around with the sample code provided in these articles so that you understand exactly how it works. Go on, and read it. When you get back, I'll review how to apply this method to your WordPress theme.

DIY SuckerFish menus in WordPress

All done? Great!
As you can see, the essential part of this effect is getting your menu items to show up as unordered lists with sub unordered lists. Once you do that, the rest of the magic can be easily handled by finessing the CSS that Patrick and Dan suggest into your theme's CSS, and placing the DOM script in your theme's header tag(s), in your header.php and/or index.php template files. Seriously, that's it! The really good news is that WordPress already outputs your content's pages and their subpages using unordered lists. Right-click on the page links in Firefox and select View Selected Source; the DOM inspector shows us that the menu is, in fact, being displayed using an unordered list. Now you can go into your WordPress Administration panel and add as many pages and subpages as you'd like (Administration | Page | Add New). You'll use the Page Parent tab on the right to assign your subpages to their parent. If you installed the pageMash plugin, it's even easier! You can drag-and-drop your created pages into any configuration you'd like; just be sure to hit the Update button when you're done. Once you've added subpages to a page, you'll be able to use the DOM Source of Selection viewer to see that your menu is displayed with unordered lists and sublists.
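For reference, the DOM script mentioned above is the small IE6 workaround published alongside the Suckerfish articles. The following is a minimal sketch of that widely circulated snippet, assuming your menu's containing element has the id nav (adjust the id to match your theme's markup):

sfHover = function() {
  // Grab every list item inside the menu
  var sfEls = document.getElementById("nav").getElementsByTagName("LI");
  for (var i = 0; i < sfEls.length; i++) {
    // IE6 only supports :hover on anchors, so toggle a class instead
    sfEls[i].onmouseover = function() {
      this.className += " sfhover";
    };
    sfEls[i].onmouseout = function() {
      this.className = this.className.replace(new RegExp(" sfhover\\b"), "");
    };
  }
};
// attachEvent exists only in older IE, so other browsers skip this entirely
if (window.attachEvent) window.attachEvent("onload", sfHover);

The companion CSS simply treats li.sfhover the same as li:hover, which is what lets IE6 reveal and hide the nested lists.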

Q Replication Components in IBM Replication Server

Packt
16 Aug 2010
8 min read
The individual stages for the different layers are shown in the following diagram:

The DB2 database layer

The first layer is the DB2 database layer, which involves the following tasks. For unidirectional replication, and all replication scenarios that use unidirectional replication as the base, we need to enable the source database for archive logging (but not the target database). For multi-directional replication, all the source and target databases need to be enabled for archive logging. We need to identify which tables we want to replicate. One of the steps is to set the DATA CAPTURE CHANGES flag for each source table, which will be done automatically when the Q subscription is created. This setting of the flag will affect the minimum point-in-time recovery value for the table space containing the table, which should be carefully noted if table space recoveries are performed.

Before moving on to the WebSphere MQ layer, let's quickly look at the compatibility requirements for the database name, the table name, and the column names. We will also discuss whether or not we need unique indexes on the source and target tables.

Database/table/column name compatibility

In Q replication, the source and target database names and table names do not have to match on all systems. The database name is specified when the control tables are created. The source and target table names are specified in the Q subscription definition. Now let's move on to whether or not we need unique indexes on the source and target tables. We do not need to be able to identify unique rows on the source table, but we do need to be able to do this on the target table. Therefore, the target table should have one of:

Primary key
Unique constraint
Unique index

If none of these exist, then Q Apply will apply the updates using all columns. However, the source table must have the same constraints as the target table, so any constraints that exist at the target must also exist at the source, as shown in the following diagram:

The WebSphere MQ layer

This is the second layer we should install and test—if this layer does not work, then Q replication will not work! We can either install the WebSphere MQ Server code or the WebSphere MQ Client code. Throughout this book, we will be working with the WebSphere MQ Server code. If we are replicating between two servers, then we need to install WebSphere MQ Server on both servers. If we are installing WebSphere MQ Server on UNIX, then during the installation process a user ID and group called mqm are created. If we as DBAs want to issue MQ commands, then we need to get our user ID added to the mqm group. Assuming that WebSphere MQ Server has been successfully installed, we now need to create the Queue Managers and the queues that are needed for Q replication. This section also includes tests that we can perform to check that the MQ installation and setup is correct. The following diagram shows the MQ objects that need to be created for unidirectional replication:

The following figure shows the MQ objects that need to be created for bidirectional replication:

There is a mixture of Local Queues (QLOCAL/QL) and Remote Queues (QREMOTE/QR), in addition to Transmission Queues (XMITQ) and channels. Once we have successfully completed the installation and testing of WebSphere MQ, we can move on to the next layer—the Q replication layer.
The Q replication layer

This is the third and final layer, which comprises the following steps: create the replication control tables on the source and target servers, and create the transport definitions. What we mean by this is that we somehow need to tell Q replication what the source and target table names are, what rows/columns we want to replicate, and which Queue Managers and queues to use. Some of the terms that are covered in this section are:

Logical table
Replication Queue Map
Q subscription
Subscription group (SUBGROUP)

What is a logical table?

In Q replication, we have the concept of a logical table, which is the term used to refer to both the source and target tables in one statement. An example in a peer-to-peer three-way scenario is shown in the following diagram, where the logical table is made up of tables TABA, TABB, and TABC:

What is a Replication/Publication Queue Map?

The first part of the transport definitions mentioned earlier is a definition of a Queue Map, which identifies the WebSphere MQ queues on both servers that are used to communicate between the servers. In Q replication, the Queue Map is called a Replication Queue Map, and in Event Publishing the Queue Map is called a Publication Queue Map. Let's first look at Replication Queue Maps (RQMs). RQMs are used by Q Capture and Q Apply to communicate. This communication is Q Capture sending Q Apply rows to apply, and Q Apply sending administration messages back to Q Capture. Each RQM is made up of three queues: a queue on the local server called the Send Queue (SENDQ), and two queues on the remote server—a Receive Queue (RECVQ) and an Administration Queue (ADMINQ), as shown in the preceding figures. An RQM can only contain one each of SENDQ, RECVQ, and ADMINQ. The SENDQ is the queue that Q Capture uses to send source data and informational messages. The RECVQ is the queue that Q Apply reads for transactions to apply to the target table(s). The ADMINQ is the queue that Q Apply uses to send control messages back to Q Capture. So, using the queues in the first "Queues" figure, the Replication Queue Map definition would be:

Send Queue (SENDQ): CAPA.TO.APPB.SENDQ.REMOTE on Source
Receive Queue (RECVQ): CAPA.TO.APPB.RECVQ on Target
Administration Queue (ADMINQ): CAPA.ADMINQ.REMOTE on Target

Now let's look at Publication Queue Maps (PQMs). PQMs are used in Event Publishing and are similar to RQMs, in that they define the WebSphere MQ queues needed to transmit messages between two servers. The big difference is that because Event Publishing has no Q Apply component, the definition of a PQM is made up of only a Send Queue.

What is a Q subscription?

The second part of the transport definitions is a definition called a Q subscription, which defines a single source/target combination and which Replication Queue Map to use for this combination. We set up one Q subscription for each source/target combination. Each Q subscription needs a Replication Queue Map, so we need to make sure we have one defined before trying to create a Q subscription. Note that if we are using the Replication Center, then we can choose to create a Q subscription even though an RQM does not exist; the wizard will walk you through creating the RQM at the point at which it is needed.
The structure of a Q subscription is made up of a source and a target section, and we have to specify:

The Replication Queue Map
The source and target table
The type of target table
The type of conflict detection and action to be used
The type of initial load, if any, that should be performed

If we define a Q subscription for unidirectional replication, then we can choose the name of the Q subscription—for any other type of replication we cannot. Q replication does not have the concept of a subscription set as there is in SQL Replication, where the subscription set holds all the tables that are related using referential integrity. In Q replication, we have to ensure that all the tables that are related through referential integrity use the same Replication Queue Map, which will enable Q Apply to apply the changes to the target tables in the correct sequence. In the following diagram, Q subscription 1 uses RQM1, Q subscription 2 also uses RQM1, and Q subscription 3 uses RQM3:

What is a subscription group?

A subscription group is the name for a collection of Q subscriptions that are involved in multi-directional replication, and is set using the SET SUBGROUP command.

Q subscription activation

In unidirectional, bidirectional, and peer-to-peer two-way replication, when Q Capture and Q Apply start, the Q subscription can be activated automatically (if that option was specified). For peer-to-peer three-way replication and higher, when Q Capture and Q Apply are started, only a subset of the Q subscriptions of the subscription group starts automatically, so we need to manually start the remaining Q subscriptions.

Alfresco 3 Business Solutions: Document Migration Strategies

Packt
15 Feb 2011
13 min read
The Best Money CMS project is now in full swing: we have the folder structure with business rules designed and implemented, and the domain content model created. It is now time to start importing any existing documents into the Alfresco repository. Most companies that implement an ECM system, and Best Money is no exception, will have a substantial number of files that they want to import, classify, and make searchable in the new CMS system. The planning and preparation for the document migration actually has to start a lot earlier, as there are a lot of things that need to be prepared:

Who is going to manage sorting out the files that should be migrated?
What is the strategy and process for the migration?
What sort of classification should be done during the import?
What filesystem metadata needs to be preserved during the import?
Do we need to write any temporary scripts or rules just for the import?

Document migration strategies

The first thing we need to do is figure out how the document migration is actually going to be done. There are several ways of making this happen. We will discuss a couple of different approaches, such as via the CIFS interface and via tools. There are also some general strategies that apply to any migration method.

General migration strategies

There are some common things that need to be done no matter which import method is used, such as setting up a document migration staging area.

Document staging area

The end users need to be able to copy or move the documents that they want to migrate to a staging area that mirrors the new folder structure we have set up in Alfresco. The best way to set up the staging area is to copy it from Alfresco via CIFS. When this is done, the end users can start copying files to the staging area. However, it is a good idea to train the users in the new folder structure before they start copying documents to it. We should talk to them about the folder structure changes, what rules and naming conventions have been set up, the idea behind it all, and why it should be followed. If we do not train the end users in the new folder structure, they will not honor it, and the old structure will get mixed up with the new structure via the document migration, and this is not something that we want. We planned and implemented the new structure for today's requirements and future requirements, and we do not want it broken before we even start using the system. The end users will typically work with the staging area over some time. It is good if they get a couple of weeks for this. It will take them time to think about which documents they want to migrate and whether any re-organization is needed. Some documents might also need to be renamed.
Preserving Modified Date on imported documents

We know that Best Money wants all the modified dates on their files to be preserved during import, as they have a review process that depends on them. This means that we have to use an import method that can preserve the Modified Date on the network drive files when they are merged into the Alfresco repository. The CIFS interface cannot be used for this, as it sets Modified Date to the current date. There are a couple of methods that can be used to import content into the repository and preserve the Modified Date:

Create an ACP file via an external tool and then import it
Custom code the import with the Foundation API and turn off the Audit aspect before the import
Use an import tool that also has the possibility to turn off the Audit aspect

At the time of writing (when I am using Alfresco 3.3.3 Enterprise and Alfresco Community 3.4a), there is no easy way to import files and preserve the Modified Date. When a file is added via Alfresco Explorer, Alfresco Share, FTP, CIFS, the Foundation API, the REST API, and so on, the Created Date and Modified Date are set to "now", so we lose all the Modified Date data that was set on the files on the network drive. The Created Date, Creator, Modified Date, Modifier, and Access Date are all so-called audit properties that are automatically managed by Alfresco if a node has the cm:auditable aspect applied. If we try to set these properties during an import via one of the APIs, it will not succeed. Most people want to import files via CIFS or via an external import tool. Alfresco is working towards supporting the preservation of dates when using both these methods for import. Currently, there is a solution to add files via the Foundation API and preserve the dates, which can be used by custom tools. The Alfresco product itself also needs this functionality in, for example, the Transfer Service Receiver, so that dates can be preserved when it receives files. The new solution that enables the use of the Foundation API to set auditable properties manually has been implemented in version 3.3.2 Enterprise and 3.4a Community. To be able to set audit properties, do the following. First, inject the policy behavior filter in the class that should do the property update:

<property name="behaviourFilter" ref="policyBehaviourFilter"/>

Then, in the class, turn off the audit aspect before the update. This has to be done inside a new transaction, and the date update must happen in the same transaction, as in the following example:

RetryingTransactionCallback<Object> txnWork = new RetryingTransactionCallback<Object>() {
    public Object execute() throws Exception {
        // Disable the cm:auditable behaviour so the dates can be set manually
        behaviourFilter.disableBehaviour(ContentModel.ASPECT_AUDITABLE);

        // In the same transaction, update the Created or Modified Date
        nodeService.setProperty(nodeRef, ContentModel.PROP_MODIFIED, someDate);
        . . .
    }
};

With JDK 6, the Modified Date is the only file metadata that we can access, so no other file metadata is available via the CIFS interface. If we use JDK 7, there is a new NIO 2 interface that gives access to more metadata. So, if we are implementing an import tool that creates an ACP file, we could use JDK 7 and preserve the Created Date, Modified Date, and potentially other metadata as well.

Post migration processing scripts

When the document migration has been completed, we might want to do further processing of the documents, such as setting extra metadata. This is specifically needed when documents are imported into Alfresco via the CIFS interface, which does not allow any custom metadata to be set during the import.
There might also be situations, such as in the case of Best Money, where a lot of the imported documents have older filenames (that is, filenames following an older naming convention) with important metadata that should be extracted and applied to the new document nodes. For post-migration processing, JavaScript is a convenient tool to use. We can easily define Lucene queries for the nodes we want to process, as the rules have applied domain document types such as Meeting to the imported documents, and we can use regular expressions to match and extract the metadata we want to apply to the nodes.

Search restrictions when running post-migration scripts

What we have to think about when running these post-migration scripts is that the repository now contains a lot of content, so each query we run might very well return more than 1,000 rows, and 1,000 rows is the default maximum number of rows a search will return. To change this to allow for 5,000 rows to be returned, we have to make some changes to the permission check configuration (Alfresco checks the permissions for each node that is being accessed, so the user running the query does not get back content that he or she should not have access to). Open the alfresco-global.properties file located in the alfresco/tomcat/shared/classes directory and add the following properties:

# The maximum time spent pruning results (was 10000)
system.acl.maxPermissionCheckTimeMillis=100000
# The maximum number of results to perform permission checks against (was 1000)
system.acl.maxPermissionChecks=5000

Unwanted Modified Date updates when running scripts

So, we have turned off the audit feature during document migration, or made some custom code changes to Alfresco, to get the documents' Modified Date preserved during import. Then we have turned on auditing again so that the system behaves in the way the users expect. The last thing we want now is for all those preserved modified dates to be set to the current date when we update metadata. And this is what will happen if we do not run the post-migration scripts with the audit feature turned off. This is important to keep in mind, unless you want to start all over again with the document migration.

Versioning problems when running post-migration scripts

Another thing that can cause problems is when we have versioning turned on for documents that we are updating with the post-migration scripts. We might then see the following error:

org.alfresco.service.cmr.version.VersionServiceException: 07120018 The current implementation of the version service does not support the creation of branches.

By default, new versions will be created even when we just update properties/metadata. This can cause errors such as the preceding one, and we might not even be able to check in and check out the document. To prevent this error from popping up, and to turn off versioning during property updates once and for all, we can set the following property at the same time as we set the other domain metadata in the scripts:

legacyContentFile.properties["cm:autoVersionOnUpdateProps"] = false;

Setting this property to false effectively turns off versioning during any property/metadata update for the document. Another thing that can be a problem is if folders have been set up as versionable by mistake. The most likely reason for this is that we probably forgot to set up the versioning rule to apply only to cm:content (and not to "All Items").
Folders in the workspace://SpacesStore store do not support versioning. The WCM system comes with an AVM store that supports advanced folder versioning and change sets (note that the WCM system can also store its data in the Workspace store). So, we need to update the versioning rule to apply only to content, and remove the versionable aspect from all folders that have it applied, before we can update any content in these folders. Here is a script that removes the cm:versionable aspect from any folder having it applied:

var store = "workspace://SpacesStore";
var query = "PATH:\"/app:company_home//*\" AND TYPE:\"cm:folder\" AND ASPECT:\"cm:versionable\"";
var versionableFolders = search.luceneSearch(store, query);
for each (versionableFolder in versionableFolders) {
  versionableFolder.removeAspect("cm:versionable");
  logger.log("Removed versionable aspect from folder: " + versionableFolder.name);
}
logger.log("Removed versionable aspect from " + versionableFolders.length + " folders");

Post-migration script to extract legacy meeting metadata

Best Money has a lot of documents that they are migrating to the Alfresco repository. Many of the documents have filenames following a certain naming convention. This is the case for the meeting documents that are imported. The naming convention for the old imported documents is not exactly the same as the new meeting naming convention, so we have to write the regular expression a little bit differently. An example of a filename with the new naming convention looks like this: 10En-FM.02_3_annex1.doc, and the same filename with the old naming convention looks like this: 10Eng-FM.02_3_annex1.doc. The difference is that the old naming convention does not specify a two-character code for the language, but instead one of the values from a list that looks like this: Arabic, Chinese, Eng|eng, F|Fr, G|Ger, Indonesian, Jpn, Port, Rus|Russian, Sp, Sw, Tagalog, Turkish.
Post-migration script to extract legacy meeting metadata

Best Money has a lot of documents that they are migrating to the Alfresco repository. Many of the documents have filenames following a certain naming convention; this is the case for the imported meeting documents. The naming convention for the old imported documents is not exactly the same as the new meeting naming convention, so we have to write the regular expression a little bit differently. An example of a filename with the new naming convention looks like this: 10En-FM.02_3_annex1.doc, and the same filename with the old naming convention looks like this: 10Eng-FM.02_3_annex1.doc. The difference is that the old naming convention does not use a two-character language code but instead one from this list: Arabic,Chinese,Eng|eng,F|Fr,G|Ger,Indonesian,Jpn,Port,Rus|Russian,Sp,Sw,Tagalog,Turkish.

What we are interested in extracting is the language code and the department code, and the following script will do that with a regular expression:

// Regular expression definition
var re = new RegExp("^\\d{2}(Arabic|Chinese|Eng|eng|F|Fr|G|Ger|Indonesian|Ital|Jpn|Port|Rus|Russian|Sp|Sw|Tagalog|Turkish)-(A|HR|FM|FS|FU|IT|M|L).*");
var store = "workspace://SpacesStore";
var query = "+PATH:\"/app:company_home/cm:Meetings//*\" +TYPE:\"cm:content\"";
var legacyContentFiles = search.luceneSearch(store, query);
for each (legacyContentFile in legacyContentFiles) {
    if (re.test(legacyContentFile.name) == true) {
        var language = getLanguageCode(RegExp.$1);
        var department = RegExp.$2;
        logger.log("Extracted and updated metadata (language=" + language + ")(department=" + department + ") for file: " + legacyContentFile.name);
        if (legacyContentFile.hasAspect("bmc:document_data")) {
            // Set some metadata extracted from file name
            legacyContentFile.properties["bmc:language"] = language;
            legacyContentFile.properties["bmc:department"] = department;
            // Make sure versioning is not enabled for property updates
            legacyContentFile.properties["cm:autoVersionOnUpdateProps"] = false;
            legacyContentFile.save();
        } else {
            logger.log("Aspect bmc:document_data is not set for document " + legacyContentFile.name);
        }
    } else {
        logger.log("Did NOT extract metadata from file: " + legacyContentFile.name);
    }
}

/**
 * Convert from legacy language code to new 2 char language code
 *
 * @param parsedLanguage legacy language code
 */
function getLanguageCode(parsedLanguage) {
    if (parsedLanguage == "Arabic") {
        return "Ar";
    } else if (parsedLanguage == "Chinese") {
        return "Ch";
    } else if (parsedLanguage == "Eng" || parsedLanguage == "eng") {
        return "En";
    } else if (parsedLanguage == "F" || parsedLanguage == "Fr") {
        return "Fr";
    } else if (parsedLanguage == "G" || parsedLanguage == "Ger") {
        return "Ge";
    } else if (parsedLanguage == "Indonesian") {
        return "In";
    } else if (parsedLanguage == "Ital") {
        // No two-character code is mapped for Italian in the new convention
        return "";
    } else if (parsedLanguage == "Jpn") {
        return "Jp";
    } else if (parsedLanguage == "Port") {
        return "Po";
    } else if (parsedLanguage == "Rus" || parsedLanguage == "Russian") {
        return "Ru";
    } else if (parsedLanguage == "Sp") {
        return "Sp";
    } else if (parsedLanguage == "Sw") {
        return "Sw";
    } else if (parsedLanguage == "Tagalog") {
        return "Ta";
    } else if (parsedLanguage == "Turkish") {
        return "Tu";
    } else {
        logger.log("Invalid parsed language code: " + parsedLanguage);
        return "";
    }
}

This script can be run from any folder; it searches for all documents under the /Company Home/Meetings folder and its subfolders. All documents returned by the search are looped through and matched against the regular expression. The regular expression defines two groups: one for the language code and one for the department. After a document has been matched with the regular expression, it is possible to back-reference the values matched in the groups by using RegExp.$1 and RegExp.$2. When the language code and department code properties are set, we also set the cm:autoVersionOnUpdateProps property, so we do not get any problems with versioning during the update.
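Before pointing the script at the whole Meetings folder structure, it can be worthwhile to dry-run the regular expression against a few known filenames. A small sketch, using sample names based on the two naming conventions described above (the third filename is a made-up non-matching example):

var re = new RegExp("^\\d{2}(Arabic|Chinese|Eng|eng|F|Fr|G|Ger|Indonesian|Ital|Jpn|Port|Rus|Russian|Sp|Sw|Tagalog|Turkish)-(A|HR|FM|FS|FU|IT|M|L).*");
var samples = ["10Eng-FM.02_3_annex1.doc", "10En-FM.02_3_annex1.doc", "meeting-notes.doc"];
for each (sample in samples) {
    if (re.test(sample)) {
        // Old naming convention: the groups capture the legacy language and department codes
        logger.log(sample + " -> language=" + RegExp.$1 + ", department=" + RegExp.$2);
    } else {
        // New-convention and unrelated filenames fall through to here
        logger.log(sample + " does not match the old naming convention");
    }
}

Only the first filename should match; the second already follows the new two-character convention, and the third follows neither convention.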
ExpressionEngine: Creating a Photo Gallery
Packt
23 Oct 2009
6 min read
Install the Photo Gallery Module

The photo gallery in ExpressionEngine is considered a separate module, even though it is included with every personal or commercial ExpressionEngine license. Installing it is therefore very simple:

1. Log into the control panel using http://localhost/admin.php or http://www.example.com/admin.php, and select Modules from the top of the screen.
2. About a quarter of the way down the page, we can see the Photo Gallery module. In the far-right column is a link to install it. Click Install.
3. We will see a message at the top of the screen indicating that the photo gallery module was installed.

That's it!

Setting Up Our Photo Gallery

Now that we have installed the photo gallery module, we need to define some basic settings and then create categories that we can use to organize our photos.

Define the Basic Settings

1. Still in the Modules tab, the photo gallery module should now have become a clickable link. Click on Photo Gallery.
2. We are presented with a message that says There are no image galleries. Select Create a New Gallery.
3. We are now prompted for our Image Folder Name. For our photo galleries, we are going to create a folder for our photos inside the images folder that should already exist. Navigate to C:\xampp\htdocs\images (or /Applications/MAMP/htdocs/images if using MAMP on a Mac) or to the images folder on your web server, and create a new folder called photos. Inside that folder, we are going to create a specific subfolder for our toast gallery images. (This will keep our article photos separate from any other galleries we may wish to create.) Call the new folder toast.
4. If doing this on a web server, set the permissions of the toast folder to 777 (read, write, and execute for owner, group, and public). This will allow everyone to upload images to this folder.
5. Back in ExpressionEngine, type in the name of the folder we just created (toast) and hit Submit.
6. We are now prompted to name our template gallery. We will use the imaginative name of toastgallery so that it is distinguishable from any other galleries we may create in the future. This name will be used as the default URL to the gallery and as the template group name for our gallery templates. Hit Submit.
7. We are now prompted to update the preferences for our new gallery. Expand the General Configuration option and define a Photo Gallery Name and Short Name. We are going to use Toast Photos as the Photo Gallery Name and toastphotos as the Short Name. The short name is what will be used in our templates to reference this photo gallery.
8. Next, expand the Image Paths section. Here the Image Folder Name should be the same as the folder we created earlier (in our case toast). For XAMPP users, the Server Path to Image Directory is going to be C:/xampp/htdocs/images/photos/toast, and the Full URL to Image Directory is going to be http://localhost/images/photos/toast. For MAMP users on a Mac, or when using a web server, these paths will differ depending on your setup. Verify these settings for correctness, making adjustments as necessary.
9. Whenever we upload an image into the image gallery, ExpressionEngine creates three copies of the image: a medium-sized and a thumbnail-sized version of the image, in addition to the original image. The thumbnail image is fairly small, so we are going to double its size. Expand the Thumbnail Resizing Preferences section, and instead of a Thumbnail Width of 100, choose a width of 200.
Check the box (the one outside of the text box) and the height should update to 150. Hit Submit to save the settings so far; we will review the rest of the settings later. We have now created our first gallery. However, before we can start uploading photos, we need to create some categories.

Create Categories

For the purposes of our toast website, we are going to create categories based on the seasons: spring, summer, autumn, and winter. We are going to have separate subfolders for each of the categories; these are created automatically when we create the categories.

1. To do this, first select Categories from the new menu that has appeared across the top of the screen.
2. We will see a message that says No categories exist. Select Add a New Category.
3. We are going to use a Category Name of Spring and a Description that describes the category; we will later display this description on our site. We are going to create a Category Folder of spring. Leave the Category Parent as None, and hit Submit.
4. Select Add a New Category, and continue to add three more categories, summer, autumn, and winter, in the same way.
5. After we are done creating all the categories, use the up and down arrows to order them correctly. In our case, we need to move Autumn down so that it appears after Summer.

We now have the beginnings of a photo gallery. Next, we will upload our first photos so that we can see how the gallery works.

Upload Our First Photos

Uploading a photo to a photo gallery is pretty straightforward. The example photos we are working with can be downloaded from the Packtpub support page at http://www.packtpub.com/files/code/3797_Graphics.zip.

1. To upload a photo, select New Entry from the menu within the photo gallery module.
2. For the File Name, click the Browse... button and browse to the photo spring1.jpg.
3. We are going to give this an Entry Title of Spring Flower.
4. For Date, we could either leave the default or enter the date the photo was taken. We are going to use a date of 2006-04-22. Click on the calendar icon to expand the view to include a calendar that can be easily navigated.
5. We are going to use a Category of Spring and a Status of Open. Leave the box checked to Allow Comments, and write a Caption that describes the photo. The Views field allows us to indicate how many times this image has been viewed; in this case we are going to leave it at 0.
6. Hit Submit New Entry when everything is done.

We are presented with a message that reads Your file has been successfully submitted, and the image now appears underneath the entry information. In the folder where our image is uploaded, three versions of the same image are made: the original file (spring1.jpg), a thumbnail of the original file (spring1_thumb.jpg), and a medium-sized version of the original file (spring1_medium.jpg). Now, click on New Entry and repeat the same steps to upload the rest of the photos, using appropriate categories and descriptions that describe the photos. There are four example photos for each season (for example, winter1.jpg, winter2.jpg, winter3.jpg, and winter4.jpg). Having a few example photos in each category will better demonstrate how the photo gallery works.
Installing and Configuring Joomla! 1.5
Packt
25 Sep 2010
7 min read
Building job sites with Joomla!

A practical stepwise tutorial to build your professional website using Joomla!:

- Build your own monster.com using Joomla!
- Take your job site to the next level using the commercial Jobs! extension
- Administrate and publish your Joomla! job site easily using the Joomla! 1.5 administrator panel and the Jobs! Pro control panel interface
- Boost your job site ranking in search engines using Joomla! SEO

Introduction

There are various approaches for building a jobsite with job search and registration facilities for users, providing several services to your clients such as job posting, online application processing, resume search, and so on. Joomla! is one of the best approaches and an affordable solution for building your jobsite, even if you are a novice to Joomla!. This is because Joomla! is a free, open source Content Management System (CMS) which provides one of the most powerful web application development frameworks available today. These are all reasons for building a jobsite with Joomla!:

- It has a friendly interface for all types of users: designers, developers, authors, and administrators.
- This CMS has been growing rapidly and improving since its release.
- Joomla! is designed to be easy to install and set up, even if you're not an advanced user.
- You need less time and effort to build a jobsite with Joomla!.

You need to use a Joomla! jobsite extension to build your jobsite, and you can use the commercial extension Jobs! because it is fully equipped to operate a jobsite, featuring tools to manage jobs, resumes, applications, and subscriptions. Whether you are looking to build a jobsite such as Monster or Career Builder, a niche jobs listing such as Tech Crunch, or just to post job ads on your company site, Jobs! is an ideal solution. To know more about this extension, visit its official website: http://www.instantphp.com/

Jobs! has two variations: Jobs! Pro and Jobs! Basic. Jobs! Pro provides some additional features and facilities which are not available in Jobs! Basic. You can use either one, depending upon your needs and budget, but if you need full control over your jobsite and more customization facilities, then Jobs! Pro is recommended. You can install the Jobs! component and its modules easily, like any other Joomla! extension. You need to spend only a few minutes to install and configure Joomla! 1.5 and Jobs! Pro 1.3 or Jobs! Basic 1.0. It is a stepwise setup process, but first you must ensure that your system meets all the requirements recommended by the developers.

Prerequisites for installation of Joomla! 1.5 and Jobs!

Joomla! is written in PHP and mainly uses a MySQL database to store and manipulate information. Before installing Joomla! 1.5 and the Jobs! extension, we need a server environment that includes the following:

Software/Application | Minimum Requirement | Recommended Version | Website
PHP | 5 | 5.2 | http://php.net
MySQL | 4.1 or above | 5 | http://dev.mysql.com/downloads/mysql/5.0.html
Apache | 1.3 or above | | http://httpd.apache.org
IIS | 6 | 7 | http://www.iis.net/
mod_mysql, mod_xml, mod_zlib | | |

You must ensure that you have the MySQL, XML, and zlib functionality enabled within your PHP installation. This is controlled within the php.ini file.

Setting up a local server environment

In order to run Joomla! properly, we need a server environment with PHP and MySQL pre-installed. In this case, you can use a virtual server or can choose other hosting options. But if you want to try out Joomla!
on your own computer before using a remote server, we can set up a local server environment.

To set up a server environment, we can use the XAMPP solution. It comes equipped with the Apache HTTP server, PHP, and MySQL; installing these components individually is quite difficult and needs more time and effort. To install XAMPP, download the latest version of XAMPP 1.7.x from the Apache Friends website: http://www.apachefriends.org/en/xampp.html.

Windows users can install XAMPP for Windows in two different variations: a self-extracting RAR archive and a ZIP archive. If you want to use the self-extracting RAR archive, first download the .exe file and then follow these steps:

1. Run the installer file, choose a directory, and click on the Install button.
2. After extracting XAMPP, the setup script setup_xampp.bat will start automatically.
3. After the installation is done, click on Start | All Programs | Apache Friends | XAMPP | XAMPP Control Panel.
4. Start Apache and MySQL by clicking on the Start buttons beside each item. If prompted by Windows Firewall, click on the Unblock button.

For more information on installing XAMPP on Windows or troubleshooting, go to the Windows FAQs page: http://www.apachefriends.org/en/faq-xampp-windows.html.

If you are using the Linux platform, download the compressed .tar.gz file and follow these steps for installation:

1. Go to a Linux shell and log in as the system administrator root:
   su
2. Extract the downloaded archive file to /opt:
   tar xvfz xampp-linux-1.7.3a.tar.gz -C /opt
   XAMPP is now installed in the /opt/lampp directory.
3. To start XAMPP, call the command:
   /opt/lampp/lampp start
   You should now see something similar to the following on your screen:
   Starting XAMPP 1.7.3a...
   LAMPP: Starting Apache...
   LAMPP: Starting MySQL...
   LAMPP started.

For more information on installing XAMPP on Linux or troubleshooting, go to the Linux FAQs page: http://www.apachefriends.org/en/faq-xampp-linux.html.

If you want to use XAMPP on the Mac operating system, download the .dmg file and follow these steps:

1. Open the DMG image.
2. Drag and drop the XAMPP folder into your Applications folder. XAMPP is now installed in the /Applications/XAMPP directory.
3. To start XAMPP, open XAMPP Control and start Apache and MySQL.

After installing XAMPP on a system, test your installation by typing the following URL in the browser: http://localhost/. You will see the XAMPP start page.

Uploading installation packages and files to the server

Now, we need to copy or transfer the Joomla! installation package files to the server. Before copying the installation package, we must download Joomla_1.5.15-Stable-Full_Package.zip from the webpage http://www.joomla.org/download.html, and then extract and unzip it. You can use WinZip or WinRAR to unzip these files. After unzipping the files, you have to copy them to your server root folder (for Apache, it is the htdocs folder). If you are not using XAMPP or a local server environment, you need File Transfer Protocol (FTP) software to transfer files to your server root folder, such as htdocs or wwwroot. A popular FTP client is FileZilla, which is absolutely free and available for different platforms, including Windows, Linux, and Mac OS. You can get it from http://filezilla-project.org/.

Creating a database and user

Before installing and configuring Joomla! and the Jobs! extension, we also need to create a database and a database user. You can easily add a new database and a user by using phpMyAdmin in the XAMPP server environment.
To add a database using phpMyAdmin, follow these steps:

1. Type the address http://localhost/phpmyadmin in the web browser. The front page of phpMyAdmin will be displayed.
2. Type a name for the database you want to create, for example my_db, in the Create new database field, and then click on the Create button to create the database.
3. To connect to the database, we need a user account. You can add a user account by clicking on the Privileges tab of phpMyAdmin. You will see all users' information.
4. Click on the Add a new User link in the Privileges window. After clicking on the link, a new window will appear.
5. Provide the required information in the Login Information section of this window and then click on the Go button.

We have now completed the preparation stage of installing Joomla!.

Installing and Configuring Drupal
Packt
11 Jul 2012
7 min read
Installing Drupal

There are a number of different ways to install Drupal on a web server, but in this recipe we will focus on the standard, most common installation: Drupal running on an Apache server, which runs PHP with a MySQL database. We will download the latest Drupal release and walk through all of the steps required to get it up and running.

Getting ready

Before beginning, you need to ensure that you meet the following minimal requirements:

- Web hosting with FTP access (or file access through a control panel).
- A server running PHP 5.2.5+ (5.3+ recommended).
- A blank MySQL database and the login credentials to access it.
- register_globals set to off in the php.ini file. You may need to contact your hosting provider to do this.

How to do it...

1. The first step is to download the latest Drupal 7 release from the Drupal download page, which is located at http://drupal.org/project/drupal. This page displays the most recent and recommended releases for both Drupal 6 and 7. It also displays the most recent development versions, but be sure to download the recommended release (development versions are for developers who want to stay on the cutting edge).
2. When the file is downloaded, extract it and upload the files to your chosen web server document root directory on the server. This may take some time.
3. Configure your web server document root and server name (usually through a vhost directive).
4. When the upload is complete, open your browser and, in the address bar, type in the server name configured in the previous step to begin the installation wizard. Select the Standard option and then select Save and continue.
5. The next screen that you will see is the language selection screen; there should only be one language available at this point. Ensure that English is selected before proceeding.
6. Following a requirements check, you will arrive at the database settings page. Enter your database name, username, and password in the required fields. Unless your database details have been supplied with a specific host name and port, you should leave the advanced options as they are and continue.
7. You will now see the Site configuration page. Under Site information, enter the name you would like to appear as the site's name. For Site e-mail address, enter an e-mail address. Under the SITE MAINTENANCE ACCOUNT box, enter a username for the admin user (also known as user 1), followed by an e-mail address and password.
8. In the Server settings box, select your country from the drop-down, followed by your local time zone.
9. Finally, in the Update notification box, ensure that both options are selected. Click on Save and continue to complete the installation. You will be presented with the congratulations page with a link to your new site.
How it works...

On the server requirements page, Drupal will carry out a number of tests. It is a requirement that PHP register_globals is set to off or disabled. Register globals is a feature of PHP which allows global variables to be set from the contents of the Environment, GET, POST, Cookie, and Server variables. It can be a major security risk, as it enables potential hackers to overwrite important variables and gain unauthorized access.

The Configure site page is where you specify the site name and e-mail addresses for the site and the admin user. The admin e-mail address will be used to contact the administrator with notifications from the site, and the site e-mail address is used as the originating e-mail address when the site sends e-mails to users. You can change these settings later on the Site information page in the Configuration section. It is important to select the options to receive site notifications so that you are aware when software updates are available for your site's core and contrib modules; important security updates are released from time to time.

There's more...

In this recipe we have seen a regular Drupal installation procedure. There are various other ways to install and configure Drupal, and we will explore some of these alternatives in the following sections. We will also cover some of the potential pitfalls you may come across on the requirements page.

Uploading through a control panel

If your web-hosting provider gives you web access to your files through a control panel such as CPanel, you can save time by uploading the compressed Drupal installation package and running the unzip function on the file, if that functionality is provided. This will dramatically reduce the amount of time taken to perform the installation.

Auto-installers

There are other ways in which Drupal can be installed. Your hosting may come with an auto-installer such as Fantastico De Luxe or Softaculous. Both of these services provide a simple way to achieve the same results without the need to use FTP or to configure a database.

Database table prefixes

At the database setup screen there is an option to use a table prefix. Any prefix entered into the field will be added to the start of all table names in the database. This means that you could run multiple installations of Drupal, or possibly other CMSs, from the same database by setting a different prefix for each. This method, however, has implications for performance and maintenance.

Installing on a Windows environment

This recipe deals with installing Drupal on a Linux server. However, Drupal runs perfectly well on an IIS (Windows) server. Using Microsoft's WebMatrix software, it's easy to set up a Drupal site: http://www.microsoft.com/web/drupal

Alternative languages

Drupal supports many different languages. You can view and download the language packs at http://localize.drupal.org/download. You then need to upload the file to Drupal root/profiles/standard/translations. You will then see the option for that new language on the language selection page of the installation.

Verifying the requirements page

If all goes to plan, and the server is already configured correctly, then the server requirements page will be skipped. However, you may come across problems in a few areas:

- Register globals: This should be set to off in the php.ini file. This is very important in securing your site. If you find that register_globals is turned on, you will need to consult your hosting provider's documentation on this feature in order to switch it off.
- Drupal will attempt to create the folder Drupal root/sites/default/files. If it fails, you may have to create this folder manually on the server and give it the permission 755.
- Drupal will attempt to create a settings.php file by copying the default.settings.php file. If Drupal has trouble doing this, copy the default.settings.php file in the directory Drupal root/sites/default and rename the copied file settings.php. Give settings.php full write access (chmod 777).
After Drupal finishes the installation process, it will try to set the permission of this file back to 444; you must check that this has been done, and manually set the file to 444 if it has not.

See also

See Installing Drupal distributions for more installation options using a preconfigured Drupal distribution. For more information about installing Drupal, see the installation guide at Drupal.org: http://drupal.org/documentation/install