
How-To Tutorials - Web Development


Oracle BPM Suite 11gR1: Creating a BPM Application

Packt
25 Sep 2010
7 min read
Getting Started with Oracle BPM Suite 11gR1 – A Hands-On Tutorial
Learn from the experts: teach yourself Oracle BPM Suite 11g with an accelerated, hands-on learning path brought to you by members of the Oracle BPM Suite Product Management team.

- Offers an accelerated learning path for the much-anticipated Oracle BPM Suite 11g release
- Sets the stage for your BPM learning experience with a discussion of the evolution of BPM and a comprehensive overview of the Oracle BPM Suite 11g product architecture
- Covers BPMN 2.0 modeling, simulation, and implementation
- Explains how Oracle uses standards such as Service Component Architecture (SCA) to simplify the process of application development
- Describes how Oracle has unified services and events into one platform
- Built around an iterative tutorial, using a real-life scenario to illustrate all the key features
- Full of illustrations, diagrams, and tips for developing SOA applications faster, with clear step-by-step instructions and practical examples
- Written by Oracle BPM Suite Product Management team members

The BPM Application consists of a set of related business processes and associated shared artifacts such as process participants and organization models, user interfaces, services, and data. Process-related artifacts such as services and data are stored in the Business Catalog. The Business Catalog facilitates collaboration between the various stakeholders involved in the development of the business process: it provides a mechanism for the Process Developer (IT) to supply building blocks that can in turn be used by the Process Analyst when implementing the business process.

Start BPM Studio using Start | Programs | Oracle Fusion Middleware 11.1.1.3 | Oracle JDeveloper 11.1.1.3. BPM Studio supports two roles, or modes, of process development. The BPM Role is recommended for Process Analysts and provides a business perspective with a focus on business process modeling. The Default Role is recommended for Process Developers, who refine business process models and generate the implementation artifacts needed to complete the BPM Application for deployment and execution.

Tutorial: Creating the SalesQuote project and modeling the RequestQuote process
This is the beginning of the BPM 11gR1 hands-on tutorial. Start by creating the BPM Application and then design the Sales Quote business process.

Open BPM Studio by selecting the BPM Process Analyst role when you start JDeveloper or, if JDeveloper is already open, select Preferences from the Tools menu and, in the Roles section, select BPM Process Analyst. Go to File | New to launch the Application wizard. In the New Gallery window, select Applications in the Categories panel and BPM Application in the Items panel. Specify SalesQuoteLab as the Application Name; the folder name should also be set to SalesQuoteLab. Click on the Next button. Enter QuoteProcessLab for the Project Name and click on the Finish button.

Go to View | BPM Project Navigator. The BPM Project Navigator opens the QuoteProcessLab BPM Project that you just created. A single BPM Project can contain multiple related business processes. Notice that the BPM Project contains several folders; each folder is used to store a specific type of BPM artifact.
The Processes folder stores BPMN business process models; the Activity Guide folder stores process milestones; the Organization folder stores organization model artifacts such as Roles and Organization Units; the Business Catalog folder stores Services and Data; the Simulation folder stores simulation models that capture what-if scenarios for the business process; and the Resources folder holds XSLT data transformation artifacts.

To create a new business process model, right-click on Processes and select New | Process. This launches the Create BPMN Process wizard. Select the From Pattern option and choose the Manual Process pattern. Recall that the Sales Quote process is instantiated when the Enter Quote task is assigned to the Sales Representative role. (The Asynchronous Service and Synchronous Service patterns are used to expose a BPMN process as a service provider.) Click on the Next button. Specify RequestQuoteLab as the name for the process and click on the Finish button.

This creates a RequestQuoteLab process with a Start Event (thin circle) and an End Event (thicker circle) of type None, with a User Task in between. The User Task represents a human step that is managed by the workflow component of the BPM run-time engine. A Start Event of type None signifies that no external event triggers the process; the first activity after the Start Event creates the process instance. In addition, a default swim lane named Role is created. In BPM Studio, the swim lanes in a BPMN process point to logical roles. A logical role represents a process participant (a user or group) and needs to be mapped to physical roles (LDAP users/groups) before the process is deployed.

Right-click on the User Task step, select Properties, and specify the name Enter Quote Details for the step. Click on the OK button.

The next step is to rename the default role to SalesRep. Navigate to the QuoteProcessLab BPM Project node and select the Organization node underneath it. Right-click on the Organization node and select Open; this opens the Organization pane. Highlight the default role named Role and use the pencil icon to rename it to SalesRep. Click on the + sign to add the following roles: Approvers, BusinessPractices, and Contracts. Close the Organization window.

Go back to the RequestQuoteLab process model. The participant for the Enter Quote Details User Task is now set to the SalesRep role. Ignore the yellow triangular symbol with the exclamation mark for now; it indicates that certain configuration information is missing for the activity.

Right-click on the process diagram just below the SalesRep lane and choose the Add Role option. Choose Business Practices from the list of options available and click on the OK button.

Open View | Component Palette and drag and drop a User Task from the Interactive Tasks section of the BPMN Component Palette.

Note: An Interactive Task is a step that is managed by the workflow engine. The Assignees (Participants) are the business users who need to carry out the Interactive Task. The associated Task (the work to be performed) is shown in the inbox of the assignees (similar to an email inbox) when the Interactive Task is triggered. The User Task is the simplest type of Interactive Task: the assignee of the task is set to a single role, and the actual work is performed only when the Assignee executes the Task.
The Task is presented to the Assignee through a browser-based worklist application. In BPM Studio, the Assignee is automatically set to the role associated with the swim lane into which the Interactive Task is dropped.

Place this new User Task after the existing Enter Quote Details User Task by hovering over the center of the connector until it turns blue, and name it Business Practices Review; the connection lines are created automatically. Drag the new Business Practices Review User Task to the Business Practices lane. The performer, or assignee, for the Business Practices Review User Task is automatically set to the Business Practices role.

Create two more lanes, for Approvers and Contracts. Drag and drop three User Tasks onto the process diagram, one following the other, and name them Approve Deal, Approve Terms, and Finalize Contracts respectively. Pin the Approve Deal step to the Approvers lane, and pin the other two User Tasks, Approve Terms and Finalize Contracts, to the Contracts lane. Finally, add a Service Task from the BPM Component Palette right after the Finalize Contracts step and name it Save Quote.


Creating and Managing User Groups in Joomla! and VirtueMart

Packt
24 Oct 2009
9 min read
User manager
In Joomla!, there is one User Manager component from which you can manage the users of the site. However, the VirtueMart component has another user manager, which should be used for the VirtueMart shop. To be clear about the differences between these two user managers, let us look into both.

Joomla! user manager
Let us first try Joomla!'s user manager. Go to the Joomla! control panel and click on the User Manager icon, or click on Site | User Manager. This brings up Joomla!'s User Manager screen, which lists the users registered to the Joomla! site. It shows the username, full name, enabled status, the group the user is assigned to, the user's email address, the date and time of the last visit, and the user ID. From this screen, you may guess that any user can be enabled or disabled by clicking on the icon in the Enabled column; enabled user accounts show a green tick mark there.

To view the details of any user, click on that user's name in the Name column. That brings up the User: [Edit] screen. The User Details section shows some important information about the user, including Name, Username, E-mail, Group, and so on. You can edit and change these settings, including the password. In the Group selection box, you must select one level; the deepest level gets the highest permission in the system. From this section, you can also block a user and decide whether they will receive system emails or not. In the Parameters section, you can choose the Front-end Language and Time Zone for that user. If you have created contact items using Joomla!'s Contacts component, you may assign one contact to this user in the Contact Information section.

VirtueMart user manager
Let us now look into VirtueMart's user manager. From the Joomla! control panel, select Components | VirtueMart to reach the VirtueMart Administration Panel. To view the list of users registered to the VirtueMart store, click on Admin | Users. This brings up the User List screen, which shows the users registered to the shop along with their username, full name, the group the user is assigned to, and their shopper group. In the Group column, note that two groups are mentioned: one without brackets and another inside brackets. The group name inside brackets is Joomla!'s standard user group, whereas the one without brackets is VirtueMart's user group. We are going to learn about these user groups in the next section.

To view the details of a user, click on the user's name in the Username column. That brings up the Add/Update User Information screen, which has three tabs: General User Information, Shopper Information, and Order List. The General User Information tab contains the same information that is shown in Joomla!'s User: [Edit] screen. The Shopper Information tab contains shop-related information for the user: the vendor to which the user is registered, the user group the user belongs to, a customer number/ID, and the shopper group. Other sections in this tab are Shipping Addresses, Bill To Information, Bank Account, and any other section you have added to the user registration or account maintenance form; these sections contain the fields that are available on those forms. If the user has placed some orders, the Order List tab will list the orders placed by that user.
If no order has been placed, the Order List tab will not be visible.

Which user manager should we use?
As we can see, there is a difference between Joomla!'s user manager and VirtueMart's user manager. VirtueMart's user manager shows some additional information fields that are necessary for the operation of the shop. Therefore, whenever you are managing users for your shop, use the user manager in the VirtueMart component, not Joomla!'s user manager. Otherwise, some customer information will not be added or updated, which may create problems in operating the VirtueMart store.

User Groups
Do you want to decide who can do what in your shop? There is a very good way of doing that in Joomla! and VirtueMart. Both Joomla! and VirtueMart have some predefined user groups, and in both cases you can create additional groups and assign permission levels to them. When users register on your site, you assign them to one of the user groups.

Joomla! user groups
Let us first look into the Joomla! user groups. The predefined groups in Joomla! are described below.

Public Frontend:
- Registered: Users in this group can log in to the Joomla! site and view the contents, sections, categories, and items which are marked for registered users only. This group has no access to content management.
- Author: Users in this group get all the permissions the Registered group has. In addition, they can submit articles for publishing and can edit their own articles.
- Editor: Users in this group have all the above permissions and can also edit articles submitted by other users. However, they cannot publish the contents.
- Publisher: Users in this group can log in to the system and submit, edit, and publish their own content as well as content submitted by other users.

Public Backend:
- Manager: Users in this group can log in to the administration panel and manage content items including articles, sections, categories, links, and so on. They cannot manage users, install modules or components, manage templates and languages, or access the global configuration. Users in this group can access some of the components for which the administrator has given permission.
- Administrator: In addition to content management, users in this group can add a user to the Super Administrator group, edit a user, access the global configuration settings, access the mail function, and manage/install templates and language files.
- Super Administrator: Users in this group can access all administration functions. Every site needs at least one user in this group to perform global configuration. You cannot delete a user in this group or move him/her to another group.

As you can see, most of the users registering on your site should be assigned to the Registered group; by default, Joomla! assigns all newly registered users to it. You need to add some users to the Editor or Publisher group if they need to add or publish content on the site. The people who manage the shop should be assigned to the other Public Backend groups such as Manager, Administrator, or Super Administrator.

VirtueMart user groups
Let us now look into the user groups in VirtueMart. To see them, go to VirtueMart's administration panel and click on Admin | User Groups. This shows the User Group List screen. By default, you will see four user groups: admin, storeadmin, shopper, and demo. These groups are used for assigning permissions to users. Also, note the values in the User Group Level column.
The higher the value in this field, the lower the permissions assumed for the group. The admin group has a level value of 0, which means it has all of the permissions, and of course more than the next group, storeadmin. Similarly, the storeadmin group has more permissions than the shopper group. These predefined groups are key groups in VirtueMart, and you cannot modify or delete them. The groups have the following permissions:

- admin: This group has permission to use all of the modules except checkout and shop. The admin group does not need these because admin users usually do not shop in their own store.
- storeadmin: This group has fewer permissions than the admin group. Users in this group can access all the modules except the admin, vendor, shop, and checkout modules. They cannot set the global configuration for the store, but can add and edit payment methods, products, categories, and so on.
- shopper: This group has the fewest permissions among the three key groups. By default, users who register to the shop are assigned to this group. Users in this group can fully access the account module and can use some functions of the shop, coupon, and checkout modules.
- demo: This is a demo group created by default so that administrators can test and play with it.

For most shops, these four predefined groups will be enough to implement appropriate permissions. However, in some cases you may need to create a new user group and assign separate permissions to it. For example, you may want to employ some people as store managers who will add products to the catalog and manage the orders, but who should not be able to add or edit payment methods, shipping methods, or other settings. If you add these people to the storeadmin group, they get more permissions than required. In such situations, a good solution is to create a new group, add the selected user accounts to that group, and assign permissions to that group.

Creating a new user group
To create a new user group, click on the New button in the toolbar of the User Group List screen. This brings up the Add/Edit a User Group screen. Enter the group's name and group level; you must type a value higher than that of the existing groups (for example, 1000). Click on the Save icon to save the user group. You will now see the newly created user group in the User Group List screen.


Google Earth, Google Maps and Your Photos: a Tutorial

Packt
22 Oct 2009
38 min read
Introduction

"A scholar who never travels but stays at home is not worthy to be accounted a scholar. From my youth on I had the ambition to travel, but could not afford to wander over the three hundred counties of Korea, much less the whole world. So, carrying out an ancient practise, I drew a geographical atlas. And while gazing at it for long stretches at a time I feel as though I was carrying out my ambition... Morning and evening while bending over my small study table, I meditate on it and play with it and there in one vast panorama are the districts, the prefectures and the four seas, and endless stretches of thousands of miles."
WON HAK-SAENG, Korean student, in the preface to his untitled manuscript atlas of China during the Ming Dynasty, dated 1721

To say that maps are important is an understatement almost as big as the world itself. Maps quite literally connect us to the world in which we all live, and by extension they link us to one another. The oldest preserved maps date back nearly 4500 years. In addition to connecting us to our past, they chart much of human progress and expose the relationships among people through time. Unfortunately, as a work of humankind, maps share many of the shortcomings of all human endeavors: they are to some degree inaccurate, and they reflect the bias of the map maker. Advancements in technology help to address the former issue and offer us the opportunity to resist the latter. To the extent that it's possible for all of us to participate in map making, the bias of a select few becomes less meaningful.

Google Earth and Google Maps are two applications that allow each of us to assert our own place in the world and contribute our own unique perspective. I can think of no better way to accomplish this than by combining maps and photography. Photos reveal much about who we are, the places we have been, the people with whom we have shared those experiences, and the significant events in our lives. Pinning our photos to a map allows us to see them in their proper geographic context, a valuable way to explore and share them with friends and family. Photos can reveal the true character of a place, and afford others the opportunity to experience destinations, perhaps faraway and unreachable for some of us, from the perspective of someone who has actually been there.

In this tutorial I'll show you how to 'pin' your photos using Google Earth and Google Maps. Both applications are free and available for Windows, Mac OS X, and Linux. In the case of Google Earth there are requirements associated with installing and running what is a local application. Google Maps has its own requirements, primary among them a compatible web browser (the highly regarded Firefox is recommended). In Google Earth, your photos show up on the map within the application, complete with titles, descriptions, and other relevant information. You can choose to share your photos with everyone, with only people you know, or even reserve them strictly for yourself. Google Maps offers the flexibility to present maps outside of a traditional application: for example, you can embed a map on a web page pinpointing the location of one particular photo, map a collection of photos to present alongside a photo gallery, or even collect all of your digital photos together on one dynamic map. Over the course of a couple of short articles we'll cover everything you need to take advantage of both applications. I've put together two scripts to help us accomplish that goal.
The first is a Perl script that works through your photos and generates a file in the proper format, with all of the data necessary to include those photos in Google Earth. The second is a short bit of JavaScript that works with the first file and builds a dynamic Google Map of those same photos. Both scripts are available for you to download, after which you are free to use them as is, or modify them to suit your own projects. I've purposefully kept the code as simple as possible to make it accessible to the widest audience, even those of you who may be new to programming or unfamiliar with Perl, JavaScript, or both. I've taken care to comment the code generously so that everything is plainly obvious. I'm hopeful that you will be surprised at just how simple it is.

There are a couple of preliminary topics to examine briefly before we go any further. In the preceding paragraph I mentioned that the result of the first of our two scripts is a 'file in the proper format'. This file, or more to the point the file format, is a very important part of the project. KML (Keyhole Markup Language) is a fairly simple XML-based format that can be considered the native "language" of Google Earth. That description begs the question, 'What is XML?'. To oversimplify, because even a cursory discussion of XML is outside the scope of this article, XML (Extensible Markup Language) is an open data format (in contrast to proprietary formats) which allows us to present information in such a way that we communicate not only the data itself, but also descriptive information about the data and the relationships among elements of the data. One of the technical terms that applies is 'metalanguage', which approximately translates to mean a language that makes it possible to describe other languages. If you're unfamiliar with the concept, it may be difficult to grasp at first, or it may not seem impressive. However, metalanguages, and specifically XML, are an important innovation. (I don't mean to suggest that it's a new concept; in fact XML has roots that are quite old, relative to the brief history of computing.) These metalanguages make it possible for us to imbue data with meaning such that software can make use of that data. Let's look at an example taken from the Wikipedia entry for KML:

<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Placemark>
    <description>New York City</description>
    <name>New York City</name>
    <Point>
      <coordinates>-74.006393,40.714172,0</coordinates>
    </Point>
  </Placemark>
</kml>

Ignore all of the pro forma stuff before and after the <Placemark> tags and you might be able to guess what this bit of data represents. More importantly, a computer can be made to understand what it represents, in some sense. Without the tags and structure, "New York City" is just a string, i.e. a sequence of characters. Considering the tags, we can see that we're dealing with a place (a Placemark) and that "New York City" is the name of this place (and, in this example, also its description). With all of this formal structure, programs can be made to roughly understand the concept of a Placemark, which is a useful thing for a mapping application.

Let's think about this for a moment. There are a very large number of recognizable places on the planet, and a provably infinite number of strings. Given a block of text, how could a computer be made to tell a place name apart from, for example, the scientific name of a particular flower, or a person's name? It would be extremely difficult.
We could try to create a database of every recognizable place and have the program check the database every time it encounters a string. That assumes it's possible to agree on a 'complete list of every place', which is almost certainly untrue. Keep in mind that we could be talking about informal places that are significant only to a small number of people, or even a single person, e.g. 'the park where, as a child, I first learned to fly a kite'. It would be a very long list if we were going to include these sorts of places, and incredibly time-consuming to search. Relying on the structure of the fragment above, however, we can easily write a program that can identify 'New York City' as the name of a place, or for that matter 'The park where I first learned to fly a kite'. In fact, I could write a program that pulls all of the place names from a file like this, along with a description and a pair of coordinate points for each, and includes them on a map. That's exactly what we're going to do. KML makes it possible. If I haven't made it clear, the structural bits of the file must be standardized. KML supports a limited set of elements (e.g. 'Placemark' is a supported element, as are 'Point' and 'coordinates'), and all of the elements used in a file must adhere to the standard for the file to be considered valid.

The second point we need to address before we begin is, appropriately enough, where to begin. Lewis Carroll famously tells us to "Begin at the beginning and go on till you come to the end: then stop." Of course, Mr. Carroll was writing a book at the time; if "Alice's Adventures in Wonderland" were an article, he might have had different advice. From the beginning to the end there is a lot of ground to cover. We're going to start somewhere further along, and make up the difference with the following assumptions. For the purposes of this discussion, I am going to assume that you have:

- Perl
- Access to Phil Harvey's excellent ExifTool, a Perl library and command-line application for reading, writing, and editing metadata in images (among other file types). We will be using this library in our first script.
- A publicly accessible web server. Google requires the use of an API key by which it can monitor the use of its map services. Google must be able to validate your key, and so your site must be publicly available. Note that this is a requirement for Google Maps only.
- Photos, preferably in a photo management application. Essentially, all you need is an app capable of generating both thumbnails and reduced-size copies of your original photos. An app that can export a nice gallery for use on the web is even better.
- Coordinate data as part of the EXIF metadata embedded in your photos. If that sounds unfamiliar to you, then most likely you will have to take some additional steps before you can make use of this tutorial. I'm not aware of any digital cameras that automatically include this information at the time the photo is created, but there are devices that can be used in combination with digital cameras, and there are a number of ways that you can 'tag' your photos with geographic data much the same way you would add keywords and other annotations.

Let's begin!

Part 1: Google Earth

Section 1: Introduction to Part 1
Time to begin the project in earnest.
As I've already mentioned, we'll spend the first half of this tutorial looking at Google Earth and putting together a Perl script that, given a collection of geotagged photos, will build a set of KML files so that we can browse our photos in Google Earth. These same files will serve as the data source for our Google Maps application later on.

Section 2: Some Advice Before We Begin
Take your time to make sure you understand each topic before you move on to the next. Think of this as the first step in debugging your completed code. If you go slowly enough to be able to identify aspects of the project that you don't quite understand, then you'll have some idea where to start looking for problems should things not go as expected. Furthermore, going slowly will give you the opportunity to identify the parts that you may want to modify to better fit the script to your own needs. If this is new to you, follow along as faithfully as possible with what I do here the first time through. Feel free to make notes for yourself as you go, but making changes on the first pass may make it difficult for you to catch back on to the narrative and piece together a functional script. After you have a working solution, it will be a simple matter to implement changes one at a time until you have something that works for you. Following this approach, it will be easy to identify the silly mistakes that tend to creep in once you start making changes.

There is also the issue of trust. This is probably the first time we're meeting each other, in which case you should have some doubt that my code works properly to begin with. If you minimize the number of changes you make, you can confirm that this works for you before blaming yourself, or your changes, for my mistakes. I will tell you up front that I'm building this project myself as we go, so you can be certain at least that it functions as described for me as of the date attached to the article. I realize that this is quite different from being certain that the project will work for you, but at least it's something.

The entirety of my project is available for you to download. You are free to use all of it for any legal purpose whatsoever, including my photos in addition to all of the code, icons, and so on. This is so you have some samples to use before you involve your own content. I don't promise that they are the most beautiful images you have ever seen, but they are all decent photos, properly annotated with the necessary metadata, including geographic tags.

Section 3: Photos, Metadata and ExifTool
To begin, we must have a clear understanding of what the Perl script will require from us. Essentially, we need to provide it with a selection of annotated image files, and information about how to reference those files. A simple folder of files is sufficient, and will be convenient for us both as the programmer and as the end user. The script will be capable of negotiating nested folders, and if a directory contains both images and other file types, the non-image types will be ignored.

Typically, after a day of taking photos I'll have 100 to 200 that I want to keep. I delete the rest immediately after offloading them from the camera. For the files that are left, I preserve the original grouping, keeping all of the files together in a single folder. I place this folder of images in an appropriate location according to a scheme that keeps my complete collection of photos neatly organized. These are my digital 'negatives'.
I handle all subsequent organization, editing, enhancements, and annotations within my photo management application. I use Apple Inc.'s Aperture, but there are many others that do the job equally well. Annotating your photos is well worth the investment of time and effort, but it's important to have some strategy in mind so that you don't create meaningless tags that are difficult to use to your advantage. For the purposes of this project the tags we'll need are quite limited, which means that going forward we will be able to continue adding photos to our maps with a reasonable amount of work. The annotations we need are:

- Caption
- Latitude
- Longitude
- Image Date *
- Location/Sub-Location
- City
- Province/State
- Country Name
- Event
- People
- ImageWidth *
- ImageHeight *

* Values for these Exif tags are generated by your camera.

Note that these are the labels used in Aperture, and they are not necessarily consistent from one application to the next. Some of them are more likely than others to be used reliably: 'City', for example, should be dependable, while the labels 'People', 'Events', and 'Location', among others, are more likely to differ. One explanation for these variations is that the meanings of these fields are more open to interpretation. Location, for example, is likely to be used to narrow down the area where the photo was taken within a particular city, but it is left to the person annotating the photo to decide whether the field should name a specific street address, an informal place (e.g. 'home' or 'school'), or a larger area such as a district or neighborhood.

Fortunately, things aren't as arbitrary as they seem. Each of these fields corresponds to a specific tag name that adheres to one of the common metadata formats (Exif, IPTC, XMP, and there are others). These tag names are consistent, as required by the standards. The trick is in determining the labels used in your application that correspond to the well-defined tag names. Our script relies on these metadata tags, so it is important that you know which fields to use in your application.

This gives us an excuse to get acquainted with ExifTool. From the project's website, we have this description of the application: "ExifTool is a platform-independent Perl library plus a command-line application for reading, writing, and editing meta information in image, audio, and video files..."

ExifTool can seem a little intimidating at first. Just keep in mind that we will need to understand only a small part of it for this project, and then be happy that such a useful and flexible tool is freely available for you to use. The brief description above states that ExifTool is a Perl library and command-line application that we can use to extract metadata from image files. With a single short command, we can have the app print all of the metadata contained in one of our image files.

First, make sure ExifTool is installed. You can test for this by typing the name of the application at the command line:

$ exiftool

If it is installed, then running it with no options should prompt the tool to print its documentation. If this works, there will be more than one screen of text; you can page through it by pressing the spacebar, and press the 'q' key at any time to stop. If the tool is not installed, you will need to add it before continuing. See the appendix at the end of this tutorial for more information.
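Since we will be calling the same tool from Perl later on, it is also worth confirming that the Image::ExifTool library is available from a script. The following lines are only a minimal sketch for that purpose, not part of the tutorial's distributed script; the script name and the test image are whatever you choose to use.

use strict;
use warnings;
use Image::ExifTool qw(ImageInfo);

# Take the image to inspect from the command line, e.g.:  perl dump_tags.pl image.jpg
my $file = shift @ARGV or die "usage: $0 image.jpg\n";

# Read every tag ExifTool can find in the file.
my $info = ImageInfo($file);

foreach my $tag (sort keys %$info) {
    next if ref $info->{$tag};    # skip binary values such as embedded thumbnails
    printf "%-32s : %s\n", $tag, $info->{$tag};
}

If this prints a long list of tag names and values, the library is installed and can read your photos; the command-line listing described next shows the same information in a more convenient form.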
Having confirmed that ExifTool is installed, typing the following command will produce a listing of the metadata for the named image:

$ exiftool -f -s -g2 /path/image.jpg

where 'path' is an absolute path to image.jpg, or a relative path from the current directory, and 'image.jpg' is the name of one of your tagged image files. We'll have more to say about ExifTool later, but because I believe that no tutorial should ask the reader to blindly enter commands as if they were magic incantations, I'll briefly describe each of the options used in the command above:

-f forces printing of tags even if their values are not found. This gives us a better idea of all the available tag names, whether or not values are currently defined for those tags.

-s prints tag names instead of descriptions. This is important for our purposes: we need to know the tag names so that we can request them in our Perl code. Descriptions, which are expanded, more human-readable forms of the same names, obscure details we need. For example, compare the tag name 'GPSLatitude' to the description 'GPS Latitude'. We can use the tag name, but not the description, to extract the latitude value from our files.

-g2 organizes output by category. All location-specific information is grouped together, as is all information related to the camera, date and time tags, and so on. You may feel, as I do, that this grouping makes it easier to examine the output. Also, this organization is more likely to reflect the grouping of field names used by your photo management application.

If you prefer to save the output to a file, you can add ExifTool's -w option with a file extension:

$ exiftool -f -s -g2 -w txt path/image.jpg

This command produces the same result but writes the output to the file 'image.txt' in the current directory; again, 'image.jpg' is the name of the image file. The -w option appends the named extension to the image file's basename, creates a new file with that name, and sets the new file as the destination for output.

The tag names that correspond to the list of Aperture fields presented above are:

Metadata tag name                Aperture field label
Caption-Abstract                 Caption
GPSLatitude                      Latitude
GPSLongitude                     Longitude
DateTimeOriginal                 Image Date
Sub-location                     Location/Sub-Location
City                             City
Province-State                   Province/State
Country-PrimaryLocationName      Country Name
FixtureIdentifier                Event
Contact                          People
ImageWidth                       Pixel Width
ImageHeight                      Pixel Height

Section 4: Making Photos Available to Google Earth
We will use some of the metadata tags from our image files to locate our photos on the map (e.g. GPSLatitude, GPSLongitude), and others to describe the photos. For example, we will include the value of the People tag in the information window that accompanies each marker to identify friends and family who appear in the associated photo. Because we want to display and link to photos on the map, not just indicate their position, we need to include references to the location of our image files on a publicly accessible web server. You have some choice about how to do this, but for the implementation described here we will (1) display a thumbnail in the info window of each map marker and (2) include a link to the details page for the image in a gallery created in our photo management app. When a visitor clicks on a map marker they will see a thumbnail photo along with other brief descriptive information.
Clicking a link included as part of the description will open the viewer's web browser to a page displaying a large-size image and additional details. Furthermore, because the page is part of a gallery, viewers can jump to an index page and step forward and back through the complete collection. This is a complementary way to browse our photos. Taking this one step further, we could add a comments section to the gallery pages, or replace the gallery altogether and instead display each photo as a weblog post, for example.

The structure of the gallery created by my photo app is as follows:

/ (the root of the gallery directory)
    index.html
    large-1.html
    large-2.html
    large-3.html
    ...
    large-n.html
    assets/
        css/
        img/
    catalog/
    pictures/
        picture-1.jpg
        picture-2.jpg
        picture-3.jpg
        ...
        picture-n.jpg
    thumbnails/
        thumb-1.jpg
        thumb-2.jpg
        thumb-3.jpg
        ...
        thumb-n.jpg

The application creates a root directory containing the complete gallery. Assuming we do not want to make any manual changes to the finished site, publishing is as easy as copying the entire directory to a location within the web server's document root.

assets/ is a subfolder containing files related to the theme itself; we don't need to concern ourselves with it. catalog/ contains a single catalog.plist file, which is specific to Aperture and not relevant to this discussion. pictures/ contains the large-size images included on the gallery's detail pages. thumbnails/ contains the thumbnail images corresponding to the large-size images in pictures/.

Finally, there are a number of files at the root of the gallery. These include the index pages and files named 'large-n.html', where n is a number starting at 1 and increasing sequentially, e.g. large-1.html, large-2.html, large-3.html, and so on. The index files are the index pages of our gallery; the number of index pages generated is dictated by the number of image files in the gallery, as well as the gallery's layout and design. index.html is always the first gallery page. The large-n.html files are the details pages of our gallery. Each page features an individual photo, with links to the previous and next photos in sequence and a link back to the index. You can see the gallery I have created for this tutorial here: http://robreed.net/photos/tutorial_gallery/

If you take the time to look through the gallery, maybe you can appreciate the value of viewing these photos on a map. Web-based photo galleries like this one are nice enough, but the photos are more interesting when viewed in some meaningful context.

There are a couple of things to notice about this gallery. Firstly, picture-1.jpg, thumb-1.jpg, and large-1.html all refer to the same image, so if we pick one of the three files we can easily predict the names of the other two. This relationship will be useful when it comes to writing our script. There is another important issue I need to call to your attention, because it will not be apparent from looking only at the gallery structure: Aperture has renamed all of my photos in the process of exporting them. The name of the original file from which picture-1.jpg was generated (as well as large-1.html and thumb-1.jpg) is 'IMG_0277.JPG', which is the filename produced by my camera. Because I want to link to these gallery files, not my original photos, which will stay safely tucked away on a local drive, I must run the script against the photos in the gallery.
I cannot run it against the original image files, because the filenames referenced in the output would be unrelated to the files in the gallery. If my photo management app offered the option of preserving the original filenames for the corresponding photos in the gallery, then I could run the script against either the original image files or the gallery photos, because all of the filenames would be consistent; but this is not the case. I don't have a problem as long as I run the script on the exported photos. However, if I'm running the script against the photos in the web gallery, either the pictures or the thumbnail images must contain the same metadata as the original image files. Aperture preserves the metadata in both; your application may not.

A simple, dependable way to confirm that the metadata is present in the gallery files is to run ExifTool first against the original file and then against the same photo in the pictures/ and thumbnails/ directories of the gallery. If ExifTool reports identical metadata, then you will have no trouble using either pictures/ or thumbnails/ as your source directory. If the metadata is not present, or not complete, in the gallery files, you may need to use the script on your original image files. As has already been explained, this isn't a problem unless the gallery produces filenames that are inconsistent with the original filenames, as Aperture does. In that case you have a problem: you won't be able to run the script on the original image files because of the naming issue, or on the gallery photos because they don't contain metadata. Make sure that you understand this point. If you find yourself in this situation, your best bet is to generate the files to use with your maps from your original photos in some other way, bypassing your photo management app's web gallery features altogether in favor of a solution that preserves the filenames, the metadata, or both. There is another option, which involves setting up a relationship between the names of the original files and the gallery filenames, but this tutorial does not include details about how to set up that association.

Finally, keep in mind that though we've looked at the structure of a gallery generated by Aperture, virtually all photo management apps produce galleries with a similar structure. Regardless of the application used, you should find: a group of HTML files, including index and details pages; a folder of large-size image files; and a folder of thumbnails. Once you have identified the structure used by your application, as we have done here, it will be a simple task to translate these instructions.

Section 5: Referencing Files Over the Internet
Now we can talk about how to reference these files and gallery pages so that we can create a script to generate a KML file that includes these references. When we identify a file over the internet, it is not enough to use the filename, e.g. 'thumb-1.jpg', or even the absolute or relative path to the file on the local computer; in fact, these paths are most likely not valid as far as your web server is concerned. Instead, we need to know how to reference our files such that they can be accessed over the global internet, and the web more specifically. In other words, we need to be able to generate a URL (Uniform Resource Locator) which unambiguously describes the location of our files.
The formal details of exactly what comprises a URL are more complicated than may be obvious at first, but most of us are familiar with the typical URL, like this one:

http://www.ietf.org/rfc/rfc1630.txt

which describes the location of a document titled "Universal Resource Identifiers in WWW" that just so happens to define the formal details of what comprises a URL.

http://www.ietf.org
This portion of the address is enough to describe the location of a particular web server over the public internet. In fact, it does a little more than just specify the location of a machine. The http:// portion is called the scheme, and it identifies a particular protocol (i.e. a set of rules governing communication) and a related application, namely the web. What I just said isn't quite correct: at one time HTTP was used exclusively by the web, but that's no longer true. Many internet-based applications use the protocol because the popularity of the web ensures that data sent via HTTP isn't blocked or otherwise disrupted. You may not be accustomed to thinking of it as such, but the web itself is a highly distributed, network-based application.

/rfc/
This portion of the address specifies a directory on the server. It is equivalent to an absolute path on your local computer. The leading forward slash is the root of the web server's public document directory. Assuming no trickiness on the part of the server, all content lives under the document root. This tells us that rfc/ is a sub-directory contained within the document root. Though this directory happens to be located immediately under the root, this certainly need not be the case; in fact these paths can get quite long.

rfc1630.txt
We have now discussed all of the URL except for this part, which is the name of a specific file. The filename is no different from the filenames on your local computer.

Let's manually construct a path to one of the large-n.html pages of the gallery we have created. The address of my server is robreed.net, so I know that the beginning of my URL will be:

http://robreed.net

I keep all of my galleries together within a photos/ directory, which is contained in the document root:

http://robreed.net/photos/

Within photos/, each gallery is given its own folder. The name of the folder I have created for this tutorial is 'tutorial_gallery'. Putting this all together, the following URL brings me to the root of my photo gallery:

http://robreed.net/photos/tutorial_gallery/

We've already gone over the directory structure of the gallery, so it should make sense to you that when referring to the 'large-1.html' detail page, the complete URL will be:

http://robreed.net/photos/tutorial_gallery/large-1.html

The URL of the image that corresponds to that detail page is:

http://robreed.net/photos/tutorial_gallery/pictures/picture-1.jpg

and the thumbnail can be found at:

http://robreed.net/photos/tutorial_gallery/thumbnails/thumb-1.jpg

Notice that the address of the gallery is shared among all of these resources. Also notice that resources of each type (the large images, thumbnails, and HTML pages) share a more specific address with files of that same type.
If we use the term 'base address' to refer to the shared portions of these URLs, then we can talk about several significant base addresses:

The gallery base address: http://robreed.net/photos/tutorial_gallery/
The html pages base address: http://robreed.net/photos/tutorial_gallery/
The images base address: http://robreed.net/photos/tutorial_gallery/pictures/
The thumbnails base address: http://robreed.net/photos/tutorial_gallery/thumbnails/

Note that, given the structure of this particular gallery, the html pages base address and the gallery base address are identical. This need not be the case, and may not be for the gallery produced by your application. We can hard-code the base addresses into our script; for each photo, we need only append the associated filename to construct valid URLs to any of these resources. As the script runs, it will have access to the name of the file it is currently evaluating, so it will be a simple matter to generate the references we need as we go.

At this point we have discussed almost everything we need to put together our script. We have:

- Created a gallery at our server, which includes our photos with metadata in tow
- Identified all of the metadata tags we need to extract from our photos with the script, and the corresponding field names in our photo management application
- Determined all of the base addresses we need to generate references to our image files

Section 6: KML
The last thing we need to understand is the format of the KML files we want to produce. We've already looked at a fragment of KML. The full details can be found on Google's KML documentation pages, which include samples, tutorials, and a complete reference for the format. A quick look at the reference is enough to see that the language includes many elements and attributes, the majority of which we will not be including in our files. That statement correctly implies that it is not necessary for every KML file to include all elements and attributes. The converse, however, is not true, which is to say that every element and attribute contained in any KML file must be a part of the standard. A small subset of KML will get us most, if not all, of what you will typically see in Google Earth from other applications. Many of the features we will not be using deal with aspects of the language that are either not relevant to this project, e.g. ground overlays (GroundOverlay), which "draw an image overlay draped onto the terrain", or minute details for which the default values are sensible. There is no need to feel shortchanged because we are covering only a subset of the language. With the basic structure in place and a solid understanding of how to script the generation of KML, you will be able to extend the project to include any of the other components of the language as you see fit.

The structure of our KML file is as follows:

1.  <?xml version="1.0" encoding="UTF-8"?>
2.  <kml xmlns="http://www.opengis.net/kml/2.2">
3.    <Document>
4.      <Folder>
5.        <name>$folder_name</name>
6.        <description>$folder_description</description>
7.        <Placemark>
8.          <name>$placemark_name</name>
9.          <Snippet maxLines="1">
10.           $placemark_snippet
11.         </Snippet>
12.         <description><![CDATA[
13.           $placemark_description
14.         ]]></description>
15.         <Point>
16.           <coordinates>$longitude,$latitude</coordinates>
17.         </Point>
18.       </Placemark>
19.     </Folder>
20.   </Document>
21. </kml>

Line 1: XML header. Every valid KML file must start with this line, and nothing else is allowed to appear before it. As I've already mentioned, KML is an XML-based language, and XML requires this header.

Line 2: Namespace declaration. More specifically, this is the KML namespace declaration, and it is another formality; the value of the xmlns attribute identifies the KML namespace.

Line 3: <Document> is a container element representing the KML file itself. If we do not explicitly name the document by including a name element, then Google Earth will use the name of the KML file as the Document element's <name>. The Document container will appear on the Google Earth 'Sidebar' within the 'Places' panel. Optionally, we can control whether the container is closed or open by default (this setting can be toggled in Google Earth using a typical disclosure triangle). There are many other elements and attributes that can be applied to the Document element; refer to the KML Reference for the full details.

Line 4: <Folder> is another container element. The files we produce will include a single <Folder> containing all of our Placemarks, where each Placemark represents a single image. We could create multiple Folder elements to group our Placemarks according to some significant criteria. Think of the Folder element as being similar to your operating system's concept of a folder. At this point, note the structure of the fragment: the majority of it is contained within the Folder element. Folder, in turn, is an element of Document, which is itself within the <kml> container. It should make sense that everything in the file that is considered part of the language must be contained within the kml element. From the KML reference: "A Folder is used to arrange other Features hierarchically (Folders, Placemarks, NetworkLinks, or Overlays). A Feature is visible only if it and all its ancestors are visible."

Line 5: The name element identifies an object, in this case the Folder object. The text that appears between the name tags can be any plain text that will serve as an appropriate label.

Line 6: <description> is any text that seems to adequately describe the object. The description element supports both plain text and a subset of HTML. We'll consider issues related to using HTML in <description> in the discussion of Placemark, lines 12 - 14.

Lines 7 - 17 define a <Placemark> element. Note that Placemark contains a number of elements that also appear in Folder, including <name> (line 8) and <description> (lines 12 - 14). These elements serve the same purpose for Placemark as they do for the Folder element, but of course they refer to a different object.

I've said that <description> can include a subset of HTML in addition to plain text. Under XML, some characters have special meaning, and you may need to use these characters as part of the HTML included in your descriptions. Angle brackets (<, >), for example, surround tag names in HTML but serve a similar purpose in XML; when they are used strictly as part of the content, we want the XML parser to ignore them. We can accomplish this in a few different ways. We can use entity references, either numeric character references or character entity references, to indicate that the symbol appears as part of the data and should not be treated as part of the syntax of the language. The character '<', which is required to include an image as part of the description (something we will be doing, e.g. <img src=... />), can safely be included as the character entity reference '&lt;' or the numeric character reference '&#60;'. The character entity references may be easier to remember and recognize on sight, but they are limited to the small subset of characters for which they have been defined. The numeric references, on the other hand, can specify any ASCII character, so of the two types, numeric character references should be preferred. (There are also Unicode entity references, which can specify any character at all.) In the simple case of embedding short bits of HTML in KML descriptions, we can avoid the complication of these references altogether by enclosing the entire description in a CDATA section, which instructs the XML parser to ignore any special characters that appear until the end of the block set off by the CDATA tags. Notice the string '<![CDATA[' immediately after the opening <description> tag within <Placemark>, and the string ']]>' immediately before the closing </description> tag. If we simply place all of our HTML and plain text between those two strings, it will all be ignored by the parser, and we are not required to deal with special characters individually. This is how we'll handle the issue.

Lines 9 - 11: <Snippet>. The KML reference does a good job of clearly describing this element: "In Google Earth, this description is displayed in the Places panel under the name of the feature. If a Snippet is not supplied, the first two lines of the <description> are used. In Google Earth, if a Placemark contains both a description and a Snippet, the <Snippet> appears beneath the Placemark in the Places panel, and the <description> appears in the Placemark's description balloon. This tag does not support HTML markup. <Snippet> has a maxLines attribute, an integer that specifies the maximum number of lines to display. Default for maxLines is 2." Notice that in the block above, at line 9, I have included a maxLines attribute with a value of 1. Of course, you are free to substitute your own value for maxLines, or you can omit the attribute entirely to use the default value.

The only element we have yet to review is <Point>, and again we need only look to the official reference for a nice description: "A geographic location defined by longitude, latitude, and (optional) altitude. When a Point is contained by a Placemark, the point itself determines the position of the Placemark's name and icon." <Point> in turn contains the required element <coordinates>. From the KML reference: "A single tuple consisting of floating point values for longitude, latitude, and altitude (in that order)." The reference also informs us that altitude is optional, and in fact we will not be generating altitude values. Finally, the reference warns: "Do not include spaces between the three values that describe a coordinate." This seems like an easy mistake to make, so we'll need to be careful to avoid it.

There will be a number of <Placemark> elements, one for each of our images. The question is how to handle these elements. The answer is that we'll treat KML as a 'fill in the blanks' style template. All of the structural and syntactic bits will be hard-coded, e.g. the XML header, namespace declaration, all of the element and attribute labels, and even the whitespace, which is not strictly required but will make it much easier for us to inspect the resulting files in a text editor. These components will form our template.
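To make the 'fill in the blanks' idea concrete before we walk through the variables, here is a minimal sketch, in the spirit of (but not identical to) the script distributed with this tutorial, of how one Placemark might be emitted. The literal values, file names, and link text are placeholder assumptions; in the finished script they come from each photo's metadata and the base addresses discussed above.

use strict;
use warnings;

# Hard-coded base addresses, as discussed in Section 5.
my $pages_base  = 'http://robreed.net/photos/tutorial_gallery/';
my $thumbs_base = 'http://robreed.net/photos/tutorial_gallery/thumbnails/';

# Placeholder values; the real script reads these from the image metadata
# with Image::ExifTool and converts the GPS tags to signed decimal degrees,
# since KML expects "longitude,latitude" in that form.
my $thumb          = 'thumb-1.jpg';
my $page           = 'large-1.html';
my $placemark_name = 'New York City';
my ($latitude, $longitude) = (40.714172, -74.006393);

my $placemark_description =
      "<img src=\"$thumbs_base$thumb\" />"
    . " <a href=\"$pages_base$page\">View this photo in the gallery</a>";

# Emit one Placemark; the full script wraps a series of these in the
# Document/Folder structure shown above and writes the result to a .kml file.
print <<"PLACEMARK";
    <Placemark>
      <name>$placemark_name</name>
      <Snippet maxLines="1">$placemark_name</Snippet>
      <description><![CDATA[
        $placemark_description
      ]]></description>
      <Point>
        <coordinates>$longitude,$latitude</coordinates>
      </Point>
    </Placemark>
PLACEMARK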
The blanks are all of the text and HTML values of the various elements and attributes. We will use variables as placeholders everywhere we need dynamic data, i.e. values that change from one file to the next or from one execution of the script to the next. Take a look at the strings I've used in the block above: $folder_name, $folder_description, $placemark_name, and so on. For those of you unfamiliar with Perl, these are all valid variable names, chosen to indicate where the variables slot into the structure. These are the same names used in the source file distributed with the tutorial.

Section 7: Introduction to the Code

At this point, having discussed every aspect of the project, we can succinctly describe how to write the code. We'll do this in three stages of increasing granularity. Firstly, we'll finish this tutorial with a natural-language walk-through of the execution of the script. Secondly, if you look at the source file included with the project, you will notice immediately that comments dominate the code. Because instruction is as important an objective as the actual operation of the script, I use comments in the source to provide a running narration. For those of you who find this superabundant level of commenting a distraction, I'm distributing a second copy of the source file with most of the comments removed. Finally, there is the code itself. After all, source code is nothing more than a rigorously formal set of instructions that describe how to complete a task. Most programs, including this one, are a matter of transforming input in one form to output in another.
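The tutorial's script is written in Perl and is not reproduced in this excerpt, so the following is only a rough sketch of the same fill-in-the-blanks idea, written in Python. It reuses the $-style placeholder names mentioned above; the snippet text, image name, and coordinates are invented example values rather than data from the tutorial's source file.

# Illustrative sketch only -- not the tutorial's Perl script.
# string.Template uses $name placeholders, which mirrors the Perl-style
# variable names ($placemark_name and friends) described in the text.
from string import Template

placemark_template = Template("""\
        <Placemark>
            <name>$placemark_name</name>
            <Snippet maxLines="1">$placemark_snippet</Snippet>
            <description><![CDATA[$placemark_description]]></description>
            <Point>
                <coordinates>$longitude,$latitude</coordinates>
            </Point>
        </Placemark>""")

# Example fill-in with made-up values. Note the longitude,latitude order and
# the absence of spaces inside <coordinates>, as the KML Reference requires.
print(placemark_template.substitute(
    placemark_name="IMG_0001.jpg",
    placemark_snippet="Photo taken near the trailhead",
    placemark_description='<img src="IMG_0001.jpg" />',
    longitude="-122.0822035",
    latitude="37.4222899",
))

Filling in one such template per image, and wrapping the results in the Document and Folder elements described earlier, produces the kind of file the script is meant to generate.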

User Interaction and Email Automation in Symfony 1.3: Part2

Packt
18 Nov 2009
8 min read
Automated email responses Symfony comes with a default mailer library that is based on Swift Mailer 4, the detailed documentation is available from their web site at http://swiftmailer.org. After a user has signed up to our mailing list, we would like an email verification to be sent to the user's email address. This will inform the user that he/she has signed up, and will also ask him or her to activate their subscription. To use the library, we have to complete the following three steps: Store the mailing settings in the application settings file. Add the application logic to the action. Create the email template. Adding the mailer settings to the application Just like all the previous settings, we should add all the settings for sending emails to the module.yml file for the signup module. This will make it easier to implement any modifications required later. Initially, we should set variables like the email subject, the from name, the from address, and whether we want to send out emails within the dev environment. I have added the following items to our signup module's setting file, apps/frontend/config/module.yml: dev: mailer_deliver: true all: mailer_deliver: true mailer_subject: Milkshake Newsletter mailer_from_name: Tim mailer_from_email: no-reply@milkshake All of the settings can be contained under the all label. However, you can see that I have introduced a new label called dev. These labels represent the environments, and we have just added a specific variable to the dev environment. This setting will allow us to eventually turn off the sending of emails while in the dev environment. Creating the application logic Triggering the email should occur after the user's details have been saved to the database. To demonstrate this, I have added the highlighted amends to the submit action in the apps/frontend/modules/signup/actions/actions.class.php file, as shown in the following code: public function executeSubmit(sfWebRequest $request) { $this->form = new NewsletterSignupForm(); if ($request->isMethod('post') && $this->form-> bindAndSave($request->getParameter($this->form-> getName()))) { //Include the swift lib require_once('lib/vendor/swift-mailer/lib/swift_init.php'); try{ //Sendmail $transport = Swift_SendmailTransport::newInstance(); $mailBody = $this->getPartial('activationEmail', array('name' => $this->form->getValue('first_name'))); $mailer = Swift_Mailer::newInstance($transport); $message = Swift_Message::newInstance(); $message->setSubject(sfConfig::get('app_mailer_subject')); $message->setFrom(array(sfConfig:: get('app_mailer_from_email') => sfConfig::get('app_mailer_from_name'))); $message->setTo(array($this->form->getValue('email')=> $this-> form->getValue('first_name'))); $message->setBody($mailBody, 'text/html'); if(sfConfig::get('app_mailer_deliver')) { $result = $mailer->send($message); } } catch(Exception $e) { var_dump($e); exit; } $this->redirect('@signup'); } //Use the index template as it contains the form $this->setTemplate('index'); } Symfony comes with a sfMailer class that extends Swift_Mailer. To send mails you could simply implement the following Symfony method: $this->getMailer()->composeAndSend('from@example.com', 'to@example.com', 'Subject', 'Body'); Let's walk through the process: Instantiate the Swift Mailer. Retrieve the email template (which we will create next) using the $this->getPartial('activationEmail', array('name' => $this->form->getValue('first_name'))) method. Breaking this down, the function itself retrieves a partial template. 
The first argument is the name of the template to retrieve (that is activationEmail in our example) which, if you remember, means that the template will be called _activationEmail.php. The next argument is an array that contains variables related to the partial template. Here, I have set a name variable. The value for the name is important. Notice how I have used the value within the form object to retrieve the first_name value. This is because we know that these values have been cleaned and are safe to use. Set the subject, from, to, and the body items. These functions are Swift Mailer specific: setSubject(): It takes a string as an argument for the subject setFrom(): It takes the name and the mailing address setTo(): It takes the name and the mailing address setBody(): It takes the email body and mime type. Here we passed in our template and set the email to text/html Finally we send the email. There are more methods in Swift Mailer. Check out the documentation on the Swift Mailer web site (http://swiftmailer.org/). The partial email template Lastly, we need to create a partial template that will be used in the email body. In the templates folder of the signup module, create a file called _activationEmail.php and add the following code to it: Hi <?php echo $name; ?>, <br /><br /> Thank you for signing up to our newsletter. <br /><br /> Thank you, <br /> <strong>The Team</strong> The partial is no different from a regular template. We could have opted to pass on the body as a string, but using the template keeps our code uniform. Our signup process now incorporates the functionality to send an email. The purpose of this example is to show you how to send an automated email using a third-party library. For a real application, you should most certainly implement a two-phase option wherein the user must verify his or her action. Flashing temporary values Sometimes it is necessary to set a temporary variable for one request, or make a variable available to another action after forwarding but before having to delete the variable. Symfony provides this level of functionality within the sfUser object known as a flash variable. Once a flash variable has been set, it lasts until the end of the overall request before it is automatically destroyed. Setting and getting a flash attribute is managed through two of the sfUser methods. Also, you can test for a flash variable's existence using the third method of the methods listed here: $this->getUser()->setFlash($name, $value, $persist = true) $this->getUser()->getFlash($name) $this->getUser()->hasFlash($name) Although a flash variable will be available by default when a request is forwarded to another action, setting the argument to false will delete the flash variable before it is forwarded. To demonstrate how useful flash variables can be, let's readdress the signup form. After a user submits the signup form, the form is redisplayed. I further mentioned that you could create another action to handle a 'thank you' template. However, by using a flash variable we will not have to do so. As a part of the application logic for the form submission, we can set a flash variable. Then after the action redirects the request, the template can test whether there is a flash variable set. If there is one, the template should show a message rather than the form. 
Let's add the $this->getUser()->setFlash() function to the submit action in the apps/frontend/modules/signup/actions/actions.class.php file: //Include the swift lib require_once('lib/vendor/swift-mailer/lib/swift_init.php'); //set Flash $this->getUser()->setFlash('Form', 'completed'); try{ I have added the flash variable just under the require_once() statement. After the user has submitted a valid form, this flash variable will be set with the name of the Form and have a value completed. Next, we need to address the template logic. The template needs to check whether a flash variable called Form is set. If it is not set, the template shows the form. Otherwise it shows a thank you message. This is implemented using the following code: <?php if(!$sf_user->hasFlash('Form')): ?> <form action="<?php echo url_for('@signup_submit') ?>" method="post" name="Newsletter"> <div style="height: 30px;"> <div style="width: 150px; float: left"> <?php echo $form['first_name']->renderLabel() ?></div> <?php echo $form['first_name']->render(($form['first_name']-> hasError())? array('class'=>'boxError'): array ('class'=>'box')) ?> <?php echo ($form['first_name']->hasError())? ' <span class="errorMessage">* '.$form['first_name']->getError(). '</span>': '' ?> <div style="clear: both"></div> </div> .... </form> <?php else: ?><h1>Thank you</h1>You are now signed up.<?php endif ?> The form is now wrapped inside an if/else block. Accessing the flash variables from a template is done through $sf_user. To test if the variable has been set, I have used the hasFlash() method, $sf_user->hasFlash('Form'). The else part of the statement contains the text rather than the form. Now if you submit your form, you will see the result as shown in the following screenshot: We have now implemented an entire module for a user to sign up for our newsletter. Wouldn't it be really good if we could add this module to another application without all the copying, pasting, and fixing?

Using different jQuery event listeners for responsive interaction

Packt
16 Sep 2013
9 min read
(For more resources related to this topic, see here.) Getting Started First we want to create JavaScript that transforms a select form element into a button widget that changes the value of the form element when a button is pressed. So the first part of that task is to build a form with a select element. How to do it This part is simple; start by creating a new web page. Inside it, create a form with a select element. Give the select element some options. Wrap the form in a div element with the class select. See the following example. I have added a title just for placement. <h2>Super awesome form element</h2><div class="select"> <form> <select> <option value="1">1</option> <option value="Bar">Bar</option> <option value="3">3</option> </select> </form></div> Next, create a new CSS file called desktop.css and add a link to it in your header. After that, add a media query to the link for screen media and min-device-width:321px. The media query causes the browser to load the new CSS file only on devices with a screen larger than 320 pixels. Copy and paste the link to the CSS, but change the media query to screen and min-width:321px. This will help you test and demonstrate the mobile version of the widget on your desktop. <link rel="stylesheet" media="screen and (min-device-width:321px)" href="desktop.css" /><link rel="stylesheet" media="screen and (min-width:321px)" href="desktop.css" /> Next, create a script tag with a link to a new JavaScript file called uiFunctions.js and then, of course, create the new JavaScript file. Also, create another script element with a link to the recent jQuery library. <script src = "http://code.jquery.com/jquery-1.8.2.min.js"></script><script src = "uiFunctions.js"></script> Now open the new JavaScript file uiFunctions.js in your editor and add instructions to do something on a document load. $(document).ready(function(){ //Do something}); The first thing your JavaScript should do when it loads is determine what kind of device it is on—a mobile device or a desktop. There are a few logical tests you can utilize to determine whether the device is mobile. You can test navigator.userAgent; specifically, you can use the .test() method, which in essence tests a string to see whether an expression of characters is in it, checks the window width, and checks whether the screen width is smaller than 600 pixels. For this article, let's use all three of them. Ultimately, you might just want to test navigator.userAgent. Write this inside the $(document).ready() function. if( /Android|webOS|iPhone|iPad|iPod|BlackBerry/i.test(navigator.userAgent) || $(window).width()<600 ||window.screen.width<600) { //Do something for mobile} else { //Do something for the desktop} Inside, you will have to create a listener for the desktop device interaction event and the mouse click event, but that is later. First, let's write the JavaScript to create the UI widget for the select element. Create a function that iterates for each select option and appends a button with the same text as the option to the select div element. This belongs inside the $(document).ready() function, but outside and before the if condition. The order of these is important. $('select option').each(function(){ $('div.select').append('<button>'+$(this).html()+'</button>');}); Now, if you load the page on your desktop computer, you will see that it generates new buttons below the select element, one for each select option. You can click on them, but nothing happens. 
What we want them to do is change the value of the select form element. To do so, we need to add an event listener to the buttons inside the else condition. For the desktop version, you need to add a .click() event listener with a function. Inside the function, create two new variables, element and itemClicked. Make element equal the string button, and itemClicked, the jQuery object event target, or $(event.target). The next line is tricky; we're going to use the .addClass() method to add a selected class to the element variable :nth-child(n). Also, the n of the :nth-child(n) should be a call to a function named .eventAction(), to which we will add the integer 2. We will create the function next. $('button').click(function(){ var element = 'button'; var itemClicked = $(event.target); $(element+':nth-child(' + (eventAction(itemClicked,element) + 2) + ')').addClass('selected');}); Next, outside the $(document).ready() function, create the eventAction() function. It will receive the variables itemClicked and element. The reason we make this function is because it performs the same functions for both the desktop click event and the mobile tap or long tap events. function eventAction(itemClicked,element){ //Do something!}; Inside the eventAction() function, create a new variable called choiceAction. Make choiceAction equal to the index of the element object in itemClicked, or just take a look at the following code: var choiceAction = $(element).index(itemClicked); Next, use the .removeClass() method to remove the selected class from the element object. $(element).removeClass('selected'); There are only two more steps to complete the function. First, add the selected attribute to the select field option using the .eq() method and the choiceAction variable. Finally, remember that when the function was called in the click event, it was expecting something to replace the n in :nth-child(n); so end the function by returning the value of the choiceAction variable. $('select option').eq(choiceAction).attr('selected','selected');return choiceAction; That takes care of everything but the mobile event listeners. The button style will be added at the end of the article. See how it looks in the following screenshot: This will be simple. First, using jQuery's $.getScript() method, add a line to retrieve the jQuery library in the first if condition where we tested navigator.userAgent and the screen sizes to see whether the page was loaded into the viewport of a mobile device. The jQuery Mobile library will transform the HTML into a mobile, native-looking app. $.getScript("http://code.jquery.com/mobile/1.2.0/jquery.mobile-1.2.0.min.js"); The next step is to copy the desktop's click event listener, paste it below the $.getScript line, and change some values. Replace the .click() listener with a jQuery Mobile event listener, .tap() or .taphold(), change the value of the element variable to the string .uti-btn, and append the daisy-chained .parent().prev() methods to the itemClicked variable value, $(event.target). Replace the line that calls the eventAction() function in the :nth-child(n) selector with a more simple call to eventAction(), with the variables itemClicked and element. $('button').click(function(){ var element = '.ui-btn'; var itemClicked = $(event.target).parent().prev(); eventAction(itemClicked,element);}); When you click on the buttons to update the select form element in the mobile device, you will need to instruct jQuery Mobile to refresh its select menu. 
jQuery Mobile has a method to refresh its select element. $('select').selectmenu("refresh",true); That is all you need for the JavaScript file. Now open the HTML file and add a few things to the header. First, add a style tag to make the select form element and .ui-select hidden with the CSS display:none;. Next, add links to the jQuery Mobile stylesheets and desktop.css with a media query for media screen and max-width: 600px; or max-device-width:320px;. <style> select,.ui-select{display:none;}</style><link rel="stylesheet" media="screen and (max-width:600px)" href="http://code.jquery.com/mobile/1.2.0/jquery.mobile-1.2.0.min.css"><link rel="stylesheet" media="screen and (min-width:600px)" href="desktop.css" /> When launched on a mobile device, the widget will look like this: Then, open the desktop.css file and create some style for the widget buttons. For the button element, add an inline display, padding, margins, border radius, background gradient, box shadow, font color, text shadow, and a cursor style. button { display:inline; padding:8px 15px; margin:2px; border-top:1px solid #666; border-left:1px solid #666; border-bottom:1px solid #333; border-right:1px solid #333; border-radius:5px; background: #7db9e8; /* Old browsers */ background:-moz-linear-gradient(top, #7db9e8 0%,#207cca 49%,#2989d8 50%,#1e5799 100%); /* FF3.6+ */ background:-webkit-gradient(linear,left top,left bottom, color-stop(0%,#7db9e8), color-stop(49%,#207cca), color-stop(50%,#2989d8), color-stop(100%,#1e5799)); /* Chrome,Safari4+ */ background:-webkit-linear-gradient(top,#7db9e8 0%,#207cca 49%, #2989d8 50%,#1e5799 100%); /* Chrome10+,Safari5.1+ */ background:-o-linear-gradient(top,#7db9e8 0%,#207cca 49%,#2989d8 50%,#1e5799 100%); /* Opera 11.10+ */ background:-ms-linear-gradient(top,#7db9e8 0%,#207cca 49%,#2989d8 50%,#1e5799 100%); /* IE10+ */ background:linear-gradient(to bottom,#7db9e8 0%,#207cca 49%,#2989d8 50%,#1e5799 100%); /* W3C */ filter:progid:DXImageTransform.Microsoft.gradient ( startColorstr='#7db9e8', endColorstr='#1e5799',GradientType=0 ); /* IE6-9 */ color:white; text-shadow: -1px -1px 1px #333; box-shadow: 1px 1px 4px 2px #999; cursor:pointer;} Finally, add CSS for the .selected class that was added by the JavaScript. This CSS will change the button to look as if the button has been pressed in. .selected{ border-top:1px solid #333; border-left:1px solid #333; border-bottom:1px solid #666; border-right:1px solid #666; color:#ffff00; box-shadow:inset 2px 2px 2px 2px #333; background: #1e5799; /* Old browsers */ background:-moz-linear-gradient(top,#1e5799 0%,#2989d8 50%, #207cca 51%, #7db9e8 100%); /* FF3.6+ */ background:-webkit-gradient(linear,left top,left bottom, color-stop(0%,#1e5799),color-stop(50%,#2989d8), color-stop(51%,#207cca),color-stop(100%,#7db9e8)); /* Chrome,Safari4+ */ background:-webkit-linear-gradient(top, #1e5799 0%,#2989d8 50%, #207cca 51%,#7db9e8 100%); /* Chrome10+,Safari5.1+ */ background:-o-linear-gradient(top, #1e5799 0%,#2989d8 50%,#207cca 51%,#7db9e8 100%); /* Opera 11.10+ */ background:-ms-linear-gradient(top, #1e5799 0%,#2989d8 50%,#207cca 51%,#7db9e8 100%); /* IE10+ */ background: linear-gradient(to bottom, #1e5799 0%,#2989d8 50%,#207cca 51%,#7db9e8 100%); /* W3C */ filter: progid:DXImageTransform.Microsoft.gradient( startColorstr='#1e5799', endColorstr='#7db9e8',GradientType=0 ); /* IE6-9 */} How it works This uses a combination of JavaScript and media queries to build a dynamic HTML form widget. 
The JavaScript tests the user agent and screen size to see if it is a mobile device and responsively delivers a different event listener for the different device types. In addition to that, the look of the widget will be different for different devices. Summary In this article we learned how to create an interactive widget that uses unobtrusive JavaScript, which uses different event listeners for desktop versus mobile devices. This article also helped you build your own web app that can transition between the desktop and mobile versions without needing you to rewrite your entire JavaScript code. Resources for Article : Further resources on this subject: Video conversion into the required HTML5 Video playback [Article] LESS CSS Preprocessor [Article] HTML5 Presentations - creating our initial presentation [Article]

Building a Consumer Review Website using WordPress 3

Packt
06 Aug 2010
15 min read
(For more resources on Wordpress, see here.) Building a consumer review website will allow you to supply consumers with the information that they seek and then, once they've decided to make a purchase, your site can direct them to a source for the product or service. This process can ultimately allow you to earn some nice commission checks because it's only logical that you would affiliate yourself with a number of the sites to which you will be directing consumers. The great thing about using the WP Review Site plugin to build your consumer review website is that you can provide people with an unbiased source of public opinions on any product or service that you can imagine. You will never have to resort to the hard sell in order to drive traffic to the companies that you've affiliated yourself with. Instead, consumers can research the reviews posted on your website and,ultimately, make a purchase feeling confident that they're making the right decision. In this article, you will learn about the following: Present reviews in the most convenient way possible for visitors browsing your site Specify the ratings criteria that site visitors will use when reviewing the products or services included on your website Display informational comparison tables on your site's index and category pages Provide visitors with the location of local businesses using Google Maps Perform the additional steps required when writing a post now that the WP Review Site plugin has been introduced into the process Perform either automatic and manual integration so that you can use a theme of your own rather than either of the ones provided with this plugin Once this project is complete, you will have succeeded in creating a site that's similar to the one shown in the following screenshot:   Introducing WP Review Site With the WP Review Site plugin you will be able to build a consumer review site where visitors can share their opinions about the products or services of your choosing. The plugin, which can be found at WP Review Site, can be used to build a dedicated review site or, if you would like consumer reviews to make up only a subsection of your website, then you can specify certain categories where they should appear. This plugin gives you complete control over where ratings appear and where they don't since you can choose to include or exclude them on any category, page, or post. The WP Review Site plugin seamlessly integrates with WordPress by, among other things, altering the normal appearance and functionality of the comments submission form. This plugin provides visitors with a way to write a review and assign stars to the ratings categories that you previously defined. They can also write a review and opt to provide no stars without harming the overall rating presented on your site, since no stars is interpreted as though no rating was given. WP Review Site plugin makes it easy for you to present your visitors with concise information. Using the features available with this plugin, you can build comparison tables based upon your posts and user reviews. In order to accomplish this, you will need to configure a few settings and then the plugin will take care of the rest. Typically, WordPress displays posts in chronological order, but that doesn't make much sense on a consumer review site where visitors want to view posts based upon other factors such as the number of positive reviews that a particular product or service has received. 
The developer behind WP Site Review took that into consideration and has included two alternative sorting methods for your site's posts. The developer has even included a Bayesian weighting feature so that reviews are ordered in the most logical way possible. Right about now, you're probably wondering what Bayesian weighting is and how it works. What it does is provide a way to mathematically calculate the rating of products and/or services based upon the credibility of the votes that have been cast. If an item receives only a few votes, then it can't be said with any certainty that that's how the general public feels. If an item receives several votes, then it can be safely assumed that many others hold the same opinion. So, with Bayesian weighting, a product that has received only one five star review won't outrank another that has received fifteen four star reviews. As the product that received one five star review garners more ratings, its reviews will grow in credibility and, if it continues to receive high ratings, it will eventually become credible enough to outrank the other reviews. If you're planning to create a website where visitors can come and review local businesses, then you might consider this plugins ability to automatically embed Google Maps quite handy. After configuring the settings on the plugin's Google Maps screen you will be able to type the address for a business into a custom field when writing a post and then the plugin will take care of the rest. The WP Review Site plugin also includes two sidebar widgets that can used with any widget-ready theme. These widgets will allow you to display a list of top rated items and a list of recent reviews. Lastly, the themes provided with this plugin include built-in support for the hReview microformat. This means that Google will easily be able to extract and highlight reviews from your website. That feature will prove to be very beneficial for driving search engine traffic to your site. Installing WP Review Site Once you've installed WordPress you can then concentrate on the installation of the WP Review Site plugin and its accompanying themes. First, extract the wpreviewsite.zip archive. Inside you will find a plugins folder and a themes folder. Within the plugins folder is another folder named review-site. Since none of these folders are zipped, you will need to upload them using either an FTP program or the file manager provided by your web host. So, upload the review-site folder to the wp-content/plugins directory on your server. If you plan to use one of the themes provided with this plugin, then you will next need to upload the contents of the themes folder to the wp-content/themes directory. Setting up and configuring WP Review Site With the installation process complete, you will now need to activate the WP Review Site plugin. Once that's finished, a Review Site menu will appear on the left side of your screen. This menu contains links to the settings screens for this plugin. Before you delve into the configuration process you must first activate the theme that you plan to use on your consumer review website. Using one of the provided themes is a bit easier. That's because using any other theme will mean that you must integrate the functionality of WP Review Site into it. Now that you know the benefits offered by the themes that are bundled with this plugin, click on Appearance | Themes. Once there, activate either Award Winning Hosts, Bonus Black, or a theme of your choice. 
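Before moving on to the settings screens, it may help to make the Bayesian weighting described earlier concrete. The plugin's exact formula isn't shown in this excerpt, so the following Python sketch uses a common Bayesian (weighted) average with assumed prior values, purely to illustrate why one five-star review does not outrank fifteen four-star reviews.

# Illustrative only -- not WP Review Site's actual code or constants.
# weighted = (v / (v + m)) * R + (m / (v + m)) * C
#   R = the item's own mean rating, v = how many ratings it has,
#   C = an overall (site-wide) mean rating, m = a minimum-votes threshold.
def weighted_rating(ratings, site_mean=3.5, min_votes=10):
    votes = len(ratings)
    if votes == 0:
        return site_mean
    item_mean = sum(ratings) / votes
    weight = votes / (votes + min_votes)
    return weight * item_mean + (1 - weight) * site_mean

print(round(weighted_rating([5]), 2))        # 3.64 -- one 5-star review, little credibility yet
print(round(weighted_rating([4] * 15), 2))   # 3.8  -- fifteen 4-star reviews rank higher

As the first product collects more five-star ratings, its weight grows and it can eventually overtake the second, which matches the behaviour described above.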
General Settings Navigate to Review Site | General Settings to be taken to the first of the WP Review Site settings screens. On this screen, Sort Posts By is the first setting that you will encounter. Rather than displaying reviews in the normal chronological order used by WordPress you should, instead, select either the Average User Rating (Weighted) or the Number of Reviews/Comments option. Either of these settings will provide a much more user-friendly experience for your visitors. If you want to make it impossible for site visitors to submit a comment without also choosing a rating, tick the checkbox next to Require Ratings with All Comments. If you don't want to make this a requirement, then you can leave this setting as is. This setting will, of course, only apply to posts that you would like your visitors to rate. On normal posts, that don't include rating stars in the comment form area, it will still be possible for your visitors to submit a comment. When using one of the themes provided with the plugin, none of the other settings on this screen need to be configured. If you would like to integrate this plugin into a different theme, then, depending upon the method that you choose, you may need to revisit this screen later on. No matter how you're handling the theme issue, you can, for now, just click Save Settings before proceeding to the next screen. Rating Categories To access the next settings screen, click on Review Site | Rating Categories. Here you can add categories for people to rate when submitting reviews. These categories shouldn't be confused with the categories used in WordPress for organizational purposes. These WP Review Site categories are more like ratings criteria. By default, WP Review Site includes a category called Overall Rating, but you can click the remove link to delete it if you like. To add your first rating category, simply enter its title into the Add a Category textbox and then click Save Settings. The screen will then refresh and your newly created rating category will now appear under the Edit Rating Categories section of the screen. To add additional rating categories, simply repeat the process that you previously completed. Once you've finished adding rating categories, you will next need to turn your attention to the Bulk Apply Rating Categories section of the screen. In the Edit Rating Categories area you will see all of the rating categories that you just finished adding to your site. If you want to simplify matters, and apply these rating categories to all of the posts on your site, tick the checkbox next to each of the available rating categories. Then, from the Apply to Posts in Category drop-down menu, select All Categories. This is most likely the configuration that you will use if you're building a website entirely dedicated to providing consumer reviews. Once you've finished, click Save Settings. If you, instead, want your newly added rating categories to only appear on certain categories, then bypass the Edit Rating Categories area for now and first look to the Apply to Posts in Category settings area. Currently this will only show All Categories and Uncategorized. The lack of categories in this menu is being caused by two things. First, you haven't added any WordPress categories to your site yet. Secondly, categories won't be included in this menu until they contain at least one post. To solve part of this problem, open a new browser window and then, navigate to Posts | Categories. 
Then, add the categories that you would like to include on your website. Now, click on Posts | Edit to visit the Edit Posts screen. At the moment, the Hello world! post is the only one published on your site and you can use it to force your site's categories to appear in the Apply to Posts in Category drop-down menu. So, hover over the title of this post and then, from the now visible set of links, click Quick Edit. In the Categories section of the Quick Edit configuration area, tick the checkbox next to each of the categories found on your site. Then, click Update Post. After content has been added to each of your site's categories, you can delete the Hello world! post, since you will no longer need to use it to force the categories to appear in the Apply to Posts in Category drop-down menu. Now, return to the Rating Categories screen and then select the first category that you want to configure from the Apply to Posts in Category drop-down menu. With that selected, in the Edit Rating Categories area, tick the checkbox next to each rating category that you want to appear within that WordPress category. Then, click Save Settings. Repeat this process for each of the WordPress categories to which you would like rating categories to be added. Comparison Tables If you wish, you can add a comparison table to either the home page or the category pages on your site. To do this, you need to visit the Comparison Tables screen, so click on Review Site | Comparison Tables. If you want to display a comparison table on your home page, then tick the checkbox next to Display a Comparison Table on Home Page. If you would like to include all of your site's categories in the comparison table that will be displayed on the home page, then leave the Categories To Display On Home Page textbox as is. However, if you would prefer to include only certain categories, then enter their category IDs, separated by commas, into the textbox instead. You can learn the ID numbers that have been assigned to each of your site's categories by opening a new browser window and then navigating to Posts | Categories. Once there, hover over the title of each of the categories found on the right hand side of your screen. As you do, look at the URL that appears in your browser's status bar and make a note of the number that appears directly after tag_ID=. That's the number that you will need to enter in the Comparison Table screen. If you want to display a comparison table in one or more categories, then tick the checkbox next to Display a Comparison Table on Category Page(s). Now, return to the Comparison Table screen. If you want a comparison table to be displayed on each of your category pages, leave the Categories To Display Comparison Table On textbox at its default. Otherwise, enter a list of comma separated category IDs into the textbox for the categories where you want to display comparison tables. The Number of Posts in the Table setting is currently set to 5, but you can enter another value if you would like a different number of posts to be included in each comparison table. When writing posts, you might use custom fields to include additional information. If you would like that information to be displayed in your comparison tables you will need to enter the names of those fields, separated by commas, into the Custom Fields to Display textbox. Lastly, you can change the text that appears in the Text for the Visit Site link in the Table if you wish or you may leave it at its default. 
With these configurations complete, click Save Settings. In this screenshot, you can see what a populated comparison table will look like on your website: Google Maps If you plan on featuring reviews centered around local businesses, then you might want to consider adding Google Maps to your site. This will make it easy for visitors to see exactly where each business is located. You can access this settings screen by clicking on Review Site | Google Maps. To activate this feature, tick the checkbox next to Display a Google Map on Posts/Pages with mapaddress Custom Field. Next, you need to use the Map Position setting to specify where these Google Maps will appear in relation to the content. You can choose to use either the Top of Post or Bottom of Post position. The Your Google Maps API Key textbox is next. Here you will need to enter a Google Maps API key. If you don't have a Google Maps API key for this domain, then you will need to visit Google to generate one. To do this, right-click on the link provided on the Google Maps screen and then open that link in a new browser window. You will then be taken to the Google Maps API sign up screen, which can be found at Google Maps API sign up. If you've ever signed up to use any of Google's services, then you can use that username and password to log in. If you don't have an account with Google, create one now. Take a moment to read the information and terms presented on the Google Maps API sign up page. After you've finished reviewing this text, if it's acceptable to you, enter the URL for your website into the My web site URL textbox and then click Generate API Key. You will then be taken to a thank you screen where your API key will be displayed. Copy the API key and then return to the Google Maps screen on your website. Once there, paste your API key into the textbox for Your Google Maps API Key. The Map Width and Map Height settings are next. By default, these are configured to 400px and 300px. If you would prefer that the maps be displayed at a different size, then enter new values into each of these textboxes. The last setting is Map Zoom Level (1-5), which is currently set to 3. This setting should be fine, but you may change it if you wish. Finally, click Save Settings. When you publish a post that includes the mappadress custom field, this is what the Google Map will look like on your site.

Search Using Beautiful Soup

Packt
20 Jan 2014
6 min read
(For more resources related to this topic, see here.) Searching with find_all() The find() method was used to find the first result within a particular search criteria that we applied on a BeautifulSoup object. As the name implies, find_all() will give us all the items matching the search criteria we defined. The different filters that we see in find() can be used in the find_all() method. In fact, these filters can be used in any searching methods, such as find_parents() and find_siblings(). Let us consider an example of using find_all(). Finding all tertiary consumers We saw how to find the first and second primary consumer. If we need to find all the tertiary consumers, we can't use find(). In this case, find_all() will become handy. all_tertiaryconsumers = soup.find_all(class_="tertiaryconsumerslist") The preceding code line finds all the tags with the = "tertiaryconsumerlist" class. If given a type check on this variable, we can see that it is nothing but a list of tag objects as follows: print(type(all_tertiaryconsumers)) #output <class 'list'> We can iterate through this list to display all tertiary consumer names by using the following code: for tertiaryconsumer in all_tertiaryconsumers: print(tertiaryconsumer.div.string) #output lion tiger Understanding parameters used with find_all() Like find(), the find_all() method also has a similar set of parameters with an extra parameter, limit, as shown in the following code line: find_all(name,attrs,recursive,text,limit,**kwargs) The limit parameter is used to specify a limit to the number of results that we get. For example, from the e-mail ID sample we saw, we can use find_all() to get all the e-mail IDs. Refer to the following code: email_ids = soup.find_all(text=emailid_regexp) print(email_ids) #output [u'abc@example.com',u'xyz@example.com',u'foo@example.com'] Here, if we pass limit, it will limit the result set to the limit we impose, as shown in the following example: email_ids_limited = soup.find_all(text=emailid_regexp,limit=2) print(email_ids_limited) #output [u'abc@example.com',u'xyz@example.com'] From the output, we can see that the result is limited to two. The find() method is find_all() with limit=1. We can pass True or False values to find the methods. If we pass True to find_all(), it will return all tags in the soup object. In the case of find(), it will be the first tag within the object. The print(soup.find_all(True)) line of code will print out all the tags associated with the soup object. In the case of searching for text, passing True will return all text within the document as follows: all_texts = soup.find_all(text=True) print(all_texts) #output [u'n', u'n', u'n', u'n', u'n', u'plants', u'n', u'100000', u'n', u'n', u'n', u'algae', u'n', u'100000', u'n', u'n', u'n', u'n', u'n', u'deer', u'n', u'1000', u'n', u'n', u'n', u'rabbit', u'n', u'2000', u'n', u'n', u'n', u'n', u'n', u'fox', u'n', u'100', u'n', u'n', u'n', u'bear', u'n', u'100', u'n', u'n', u'n', u'n', u'n', u'lion', u'n', u'80', u'n', u'n', u'n', u'tiger', u'n', u'50', u'n', u'n', u'n', u'n', u'n'] The preceding output prints every text content within the soup object including the new-line characters too. Also, in the case of text, we can pass a list of strings and find_all() will find every string defined in the list: all_texts_in_list = soup.find_all(text=["plants","algae"]) print(all_texts_in_list) #output [u'plants', u'algae'] This is same in the case of searching for the tags, attribute values of tag, custom attributes, and the CSS class. 
For finding all the div and li tags, we can use the following code line: div_li_tags = soup.find_all(["div","li"]) Similarly, for finding tags with the producerlist and primaryconsumerlist classes, we can use the following code lines: all_css_class = soup.find_all(class_=["producerlist","primaryconsumerlist"]) Both find() and find_all() search an object's descendants (that is, all children coming after it in the tree), their children, and so on. We can control this behavior by using the recursive parameter. If recursive = False, search happens only on an object's direct children. For example, in the following code, search happens only at direct children for div and li tags. Since the direct child of the soup object is html, the following code will give an empty list: div_li_tags = soup.find_all(["div","li"],recursive=False) print(div_li_tags) #output [] If find_all() can't find results, it will return an empty list, whereas find() returns None. Navigation using Beautiful Soup Navigation in Beautiful Soup is almost the same as the searching methods. In navigating, instead of methods, there are certain attributes that facilitate the navigation. So each Tag or NavigableString object will be a member of the resulting tree with the Beautiful Soup object placed at the top and other objects as the nodes of the tree. The following code snippet is an example for an HTML tree: html_markup = """<div class="ecopyramid"> <ul id="producers"> <li class="producerlist"> <div class="name">plants</div> <div class="number">100000</div> </li> <li class="producerlist"> <div class="name">algae</div> <div class="number">100000</div> </li> </ul> </div>""" For the previous code snippet, the following HTML tree is formed: In the previous figure, we can see that Beautiful Soup is the root of the tree, the Tag objects make up the different nodes of the tree, while NavigableString objects make up the leaves of the tree. Navigation in Beautiful Soup is intended to help us visit the nodes of this HTML/XML tree. From a particular node, it is possible to: Navigate down to the children Navigate up to the parent Navigate sideways to the siblings Navigate to the next and previous objects parsed We will be using the previous html_markup as an example to discuss the different navigations using Beautiful Soup. Summary In this article, we discussed in detail the different search methods in Beautiful Soup, namely, find(), find_all(), find_next(), and find_parents(); code examples for a scraper using search methods to get information from a website; and understanding the application of search methods in combination. We also discussed in detail the different navigation methods provided by Beautiful Soup, methods specific to navigating downwards and upwards, and sideways, to the previous and next elements of the HTML tree. Resources for Article: Further resources on this subject: Web Services Testing and soapUI [article] Web Scraping with Python [article] Plotting data using Matplotlib: Part 1 [article]
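As a quick supplement to the navigation directions listed above (down to the children, up to the parent, sideways to the siblings), here is a short, assumed sketch using the html_markup snippet from the navigation section; it relies only on standard Beautiful Soup 4 attributes and is not taken from the book's own examples.

# Illustrative sketch of the navigation attributes, using the earlier html_markup.
from bs4 import BeautifulSoup, Tag

html_markup = """<div class="ecopyramid">
<ul id="producers">
<li class="producerlist">
<div class="name">plants</div>
<div class="number">100000</div>
</li>
<li class="producerlist">
<div class="name">algae</div>
<div class="number">100000</div>
</li>
</ul>
</div>"""

soup = BeautifulSoup(html_markup, "html.parser")
first_li = soup.find("li", class_="producerlist")

print(first_li.div.string)                          # plants -- down to a child
print(first_li.parent["id"])                        # producers -- up to the parent
print(first_li.find_next_sibling("li").div.string)  # algae -- sideways to a sibling
print([child.name for child in first_li.children if isinstance(child, Tag)])
# ['div', 'div'] -- only the direct Tag children, whitespace strings filtered out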

Creating your First Complete Moodle Theme

Packt
11 May 2010
11 min read
So let's crack on... Creating a new theme Finding a base theme to create your Moodle theme on is the first thing that you need to do.There are, however, various ways to do this; you can make a copy of the standard themeand rename it as you did in part of this article, or you can use a parent theme that isalso based on the standard theme. The important point here is that the standard theme that comes with Moodle is the cornerstone of the Moodle theming process. Every other Moodle theme should be based upon this theme, and would normally describe the differences from the standard theme. Although this method does work and is a simple way to get started with Moodle theming, it does cause problems as new features could get added to Moodle that might cause your theme to display or function incorrectly. The standard theme will always be updated before a new release of Moodle is launched. So if you do choose to make a copy of the standard theme and change its styles, it would be best to make sure that you use a parent theme as well. In this way, the parent theme will be your base theme along with the changes that you make to your copy of the standard theme. However, there is another way of creating your first theme, and that is to create a copy of a theme that is very close to the standard theme, such as standardwhite, and use this as your theme. Moodle will then use the standard theme as its base theme and apply any changes that you make to the standardwhite theme on the top (parent). All we are doing is describing the differences between the standard and the standardwhite themes. This is better because Moodle developers will sometimes make changes to the standard theme to be up-to-date with new Moodle features. This means that on each Moodle update, your standard theme, folder will be updated automatically, thus avoiding any nasty display problems being caused by Moodle updates. The way by which you configure Moodle themes is completely up to you. If you see a theme that is nearly what you want and there aren't really many changes needed, then using a parent theme makes sense, as most of the styles that you require have already been written. However, if you want to create a theme that is completely different from any other theme or wish to really get into Moodle theming, then using a copy of one of the standard sheets would be best. So let's get on and see what the differences are when using different theme setups, and see what effect these different methods have on the theming process. Time for action – copying the standard theme Browse to your theme folder in C:Program FilesApache Software FoundationApache 2.2htdocstheme. Copy the standard theme by right-clicking on the theme's folder and choosing Copy. Paste the copied theme into the theme directory (the same directory that you arecurrently in). Rename the copy of standard folder to blackandblue or any other name that you wish to choose (remember not to use capitals or spaces). Open your Moodle site and navigate to Site Administration Appearance | Themes | Theme Selector|, and choose the blackandblue theme that you have just created. Y ou might have noticed that the theme shown in the preceding screenshot has a header that says Black and Blue theme. This is because I have added this to the Full site name in the Front Page settings page. Time for action – setting a parent theme Open your web browser and navigate to your Moodle site and log in as the administrator. 
Go to Site Administration | Appearance | Themes | Theme Selector and choose your blackandblue theme if it is not already selected. Browse to the root of your blackandblue folder, right-click on the config.php file, and choose Open with | WordPad. You need to make four changes to this file so that you can use this theme and a parent theme while ensuring that you still use the default standard theme as your base. Here are the changes: $THEME->sheets = array('user_styles');$THEME->standardsheets = true;$THEME->parent = 'autumn';$THEME->parentsheets = array('styles'); Let's look at each of these statements, in turn. $THEME->sheets = array('user_styles'); This contains the names of all of the stylesheet files that you want to include in this for your blackandblue theme, namely user_styles. $THEME->standardsheets = true; This parameter is used to include the standard theme's stylesheets. If it is set to True, it will use all of the stylesheets in the standard theme. Alternatively, it can be set as an array in order to load individual stylesheets in whatever order is required. We have set this to True, so we will use all of the stylesheets of the standard theme. $THEME->parent = 'autumn'; This variable can be set to use a theme as the parent theme, which is included before the current theme. This will make it easier to make changes to another theme without having to change the actual files. $THEME->parentsheets = array('styles'); This variable can be used to choose either all of the parent theme's stylesheets or individual files. It has been set to include the styles.css file from the parent theme, namely autumn. Because there is only one stylesheet in the Autumn theme, you can set this variable to True. Either way, you will have the same outcome. Save themeblackandblueconfig.php, and refresh your web browser window. You should see something similar to the following screenshot. Note that your blocks may be different to the ones below, but you can ignore this. What just happened? Okay, so now you have a copy of the standard theme that uses the Autumn theme (by Patrick Malley) as its parent. You might have noticed that the header isn't correct and that the proper Autumn theme header isn't showing. Well, this is because you are essentially using the copy of the standard theme and that the header from this theme is the one that you see above. It's only the CSS files that are included in this hierarchy, so any HTML changes will not be seen until you edit your standard theme's header.html file. Have a go hero – choose another parent theme Go back and have a look through some of the themes on Moodle.org and download one that you like. Add this theme as a parent theme to your blackandblue theme's config.php file, but this time choose which stylesheets you want to use from that theme. The Back to School theme is a good one for this exercise, as its stylesheets are clearly labeled. So you Copying the header and footer files To show that you are using the Autumn theme's CSS files and the standard theme's HTML files, you can just go and copy the header.html and footer.html files from Patrick Malley's Autumn theme and paste them into your blackandblue theme's folder. Don't worry about overwriting your header and footer files, as you can always just copy them again from the ac tual standard theme folder. 
Time for action – copying the header.html and footer.html files Browse to the Autumn theme's folder and highlight both the header.html and footer.html files by holding down the Ctrl key and clicking on them both. Right-click on the selected files and choose Copy. Browse to your blackandblue theme's folder and right-click and choose Paste. Go back to your browser window and press the F5 button to refresh the page. You will now see the full Autumn theme. What just happened? You have copied the autumn theme's header.html and footer.html files into your blackandblue theme, so you can see the full autumn theme working. You probably will not actually use the header.html and footer.html files that you just copied, as this was just an example of how the Moodle theming process works. So you now have an unmodified copy of the standard theme called blackandblue, which is using the autumn theme as its parent theme. All you need to do now to make changes to this theme is to edit your CSS file in the blackandblue theme folder. Theme folder housework However, there are a couple of things that you need to do first, as you have an exact copy of the standard theme apart from the header.html and footer.html files. This copied folder has files that you do not need, as the only file that you set for your theme to use was the user_styles.css file in the config.php file earlier. This was the first change that you made: $THEME->sheets = array('user_styles'); The user_style.css file does not exist in your blackandblue theme's folder, so you will need to create it. You will also need to delete any other CSS files that are present, as your new blackandblue theme will use only one stylesheet, namely the user_styles.css file that you will be creating in the following sections. Time for action – creating our stylesheet Right-click anywhere in your blackandblue folder and choose New Text Document|. Rename this text document to user_styles.css by right-clicking again and choosing Rename. Time for action – deleting CSS files that we don't need Delete the following CSS files by selecting them and then right-clicking on the selected files and choosing Delete. styles_color.css styles_ie6.css styles_ie6.css styles_ie7.css styles_layout.css styles_moz.css {background: #000000;} What just happened? In the last two tasks, you created an empty CSS file called user_style.css in your blackandblue theme's folder. You then deleted all of the CSS files in your blackandblue theme's folder, as you will no longer need them. Remember, these are just copies of the CSS files in the standard theme folder and you have set your theme to use the standard theme as its base in the blackandblue theme's config.php file. Let's make some changes Now you have set up your theme the way that you want it, that is, you are using your own blackandblue theme by using the standard theme as a base and the Autumn theme as the parent. Move on and make a few changes to your user_style.css file so that you can see what effect this has on your theme, and check that all of your config.php file's settings are correct. Remember that all of the current styles are being inherited from the Autumn theme. Time for action – checking our setup Open up your Moodle site with the current theme (which should be blackandblue but looks like the Autumn theme). Navigate to your blackandblue theme's folder, right-click on the user_style.css file, and choose Open. This file should be completely blank. 
Type in the following line of CSS for the body element, and then save the fi le: body {background: #000000;} Now refresh your browser window. You will see that the background is now black. Note: When using Firebug to identify styles that are being used, it might not always be obvious where they are or which style is controlling that element of the page. An example of this is the body {background: #000000;}code that we just pasted in our user_style.css file. If we had used Firebug to indentify that style, we would not have found it. Instead, I just took a look at the CSS file from the autumn theme. What I am trying to say here is that there will always be an element of poking around and trial and error. What just happened? All seems fine there, doesn't it? You have added one style declaration to your empty user_style.css file to change the background color, and have checked the changes in your browser. You now know how the parent themes work and know that you only need to copy the styles from Firebug into your user_style.css file and edit the style declarations that need to be changed.
Adding Flash to your WordPress Theme

Packt
24 Dec 2009
11 min read
Adobe Flash—it's come quite a long way since my first experience with it as a Macromedia product (version 2 in 1997). Yet still, it does not adhere to W3C standards, requires a plugin to view, and above all, is a pretty pricey proprietary product. So why is everyone so hot on using it? Love it or hate it, Flash is here to stay. It does have a few advantages that we'll take a quick look at. The Flash player plugin does boast the highest saturation rate around (way above other media player plugins) and it now readily accommodates audio and video, as video sites such as YouTube take advantage of it. It's pretty easy to add and upgrade for all major browsers. The price may seem prohibitive at first, but after the initial purchase, additional upgrades are reasonably priced. Plus, many third-party software companies offer very cheap authoring tools that allow you to create animations and author content using the Flash player format. (In most cases, no one needs to know you're using the $50 version of Swish and not the $800 Flash CS3 to create your content.) Above all, it can do so much more than just playing video and audio (like most plugins). You can create seriously rich and interactive content, even entire applications with it, and the best part is, no matter what you create with it, it is going to look and work exactly the same on all browsers and platforms. These are just a few of the reasons why so many developers choose to build content and applications for the Flash player. Oh, and did I mention you can easily make awesome, visually slick, audio-filled stuff with it? Yeah, that's why your client wants you to put it in their site.

Flash in your theme
A commonly requested use of Flash is usually in the form of a snazzy header within the theme of the site, the idea being that various relevant and/or random photographs or designs load into the header with some supercool animation (and possibly audio) every time a page loads or a section changes. I'm going to assume that if you're using anything that requires the Flash player, you're pretty comfortable with generating content for it. So, we're not going to focus on any Flash timeline tricks or ActionScripting. We'll simply cover getting your Flash content into your WordPress theme. For the most part, you can simply take the HTML object embed code that Flash (or other third-party tools) will generate for you and paste it into the header area of your WordPress index.php or header.php template file.

Handling users without Flash, older versions of Flash, and IE6 users
While the previous method is extremely clean and simple, it doesn't help all of your site's users in dealing with Flash. What about users who don't have Flash installed or have an older version that won't support your content? What about IE users who run into the ActiveX restriction? You'll want your site and theme to gracefully handle users who do not have Flash (if you've used the overlay method, they'll simply see the CSS background image and probably not know anything is wrong!) or an older version of Flash that doesn't support the content you wish to display. This method lets you add in a line of text or a static image as an alternative, so people who don't have the plugin/correct version installed are either served up alternative content and they're none-the-wiser, or served up content that nicely explains that they need the plugin and directs them towards getting it. Most importantly, this method also nicely handles IE's ActiveX restrictions.

Is the ActiveX restriction still around?
In 2006, the IE browser upped its security, so users had to validate content that shows up in the Flash player (or any player) via Microsoft's ActiveX controls. Your Flash content starts to play, but there's a "grey outline" around the player area which may or may not mess up your design. If your content is interactive, then people will need to click to activate it. This is annoying, but the main workaround involved "injecting" controls and players via JavaScript. Essentially, you need to include your Flash content via a JavaScript include file. As of April 2008, this restriction was reverted, but only if your user has updated their browser; chances are, if they intend on still using IE6 or 7, they haven't done this update. Regardless of whether you are concerned about ActiveX restrictions, using JavaScript to help you instantiate your Flash will greatly add to the ease of embedding content. It will also make sure that users of all versions, or users who need to install Flash, are handled either by directing them to the proper Flash installation and/or letting them see an alternative version of the content.

swfObject
For a while, I used this standard swfObject method that was detailed in this great SitePoint article: http://www.sitepoint.com/article/activex-activationissue-ie. A similar, robust version of this JavaScript is located on Google Code's AJAX API http://code.google.com/p/swfobject/wiki/hosted_library. You can download the script (it's very small) or you can link directly to the swfObject AJAX API URL: <script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/swfobject/2.2/swfobject.js"></script> Whether downloaded or linked from the Google Code CDN, be sure to place this below your wp_head or any wp_enqueue_script calls in your <head> tags in your header.php template file or other head template file.

Adding a SWF to the template using swfObject
If you'd like to use the swfObject.js file and method, you can read the full documentation here: http://code.google.com/p/swfobject/wiki/documentation. But essentially, we're going to use the dynamic publishing option to include our SWF file. Using the SWF file included in this book's code packet, create a new directory in your theme called flash and place the SWF file in it. Then, create a div with alternative content and a script tag that includes the following JavaScript: <script type="text/javascript">swfobject.embedSWF("myContent.swf", "myContent", "300", "120","9.0.0");</script>...<div id="myContent"><p>Alternative content</p></div>... Add this ID rule to your stylesheet (I placed it just below my other header and intHeader ID rules): #flashHold{float: right;margin-top: 12px;margin-right: 47px;} As long as you take care to make sure the div is positioned correctly, the object embed code has the correct height and width of your Flash file, and you're not accidentally overwriting any parts of the theme that contain WordPress template tags or other valuable PHP code, you're good to go.

What's the Satay method? It's a cleaner way to embed your Flash movies while still supporting web standards. Drew McLellan discusses its development in detail in this article: http://www.alistapart.com/articles/flashsatay. This method was fine on its own until IE6 decided to include its ActiveX security restriction. Nowadays, a modified embed method called the "nested-objects method": http://www.alistapart.com/articles/flashembedcagematch/ is used with the swfObject JavaScript we just covered.
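To make the moving parts easier to see, here is a minimal sketch of how the script include, the embedSWF() call, and the alternative-content div might sit together in a theme's header.php. The SWF file name, dimensions, and the flashHold ID are illustrative assumptions rather than values taken from the book's code packet, so adjust them to your own files:

<!-- in header.php, below wp_head() and any wp_enqueue_script calls -->
<script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/swfobject/2.2/swfobject.js"></script>
<script type="text/javascript">
  // dynamic publishing: swfObject swaps the #flashHold div for the SWF;
  // anyone without Flash 8 or newer keeps seeing the div's own content
  swfobject.embedSWF(
    "<?php bloginfo('template_directory'); ?>/flash/ooflash-sample.swf",
    "flashHold", "338", "150", "8.0.0");
</script>
...
<div id="flashHold">
  <p>Alternative content shown when the Flash player is missing or outdated.</p>
</div>

Because the #flashHold rule from the stylesheet floats the div into place, the alternative content and the SWF both land in the same spot in the header.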
Good developer's tip: Even if you loathe IE (as lots of us as developers tend to), it is an "industry standard" browser and you have to work with it. I've found Microsoft's IE blog ( http://blogs.msdn.com/ie/) extremely useful in keeping tabs on IE so that I can better develop CSS-based templates for it. While you're at it, go ahead and subscribe to the RSS feeds for Firefox ( http://developer.mozilla.org/devnews/), Safari ( http://developer.apple.com/internet/safari/), and your other favorite browsers. You'll be surprised at the insight you can glean, which can be extremely handy if you ever need to debug CSS or JavaScripts for one of those browsers.

jQuery Flash plugin
In the past year, as I've found myself making more and more use of jQuery, I've discovered and really liked Luke Lutman's jQuery Flash plugin. There is no CDN for this and it's not bundled with WordPress, so you'll need to download it and add it to your theme's js directory (you can get it from http://jquery.lukelutman.com/plugins/flash/).

Embedding Flash files using the jQuery Flash plugin
As we're leveraging jQuery already, I find Luke's Flash plugin a little easier to deal with. Load the script under the wp_head. Place a div of alternative content; just the div of alternative content and nothing else! Write the jQuery script that will replace that content or show your alternative content for old/no Flash players. A sketch of this basic call appears at the end of this article, and a version that passes flashvars is shown in the next section. I think you see why I liked this so much more.

Passing Flash a WordPress variable
So now you've popped a nice Flash header into your theme. Here's a quick trick to make it all the more impressive. If you'd like to keep track of what page, post, or category your WordPress user has clicked on and display a relevant image or animation in the header, you can pass your Flash SWF file a variable from WordPress using PHP. I've made a small and simple Flash movie that will fit right over the top-right of my internal page's header. I'd like my Flash header to display some extra text when the viewer selects a different "column" (a.k.a. category). In this case, the animation will play and display OpenSource Magazine: On The New Web underneath the open source logo when the user selects the On The New Web category.

More fun with CSS: If you look at the final theme package available from this title's URL on the Packt Publishing site, I've included the original ooflash-sample.fla file. You'll notice the FLA has a standard white background. If you look at my header.php file, you'll notice that I've set my wmode parameter to transparent. This way, my animation is working with my CSS background. Rather than beef up my SWF's file size with another open source logo, I simply animate over it! Even if my animation "hangs" or never loads, the user's perception and experience of the page is not hampered. You can also use this trick as a "cheater's preloader". In your stylesheet, assign the div that holds your Flash object embed tags, a background image of an animated preloading GIF or some other image that indicates the user should expect something to load. The user will see this background image until your Flash file starts to play and covers it up. My favorite site to get and create custom loading GIFs is http://www.ajaxload.info/.

In your Flash authoring program, set up a series of animations or images that will load or play based on a variable set in the root timeline called catName. You'll pass this variable to your ActionScript.
In my FLA example, if the catName variable does not equal On The New Web, then the main animation will play, but if the variable returns On The New Web, then the visibility of the movie clip containing the words OpenSource Magazine: On The New Web will be set to "true". Now, let's get our PHP variable into our SWF file. In your object embed code where your SWFs are called, be sure to add the following code: If you plan on using the Satay embed method, your object embed will look like this: ...<script type="text/javascript">var flashvars = {catName: "<?echo single_cat_title('');?>"};swfobject.embedSWF("<?php bloginfo('template_directory');?>/flash/ooflash-sample.swf", "flashHold", "338", "150","8.0.0","expressInstall.swf", flashvars);</script>... If you'd like to use jQuery Flash, your jQuery will look like this: ...<script type="text/javascript">jQuery(document).ready(function(){jQuery('#flashHold').flash({src: '<?php bloginfo('template_directory');?>/flash/ooflash-sample.swf',width: 338,height: 150,flashvars: { catName: '<?echo single_cat_title('');?>' }},{ version: 8 });});</script>... Be sure to place the full path to your SWF file in the src and value parameters for the embed tags or jQuery src. Store your Flash file inside your theme's directory and link to it either directly (that is, src="/mythemename/flash/ooflash-sample.swf") or, as in the examples above, by using the bloginfo('template_directory') template tag. This will ensure that your SWF file loads properly. Using this method, every time someone loads a page or clicks on a link on your site that is within the On The New Web category, PHP will render the template tag as myswfname.swf?catName=On The New Web, or whatever the single_cat_title('') for that page is. So your Flash file's ActionScript is going to look for a variable called catName in _root or _level0, and based on that value, do whatever you told it to do—call a function, go to a frame and animate, you name it. For extra credit, you can play around with the other template tag variables such as the_author_meta or the_date(), for example, and load up special animations, images, or call functions based on them.
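For completeness, here is the kind of one-liner that the earlier three-step jQuery Flash list refers to, without any flashvars. It is only a sketch: the plugin file name and the #flashHold selector are assumptions, so adjust them to wherever you saved the download and whatever ID your div uses:

<script type="text/javascript" src="<?php bloginfo('template_directory'); ?>/js/jquery.flash.js"></script>
<script type="text/javascript">
  // swap the alternative-content div for the SWF when Flash 8 or newer is available
  jQuery(document).ready(function () {
    jQuery('#flashHold').flash(
      { src: '<?php bloginfo('template_directory'); ?>/flash/ooflash-sample.swf',
        width: 338,
        height: 150 },
      { version: 8 }
    );
  });
</script>
...
<div id="flashHold">
  <p>Alternative content for visitors without the Flash player.</p>
</div>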
The Multi-Table Query Generator using phpMyAdmin and MySQL

Packt
09 Oct 2009
4 min read
The Search pages in the Database or Table view are intended for single-table lookups. This article by Marc Delisle covers the multi-table Query by example (QBE) feature available in the Database view. Many phpMyAdmin users work in the Table view, table-by-table, and thus tend to overlook the multi-table query generator, which is a wonderful feature for fine-tuning queries. The query generator is useful not only in multi-table situations but also for a single table. It enables us to specify multiple criteria for a column, a feature that the Search page in the Table view does not possess. The examples in this article assume that a multi-user installation of the linked-tables infrastructure has been made and that the book-copy table created during an exercise in the article on Table and Database Operations in PHP is still there in the marc_book database. To open the page for this feature, we go to the Database view for a specific database (the query generator supports working on only one database at a time) and click on Query. The following screenshot shows the initial QBE page. It contains the following elements:
Criteria columns
An interface to add criteria rows
An interface to add criteria columns
A table selector
The query area
Buttons to update or to execute the query

Choosing Tables
The initial selection includes all the tables. In this example, we assume that the linked-table infrastructure has been installed into the marc_book database. Consequently, the Field selector contains a great number of fields. For our example, we will work only with the author and book tables: We then click Update Query. This refreshes the screen and reduces the number of fields available in the Field selector. We can always change the table choice later, using our browser's mechanism for multiple choices in drop-down menus (usually, control-click).

Column Criteria
Three criteria columns are provided by default. This section discusses the options we have for editing their criteria. These include options for selecting fields, sorting individual columns, entering conditions for individual columns, and so on.

Field Selector: Single-Column or All Columns
The Field selector contains all individual columns for the selected tables, plus a special choice ending with an asterisk (*) for each table, which means all the fields are selected: To display all the fields in the author table, we choose `author`.* and check the Show checkbox, without entering anything in the Sort and Criteria boxes. In our case, we select `author`.`name`, because we want to enter some criteria for the author's name.

Sorts
For each selected individual column, we can specify a sort (in Ascending or Descending order) or let this line remain intact (meaning no sort). If we choose more than one sorted column, the sort will be done with a priority from left to right. When we ask for a column to be sorted, we normally check the Show checkbox, but this is not necessary because we might want to do just the sorting operation without displaying this column.

Showing a Column
We check the Show checkbox so that we can see the column in the results. Sometimes, we may just want to apply a criterion on a column and not include it in the resulting page. Here, we add the phone column, ask for a sort on it, and choose to show both the name and phone number. We also ask for a sort on the name in ascending order. The sort will be done first by name, and then by phone number, if the names are identical.
This is because the name is in a column criterion to the left of the phone column, and thus has a higher priority:
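The generated statement appears in the query area at the bottom of the QBE page; it is not reproduced in this excerpt, but assuming a simple LIKE criterion was typed into the name column's Criteria box, the SQL that this setup produces would look roughly like the following sketch (the criterion value is whatever you entered):

SELECT `author`.`name`, `author`.`phone`
FROM `author`
WHERE `author`.`name` LIKE '%Smith%'   -- placeholder for the criterion entered in the QBE form
ORDER BY `author`.`name` ASC, `author`.`phone` ASC;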
AngularJS

Packt
20 Aug 2014
15 min read
In this article by Rodrigo Branas, author of the book AngularJS Essentials, we will go through the basics of AngularJS. Created by Miško Hevery and Adam Abrons in 2009, AngularJS is an open source, client-side JavaScript framework that promotes a high productivity web development experience. It was built on the belief that declarative programming is the best choice for constructing the user interface, while imperative programming is better suited to implementing the application's business logic. To achieve that, AngularJS empowers traditional HTML by extending its current vocabulary, making the life of developers easier. The result is the development of expressive, reusable, and maintainable application components, leaving behind a lot of unnecessary code and keeping the team focused on the valuable and important things. (For more resources related to this topic, see here.)

Architectural concepts
It's been a long time since the famous Model-View-Controller, also known as MVC, started to be widely used in the software development industry, thereby becoming one of the legends of enterprise architecture design. Basically, the model represents the knowledge that the view is responsible to present, while the controller mediates their relationship. However, these concepts are a little bit abstract, and this pattern may have different implementations depending on the language, platform, and purposes of the application. After a lot of discussions about which architectural pattern the framework follows, its authors declared that from now on, AngularJS is adopting Model-View-Whatever (MVW). Regardless of the name, the most important benefit is that the framework provides a clear separation of the concerns between the application layers, providing modularity, flexibility, and testability. In terms of concepts, a typical AngularJS application consists primarily of view, model, and controller, but there are other important components, such as services, directives, and filters. The view, also called template, is entirely written in HTML, which becomes a great opportunity to see web designers and JavaScript developers working side-by-side. It also takes advantage of the directives mechanism, a kind of extension of the HTML vocabulary that brings the ability to perform the programming language tasks, such as iterating over an array or even evaluating an expression conditionally. Behind the view, there is the controller. At first, the controller contains all business logic implementation used by the view. However, as the application grows, it becomes really important to perform some refactoring activities, such as moving the code from the controller to other components like services, in order to keep the cohesion high. The connection between the view and the controller is done by a shared object called scope. It is located between them and is used to exchange information related to the model. The model is a simple Plain-Old-JavaScript-Object (POJO). It looks very clear and easy to understand, bringing simplicity to the development by not requiring any special syntax to be created.

Setting up the framework
The configuration process is very simple and in order to set up the framework, we start by importing the angular.js script to our HTML file. After that, we need to create the application module by calling the module function from Angular's API with its name and dependencies.
With the module already created, we just need to place the ng-app attribute with the module's name inside the html element or any other element that surrounds the application. This attribute is important because it supports the initialization process of the framework. In the following code, there is an introductory application about a parking lot. At first, we are able to add and also list the parked cars, storing its plate in memory. Throughout the book, we will evolve this parking control application by incorporating each newly studied concept. index.html <!doctype html> <!-- Declaring the ng-app --> <html ng-app="parking"> <head> <title>Parking</title> <!-- Importing the angular.js script --> <script src="angular.js"></script> <script> // Creating the module called parking var parking = angular.module("parking", []); // Registering the parkingCtrl to the parking module parking.controller("parkingCtrl", function ($scope) { // Binding the car's array to the scope $scope.cars = [ {plate: '6MBV006'}, {plate: '5BBM299'}, {plate: '5AOJ230'} ]; // Binding the park function to the scope $scope.park = function (car) { $scope.cars.push(angular.copy(car)); delete $scope.car; }; }); </script> </head> <!-- Attaching the view to the parkingCtrl --> <body ng-controller="parkingCtrl"> <h3>[Packt] Parking</h3> <table> <thead> <tr> <th>Plate</th> </tr> </thead> <tbody> <!-- Iterating over the cars --> <tr ng-repeat="car in cars"> <!-- Showing the car's plate --> <td>{{car.plate}}</td> </tr> </tbody> </table> <!-- Binding the car object, with plate, to the scope --> <input type="text" ng-model="car.plate"/> <!-- Binding the park function to the click event --> <button ng-click="park(car)">Park</button> </body> </html> The ngController directive was used to bind the parkingCtrl to the view, while ngRepeat iterated over the cars array. Also, we employed expressions like {{car.plate}} to display the plate of the car. Finally, to add new cars, we applied ngModel, which creates a new object called car with the plate property, passing it as a parameter of the park function, called through the ngClick directive. To improve page loading performance, it is recommended to use the minified and obfuscated version of the script that can be identified by angular.min.js. Both minified and regular distributions of the framework can be found on the official site of AngularJS, that is, http://www.angularjs.org, or they can be referenced directly from the Google Content Delivery Network (CDN).

What is a directive? A directive is an extension of the HTML vocabulary that allows the creation of new behaviors. This technology lets the developers create reusable components that can be used within the whole application and even provide their own custom components. It may be applied as an attribute, element, class, and even as a comment, by using the camelCase syntax. However, because HTML is case-insensitive, we need to use a lowercase form. For the ngModel directive, we can use ng-model, ng:model, ng_model, data-ng-model, and x-ng-model in the HTML markup.

Using AngularJS built-in directives
By default, the framework brings a basic set of directives, such as iterate over an array, execute a custom behavior when an element is clicked, or even show a given element based on a conditional expression and many others.

ngBind
This directive is generally applied to a span element and replaces the content of the element with the result of the provided expression.
It has the same meaning as that of the double curly markup, for example, {{expression}}. Why would anyone want to use this directive when a less verbose alternative is available? This is because when the page is being compiled, there is a moment when the raw state of the expressions is shown. Since the directive is defined by the attribute of the element, it is invisible to the user. Here is an example of the ngBind directive usage: index.html <!doctype html> <html ng-app="parking"> <head> <title>[Packt] Parking</title> <script src="angular.js"></script> <script> var parking = angular.module("parking", []); parking.controller("parkingCtrl", function ($scope) { $scope.appTitle = "[Packt] Parking"; }); </script> </head> <body ng-controller="parkingCtrl"> <h3 ng-bind="appTitle"></h3> </body> </html>

ngRepeat
The ngRepeat directive is really useful to iterate over arrays and objects. It can be used with any kind of element such as rows of a table, elements of a list, and even options of select. We must provide a special repeat expression that describes the array to iterate over and the variable that will hold each item in the iteration. The most basic expression format allows us to iterate over an array, attributing each element to a variable: variable in array In the following code, we will iterate over the cars array and assign each element to the car variable: index.html <!doctype html> <html ng-app="parking"> <head> <title>[Packt] Parking</title> <script src="angular.js"></script> <script> var parking = angular.module("parking", []); parking.controller("parkingCtrl", function ($scope) { $scope.appTitle = "[Packt] Parking"; $scope.cars = []; }); </script> </head> <body ng-controller="parkingCtrl"> <h3 ng-bind="appTitle"></h3> <table> <thead> <tr> <th>Plate</th> <th>Entrance</th> </tr> </thead> <tbody> <tr ng-repeat="car in cars"> <td><span ng-bind="car.plate"></span></td> <td><span ng-bind="car.entrance"></span></td> </tr> </tbody> </table> </body> </html>

ngModel
The ngModel directive attaches the element to a property in the scope, binding the view to the model. In this case, the element could be input (all types), select, or textarea. <input type="text" ng-model="car.plate" placeholder="What's the plate?" /> There is an important piece of advice regarding the use of this directive. We must pay attention to the purpose of the field that is using the ngModel directive. Whenever the field is part of the construction of an object, we must declare which object the property should be attached to. In this case, the object that is being constructed is a car, so we use car.plate inside the directive expression. However, sometimes there is an input field that is just used to change a flag, allowing the control of the state of a dialog or another UI component. In these cases, we may use the ngModel directive without any object, as long as it will not be used together with other properties or persisted.

ngClick and other event directives
The ngClick directive is one of the most useful kinds of directives in the framework. It allows you to bind any custom behavior to the click event of the element.
The following code is an example of the usage of the ngClick directive calling a function: index.html <!doctype html> <html ng-app="parking"> <head> <title>[Packt] Parking</title> <script src="angular.js"></script> <script> var parking = angular.module("parking", []); parking.controller("parkingCtrl", function ($scope) { $scope.appTitle = "[Packt] Parking"; $scope.cars = []; $scope.park = function (car) { car.entrance = new Date(); $scope.cars.push(car); delete $scope.car; }; }); </script> </head> <body ng-controller="parkingCtrl"> <h3 ng-bind="appTitle"></h3> <table> <thead> <tr> <th>Plate</th> <th>Entrance</th> </tr> </thead> <tbody> <tr ng-repeat="car in cars"> <td><span ng-bind="car.plate"></span></td> <td><span ng-bind="car.entrance"></span></td> </tr> </tbody> </table> <input type="text" ng-model="car.plate" placeholder="What's the plate?" /> <button ng-click="park(car)">Park</button> </body> </html> There is another pitfall here. Inside the ngClick directive, we call the park function, passing the car as a parameter. Since we have access to the scope through the controller, wouldn't it be easier to just access it directly, without passing any parameter at all? Keep in mind that we must take care of the level of coupling between the view and the controller. One way to keep it low is to avoid reading the scope object directly from the controller, and instead pass everything it needs as parameters from the view. This increases the controller's testability and also makes things clearer and more explicit. Other directives that have the same behavior, but are triggered by other events, are ngBlur, ngChange, ngCopy, ngCut, ngDblClick, ngFocus, ngKeyPress, ngKeyDown, ngKeyUp, ngMousedown, ngMouseenter, ngMouseleave, ngMousemove, ngMouseover, ngMouseup, and ngPaste.

Filters
Filters, together with other technologies like directives and expressions, are responsible for the extraordinary expressiveness of the framework. They let us easily manipulate and transform any value, not only when combined with expressions inside a template, but also when injected into other components like controllers and services. They are really useful when we need to format dates and money according to our current locale or even support the filtering feature of a grid component. Filters are the perfect answer to easily perform any data manipulation.

currency
The currency filter is used to format a number based on a currency. The basic usage of this filter is without any parameter: {{ 10 | currency}} The result of the evaluation will be the number $10.00, formatted and prefixed with the dollar sign. In order to achieve the correct output for Brazilian users, in this case R$10,00 instead of $10.00, we need to configure the Brazilian (PT-BR) locale, available inside the AngularJS distribution package. There, we can find locales for most countries, and we just need to import the one we want into our application, such as: <script src="js/lib/angular-locale_pt-br.js"></script> After importing the locale, we will not need to supply the currency symbol anymore because it's already wrapped inside. Besides the currency, the locale also defines the configuration of many other variables like the days of the week and months, which is very useful when combined with the next filter, used to format dates.

date
The date filter is one of the most useful filters of the framework. Generally, a date value comes from the database or any other source in a raw and generic format. In this way, filters like this are essential to any kind of application.
Basically, we can use this filter by declaring it inside any expression. In the following example, we use the filter on a date variable attached to the scope. {{ car.entrance | date }} The output will be Dec 10, 2013. However, there are thousands of combinations that we can make with the optional format mask. {{ car.entrance | date:'MMMM dd/MM/yyyy HH:mm:ss' }} Using this format, the output changes to December 10/12/2013 21:42:10.

filter
Have you ever needed to filter a list of data? This filter performs exactly this task, acting over an array and applying any filtering criteria. Now, let's include in our car parking application a field to search any parked car and use this filter to do the job. index.html <input type="text" ng-model="criteria" placeholder="What are you looking for?" /> <table> <thead> <tr> <th></th> <th>Plate</th> <th>Color</th> <th>Entrance</th> </tr> </thead> <tbody> <tr ng-class="{selected: car.selected}" ng-repeat="car in cars | filter:criteria" > <td> <input type="checkbox" ng-model="car.selected" /> </td> <td>{{car.plate}}</td> <td>{{car.color}}</td> <td>{{car.entrance | date:'dd/MM/yyyy hh:mm'}}</td> </tr> </tbody> </table> The result is really impressive. With just an input field and the filter declaration, we did the job.

Integrating the backend with AJAX
AJAX, also known as Asynchronous JavaScript and XML, is a technology that allows applications to send and retrieve data from the server asynchronously, without refreshing the page. The $http service wraps the low-level interaction with the XMLHttpRequest object, providing an easy way to perform calls. This service could be called by just passing a configuration object, used to set important information such as the method, the URL of the requested resource, the data to be sent, and many other options: $http({method: "GET", url: "/resource"}) .success(function (data, status, headers, config, statusText) { }) .error(function (data, status, headers, config, statusText) { }); To make it easier to use, the following shortcut methods are available for this service. In this case, the configuration object is optional. $http.get(url, [config]) $http.post(url, data, [config]) $http.put(url, data, [config]) $http.head(url, [config]) $http.delete(url, [config]) $http.jsonp(url, [config]) Now, it's time to integrate our parking application with the back-end by calling the resource cars with the method GET. It will retrieve the cars, binding them to the $scope object. In case something goes wrong, we log the error to the console. controllers.js parking.controller("parkingCtrl", function ($scope, $http) { $scope.appTitle = "[Packt] Parking"; $scope.park = function (car) { car.entrance = new Date(); $scope.cars.push(car); delete $scope.car; }; var retrieveCars = function () { $http.get("/cars") .success(function(data, status, headers, config) { $scope.cars = data; }) .error(function(data, status, headers, config) { switch(status) { case 401: { $scope.message = "You must be authenticated!" break; } case 500: { $scope.message = "Something went wrong!"; break; } } console.log(data, status); }); }; retrieveCars(); });

Summary
This article introduced you to the fundamentals of AngularJS in order to design and construct reusable, maintainable, and modular web applications. Resources for Article: Further resources on this subject: AngularJS Project [article] Working with Live Data and AngularJS [article] CreateJS – Performing Animation and Transforming Function [article]
Advanced aspects of Inserting and Deleting data in MySQL

Packt
01 Oct 2010
10 min read
MySQL Admin Cookbook
99 great recipes for mastering MySQL configuration and administration
Set up MySQL to perform administrative tasks such as efficiently managing data and database schema, improving the performance of MySQL servers, and managing user credentials
Deal with typical performance bottlenecks and lock-contention problems
Restrict access sensibly and regain access to your database in case of loss of administrative user credentials
Part of Packt's Cookbook series: Each recipe is a carefully organized sequence of instructions to complete the task as efficiently as possible
Read more about this book
(For more resources on MySQL, see here.)

Inserting new data and updating data if it already exists
Manipulating data in a database is part of everyday work and the basic SQL means of INSERT, UPDATE, and DELETE make this a pretty straightforward, almost trivial task—but is this always true? When considering data manipulation, most of the time we think of a situation where we know the content of the database. With this information, it is usually pretty easy to find a way of changing the data the way you intend to. But what if you have to change data in circumstances where you do not know the actual database content beforehand? You might answer: "Well, then look at your data before changing it!" Unfortunately, you do not always have this option. Think of distributed installations of any software that includes a database. If you have to design an update option for this software (and the respective databases), you might easily come to a situation where you simply do not know about the actual database content. One example of a problem arising in these cases is the question of whether to insert or to update data: "Does the data in question already (partially) exist?" Let us assume a database table config that stores configuration settings. It holds key-value pairs, with name being the name (and thus the key) of the setting and value its value. This table exists in different database installations, one for every branch office of your company. Your task is to create an update package to set a uniform upper limit of 25% for the price discount that is allowed in your sales software. If no such limit has been defined yet, there is no respective entry in the config table, and you have to insert a new record. If the limit, however, has been set before (for example by the local manager), the entry does already exist, in which case you have to update it to hold the new value. While the update of a potentially existing entry does not pose a problem, an INSERT statement that violates uniqueness constraints will simply cause an error. This is, however, typically not acceptable in an automated update procedure. The following recipe will show you how to solve this problem with only one SQL command.

Getting ready
Besides a running MySQL server, a SQL client, and an account with appropriate user rights (INSERT, UPDATE), we need a table to update. In the earlier example, we assumed a table named sample.config with two character columns name and value. The name column is defined as the primary key: CREATE TABLE sample.config ( name VARCHAR(64) PRIMARY KEY, value VARCHAR(64));

How to do it...
Connect to your database using your SQL client. Execute the following command:
mysql> INSERT INTO sample.config VALUES ("maxPriceDiscount","25%") ON DUPLICATE KEY UPDATE value='25%';
Query OK, 1 row affected (0.05 sec)

How it works...
This command is easily explained because it simply does what it says: it inserts a new row in the table using the given values, as long as this does not cause a duplicate entry in either the primary key or another unique index. If a duplicate record exists, the existing row is updated according to the clauses defined after ON DUPLICATE KEY UPDATE. While it is sometimes tedious to enter some of the data and columns two times (once for the INSERT and a second time for the UPDATE), this statement allows for a lot of flexibility when it comes to the manipulation of potentially existing data. Please note that when executing the above statement, the result differs slightly with respect to the number of affected rows, depending on the actual data present in the database: When the record does not exist yet, it is inserted, which results in one affected row. But if the record is updated rather than inserted, it reports two affected rows instead, even if only one row gets updated.

There's more...
The INSERT INTO … ON DUPLICATE KEY UPDATE construct does not work when there is no UNIQUE or PRIMARY KEY defined on the target table. If you have to provide the same semantics without having appropriate key definitions in place, it is recommended to use the techniques discussed in the next recipe.

Inserting data based on existing database content
In the previous recipe, Inserting new data and updating data if it already exists, we discussed a method to either insert or update records depending on whether the records already exist in the database. A similar problem arises when you need to insert data into your database, but the data to insert depends on the data in your database. As an example, consider a situation in which you need to insert a record with a certain message into a table logMsgs, but the message itself should be different depending on the current system language that is stored in a configuration table (config). It is fairly easy to achieve a similar behavior for an UPDATE statement because this supports a WHERE clause that can be used to only perform an update if a certain precondition is met:
UPDATE logMsgs SET message= CONCAT('Last update: ', NOW()) WHERE EXISTS (SELECT value FROM config WHERE name='lang' AND value = 'en');
UPDATE logMsgs SET message= CONCAT('Letztes Update: ', NOW()) WHERE EXISTS (SELECT value FROM config WHERE name='lang' AND value = 'de');
UPDATE logMsgs SET message= CONCAT('Actualisation derniere: ', NOW()) WHERE EXISTS (SELECT value FROM config WHERE name='lang' AND value = 'fr');
Unfortunately, this approach is not applicable to INSERT commands, as these do not support a WHERE clause. Despite this missing option, the following recipe describes a method to make INSERT statements execute only if an appropriate precondition in the database is met.

Getting ready
As before, we assume a database, a SQL client (mysql), and a MySQL user with sufficient privileges (INSERT and SELECT in this case). Additionally, we need a table to insert data into (here: logMsgs) and a configuration table config (please refer to the previous recipe for details).

How to do it...
Connect to your database using your SQL client. Execute the following SQL commands:
mysql> INSERT INTO sample.logMsgs(message) -> SELECT CONCAT('Last update: ', NOW()) -> FROM sample.config WHERE name='lang' AND value='en';
Query OK, 0 rows affected (0.00 sec)
Records: 0 Duplicates: 0 Warnings: 0

How it works...
Our goal is to have an INSERT statement take into account the present language stored in the database.
The trick to do so is to use a SELECT statement as input for the INSERT. The SELECT command provides a WHERE clause, so you can use a condition that only matches for the respective language. One restriction of this solution is that you can only insert one record at a time, so the size of scripts might grow considerably if you have to insert lots of data and/or have to cover many alternatives.

There's more...
If you have more than just a few values to insert, it is more convenient to have the data in one place rather than distributed over several individual INSERT statements. In this case, it might make sense to consolidate the data by putting it inside a temporary table; the final INSERT statement uses this temporary table to select the appropriate data rows for insertion into the target table. The downside of this approach is that the user needs the CREATE TEMPORARY TABLES privilege, but it typically compensates with much cleaner scripts. In that variant, we first create the temporary table and fill it with the candidate data; the next statement inserts the appropriate rows into the target table sample.logMsgs by selecting the entries from the temporary table that match the language setting from the config table. The temporary table is then removed again, and a concluding SELECT statement can be used to check the results of the operation.

Deleting all data from large tables
Almost everyone who works with databases experiences the constant growth of the data stored in their database, and it is typically well beyond the initial estimates. Because of that you often end up with rather large data sets. Another common observation is that in most databases, there are some tables that have a special tendency to grow especially big. If a table's size reaches a virtual threshold (which is hard to define, as it depends heavily on the access patterns and the data structures), it gets harder and harder to maintain and performance degradation might occur. From a certain point on, it is even difficult to get rid of data in the table again, as the sheer number of records makes deletion a pretty expensive task. This particularly holds true for storage engines with Multi-Version Concurrency Control (MVCC): if you order the database to delete data from the table, it must not be deleted right away because you might still roll back the deletion. So even while the deletion was initiated, a concurrent query on the table still has to be able to see all the records (depending on the transaction isolation level). To achieve this, the storage engine will only mark the records as deleted, but the actual deletion takes place after the operation is committed and all other transactions that access this table are closed as well. If you have to deal with large data sets, the most difficult task is to operate on the production system while other processes concurrently work on the data. In these circumstances, you have to keep the duration of your maintenance operations as low as possible in order to minimize the impact on the running system. As the deletion of data from a large table (typically starting at several millions of rows) might take quite some time, the following recipe shows a way of minimizing the duration of this operation in order to reduce side effects (like locking effects or performance degradation).

Getting ready
Besides a user account with appropriate privileges (DELETE), you need a sufficiently large table to delete data from.
For this recipe, we will use the employees database, which is an example database available from MySQL at http://dev.mysql.com/doc/employee/en/employee.html. This database provides some tables with sensible data and some pretty large tables, the largest having more than 2.8 million records. We assume that the employees database was installed with the InnoDB storage engine enabled. To delete all rows of the largest table employees.salaries in a quick way, please read on.

How to do it...
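The recipe's actual steps are not included in this excerpt, so the following is only a general illustration of the idea rather than the cookbook's exact procedure: when every row of a large InnoDB table has to go, a TRUNCATE TABLE statement is usually far quicker than DELETE FROM, because it drops and recreates the table instead of removing and versioning rows one by one.

-- Empties employees.salaries almost instantly; note that TRUNCATE is DDL,
-- so it cannot be rolled back and it resets any AUTO_INCREMENT counter.
TRUNCATE TABLE employees.salaries;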
Trunks in FreePBX 2.5

Packt
26 Oct 2009
5 min read
A trunk in the simplest of terms is a pathway into or out of a telephone system. A trunk connects a PBX to outside resources, such as PSTN telephone lines, or additional PBX systems to perform inter-system transfers. Trunks can be physical, such as a PRI or PSTN line, or they can be virtual by routing calls to another endpoint using Internet Protocol (IP) links.

Trunk types
FreePBX allows the creation of six different types of trunks as follows:
Zap
IAX2
SIP
ENUM
DUNDi
Custom
Zap, IAX2, and SIP trunks utilize the technologies of their namesake. These trunks have the same highlights and pitfalls that extensions and devices using the same technology do. Zap trunks require physical hardware cards for incoming lines to plug into. SIP trunks are the most widely adopted and compatible, but have difficulties traversing firewalls. IAX2 trunks are able to traverse most firewalls easily, but are limited to adoption mainly on Asterisk-based systems. In terms of VoIP, ENUM (E.164 NUmber Mapping) is a method for unifying E.164 (the international telecommunication numbering plan) with VoIP routing. The ENUM system can be considered very similar to the way that the Internet DNS system works. In the DNS system, when a domain name is looked up an IP address is returned. The IP address allows a PC to traverse the Internet and find the server that belongs to that IP address. The ENUM system provides VoIP routes back when queried for a phone number. The route that is returned is usually a SIP or IAX2 route. An ENUM trunk allows FreePBX to send the dialed phone number to the public e164.org ENUM server. If the called party has listed their phone number in the e164.org directory, a VoIP route will be returned and the call will be connected using that route. A VoIP route contains the VoIP protocol, the server name or IP address, the port, and the extension to use in order to contact the dialed phone number. For example, a SIP route for dialing the number 555-555-1234 might appear as SIP:1234@pbx.example.com:5060. This is advantageous in several ways. It is important to note that indirect routes to another telephony system are often costly. Calling a PSTN telephone number typically requires that call to route through a third-party provider's phone lines and switching equipment (a service they will happily charge for). If a number is listed in the ENUM directory, the returned route will bridge the call directly to the called party (or their provider), bypassing the cost of routing through a third party. ENUM also benefits the called party, allowing them to redirect inbound calls to wherever they would like. Service disruptions that would otherwise render a particular phone number useless can be bypassed by directing the phone number to a different VoIP route in the ENUM system. More information on ENUM can be found at the following web sites:
The ENUM home page
The e164.org home page
The Internet Engineering Task Force ENUM charter
DUNDi (Distributed Universal Number Discovery) is a routing protocol technology similar to ENUM. In order to query another Asterisk system using DUNDi, that system must be "peered" with your own Asterisk system. Peering requires generating and exchanging key files with the other peer. DUNDi is a decentralized way of accomplishing ENUM-style lookups. By peering with one system you are effectively peering with any other system that your peer is connected to. If system A peers with system B, and system B peers with system C, then system C will be able to see the routes provided by system A.
In peer-to-peer fashion, system B will simply pass the request along to system A, even though system C has no direct connection to system A. DUNDi is not limited to E.164 numbering schemes like ENUM, and it allows a PBX to advertise individual extensions, or route patterns, instead of whole phone numbers. Therefore, it is a good candidate for distributed office setups, where a central PBX can be peered with several satellite PBX systems. The extensions on each system will be able to call one another directly without having to statically set up routes on each individual PBX. More information on DUNDi can be found at the following web sites:
DUNDi home page
Example DUNDi SIP configuration
Example DUNDi IAX2 configuration
Custom trunks work in the same fashion as custom extensions do. Any valid Asterisk Dial command can be used as a custom trunk by FreePBX. Custom trunks typically use additional VoIP protocols such as H.323 and MGCP.

Setting up a new trunk
Setting up a trunk in FreePBX is very similar to setting up an extension. All of the trunks share eight common setup fields, followed by fields that are specific to the technology that trunk will be using. In order to begin setting up a trunk, click on Trunks in the left side navigation menu as shown in the following screenshot: From the Add a Trunk screen, click on the name of the technology that the trunk will be using (for example, if a SIP trunk will be used, click on Add SIP Trunk) as shown in the following screenshot:
Calendars in jQuery 1.3 with PHP using jQuery Week Calendar Plugin: Part 1

Packt
19 Nov 2009
7 min read
There are many reasons why you would want to display a calendar. You can use it to display upcoming events, to keep a diary, or to show a timetable. Recently, for example, I combined a calendar with an online store for a client to book meetings and receive payments more intuitively. Google calendar is probably what springs to mind when people think of calendars online. There is a very good plugin called jquery-week-calendar that shows a week with events in a fashion similar to Google's calendar. Its homepage is at http://www.redredred.com.au/projects/jquery-week-calendar/. To get the latest copy of the plugin, go to http://code.google.com/p/jquery-week-calendar/downloads/list and get the highest-numbered file. The examples in this article are done with version 1.2.0. Download the library and extract it so that there is a directory named jquery-weekcalendar-1.2.0 in the root of your demo directory. Displaying the calendar As usual, the HTML for the simplest configuration is very simple. Save this as calendar.html: <html> <head> <script src="../jquery.min.js"></script> <script src="../jquery-ui.min.js"></script> <script src="../jquery-weekcalendar-1.2.0/jquery.weekcalendar.js"> </script> <script src="calendar.js"></script> <link rel="stylesheet" type="text/css" href="../jquery-ui.css" /> <link rel="stylesheet" type="text/css" href="../jquery-weekcalendar-1.2.0/jquery.weekcalendar.css"/> </head> <body> <div id="calendar_wrapper" style="height:500px"></div> </body> </html> We will keep all of our JavaScript in an external file called calendar.js, which will initially contain just this: $(document).ready(function() { $('#calendar_wrapper').weekCalendar({ 'height':function($calendar){ return $('#calendar_wrapper')[0].offsetHeight; } }); }); This is fairly straightforward. The script will apply the widget to the #calendar_wrapper element, and the widget's height will be set to that of the wrapper element. Even with this tiny bit of code, we already have a good-looking calendar, and when you drag your mouse cursor around it, you'll see that events are created as you lift the mouse up: It looks good, but it doesn't do anything yet. The events are temporary, and will vanish as soon as you change the week or reload the page. In order to make them permanent, we need to send details of the events to the server and save them there. Creating an event What we need to do is to have the client save the event on the server when it is created. In this article, we'll use PHP sessions to save the data for the sake of simplicity. Sessions are chunks of data, which are kept on the server side and are related to the cookie or PHPSESSID parameter that the client uses to access that session. We will use sessions in these examples because they do not need as much setup as databases. For your own projects, you should adapt the PHP side in order to connect to a database instead. If you are using this article to create a full application, you will obviously want to use something more permanent than sessions, in which case the PHP code should be adapted such that all references to sessions are replaced with database references instead. This is beyond the scope of this book, but as you are a PHP developer, you probably do this everyday anyway! When the event has been created, we want a modal dialog to appear and ask for more details. In this test case, we'll add a text area for further details, which allows for more data than would appear in the small visible area in the calendar itself. 
A modal dialog is a "pop up" that appears and blocks all other actions on the page until it has been taken care of. It's useful in cases where the answer to a question must be known before a script can carry on with its work. Now, let's create an event and add it to our calendar. Client-side code In the calendar.js file, add an eventNew event to the weekCalendar call: $(document).ready(function() { $('#calendar_wrapper').weekCalendar({ 'height':function($calendar){ return $('#calendar_wrapper')[0].offsetHeight; }, 'eventNew':function(calEvent, $event) { calendar_new_entry(calEvent,$event); } }); }); When an event is created, the calendar_new_entry function will be called with details of the new event in the calEvent parameter. Now, add the function calendar_new_entry: function calendar_new_entry(calEvent,$event){ var ds=calEvent.start, df=calEvent.end; $('<div id="calendar_new_entry_form" title="New Calendar Entry"> event name<br /> <input value="new event" id="calendar_new_entry_form_title" /> <br /> body text<br /> <textarea style="width:400px;height:200px" id="calendar_new_entry_form_body">event description </textarea> </div>').appendTo($('body')); $("#calendar_new_entry_form").dialog({ bgiframe: true, autoOpen: false, height: 440, width: 450, modal: true, buttons: { 'Save': function() { var $this=$(this); $.getJSON('./calendar.php?action=save&id=0&start=' +ds.getTime()/1000+'&end='+df.getTime()/1000, { 'body':$('#calendar_new_entry_form_body').val(), 'title':$('#calendar_new_entry_form_title').val() }, function(ret){ $this.dialog('close'); $('#calendar_wrapper').weekCalendar('refresh'); $("#calendar_new_entry_form").remove(); } ); }, Cancel: function() { $event.remove(); $(this).dialog('close'); $("#calendar_new_entry_form").remove(); } }, close: function() { $('#calendar').weekCalendar('removeUnsavedEvents'); $("#calendar_new_entry_form").remove(); } }); $("#calendar_new_entry_form").dialog('open'); } What's happening here is that a form is created and added to the body (the second line of the function), then the third line of the function creates a modal window from that form and adds some buttons to it. Our modal dialog should look like this: The Save button, when pressed, calls the server-side file calendar.php with the parameters needed to save the event, including the start and end, and the title and body. When the result returns, the calendar is refreshed with the new event's data included. When any of the buttons are clicked, we close the dialog and remove it from the page completely. Note how we are sending time information to the server (shown highlighted in the code we just saw). JavaScript time functions usually measure in milliseconds, but we want to send it to PHP, which generally measures time in seconds. So, we convert the value on the client so that the PHP can use the received data as it is, without needing to do anything to it. Every little helps! Server-side code On the server side, we want to take the new event and save it. Remember that we're doing it in sessions in this example, but you should feel free to adapt this to any other model that you wish. 
Create a file called calendar.php and save it with this source in it:

<?php
session_start();
if(!isset($_SESSION['calendar'])){
  $_SESSION['calendar']=array(
    'ids'=>0,
  );
}
if(isset($_REQUEST['action'])){
  switch($_REQUEST['action']){
    case 'save': // {
      $start_date=(int)$_REQUEST['start'];
      $data=array(
        'title'=>(isset($_REQUEST['title'])?$_REQUEST['title']:''),
        'body' =>(isset($_REQUEST['body'])?$_REQUEST['body']:''),
        'start'=>date('c',$start_date),
        'end'  =>date('c',(int)$_REQUEST['end'])
      );
      $id=(int)$_REQUEST['id'];
      if($id && isset($_SESSION['calendar'][$id])){
        $_SESSION['calendar'][$id]=$data;
      }
      else{
        $id= ++$_SESSION['calendar']['ids'];
        $_SESSION['calendar'][$id]=$data;
      }
      echo 1;
      exit;
    // }
  }
}
?>

In the server-side code of this project, all the requested actions are handled by a switch statement. This is done for demonstration purposes—whenever we add a new action, we simply add a new switch case. If you are using this for your own purposes, you may wish to rewrite it using functions instead of large switch cases.

The date function is used to convert the start and end parameters into ISO 8601 date format. That's the format jquery-week-calendar prefers, so we'll try to keep everything in that format.

Visually, nothing appears to happen, but the data is actually being saved. To see what's being saved, create a new file named test.php, and use the var_dump function in it to examine the session data (view it in your browser):

<?php
session_start();
var_dump($_SESSION);
?>

On my test machine, the dump shows the calendar array from the session: its ids counter, plus an entry holding the title, body, start, and end of each event that has been saved.
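One thing worth flagging before we move on: the save request in calendar_new_entry goes out as a GET via $.getJSON, so the event's title and body end up in the query string. For long descriptions you may prefer a POST. The following is only a sketch of that variation, not part of the original example; it assumes the calendar.php endpoint above (which reads everything through $_REQUEST and therefore accepts POST parameters unchanged), and the helper name save_calendar_entry is made up for illustration:

// Sketch: the same save request sent as a POST instead of a GET.
// calendar.php reads $_REQUEST, so no server-side changes are needed.
function save_calendar_entry(calEvent, callback) {
  $.post('./calendar.php', {
    'action': 'save',
    'id': 0,
    // Convert JavaScript milliseconds to the seconds PHP expects.
    'start': Math.round(calEvent.start.getTime() / 1000),
    'end': Math.round(calEvent.end.getTime() / 1000),
    'title': $('#calendar_new_entry_form_title').val(),
    'body': $('#calendar_new_entry_form_body').val()
  }, callback);
}

Keeping the text out of the URL avoids query-string length limits and stops event descriptions from showing up in server access logs.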
Let's Build with AngularJS and Bootstrap

Packt
03 Jun 2015
14 min read
In this article by Stephen Radford, author of the book Learning Web Development with Bootstrap and AngularJS, we're going to use Bootstrap and AngularJS. We'll look at building a maintainable code base as well as exploring the full potential of both frameworks. (For more resources related to this topic, see here.)

Working with directives

Something we've been using already without knowing it is what Angular calls directives. These are essentially powerful functions that can be called from an attribute or even their own element, and Angular is full of them. Whether we want to loop data, handle clicks, or submit forms, Angular will speed everything up for us. We first used a directive to initialize Angular on the page using ng-app, and all of the directives we're going to look at in this article are used in the same way—by adding an attribute to an element.

Before we take a look at some more of the built-in directives, we need to quickly make a controller. Create a new file and call it controller.js. Save this to your js directory within your project and open it up in your editor. Controllers are just standard JS constructor functions that we can inject Angular's services such as $scope into. These functions are instantiated when Angular detects the ng-controller attribute. As such, we can have multiple instances of the same controller within our application, allowing us to reuse a lot of code. This familiar function declaration is all we need for our controller:

function AppCtrl(){}

To let the framework know this is the controller we want to use, we need to include this on the page after Angular is loaded and also attach the ng-controller directive to our opening <html> tag:

<html ng-controller="AppCtrl">
…
<script type="text/javascript" src="assets/js/controller.js"></script>

ng-click and ng-mouseover

One of the most basic things you'll have ever done with JavaScript is listen for a click event. This could have been using the onclick attribute on an element, using jQuery, or even with an event listener. In Angular, we use a directive. To demonstrate this, we'll create a button that will launch an alert box—simple stuff. First, let's add the button to the content area we created earlier:

<div class="col-sm-8"><button>Click Me</button></div>

If you open this up in your browser, you'll see a standard HTML button created—no surprises there. Before we attach the directive to this element, we need to create a handler in our controller. This is just a function within our controller that is attached to the scope. It's very important we attach our function to the scope or we won't be able to access it from our view at all:

function AppCtrl($scope){
  $scope.clickHandler = function(){
    window.alert('Clicked!');
  };
}

As we already know, we can have multiple scopes on a page and these are just objects that Angular allows the view and the controller to have access to. In order for the controller to have access, we've injected the $scope service into our controller. This service provides us with the scope Angular creates on the element we added the ng-controller attribute to.

Angular relies heavily on dependency injection, which you may or may not be familiar with. As we've seen, Angular is split into modules and services. Each of these modules and services depends upon one another, and dependency injection provides referential transparency. When unit testing, we can also mock objects that will be injected to confirm our test results.
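Because Angular works out what to inject by matching the parameter names of our controller function, minifying controller.js (which renames $scope to something shorter) would break this lookup. The following is only a sketch of the standard safeguard, the $inject annotation; it isn't part of the original example, but it relies on documented AngularJS behaviour and leaves the controller working exactly as before:

function AppCtrl($scope){
  $scope.clickHandler = function(){
    window.alert('Clicked!');
  };
}
// The $inject array names the services explicitly, so it no longer matters
// what a minifier renames the function's parameters to.
AppCtrl.$inject = ['$scope'];

During development, the plain parameter-name style used throughout this article is perfectly fine; the annotation only starts to matter once you minify your scripts.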
DI allows us to tell Angular what services our controller depends upon, and the framework will resolve these for us. An in-depth explanation of AngularJS' dependency injection can be found in the official documentation at https://docs.angularjs.org/guide/di.

Okay, so our handler is set up; now we just need to add our directive to the button. Just like before, we need to add it as an additional attribute. This time, we're going to pass through the name of the function we're looking to execute, which in this case is clickHandler. Angular will evaluate anything we put within our directive as an AngularJS expression, so we need to be sure to include the two parentheses indicating that this is a function we're calling:

<button ng-click="clickHandler()">Click Me</button>

If you load this up in your browser, you'll be presented with an alert box when you click the button. You'll also notice that we don't need to include the $scope variable when calling the function in our view. Functions and variables that can be accessed from the view live within the current scope or any ancestor scope.

Should we wish to display our alert box on hover instead of click, it's just a case of changing the name of the directive to ng-mouseover, as they both function in the exact same way.

ng-init

The ng-init directive is designed to evaluate an expression on the current scope and can be used on its own or in conjunction with other directives. It's executed at a higher priority than other directives to ensure the expression is evaluated in time. Here's a basic example of ng-init in action:

<div ng-init="test = 'Hello, World'"></div>
{{test}}

This will display Hello, World onscreen when the application is loaded in your browser. Above, we've set the value of the test model and then used the double curly-brace syntax to display it.

ng-show and ng-hide

There will be times when you'll need to control whether an element is displayed programmatically. Both ng-show and ng-hide can be controlled by the value returned from a function or a model. We can extend the clickHandler function we created to demonstrate the ng-click directive so that it toggles the visibility of our element. We'll do this by creating a new model and toggling its value between true and false. First of all, let's create the element we're going to be showing or hiding. Pop this below your button:

<div ng-hide="isHidden">Click the button above to toggle.</div>

The value within the ng-hide attribute is our model. Because this is within our scope, we can easily modify it within our controller:

$scope.clickHandler = function(){
  $scope.isHidden = !$scope.isHidden;
};

Here we're just reversing the value of our model, which in turn toggles the visibility of our <div>. If you open up your browser, you'll notice that the element isn't hidden by default. There are a few ways we could tackle this. Firstly, we could set the value of $scope.isHidden to true within our controller. We could also set the value of isHidden to true using the ng-init directive, as shown in the sketch below. Alternatively, we could switch to the ng-show directive, which functions in reverse to ng-hide and will only make an element visible if a model's value is set to true.

Ensure Angular is loaded within your header or ng-hide and ng-show won't function correctly. This is because Angular uses its own classes to hide elements and these need to be loaded on page render.
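Here's a short sketch putting those last two suggestions together. It isn't part of the original example, but it only combines directives we've just covered: ng-init gives isHidden a starting value of true so the element begins hidden, and the second element shows the equivalent markup written with ng-show instead of ng-hide.

<!-- Starts hidden because ng-init sets isHidden to true; the clickHandler
     from earlier toggles it back into view. -->
<div ng-init="isHidden = true" ng-hide="isHidden">Click the button above to toggle.</div>

<!-- The same behaviour with ng-show: visible only while isHidden is false. -->
<div ng-show="!isHidden">Click the button above to toggle.</div>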
ng-if

Angular also includes an ng-if directive that works in a similar fashion to ng-show and ng-hide. However, ng-if actually removes the element from the DOM, whereas ng-show and ng-hide just toggle the element's visibility. Let's take a quick look at how we'd use ng-if with the preceding code:

<div ng-if="isHidden">Click the button above to toggle.</div>

If we wanted to reverse the statement's meaning, we'd simply need to add an exclamation point before our expression:

<div ng-if="!isHidden">Click the button above to toggle.</div>

ng-repeat

Something you'll come across very quickly when building a web app is the need to render an array of items. For example, in our contacts manager, this would be a list of contacts, but it could be anything. Angular allows us to do this with the ng-repeat directive. Here's an example of some data we may come across. It's an array of objects with multiple properties within it. To display the data, we're going to need to be able to access each of the properties. Thankfully, ng-repeat allows us to do just that. Here's our controller with an array of contact objects assigned to the contacts model:

function AppCtrl($scope){
  $scope.contacts = [
    {
      name: 'John Doe',
      phone: '01234567890',
      email: 'john@example.com'
    },
    {
      name: 'Karan Bromwich',
      phone: '09876543210',
      email: 'karan@email.com'
    }
  ];
}

We have just a couple of contacts here, but as you can imagine, this could be hundreds of contacts served from an API, which just wouldn't be feasible to work with without ng-repeat. First, add an array of contacts to your controller and assign it to $scope.contacts. Next, open up your index.html file and create a <ul> tag. We're going to be repeating a list item within this unordered list, so this is the element we need to add our directive to:

<ul>
  <li ng-repeat="contact in contacts"></li>
</ul>

If you're familiar with how loops work in PHP or Ruby, then you'll feel right at home here. We create a variable that we can access within the current element being looped. The variable after the in keyword references the model we created on $scope within our controller. This gives us the ability to access any of the properties set on that object, with each repeated item gaining its own scope. We can display these on the page using Angular's double curly-brace syntax:

<ul>
  <li ng-repeat="contact in contacts">{{contact.name}}</li>
</ul>

You'll notice that this outputs the name within our list item as expected, and we can easily access any property on our contact object by referencing it using the standard dot syntax.
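The repeater isn't limited to a single property, and Angular's built-in filters can be applied right inside the ng-repeat expression. As a small sketch (assuming the same contacts array from the controller above), this version prints each contact's details and sorts the list by name using the built-in orderBy filter:

<ul>
  <li ng-repeat="contact in contacts | orderBy:'name'">
    {{contact.name}} ({{contact.phone}}, {{contact.email}})
  </li>
</ul>

Because filters are re-evaluated on every digest, very large lists are sometimes better sorted once in the controller instead, but for a handful of contacts this is the simplest approach.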
ng-class

Often there are times when you'll want to change or add a class to an element programmatically. We can use the ng-class directive to achieve this. It will let us define a class to add or remove based on the value of a model. There are a couple of ways we can utilize ng-class. In its most simple form, Angular will apply the value of the model as a CSS class to the element:

<div ng-class="exampleClass"></div>

Should the model referenced be undefined or false, Angular won't apply a class. This is great for single classes, but what if you want a little more control or want to apply multiple classes to a single element? Try this:

<div ng-class="{className: model, class2: model2}"></div>

Here, the expression is a little different. We've got a map of class names with the model we wish to check against. If the model returns true, then the class will be added to the element. Let's take a look at this in action. We'll use checkboxes with the ng-model attribute to apply some classes to a paragraph:

<p ng-class="{'text-center': center, 'text-danger': error}">Lorem ipsum dolor sit amet</p>

I've added two Bootstrap classes: text-center and text-danger. These observe a couple of models, which we can quickly change with some checkboxes:

The single quotation marks around the class names within the expression are only required when using hyphens, or an error will be thrown by Angular.

<label><input type="checkbox" ng-model="center"> text-center</label>
<label><input type="checkbox" ng-model="error"> text-danger</label>

When these checkboxes are checked, the relevant classes will be applied to our element.

ng-style

In a similar way to ng-class, this directive is designed to allow us to dynamically style an element with Angular. To demonstrate this, we'll create a third checkbox that will apply some additional styles to our paragraph element. The ng-style directive uses a standard JavaScript object, with the keys being the properties we wish to change (for example, color and background). This can be applied from a model or a value returned from a function. Let's take a look at hooking it up to a function that will check a model. We can then add this to our checkbox to turn the styles off and on. First, open up your controller.js file and create a new function attached to the scope. I'm calling mine styleDemo:

$scope.styleDemo = function(){
  if(!$scope.styler){
    return;
  }
  return {
    background: 'red',
    fontWeight: 'bold'
  };
};

Inside the function, we need to check the value of a model; in this example, it's called styler. If it's false, we don't need to return anything; otherwise, we return an object with our CSS properties. You'll notice that we used fontWeight rather than font-weight in our returned object. Either is fine, and Angular will automatically switch the camelCase over to the correct CSS property. Just remember that when using hyphens in JavaScript object keys, you'll need to wrap them in quotation marks. This model is going to be attached to a checkbox, just like we did with ng-class:

<label><input type="checkbox" ng-model="styler"> ng-style</label>

The last thing we need to do is add the ng-style directive to our paragraph element:

<p .. ng-style="styleDemo()">Lorem ipsum dolor sit amet</p>

Angular is clever enough to call this function again every time the scope changes. This means that as soon as our model's value changes from false to true, our styles will be applied, and vice versa.

ng-cloak

The final directive we're going to look at is ng-cloak. When using Angular's templates within our HTML page, the double curly braces are temporarily displayed before AngularJS has finished loading and compiling everything on our page. To get around this, we need to temporarily hide our template before it's finished rendering. Angular allows us to do this with the ng-cloak directive. This sets an additional style on our element whilst it's being loaded: display: none !important;. To ensure there's no flashing while content is being loaded, it's important that Angular is loaded in the head section of our HTML page.
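If you can't load Angular in the head (for example, because all of your scripts are bundled at the bottom of the page), the usual workaround is to add the ng-cloak rule to your own stylesheet so the cloak takes effect before any script runs. This sketch uses the selector set from the AngularJS documentation, and the welcomeMessage model is made up purely for illustration:

/* Hide cloaked elements until Angular removes the ng-cloak attribute/class. */
[ng\:cloak], [ng-cloak], [data-ng-cloak], [x-ng-cloak], .ng-cloak, .x-ng-cloak {
  display: none !important;
}

<!-- Applied to any element containing template expressions: -->
<div ng-cloak>{{welcomeMessage}}</div>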
Summary

We've covered a lot in this article, so let's recap it all. Bootstrap allowed us to quickly create a responsive navigation. We needed to include the JavaScript file included with our Bootstrap download to enable the toggle on the mobile navigation. We also looked at the powerful responsive grid system included with Bootstrap and created a simple two-column layout. While we were doing this, we learnt about the four different column class prefixes as well as nesting our grid. To adapt our layout, we discovered some of the helper classes included with the framework that allow us to float, center, and hide elements.

In this article, we saw in detail Angular's built-in directives: functions Angular allows us to use from within our view. Before we could look at them, we needed to create a controller, which is just a function that we can pass Angular's services into using dependency injection. Directives such as ng-click and ng-mouseover are essentially just new ways of handling events that you will no doubt have handled before using either jQuery or vanilla JavaScript. However, directives such as ng-repeat will probably be a completely new way of working. They bring some logic directly into our view to loop through data and display it on the page. We also looked at directives that observe models on our scope and perform different actions based on their values. Directives like ng-show and ng-hide will show or hide an element based on a model's value. We also saw this in action with ng-class, which allowed us to add classes to our elements based on our models' values.

Resources for Article:

Further resources on this subject:
AngularJS Performance [Article]
AngularJS Web Application Development Cookbook [Article]
Role of AngularJS [Article]