
How-To Tutorials - Web Development


The Media module

Packt
06 Aug 2013
10 min read
While there are many ways to build image integration into Drupal, they all stem from different requirements, and each option should be reviewed carefully. Browsing the more than 300 modules available in the Media category of Drupal 7's module search (http://drupal.org/project/modules) may leave you confused as to where to begin. We'll take a look at the Media module (http://drupal.org/project/media), which was sponsored by companies such as Acquia, Palantir, and Advomatic and was created to provide a solid infrastructure and common APIs for working with media assets, and with images specifically.

To begin, download the 7.x-2.x version of the Media module (currently regarded as unstable, but fairly different from 7.x-1.x, which it will replace soon enough) and unpack it to the sites/all/modules directory as we did before. The Media module also requires the File entity module (http://drupal.org/project/file_entity), which further extends how files are managed within Drupal by providing a fieldable file entity, display modes, and more. Use the 7.x-2.x unstable version of the File entity module too, and download and unpack it as always. To enable these modules, navigate to the top administrative bar and click on Modules. Scrolling to the bottom of the page, we see the Media category with a collection of modules; toggle all of them on (Media field and Media Internet sources) and click on Save configuration.

Adding a media asset field

If you noticed something missing in the rezepi content type fields earlier, you were right: what kind of recipe website would this be without some visual stimulation? Yes, we mean pictures! To add a new field, navigate to Structure | Content Types | rezepi | manage fields (/admin/structure/types/manage/rezepi/fields). Name the new field Picture, choose Image as the FIELD TYPE and Media file selector as the WIDGET, and click on Save. As always, we are about to configure the new field's settings, but a step before that first presents the global settings for this new field; these are fine as they are, so we continue and click on Save field settings. In the general field settings, most defaults are suitable, except that we want to toggle on the Required field setting and make sure the Allowed file extensions for uploaded files setting lists at least some common image types, so set it to PNG, GIF, JPG, JPEG. Click on Save settings to finalize. We have now updated the rezepi content type, so let's start using it.

When adding a rezepi, the form for filling in the fields should be similar to the following: the Picture field we defined as an image no longer has a file upload form element, but rather a Select media button. Once we click on it, we can observe multiple tabbed options. For now, we are concerned only with the Upload tab, where we submit our picture for this rezepi entry. After browsing your local folder and uploading the file, clicking Save presents the new media asset form. Our picture has been added to the website's media library, and we can see that it's no longer just a file uploaded somewhere; rather, it's a media asset with a thumbnail, and it even has a way to configure the image HTML element's attributes. We'll proceed by clicking on Save, and once more on the add new content form, to finalize this new rezepi submission.
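The same field can also be created programmatically from a custom module's install hook. The following is only a rough sketch of that approach, assuming Drupal 7's Field API; the machine name field_picture and the module name are our own invention, and the media_generic widget name may differ between Media module releases:

```php
<?php
// Hypothetical sketch: create the Picture field for the rezepi content
// type in code instead of through the UI. Names here are assumptions.
function mymodule_install() {
  // Storage-level definition of the image field.
  field_create_field(array(
    'field_name' => 'field_picture',
    'type' => 'image',
    'cardinality' => 1,
  ));

  // Attach an instance of the field to the rezepi node type.
  field_create_instance(array(
    'field_name' => 'field_picture',
    'entity_type' => 'node',
    'bundle' => 'rezepi',
    'label' => 'Picture',
    'required' => TRUE,
    'settings' => array(
      // Mirrors the Allowed file extensions setting above.
      'file_extensions' => 'png gif jpg jpeg',
    ),
    'widget' => array(
      // The Media file selector widget provided by the Media module.
      'type' => 'media_generic',
    ),
  ));
}
```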
The media library

To further explore the media asset tabs we saw before, we will edit the recently created rezepi entry and try to replace the previously uploaded picture with another. In the node's edit form, click on the Picture field's Select media button and browse the Library tab, which should resemble the following: the Library tab is actually just a view (you can easily tell by the down-arrow and gear icons at the right of the screen) that lists all the files on your website. Furthermore, this view is equipped with some filters, such as filename and media type, and even sorting options. Straight away, we can see that the picture for the rezepi we created earlier shows up there, because it has been added as a media asset to the library. We can choose to use it again in further content that we create on the website. Without the Media module and its media asset management, we had to use the file field, which only allowed us to upload files to our content, never to reuse content that we, or other users, had created previously. Aside from being annoying, this also meant that we had to duplicate files if we needed the same media file in more than one content type.

The numbered images probably belong to some of the themes that we experimented with before, and the last two files are the images we uploaded to our memo content type. Because these files were not created when the Media module was installed, they lack some of the metadata entries that the Media module keeps to better organize media assets.

To manage our media library, we can click on Content in the top administrative bar, which shows all content that has been created on your Drupal site. It features filtering and ordering of the columns to easily find content to moderate or investigate, and even provides some bulk update actions on several content types. More importantly, after enabling the Media module we have a new option to choose from in the top-right tabs: along with Content and Comments, we now have Files. The page lists all file uploads, both prior to the Media module and afterwards, and clearly states the relevant metadata such as media type, size, and the user who uploaded each file. We can also choose between List view and Thumbnail view using the top-right tab options, which offers a nicer view and management of our content.

The media library management page also features options to add media assets right from this page, using the Add file and Import files links. While we've already seen how adding a single media file works, adding a bunch of files is something new. The Import files option allows you to specify a directory on your web server which contains media files and import them all into your Drupal website. After clicking on Preview, it will list the full paths to the files that were detected and ask you to confirm before continuing with the import process. Once that's successfully completed, you can return to the files thumbnail view (/admin/content/file/thumbnails) and edit the imported files, perhaps setting some title text or removing some entries. You might be puzzled as to the point of importing media files directly from the server's web directory; after all, this requires someone to have transferred the files there via FTP, SCP, or some other method, which is definitely somewhat unconventional these days.
Your hunch is correct: the import feature is nice to have, but it's definitely not a replacement for bulk uploads of files from the web interface, which Drupal should support; we will learn about adding this capability later on.

When using the media library to manage these files, you will probably ask yourself, before deleting or replacing an image, where is it actually being used? For that reason, Drupal's internal file handling keeps track of which entity makes use of each file, and the Media module exposes this information to us via the web interface. Any information about a media asset is available in its Edit or View tabs, including where it is being used. Let's navigate through the media library to find the image we created previously for the rezepi entry and then click on Edit in the rightmost OPERATIONS column. On the Edit page, we can click on the USAGE tab at the top right of the page to get this information: we can tell which entity type is using this file, see the title of the node that it's being used in (with a link to it), and finally the usage count.

Using URL aliases

If you are familiar with Drupal's internal URL aliases, then you know that Drupal employs a convention of /node/<NID>[/ACTION], where NID is replaced by the node ID in the database and ACTION may be one of edit, view, or perhaps delete. To see this for yourself, you can click on one of the content items we previously created and, when viewing its full node display, observe the URL in your browser's address bar. When working with media assets, we can employ the same URL alias convention for files too, using the alias /file/<FID>[/ACTION]. For example, to see where the first file you uploaded is being used, navigate in your browser to /file/1/usage.

Remote media assets

If we wanted to replace the picture for this rezepi by specifying a link to an image we encountered on a website, maybe even our friend's personal blog, the only way to do that without the Media module was to download it and upload it using the file field's upload widget. With the Media module, we can specify the link for an image hosted by a remote resource using the Web tab. I've Googled some images and, after finding my choice of picture, I simply copy and paste the image link into the URL input text as follows. After clicking on Submit, the image file is downloaded to our website's files directory, and the Media module creates the required metadata and presents the picture's settings form before replacing our previous picture.

There are plenty of modules, such as Media: Flickr (http://drupal.org/project/media_flickr), which extend the Media module by providing integration with remote resources for images; that one even provides support for a Flickr photoset or slideshow. To list a few other modules:

- Media: Tableau (http://drupal.org/project/media_tableau) for integrating with the Tableau analytics platform
- Media: Slideshare (http://drupal.org/project/media_slideshare) for integrating with presentations on the Slideshare website
- Media: Dailymotion (http://drupal.org/project/media_dailymotion) for integrating with the Dailymotion video sharing website

The only thing left for you is to download them from http://drupal.org/modules and start experimenting!
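Incidentally, the usage information shown in the USAGE tab can also be read in code through Drupal 7 core's file usage API. A minimal sketch, assuming a file ID of 1 as in the URL alias example above:

```php
<?php
// Sketch: list where a file is used, mirroring the USAGE tab.
$file = file_load(1);
if ($file) {
  // Nested array keyed by module, then entity type, then entity ID.
  $usage = file_usage_list($file);
  foreach ($usage as $module => $types) {
    foreach ($types as $type => $objects) {
      foreach ($objects as $id => $count) {
        print "$module: $type $id ($count)\n";
      }
    }
  }
}
```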
Summary

In this article, we dived into deep water, creating our very own content type for a food recipe website. In order to provide a better user experience when dealing with images on Drupal sites, we learned about the prominent Media module and its extensive support for media resources, such as providing a media library and key integration with other modules such as Media Gallery.


Working sample for controlling the mouse by hand

Packt
06 Aug 2013
10 min read
Getting ready

Create a project in Visual Studio and prepare it for working with OpenNI and NiTE.

How to do it...

Copy the ReadLastCharOfLine() and HandleStatus() functions to the top of your source code (just below the #include lines). Then add the following lines of code:

```cpp
class MouseController : public nite::HandTracker::NewFrameListener
{
private:
    float startPosX, startPosY;
    int curX, curY;
    nite::HandId handId;
    RECT desktopRect;
public:
    MouseController(){
        startPosX = startPosY = -1;
        POINT curPos;
        if (GetCursorPos(&curPos)) {
            curX = curPos.x;
            curY = curPos.y;
        }else{
            curX = curY = 0;
        }
        handId = -1;
        const HWND hDesktop = GetDesktopWindow();
        GetWindowRect(hDesktop, &desktopRect);
    }
    void onNewFrame(nite::HandTracker& hTracker){
        nite::Status status = nite::STATUS_OK;
        nite::HandTrackerFrameRef newFrame;
        status = hTracker.readFrame(&newFrame);
        if (!HandleStatus(status) || !newFrame.isValid()) return;
        const nite::Array<nite::GestureData>& gestures =
            newFrame.getGestures();
        for (int i = 0; i < gestures.getSize(); ++i){
            if (gestures[i].isComplete()){
                if (gestures[i].getType() == nite::GESTURE_CLICK){
                    INPUT Input = {0};
                    Input.type = INPUT_MOUSE;
                    Input.mi.dwFlags =
                        MOUSEEVENTF_LEFTDOWN | MOUSEEVENTF_LEFTUP;
                    SendInput(1, &Input, sizeof(INPUT));
                }else{
                    nite::HandId handId;
                    status = hTracker.startHandTracking(
                        gestures[i].getCurrentPosition(), &handId);
                }
            }
        }
        const nite::Array<nite::HandData>& hands = newFrame.getHands();
        for (int i = hands.getSize() - 1; i >= 0; --i){
            if (hands[i].isTracking()){
                if (hands[i].isNew() || handId != hands[i].getId()){
                    status = hTracker.convertHandCoordinatesToDepth(
                        hands[i].getPosition().x,
                        hands[i].getPosition().y,
                        hands[i].getPosition().z,
                        &startPosX, &startPosY);
                    handId = hands[i].getId();
                    if (status != nite::STATUS_OK){
                        startPosX = startPosY = -1;
                    }
                }else if (startPosX >= 0 && startPosY >= 0){
                    float posX, posY;
                    status = hTracker.convertHandCoordinatesToDepth(
                        hands[i].getPosition().x,
                        hands[i].getPosition().y,
                        hands[i].getPosition().z,
                        &posX, &posY);
                    if (status == nite::STATUS_OK){
                        if (abs(int(posX - startPosX)) > 10)
                            curX += ((posX - startPosX) - 10) / 3;
                        if (abs(int(posY - startPosY)) > 10)
                            curY += ((posY - startPosY) - 10) / 3;
                        curX = min(curX, desktopRect.right);
                        curX = max(curX, desktopRect.left);
                        curY = min(curY, desktopRect.bottom);
                        curY = max(curY, desktopRect.top);
                        SetCursorPos(curX, curY);
                    }
                }
                break;
            }
        }
    }
};
```

Then locate the following line:

```cpp
int _tmain(int argc, _TCHAR* argv[]) {
```

Add the following inside this function:

```cpp
nite::Status status = nite::STATUS_OK;
status = nite::NiTE::initialize();
if (!HandleStatus(status)) return 1;

printf("Creating hand tracker ...\r\n");
nite::HandTracker hTracker;
status = hTracker.create();
if (!HandleStatus(status)) return 1;

MouseController* listener = new MouseController();
hTracker.addNewFrameListener(listener);
hTracker.startGestureDetection(nite::GESTURE_HAND_RAISE);
hTracker.startGestureDetection(nite::GESTURE_CLICK);

printf("Reading data from hand tracker ...\r\n");
ReadLastCharOfLine();

nite::NiTE::shutdown();
openni::OpenNI::shutdown();
return 0;
```

How it works...

Both the ReadLastCharOfLine() and HandleStatus() functions are present here too. These functions are well known to you and don't need any explanation.
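If you don't have those helpers at hand, here is a rough sketch of what they might look like, based on how they are used in this recipe (our reconstruction, not the book's exact code):

```cpp
// Hypothetical reconstruction of the two helper functions.
// HandleStatus() reports a NiTE error and tells the caller to bail out.
bool HandleStatus(nite::Status status){
    if (status == nite::STATUS_OK)
        return true;
    printf("ERROR: #%d\r\n", (int)status);
    return false;
}

// ReadLastCharOfLine() blocks until the user presses Enter and returns
// the last character typed on that line.
char ReadLastCharOfLine(){
    int newChar = 0;
    int lastChar = 0;
    while ((newChar = getchar()) != '\n' && newChar != EOF){
        lastChar = newChar;
    }
    return (char)lastChar;
}
```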
Then, in the second part, we declared a class/struct that we are going to use for capturing the new-data-available event from the nite::HandTracker object. But the definition of this class is a little different here. Other than the onNewFrame() method, we defined a number of variables and a constructor method for this class too. We also changed its name to MouseController to make more sense of it.

```cpp
class MouseController : public nite::HandTracker::NewFrameListener
{
private:
    float startPosX, startPosY;
    int curX, curY;
    nite::HandId handId;
    RECT desktopRect;
```

As you can see, our class is still a child class of nite::HandTracker::NewFrameListener, because we are going to use it to listen to the nite::HandTracker events. We also defined six variables in our class: startPosX and startPosY hold the initial position of the active hand, whereas curX and curY hold the position of the mouse when in motion. The handId variable is responsible for holding the ID of the active hand, and desktopRect for holding the size of the desktop, so that we can move our mouse only within this area. These variables are all private; this means they will not be accessible from outside the class.

Then we have the class's constructor method, which initializes some of the preceding variables. Refer to the following code:

```cpp
public:
    MouseController(){
        startPosX = startPosY = -1;
        POINT curPos;
        if (GetCursorPos(&curPos)) {
            curX = curPos.x;
            curY = curPos.y;
        }else{
            curX = curY = 0;
        }
        handId = -1;
        const HWND hDesktop = GetDesktopWindow();
        GetWindowRect(hDesktop, &desktopRect);
    }
```

In the constructor, we set both startPosX and startPosY to -1 and then store the current position of the mouse in the curX and curY variables. Then we set the handId variable to -1, to mark that there is currently no active hand, and retrieve the value of desktopRect using two Windows API methods, GetDesktopWindow() and GetWindowRect().

The most important tasks happen in the onNewFrame() method. This method is the one that will be called when new data becomes available in nite::HandTracker; it is then responsible for processing this data. As the running of this method means that new data is available, the first thing to do in its body is to read this data. So we used the nite::HandTracker::readFrame() method to read the data from this object:

```cpp
    void onNewFrame(nite::HandTracker& hTracker){
        nite::Status status = nite::STATUS_OK;
        nite::HandTrackerFrameRef newFrame;
        status = hTracker.readFrame(&newFrame);
```

When working with nite::HandTracker, the first thing to do after reading the data is to handle gestures, if you expect any. We expect the Hand Raise gesture to detect new hands and the Click gesture to perform the mouse click:

```cpp
        const nite::Array<nite::GestureData>& gestures =
            newFrame.getGestures();
        for (int i = 0; i < gestures.getSize(); ++i){
            if (gestures[i].isComplete()){
                if (gestures[i].getType() == nite::GESTURE_CLICK){
                    INPUT Input = {0};
                    Input.type = INPUT_MOUSE;
                    Input.mi.dwFlags =
                        MOUSEEVENTF_LEFTDOWN | MOUSEEVENTF_LEFTUP;
                    SendInput(1, &Input, sizeof(INPUT));
                }else{
                    nite::HandId handId;
                    status = hTracker.startHandTracking(
                        gestures[i].getCurrentPosition(), &handId);
                }
            }
        }
```

As you can see, we retrieved the list of all the gestures using nite::HandTrackerFrameRef::getGestures() and then looped through them, searching for the ones in the completed state. If one of them is the nite::GESTURE_CLICK gesture, we need to perform a mouse click; we used the SendInput() function from the Windows API to do so here.
But if the recognized gesture wasn't of the type nite::GESTURE_CLICK, it must be a nite::GESTURE_HAND_RAISE gesture, so we need to request the tracking of this newly recognized hand using the nite::HandTracker::startHandTracking() method.

The next thing is to take care of the hands being tracked. To do so, we first need to retrieve a list of them using the nite::HandTrackerFrameRef::getHands() method and then loop through them. This could be done with a simple for loop, as we used for the gestures, but since we want to read this list in reverse order, we need a reverse for loop. The reason we read this list in reverse order is that we always want the last recognized hand to control the mouse:

```cpp
        const nite::Array<nite::HandData>& hands = newFrame.getHands();
        for (int i = hands.getSize() - 1; i >= 0; --i){
```

Then we need to make sure that the current hand is being tracked, because we don't want an invisible hand to control the mouse. The first hand being tracked is the one we want, so we break out of the loop there, of course after the processing part, which we have removed from the following code to make it clearer:

```cpp
            if (hands[i].isTracking()){
                . . .
                break;
            }
```

Speaking of processing, in place of the three dots in the preceding code we have another condition. This condition is responsible for finding out whether this hand is the same one that had control of the mouse in the last frame. If it is a new hand (either a newly recognized hand or a newly active one), we need to save its current position in the startPosX and startPosY variables:

```cpp
                if (hands[i].isNew() || handId != hands[i].getId()){
                    status = hTracker.convertHandCoordinatesToDepth(
                        hands[i].getPosition().x,
                        hands[i].getPosition().y,
                        hands[i].getPosition().z,
                        &startPosX, &startPosY);
                    handId = hands[i].getId();
                    if (status != nite::STATUS_OK){
                        startPosX = startPosY = -1;
                    }
```

If it is the same hand, we have another condition: do we already have the startPosX and startPosY values, or not? If we have them, we can calculate the mouse's movement. But first we need to calculate the position of the hand relative to the depth frame:

```cpp
                }else if (startPosX >= 0 && startPosY >= 0){
                    float posX, posY;
                    status = hTracker.convertHandCoordinatesToDepth(
                        hands[i].getPosition().x,
                        hands[i].getPosition().y,
                        hands[i].getPosition().z,
                        &posX, &posY);
```

Once the conversion ends, we need to calculate the new position of the mouse depending on how the hand's position has changed. But we want to define a safe area in which the cursor stays static when small changes happen, so we calculate the new position of the mouse only if the hand has moved by more than 10 pixels in our depth frame:

```cpp
                    if (status == nite::STATUS_OK){
                        if (abs(int(posX - startPosX)) > 10)
                            curX += ((posX - startPosX) - 10) / 3;
                        if (abs(int(posY - startPosY)) > 10)
                            curY += ((posY - startPosY) - 10) / 3;
```

As you can see in the preceding code, we also divided the changes by 3, because we didn't want the cursor to move too fast. But before setting the position of the mouse, we first need to make sure that the new position is within the screen viewport, using the desktopRect variable:

```cpp
                        curX = min(curX, desktopRect.right);
                        curX = max(curX, desktopRect.left);
                        curY = min(curY, desktopRect.bottom);
                        curY = max(curY, desktopRect.top);
```

After calculating everything, we can set the new position of the mouse using SetCursorPos() from the Windows API:

```cpp
                        SetCursorPos(curX, curY);
```

Steps 3 and 4 are not markedly different.
In these steps, we have the initialization process; this includes the initialization of NiTE and the creation of the nite::HandTracker variable:

```cpp
status = nite::NiTE::initialize();
. . .
nite::HandTracker hTracker;
status = hTracker.create();
```

Then we add our newly defined structure/class as a listener to nite::HandTracker, so that nite::HandTracker can call it later when a new frame becomes available:

```cpp
MouseController* listener = new MouseController();
hTracker.addNewFrameListener(listener);
```

Also, we need an active search for a hand gesture, because we need to locate the position of new hands, and we have to search for another gesture for the mouse click. So we call the nite::HandTracker::startGestureDetection() method twice, once each for the Click (also known as push) and Hand Raise gestures:

```cpp
hTracker.startGestureDetection(nite::GESTURE_HAND_RAISE);
hTracker.startGestureDetection(nite::GESTURE_CLICK);
```

At the end, we wait until the user presses the Enter key to end the app. We do nothing more in our main thread except wait; everything happens in another thread:

```cpp
ReadLastCharOfLine();
nite::NiTE::shutdown();
openni::OpenNI::shutdown();
return 0;
```

Summary

In this article, we learnt how to write a working example using nite::HandTracker, controlled the position of the mouse cursor using the NiTE hand tracking feature, and simulated a click event.


Introduction to nginx

Packt
31 Jul 2013
8 min read
So, what is nginx?

The best way to describe nginx (pronounced engine-x) is as an event-based multi-protocol reverse proxy. This sounds fancy, but it's not just buzzwords; it actually affects how we approach configuring nginx, and it highlights some of the flexibility that nginx offers. While it is often used as a web server and an HTTP reverse proxy, it can also be used as an IMAP reverse proxy or even a raw TCP reverse proxy. Thanks to the plugin-ready code structure, we can utilize a large number of first- and third-party modules to implement a diverse set of features that make nginx an ideal fit for many typical use cases.

A more accurate description would be to say that nginx is a reverse proxy first and a web server second. I say this because it can help us visualize the request flow through the configuration file and rationalize how to achieve the desired configuration of nginx. The core difference this creates is that nginx works with URIs instead of files and directories, and based on that determines how to process the request. This means that when we configure nginx, we tell it what should happen for a certain URI rather than for a certain file on disk.

A beneficial part of nginx being a reverse proxy is that it fits into a large number of server setups and can handle many things that other web servers simply aren't designed for. A popular question is: "Why even bother with nginx when Apache httpd is available?" The answer lies in the way the two programs are designed. The majority of Apache setups use prefork mode, where we spawn a certain number of processes and then embed our dynamic language in each process. This setup is synchronous, meaning that each process can handle one request at a time, whether that connection is for a PHP script or an image file. In contrast, nginx uses an asynchronous event-based design where each spawned process can handle thousands of concurrent connections. The downside here is that nginx will, for security and technical reasons, not embed programming languages in its own process; this means that to handle those we need to reverse proxy to a backend such as Apache, PHP-FPM, and so on. Thankfully, as nginx is a reverse proxy first and foremost, this is extremely easy to do and still gives us major benefits, even when keeping Apache in use.

Let's take a look at a use case where Apache is used as an application server, as described earlier, rather than just a web server. We have embedded PHP, Perl, or Python into Apache, which has the primary disadvantage of making each request costly. This is because the Apache process is kept busy until the request has been fully served, even if it's a request for a static file. Say our online service has become popular and we now find that our server cannot keep up with the increased demand. In this scenario, introducing nginx as a spoon-feeding layer would be ideal. When an nginx spoon-feeding layer sits between our end users and Apache and a request comes in, nginx reverse proxies it to Apache if it is for a dynamic file, while handling any static file requests itself. This means that we offload a lot of the request handling from the expensive Apache processes to the more lightweight nginx processes, and increase the number of end users we can serve before having to spend money on more powerful hardware.
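As a minimal sketch of such a spoon-feeding setup (the paths, port, and file extensions here are illustrative assumptions, with Apache presumed to listen on 127.0.0.1:8080):

```nginx
server {
    listen 80;
    server_name example.com;
    root /var/www/website;

    # Static assets are served directly by the lightweight nginx workers.
    location ~* \.(?:css|js|png|jpg|jpeg|gif|ico)$ {
        expires 7d;
    }

    # Everything else is reverse proxied to the Apache backend.
    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```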
Another example scenario is one where we have an application being used from all over the world. We don't have any static files, so we can't easily offload a number of requests from Apache. In this use case, our PHP process is busy from the time the request comes in until the user has finished downloading the response. Sadly, not everyone in the world has fast internet and, as a result, the sending process could be busy for a relatively significant period of time. Let's assume our visitor is on an old 56k modem with a maximum download speed of 5 KB per second; it will take them five seconds to download a 25 KB gzipped HTML file generated by PHP. That's five seconds during which our process cannot handle any other request. When we introduce nginx into this setup, PHP spends only microseconds generating the response, while nginx spends the five seconds transferring it to the end user. Because nginx is asynchronous, it will happily handle other connections in the meantime, and thus we significantly increase the number of concurrent requests we can handle.

In the previous two examples I used scenarios where nginx sits in front of Apache, but naturally this is not a requirement. nginx is capable of reverse proxying via, for instance, FastCGI, UWSGI, SCGI, HTTP, or even TCP (through a plugin), enabling backends such as PHP-FPM, Gunicorn, Thin, and Passenger.

Quick start – Creating your first virtual host

It's finally time to get nginx up and running. To start out, let's quickly review the configuration file. If you installed via a system package, the default configuration file location is most likely /etc/nginx/nginx.conf. If you installed from source and didn't change the path prefix, nginx installs itself into /usr/local/nginx and places nginx.conf in a /conf subdirectory. Keep this file open as a reference to help visualize many of the things described in this article.

Step 1 – Directives and contexts

To understand what we'll be covering in this section, let me first introduce a bit of terminology that the nginx community at large uses. Two central concepts in the nginx configuration file are directives and contexts. A directive is basically just an identifier for one of the various configuration options. Contexts refer to the different sections of the nginx configuration file. This term is important because the documentation often states which context a directive is allowed within.

A glance at the standard configuration file should reveal that nginx uses a layered configuration format where blocks are denoted by curly brackets {}; these blocks are what are referred to as contexts. The topmost context is called main, and is not denoted as a block but is rather the configuration file itself. The main context has only a few directives we're really interested in, the two major ones being worker_processes and user. These directives control how many worker processes nginx should run and which user/group nginx should run them under.

Within the main context there are two possible subcontexts, the first one being events. This block handles the directives that deal with the event-polling nature of nginx. Mostly we can ignore every directive in here, as nginx can automatically configure this to be optimal; however, there's one directive that is interesting, namely worker_connections. This directive controls the number of connections each worker can handle. It's important to note here that nginx is a terminating proxy, so if you HTTP proxy to a backend such as Apache httpd, each proxied request will use up two connections.
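Putting the pieces so far together, a stripped-down nginx.conf skeleton might look like the following; the actual values are illustrative, not recommendations:

```nginx
# main context: the configuration file itself.
user             www-data;
worker_processes 4;

events {
    # Maximum simultaneous connections per worker process.
    worker_connections 1024;
}

http {
    # server and location blocks live in here (see below).
}
```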
The second subcontext is the interesting one, called http. This context deals with everything related to HTTP, and it is what we will be working with almost all of the time. While there are directives that are configured directly in the http context, for now we'll focus on a subcontext within http called server. The server context is the nginx equivalent of a virtual host; it is used to handle configuration directives based on the host name your sites are under. Within the server context, we have another subcontext called location. The location context is what we use to match the URI. Basically, a request to nginx flows through our contexts, first matching the server block against the hostname provided by the client, and then the location context against the URI provided by the client.

Depending on the installation method, there might not be any server blocks in the nginx.conf file. Typically, system package managers take advantage of the include directive, which allows us to do an in-place inclusion into our configuration file. This lets us separate out each virtual host and keep our configuration file more organized. If there aren't any server blocks, check the bottom of the file for an include directive and check the directory it includes from; it should contain a file with a server block.

Step 2 – Define your first virtual host

Finally, let us define our first server block!

```nginx
server {
    listen 80;
    server_name example.com;
    root /var/www/website;
}
```

That is basically all we need; strictly speaking, we don't even need to define which port to listen on, as port 80 is the default. However, it's generally good practice to keep it there in case we want to search for all virtual hosts on port 80 later on.

Summary

This article covered the important aspects of nginx, and walked through the configuration of our first virtual host in two simple steps, along with a configuration example.


Working with Blocks

Packt
31 Jul 2013
20 min read
Creating a custom block type

Creating block types is a great way to add custom functionality to a website. This is the preferred way to add things like calendars, dealer locators, or any other type of content that is visible and repeatable on the frontend of the website.

Getting ready

The code for this recipe is available to download from the book's website for free. We are going to create a fully functioning block type that will display content on our website.

How to do it...

The steps for creating a custom block type are as follows:

First, you will need to create a directory in your website's root /blocks directory. The name of the directory should be underscored and will be used to refer to the block throughout the code. In this case, we will create a new directory called /hello_world. Once you have created the hello_world directory, you will need to create the following files:

- controller.php
- db.xml
- form.php
- add.php
- edit.php
- view.php
- view.css

Now, we will add code to each of the files. First, we need to set up the controller file. The controller file is what powers the block. Since this is a very basic block, our controller will only contain information telling concrete5 some details about our block, such as its name and description. Add the following code to controller.php:

```php
class HelloWorldBlockController extends BlockController {

    protected $btTable = "btHelloWorld";
    protected $btInterfaceWidth = "300";
    protected $btInterfaceHeight = "300";

    public function getBlockTypeName() {
        return t('Hello World');
    }

    public function getBlockTypeDescription() {
        return t('A basic Hello World block type!');
    }
}
```

Notice that the class name is HelloWorldBlockController. concrete5 conventions dictate that you name your block controller after the block directory, in camel case (for example, CamelCase), followed by BlockController. The btTable class variable is important, as it tells concrete5 which database table should be used for this block. It is important that this table doesn't already exist in the database, so it's a good idea to give it a name of bt (short for "block type") plus the camel-cased version of the block name.

Now that the controller is set up, we need to set up the db.xml file. This file is based on the ADOXMLS format, which is documented at http://phplens.com/lens/adodb/docs-datadict.htm#xmlschema. This XML file tells concrete5 which database tables and fields should be created for this new block type (and which tables and fields should get updated when your block type gets updated). Add the following XML code to your db.xml file:

```xml
<?xml version="1.0"?>
<schema version="0.3">
    <table name="btHelloWorld">
        <field name="bID" type="I">
            <key />
            <unsigned />
        </field>
        <field name="title" type="C" size="255">
            <default value="" />
        </field>
        <field name="content" type="X2">
            <default value="" />
        </field>
    </table>
</schema>
```

concrete5 blocks typically have both an add.php and an edit.php file, both of which often do the same thing: show the form containing the block's settings.
Since we don't want to repeat code, we will enter our form HTML in a third file, form.php, and include it from both. Add the following code to form.php:

```php
<?php $form = Loader::helper('form'); ?>
<div>
    <label for="title">Title</label>
    <?php echo $form->text('title', $title); ?>
</div>
<div>
    <label for="content">Content</label>
    <?php echo $form->textarea('content', $content); ?>
</div>
```

Once that is all set, add this line of code to both add.php and edit.php to have this HTML code appear when users add and edit the block:

```php
<?php include('form.php') ?>
```

Add the following HTML to your view.php file:

```php
<h1><?php echo $title ?></h1>
<div class="content">
    <?php echo $content ?>
</div>
```

Finally, for a little visual appeal, add the following code to view.css:

```css
.content {
    background: #eee;
    padding: 20px;
    margin: 20px 0;
    border-radius: 10px;
}
```

Now all of the files have been filled with the code to make our Hello World block function, and we need to install the block in concrete5 so we can add it to our pages. To install the new block, sign in to your concrete5 website and navigate to /dashboard/blocks/types/. If you happen to get a PHP fatal error here, clear your concrete5 cache by visiting /dashboard/system/optimization/clear_cache (it is always a good idea to disable the cache while developing in concrete5). At the top of the Block Types screen, you should see your Hello World block, ready to install. Click on the Install button. Now the block is installed and ready to add to your site!

How it works...

Let's go through the code that we just wrote, step by step. In controller.php, there are a few protected variables at the top of the class. The $btTable variable tells concrete5 which table in the database holds the data for this block type. The $btInterfaceWidth and $btInterfaceHeight variables determine the initial size of the dialog window that appears when users add your block to a page on their site. We put the block's description and name in special getter functions for one reason: to potentially support translations down the road. It's best practice to wrap any strings that appear in concrete5 in the global t() function.

The db.xml file tells concrete5 which database tables should be created when this block gets installed. This file uses the ADOXMLS format to generate tables and fields. In this file, we are telling concrete5 to create a table called btHelloWorld. That table should contain three fields: an ID field, the title field, and the content field. The names of these fields should be noted, because concrete5 requires them to match the names of the fields in the HTML form.

In form.php, we are setting up the settings form that users will fill out to save the block's content. We are using the Form Helper to generate the HTML for the various fields. Notice how we are able to use the $title and $content variables without them having been declared yet; concrete5 automatically exposes those variables to the form whenever the block is added or edited. We then include this form in the add.php and edit.php files.

The view.php file is a template containing the HTML that end users will see on the website. We are just wrapping the title in an <h1> tag and the content in a <div> with a class of .content. concrete5 will automatically include view.css (and view.js, if it happens to exist) if they are present in your block's directory. Also, if you include an auto.js file, it will automatically be included when the block is in edit mode.
We added some basic styling to the .content class, and concrete5 takes care of adding this CSS file to your site's <head> tag.

Using block controller callback functions

The block controller class contains a couple of special functions that get called automatically at different points throughout the page load process. You can hook into these callbacks to power different functionalities of your block type.

Getting ready

To get started, you will need a block type created and installed. See the previous recipe for a lesson on creating a custom block type. We will be adding some methods to controller.php.

How to do it...

The steps for using block controller callback functions are as follows:

Open your block's controller.php file and add a new function called on_start():

```php
public function on_start() {

}
```

Write a die statement that will get fired when the controller is loaded:

```php
die('hello world');
```

Refresh any page containing the block type. The page should stop rendering before it completes, with your debug message shown. Be sure to remove the die statement afterwards, otherwise your block won't work anymore!

How it works...

concrete5 calls the various callback functions at different points during the page load process. The on_start() function is the first to get called. It is a good place to put things that you want to happen before the block is rendered.

The next function that gets called depends on how you are interacting with the block. If you are just viewing it on a page, the view() function gets called. If you are adding or editing the block, then the add() or edit() function is called as appropriate. These functions are a good place to send variables to the view, which we will show how to do in the next recipe. The save() and delete() functions also get called automatically at this point, if the block is performing either of those operations.

After that, concrete5 calls the on_before_render() function. This is a good time to add items to the page header and footer, since it runs before concrete5 renders the HTML for the page; we will be doing this later on in the article. Finally, the on_page_view() function is called. This is run once the page is being rendered, so it is the last place where you can have code executed in your block controller. It is helpful when adding HTML items to the page.

There's more...

The following functions can be added to your controller class and will get called automatically at different points throughout the block's loading process:

- on_start
- on_before_render
- view
- add
- edit
- on_page_view
- save
- delete

For a complete list of the callback functions available, check out the source of the block controller library, located in /concrete/core/libraries/block_controller.php.
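To tie the recipe together, here is a quick illustrative sketch (our own example, not from the recipe) of a controller wiring up several of these callbacks at once:

```php
class HelloWorldBlockController extends BlockController {

    protected $btTable = "btHelloWorld";

    public function on_start() {
        // Runs first, before the block is rendered.
    }

    public function view() {
        // Called when the block is viewed on a page; a good place to
        // set variables for view.php (see the next recipe).
        $this->set('greeting', 'Hello!');
    }

    public function on_page_view() {
        // Runs while the page renders; useful for adding header and
        // footer items, as shown later in this article.
    }
}
```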
Sending variables from the controller to the view

A common task in MVC programming is setting variables from a controller to a view. In concrete5, blocks follow the same principles. Fortunately, setting variables to the view is quite easy.

Getting ready

This recipe will use the block type that was created in the first recipe of this article. Feel free to adapt this code to work in any block controller, though.

How to do it...

In your block's controller, use the set() function of the controller class to send a variable and a value to the view. Note that the view doesn't necessarily have to be the view.php template of your block; you can send variables to add.php and edit.php as well. In this recipe, we will send a variable to view.php. The steps are as follows:

Open your block's controller.php file and add a function called view() if it doesn't already exist:

```php
public function view() {

}
```

Set a variable called name to the view:

```php
$this->set('name', 'John Doe');
```

Open view.php in your block's directory and output the value of the name variable:

```php
<div class="content">
    <?php echo $name ?>
</div>
```

Adding items to the page header and footer from the block controller

An important part of block development is being able to add JavaScript and CSS files to the page in the appropriate places. Consider a block that uses a jQuery plugin to create a slideshow widget: you will need to include the plugin's JavaScript and CSS files for it to work. In this recipe, we will add a CSS <link> tag to the page's <head> element, and a JavaScript <script> tag to the bottom of the page (just before the closing </body> tag).

Getting ready

This recipe will continue working with the block that was created in the first recipe of this article. If you need to download a copy of that block, it is included with the code samples from this book's website. This recipe also makes reference to a CSS file and a JavaScript file; those files are available for download in the code on this book's website as well.

How to do it...

The steps for adding items to the page header and footer from the block controller are as follows:

Open your block's controller.php file. Create a CSS file in /css called test.css, with a rule to change the background color of the site to black:

```css
body {
    background: #000 !important;
}
```

Create a JavaScript file in /js called test.js, containing an alert message:

```javascript
alert('Hello!');
```

In controller.php, create a new function called on_page_view():

```php
public function on_page_view() {

}
```

Load the HTML helper:

```php
$html = Loader::helper('html');
```

Add the CSS file to the page header:

```php
$this->addHeaderItem($html->css('test.css'));
```

Add the JavaScript file to the page footer:

```php
$this->addFooterItem($html->javascript('test.js'));
```

Visit a page on your site that contains this block. You should see your JavaScript alert as well as a black background.

How it works...

As mentioned in the Using block controller callback functions recipe, the ideal place to add items to the header (the page's <head> tag) and footer (just before the closing </body> tag) is the on_before_render() callback function. The addHeaderItem and addFooterItem functions are used to place strings of text in those positions of the web document. Rather than type out <script> and <link> tags in our PHP, we use the built-in HTML helper to generate those strings. The files should be located in the site's root /css and /js directories. Since it is typically best practice for CSS files to be loaded first and JavaScript files last, we place each of those items in the area of the page that makes the most sense.

Creating custom block templates

All blocks come with a default view template, view.php. concrete5 also supports alternative templates, which users can enable through the concrete5 interface. You can also enable these alternative templates through your own custom PHP code.

Getting ready

You will need a block type created and installed already. In this recipe, we are going to add a template to the block type that we created at the beginning of the article.

How to do it...

The steps for creating custom block templates are as follows:

Open your block's directory. Create a new directory in it called templates/, and create a file called no_title.php in templates/.
Add the following HTML code to no_title.php:

```php
<div class="content">
    <?php echo $content ?>
</div>
```

Activate the template by visiting a page that contains this block. Enter edit mode on the page and click on the block. Click on "Custom Template", choose "No Title", and save your changes.

There's more...

You can specify alternative templates right from the block controller, so you can automatically render a different template depending on certain settings, conditions, or just about anything you can think of. Simply use the render() function in a callback that gets called before the view is rendered:

```php
public function view() {
    $this->render('templates/no_title');
}
```

This will use the no_title.php file instead of view.php to render the block. Notice that adding the .php file extension is not required. Just like the block's regular view.php file, developers can include view.css and view.js files in their template directories to have those files automatically included on the page.

See also

- The Using block controller callback functions recipe
- The Creating a custom block type recipe

Including JavaScript in block forms

When adding or editing blocks, it is often desirable to include more advanced functionality in the form of client-side JavaScript. concrete5 makes it extremely easy to automatically add a JavaScript file to a block's editor form.

Getting ready

We will be working with the block that was created in the first recipe of this article. If you need to catch up, feel free to download the code from this book's website.

How to do it...

The steps for including JavaScript in block forms are as follows:

Open your block's directory and create a new file called auto.js, containing a basic alert function:

```javascript
alert('Hello!');
```

Visit a page that contains your block, enter edit mode, and edit the block. You should see your alert message appear, as shown in the following screenshot.

How it works...

concrete5 automatically looks for the auto.js file when it enters add or edit mode on a block. Developers can use this to their advantage to contain special client-side functionality for the block's edit mode.

Including JavaScript in the block view

In addition to being able to include JavaScript in the block's add and edit forms, developers can also automatically include a JavaScript file when the block is viewed on the frontend. In this recipe, we will create a simple JavaScript file that creates an alert whenever the block is viewed.

Getting ready

We will continue working with the block that was created in the first recipe of this article.

How to do it...

The steps for including JavaScript in the block view are as follows:

Open your block's directory and create a new file called view.js, containing an alert:

```javascript
alert('This is the view!');
```

Visit the page containing your block. You should see the new alert appear.

How it works...

Much like the auto.js file discussed in the previous recipe, concrete5 automatically includes the view.js file if it exists. This allows developers to easily embed jQuery plugins or other client-side logic into their blocks.

Including CSS in the block view

Developers and designers working on custom concrete5 block types can have a CSS file automatically included as well. In this recipe, we will automatically include a CSS file that changes our background to black.

Getting ready

We are still working with the block that was created earlier in the article. Please make sure that block exists, or adapt this recipe to suit your own concrete5 environment.

How to do it...
The steps for including CSS in the block view are as follows:

Open your block's directory and create a new file called view.css, if it doesn't exist, with a rule to change the background color of the site to black:

```css
body {
    background: #000 !important;
}
```

Visit the page containing your block. The background should now be black!

How it works...

Just as it does with JavaScript, concrete5 automatically includes view.css in the page's header if it exists in your block directory. This is a great way to save some time with styles that apply only to your block.

Loading a block type by its handle

Block types are objects in concrete5, just like most things. This means that they have IDs in the database as well as human-readable handles. In this recipe, we will load an instance of the block type that we created in the first recipe of this article.

Getting ready

We will need a place to run some arbitrary code; we will rely on /config/site_post.php once again for this. This recipe also assumes that a block with a handle of hello_world exists in your concrete5 site. Feel free to adjust that handle as needed.

How to do it...

The steps for loading a block type by its handle are as follows:

Open /config/site_post.php in your preferred code editor and define the handle of the block to load:

```php
$handle = 'hello_world';
```

Load the block by its handle:

```php
$block = BlockType::getByHandle($handle);
```

Dump the contents of the block to make sure it loaded correctly:

```php
print_r($block);
exit;
```

How it works...

concrete5 simply queries the database for you when a handle is provided. It then returns a BlockType object that contains several methods and properties useful in development.

Adding a block to a page

Users can use the intuitive concrete5 interface to add blocks to the various areas of pages on the website. You can also programmatically add blocks to pages using the concrete5 API.

Getting ready

The code in this recipe can be run anywhere you would like to create a block. To keep things simple, we are going to use the /config/site_post.php file to run some arbitrary code. This example assumes that a page with a path of /about exists on your concrete5 site; feel free to create that page, or adapt this recipe to suit your needs. It also assumes that /about has a content area called content. Again, adapt according to your own website's configuration. We will be using the block that was created at the beginning of this article.

How to do it...

The steps for adding a block to a page are as follows:

Open /config/site_post.php in your code editor and load the page that you would like to add a block to:

```php
$page = Page::getByPath('/about');
```

Load the block by its handle:

```php
$block = BlockType::getByHandle('hello_world');
```

Define the data that will be sent to the block:

```php
$data = array(
    'title' => 'An Exciting Title',
    'content' => 'This is the content!'
);
```

Add the block to the page's content area:

```php
$page->addBlock($block, 'content', $data);
```

How it works...

First you need to get the target page. In this recipe, we get it by its path, but you can use this function on any Page object. Next, we load the block type that we are adding; in this case, the one created earlier in the article. The block type handle is the same as the directory name for the block. We use the $data variable to pass in the block's configuration options. If there are no options, you will need to pass in an empty array, as concrete5 does not allow that parameter to be blank.
Finally, you will need to know the name of the content area; in this case, the content area is called "content".

Getting the blocks from an area

concrete5 pages can have several different areas where blocks can be added. Developers can programmatically get an array of all of the block objects in an area. In this recipe, we will load a page and get a list of all of the blocks in its main content area.

Getting ready

We will be using /config/site_post.php to run some arbitrary code here. You can place this code wherever you find appropriate, though. This example assumes the presence of a page with a path of /about and a content area called content; make the necessary adjustments in the code as needed.

How to do it...

The steps for getting the blocks from an area are as follows:

Open /config/site_post.php in your code editor and load the page by its path:

```php
$page = Page::getByPath('/about');
```

Get the array of blocks in the page's content area:

```php
$blocks = $page->getBlocks('content');
```

Loop through the array, printing each block's handle, then exit:

```php
foreach ($blocks as $block) {
    echo $block->getBlockTypeHandle().'<br />';
}
exit;
```

How it works...

concrete5 returns an array of block objects for every block contained within a content area. Developers can then loop through this array to manipulate or read the block objects.

Summary

This article discussed how to create custom block types and integrate blocks into your own website using concrete5's blocks.


Quick start – creating your first grid

Packt
31 Jul 2013
6 min read
So, let's do that right now for this example. Create a copy of the entire jqGrid folder named quickstart; easy as pie. Let's begin by creating the simplest grid possible. Open up the index.html file from the newly created quickstart folder and modify the body section to look like the following code:

```html
<body>
<table id="quickstart_grid"></table>
<script>
var dataArray = [
    {name: 'Bob', phone: '232-532-6268'},
    {name: 'Jeff', phone: '365-267-8325'}
];
$("#quickstart_grid").jqGrid({
    datatype: 'local',
    data: dataArray,
    colModel: [
        {name: 'name', label: 'Name'},
        {name: 'phone', label: 'Phone Number'}
    ]
});
</script>
</body>
```

The first element inside the body tags is a standard table element; this is what will be converted into our grid, and it's literally all the HTML needed to make one. The first few lines of the script just define some data. I didn't want to get into AJAX or opening other files, so I decided to just create a simple JavaScript array with two entries. The next line is where the magic happens: this single command is used to create and populate a grid using the information provided. We select the table element with jQuery and call the jqGrid function, passing it all the properties needed to make a grid. The first two options set the data along with its type; in our case the data is the array we made and the type is local, in contrast to some of the other datatypes, which use AJAX to retrieve remote data.

After that we set the columns; this is done with the colModel property. Now, there are a lot of options for the colModel property, which we will get to later on: numerous settings for customizing and manipulating the data in the table. But for this simple example, we are just specifying the name and label properties, which tell jqGrid the column's label for the header and the value's key in the data array.

Now, open index.html in your browser and you should see something like the following screenshot: not particularly pretty, but you can see that with just a few short lines we have created a grid and populated it with data that can be sorted. But we can do better. First off, we are only using two of the four standard layers we talked about: the header layer and the body layer. Let's add a caption layer to provide a little context, and let's adjust the size of the grid to fit our data. Modify the call to jqGrid as follows:

```javascript
$("#quickstart_grid").jqGrid({
    datatype: 'local',
    data: dataArray,
    colModel: [
        {name: 'name', label: 'Name'},
        {name: 'phone', label: 'Phone Number'}
    ],
    height: 'auto',
    caption: 'Users Grid'
});
```

Refresh your browser; your grid should now look like the following screenshot: that's looking much better. Now, you may be wondering: we only set the height property to auto, so how come the width seems to have snapped to the content? This is due to the fact that the right margin we saw earlier is actually a column for the scroll bar. By default, jqGrid sets your grid's height to 150 pixels; this means that, regardless of whether you have only one row or a thousand rows, the height remains the same, so that there is a gap to hold the scroll bar in the event that you have more rows than would fit in the given space. When we set the height to auto, the grid stretches vertically to contain all the items, making the scroll bar irrelevant, and therefore jqGrid knows not to place it.

Now this is a pretty good quick-start example, but to finish things off, let's take a look at the navigation layer, just so we can say we did.
For this next part we are going to need more data; I can't really show pagination with just two entries. Luckily there is a site, http://www.json-generator.com/, created by Vazha Omanashvili for doing exactly this. The way it works is that you specify the format and number of rows you want, and it generates them with random data. We are going to keep the format we have been using, of name and phone number, so in the box on the left, enter the following code:

['{{repeat(50)}}',
{
    name: '{{firstName}} {{lastName}}',
    phone: '{{phone}}'
}]

Here we're saying we would like 50 rows, each with a name field containing both a first name and a last name, and a phone number. A full tour of this site is outside the scope of this book, but if you would like to know about the other fields, you can click on the help button at the top. Click on Generate to produce a result object, and then just click on the copy to clipboard button. With that done, open your index.html file and replace the array for dataArray with what you copied. Your code should now look like the following:

var dataArray = {
    "id": 1,
    "jsonrpc": "2.0",
    "total": 50,
    "result": [ /* the 50 rows */ ]
};

As you can see, the actual rows are under a property named result, so we will need to change the data key in the call to jqGrid from dataArray to dataArray.result. Refreshing the page now, you will see the first 20 rows being displayed (that is the default limit). But how can we get to the rest? Well, jqGrid provides a special navigation layer named "pager", which contains a pagination control. To display it, we will need to create an HTML element for it. So, right underneath the table element, add a div like the following:

<table id="quickstart_grid"></table>
<div id="pager"></div>

Then, we just need to add two keys to the jqGrid call, one for the pager and one for the row limit:

    caption: 'Users Grid',
    height: 'auto',
    rowNum: 10,
    pager: '#pager'
});

And that's all there is to it; you can adjust the rowNum property to display more or fewer entries at once, and the pages will be calculated for you automatically.

Summary

In this article we learned how to create a simple grid, populate it with local data, and customize it using the grid's properties. We also saw how the different layers that make up a jqGrid fit together.

Resources for Article:

Further resources on this subject:
jQuery and Ajax Integration in Django [Article]
Implementing AJAX Grid using jQuery data grid plugin jqGrid [Article]
Django JavaScript Integration: jQuery In-place Editing Using Ajax [Article]


Accessing and using the RDF data in Stanbol

Packt
30 Jul 2013
6 min read
(For more resources related to this topic, see here.)

Getting ready

To start with, we need a Stanbol instance and Node.js. Additionally, we need the rdfstore-js library, which can be installed by executing the following command:

> npm install rdfstore

How to do it...

We create a file rdf-client.js with the following code:

var rdfstore = require('rdfstore');
var request = require('request');
var fs = require('fs');

rdfstore.create(function(store) {
    function load(files, callback) {
        var filesToLoad = files.length;
        files.forEach(function(file) {
            fs.createReadStream(file).pipe(
                request.post(
                    {
                        url: 'http://localhost:8080/enhancer?uri=file:///' + file,
                        headers: {accept: "text/turtle"}
                    },
                    function(error, response, body) {
                        if (!error && response.statusCode == 200) {
                            store.load(
                                "text/turtle",
                                body,
                                function(success, results) {
                                    console.log('loaded: ' + results +
                                        ' triples from file ' + file);
                                    if (--filesToLoad === 0) {
                                        callback();
                                    }
                                }
                            );
                        } else {
                            console.log('Got status code: ' + response.statusCode);
                        }
                    }));
        });
    }
    load(['testdata.txt', 'testdata2.txt'], function() {
        store.execute(
            "PREFIX enhancer:<http://fise.iks-project.eu/ontology/> " +
            "PREFIX rdfs:<http://www.w3.org/2000/01/rdf-schema#> " +
            "SELECT ?label ?source { " +
            "  ?a enhancer:extracted-from ?source. " +
            "  ?a enhancer:entity-reference ?e. " +
            "  ?e rdfs:label ?label. " +
            "  FILTER (lang(?label) = 'en') }",
            function(success, results) {
                if (success) {
                    console.log("*******************");
                    for (var i = 0; i < results.length; i++) {
                        console.log(results[i].label.value + " in " +
                            results[i].source.value);
                    }
                }
            });
    });
});

Create the data files: our client loads two files. We use a simple testdata.txt file having the content:

"The Stanbol enhancer can detect famous cities such as Paris and people such as Bob Marley."

And a second testdata2.txt file with the following content:

"Bob Marley never had a concert in Vatican City."

We execute the code using the Node.js command line:

> node rdf-client.js

The output is:

loaded: 159 triples from file testdata.txt
loaded: 140 triples from file testdata2.txt
*******************
Vatican City in file:///testdata2.txt
Bob Marley in file:///testdata2.txt
Bob Marley in file:///testdata.txt
Paris, Texas in file:///testdata.txt
Paris in file:///testdata.txt

This time we see the labels of the entities and the file in which they appear.

How it works…

Unlike the earlier clients, this client no longer analyzes the returned JavaScript Object Notation (JSON) but processes the returned data as RDF. An RDF document is a directed graph. The following screenshot shows some RDF rendered as a graph by the W3C RDF Validator. We can create such an image by running the engines on some text at localhost:8080/enhancer with RDF/XML selected as the output format, and then copying and pasting the generated XML into www.w3.org/RDF/Validator/, where we can request that triples and a graph be generated from it.

Triples are the other way to look at RDF. An RDF graph (or document) is a set of triples of the form subject-predicate-object, where subject and object are the nodes (vertices) and predicate is the arc (edge). Every triple is a statement describing a property of its subject:

<urn:enhancement-f488d7ce-a1b7-faa6-0582-0826854eab5e> <http://fise.iks-project.eu/ontology/entity-reference> <http://dbpedia.org/resource/Bob_Marley> .
<http://dbpedia.org/resource/Bob_Marley> <http://www.w3.org/2000/01/rdf-schema#label> "Bob Marley"@en .

These two triples say that an enhancement referenced Bob Marley and that the English label for Bob Marley is "Bob Marley".
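By the way, if you would like to inspect every triple the client has accumulated, rdfstore can hand back the whole default graph. The following minimal sketch assumes the store variable from rdf-client.js and would run inside the load callback, once both files have been loaded; it prints the graph in the N-TRIPLES syntax discussed next:

store.graph(function(success, graph) {
    if (success) {
        // toNT() serializes the retrieved graph as N-TRIPLES,
        // one triple per line
        console.log(graph.toNT());
    }
});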
All the arcs and most of the nodes are labeled by an Internationalized Resource Identifier (IRI), which defines a superset of the good old URLs including non-Latin characters. RDF can be serialized in many different formats. The two triples in the preceding command lines use the N-TRIPLES syntax. RDF/XML expresses (serializes) RDF graphs as XML documents. Originally, RDF/XML was referred to as the canonical serialization for RDF. Unfortunately, this caused some people to believe RDF would be somehow related to XML and thus inherit its flaws. A serialization format designed specifically for RDF, which doesn't encode RDF into an existing format, is Turtle. Turtle allows explicit listing of triples as in N-TRIPLES, but it also supports various ways of expressing the graphs in a more concise and readable fashion. JSON-LD expresses RDF graphs in JSON. As this specification is currently still a work in progress (see json-ld.org/), different implementations are incompatible; thus, for this example, we switched the Accept header to text/turtle.

Another change in the code performing the request is that we added a uri query parameter to the requested URL:

'http://localhost:8080/enhancer?uri=file:///' + file,

This defines the IRI used as the name for the uploaded content in the result graph. If this parameter is not specified, the enhancer will generate an IRI based on a hash of the content, and the corresponding line in the output would be less helpful:

Paris in urn:content-item-sha1-3b16820497aae806f289419d541c770bbf87a796

Roughly the first half of our code takes care of sending the files to Stanbol and storing the returned RDF. We define a function load that asynchronously enhances a bunch of files and invokes a callback function when all files have successfully been loaded. The second half of the code is the function that's executed once all files have been processed. At this point, we have all the triples loaded in the store. We could now programmatically access the triples one by one, but it's easier to just query for the data we're interested in. SPARQL is a query language, a bit similar to SQL, but designed to query triple stores rather than relational databases. In our program, we have the following query (slightly simplified here):

PREFIX enhancer:<http://fise.iks-project.eu/ontology/>
PREFIX rdfs:<http://www.w3.org/2000/01/rdf-schema#>
SELECT ?label ?source {
    ?a enhancer:extracted-from ?source.
    ?a enhancer:entity-reference ?e.
    ?e rdfs:label ?label.
}

The most important part is the section between the curly brackets. This is a graph pattern, which is like a graph but with some variables in place of values. On execution, the SPARQL engine will look for parts of the RDF matching this pattern and return a table with a column for each selected variable and a row for every matching combination of values. In our case, we iterate through the result and output the label of the entity and the document in which the entity was referenced.

There's more...

The advantage of RDF is that many tools can deal with the data, ranging from command line tools such as rapper (librdf.org/raptor/rapper.html) for converting data, to server applications that allow you to store large amounts of RDF data and build applications on top of it.

Summary

In this recipe, the advantage of using RDF (model-based) over the conventional JSON (syntax-based) approach was explained.
In the article, a client, rdf-client.js, was created; it loaded two files, testdata.txt and testdata2.txt, and was executed from the Node.js command line. The returned RDF was rendered as a graph using the W3C validator and examined in the form of triples. Finally, the triples were queried using SPARQL to extract the required information.

Resources for Article:

Further resources on this subject:
Installing and customizing Redmine [Article]
Web Services in Apache OFBiz [Article]
Geronimo Architecture: Part 2 [Article]

Digging into the Architecture

Packt
30 Jul 2013
31 min read
(For more resources related to this topic, see here.)

The big picture

A very short description of a WaveMaker application could be: a Spring MVC server running in a Java container, such as Tomcat, serving file and JSON requests for a Dojo Toolkit-based JavaScript browser client. Unfortunately, such "elevator" descriptions can create more questions than they answer. For starters, although we will often refer to it as "the server," the WaveMaker server might be more aptly called an application server in most architectures. Sure, it is possible to have a useful application without additional servers or services beyond the WaveMaker server, but this is not typical. We could have a rich user interface reading against some in-memory data set, for example. Far more commonly, the Java services running in the WaveMaker server are calling off to other servers or services, such as relational databases and RESTful web services. This means the WaveMaker server is often the middle or application tier server of a multi-tier application architecture.

Yet at the same time, the WaveMaker server can be eliminated completely. Applications can be packaged for uploading to PhoneGap Build, http://build.phonegap.com/, directly from WaveMaker Studio. Both PhoneGap and the associated Apache project Cordova, http://cordova.apache.org, provide APIs to enable JavaScript to access native device functionality, such as capturing images with the camera and obtaining GPS location information. Packaged up and installed as a native application, the JavaScript files are loaded from the device's file system instead of being downloaded from a server via HTTP. This means there is no origin domain to be constrained by. If the application only uses web services, or otherwise doesn't need additional services such as database access, the WaveMaker server is neither used nor needed. Just because an application isn't installed on a mobile device from an app store doesn't mean we can't run it on a mobile device. Browsers on mobile devices are more capable than ever before. This means our client could be any device with a modern browser.

You must also consider licensing in light of the bigger picture. WaveMaker, WaveMaker Studio, and the applications created with the Studio are released under the Apache 2.0 license, http://www.apache.org/licenses/LICENSE-2.0. The WaveMaker project was first released by WaveMaker Software in 2007. In March 2011, VMware (http://vmware.com) acquired the WaveMaker project. It was under VMware that WaveMaker 6.5 was released. In April 2013, Pramati Technologies (http://pramati.com) acquired the assets of WaveMaker for its CloudJee (http://cloudjee.com) platform. WaveMaker continues to be developed and released by Pramati Technologies.

Now that we understand where our client and server sit in the larger world, we will be primarily focused within and between those two parts. The overall picture of the client and server looks as shown in the following diagram:

We will examine each piece of this diagram in detail during the course of this book. We shall start with the JavaScript client.

Getting comfortable with the JavaScript client

The client is a JavaScript client that runs in a modern browser. This means that most of the client, specifically the HTML and DOM nodes that the browser interfaces with, is created by JavaScript at runtime. The application is styled using CSS, and we can use HTML in our applications. However, we don't use HTML to define buttons and forms.
Instead, we define components, such as widgets, and set their properties. These component class names and properties are used as arguments to functions that create DOM nodes for us.

Dojo Toolkit

To do this, WaveMaker uses the Dojo Toolkit, http://dojotoolkit.org/. Dojo, as it is generally referred to, is a modular, cross-browser JavaScript framework with three sections. Dojo Core provides the base toolkit. On top of it are Dojo's visual widgets, called Dijits. Finally, DojoX contains additional extensions such as charts and a color picker. DojoCampus' Dojo Explorer, http://dojocampus.com/explorer/, has a good selection of single-unit demos across the toolkit, many with source code. Dojo allows developers to define widgets using HTML or JavaScript. WaveMaker users will better recognize the JavaScript approach. Specifically, WaveMaker 6.5.X uses version 1.6.1 of Dojo. Of the browsers supported by Dojo 1.6.1, http://dojotoolkit.org/reference-guide/1.8/releasenotes/1.6.html, Opera's "Dojo Core only" support prevents it from being supported by WaveMaker. This could change with Opera's move to WebKit.

Building on top of the Dojo Toolkit, WaveMaker provides its own collection of widgets and underlying components. Although both can be called components, the name component is generally used for the non-visible parts, such as service calls to the server and the event notification system. Widgets, such as the Dijits, are visible components such as buttons and editors. Many, but not all, of the WaveMaker widgets extend functionality from Dojo widgets. When they do extend Dijits, WaveMaker widgets often add numerous functions and behaviors that are not part of Dojo. Examples include controlling the read-only state, formatting display values for currency, and merging components, such as buttons with icons in them. Combined with the WaveMaker runtime layers, these enhancements make it easy to assemble rich clients using only properties. WaveMaker's select editor (wm.SelectMenu), for example, extends the Dojo Toolkit ComboBox (dijit.form.ComboBox) or the FilteringSelect (dijit.form.FilteringSelect) as needed. By default, a select menu uses the Dojo FilteringSelect as its editor, but it will use ComboBox instead if the user is on a mobile device or the developer has cleared the RestrictValues property tick box.

A required select menu editor

Let's consider the case of disabling a submit button when the user has not made a required list selection. In Dojo, this is done using JavaScript code, and for an experienced Dojo developer, this is not difficult. For those who may primarily consider a dojo a martial arts studio, however, it is likely another matter altogether. Using the widgets provided by the WaveMaker framework, no code is required to set up this interconnection. It is simply a matter of visually linking, or binding, the button's disabled property to the list's emptySelection property in the graphical binding dialog. Now the button will be disabled if the user has not made a selection in the grid's list of items. Logically, we can think of this as setting the disabled property to the value of the grid's emptySelection property, where emptySelection is true unless and until a row has been selected.

Where WaveMaker most notably varies from the Dojo way of things is the layout engine. WaveMaker handles the layout of container widgets using its own engine. Containers are those widgets that contain other widgets, such as panels, tabs, and dialogs.
This makes it easier for developers to arrange widgets in WaveMaker Studio. A result of this is that border, padding, and margin are set using properties on widgets, not by CSS. Border, padding, and margin are widget properties in WaveMaker and are not controlled by CSS.

Dojo made easy

Having the Dojo framework available to us makes web development easier, both when using the WaveMaker framework and when doing custom work. Dojo's modular and object-oriented functions, such as dojo.declare and this.inherited, for example, simplify creating custom components. The key takeaway here is that Dojo itself is available to you as a developer if you wish to use it directly. Many developers never need to utilize this capability, but it is available to you if you ever do wish to take advantage of it. Running the CRM Simple sample again, from either the console in the browser development tools or custom project page code, we could use Dojo's byId() function to get a div, for example the main title label:

dojo.byId("main_labelTitle")

In practice, the WaveMaker style of getting a DOM node via the component name, for example main.labelTitle.domNode, is more practical and returns the same result. If a function or ability in Dojo is useful, the WaveMaker framework usually provides a wrapper of some sort for you. Just as often, the WaveMaker version is friendlier or otherwise easier to use in some way. For example, this.connect(), WaveMaker's version of dojo.connect(), tracks connections for you. This avoids the need for you to remember to call disconnect() to remove the reference added by every call to connect(). For more information about using Dojo functions in WaveMaker, see the Dojo Framework page in the WaveMaker documentation at http://dev.wavemaker.com/wiki/bin/wmdoc_6.5/Dojo+Framework.

Binding and events

Two solid examples of WaveMaker taking a powerful feature of Dojo and providing friendlier versions are topic notifications and event handling. dojo.connect() enables you to register a method to be called when something happens; in other words: "when X happens, please also do Y". Studio provides visual tooling for this in the events section of a component's properties. Buttons have an event drop-down menu for their click event. Asynchronous server call components, live variables, and service variables have tooled events for reviewing data just before the call is made and for the successful, and not so successful, returns from the call. These menus are populated with listings of likely components and, if appropriate, functions. Invoking other service calls, particularly when a server call depends on data from the results of some previous server call, and navigation calls to other layers and pages within the application are easy examples of how WaveMaker's visual tooling of dojo.connect simplifies web development.

WaveMaker's binding dialog is a graphical interface on the topic subscription system. Here we are "binding" a live variable that returns rows from the lineitem table to be filtered by the data value of the orderid editor in the form on the new order page:

The result of this binding is that when the value of the orderid editor changes, the value in the filter parameter of this live variable will be updated. An event indicating that the value of this orderid editor has changed is published when the data value changes. This live variable's filter subscribes to that topic and can now update its value accordingly.
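For comparison, here is roughly what that wiring could look like if we did it by hand in the page's JavaScript. This is only a sketch: the editor name orderid, the live variable name lineitemLiveVar, and the handler name orderidChanged are hypothetical names chosen for this example, and the exact event signature varies by editor type.

// Inside the page class; start() runs when the page loads.
start: function() {
    // this.connect() wraps dojo.connect() and tracks the connection,
    // so there is no disconnect() bookkeeping to remember.
    this.connect(this.orderid, "onchange", this, "orderidChanged");
},
orderidChanged: function(inDisplayValue, inDataValue) {
    // Push the new value into the live variable's filter and re-query;
    // the binding dialog gives us equivalent behavior with no code.
    this.lineitemLiveVar.filter.setValue("orderid", inDataValue);
    this.lineitemLiveVar.update();
}

The binding dialog produces the same effect declaratively, which is why typical WaveMaker projects contain very little code of this sort.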
Loading the client

Web applications start from index.html, and a WaveMaker application is no different. If we examine the index.html file of a WaveMaker application, we see the total content is less than 100 lines. We have some meta tags in the head, mostly for Internet Explorer (MSIE) and iOS support. In the body, there are more entries to help out with older versions of MSIE, including script tags to use Chrome Frame if we so choose. If we cut all that away, index.html is rather simple. In the head, we load the CSS containing the project's theme and define a few lines of style classes for wavemakerNode and _wm_loading:

<script>var wmThemeUrl = "/wavemaker/lib/wm/base/widget/themes/wm_default/theme.css";</script>
<style type="text/css">
#wavemakerNode {
    height: 100%;
    overflow: hidden;
    position: relative;
}
#_wm_loading {
    text-align: center;
    margin: 25% 0px 25% 0px;
}
</style>

Next we load the file config.js, which, as its name suggests, is about configuration. The following line of code is used to load the file:

<script type="text/javascript" src="config.js"></script>

Config.js defines the various settings, variables, and helper functions needed to initialize the application, such as the locale setting. Moving into the body tag of index.html, we find a div named wavemakerNode:

<div id="wavemakerNode">

The next div tag is the loader gif, which is given in the following code:

<div id="_wm_loading" style="z-index: 100;">
<table style='width:100%;height: 100%;'><tr><td align='center'><img alt="Loading" src="/wavemaker/lib/boot/images/loader.gif" />&nbsp;&nbsp;Loading...</td></tr></table>
</div>

This is the standard spinner shown while the application is loading. With the loader gif now spinning, we begin the real work with runtimeLoader.js, as given in the following line of code:

<script type="text/javascript" src="/wavemaker/lib/runtimeLoader.js"></script>

When running a project from Studio, the client runtime is loaded from Studio via its /wavemaker context. Config.js and index.html are modified for deployment, while the client runtime is copied into the application's webapproot. runtimeLoader, as its name suggests, loads the WaveMaker runtime. With the runtime loaded, we can now load the top level project.a.js file, which defines our application using the dojo.declare() method. The following line of code loads the file:

<script type="text/javascript" src="project.a.js"></script>

Finally, with our application class defined, we set up an instance of our application in the wavemakerNode and run it.

There are two modes for loading a WaveMaker application: debug and gzip mode. The debug mode is useful for debugging, as you would expect. The gzip mode is the default mode. The Test option of the Run, Test, or Compile button in Studio re-deploys the active project and opens it in debug mode. This is the only difference between using Test and Run in Studio: the Test button adds ?debug to the URL of the browser window; the Run button does not. Any WaveMaker application can be loaded in debug mode by adding debug to the URL parameters. For example, to load the CRM Simple application from within WaveMaker in debug mode, use the URL http://crm_simple.localhost:8094/?debug. Detecting debug in the URL sets the djConfig.debugBoot flag, which alters the path used in runtimeLoader:

djConfig.debugBoot = location.search.indexOf("debug") >= 0;

Like a compiled program, debug mode preserves variable names and all the other details that optimization removes and that we would want available when debugging.
However, JavaScript is not compiled into byte code or machine-specific instructions. In gzip mode, on the other hand, the browser loads a few optimized packages containing all the source code in merged files. This reduces the number of files needed to load our application, which significantly improves loading time. These optimized packages are also minified. Minification removes whitespace and replaces variable names with short names, further reducing the volume of code to be parsed by the browser and therefore further improving performance. The result is a significant reduction in both the number of requests needed and the number of bytes transferred to load an application. A stock application in gzip mode requires 22 to 24 requests to load some 300 KB to 400 KB of content, depending on the application. In debug mode, the same app transfers over 1.5 MB in more than 500 requests.

The index.html file, and when security is enabled, login.html, are yours to edit. If you are comfortable doing so, you can customize these files, such as adding additional script tags. In practice, you shouldn't need to customize index.html, as you have full control of the application loaded into the wavemakerNode. Also, upgrade scripts in future versions of WaveMaker may need to programmatically update index.html and login.html. Changes to the X-UA-Compatible meta tag are often required when support for newer versions of Internet Explorer becomes available, for example. These scripts can't possibly know about every customization you may make. Customization of index.html may cause these scripts to fail, and may require you to manually update these files. If you do encounter such a situation, simply use the index.html file from a project newly created in the new version as a template.

Springing into the server side

The WaveMaker server is a Java application running in a Java Virtual Machine (JVM). Like the client, it builds upon proven frameworks and libraries. In the case of the server, the foundational block is the Spring framework, http://www.springsource.org/, from SpringSource. The Spring framework is the most popular enterprise Java development framework today, and for good reason. The server of a WaveMaker application is a Spring application that includes the WaveMaker common, json, and runtime modules. More specifically, the WaveMaker server uses the Spring Web MVC framework to create a DispatcherServlet that delegates client requests to their handlers. WaveMaker uses only a handful of controllers, as we will see in the next section. The effective result is that it is the request URL that is used to direct a service call to the correct service. The method value of the request is the name of the client-exposed function within the service to be called. In the case of overloaded functions, the signature of the params value is used to find the method matching by signature. We will look at example requests and responses shortly. Behind this controller is not only the power of the Spring framework, but also a number of leading frameworks such as Hibernate and JAX-WS, and libraries such as log4j and Apache Commons. Here too, these libraries are available to you, both directly in any custom work you might do and indirectly as tooled features of Studio. As we are working with a Spring server, we will be seeing Spring beans often as we examine the server-side configuration. One need not be familiar with Spring to reap its benefits when using custom Java in WaveMaker.
Spring makes it easy to get access to other beans from our Java code. For example, if our project has imported a database as MyDB, we could get access to the service, and any exposed functions in that service, using getServiceBean(). The following code illustrates the use of getServiceBean():

MyDB myDbSvc = (MyDB) RuntimeAccess.getInstance().getServiceBean("mydb");

We start by getting an instance of the WaveMaker runtime. From the returned runtime instance, we use the getServiceBean() method to get a service bean for our mydb database service. There are other ways we could have got access to the service from our Java code; this one is pretty straightforward.

Starting from web.xml

Just as the client side starts with index.html, a Java servlet application starts in WEB-INF with web.xml. A WaveMaker application's web.xml is a rather straightforward Spring MVC web.xml. You'll notice many servlet mappings, a few listeners, and filters. Unlike index.html, web.xml is managed directly by Studio. If you need to add elements to the web-app context, add them to user-web.xml. The content of user-web.xml is merged into web.xml when generating the deployment package.

The most interesting entry is probably the contextConfigLocation of /WEB-INF/project-springapp.xml. Project-springapp.xml is a Spring beans file. Immediately after the schema declaration is a series of resource imports. These imports include the services and entities that we create in Studio as we import databases and otherwise add services to our project. If you open project-spring.xml in WEB-INF, near the top of the file you'll see a comment noting how project-spring.xml is yours to edit. For experienced Spring users, this is the entry point to add any additional imports you may need. An example of such can be found at http://dev.wavemaker.com/wiki/bin/Spring. In that example, an additional XML file, ServerFileProcessor.xml, is used to enable component scanning on a package and to set some properties on those components. Project-spring.xml is then used to import ServerFileProcessor.xml into the application context. Many users of WaveMaker still think of Spring as the season between winter and summer. Such users do not need to think about these XML files. However, for those who are experienced with Java, the full power of the Spring framework is accessible to them.

Also in project-springapp.xml is a list of URL mappings. These mappings specify request URLs that require handling by the file controller. Gzipped resources, for example, require the header Content-Encoding to be set to gzip. This informs the browser that the content is gzip encoded and must be uncompressed before being parsed.

There are a few names that use ag in the server. WaveMaker Software, the company, was formerly known as ActiveGrid, and had a previous web development tool by the same name. The use of ag and com.activegrid stems back to the project's roots, first put down when the company was still known as ActiveGrid.

Closing out web.xml is the Acegi filter mapping. Acegi is the security module used in WaveMaker 6.5. Even when security is not enabled in an application, the Acegi filter mapping is included in web.xml. When security is not enabled in the project, an empty project-security.xml is used.

Client and server communication

Now that we've examined the client and server, we need to better understand the communication between the two. WaveMaker almost exclusively uses the HTTP methods GET and POST.
In HTTP, GET is used, as you might suspect even without ever having heard of RFC 2616 (https://tools.ietf.org/html/rfc2616), to request, or get, a specific resource. Unless installed as a native application on a mobile device, a WaveMaker web application is loaded via GET methods. From index.html and runtimeLoader.js to the user-defined pages and any images used on those pages, the application itself is loaded into the browser using GET.

All service calls, whether database reads and writes or any other invocation of a Java service function, on the other hand, are POSTs. The URL of these POST requests is always the service name followed by .json. For example, calls to a Java service named userPrefSvc would always be to the URL /userPrefSvc.json. Inside the POST request's payload will be any required parameters, including the method of the service to be invoked. The response will be the response returned from that call. PUT methods are not possible because we cannot, nor do we want to, know all possible WaveMaker server calls at "design time", while the project files are open for writing in the Studio. This pattern avoids any URL length constraints, enabling lengthy datasets to be transferred, while freeing up the URL to pass parameters such as page state.

Let's take a look at an example. If you want to follow along in your browser's console, this is the third of three requests made when we select "Fog City Books" in the CRM Simple application, running the application with the console open. The following URL is the request URL:

http://crm_simple.localhost:8094/services/runtimeService.json

The following is the request payload:

{"params":["custpurchaseDB","com.custpurchasedb.data.Lineitem",null,{"properties":["id","item"],"filters":["id.orderid=9"],"matchMode":"start","ignoreCase":false},{"maxResults":500,"firstResult":0}],"method":"read","id":251422}

The response is as follows:

{"dataSetSize":2,"result":[{"id":{"itemid":2,"orderid":9},"item":{"itemid":2,"itemname":"Kidnapped","price":12.99},"quantity":2},{"id":{"itemid":10,"orderid":9},"item":{"itemid":10,"itemname":"Gravitys Rainbow","price":11.99},"quantity":1}]}

As we expect, the request URL is to a service (in this case named runtimeService), with the .json extension. The runtime service is the built-in WaveMaker service for reading and writing with the Hibernate (http://www.hibernate.org) data models generated by importing a database. The security service and the WaveMaker service are the other built-in services used at runtime. The security service is used for security functions such as getUserName() and logout(). Note this does not include login, which is handled by Acegi. The WaveMaker service has functions such as getServerTimeOffset(), used to adjust for time zones, and remoteRESTCall(), used to proxy some web service calls.

How the runtime service functions is easy to understand by observation. Inside the request payload we have, as the URL suggested, a JavaScript Object Notation (JSON) structure. JSON (http://www.json.org/) is a lightweight data-interchange format regularly used in AJAX applications. Dissecting our example request from the top, the structure enclosed in the outermost {} braces looks like the following:

{"params":[…],"method":"read","id":251422}

We have three top-level name-value pairs in our request object: params, method, and id.
The id is 251422, the method is read, and the params value is an array, as indicated by the [] brackets:

["custpurchaseDB","com.custpurchasedb.data.Lineitem",null,{},{}]

In our case, we have an array of five values. The first is the database service name, custpurchaseDB. Next we have the package and class name we will be reading from, not unlike the from clause of a SQL query. After which, we have a null and two objects. JSON is friendly to human reading, and we could continue to unwrap the two objects in this request in a similar fashion; we will return to them when we discuss database services. For now, let's check out the response. At the top level, we have dataSetSize, the number of results, and the array of the results:

{"dataSetSize":2,"result":[]}

Inside our result array we have two objects:

[{"id":{"itemid":2,"orderid":9},"item":{"itemid":2,"itemname":"Kidnapped","price":12.99},"quantity":2},{"id":{"itemid":10,"orderid":9},"item":{"itemid":10,"itemname":"Gravitys Rainbow","price":11.99},"quantity":1}]

Our first item has the compound key of itemid 2 with orderid 9. This is the item Kidnapped, a book costing $12.99. The other object in our result array also has the orderid 9, as we expect when reading line items of the selected order. This one is also a book, the item Gravity's Rainbow.

Types

To be more precise about the com.custpurchasedb.data.Lineitem parameter in our read request, it is actually the type name of the read request. WaveMaker projects define types, from primitive types such as Boolean to custom complex types such as Lineitem. In our runtime read example, com.custpurchasedb.data.Lineitem is both the package and class name of the imported Hibernate entity and the type name for the line item entity in the project. Maintaining type information enables WaveMaker to ease a number of development issues. As the client knows the structure of the data it is getting from the server, it knows how to display that data with minimal developer configuration, if any. At design time, Studio uses type information in many areas to help us correctly configure our application. For example, when we set up a grid, type information enables Studio to present us with a list of possible column choices for the grid's dataset type. Likewise, when we add a form to the canvas for a database insert, it is type information that Studio uses to fill the form with appropriate editors.

Lineitem is a project-wide type, as it is defined in the server side. In the process of compiling the project's Java service sources, WaveMaker defines system types for any type returned to the client in a client-facing function. To be added to the type system, a class must:

Be public
Define public getters and setters
Be returned by a client-exposed function
Have a service class that extends JavaServiceSuperClass or uses the @ExposeToClient annotation

WaveMaker 6.5.1 has a bug that prevents types from being generated as expected. Be certain to use 6.5.2 or a newer version to avoid this defect.

It is possible to create new project types by adding a Java service class to the project that only defines types. The following is an example that adds a new simple type called Record to the project. Our definition of Record consists of an integer ID and a string. Note that there are two classes here. MyCustomTypes is the service class containing a method returning the type Record. As we will not be calling it, the function getNewRecord() need not do anything other than return a Record. Creating a new default instance is an easy way to do this.
The class Record is defined as an inner class. An inner class is a class defined within another class. In our case, Record is defined within MyCustomTypes:

// Java service class MyCustomTypes
package com.myco.types;

import com.wavemaker.runtime.javaservice.JavaServiceSuperClass;
import com.wavemaker.runtime.service.annotations.ExposeToClient;

public class MyCustomTypes extends JavaServiceSuperClass {
    public Record getNewRecord() {
        return new Record();
    }

    // Inner class Record
    public class Record {
        private int id;
        private String name;

        public int getId() {
            return id;
        }

        public void setId(int id) {
            this.id = id;
        }

        public String getName() {
            return this.name;
        }

        public void setName(String name) {
            this.name = name;
        }
    }
}

To add the preceding code to our WaveMaker project, we would add a Java service to the project using the class name MyCustomTypes in the Package and Class Name editor of the New Java Service dialog. The preceding code extends JavaServiceSuperClass and uses the package com.myco.types.

A project can also have client-only types, created using the type definition option from the advanced section of the Studio insert menu. Type definitions are useful when we want to be able to pass structured data around within the client but will not be sending or receiving that type to or from the server. For example, we may want an application-scoped wm.Variable storing a collection of current record selection information. This would enable us to keep track of a number of state items across all pages. Communication with the server is likely to use only a few of those types at a time, so no such structure exists on the server side. Using wm.Variable enables us to bind each record ID without using code.

The insert type definition menu brings up the Type Definition Generator dialog. The generator takes JSON input and is pre-populated with a sample type. The sample type defines a person object, albeit an unusual one, with a name, an array of numbers for age, a Boolean (hasFoot), and a related person object, friend. Replace the sample type with your own JSON structure. Be certain to change the type name to something meaningful. After generating the type, you'll immediately see the newly minted type in type selectors, such as the type field of wm.Variable. Studio is pretty good at recognizing type changes. If for some reason Studio does not recognize a type change, the easiest thing to do is to get Studio to re-read the owning object. If a wm.Variable fails to show a newly added field of a type in its properties, change the type of the variable from the modified type to some other type and then back again.

Studio is also an application

One of the more complex WaveMaker applications is the Studio itself. That's right: Studio is an application built out of WaveMaker widgets, using the WaveMaker runtime and server. Being the large, complex application we use to build applications, it can sometimes be difficult to understand where the runtime ends and Studio begins. With that said, Studio remains a treasure trove of examples and ideas to explore. Let's open a finder, explorer, shell, or however you prefer to view the file system of a WaveMaker Studio installation, and look in the studio folder. If you've installed WaveMaker to c:\program files\WaveMaker\6.5.3.Release, the default on Windows, we're looking at c:\program files\WaveMaker\6.5.3.Release\studio. This is the webapproot of the Studio project:

For files, we've discussed index.html in loading the client. The type definition for the project types is types.js.
The types.js definition is how the client learns of the server's Java types. Moving on to the directories alphabetically, we start with the app folder. The app folder can be considered a large utility folder these days. The branding folder, http://dev.wavemaker.com/wiki/bin/wmdoc_6.5/Branding, is a sample of the branding feature, for when you want to easily re-brand applications for different customers. The build folder contains the optimized build files we discussed when loading our application in gzip mode; this build folder is for Studio itself. The images folder is, as we would hope, where images are kept. The content of the doc in jsdoc is pretty old; use jsref at the online wiki, http://dev.wavemaker.com/wiki/bin/wmjsref_6.5/WebHome, for a client API reference instead. The language folder contains the National Language Support (NLS) files used to localize Studio into other languages. In 6.5.X, there is a Japanese (ja) and a Spanish (es) directory in addition to the English (en) default, thanks to the efforts of the WaveMaker community and a corporate partner. For more on internationalizing applications with WaveMaker, navigate to http://dev.wavemaker.com/wiki/bin/wmdoc_6.5/Localization#HLocalizingtheClientLanguage.

The lib folder is very interesting, so let's wrap up this top level before we dig into that one. The META-INF folder contains artifacts from the WaveMaker Maven build process that probably should be removed for 6.5.2. The pages folder contains the page definitions for Studio's pages. These pages can be opened in Studio. They can also be a treasure trove of tips and tricks if you see something when using Studio that you don't know how to do in your application. Be careful, however, as some pages are old and use outdated classes or techniques. Other constructs are only used by Studio and aren't tooled. This means some pages use components that can only be created by code. The other major difference from a project's pages folder is that Studio page folders do not contain the same number of files; they do not have the optimized pageName.a.js file, for example. The services folder contains the Service Method Definition (SMD) files for Studio's services. These are summaries of a project's exposed services, one file per service, used at runtime by the client. Each callable function, its input parameters, and its return type are defined. Finally, there is WEB-INF, which we have discussed already when we examined web.xml. In Studio's case, replace project with studio in the file names. Also under WEB-INF, we have classes and lib. The classes folder contains Java class files and additional XML files; these files are on the classpath. WEB-INF\lib contains JAR files. Studio requires significantly more JAR files than the ones that are automatically added to projects created by Studio.

Now let's get back to the lib folder. Astute readers of our walk through index.html likely noticed the references to /wavemaker/lib in src tags for things such as runtimeLoader. You might have also noticed that this folder is not present in the project, and wondered how these tags could not fail. As a quick look at the URL of Studio running in a browser will demonstrate, /wavemaker is Studio's context. This means the JavaScript runtime is only copied in as part of generating the deployment package. The lib folder is loaded directly from Studio's context when you test run an application from Studio using the Run or Test button. RuntimeLoader.js we encountered following index.html, as it is the start of the loading of the client modules.
Manifest.js is an entry point into the loading process. Boot contains pre-initialization, such as the spinning loader image. Next we have another build folder. This one is used by applications and contains all possible build files. Not every JavaScript module is packaged up into an optimized build file. Some modules are so specific or rarely used that they are best loaded individually. Otherwise, if there's a build package available to applications, applications use it. Dojo lives in the dojo folder; I hope you don't find it surprising to find a dijit, dojo, and dojox folder in there. The folder github provides the library path github for JS Beautifier, http://jsbeautifier.org/. The images in the project images folder include a copy of Silk Icons, http://www.famfamfam.com/lab/icons/silk/, a great Creative Commons licensed PNG icon set. This brings us to wm; we definitely saved the most interesting folder for our last stop on this tour. In lib/wm, we have manifest.js, the top level of module loading when using debug mode in the runtime loader. In lib/wm/base is the top level of the WaveMaker module space used at runtime. This means in lib/wm/base we have the WaveMaker components and widgets folders. These two folders contain the classes most commonly used by WaveMaker developers doing any custom JavaScript work in a project. This also means we will be back in these folders again.

Summary

In this article, we reviewed the WaveMaker architecture. We started with some context of what we mean by "client" and "server" in the context of this book. We then proceeded to dig into the client and the server. We reviewed how both build upon leading frameworks, the Dojo Toolkit and the Spring framework in particular. We examined the running of an application from the network point of view and how the client and server communicate throughout. We dissected a JSON request to the runtime service and encountered project types. We also learned about both project and client type definitions. We ended by revisiting the file system. This time, however, we walked through a Studio installation; Studio is also a WaveMaker application. In the next article, we'll get comfortable with the Studio as a visual tool. We'll look at everything from the properties panels to the built-in source code editors.

Resources for Article:

Further resources on this subject:
Binding Web Services in ESB—Web Services Gateway [Article]
Service Oriented JBI: Invoking External Web Services from ServiceMix [Article]
Web Services in Apache OFBiz [Article]

Building Your First Zend Framework Application

Packt
26 Jul 2013
15 min read
(For more resources related to this topic, see here.)

Prerequisites

Before you get started with setting up your first ZF2 project, make sure that you have the following software installed and configured in your development environment:

PHP Command Line Interface
Git: Git is needed to check out source code from various github.com repositories
Composer: Composer is the dependency management tool used for managing PHP dependencies

The following commands will be useful for installing the necessary tools to set up a ZF2 project.

To install the PHP Command Line Interface:

$ sudo apt-get install php5-cli

To install Git:

$ sudo apt-get install git

To install Composer:

$ curl -s https://getcomposer.org/installer | php

ZendSkeletonApplication

ZendSkeletonApplication provides a sample skeleton application that can be used by developers as a starting point to get started with Zend Framework 2.0. The skeleton application makes use of ZF2 MVC, including a new module system. ZendSkeletonApplication can be downloaded from GitHub (https://github.com/zendframework/ZendSkeletonApplication).

Time for action – creating a Zend Framework project

To set up a new Zend Framework project, we will need to download the latest version of ZendSkeletonApplication and set up a virtual host to point to the newly created Zend Framework project. The steps are given as follows:

Navigate to a folder location where you want to set up the new Zend Framework project:

$ cd /var/www/

Clone the ZendSkeletonApplication app from GitHub:

$ git clone git://github.com/zendframework/ZendSkeletonApplication.git CommunicationApp

In some Linux configurations, the necessary permissions may not be available to the current user for writing to /var/www. In such cases, you can use any folder that is writable and make the necessary changes to the virtual host configuration.

Install dependencies using Composer:

$ cd CommunicationApp/
$ php composer.phar self-update
$ php composer.phar install

The following screenshot shows how Composer downloads and installs the necessary dependencies:

Before adding a virtual host entry, we need to set up a hostname entry in our hosts file so that the system points to the local machine whenever the new hostname is used. In Linux this can be done by adding an entry to the /etc/hosts file:

$ sudo vim /etc/hosts

In Windows, this file can be accessed at %SystemRoot%\system32\drivers\etc\hosts. Add the following line to the hosts file:

127.0.0.1 comm-app.local

The final hosts file should look like the following:

Our next step would be to add a virtual host entry on our web server; this can be done by creating a new virtual host configuration file:

$ sudo vim /usr/local/zend/etc/sites.d/vhost_comm-app-80.conf

This new virtual host filename could be different for you depending upon the web server that you use; please check your web server documentation for setting up new virtual hosts. For example, if you have Apache2 running on Linux, you will need to create the new virtual host file in /etc/apache2/sites-available and enable the site using the command a2ensite comm-app.local.
Add the following configuration to the virtual host file:

<VirtualHost *:80>
    ServerName comm-app.local
    DocumentRoot /var/www/CommunicationApp/public
    SetEnv APPLICATION_ENV "development"
    <Directory /var/www/CommunicationApp/public>
        DirectoryIndex index.php
        AllowOverride All
        Order allow,deny
        Allow from all
    </Directory>
</VirtualHost>

If you are using a different path for checking out the ZendSkeletonApplication project, make sure that you include that path in both the DocumentRoot and Directory directives. After configuring the virtual host file, the web server needs to be restarted:

$ sudo service zend-server restart

Once the installation is completed, you should be able to open http://comm-app.local in your web browser. This should take you to the following test page:

Test rewrite rules

In some cases, mod_rewrite may not be enabled in your web server by default. To check if the URL redirects are working properly, try to navigate to an invalid URL such as http://comm-app.local/12345. If you get an Apache 404 page, then the .htaccess rewrite rules are not working and will need to be fixed; otherwise, if you get a page like the following one, you can be sure of the URL working as expected.

What just happened?

We have successfully created a new ZF2 project by checking out ZendSkeletonApplication from GitHub and have used Composer to download the necessary dependencies, including Zend Framework 2.0. We have also created a virtual host configuration that points to the project's public folder and tested the project in a web browser.

Alternate installation options

We have seen just one of the methods of installing ZendSkeletonApplication; there are other ways of doing this. You can use Composer to directly download the skeleton application and create the project using the following command:

$ php composer.phar create-project --repository-url="http://packages.zendframework.com" zendframework/skeleton-application path/to/install

You can also use a recursive Git clone to create the same project:

$ git clone git://github.com/zendframework/ZendSkeletonApplication.git --recursive

Refer to http://framework.zend.com/downloads/skeleton-app.

Zend Framework 2.0 – modules

In Zend Framework, a module can be defined as a unit of software that is portable and reusable and can be interconnected with other modules to construct a larger, more complex application. Modules are not new in Zend Framework, but with ZF2 there is a complete overhaul in the way modules are used. With ZF2, modules can be shared across various systems, and they can be repackaged and distributed with relative ease. One of the other major changes coming into ZF2 is that even the main application is now converted into a module; that is, the application module. Some of the key advantages of Zend Framework 2.0 modules are listed as follows:

Self-contained, portable, reusable
Dependency management
Lightweight and fast
Support for Phar packaging and Pyrus distribution

Zend Framework 2.0 – project folder structure

The folder layout of a ZF2 project is shown as follows:

config – Used for managing application configuration.
data – Used as a temporary storage location for storing application data, including cache files, session files, logs, and indexes.
module – Used to manage all application code.
module/Application – The default application module that is provided with ZendSkeletonApplication.
public – The web server's document root; it holds all files that must be directly accessible by the browser, such as index.php and assets like CSS, JavaScript, and images.
vendor – Used to manage common libraries that are used by the application. Zend Framework is also installed in this folder.
vendor/zendframework – Zend Framework 2.0 is installed here.

Time for action – creating a module

Our next activity will be about creating a new Users module in Zend Framework 2.0. The Users module will be used for managing users, including user registration, authentication, and so on. We will be making use of the ZendSkeletonModule provided by Zend, as follows:

Navigate to the application's module folder:

$ cd /var/www/CommunicationApp/
$ cd module/

Clone ZendSkeletonModule into the desired module name, in this case Users:

$ git clone git://github.com/zendframework/ZendSkeletonModule.git Users

After the checkout is complete, the folder structure should look like the following screenshot:

Edit Module.php; this file will be located in the Users folder under module (CommunicationApp/module/Users/Module.php). Change the namespace to Users; that is, replace namespace ZendSkeletonModule; with namespace Users;.

The following folders can be removed because we will not be using them in our project:

Users/src/ZendSkeletonModule
Users/view/zend-skeleton-module

What just happened?

We have installed a skeleton module for Zend Framework; this is just an empty module, and we will need to extend it by creating custom controllers and views. In our next activity, we will focus on creating new controllers and views for this module.

Creating a module using ZFTool

ZFTool is a utility for managing Zend Framework applications/projects, and it can also be used for creating new modules. In order to do that, you will need to install ZFTool and use the create module command to create the module:

$ php composer.phar require zendframework/zftool:dev-master
$ cd vendor/zendframework/zftool/
$ php zf.php create module Users2 /var/www/CommunicationApp

Read more about ZFTool at the following link: http://framework.zend.com/manual/2.0/en/modules/zendtool.introduction.html

MVC layer

The fundamental goal of any MVC framework is to enable easier segregation of the three layers of MVC: model, view, and controller. Before we get into the details of creating modules, let's quickly try to understand how these three layers work in an MVC framework:

Model: The model is a representation of data; the model also holds the business logic for various application transactions.
View: The view contains the display logic that is used to display the various user interface elements in the web browser.
Controller: The controller controls the application logic in any MVC application; all actions and events are handled at the controller layer. The controller layer serves as a communication interface between the model and the view by controlling the model state and also by representing the changes to the view. The controller also provides an entry point for accessing the application.

In the new ZF2 MVC structure, all models, views, and controllers are grouped by modules. Each module will have its own set of models, views, and controllers, and will share some components with other modules.

Zend Framework module – folder structure

The folder structure of a Zend Framework 2.0 module has three vital components: the configurations, the module logic, and the views.
The following table describes how contents in a module are organized:

- config: Used for managing module configuration
- src: Contains all module source code, including all controllers and models
- view: Used to store all the views used in the module

Time for action – creating controllers and views

Now that we have created the module, our next step is to define our own controllers and views. In this section, we will create two simple views and write a controller to switch between them:

Navigate to the module location:

$ cd /var/www/CommunicationApp/module/Users

Create the folder for controllers:

$ mkdir -p src/Users/Controller/

Create a new IndexController file under <ModuleName>/src/<ModuleName>/Controller/:

$ cd src/Users/Controller/
$ vim IndexController.php

Add the following code to the IndexController file:

<?php
namespace Users\Controller;

use Zend\Mvc\Controller\AbstractActionController;
use Zend\View\Model\ViewModel;

class IndexController extends AbstractActionController
{
    public function indexAction()
    {
        $view = new ViewModel();
        return $view;
    }

    public function registerAction()
    {
        $view = new ViewModel();
        $view->setTemplate('users/index/new-user');
        return $view;
    }

    public function loginAction()
    {
        $view = new ViewModel();
        $view->setTemplate('users/index/login');
        return $view;
    }
}

The preceding code performs the following actions: if the user visits the home page, the user is shown the default view; if the user arrives with the action register, the user is shown the new-user template; and if the user arrives with the action set to login, the login template is rendered. Now that we have created the controller, we will have to create the necessary views to render for each of the controller actions.

Create the folder for views:

$ cd /var/www/CommunicationApp/module/Users
$ mkdir -p view/users/index/

Navigate to the views folder, <Module>/view/<module-name>/index:

$ cd view/users/index/

Create the following view files:

- index.phtml
- login.phtml
- new-user.phtml

For the view/users/index/index.phtml file, use the following code:

<h1>Welcome to Users Module</h1>
<a href="/users/index/login">Login</a> | <a href="/users/index/register">New User Registration</a>

For the view/users/index/login.phtml file, use the following code:

<h2> Login </h2>
<p> This page will hold the content for the login form </p>
<a href="/users"><< Back to Home</a>

For the view/users/index/new-user.phtml file, use the following code:

<h2> New User Registration </h2>
<p> This page will hold the content for the registration form </p>
<a href="/users"><< Back to Home</a>

What just happened?

We have now created a new controller and views for our new Zend Framework module, but the module is still not in a shape to be tested. To make the module fully functional, we will need to make changes to the module's configuration and also enable the module in the application's configuration.

Zend Framework module – configuration

Zend Framework 2.0 module configuration is spread across a series of files, which can be found in the skeleton module. Some of the configuration files are described as follows:

- Module.php: The Zend Framework 2 module manager looks for the Module.php file in the module's root folder. The module manager uses the Module.php file to configure the module and invokes the getAutoloaderConfig() and getConfig() methods.
- autoload_classmap.php: The getAutoloaderConfig() method in the skeleton module loads autoload_classmap.php to include any custom overrides other than the classes loaded using the standard autoloader format. Entries can be added to or removed from the autoload_classmap.php file to manage these custom overrides.
- config/module.config.php: The getConfig() method loads config/module.config.php; this file is used for configuring various module options, including routes, controllers, layouts, and various other configurations.

Time for action – modifying module configuration

In this section, we will make configuration changes to the Users module to enable it to work with the newly created controller and views, using the following steps:

Autoloader configuration – The default autoloader configuration provided by ZendSkeletonModule needs to be disabled; this can be done by editing autoload_classmap.php and replacing its contents with the following:

<?php
return array();

Module configuration – The module configuration file can be found in config/module.config.php; this file needs to be updated to reflect the new controllers and views that have been created, as follows:

Controllers – The default controller mapping points to ZendSkeletonModule; this needs to be replaced with the mapping shown in the following snippet:

'controllers' => array(
    'invokables' => array(
        'Users\Controller\Index' => 'Users\Controller\IndexController',
    ),
),

Views – The views for the module have to be mapped to the appropriate view location. Make sure that the view uses lowercase names separated by a hyphen (for example, ZendSkeleton will be referred to as zend-skeleton):

'view_manager' => array(
    'template_path_stack' => array(
        'users' => __DIR__ . '/../view',
    ),
),

Routes – The last module configuration is to define a route for accessing this module from the browser; in this case, we are defining the route as /users, which will point to the index action in the Index controller of the Users module:

'router' => array(
    'routes' => array(
        'users' => array(
            'type' => 'Literal',
            'options' => array(
                'route' => '/users',
                'defaults' => array(
                    '__NAMESPACE__' => 'Users\Controller',
                    'controller' => 'Index',
                    'action' => 'index',
                ),
            ),

After making all the configuration changes detailed in the previous sections, the final configuration file, config/module.config.php, should look like the following:

<?php
return array(
    'controllers' => array(
        'invokables' => array(
            'Users\Controller\Index' => 'Users\Controller\IndexController',
        ),
    ),
    'router' => array(
        'routes' => array(
            'users' => array(
                'type' => 'Literal',
                'options' => array(
                    // Change this to something specific to your module
                    'route' => '/users',
                    'defaults' => array(
                        // Change this value to reflect the namespace in which
                        // the controllers for your module are found
                        '__NAMESPACE__' => 'Users\Controller',
                        'controller' => 'Index',
                        'action' => 'index',
                    ),
                ),
                'may_terminate' => true,
                'child_routes' => array(
                    // This route is a sane default when developing a module;
                    // as you solidify the routes for your module, however,
                    // you may want to remove it and replace it with more
                    // specific routes.
                    'default' => array(
                        'type' => 'Segment',
                        'options' => array(
                            'route' => '/[:controller[/:action]]',
                            'constraints' => array(
                                'controller' => '[a-zA-Z][a-zA-Z0-9_-]*',
                                'action' => '[a-zA-Z][a-zA-Z0-9_-]*',
                            ),
                            'defaults' => array(
                            ),
                        ),
                    ),
                ),
            ),
        ),
    ),
    'view_manager' => array(
        'template_path_stack' => array(
            'users' => __DIR__ . '/../view',
        ),
    ),
);

Application configuration – Enable the module in the application's configuration; this can be done by modifying the application's config/application.config.php file and adding Users to the list of enabled modules:

'modules' => array(
    'Application',
    'Users',
),

To test the module in a web browser, open http://comm-app.local/users/ in your web browser; you should be able to navigate within the module. The module home page is shown as follows:

The registration page is shown as follows:

What just happened?

We have modified the configuration of ZendSkeletonModule to work with the new controller and views created for the Users module. Now we have a fully functional module up and running using the new ZF module system.

Have a go hero

Now that we have the knowledge to create and configure our own modules, your next task is to set up a new CurrentTime module. The requirement for this module is to render the current time and date in the following format (a minimal controller sketch for this task appears after the resources list below):

Time: 14:00:00 GMT
Date: 12-Oct-2012

Summary

We have now learned about setting up a new Zend Framework project using Zend's skeleton application and module. In the next chapters, we will focus on further developing this module and extending it into a fully fledged application.

Resources for Article:

Further resources on this subject:
- Magento's Architecture: Part 2 [Article]
- Authentication with Zend_Auth in Zend Framework 1.8 [Article]
- Authorization with Zend_Acl in Zend Framework 1.8 [Article]
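As a starting point for the Have a go hero task above, here is a minimal sketch of what the CurrentTime module's controller might look like. This is an illustration only: the module name, file path, and view wiring are assumptions that would follow the same steps we used for the Users module.

<?php
// module/CurrentTime/src/CurrentTime/Controller/IndexController.php
// (hypothetical path, mirroring the Users module layout)
namespace CurrentTime\Controller;

use Zend\Mvc\Controller\AbstractActionController;
use Zend\View\Model\ViewModel;

class IndexController extends AbstractActionController
{
    public function indexAction()
    {
        // Build the two strings in the requested format,
        // for example "14:00:00 GMT" and "12-Oct-2012"
        $now = new \DateTime('now', new \DateTimeZone('GMT'));

        return new ViewModel(array(
            'time' => $now->format('H:i:s') . ' GMT',
            'date' => $now->format('d-M-Y'),
        ));
    }
}

The matching view script (view/current-time/index/index.phtml) could then simply echo the two values:

Time: <?php echo $this->time; ?>
Date: <?php echo $this->date; ?>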
Creating your first FreeMarker Template

Packt
26 Jul 2013
10 min read
(For more resources related to this topic, see here.)

Step 1 – setting up your development directory

If you haven't done so, create a directory to work in. I'm going to keep this as simple as possible, so we won't need a complicated directory structure; everything can be done in one directory. Put the freemarker.jar in the directory. All future talk about files and running from the command line will refer to your working directory. If you want to, you can set up a more advanced project-like set of directories.

Step 2 – writing your first template

This is a quick start, so let's just dive in and write the template. Open a file for editing called hello.ftl. The ftl extension is customary for FreeMarker Template Language files, but you are free to name your template files anything you want. Put this line in your file:

Hello, ${name}!

FreeMarker will replace the ${name} expression with the value of an element called name in the model. FreeMarker calls this an interpolation. I prefer to refer to this as "evaluating an expression", but you will encounter the term interpolation in the documentation. Everything else you have put in this initial template is static text. If name contained the value World, then this template would evaluate to:

Hello, World!

Step 3 – writing the Java code

Templates are not scripts that can be run, so we need to write some Java code to invoke the FreeMarker engine and combine the template with a populated model. Here is that code:

import java.io.*;
import java.util.*;
import freemarker.template.*;

public class HelloFreemarker {
    public static void main(String[] args) throws IOException, TemplateException {
        Configuration cfg = new Configuration();
        cfg.setObjectWrapper(new DefaultObjectWrapper());
        cfg.setDirectoryForTemplateLoading(new File("."));

        Map<String, Object> model = new HashMap<String, Object>();
        model.put("name", "World");

        Template template = cfg.getTemplate("hello.ftl");
        template.process(model, new OutputStreamWriter(System.out));
    }
}

The highlighted line says that FreeMarker should look for FTL files in the "working directory" where the program is run as a simple Java application. If you set your project up differently, or run in an IDE, you may need to change this to an absolute path.

The first thing we do is create a FreeMarker freemarker.template.Configuration object. This acts as a factory for freemarker.template.Template objects. FreeMarker has its own internal object types that it uses to extract values from the model. In order to use the objects that you supply, it must wrap these in its own native types. The job of doing this is done by an object wrapper. You must provide an object wrapper, and it will always be FreeMarker's own freemarker.template.DefaultObjectWrapper unless you have special object wrapping requirements. Finally, we set the root directory for loading templates. For the purposes of our sample code, everything is in the same directory, so we just set it to ".". Setting the template directory can throw a java.lang.IOException in this code; we simply allow that to be thrown out of the method.

Next, we create our model, which is a simple map of java.lang.String keys to java.lang.Object values. The values can be simple object types such as String or java.lang.Number, or they can be complex object types, including arrays and collections. Our needs are simple here, so we're going to map "name" to the string "World". The next step is to get a Template object. We ask the Configuration instance to load the template into a Template object.
This can also throw an IOException. The magic finally happens when we ask the Template instance to process the model and create an output. We already have the model, but where does the output go? For this, we need an implementation of java.io.Writer. For convenience, we are going to wrap the java.io.PrintWriter in java.lang.System.out with a java.io.OutputStreamWriter and give that to the template. After compiling this program, we can run it from the command line:

java -cp .;freemarker.jar HelloFreemarker

For Linux or OS X, you would use a ":" instead of a ";" in the command:

java -cp .:freemarker.jar HelloFreemarker

The result should be that the program prints out:

Hello, World!

Step 4 – moving beyond strings

If you plan to create simple templates populated with preformatted text, then you now know all you need to know about FreeMarker. Chances are that you will need more, so let's take a look at how FreeMarker handles formatting other types and complex objects. Let's try binding the "name" object in our model to some other types of objects. We can replace:

model.put("name", "World");

with:

model.put("name", 123456789);

The output format of the program will depend on the default locale, so if you are in the United States, you will see this:

Hello, 123,456,789!

If your default locale was set to Germany, you would see this:

Hello, 123.456.789!

FreeMarker does not call the toString() method on instances of Number types; it employs java.text.DecimalFormat. Unless you want to pass all of your values to FreeMarker as preformatted strings, you are going to need to understand how to control the way FreeMarker converts values to text. If preformatting all of the items in your model sounds like a good idea, it isn't: moving "view" logic into your "controller" code is a sure-fire way to make updating the appearance of your site into a painful experience.

Step 5 – formatting different types

In the previous section, we saw how FreeMarker will choose a default method of formatting numbers. One of the features of this method is that it employs grouping separators: a comma or a period every three digits. It may also use a comma rather than a period to denote the decimal portion of the number. This is great for humans who may expect these formatting details, but if your number is destined to be parsed by a computer, it needs to be free of grouping separators and it must use a period as a decimal point. In this case, you need a way to control how FreeMarker decides to format a number. In order to control exactly how model objects are converted to text, FreeMarker provides operators called built-ins. Let's create a new template called types.ftl and put in some expressions that use built-ins to control formatting:

String: ${string?html}
Number: ${number?c}
Boolean: ${boolean?string("+++++", "-----")}
Date: ${.now?time}
Complex: ${object}

The value .now is a special variable that is automatically provided by FreeMarker; it contains the date and time when the template processing began. There are other special variables, but this is the only one you're likely to use. This template is a little more complicated than the last template. The "?" at the end of a variable name denotes the use of a built-in. Before we explore these particular built-ins, let's see them in action.
Create a Java program, FreemarkerTypes, which populates a model with values for our new template:

import java.io.*;
import java.math.BigDecimal;
import java.util.*;
import freemarker.template.*;

public class FreemarkerTypes {
    public static void main(String[] args) throws IOException, TemplateException {
        Configuration cfg = new Configuration();
        cfg.setObjectWrapper(new DefaultObjectWrapper());
        cfg.setDirectoryForTemplateLoading(new File("."));

        Map<String, Object> model = new HashMap<String, Object>();
        model.put("string", "easy & fast ");
        model.put("number", new BigDecimal("1234.5678"));
        model.put("boolean", true);
        model.put("object", Locale.US);

        Template template = cfg.getTemplate("types.ftl");
        template.process(model, new OutputStreamWriter(System.out));
    }
}

Run the FreemarkerTypes program the same way you ran HelloFreemarker. You will see this output:

String: easy &amp; fast
Number: 1234.5678
Boolean: +++++
Date: 9:12:33 AM
Complex: en_US

Let's walk through the template and see how the built-ins affected the output. Our purpose is to get a solid foundation in the basics; we'll look at more details about how to use FreeMarker features in later articles.

First, we output a String modified with the html built-in. This encoded the string for HTML, turning the & into the &amp; HTML entity. You will want this applied to a lot of your expressions on HTML pages in order to ensure proper display of your text and to prevent cross-site scripting (XSS) attacks.

The second line outputs a number with the c built-in. This tells FreeMarker that the number should be written for parsing by computers. As we saw in the previous section, FreeMarker will by default format numbers with grouping separators. It will also localize the decimal point, using a comma instead of a period. This is great when you are displaying numbers to humans, but not computers. If you want to put an ID number in a URL or a price in an XML document, you will want to use this built-in to format it.

Next, we format a Boolean. It may surprise you to learn that unless you use the string built-in, FreeMarker will not format a Boolean value at all; in fact, it throws an exception. Conceptually, "true" and "false" have no universal text representation. If you use string with no arguments, the interpolation will evaluate to either "true" or "false", but this is a default you can change. Here, we have told the built-in to use a series of + characters for "true" and a series of - characters for "false".

Another type which FreeMarker will not process without a built-in is java.util.Date. The main issue here is that FreeMarker doesn't know whether you want to display a date, a time, or both. By specifying the time built-in, we are letting FreeMarker know that we want to display a time. The output shown previously was generated shortly past nine o'clock in the morning.

Finally, we see a complex object converted to text with no built-ins. Complex objects are turned into text by calling their toString() method, so you can use string built-ins on them.

Step 6 – where do we go from here?

We've reached the end of the Quick start section. You've created two simple templates and worked with some of the basic features of FreeMarker. You might be wondering what the other built-ins are, or what options they offer. In the upcoming sections we'll look at these options and also at ways to change the default behavior. Another issue we've glossed over is errors.
Once you have applied some of these built-ins, you must make sure that you supply the correct types for the named model elements. We also haven't looked at what happens when a referenced model element is missing (a brief template sketch for handling this appears after the resources list below). The FreeMarker manual provides an excellent reference for all of this. Rather than trying to find your way around on your own, we'll take a guided tour through the important features in the Top Features section of the article.

Quick start versus slow start

A key difference between the Quick start and Top Features sections is that we'll be starting with the sample output. In this article, we created templates and evaluated them to see what we would get. In a real-world project, you will get better results if you work backwards from the desired result. In many cases, you won't have a choice: the sample output will be generated by web designers, and you will be expected to produce the same HTML with dynamic content. In other cases, you will need to work from mock-ups and decide the HTML for yourself. In these cases, it is still worth creating a static sample document. These static samples will show you where you need to apply some of the techniques.

Summary

In this article, we discussed how to create a FreeMarker template.

Resources for Article:

Further resources on this subject:
- Getting Started with the Alfresco Records Management Module [Article]
- Installing Alfresco Software Development Kit (SDK) [Article]
- Apache Felix Gogo [Article]
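As promised above, here is a small taste of how missing model elements can be handled directly in a template. FreeMarker's ! operator supplies a default value and its ?? operator tests whether a value exists; the variable names below are examples, not part of the earlier templates:

Hello, ${name!"guest"}!
<#if number??>
Number: ${number?c}
</#if>

If name is absent from the model, the first line prints "Hello, guest!" instead of throwing an error, and the Number line is only rendered when number is actually present.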
Events

Packt
25 Jul 2013
2 min read
(For more resources related to this topic, see here.)

What is a payload?

The payload of an event, the event object, carries any necessary state from the producer to the consumer and is nothing but an instance of a Java class. An event object may not contain a type variable, such as <T>.

We can assign qualifiers to an event and thus distinguish an event from other events of an event object type. These qualifiers act like selectors, narrowing the set of events that will be observed for an event object type. There is no distinction between a qualifier of a bean type and that of an event, as they are both defined with @Qualifier. This commonality provides a distinct advantage when using qualifiers to distinguish between bean types, as those same qualifiers can be used to distinguish between events where those bean types are the event objects. An event qualifier is shown here:

@Qualifier
@Target( { FIELD, PARAMETER } )
@Retention( RUNTIME )
public @interface Removed {}

How do I listen for an event?

An event is consumed by an observer method, and we inform Weld that our method is used to observe an event by annotating a parameter of the method, the event parameter, with @Observes. The type of the event parameter is the event type we want to observe, and we may specify qualifiers on the event parameter to narrow what events we want to observe. We may have an observer method for all events produced about a Book event type, as follows:

public void onBookEvent(@Observes Book book) { ... }

Or we may choose to only observe when a Book is removed, as follows:

public void onBookRemoval(@Observes @Removed Book book) { ... }

Any additional parameters on an observer method are treated as injection points. An observer method will receive an event to consume if:

- The observer method is present on a bean that is enabled within our application
- The event object is assignable to the event parameter type of the observer method
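The observer side is only half of the picture, so here is a minimal sketch of the producer side using the standard CDI Event interface. The BookService class and its removal logic are hypothetical, but the injection and firing pattern is the standard one:

import javax.enterprise.event.Event;
import javax.inject.Inject;

public class BookService {

    // Events fired through this object carry the @Removed qualifier,
    // so they only reach observers of @Removed Book events
    @Inject @Removed
    private Event<Book> bookRemovedEvent;

    public void remove(Book book) {
        // ... remove the book from the underlying store (hypothetical) ...

        // Notify matching observer methods, such as
        // onBookRemoval(@Observes @Removed Book book)
        bookRemovedEvent.fire(book);
    }
}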
Choosing Lync 2013 Clients

Packt
25 Jul 2013
5 min read
(For more resources related to this topic, see here.)

What clients are available?

At the moment, the list of available clients includes the following:

- The full client, as a part of Office 2013 Plus
- The Lync 2013 app for Windows 8
- Lync 2013 for mobile devices
- The Lync Basic 2013 version

A plugin is needed to enable Lync features on a virtual desktop; we need the full Lync 2013 client installation to allow Lync access to the user. Although they are not clients in the traditional sense of the word, our list must also include the following ones:

- The Microsoft Lync VDI 2013 plugin
- Lync Online (Office 365)
- Lync Web App
- Lync Phone Edition
- Legacy clients that are still supported (Lync 2010, Lync 2010 Attendant, and Lync 2010 Mobile)

Full client (Office 2013)

This is the most complete client available at the moment. It includes full support for voice, video, and IM (similarly to the previous versions), and integration for the new features (for example, high-definition video, the gallery feature to see multiple video feeds at the same time, and chat room integration). In the following screenshot, we can see a tabbed conversation in Lync 2013:

Its integration with Office implies that the group policies for Lync are now part of the Office group policy's administrative templates. We have to download the Office 2013 templates from the Microsoft site and install the package in order to use them (some of the settings are shown in the following screenshot):

Lync is available with the Professional Plus version of Office 2013 (and with some Office 365 subscriptions).

Lync 2013 app for Windows 8

The Lync 2013 app for Windows 8 (also called the Lync Windows Store app) has been designed and optimized for devices with a touchscreen (with Windows 8 and Windows RT as operating systems). The app (as we can see in the following screenshot) is focused on images and pictures, so we have a tile for each contact we want in our favorites.

The Lync Windows Store app supports contact management, conversations, and calls, but some features, such as Persistent Chat and the advanced management of Enterprise Voice, are still exclusive to the full client. Also, talking about conferencing, we will not be able to act as the presenter or manage other participants. The app is integrated with Windows 8, so we are able to use Search to look for Lync contacts (as shown in the following screenshot):

Lync 2013 for mobile devices

The Lync 2013 client for mobile devices is the solution Microsoft offers for the most common tablet and smartphone systems (excluding those tablets using Windows 8 and Windows RT, which have their dedicated app). It is available for Windows phones, iPad/iPhone, and Android. The older version of this client was basically an IM application, and that somewhat limited the interest in the mobile versions of Lync. The 2013 version that we are talking about includes support for VoIP and video (using Wi-Fi networks and cellular data networks), for meetings, and for voice mail.

From an infrastructural point of view, enabling the new mobile client means applying the Lync 2013 Cumulative Update 1 (CU1) on our Front End and Edge servers and publishing a DNS record (lyncdiscover) on our public name servers. If we have had previous experience with Lync 2010 mobility, the difference is really noticeable. The lyncdiscover record must be pointed to the reverse proxy.
Reverse proxy deployment requires a product that is enabled to support Lync mobility, and a certificate that includes the lyncdiscover public domain name.

Lync Basic 2013 version

Lync Basic 2013 is a downloadable client that provides basic functionality. It does not provide support for advanced call features, multiparty videos or galleries, or skill-based searches. Lync Basic 2013 is dedicated to companies with Lync 2013 on-premises, and to Office 365 customers that do not have the full client included with their subscription. The client looks really similar to the full one, but the display name on top is Lync Basic, as we can see in the following screenshot:

Microsoft Lync VDI 2013 plugin

As we said before, the VDI plugin is not a client; it is software we need to install to enable Lync on virtual desktops based on the most widely used technologies, such as Microsoft RDS, VMware View, and XenDesktop. The main challenge of a VDI scenario is granting the same features and quality we expect from a deployment on a physical machine. The plugin uses "Media Redirection", so that audio and video originate and terminate on the plugin running on the thin client. The user is able to connect conferencing/telephony hardware (for example, microphones, cams, and so on) to the local terminal and use the Lync 2013 client installed on the virtual desktop as if it were running locally. The plugin is the only Lync software installed at the end user's workplace. The details of the deployment (Deploying the Lync VDI Plug-in) are available at http://technet.microsoft.com/en-us/library/jj204683.aspx.

Resources for Article:

Further resources on this subject:
- Innovation of Communication and Information Technologies [Article]
- DPM Non-aware Windows Workload Protection [Article]
- Installing Microsoft Dynamics NAV [Article]
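To make the lyncdiscover requirement above concrete, the published record is commonly a simple CNAME pointing at the reverse proxy for each SIP domain. The following zone-file sketch uses example.com and a placeholder reverse proxy host name, both of which are assumptions for illustration:

; in the public DNS zone for example.com (hypothetical names)
lyncdiscover    IN  CNAME   reverseproxy.example.com.

Clients then resolve lyncdiscover.example.com, reach the reverse proxy over HTTP/HTTPS, and receive the autodiscover information they need to locate the mobility services.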
Merchandising for Success

Packt
25 Jul 2013
17 min read
(For more resources related to this topic, see here.)

Shop categories

Creating product categories, like most things in PrestaShop, is easy, and we will cover that soon. First we need to plan the ideal category structure, and this demands a little thought.

Planning your category structure

You should think really hard about the following questions:

- What is your business's scope: general or specific? Remember, if the navigation is complex, it will be difficult to win future customers. So what will make the navigation simple and intuitive for your customers?
- What structure will support any plan you might have for expanding the range in the future?
- What do your competitors use? What could you do to make your structure better for your customers than anybody else's?

When you have worked it out, we will create the category structure and then we will create the content (images and descriptions) for your category pages. First you need to consider what categories you want for your product range. Here are some examples. If your business has a general scope, then it could be something like:

- Books
- Electronics
- Home and garden
- Fashion, jewelry, and beauty

However, if your business is a closed market, for example electronics, then it could be something like:

- Cameras and photography
- Mobile and home phones
- Sound and vision
- Video games and consoles

You get the idea. My examples don't have categories, subcategories, or anything deeper just for the sake of it. There are no prizes for compartmentalizing. If you think a fairly flat structure is what your customer wants, then that is what you should do. If you are thinking, "Hang on, I don't have any categories, let alone any subcategories," don't panic. If your research and common sense say you should only have a few categories without any subcategories, then stick to it. Simplicity is the most important thing. Pleasing your customer and making your shop intuitive for your customer will make you more money than obscure compartmentalizing of your products.

Creating your categories

Have your plan close at hand. Ideally, have it written down or, if it is very simple, have it clearly in your head. Enough of the theory, it is now time for action.

Time for action – how to create product categories

Make sure that you are logged into your PrestaShop back office. We will do this in two steps: first we will create your structure as per your plan, then in the next Time for action section, we will implement the category descriptions. Let's get on with the structure of your categories:

Click on Catalog and you will see the categories. Click on one. Now click on the green + symbol to add a new subcategory. PrestaShop defines even your top-level categories as subcategories, because the home category is considered to be the top-level category. Just type in the title of your first main category. Don't worry about the other options; the descriptions are covered in a minute, and the rest is to do with the search engines. You have created your first category.

Now that you are back in the home category, you can click on the green button again to create your next main category. Save as before, and remember to check the Home radio button, when you are ready, to create your next main category. Repeat until all top-level categories are created. Have a quick look at your shop front to make sure you like what you see. Here is a screenshot from the PrestaShop demo store:

Now for the subcategories. We will create one level at a time, as earlier.
So we will create all the subcategories before creating any categories within subcategories. In your home category, you will have a list of your main categories. Click on the first one in the list that requires a subcategory. Now click on the create subcategory + icon. Type the name of your subcategory, leaving the other options alone, and click on Save. Go back to the main category if you want to create another subcategory.

Play around with clicking in and out of categories and subcategories until you get used to how PrestaShop works. It isn't complicated, but it is easy to get lost and start creating stuff in the wrong place. If this happens to you, just click on the bin icon to delete your mistake. Then pay close attention to the category or subcategory you are in and carry on. You can edit the category order from the main catalog page by selecting the box of the category you want to move and then clicking an up or down arrow. Finish creating your full category structure. Play with the category and subcategory links on your shop front to see how they work, and then move on.

What just happened?

Superb! Your category structure is done and you should be fairly familiar with navigating around your categories in your control panel. Now we can add the category and subcategory descriptions. I left them empty until now because, as you might have noticed, the category creation palaver can be a bit fiddly, and it makes sense to keep it as straightforward as possible. Here are some tips for writing good category descriptions, followed by a quick Time for action section for entering descriptions into the category itself.

Creating content for your categories and subcategories

I see so many shops online with really dull category descriptions. Category descriptions should obviously describe, but they should also sell! Here are a few tips for writing some enticing descriptions:

- Keep them short: two paragraphs at the most. People do not visit your website to read; the detail should be in the products themselves.
- Similar to a USP, category descriptions should be a combination of fact and emotive description that focuses on the benefit to the customer.
- Try and be as specific as you can about each category and subcategory so that each description is accurate and relevant in its own right. For example, don't let the category steal all the glory from a subcategory. This is very important for SEO.

Time for action – adding category descriptions

Be ready with the text for all your categories, or you can, of course, type them as you go:

Go to Catalog and then click on the first category's Edit button. Enter your category description and click on Save. Click on the subcategories of your first category, then enter and save a description for each (if any). Navigate to the second main category and enter a description. Repeat the same for each of the subcategories in turn. Reiterate the preceding steps for each category.

What just happened?

You now have a fully functioning category structure. Now we can go on to look at adding some of your products.

Adding products

Click on the Catalog tab and then click on product. It is pretty similar to category. In the Time for action section, I will cover what to enter in each box as a separate item. However, I will skip over a few items, like meta tags, because they are best dealt with on a site-wide basis separately. The other important option is the product description. This deserves special treatment because it needs to be effective at selling your product.
With the categories, I specifically showed you how to create the structure before filling in the descriptions, because I know others who have got into a muddle in the past. It is less likely, but still possible, to get into a bit of a muddle with the products as well, especially if you have lots of them. Perhaps you should be the judge of whether to fill in your catalog before adding descriptions, or to add descriptions as you go.

So here is a handy guide to creating great product descriptions. It will help you decide whether you should fill in product descriptions at the same time as the rest of the details, or whether you should just enter the product title and revisit each product later to fill in the rest.

Product descriptions that sell

Don't fall into the trap of simply describing your products. It might be true that a potential customer does need to know dry facts like sizes and other uninspiring information, but don't put this information in the brief description or description boxes. PrestaShop provides a place for bare facts: the Features tab (there will be more on this soon). The brief description and description boxes, which will be described in more detail soon, are there to sell to your customers: to increase their interest to a level that makes them "want" the product, pop it in their cart, and buy it. The way you do this is with a very simple and age-old formula that actually works. And, of course, having whetted your appetite, it would be rude not to tell you about it. So here it goes.

Actually selling the product

Don't just tell your customers about your product, sell them the product. Explain to them why they should buy it! Use the FAB technique: feature, advantage, benefit.

Tell the customer about a feature:

- This teddy bear is made from a new fiber and wool mix
- This laptop has the brand new i7 processor made by Intel
- This guide was written by somebody who has survived cancer

And the advantage that feature gives them:

- So it is really, really soft and fluffy!
- i7 is the very first processor series with a DDR3 integrated memory controller!
- So all the information and advice is real and practical

Then emphasize the real emotive benefit this gives them:

- Which means your little boy or girl is going to feel safe, loved, and secure with this wonderful bear
- Meaning that this laptop gives your applications up to a 30 percent performance boost over every other processor series ever made
- Giving you or your loved one the very best chance of beating cancer and making the most of the precious time they have with the people they love

Don't just stop at one feature. Highlight the most important features. By most important features, I of course mean the features that lead to the best, most emotive, and most personal benefits. Not too many, though. If your product has loads of benefits, then try and pick just the best ones. Three is perfect. Three really is a magic number. All the best things come in threes, and research suggests that thoughts or ideas presented in threes influence human emotion the most. If you must have more than three features, summarize them in a quick bulleted list. Three is good:

- Soft, strong, and very long
- Peace, love, and understanding
- Relieves pain and clears your nose without drowsiness

Ask for the sale

When you have used the FAB technique, ask the customer to part with their money!
Say something like, "Select the most suitable option for you and click on Add to cart" or "Remember that abc is the only xyz with benefit 1, benefit 2, and benefit 3. Order yours now!"

Create some images with GIMP

If you have a favorite photo editor, then great. If you haven't, then I suggest you use GIMP. It's cool, easy, and free: www.gimp.org.

Time for action – how to add a product to PrestaShop

Let's add some products:

Click on catalog and then click on product. Click on the Add a new product link. You will see the following screenshots. Okay, I admit it, it does look a little bit daunting. But actually it is not that difficult. Much of it is optional, and even more we will revisit after further discussion. So don't despair. There is a table of explanations for you after the screenshots:

- Name: The short name/description of your product. There are brief description and full description boxes later, but perhaps a bit more than a short name should go here. For example, 50 cm golden teddy bear - extra fluffy version.
- Status: Choose Enabled or Disabled. If your product is for sale as soon as you're open, click Enabled. If your product is discontinued or needs to be removed from sale for any reason, click Disabled.
- Reference: An optional unique reference for your product. For example, 50cmFT - xfluff.
- EAN13: The European Article Number or barcode. If your product has one (and almost everything does), use it, because some people use this for searching or identifying a product.
- JAN: The Japanese Article Number or barcode. If your product has one (and almost everything does), use it, because some people use this for searching or identifying a product.
- UPC: The article number or barcode used in the USA and Canada. If your product has one (and almost everything does), use it, because some people use this for searching or identifying a product.
- Visibility: Whether to show the item in the catalog, only in search results, or everywhere.
- Type: You can choose whether this is a physical product, a pack, or a downloadable product.
- Options: To make the product available/unavailable to order. To show or hide the price. To enable/disable the online message.
- Condition: Whether the item is brand new, second hand, or refurbished.
- Short description: Here you need to add a brief description of the item. This text will be shown in the catalog.
- Description: When a customer clicks on the item, this is the text they will read.
- Tags: Leave blank for now.

Fill in your product page as described previously. Click on the Images tab at the top of the product page. Browse to the image you created earlier and upload it. Note that PrestaShop will compress the image for you; it is worth having a look at the final image and maybe varying the amount of compression (if any) that you apply when creating your product images. Click on Save and then go and admire your product in your store front. Repeat until all your products are done, but don't forget to check how things look from the customer's point of view. Visit the category and product pages to check whether things look the way you expected them to. If you have a huge range that is going to take you a long time, then consider just entering your key products. This will get the money coming in, and you can add the rest of your range over the course of time.

What just happened?

Now that you have something to actually sell, let's go and showcase some of your products. Here is how to make some of your products stand out from the crowd.
Highlighting products

Next is a list of the different ways to promote elements of your range, with an explanation of each option and how to do it.

New products

So you have just found some great new products. How do you let your visitors know about them? You could put an announcement on your front page. But what if a potential customer doesn't visit your front page, or perhaps misses the announcement? Welcome to the new products module.

Time for action – how to highlight your newest products

The following are the quick steps to enable and configure the highlighting of any new products you add. Once this is set up, it will happen automatically, now and in the future:

Click on the Modules tab and scroll down to the New products block module. Click on Install. Scroll back down to the module you just installed and click on Configure. Choose a number of products to be showcased and click on Save. Don't forget to have a look at your shop front to see how it works. Click around a few different pages and see how the highlighted product alternates.

What just happened?

Now you are done with new products, and they will never go unnoticed.

Specials

Special refers to the price. This is the traditional special offer that customers know and love.

Time for action – creating a special offer

The following steps help us create special offers and make sure they will never go unnoticed:

Click on the Catalog tab and navigate to the category or subcategory that contains the product you want to make available as a special offer. Click on the product to go to its details page. Click on Prices and go to the Specific prices section. Click on Add a new specific price. You can enter an actual monetary amount in the first box or a percentage in the second box. Monetary amounts work well for individual discounts, and percentages work well as part of a wider sale, but this is not a hard-and-fast rule. So choose what you think your customers might prefer. Click on Save.

Now go and have a look at the category that the product is in, and click on the product as well. You'll notice the smart, enticing manner that PrestaShop uses to highlight the offer. You can have as many or as few special offers as you like. But what if you wanted to really push a product offer or a wider sale? Yes, you guessed it, there's a module. Click on the Modules tab, scroll down to Specials block, and click on Install. Getting the hang of this? Thought so. Go and have a look at the effect on your store.

What just happened?

Your first sale is underway.

Recently viewed

What's this then? When customers browse products, they forget what they have seen or how to find it again. By prominently displaying a module with their most recent viewings, they can comfortably click back and forth, comparing, until they have made a buying decision. Now you don't need me to tell you how to set this up: go to the module, switch it on, and you're done.

Best sellers

This is just what it says. There is not necessarily an offer or anything else special about these products, but if they sell, well, there must be something worth talking about. Install the best sellers module in the usual place to highlight these items.

Accessories

I love accessories. It's all about add-on sales. Ever been to a shop to buy a single item and come out with several? Electrical retailers are brilliant at this. Go in for a PC and come out with a printer, scanner, camera, ink, paper, and the list goes on. Is it because their customers are stupid? Of course they are not!
It is because they offer compelling or essential accessories that are relevant to the sale. By creating accessories, you will get a new tab at the bottom of each relevant product page, along with PrestaShop making suggestions at key points of the sale. All we have to do is tell PrestaShop what is an accessory to our various products, and PrestaShop will do the rest.

Time for action – creating an accessory

Accessories are products, so any product can be an accessory of any other product. All you have to do is decide what is relevant to what. Just think about appropriate accessories for your products and read on. The quick guide for creating accessories is as follows:

Click on the Catalog tab, then click on product. Find the product you think should have some accessories. Click on it to edit it, navigate to Associations on the page, and find the Accessories section, as shown in the following screenshot:

Find the product that you wish to be an accessory by typing the first letters of the product name and selecting it. Save your amended product. You can add as many accessories to each product as you like. Go and have a look at your product on your shop front and notice the Accessories tab.

What just happened?

You just learned how to accessorize. It would be silly not to accessorize, not only because it costs you nothing, but because a few clicks could significantly increase your turnover. Now we can go on to explore more product ideas.
Scaffolding with the command-line tool

Packt
25 Jul 2013
4 min read
(For more resources related to this topic, see here.)

CakePHP comes packaged with the Cake command-line tool, which provides a number of code generation tools for creating models, controllers, views, data fixtures, and more, all on the fly. Please note that this is great for prototyping, but is not ideal for a production environment. On your command line, from the cake-starter folder, type the following:

$ cd app
$ Console/cake bake

You will see something similar to the following:

> Console/cake bake

Welcome to CakePHP v2.2.3 Console
---------------------------------------------------------------
App : app
Path: /path/to/app/
---------------------------------------------------------------
Interactive Bake Shell
---------------------------------------------------------------
[D]atabase Configuration
[M]odel
[V]iew
[C]ontroller
[P]roject
[F]ixture
[T]est case
[Q]uit
What would you like to Bake? (D/M/V/C/P/F/T/Q)
>

As you can see, there's a lot to be done with this tool. Note that there are other commands besides bake, such as schema, which will be our main focus in this article.

Creating the schema definition

Inside the app/Config/Schema folder, create a file called glossary.php. Insert the following code into this file:

<?php
/**
 * This schema provides the definitions for the core tables in the glossary app.
 *
 * @var $glossaryterms - The main terms/definition table for the app
 * @var $categories - The categories table
 * @var $glossaryterms_categories - The lookup table; no model will be created.
 *
 * @author mhenderson
 */
class GlossarySchema extends CakeSchema {

    public $glossaryterms = array(
        'id' => array('type' => 'integer', 'null' => false, 'key' => 'primary'),
        'title' => array('type' => 'string', 'null' => false, 'length' => 100),
        'definition' => array('type' => 'string', 'null' => false, 'length' => 512)
    );

    public $categories = array(
        'id' => array('type' => 'integer', 'null' => false, 'key' => 'primary'),
        'name' => array('type' => 'string', 'null' => false, 'length' => 100),
        'definition' => array('type' => 'string', 'null' => false, 'length' => 512)
    );

    public $glossaryterms_categories = array(
        'id' => array('type' => 'integer', 'null' => false, 'key' => 'primary'),
        'glossaryterm_id' => array('type' => 'integer', 'null' => false),
        'category_id' => array('type' => 'integer', 'null' => false)
    );
}

This class definition represents three tables: glossaryterms, categories, and a lookup table to facilitate the relationship between the two. Each variable in the class represents a table, and the array keys inside the variable represent the fields in the table. As you can see, the first two tables match up with our earlier architecture description.

Creating the database schema

On the command line, assuming you haven't moved to any other folder, type the following command:

Console/cake schema create glossary

You should then see the following responses. When prompted, type y once to drop the tables, and again to create them:

Welcome to CakePHP v2.2.3 Console
---------------------------------------------------------------
App : app
Path: /path/to/app
---------------------------------------------------------------
Cake Schema Shell
---------------------------------------------------------------
The following table(s) will be dropped.
glossaryterms
categories
glossaryterms_categories
Are you sure you want to drop the table(s)?
(y/n)
[n] > y
Dropping table(s).
glossaryterms updated.
categories updated.
glossaryterms_categories updated.
The following table(s) will be created.
glossaryterms
categories
glossaryterms_categories
Are you sure you want to create the table(s)? (y/n)
[y] > y
Creating table(s).
glossaryterms updated.
categories updated.
glossaryterms_categories updated.
End create.

If you look at your database now, you will notice that the three tables have been created. We can also make modifications to the glossary.php file and run the cake schema command again to update the tables (see the sketch after the resources list below). If you want to try something a little more daring, you can use the migrations plugin found at https://github.com/CakeDC/migrations. This plugin allows you to save "snapshots" of your schema to be recalled later, and also allows you to write custom scripts to migrate "up" to a certain snapshot version, or migrate "down" in the event of an emergency or a mistake.

Summary

In this article, we saw how to define a database schema with the schema tool and create its tables.

Resources for Article:

Further resources on this subject:
- Create a Quick Application in CakePHP: Part 1 [Article]
- Working with Simple Associations using CakePHP [Article]
- Creating and Consuming Web Services in CakePHP 1.3 [Article]
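As referenced above, here is a sketch of that update workflow. Suppose we later decide to record when each glossary term was created; this is a hypothetical change for illustration. We would add a created field to the $glossaryterms definition in app/Config/Schema/glossary.php:

public $glossaryterms = array(
    'id' => array('type' => 'integer', 'null' => false, 'key' => 'primary'),
    'title' => array('type' => 'string', 'null' => false, 'length' => 100),
    'definition' => array('type' => 'string', 'null' => false, 'length' => 512),
    // new column; CakePHP models automatically populate a 'created' field on save
    'created' => array('type' => 'datetime', 'null' => true)
);

Then, instead of dropping and recreating the tables, we ask the schema shell to alter them in place:

$ Console/cake schema update glossary

The shell compares the class definition against the live database and prompts before executing the resulting ALTER statements, so existing data in the other columns is preserved.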
Implementing a Log-in screen using Ext JS

Packt
18 Jul 2013
31 min read
In this article Loiane Groner, author of Mastering Ext JS, talks about developing a login page for an application using Ext JS. It is very common to have a login page for an application, which we can use to control access to the system by identifying and authenticating the user through the credentials presented by him/her. Once the user is logged in, we can track the actions performed by the user. We can also restrain access to some features and screens of the system that we do not want a particular user, or even a specific group of users, to have access to. In this article, we will cover:

- Creating the login page
- Handling the login page on the server
- Adding the Caps Lock warning message in the Password field
- Submitting the form by pressing the Enter key
- Encrypting the password before sending it to the server

(For more resources related to this topic, see here.)

The Login screen

The Login window will be the first view we are going to implement in this project. We are going to build it step by step and it will have the following capabilities:

- User will enter the username and password to log in
- Client-side validation (username and password required to log in)
- Submit the Login form by pressing Enter
- Encrypt the password before sending it to the server
- Password Caps Lock warning (similar to Windows OS)
- Multilingual capability

Except for the multilingual capability, we will implement all the other features throughout this topic. So at the end of the implementation, we will have a Login window that looks like the following:

So let's get started!

Creating the Login screen

Under the app/view directory, we will create a new file named Login.js. In this file, we will implement all the code that the user is going to see on the screen. Inside the Login.js file, we will implement the following code:

Ext.define('Packt.view.Login', { // #1
    extend: 'Ext.window.Window', // #2
    alias: 'widget.login', // #3
    autoShow: true, // #4
    height: 170, // #5
    width: 360, // #6
    layout: {
        type: 'fit' // #7
    },
    iconCls: 'key', // #8
    title: "Login", // #9
    closeAction: 'hide', // #10
    closable: false // #11
});

On the first line (#1) we have the definition of the class. To define a class we use Ext.define, followed by parentheses (()), and inside the parentheses we first declare the name of the class, followed by a comma (,) and curly brackets ({}), and at the end a semicolon. All the configurations and properties (#2 to #11) go inside the curly brackets. We also need to pay attention to the name of the class. This is the formula suggested by Sencha in Ext JS MVC projects: App Namespace + package name + name of the JS file. We defined the namespace as Packt (the configuration name inside the app.js file). We are creating a View for this project, so we will create the JS file under the view package/directory. And then, the name of the file we created is Login.js; therefore, we will lose the .js part and use only Login as the name of the View. Putting it all together, we have Packt.view.Login, and this will be the name of our class.

Then, we are saying that the Login class will extend from the Window class (#2), because we want it to be displayed inside a window, and not in any other component. We are also assigning this class an alias (#3). The alias for a class that extends from a component always starts with widget., followed by the alias we want to assign. The naming convention for an alias is lowercase. It is also important to remember that the alias must be unique in an application.
In this case we want to assign login as the alias to this class, so later we can instantiate this same class using its alias (which is the same as xtype). For example, we can instantiate the Login class using four different options:

- Using the complete name of the class, which is the most used one: Ext.create('Packt.view.Login');
- Using the alias in the Ext.create method: Ext.create('widget.login');
- Using Ext.widget, which is a shorthand way of using Ext.ClassManager.instantiateByAlias: Ext.widget('login');
- Using the xtype as an item of another component:

items: [
    {
        xtype: 'login'
    }
]

In this book we will use the first, third, and fourth options most of the time.

Then we have autoShow configured to true (#4). What happens with the window is that instantiating the component is not enough for displaying it. When we instantiate the window we will have its reference, but it will not be displayed on the screen. If we want it to be displayed we need to call the method show() manually. Another option is to have the autoShow configuration set to true; this way the window will be automatically displayed when we instantiate it.

We also have the height (#5) and width (#6) of the window. We set the layout as fit (#7) because we want to add a form inside this window that will contain the username and password fields, and using the fit layout the form will occupy all the body space of the window. Remember that when using the fit layout we can only have one item as a child component.

We are setting an iconCls (#8) property on the window; this way we will have an icon of a key in the header of the window. We can also give a title for the window (#9), and in this case we chose Login. Following is the declaration of the key style used by the iconCls property:

.key {
    background-image: url('../icons/key.png') !important;
}

All the styles we will create to use as iconCls have a format like the preceding one.

And at last we have the closeAction (#10) and closable (#11) configurations. The closeAction configuration tells whether we want to destroy the window when we close it. In this case, we do not want to destroy it; we only want to hide it. The closable configuration tells whether we want to display the X icon in the top-right corner of the window. As this is a Login window, we do not want to give this option to the user. If you would like to, you can also set the resizable and draggable options to false. This will prevent the user from dragging the Login window around and also from resizing it.

So far, this will be the output we have: a single window with an icon at the top-left corner and the title Login:

The next step is to add the form with the username and password fields. We are going to add the following code to the Login class:

items: [
    {
        xtype: 'form', // #12
        frame: false, // #13
        bodyPadding: 15, // #14
        defaults: { // #15
            xtype: 'textfield', // #16
            anchor: '100%', // #17
            labelWidth: 60 // #18
        },
        items: [
            {
                name: 'user',
                fieldLabel: "User"
            },
            {
                inputType: 'password', // #19
                name: 'password',
                fieldLabel: "Password"
            }
        ]
    }
]

As we are using the fit layout, we can only declare one child item in this class. So we are going to add a form (#12), and to make the form look prettier, we are going to remove the frame property (#13) and also add padding to the form body (#14). The form's frame property is by default set to false, but by default there is a blue border that appears if we do not explicitly add this property set to false.

As we are going to add two fields to the form, we probably want to avoid repeating some code.
That is why we are going to declare some field configurations inside the defaults configuration of the form (#15); this way the configurations we declare inside defaults will be applied to all items of the form, and we will only need to declare the configurations we want to customize. As we are going to declare two fields, both of them will be of type textfield (#16). The default layout of the form is the anchor layout, so we do not need to make this declaration explicit. However, we want both fields to occupy all the available horizontal space of the body of the form; that is why we are declaring anchor as 100% (#17). By default, the width attribute of the label of the TextField class is 100 pixels. That is too much space for the labels User and Password, so we are going to decrease this value to 60 pixels (#18).

And finally, we have the user text field and the password text field. The name configuration is what we are going to use to identify each field when we submit the form to the server. There is only one detail missing: when the user types the password into the field, the system must not display its value; we need to mask it somehow. That is why inputType is 'password' (#19) for the password field, as we want to display bullets instead of the original value, so the user will not be able to see the password value.

Now we have improved our Login window a little more. This is the output so far:

Client-side validations

The field component in Ext JS provides some client-side validation capability. This can save time and also bandwidth (the system will only make a server request when it is sure the information has passed the basic validation). It also helps to point out to the user where they have gone wrong in filling out the form. Of course, it is also good to validate the information again on the server side for security reasons, but for now we will focus on the validations we can apply to the form of our Login window.

Let's brainstorm some validations we can apply to the username and password fields:

- The username and password must be mandatory; how are we going to authenticate the user without a username and password?
- The user can only enter alphanumeric characters (A-Z, a-z, and 0-9) in both fields.
- The user can only type between 3 and 25 characters in the username field.
- The user can only type between 3 and 15 characters in the password field.

So let's add to the code the ones that are common to both fields:

allowBlank: false, // #20
vtype: 'alphanum', // #21
minLength: 3,      // #22
msgTarget: 'under' // #23

We are going to add the preceding configurations inside the defaults configuration of the form, as they all apply to both fields we have. First, both need to be mandatory (#20), we can only allow alphanumeric characters to be entered (#21), and the minimum number of characters the user needs to input is three (#22). Then, a last common configuration is that we want to display any validation error message under the field (#23).

The only validation customized for each field is that we can enter a maximum of 25 characters in the User field:

name: 'user',
fieldLabel: "User",
maxLength: 25

And a maximum of 15 characters in the Password field:

inputType: 'password',
name: 'password',
fieldLabel: "Password",
maxLength: 15

After we apply the client validations, we will have the following output in case the user goes wrong in filling out the Login window:

If you do not like it, we can change the place where the error message appears. We just need to change the msgTarget value.
The available options are: title, under, side, and none. We can also show the error message as a tooltip (qtip) or display it in a specific target (the inner HTML of a specific component).

Creating custom VTypes

Many systems have a special format for passwords. Let's say we need the password to have at least one digit (0-9), one lowercase letter, one uppercase letter, one special character (@, #, $, %, and so on), and a length between 6 and 20 characters. We can create a regular expression to validate the password that is entered into the app, and we can create a custom VType to do the validation for us. Creating a custom VType is simple. For our case, we can create a custom VType called customPass:

Ext.apply(Ext.form.field.VTypes, {
    customPass: function(val, field) {
        return /^((?=.*\d)(?=.*[a-z])(?=.*[A-Z])(?=.*[@#$%]).{6,20})/.test(val);
    },
    customPassText: 'Not a valid password. Password must contain one digit, one lowercase letter, one uppercase letter, one special symbol (@, #, $, %) and be between 6 and 20 characters long.'
});

customPass is the name of our custom VType, and we need to declare a function that will validate our regular expression. customPassText is the message that will be displayed to the user in case the incorrect password format is entered. The preceding code can be added anywhere in the code: inside the init function of a controller, inside the launch function of app.js, or even in a separate JavaScript file (recommended) where you can put all your custom VTypes. To use it, we simply need to add vtype: 'customPass' to our Password field. To learn more about regular expressions, please visit http://www.regular-expressions.info/.

Adding the toolbar with buttons

So far we have created the Login window, which contains a form with two fields that are already being validated as well. The only thing missing is to add the two buttons: Cancel and Submit. We are going to add the buttons as items of a toolbar, and the toolbar will be added to the form as a docked item. Docked items can be docked to the top, right, left, or bottom of a panel (both the form and window components are subclasses of panel). In this case we will dock the toolbar to the bottom of the form. Add the following code right after the items configuration of the form:

dockedItems: [
    {
        xtype: 'toolbar',
        dock: 'bottom',
        items: [
            {
                xtype: 'tbfill'  // #24
            },
            {
                xtype: 'button', // #25
                itemId: 'cancel',
                iconCls: 'cancel',
                text: 'Cancel'
            },
            {
                xtype: 'button', // #26
                itemId: 'submit',
                formBind: true,  // #27
                iconCls: 'key-go',
                text: "Submit"
            }
        ]
    }
]

If we take a look back at the screenshot of the Login screen we presented at the beginning of this article, we will notice that there is a component for the translation/multilingual capability, and after this component there is a space and then we have the Cancel and Submit buttons. As we do not have the multilingual component yet, we can only implement the two buttons, but they need to be at the right end of the form, and we need to leave that space. That is why we first need to add a toolbar fill component (#24), which is going to instruct the toolbar's layout to begin using the right-justified button container. Then we will add the Cancel button (#25) and the Submit button (#26). We are going to add icons to both buttons (iconCls) and later, when we implement the controller class, we will need a way to identify the buttons. This is why we assigned an itemId to both of them.
We already have the client validations, but even with the validations in place, the user can still click on the Submit button, and we want to avoid this behavior. That is why we are binding the Submit button to the form (#27); this way the button will only be enabled if the form has no errors from the client validation. In the following screenshot, we can see the current output of the Login form (after we added the toolbar) and also verify the behavior of the Submit button:

Running the code

To execute the code we have created so far, we need to make a few changes in the app.js file. First, we need to declare the views we are using (only one in this case). Also, as we are going to instantiate the class using the Login class' xtype, we need to declare this class in the requires declaration:

requires: [
    'Packt.view.Login'
],

views: [
    'Login'
],

And the last change is inside the launch function. Now we only need to replace the console.log message with the Login instance (#1):

splashscreen.next().fadeOut({
    duration: 1000,
    remove: true,
    listeners: {
        afteranimate: function(el, startTime, eOpts) {
            Ext.widget('login'); // #1
        }
    }
});

Now app.js is OK and we can execute what we have implemented so far!

Using itemId versus id: Ext.getCmp is bad!

Before we create the controller, we need to have some knowledge about Ext.ComponentQuery selectors. In this topic we will discuss a subject that helps us better understand why we took some decisions while creating the Login window, and why we are going to take some other decisions in the controller topic. Whenever we can, we will always try to use the itemId configuration instead of id to uniquely identify a component. And here comes the question: why?

When using id, we need to make sure that the id is unique, and that no other component of the application has the same id. Now imagine the situation where you are working with other developers on the same team and it is a big application. How can you make sure that the id is going to be unique? Pretty difficult, don't you think? This can be a hard task to achieve. Components created with an id may be accessed globally using Ext.getCmp, which is a shorthand reference for Ext.ComponentManager.get. To mention just one example, when using Ext.getCmp to retrieve a component by its id, it is going to return the last component declared with the given id. And if the id is not unique, it can return a component that you are not expecting, and this can lead to an error in the application.

Do not panic! There is an elegant solution, which is using itemId instead of id. The itemId can be used as an alternative way to get a reference to a component. The itemId is an index in the container's internal MixedCollection, and that is why the itemId is scoped locally to the container. This is the biggest advantage of the itemId.

For example, we can have a class named MyWindow1, extending from window, and inside this class we can have a button with itemId submit. Then we can have another class named MyWindow2, also extending from window, and also with a button with itemId submit. Having two itemIds with the same value is not an issue. We only need to be careful when we use Ext.ComponentQuery to retrieve the component we want. For example, suppose we have a Login window whose alias is login and another screen, the Registration window, whose alias is registration. Both windows have a Save button whose itemId is save. If we simply use Ext.ComponentQuery.query('button#save'), the result will be an array with two results.
However, if we narrow down the selector even more, let's say we want the Login window's Save button, and not the Registration window's Save button, we need to use Ext.ComponentQuery.query('login button#save'), and the result will be a single item, which is exactly what we expect.

You will notice that we will not use Ext.getCmp in the code of our project, because it is not a good practice, especially in Ext JS 4, and also because we can use itemId and Ext.ComponentQuery instead. We will understand Ext.ComponentQuery better in the next topic.

Creating the login controller

We have created the view for the Login screen so far. As we are following the MVC architecture, we are not implementing the user interaction in the View class. If we click on the buttons of the Login class, nothing will happen, because we have not yet implemented this logic. We are going to implement this logic now in the controller class. Under the app/controller directory, we will create a new file named Login.js. In this file we will implement all the code related to the event management of the Login screen. Inside the Login.js file we will implement the following code, which is only the base of the controller class we are going to implement:

Ext.define('Packt.controller.Login', { // #1
    extend: 'Ext.app.Controller',      // #2

    views: [
        'Login'                        // #3
    ],

    init: function(application) {      // #4
        this.control({                 // #5
        });
    }
});

As usual, on the first line of the class we have its name (#1). Following the same formula we used for view/Login.js, we will have Packt (app namespace) + controller (name of the package) + Login (the name of the file), resulting in Packt.controller.Login. Note that the controller JS file (controller/Login.js) has the same name as view/Login.js, but that is OK because they are in different packages. It is good to use similar names for the views, models, stores, and controllers, because it is going to be easier to maintain the project later. For example, let's say that after the project is in production, we need to add a new button to the Login screen. With only this information (and a little bit of MVC concept knowledge) we know we will need to add the button code in the view/Login.js file and listen to any events that might be fired by this button in controller/Login.js. Easier maintenance is another great benefit of using the MVC architecture.

Controller classes need to extend from Ext.app.Controller (#2), so we will always use this parent class for our controllers. Then we have the views declaration (#3), which is where we are going to declare all the views that this controller cares about. In this case, we only have the Login view so far; we will add more views later in this article. Next, we have the init method declaration (#4). The init method is called before the application boots, before the launch function of Ext.application (app.js). The controller will also load the views, models, and stores declared inside its class. Then we have the control method configured (#5). This is where we are going to listen to all the events we want the controller to react to. And as we are handling the events fired by the Login window and its child components, this will be our scope in this controller.
We can remove this code, since the controller will be responsible for loading the view/Login.js file for us:

requires: [
    'Packt.view.Login'
],

views: [
    'Login'
],

And add the controllers declaration:

controllers: [
    'Login'
],

As our project is only starting, declaring the views in the controller classes will help us keep the code more organized, as we do not need to declare all the application's views in the app.js file.

Listening to the button click event

Our next step now is to start listening to the Login window events. First, we are going to listen to the Submit and Cancel buttons. We already know that we are going to add the listeners inside the this.control declaration. The format that we need to use is the following:

'Ext.ComponentQuery selector': {
    eventWeWantToListenTo: functionOrMethodWeWantToExecute
}

First, we need to pass the selector that is going to be used by the Ext.ComponentQuery class to find the component. Then we need to list the event that we want to listen to. And then, we need to declare the function that is going to be executed when the event we are listening to is fired, or declare the name of the controller method that is going to be executed when the event is fired. In our case, we are going to declare methods, only for code organization purposes.

Now let's focus on finding the correct selector for the Submit and Cancel buttons. According to the Ext.ComponentQuery API documentation, we can retrieve components by using their xtype (if you are already familiar with jQuery, you will notice that Ext.ComponentQuery selectors behave very similarly to jQuery selectors). Well, we are trying to retrieve two buttons, and their xtype is button, so we can try the selector button. But before we start coding, let's make sure this is the correct selector, to avoid having to change the code repeatedly while trying to figure out the right one. There is one very useful tip we can try: open the browser console (command editor), type the following command, and click on Run:

Ext.ComponentQuery.query('button');

As we can see in the screenshot, it returned an array of the buttons that were found by the selector we used, and the array contains six buttons; that is too many buttons, and it is not what we want. We want to narrow it down to the Submit and Cancel buttons. Let's try to draw the path to the buttons using the xtypes of the components: we have a Login window (xtype: login or window), inside the window we have a form (xtype: form), inside the form we have a toolbar (xtype: toolbar), and inside the toolbar we have two buttons (xtype: button). Therefore, we have login form toolbar button. However, if we use login form button, we will have the same result, because we do not have any other buttons inside the form. So we can try the following command:

Ext.ComponentQuery.query('login form button');

So let's try this last selector in the command editor:

Now the result is an array of two buttons, and these are the buttons we are looking for! There is still one detail missing: if we use the login form button selector, it will listen to the click event (which is the event we want to listen to) of both buttons. When we click on the Cancel button one thing should happen (reset the form), and when we click on the Submit button another thing should happen (submit the form to the server to validate the login). So we still want to narrow down the selector even more, until we have one selector that returns only the Cancel button and another that returns only the Submit button.
Going back to the view/Login code, notice that we declared a configuration named itemId on both buttons. We can use these itemId configurations to identify the buttons in a unique way. According to the Ext.ComponentQuery API docs, we can use # as a prefix for an itemId. So let's try the following command in the command editor to get the Submit button reference:

Ext.ComponentQuery.query('login form button#submit');

The output will be only one button, as we expect:

Now let's try the following command to retrieve the Cancel button reference:

Ext.ComponentQuery.query('login form button#cancel');

The output will be only one button, as we expect:

So now we have the selectors we were looking for! The console command editor is a great tool, and using it can save us a lot of time when trying to find the exact selector we want, instead of coding, testing, realizing it is not the selector we want, coding again, testing again, and so on.

Could we use only button#submit or button#cancel as selectors? Yes, we could use a shorter selector, and it would work perfectly for now. However, as the application grows and we declare many more classes and buttons, the event would be fired for all buttons that have the itemId named submit or cancel, and this could lead to an error in the application. We always need to remember that itemId is scoped locally to the container. By using login form button as the selector, we make sure that the event will come from a button of the Login window.

So let's implement the code inside the controller class:

init: function(application) {
    this.control({
        "login form button#submit": {       // #1
            click: this.onButtonClickSubmit // #2
        },
        "login form button#cancel": {       // #3
            click: this.onButtonClickCancel // #4
        }
    });
},

onButtonClickSubmit: function(button, e, options) {
    console.log('login submit'); // #5
},

onButtonClickCancel: function(button, e, options) {
    console.log('login cancel'); // #6
}

In the preceding code, we first have the listener for the Submit button (#1), and on the following line we say that we want to listen to the click event; when the click event of the Submit button is fired, the onButtonClickSubmit method should be executed (#2). Then we have the same for the Cancel button: the listener for the Cancel button (#3), and the declaration that when its click event is fired, the onButtonClickCancel method should be executed (#4).

Next, we have the declaration of the onButtonClickSubmit and onButtonClickCancel methods. For now, we are only going to output a message on the console to make sure our code is working. So we are going to output login submit (#5) in case the user clicks on the Submit button, and login cancel (#6) in case the user clicks on the Cancel button.

But how do we know which parameters the event handler can receive? You can find the answer to this question in the documentation. If we take a look at the click event in the documentation, this is what we will find:

This is exactly what we declared. For all the other event listeners, we will go to the docs, see which parameters the event accepts, and then list them as parameters in our code. This is also a very good practice: we should always list all the arguments from the docs, even if we are only interested in the first one. This way we always have the full collection of parameters, and it can come in very handy when we are doing maintenance on the application. Let's go ahead and try it.
Click on the Cancel button and then on the Submit button. This should be the output:

Cancel button listener implementation

Let's remove the console.log messages and add the code we actually want the methods to execute. First, let's work on the onButtonClickCancel method. When we execute this method, we want it to reset the form. So this is the logic sequence we want to program:

1. Get the Login form reference.
2. Call the getForm method, which is going to return the form basic class.
3. Call the reset method to reset the form.

The form basic class provides input field management, validation, submission, and form loading services. The Ext.form.Panel class (xtype: form) works as the container, and it is automatically hooked up with an instance of Ext.form.Basic. That is why we need to get the form basic reference to call the reset method.

If we take a look at the parameters available in the onButtonClickCancel method, we have button, e, and options, and none of them gives us the form reference. So what can we do about it? We can use the up method from the Button class (inherited from the AbstractComponent class). With this method, we can use a selector to try to retrieve the form. The up method navigates up the component hierarchy, searching for an ancestor container that matches the passed selector. As the button is inside a toolbar that is inside the form we are looking for, if we use button.up('form'), it will retrieve exactly what we want: Ext JS looks at the button's first ancestor in the hierarchy and finds a toolbar, which is not what we are looking for, so it goes up again and finds a form, which is what we want.

So this is the code we are going to implement inside the onButtonClickCancel method:

button.up('form').getForm().reset();

Some people like to implement the toolbar inside the window instead of the form. No problem at all; it is only a matter of how you like to implement it. In this case, if the toolbar that contains the Submit button is inside the Window class, we can use:

button.up('window').down('form').getForm().reset();

And we will have the same result!

Submit button listener implementation

Now we need to implement the onButtonClickSubmit method. Inside this method, we want to program the logic that sends the username and password values to the server so that the user can be authenticated. There are two ways we can implement this logic: the first one is to use the submit method that is provided by the form basic class, and the second one is to use an Ajax call to submit the values to the server. Either way we will achieve what we want. However, there is one detail we need to know before making this decision: if we use the submit method of the form basic class, we will not be able to encrypt the password before we send it to the server, and if we take a look at the parameters sent to the server, the password will be plain text, and this is not good. Using an Ajax request achieves the same result; however, we can encrypt the password value before sending it to the server. So the second option seems better, and that is the one we will implement.
So to summarize, the following are the steps we need to perform in this method:

1. Get the Login form reference.
2. Get the Login window reference (so that we can close it once the user has been authenticated).
3. Get the username and password values from the form.
4. Encrypt the password.
5. Send the login information to the server.
6. Handle the server response: if the user is authenticated, display the application; if not, display an error message.

First, let's get the references that we need:

var formPanel = button.up('form'),
    login = button.up('login'),
    user = formPanel.down('textfield[name=user]').getValue(),
    pass = formPanel.down('textfield[name=password]').getValue();

To get the form reference, we can use the button.up('form') code that we already used in the onButtonClickCancel method; to get the Login window reference we can do the same thing, only changing the selector to login or window. Then, to get the values from the User and Password fields, we can use the down method, but this time the search scope will start from the form reference. For the selector we will use the textfield xtype, and to make sure we are retrieving the text field we want, we could create an itemId attribute, but there is no need for it: we can use the name attribute, since the user and password fields have different names and they are unique within the Login window. To use attributes within a selector, we must wrap them in brackets.

The next step is to submit the values to the server:

if (formPanel.getForm().isValid()) {
    Ext.Ajax.request({
        url: 'php/login.php',
        params: {
            user: user,
            password: pass
        }
    });
}

If we try to run this code, the application will send the request to the server, but we will get an error as the response, because we do not have the login.php page implemented yet. That's OK, because we are interested in other details right now. With Firebug or Chrome Developer Tools enabled, open the Net tab and filter by the XHR requests. Make sure to enter a username and password (any valid value, so that we can click on the Submit button). This will be the output:

We still do not have the password encrypted; the original value is still being displayed, and this is not good. We need to encrypt the password. Under the app directory, we will create a new folder named util, where we are going to create all the utility classes. We will also create a new file named MD5.js; therefore, we will have a new class named Packt.util.MD5. This class contains a static method called encode, which encodes the given value using the MD5 algorithm. To understand more about the MD5 algorithm, go to http://en.wikipedia.org/wiki/MD5. As Packt.util.MD5 is big, we will not list its code here, but you can download the source code of this book from http://www.packtpub.com/mastering-ext-javascript/book or get the latest version at https://github.com/loiane/masteringextjs.

If you would like to make it even more secure, you can also use SSL and ask for a random salt string from the server, salt the password, and hash it. You can learn more about this at the following URLs: http://en.wikipedia.org/wiki/Transport_Layer_Security and http://en.wikipedia.org/wiki/Salt_(cryptography).

A static method does not require an instance of the class in order to be called. In Ext JS, we can declare static attributes and methods inside the statics configuration. As the encode method of the Packt.util.MD5 class is static, we can call it like Packt.util.MD5.encode(value);.
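Just for reference, the following sketch shows roughly where the encode method lives in the class definition. Only the statics structure is real here; the placeholder body is not an MD5 implementation (the actual algorithm ships with the book's source code):

Ext.define('Packt.util.MD5', {
    statics: {
        // Placeholder sketch only: the real encode method implements the MD5
        // hashing algorithm; here we just illustrate where a static method goes.
        encode: function(value) {
            // ... MD5 hashing of value would happen here ...
            return value; // not a real hash
        }
    }
});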
So before Ext.Ajax.request, we will add the following code:

pass = Packt.util.MD5.encode(pass);

We must not forget to add the Packt.util.MD5 class to the controller's requires declaration (the requires declaration goes right after the extend declaration):

requires: [
    'Packt.util.MD5'
],

Now, if we try to run the code again and check the XHR requests on the Net tab, we will have the following output:

The password is encrypted and it is much safer now.
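Putting all the pieces together, the onButtonClickSubmit method ends up looking roughly like the following sketch. Note that the success and failure handlers, and the JSON shape returned by login.php (a success flag), are assumptions at this point; we will deal with the server side and the response handling when we implement login.php:

onButtonClickSubmit: function(button, e, options) {
    var formPanel = button.up('form'),
        login = button.up('login'),
        user = formPanel.down('textfield[name=user]').getValue(),
        pass = formPanel.down('textfield[name=password]').getValue();

    if (formPanel.getForm().isValid()) {
        pass = Packt.util.MD5.encode(pass); // encrypt the password before sending it

        Ext.Ajax.request({
            url: 'php/login.php',
            params: {
                user: user,
                password: pass
            },
            success: function(response) {
                // Assumes login.php returns JSON such as {"success": true}
                var result = Ext.decode(response.responseText);
                if (result.success) {
                    login.close(); // closeAction is 'hide', so the window is only hidden
                } else {
                    Ext.Msg.alert('Login', 'Invalid username or password.');
                }
            },
            failure: function(response) {
                Ext.Msg.alert('Login', 'Could not reach the server.');
            }
        });
    }
}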
Implementing Document Management

Packt
17 Jul 2013
19 min read
(For more resources related to this topic, see here.)

Managing spaces

A space in Alfresco is nothing but a folder, which contains content as well as sub-spaces. Space users are the users invited to a space to perform specific actions, such as editing content, adding content, discussing a particular document, and so on. The exact capability a given user has within a space is a function of their role or rights. Consider the capability of creating a sub-space. By default, to create a sub-space, one of the following must apply:

- The user is the administrator of the system.
- The user has been granted the Contributor role.
- The user has been granted the Coordinator role.
- The user has been granted the Collaborator role.

Similarly, to edit space properties, a user will need to be the administrator or be granted a role that gives them rights to edit the space. These roles include Editor, Collaborator, and Coordinator.

Space is a smart folder

A space is a folder with additional features, such as security, business rules, workflow, notifications, local search, and special views. These additional features, which make a space a smart folder, are explained as follows:

- Space security: You can define security at the space level. You can specify a user, or a group of users, who may perform certain actions on the content in a space. For example, on the Marketing Communications space in the intranet, you can specify that only users of the marketing group can add content and others can only see the content.
- Space business rules: Business rules, such as transforming content from Microsoft Word to Adobe PDF and sending notifications when content gets into a space, can be defined at the space level.
- Space workflow: You can define and manage content workflow on a space. Typically, you will create a space for the content to be reviewed and a space for approved content. You will create various spaces for dealing with the different stages the work flows through, and Alfresco will manage the movement of the content between those spaces.
- Space events: Alfresco triggers events when content gets into a space, when content goes out of a space, or when content is modified within a space. You can capture such events at the space level and trigger certain actions, such as sending e-mail notifications to certain users.
- Space aspects: Aspects are additional properties and behavior that can be added to the content, based on the space in which it resides. For example, you can define a business rule to add customer details to all the customer contract documents in your intranet's Sales space.
- Space search: Alfresco search can be limited to a space. For example, if you create a space called Marketing, you can limit the search for documents to the Marketing space, instead of searching the entire site.
- Space syndication: Space content can be syndicated by applying RSS feed scripts to a space. You can apply RSS feeds to your News space, so that other applications and websites can subscribe to news updates.
- Space content: Content in a space can be versioned, locked, checked in and checked out, and managed. You can specify that certain documents in a space are to be versioned and others not.
- Space network folder: A space can be mapped to a network drive on your local machine, enabling you to work with the content locally. For example, using the CIFS interface, a space can be mapped to a Windows network folder.
- Space dashboard view: Content in a space can be aggregated and presented using special dashboard views.
For example, the Company Policies space can list all the latest policy documents updated in the past month or so. You can create different views for the Sales, Marketing, and Finance departmental spaces.

Importance of space hierarchy

Like regular folders, a space can have child spaces (called sub-spaces), and sub-spaces can in turn have sub-spaces of their own. There is no limitation on the number of hierarchical levels. However, the space hierarchy is very important for all the reasons specified in the previous section. Any business rule and security setting defined on a space applies, by default, to all the content and sub-spaces under that space. Use the created system users, groups, and spaces for the various departments as per the example. Your space hierarchy should look similar to the following screenshot:

A space in Alfresco enables you to define various business rules, a dashboard view, properties, workflow, and security for the content belonging to each department. You can decentralize the management of your content by giving access to departments at individual space levels. The example intranet space should contain sub-spaces, as shown in the preceding screenshot. If you have not already created the spaces, you must do it now by logging in as administrator. Also, it is very important to set security (by inviting groups of users to these spaces).

Editing a space

Using the web client, you can edit the spaces you have added previously. Note that you need to have edit permissions on the spaces to edit them.

Editing space properties

Every space listed will have clickable actions, as shown in the following screenshot:

These clickable actions will be dynamically generated for each space based on the current user's permissions on that space. If you have copy permission on a space, you will notice the Copy icon as a clickable action for that space. On clicking the View Details action icon, the detailed view of the space will be displayed, as shown in the next screenshot:

The detailed view page of a space allows you to select a dashboard view, to view and edit existing space properties, to categorize the space, to set business rules, and to run various actions on the space, as shown in the preceding screenshot. To edit space properties, click on the Edit Space Properties icon shown in the preceding screenshot. You can change the name of the space and other properties as needed.

Deleting a space and its contents

From the list of space actions, you can click on the Delete action to delete the space. You need to be very careful while deleting a space, as all the business rules, sub-spaces, and the entire content within the space will also be deleted.

Moving or copying a space by using the clipboard

From the list of space actions, you can click on the Cut action to move a space to the clipboard. Then you can navigate to any space hierarchy, assuming that you have the required permissions to do so, and paste this particular space, as required. Similarly, you can use the Copy action to copy the space to some other space hierarchy. This is useful if you have an existing space structure (such as a marketing project or an engineering project) and you would like to replicate it along with the data it contains. The copied or moved space will be identical in all aspects to the original (source) space. When you copy a space, the space properties, categorization, business rules, space users, the entire content within the space, and all sub-spaces along with their content will also be copied.
Creating a shortcut to a space for quick access

If you need to access a space frequently, you can create a shortcut (similar to the Favorite option in Internet browsers) to that space, in order to reach the space in just one click. From the list of space actions, you can click on the Create Shortcut action to create a shortcut to the existing space. Shortcuts are listed in the left-hand side Shelf.

Consider a scenario where, after creating the shortcut, the source space is deleted. The shortcuts are not automatically removed, as there is a possibility for the user to retrieve the deleted space. What will happen when you click on that shortcut link in the Shelf? If the source space is not found (deleted by the user), then the shortcut will be removed with an appropriate error message.

Choosing a default view for your space

There are four different out-of-the-box options available (as shown in the screenshot overleaf). These options support the display of the space's information:

- Details View: This option provides listings of sub-spaces and content, in horizontal rows.
- Icon View: This option provides a title, description, timestamp, and action menus for each sub-space and content item present in the current space.
- Browse View: Similar to the preceding option, this option provides the title, description, and list of sub-spaces for each space.
- Dashboard View: This option is disabled and appears in gray. This is because you have not enabled the dashboard view for this space. In order to enable the dashboard view for a space, you need to select a dashboard view (refer to the icon shown in the preceding screenshot).

Sample space structure for a marketing project

Let us say you are launching a new marketing project called Product XYZ Launch. Go to the Company Home | Intranet | Marketing Communications space, create a new space called Product XYZ Launch, and create various sub-spaces as needed. You can create your own space structure within the marketing project space to manage content. For example, you can have a space called 02_Drafts to keep all the draft marketing documents, and so on.

Managing content

Content could be of any type, as mentioned at the start of this article. By using the Alfresco web client application, you can add and modify content and its properties. You can categorize content, lock content for safe editing, and maintain several versions of the content. You can delete content, and you can also recover deleted content. This section uses the space you have already created as a part of your intranet sample application. As a part of the sample application, you will manage content in the Intranet | Marketing Communications space. Because you secured this space earlier, only the administrator (admin) and users belonging to the Marketing group (Peter Marketing and Harish Marketing) can add content in this space. You can log in as Peter Marketing to manage content in this space.

Creating content

The web client provides two different interfaces for adding content: one to create inline editable content, such as HTML, text, and XML, and the other to add binary content, such as Microsoft Office files and scanned images. You need to have the administrator, Contributor, Collaborator, or Coordinator role on a space to create content within that space.

Creating text documents – HTML, text, and XML

To create an HTML file in a space, follow these steps:

1. Ensure that you are in the Intranet | Marketing Communications | Product XYZ Launch | 02_Drafts space.
2. On the header, click on Create | Create Content. The first pane of the Create Content wizard appears. You can track your progress through the wizard from the list of steps at the left of the pane.
3. Provide the name of the HTML file, select HTML as the Content Type, and click on the Next button. The Enter Content pane of the wizard appears, as shown in the next screenshot. Note that Enter Content is now highlighted in the list of steps at the left of the pane. You can see that there is a comprehensive set of tools to help you format your HTML document.
4. Enter some text, using some of the formatting features. If you know HTML, you can also use the HTML editor by clicking on the HTML icon. The HTML source editor is displayed. Once you update the HTML content, click on the Update button to return to the Enter Content pane of the wizard, with the contents updated.
5. After the content is entered and edited in the Enter Content pane, click on Finish. You will see the Modify Content Properties screen, which can be used to update the metadata associated with the content. Give the file a name with .html as the extension. Also, you will notice that the Inline Editing checkbox is selected by default.
6. Once you are done with editing the properties, click on the OK button to return to the 02_Drafts space, with your newly created file inserted.

You can launch the newly created HTML file by clicking on it. Your browser launches most of the common file types, such as HTML, text, and PDF. If the browser cannot recognize the file, you will be prompted with the Windows dialog box containing a list of applications, from which you must choose one. This is the normal behavior if you try to launch a file on any Internet page.

Uploading binary files – Word, PDF, Flash, Image, and Media

Using the web client, you can upload content from your hard drive. Choose a file from your hard disk that is not an HTML or text file. I chose Alfresco_CIGNEX.docx from my hard disk for the sample application. Ensure that you are in the Intranet | Marketing Communications | Product XYZ Launch | 02_Drafts space. To upload a binary file in a space, follow these steps:

1. In the space header, click on the Add Content link. The Add Content dialog appears.
2. To specify the file that you want to upload, click on Browse.
3. In the File Upload dialog box, browse to the file that you want to upload. Click Open. Alfresco inserts the full path name of the selected file in the Location textbox.
4. Click on the Upload button to upload the file from your hard disk to the Alfresco repository. A message informs you that your upload was successful, as shown in the following screenshot. Click OK to confirm.
5. The Modify Content Properties dialog appears. Verify the pre-populated properties and add information in the textboxes.
6. Click OK to save and return to the 02_Drafts space. The file that you uploaded appears in the Content Items pane. Alfresco extracts the file size from the properties of the disk file and includes the value in the size row.

Editing content

You can edit content in Alfresco in three different ways: by using the Edit Online, Edit Offline, and Update actions. Note that you need to have edit permissions on the content to edit it.

Online editing of HTML, text, and XML

HTML files and plain text files can be created and edited online. If you have edit access to a file, you will notice a small pencil (Edit Online) icon, as shown in the following screenshot. Clicking on the pencil icon will open the file in its editor.
Each file type is edited in its own WYSIWYG editor. Once you select to edit online, a working copy of the file will be created for editing, whereas the original file will be locked, as shown in the next screenshot. The working copy can be edited further, as needed, by clicking on the Edit Online button. Once you are done with editing, you can commit all the changes to the original document by clicking on the Done Editing icon. If, for some reason, you decide to cancel the editing of a document and discard any changes, you can do that by clicking on the Cancel Editing button. If you cancel the editing of a document, the associated working copy will be deleted and all changes made to it since it was checked out will be lost.

The working copy can be edited by any user who has edit access to the document or to the folder containing the document. For example, if user1 created the working copy and user2 has edit access to the document, then both user1 and user2 can edit the working copy. Consider a scenario where user1 and user2 are editing the working copy simultaneously. If user1 commits the changes first, then the edits done by user2 will be lost. Hence, it is important to follow best practices when editing the working copy. Some of these best practices are listed here for your reference:

- Secure the edit access to the working copy to avoid multiple users simultaneously editing the file.
- Save the working copy after each edit to avoid losing the work done.
- Follow a process of allowing only the owner of the document to edit the working copy; if others need to edit it, they can claim ownership.
- Trigger a workflow on the working copy to confirm the changes before committing them.

Offline editing of files

If you wish to download a file to your local machine, edit it locally, and then upload the updated version to Alfresco, you might consider using the Edit Offline option (pencil icon). Once you click on the Edit Offline button, the original file will be locked automatically and a working copy of the file will be created for download. You will then get an option to save the working copy of the document locally on your laptop or personal computer. If you don't want the files to be downloaded automatically for offline editing, you can turn off this feature. In order to achieve this, click on the User Profile icon in the top menu and uncheck the option for Offline Editing, as shown here:

The working copy can be updated by clicking on the Upload New Version button. Once you have finished editing the file, you can commit all the changes to the original document by clicking on the Done Editing icon, or you can cancel all the changes by clicking on the Cancel Editing button.

Uploading updated content

If you have edit access to a binary file, you will notice the Update action icon in the drop-down list under the More actions link. Upon clicking on the Update icon, the Update pane opens. Click on the Browse button to upload the updated version of the document from your hard disk. It is always a good practice to check out the document and update the working copy, rather than directly updating the document. Checking the file out avoids conflicting updates by locking the document, as explained in the previous section.

Content actions

Content will have clickable actions, as shown in the upcoming screenshot. These clickable actions (icons) will be dynamically generated for each content item based on the current user's permissions for that content.
For example, if you have copy permission for the content, you will notice the Copy icon as a clickable action for that content.

Deleting content

Click on the Delete action, from the list of content actions, to delete the content. Please note that when content is deleted, all the previous versions of that content will also be deleted.

Moving or copying content using the clipboard

From the list of content actions, as shown in the preceding screenshot, you can click on the Cut action to move content to the clipboard. Then, you can navigate to any space hierarchy and paste this particular content, as required. Similarly, you can use the Copy action to copy the content to another space.

Creating a shortcut to the content for quick access

If you have to access a particular content item very frequently, you can create a shortcut (similar to the Favorite option in Internet browsers and Windows) to that content, in order to reach it in one click. From the list of content actions, as shown in the preceding screenshot, you can click on the Create Shortcut action to create a shortcut to the existing content. Shortcuts are listed in the left-hand side Shelf.

Managing content properties

Every content item in Alfresco will have properties associated with it. Refer to the preceding screenshot to see the list of properties, such as Title, Description, Author, Size, and Creation Date. These properties are associated with the actual content file, named Alfresco_CIGNEX.docx. The content properties are stored in a relational database and are searchable using the Advanced Search options.

What is Content Metadata?

Content properties are also known as Content Metadata. Metadata is structured data that describes the characteristics of the content. It shares many similar characteristics with the cataloguing that takes place in libraries. The term "meta" derives from the Greek word denoting a nature of a higher order or more fundamental kind. A metadata record consists of a number of predefined elements representing specific attributes of the content, and each element can have one or more values. Metadata is a systematic method for describing resources and thereby improving access to them. If access to the content will be required, then it should be described using metadata, so as to maximize the ability to locate it. Metadata provides the essential link between the information creator and the information user. While the primary aim of metadata is to improve resource discovery, metadata sets are also being developed for other reasons, including:

- Administrative control
- Security
- Management information
- Content rating
- Rights management

Metadata extractors

Typically, in most content management systems, once you upload a content file, you need to add the metadata (properties), such as title, description, and keywords, to the content manually. Most content, such as Microsoft Office documents, media files, and PDF documents, contains properties within the file itself. Hence, it is double the effort having to enter those values again in the content management system along with the document. Alfresco provides built-in metadata extractors for popular document types, which extract the standard metadata values from a document and populate them automatically. This is very useful if you are uploading documents through the FTP, CIFS, or WebDAV interfaces, where you do not have to enter the properties manually, as Alfresco will transfer the document properties automatically.
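To see where those extracted values end up, the following is a minimal sketch using Alfresco's server-side JavaScript API, assuming it runs inside an Execute script rule action, where the document variable refers to the incoming node:

// Minimal sketch, assuming Alfresco's server-side JavaScript API inside a rule
// script: the extracted values land in the node's cm: properties.
var title = document.properties["cm:title"];             // populated by the extractor
var description = document.properties["cm:description"]; // populated by the extractor
logger.log("Extracted title: " + title + ", description: " + description);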
Editing metadata

To edit metadata, you need to click on the Edit Metadata icon in the content details view. Refer to the Edit Metadata icon shown in the screenshot, which shows a detailed view of the Alfresco_CIGNEX.docx file. You can update the metadata values, such as Name and Description, for your content items. However, certain metadata values, such as Creator, Created Date, Modifier, and Modified Date, are read-only and you cannot change them. Certain properties, such as Modifier and Modified Date, will be updated by Alfresco automatically whenever the content is updated.

Adding additional properties

Additional properties can be added to the content in two ways. One way is to extend the data model and define more properties in a content type. The other way is to dynamically attach properties and behavior through aspects. By using aspects, you can add additional properties, such as Effectivity, Dublin Core Metadata, and Thumbnailable, to the content.
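As a taste of the second approach, an aspect can also be attached programmatically. The following is a minimal sketch, again assuming Alfresco's server-side JavaScript API inside a rule script; the cm:publisher value is a hypothetical example:

// Minimal sketch, assuming Alfresco's server-side JavaScript API inside a rule
// script where 'document' is the node being processed.
if (!document.hasAspect("cm:dublincore")) {
    document.addAspect("cm:dublincore");   // attach the Dublin Core Metadata aspect
}
document.properties["cm:publisher"] = "Marketing Communications"; // hypothetical value
document.save();                           // persist the property change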