
How-To Tutorials - Web Development


Introducing Hierarchical Menu in TYPO3

Packt
16 Nov 2010
6 min read
TYPO3 Templates: Create and modify templates with TypoScript and TemplaVoila

• Build dynamic and powerful TYPO3 templates using TypoScript, TemplaVoila, and other core technologies.
• Customize dynamic menus, logos, and headers using tricks you won't find in the official documentation.
• Build content elements and template extensions to overhaul and improve TYPO3's default back-end editing experience.
• Follow along with the step-by-step instructions to build a site from scratch using all the lessons in the book in a practical example.

Page tree concepts

We are about to dive into all of the little details, but there are a few basic concepts that we need to review first, so we're going to make sure we have a complete definition that avoids any confusion:

• Page tree: Our TYPO3 page tree is all of the pages and folders that we work with. This includes the home page, About Us, subpages, and even non-public items such as the storage folder in our example site. If we have a very simple website, it could look like this:

  Home
  About Us
    Staff

• Level: Our page tree will almost always have pages, subpages, and pages under those. In TYPO3, these are considered levels, and they increase as you go deeper into the page tree. In our extremely simple website from the example above, both Home and About Us are at the base (or root) of our page tree, so they are on level 0. The Staff page is underneath the About Us page in the hierarchy, so it is on level 1. If we added a page for a photo gallery of our last staff lunch as a subpage to the Staff page, then it would be at level 2:

  Home (Level 0)
  About Us (Level 0)
    Staff (Level 1)
      Staff Lunch Gallery (Level 2)

• Rootline: TYPO3 documentation actually has a few different uses for the term "rootline", but for the menu objects it is the list of pages from your current page or level moving up to the root page. In our example above, the current rootline from the Staff Lunch Gallery is Staff Lunch Gallery | Staff | About Us.

Before we look at all the different kinds of menus in TYPO3 and all their little differences, we need to explore the base TypoScript object for all of them: HMENU. HMENU generates hierarchical menus, and everything related to menus in TYPO3 is controlled by it. As the base object, HMENU is the one thing that every type of menu is guaranteed to have in common. If we understand how HMENU is creating its hierarchical menu, then everything else is just styling. We can already see an example of HMENU being used in our own TypoScript template setup by looking at the menus that the TemplaVoila Wizard generated for us:

## Main Menu [Begin]
lib.mainMenu = HMENU
lib.mainMenu.entryLevel = 0
lib.mainMenu.wrap = <ul id="menu-area">|</ul>
lib.mainMenu.1 = TMENU
lib.mainMenu.1.NO {
  allWrap = <li class="menu-item">|</li>
}
## Main Menu [End]

## Submenu [Begin]
lib.subMenu = HMENU
lib.subMenu.entryLevel = 1
lib.subMenu.wrap = <ul id="submenu-area">|</ul>
lib.subMenu.1 = TMENU
lib.subMenu.1.NO {
  allWrap = <li class="submenu-item">|</li>
}
## Submenu [End]

We can see that the wizard created two new HMENU objects, lib.mainMenu and lib.subMenu, and assigned properties for the entry level and HTML tags associated with each menu. We're about to learn what those specific properties mean, but we can already use the code from the wizard as an example of how HMENU is created and how properties are defined for it.

Types of menu objects

The HMENU class does not output anything directly. To generate our menus, we must define a menu object and assign properties to it.
In our current menus, the TemplaVoila Wizard generated a menu object for each HMENU in the following highlighted lines:

## Main Menu [Begin]
lib.mainMenu = HMENU
lib.mainMenu.entryLevel = 0
lib.mainMenu.wrap = <ul id="menu-area">|</ul>
lib.mainMenu.1 = TMENU
lib.mainMenu.1.NO {
  allWrap = <li class="menu-item">|</li>
}
## Main Menu [End]

## Submenu [Begin]
lib.subMenu = HMENU
lib.subMenu.entryLevel = 1
lib.subMenu.wrap = <ul id="submenu-area">|</ul>
lib.subMenu.1 = TMENU
lib.subMenu.1.NO {
  allWrap = <li class="submenu-item">|</li>
}
## Submenu [End]

There are a handful of classes for menu objects that can be used by HMENU to generate menus in TYPO3, but we are going to concentrate on the two most powerful and flexible options: TMENU and GMENU. The TemplaVoila Wizard used TMENU in our current menu; it is used to generate text-based menus. Menus built with TMENU output the title of each page in the menu as a text link, and then we can use HTML and CSS to add styling and layout options. Menus created with the GMENU class are considered graphic menus. We can use GMENU to dynamically generate images from our page titles, so that we can use fancy fonts and effects like drop shadow and emboss that are not supported equally in CSS by all browsers.

Menu item states

The menu system in TYPO3 allows us to define states for different menu options. For example, using the state definitions, we can customize the behavior of menu items when they are active or rolled over. The normal state (NO) is available and set by default, but all of the other menu item states must be enabled in TYPO3 by adding code to our template like this: lib.mainMenu.1.ACT = 1. All menu objects share a common set of menu item states.

HMENU properties

Because HMENU is the root of all of our other menu objects, any of the properties that we learn for HMENU will be applicable to all of the menu options that we might use on future websites. I've included a list of the TypoScript properties that we are most likely to use in the TypoScript template setup, but you can see the complete list in the TSref (http://typo3.org/documentation/document-library/references/doc_core_tsref/current). If you haven't used TypoScript much, and this is too much information all at once, don't worry. It will make more sense in a few pages when we start experimenting on our own site. Then, this will serve as a great reference.

As we've already witnessed in the main menu, TYPO3 sorts our menu by the order in the page tree by default. We can use the alternativeSortingField property to list fields for TYPO3 to use in the database query. For example, if we wanted to list the main menu items in reverse alphabetical order, we could set alternativeSortingField in our template:

lib.mainMenu.1 = TMENU
lib.mainMenu.1.alternativeSortingField = title desc
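The state definitions follow the same property pattern as the NO state shown earlier. As a minimal, hedged sketch (not code from the book; the CSS class name is our own invention), enabling and styling the active state in the wizard's main menu might look like this:

lib.mainMenu.1.ACT = 1
lib.mainMenu.1.ACT {
  # Give the active page's list item an extra class for styling
  allWrap = <li class="menu-item menu-item-active">|</li>
}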


Skinner's Toolkit for Plone 3 Theming (Part 2)

Packt
20 Oct 2009
4 min read
(For more resources on Plone, see here.)

Text editors

The last key piece to successfully skinning a site is to choose a text editor or CSS editor that matches your needs and plays well with Plone. We are not talking about a word processor here, like Microsoft Word or Pages; rather, a text editor is a type of program used for editing plain text files. Text editors are often provided with operating systems or software development packages, and can be used to change configuration files and programming language source code. We'll look at a few of the more popular text editors that are appropriate for Plone development and theming.

TextMate

TextMate is a combination of text editor and programming tool that is exclusively for the Mac, and can be found at http://macromates.com. One of the key joys of working with TextMate is that it lets you open up an entire file structure at once to make navigation between related files easier. For Plone, this is essential. Rather than opening the entire buildouts folder, or even the plonetheme.copperriver folder, you generally only want to open the structure closest to the files you need in order to keep performance snappy; in this case, mybuildout[rockaway]/src/plonetheme.copperriver/plonetheme/copperriver/. TextMate opens the entire project in a clean interface with an easily navigable structure. Without this feature, skinning for Plone would be much more time-consuming.

TextMate also offers numerous programmer-related tools:

• You can open two or more files at once, and using the diff option you can compare the files easily
• Subversion (svn) support
• Ability to search and replace in a project
• Regular expression search and replace (grep)
• Auto-indent for common actions such as pasting text
• Auto-completion of brackets and other characters
• Clipboard history
• Foldable code blocks
• Support for more than 50 languages
• Numerous key combinations (for example, Apple + T opens a search window that makes it easy to locate a file)
• Themable syntax highlight colors
• Visual bookmarks to jump between places in a file
• Copy/paste of columns of text
• Bundles
• And much, much more

The Bundle feature is one of the more interesting aspects of the tool. If you look at the HTML bundle, for example, it shows a list of common actions that you might wish to perform in a given document and, on the right, the code that spawns each action and the hot-key that activates it. There's even a Zope/Plone TextMate support bundle, found at http://plone.org/products/textmate-support, that was developed by some of Plone's core developers. It enhances TextMate's existing support for Python, XML, (X)HTML, CSS, and Restructured Text by adding features aimed specifically at the modern-day Zope and Plone developer. For the geeks in the audience, the bundle's features include: doctest support (restructured text with inline Python syntax and auto-indent of Python code), pdb support (for debugging), ZCML support (no more looking up directives, thanks to handy and exhaustive snippets), and a ZPT syntax that marries the best of both worlds (XML strictness with the goodness of TextMate's HTML support). This bundle plus TextMate's other capabilities make switching to developing for Plone on a Mac a good idea any day!

As well as assigning a single key equivalent to a bundle item, it is possible to assign a tab trigger to the item. This is a sequence of text that you enter in the document, followed by pressing the Tab key.
This will remove the sequence entered and then execute the bundle item. TextMate is full of hot-keys and features in general, yet it's surprisingly compact. Thankfully, the documentation is thorough. TextMate is a dream for themers and programmers alike. For those who are still new at CSS, another tool might be a good place to start, but for power users, TextMate is the primary tool of choice.


Playing with Max 6 Framework

Packt
06 Sep 2013
17 min read
(For more resources related to this topic, see here.)

Communicating easily with Max 6 – the [serial] object

The easiest way to exchange data between your computer running a Max 6 patch and your Arduino board is via the serial port. The USB connector of our Arduino boards includes the FTDI integrated circuit EEPROM FT-232 that converts the RS-232 plain old serial standard to USB. We are going to use our basic USB connection between Arduino and the computer again in order to exchange data here.

The [serial] object

We have to remember the [serial] object's features. It provides a way to send and receive data from a serial port. To do this, there is a basic patch including basic blocks, and we are going to improve it progressively all through this article. The [serial] object is like a buffer we have to poll as often as we need. If messages are sent from Arduino to the serial port of the computer, we have to ask the [serial] object to pop them out. We are going to do this in the following pages. This article is also a pretext for me to give you some of my tips and tricks in Max 6 itself. Take them and use them; they will make your patching life easier.

Selecting the right serial port

We have used the message (print) sent to [serial] in order to list all the serial ports available on the computer, and then checked the Max window. That was not the smartest solution. Here, we are going to design a better one. We have to remember the [loadbang] object. It fires a bang (here, a (print) message) to the following object as soon as the patch is loaded. It is useful for setting things up and initializing values, as we would do inside the setup() block in our Arduino board's firmware. Here, we do that in order to fill the serial port selector menu. When the [serial] object receives the (print) message, it pops out a list of all the serial ports available on the computer from its right outlet, prepended by the word port. We then process the result by using [route port], which only parses lists prepended with the word port.

The [t] object is an abbreviation of [trigger]. This object sends the incoming message to many locations, as is written in the documentation, given the following arguments:

• b means bang
• f means float number
• i means integer
• s means symbol
• l means list (that is, at least one element)

We can also use constants as arguments, and as soon as the input is received, the constant will be sent as it is. At last, [trigger] outputs messages in a particular order: from the rightmost outlet to the leftmost one. So here we take the list of serial ports received from the [route] object; we send the clear message to the [umenu] object (the list menu on the left side) in order to clear the whole list. Then the list of serial ports is sent as a list (because of the first argument) to [iter]. [iter] splits a list into its individual elements. [prepend] adds a message in front of the incoming input message. That means the global process sends messages to the [umenu] object similar to the following:

append xxxxxx
append yyyyyy

Here xxxxxx and yyyyyy are the serial ports that are available. This creates the serial port selector menu by filling the list with the names of the serial ports. This is one of the typical ways to create helpers, in this case the menu, in our patches using UI elements. As soon as you load this patch, the menu is filled, and you only have to choose the right serial port you want to use.
As soon as you select one element in the menu, the number of the element in the list is fired to its leftmost outlet. We prepend this number by port and send that to [serial], setting it up to the right serial port.

Polling system

One of the most used objects in Max 6 to send regular bangs in order to trigger things or count time is [metro]. We have to use at least one argument; this is the time between two bangs in milliseconds. Banging the [serial] object makes it pop out the values contained in its buffer. If we want to send data continuously from Arduino and process it with Max 6, activating the [metro] object is required. We then send a regular bang and can have an update of all the inputs read by Arduino inside our Max 6 patch. Choosing a value between 15 ms and 150 ms is good, but it depends on your own needs. Let's now see how we can read, parse, and select useful data being received from Arduino.

Parsing and selecting data coming from Arduino

First, I want to introduce you to a helper firmware inspired by the Arduino2Max page on the Arduino website, but updated and optimized a bit by me. It provides a way to read all the inputs on your Arduino, to pack all the data read, and to send them to our Max 6 patch through the [serial] object.

The readAll firmware

The following code is the firmware:

int val = 0;

void setup() {
  Serial.begin(9600);
  pinMode(13, INPUT);
}

void loop() {
  // Check serial buffer for incoming characters
  if (Serial.available() > 0) {
    // If an 'r' is received, then read all the pins
    if (Serial.read() == 'r') {
      // Read and send analog pins 0-5 values
      for (int pin = 0; pin <= 5; pin++) {
        val = analogRead(pin);
        sendValue(val);
      }
      // Read and send digital pins 2-13 values
      for (int pin = 2; pin <= 13; pin++) {
        val = digitalRead(pin);
        sendValue(val);
      }
      Serial.println(); // Carriage return to mark end of data flow.
      delay(5);         // Prevent buffer overload
    }
  }
}

void sendValue(int val) {
  Serial.print(val);
  Serial.write(32); // Add a space character after each value sent
}

For starters, we begin the serial communication at 9600 bauds in the setup() block. As usual with serial communication handling, we first check whether there is something in the serial buffer of Arduino by using the Serial.available() function. If something is available, we check whether it is the character r. Of course, we could use any other character; r here stands for read, which is basic. If an r is received, it triggers the read of both analog and digital ports. Each value (the val variable) is passed to the sendValue() function; this basically prints the value to the serial port and adds a space character in order to format things a bit and provide easier parsing by Max 6. We could easily adapt this code to read only some inputs and not all (see the sketch after this section). We could also remove the sendValue() function and find another way of packing data. At the end, we push a carriage return to the serial port by using Serial.println(). This creates a separator between each pack of data that is sent.

Now, let's improve our Max 6 patch to handle this pack of data being received from Arduino.

The ReadAll Max 6 patch

The following screenshot is the ReadAll Max patch that provides a way to communicate with our Arduino.

Requesting data from Arduino

First, we see a [t b b] object. It is also a trigger, ordering bangs provided by the [metro] object. Each bang received triggers another bang to another [trigger] object, then another one to the [serial] object itself. The [t 13 r] object can seem tricky. It just triggers a character r and then the integer 13. The character r is sent to [spell], which converts it to ASCII code and then sends the result to [serial]. 13 is the ASCII code for a carriage return. This structure provides a way to fire the character r to the [serial] object, which means to Arduino, each time the metro bangs. As we already saw in the firmware, it triggers Arduino to read all its inputs, then to pack the data, and then to send the pack to the serial port for the Max 6 patch. To summarize what the metro triggers at each bang:

1. Send the character r to Arduino.
2. Send a carriage return to Arduino.
3. Bang the [serial] object. This triggers Arduino to send back all its data to the Max patch.

Parsing the received data

Under the [serial] object, we can see a new structure beginning with the [sel 10 13] object. This is an abbreviation for the [select] object. This object selects an incoming message and fires a bang to a specific output if the message equals the argument corresponding to that output's position. Basically, here we select 10 or 13; the last output pops the incoming message out if it doesn't equal any argument. Here, we don't want to consider a line feed (ASCII code 10). This is why we put it as an argument but don't do anything when it is selected. It is a nice trick to avoid having this message trigger anything, and even to keep it away from the right output of [select].

Here, we send all the messages received from Arduino, except 10 and 13, to the [zl group 78] object. The latter is a powerful list-processing object with many features. The group argument makes it easy to group the messages received into a list. The last argument is there to make sure we don't have too many elements in the list. As soon as [zl group] is triggered by a bang, or the list length reaches the length argument value, it pops the whole list out of its left outlet. Here, we "accumulate" all the messages received from Arduino, and as soon as a carriage return is sent (remember, we are doing that in the last rows of the loop() block in the firmware), a bang is sent and all the data is passed to the next object.

We now have a big list with all the data inside it, with each value separated from the others by a space character (the famous ASCII code 32 we added in the last function of the firmware). This list is passed to the [itoa] object. itoa stands for integer to ASCII. This object converts integers to ASCII characters. The [fromsymbol] object converts a symbol to a list of messages. Finally, after this [fromsymbol] object we have our big list of values, separated by spaces and totally readable. We then have to unpack the list. [unpack] is a very useful object that provides a way to cut a list of messages into individual messages. We can notice here that we implemented exactly the opposite process in the Arduino firmware when we packed each value into a big message. [unpack] takes as many arguments as we want. It requires knowing the exact number of elements in the list sent to it. Here we send 12 values from Arduino, so we put 12 i arguments; i stands for integer. If we sent a float, [unpack] would cast it to an integer. It is important to know this; too many students get stuck troubleshooting this in particular. We are only playing with integers here. Indeed, the ADC of Arduino provides data from 0 to 1023, and the digital inputs provide 0 or 1 only.
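As a minimal sketch of the adaptation mentioned above (our own illustration, not the book's code), an analog-only variant keeps the same 'r' trigger, space separators, and closing carriage return, so the Max 6 patch would only need its [unpack] arguments reduced from 12 to 6:

int val = 0;

void setup() {
  Serial.begin(9600); // Same baud rate as the readAll firmware
}

void loop() {
  if (Serial.available() > 0) {
    // Same protocol: an 'r' requests one snapshot of the inputs
    if (Serial.read() == 'r') {
      // Read and send only analog pins 0-5
      for (int pin = 0; pin <= 5; pin++) {
        val = analogRead(pin);
        Serial.print(val);
        Serial.write(32); // Space separator, as in sendValue()
      }
      Serial.println(); // Carriage return marks the end of the pack
      delay(5);         // Prevent buffer overload
    }
  }
}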
We attached a number box to each output of the [unpack] object in order to display each value. Then we used a [change] object. The latter is a nice object: when it receives a value, it passes it to its output only if it is different from the previous value received. It provides an effective way to avoid sending the same value again when it isn't required. Here, I chose the argument -1 because this is not a value sent by the Arduino firmware, so I'm sure that the first element sent will be parsed. We now have all our values available and can use them for different jobs. But I propose a smarter way, which will also introduce a new concept.

Distributing received data and other tricks

Let's introduce here some other tricks to improve our patching style.

Cordless trick

We often have to use the same data to feed more than one object in our patches. A good way to avoid messy patches with a lot of cords and wires everywhere is to use the [send] and [receive] objects. These objects can be abbreviated as [s] and [r]; they generate communication buses and provide a wireless way to communicate inside our patches. The three structures shown are equivalent. The first one is a basic cord: as soon as we send data from the upper number box, it is transmitted to the one at the other end of the cord. The second one generates a data bus named busA: as soon as you send data into [send busA], each [receive busA] object in your patch will pop out that data. The third example is the same as the second one, but it generates another bus named busB.

This is a good way to distribute data. I often use this for my master clock, for instance: I have one and only one master clock banging to [send masterClock], and wherever I need that clock, I use [receive masterClock] and it provides the data I need. If you check the global patch, you can see that we distribute data to the structures at the bottom of the patch, but these structures could also be located elsewhere. Indeed, one of the strengths of any visual programming framework such as Max 6 is the fact that you can visually organize every part of your code exactly as you want in your patcher. And please, do that as much as you can; it will help you support and maintain your patch through your long development months.

Check the previous screenshot. I could have linked the [r A1] object at the top left corner to the [p process03] object directly, but it is more readable if I keep the process chains separate. I often work this way with Max 6; this is one of the multiple tricks I teach in my Max 6 course. And of course, I introduced the [p] object, which is the [patcher] abbreviation. Let's check a couple of tips before we continue with some good examples involving Max 6 and Arduino.

Encapsulation and subpatching

When you open Max 6 and go to File | New Patcher, it opens a blank patcher. The latter, if you recall, is the place where you put all the objects. There is another good feature named subpatching: you can create new patchers inside patchers, and embed patchers inside patchers as well. A patcher contained inside another one is also named a subpatcher. Let's see how it works with the patch named ReadAllCutest.maxpat. There are four new objects replacing the whole structures we designed before. These objects are subpatchers.
If you double-click on them in patch lock mode, or hold the command key (or Ctrl on Windows) and double-click on them in patch edit mode, you'll open them. Let's see what is inside them.

The [requester] subpatcher contains the same architecture that we designed before, but you can see the brown 1 and 2 objects and another blue 1 object. These are inlets and outlets. They are required if you want your subpatcher to be able to communicate with the patcher that contains it. (Of course, we could use the [send] and [receive] objects for this purpose too.) The position of these inlets and outlets in your subpatcher matters: if you move the 1 object to the right of the 2 object, the numbers get swapped, and the corresponding inlets in the upper patch get swapped too. You have to be careful about that, but again, you can organize them exactly as you want and need. Check the next screenshot, and then the root patcher containing this subpatcher: it automatically inverts the inlets, keeping things consistent.

Let's now have a look at the other subpatchers: the [p portHandler], [p dataHandler], and [p dataDispatcher] subpatchers. In the last figure, we can see only one inlet and no outlets. Indeed, we just encapsulated the global data dispatcher system inside the subpatcher, and the latter generates its data buses with [send] objects. This is an example where we don't need, and even don't want, to use outlets. Using outlets would be messy, because we would have to link each element requesting this or that value from Arduino with a lot of cords.

In order to create a subpatcher, you only have to type n to create a new object, then type p, a space, and the name of your subpatcher. While I designed these examples, I used something that works faster than creating a subpatcher, copying and pasting the structure inside, removing the structure outside, and adding inlets and outlets. This feature is named encapsulate and is part of the Edit menu of Max 6. Select the part of the patch you want to encapsulate inside a subpatcher, click on Encapsulate, and voilà! You have just created a subpatcher including your structures, connected to inlets and outlets in the correct order.

Encapsulate and de-encapsulate features

You can also de-encapsulate a subpatcher; this follows the opposite process, removing the subpatcher and popping the whole structure that was inside directly outside. Subpatching helps to keep things well organized and readable. We can imagine that we have to design a whole patch with a lot of wizardry and tricks inside it. This patch is a processing unit, and as soon as we know what it does, after having finished it, we don't want to know how it does it but only to use it. This provides a nice abstraction level, keeping some processing units closed inside boxes and not messing up the main patch. You can copy and paste subpatchers; this is a powerful way to quickly duplicate process units if you need to. But each subpatcher is totally independent of the others. This means that if you need to modify one because you want to update it, you have to do that individually in each subpatcher of your patch, which can be really hard. Let me introduce you to the last pure Max 6 concept, named abstractions, before I go further with Arduino.

Abstractions and reusability

Any patch created and saved can be used as a new object in another patch.
We can do this by creating a new object by typing n in a patcher, and then typing the name of our previously created and saved patch. A patch used in this way is called an abstraction. In order to call a patch as an abstraction in a patcher, the patch has to be in the Max 6 path in order to be found. You can check the path known by Max 6 by going to Options | File Preferences. Usually, if you put the main patch in a folder and the other patches you want to use as abstractions in that same folder, Max 6 finds them.

The concept of abstraction in Max 6 itself is very powerful because it provides reusability. Indeed, imagine that you have a lot of small (or big) patch structures that you use every day, every time, and in almost every project. You can put them into a specific folder on your disk included in your Max 6 path, and then you can call (we say instantiate) them in every patch you are designing. Since each patch using an abstraction holds only a reference to the one patch that was instantiated, you just need to improve your abstraction; each time you load a patch using it, the patch will have the up-to-date abstraction loaded inside it. This makes things really easy to maintain through the development months or years. Of course, if you totally change the abstraction to fit a dedicated project or patch, you'll have problems using it with other patches, so you have to be careful to maintain at least short documentation of your abstractions.

Let's now continue by describing some good examples with Arduino.


Social Networks and Extending the User Profile in Drupal: Part-1

Packt
27 Nov 2009
5 min read
The term "social network" means different things to different people. However, the starting point of any network is the individuals within it. A user profile provides a place for site members to describe themselves, and for other site members to find out about them. In this article, we will examine how to create a user profile that is aligned with the goals of your site. Identifying the Goals of Your Profile User profiles can be used for a range of purposes. On one end of the spectrum, a profile can be used to store basic information about the user. On the other end of the spectrum, a user profile can be a place for a user to craft and share an online identity. As you create the functionality behind your user profile page, you should know the type of profile you want to create for your users. Drupal ships with a core Profile module. This module is a great starting point, and for many sites will provide all of the functionality needed. If, however, you want a more detailed profile, you will probably need to take the next step: building a node-based profile. This involves creating a content type that stores profile information. Node-based profiles offer several practical advantages; these nodes can be extended using CCK fields, and they can be categorized using a taxonomy. In Drupal 6, user profiles become nodes through using the Content Profile module. The most suitable approach to user profiles will be determined by the goals of your site. Using Drupal's core Profile module provides some simple options that will be easy to set up and use. Extending profiles via the Content Profile module allows for a more detailed profile, but requires more time to set up. In this article, we will begin by describing how to set up profiles using the core Profile module. Then we will look at how to use the Content Profile module. Using the Core Profile Module To use the core profile module, click on the Administer | Site building | Modules link, or navigate to admin/build/modules. In the Core – optional section, enable the Profile module. Click the Save configuration button to submit the form and save the settings. Once the Profile module has been enabled, you can see a user's profile information by navigating to http://example.com/user/UID, where UID is the user's ID number on the site. To see your own user profile, navigate to http://example.com/user when logged in, or click the My Account link. The default user profile page exposes some useful functionality. First, it shows the user's profile, and secondly, it provides the Edit tab that allows a user to edit their profile. The Edit tab will only be visible to the owner of the profile, or to administrative users with elevated permissions. Other modules can add tabs to the core Profile page. As shown in the preceding screenshot by Item 1, the core Tracker module adds a Track tab; this tab gives an overview of all of the posts to which this user has participated. As shown in the preceding screenshot by item, the Contact tab has been added by the core Contact module. The Contact module allows users to contact one another via the site. Customizing the Core Profile The first step in customizing the user profile requires us to plan what we want the profile to show. By default, Drupal only requires users to create a username and provide an email address. From a user privacy perspective, this is great. However, for a teacher trying to track multiple students across multiple classes, this can be less than useful. 
For this sample profile, we will add two fields using the core Profile module: a last name and a birthday. The admin features for the core Profile module are accessible via the Administer | User Management | Profiles link, or you can navigate to admin/user/profile. As seen in the preceding screenshot, the core Profile module offers the following field types for customization:

• single-line textfield: adds a single line of text; useful for names or other types of brief information.
• multi-line text field: adds a larger textarea field; useful for narrative-type profile information.
• checkbox: adds a checkbox; useful for Yes/No options.
• list selection: allows the site admin to create a set of options from which the user can select. Functionally, this is similar to a controlled vocabulary created using the core Taxonomy module.
• freeform list: adds a field where the user can enter a comma-separated list. Functionally, this is similar to a tag-based vocabulary created using the core Taxonomy module.
• URL: allows users to enter a URL; useful for allowing users to add a link to their personal blog.
• date: adds a date field.

In our example profile, adding a last name and a birthday, our last name will be a single-line textfield and our birthday will be a date field.


CodeIgniter MVC – The Power of Simplicity!

Packt
26 Nov 2013
6 min read
(For more resources related to this topic, see here.)

"Simplicity Wins Big!"

Back in the 80s there was a programming language, Ada, that many contracts required to be used. Ada was complex and hard to maintain compared to C/C++. Today Ada fades like Pascal; C/C++ is the simplicity winner in the real-time systems arena.

In telecom, there were two competing standards for network device management protocols in the 90s: CMIP (Common Management Information Protocol) and SNMP (Simple Network Management Protocol). Initially, all telecom requirement papers demanded CMIP support. After several years, research found that developing and maintaining the same system based on CMIP took roughly ten times the effort of SNMP. SNMP is the simplicity winner in the network management systems arena!

In VoIP, or media over IP, H.323 and SIP (Session Initiation Protocol) were competing protocols in the early 2000s. H.323 encoded its messages in a cryptic binary way; SIP makes it all textual, easy to understand via a text editor. Today almost all endpoint devices are powered by SIP, while H.323 has become a niche protocol for the VoIP backbone. SIP is the simplicity winner in the VoIP arena!

Back in 2010 I was looking for a good PHP platform to develop the web application for my startup's first product, Logodial Zappix (http://zappix.com). I got a recommendation to use Drupal for this. I tried the platform and found it very heavy to manipulate and change for the exact user interaction flow and experience I had in mind. Many times I had to compromise, and the overhead of the platform was horrible. Just make a Hello World app and tons of irrelevant code get into the project. Try to write free JavaScript and you find yourself struggling with the platform, which keeps you from the creativity of client-side JavaScript and its add-ons. I decided to look for a better platform for my needs.

Later on I heard about the Zend Framework (an MVC, Model-View-Controller, typed framework). I tried to work with it, as it is MVC-based with a lot of OOP usage, but I found it heavy. The documentation seems great at first sight, but the more I used it, looking for vivid examples and explanations, the more I found myself in endless closed loops of links, lacking clear explanations and vivid examples. The feeling was that for every matchbox-moving task, I required a semi-trailer of declarations and calls to handle it... though it was MVC typed, which I greatly liked.

Keeping on with my search, I was looking for a simple but powerful MVC-based PHP framework, PHP being my favorite language for the server side. One day in early 2011 I got a note from a friend that there was a light and cool platform named CodeIgniter (CI in brief). I checked the documentation link http://ellislab.com/codeigniter/user-guide/ and was amazed by the very clean, simple, well organized, and well explained browsing experience. Examples? Yes, lots of clear examples, with a great community. It was so great and simple. I felt like the platform designers had made their best effort to produce the simplest and most vivid code, reusable and in clean OOP fashion, from the infrastructure to the last function. I tried making a web app for a trial, loading helpers and libraries and using them, and greatly loved the experience.

Fast forward: today I see a matured CodeIgniter as a Lego-like playground that I know well. I've written tons of models, helpers, libraries, controllers, and views.
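To give a flavor of what those controllers and views look like, here is a minimal, hedged sketch following the classic CI conventions of that era; the file names, class, and view are our own invention, not code from the article:

<?php
// application/controllers/hello.php: a hypothetical minimal CI controller
class Hello extends CI_Controller {

    public function index() {
        // Pass a scalar parameter to the rendered view
        $data['title'] = 'Hello CodeIgniter';
        $this->load->view('hello_view', $data);
    }
}

<!-- application/views/hello_view.php: the matching view -->
<html>
  <head><title><?php echo $title; ?></title></head>
  <body><h1><?php echo $title; ?></h1></body>
</html>

Navigating to index.php/hello would route the request to the controller's index() method, which loads the view with the title variable available.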
CodeIgniter's simplicity enables me to do things fast, clear, well maintained, and expandable. Over time I've gathered the most useful helpers and libraries, Ajax server- and browser-side solutions for reuse, and good links to useful add-ons such as the free grid plugin for CI, http://www.grocerycrud.com/, which keeps improving day by day. Today I see CodeIgniter as a matured, scalable (see the AT&T and Sprint call center web apps based on CI) champion of reusability and simplicity.

The high-level architecture of the CodeIgniter MVC has the controller(s) as the hub of the application session.

The CI controller's main use cases are:

• Handling requests from a web browser as HTTP URI calls, with submitted parameters (for example, submitting a login with credentials) or without parameters (for example, home page navigation).
• Handling asynchronous Ajax requests from the web client, mostly as JSON HTTP POST requests and responses.
• Serving CRON job requests that create HTTP URI requests, calling controller methods, similar to browser navigation, silently from the CRON PHP module.

The CI views' main features:

• Rendered by a controller, optionally with a set of parameters (scalars, arrays, objects).
• Full open access to all the helpers, libraries, and models that their rendering controller has.
• The freedom to integrate any JavaScript or third-party web client-side plugins.

The CI helpers' main features and fashion:

• Flat function sets, protected from duplication risks.
• Can be loaded for use by any controller and accessed by any rendered view.
• Can access any CI resource or library via the &get_instance() service.

The CI libraries' main features and fashion:

• OOP classes that can extend other third-party classes (for example, see the Google Map wrapper in the new book).
• Can access any of the CI resources of other libraries and built-in services via &get_instance().
• Can be used by the CI project controllers and all their rendered views.

The CI model's main features and fashion:

• Similar to libraries, but with access to the default database, which can be expanded to multiple databases and any other CI resource via &get_instance().
• OOP classes that can extend other third-party classes (for example, see the Google Map wrapper in the new book).
• Can access any of the CI resources of other libraries and built-in services via &get_instance().

It seems that CodeIgniter is continuously increasing its popularity, as it has a simple yet high-quality OOP core that enables great creativity, reusability, and code clarity naming conventions, and is easy to extend (a user class extends a CI class), with a growing number of third-party application plugins (packages of views and/or models and/or libraries and/or helpers). I have found CodeIgniter flexible, a great enabler of reusability, with a light infrastructure that enables developer creativity, powered by an active global community. For day-to-day work there is the CI code clarity, high performance, and minimal, controllable footprint (you decide which helpers/libraries/models to load for each controller). Above all, CI is blessed with a very fast learning curve for PHP developers, and many blogs and community sites to share knowledge and to raise and resolve issues and changes. CodeIgniter is the simplicity winner I've found for web app MVC server side.

Summary

This article introduced the CodeIgniter framework and getting started with web-based applications.
Resources for Article:

Further resources on this subject:

• Database Interaction with CodeIgniter 1.7 [Article]
• User Authentication with CodeIgniter 1.7 using Facebook Connect [Article]
• CodeIgniter 1.7 and Objects [Article]


Manage Your Money with Simple Invoices

Packt
13 May 2010
6 min read
As a freelancer I have one primitive motive: I want to do work and get paid. Getting paid means I need to generate invoices and keep track of them. I've tried to manage my invoices via spreadsheets and documents, but keeping track of my payments in a series of disconnected files is a fragile and inefficient process. Simple Invoices provides a solution to this.

Simple Invoices is a relatively young project, and working with it requires that you're willing to do some manual configuration and tolerate the occasional problem. To install and work with the application, you need to be familiar with running a web server on OS X, Windows, or Linux. The next section, Web Server Required, describes some out-of-the-box server packages that let you run a server environment on your personal computer. It's point-and-click easy and perfect for an individual user. Not up for running a web server, but still need a reliable invoicing application? No problem. Visit www.simpleinvoices.com for a list of hosted solutions. Let's get started.

Web Server Required

Simple Invoices is a web application that requires Apache, PHP, and MySQL to function. Even if you're not a system administrator, you can still run a web server on your computer, regardless of your operating system. Windows users can get the required software by installing WAMP from www.wampserver.com. OS X users can install MAMP from www.mamp.info. Linux users can install Apache, MySQL, and PHP5 using their distribution's software repositories. The database administration tool phpMyAdmin makes managing the MySQL database intuitive. Both the WAMP and MAMP installers contain phpMyAdmin, and we'll use it to set up our databases. Take a moment to set up your web server before continuing with the Simple Invoices installation.

Install Simple Invoices

Our first step will be to prepare the MySQL database. Open a web browser and navigate to http://localhost/phpmyadmin. Replace localhost with the actual server address. A login screen will display and prompt you for a user name and password. Enter the root login information for your MySQL install. MAMP users might try root for both the user name and password; WAMP users might try root with no password.

If you plan on keeping your WAMP or MAMP servers installed, setting new root passwords for your MySQL database is a good idea, even if you do not allow external connections to your server.

After you log in to phpMyAdmin, you will see a list of databases on the left sidebar; the main content window displays a set of tabs, including Databases, SQL, and Status. Let's create the database:

1. Click on the Privileges tab to display a list of all users and associated access permissions.
2. Find the Add a New User link and click on it. The Add New User page displays.
3. Complete the following fields:
   • User Name: enter simpleinvoices
   • Host: select Local
   • Password: specify a password for the user; then retype it in the field provided
   • Database for User: select the Create database with same name and grant all privileges option
4. Scroll to the bottom of the page and click the Go button.

This procedure creates the database user and the database at the same time. If you wanted to use a database name different from the user name, you could have selected None for the Database for user option and added the database manually via the Databases tab in phpMyAdmin.
If you prefer to work with MySQL directly, the SQL for the steps we just ran is (the *** in the first line is the password):

CREATE USER 'simpleinvoices'@'localhost' IDENTIFIED BY '***';
GRANT USAGE ON *.* TO 'simpleinvoices'@'localhost' IDENTIFIED BY '***'
  WITH MAX_QUERIES_PER_HOUR 0 MAX_CONNECTIONS_PER_HOUR 0
  MAX_UPDATES_PER_HOUR 0 MAX_USER_CONNECTIONS 0;
CREATE DATABASE IF NOT EXISTS `simpleinvoices`;
GRANT ALL PRIVILEGES ON `simpleinvoices`.* TO 'simpleinvoices'@'localhost';

Now that the database is set up, let's download the stable version of Simple Invoices by visiting www.simpleinvoices.org and following the Download link. The versions are identified by year and version number; at the time of this writing, the stable version is 2010.1. Unzip the Simple Invoices download file into a subdirectory on your web server. Because I like to install a lot of software, I like to keep the application name in my directory structure, so my example installation installs to a directory named simpleinvoices. That makes my installation available at http://localhost/simpleinvoices. Pick a directory path that makes sense for you.

Not sure where the root of your web server resides? Here are some of the default locations for the various server environments:

• WAMP: C:\wamp\www
• MAMP: /Applications/MAMP/htdocs
• Linux: /var/www

Linux users will need to set the ownership of the tmp directory to the web user and make the tmp directory writable. For an Ubuntu system, the appropriate commands are:

chown -R www-data tmp
chmod -R 775 tmp

The command syntax assumes we're working from the Simple Invoices installation directory on the web server. The web user on Ubuntu and other Debian-based systems is www-data. The -R option in both commands applies the changes to all sub-directories and files. With the chmod command, you are granting write access to the web user. If you have problems, or feel like being less secure, you can reduce this step to one command: chmod -R 777 tmp.

We're almost ready to open the Simple Invoices installer, but before we go to the web browser, we need to define the database connection in the config/config.ini file. At a minimum, we need to specify database.params.username and database.params.password with the values we used to set up the database. If you skip this step and try to open Simple Invoices in your web browser, you will receive an error message indicating that your config.ini settings are incorrect. The following screenshot shows the relevant settings in config.ini.

Now, we're ready to start Simple Invoices and step through the graphical installer. Open a web browser and navigate to your installation (for example, http://localhost/simpleinvoices):

1. Step 1: Install Database will display in the browser. Review the database connection information and click the Install Database button.
2. Step 2: Import essential data displays. Click the Install Essential Data button to advance the installation.
3. Step 3: Import sample data displays. We can choose to import sample data or start using the application. The sample data contains a few example billers, customers, and invoices. We're going to set all that up from scratch, so I recommend you click the Start using Simple Invoices button.

At this point the Simple Invoices dashboard displays with a yellow note that instructs us to configure a biller, a customer, and a product before we create our first invoice. See the following screenshot.
You might notice that the default access to Simple Invoices is not protected by a username and password. We can force authentication by adding a user and password via the People > Users screen. Then set the authentication.enabled field in config.ini equal to true.
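Putting the configuration pieces together, here is a hedged sketch of the relevant config/config.ini entries; the key names are the ones given above, while the values are placeholders for your own setup:

database.params.username = simpleinvoices
database.params.password = your-password-here
authentication.enabled = true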

Advanced Indexing and Array Concepts

Packt
26 Dec 2012
6 min read
(For more resources related to this topic, see here.)

Installing SciPy

SciPy is the scientific Python library and is closely related to NumPy. In fact, SciPy and NumPy used to be one and the same project many years ago. In this recipe, we will install SciPy.

How to do it...

In this recipe, we will go through the steps for installing SciPy.

Installing from source: If you have Git installed, you can clone the SciPy repository and build it using the following commands:

git clone https://github.com/scipy/scipy.git
python setup.py build
python setup.py install --user

This installs to your home directory and requires Python 2.6 or higher. Before building, you will also need to install the following packages on which SciPy depends:

• BLAS and LAPACK libraries
• C and Fortran compilers

There is a chance that you have already installed this software as a part of the NumPy installation.

Installing SciPy on Linux: Most Linux distributions have SciPy packages. We will go through the necessary steps for some of the popular Linux distributions:

• In order to install SciPy on Red Hat, Fedora, and CentOS, run the following from the command line: yum install python-scipy
• In order to install SciPy on Mandriva, run: urpmi python-scipy
• In order to install SciPy on Gentoo, run: sudo emerge scipy
• On Debian or Ubuntu, we need to type: sudo apt-get install python-scipy

Installing SciPy on Mac OS X: Apple Developer Tools (XCode) is required, because it contains the BLAS and LAPACK libraries. It can be found in the App Store, on the installation DVD that came with your Mac, or as the latest version from Apple Developer Connection at https://developer.apple.com/technologies/tools/. Make sure that everything, including all the optional packages, is installed. You probably already have a Fortran compiler installed for NumPy; the binaries for gfortran can be found at http://r.research.att.com/tools/.

Installing SciPy using easy_install or pip: Install with either of the following two commands:

sudo pip install scipy
easy_install scipy

Installing on Windows: If you have Python installed already, the preferred method is to download and use the binary distribution. Alternatively, you may want to install the Enthought Python distribution, which comes with other scientific Python software packages.

Check your installation: Check the SciPy installation with the following code:

import scipy
print scipy.__version__
print scipy.__file__

This should print the correct SciPy version.

How it works...

Most package managers will take care of any dependencies for you. However, in some cases, you will need to install them manually. Unfortunately, this is beyond the scope of this book. If you run into problems, you can ask for help at:

• The #scipy IRC channel of freenode, or
• The SciPy mailing lists at http://www.scipy.org/Mailing_Lists

Installing PIL

PIL, the Python Imaging Library, is a prerequisite for the image processing recipes in this article.

How to do it...

Let's see how to install PIL.

Installing PIL on Windows: Install using the Windows executable from the PIL website http://www.pythonware.com/products/pil/.

Installing on Debian or Ubuntu: On Debian or Ubuntu, install PIL using the following command: sudo apt-get install python-imaging

Installing with easy_install or pip: At the time of writing this book, it appeared that the package managers of Red Hat, Fedora, and CentOS did not have direct support for PIL.
Therefore, please follow this step if you are using one of these Linux distributions. Install with either of the following commands:

easy_install PIL
sudo pip install PIL

Resizing images

In this recipe, we will load a sample image of Lena, which is available in the SciPy distribution, into an array. This article is not about image manipulation, by the way; we will just use the image data as an input. Lena Soderberg appeared in a 1972 Playboy magazine. For historical reasons, one of those images is often used in the field of image processing. Don't worry; the picture in question is completely safe for work. We will resize the image using the repeat function. This function repeats an array, which in practice means resizing the image by a certain factor.

Getting ready

A prerequisite for this recipe is to have SciPy, Matplotlib, and PIL installed.

How to do it...

1. Load the Lena image into an array. SciPy has a lena function, which can load the image into a NumPy array:

   lena = scipy.misc.lena()

   Some refactoring has occurred since version 0.10, so if you are using an older version, the correct code is:

   lena = scipy.lena()

2. Check the shape. Check the shape of the Lena array using the assert_equal function from the numpy.testing package; this is an optional sanity check:

   numpy.testing.assert_equal((LENA_X, LENA_Y), lena.shape)

3. Resize the Lena array. Resize the Lena array with the repeat function. We give this function a resize factor in the x and y direction:

   resized = lena.repeat(yfactor, axis=0).repeat(xfactor, axis=1)

4. Plot the arrays. We will plot the Lena image and the resized image in two subplots that are part of the same grid. Plot the Lena array in a subplot:

   matplotlib.pyplot.subplot(211)
   matplotlib.pyplot.imshow(lena)

   The Matplotlib subplot function creates a subplot. This function accepts a 3-digit integer as the parameter, where the first digit is the number of rows, the second digit is the number of columns, and the last digit is the index of the subplot, starting with 1. The imshow function shows images. Finally, the show function displays the end result.

5. Plot the resized array in another subplot and display it. The index is now 2:

   matplotlib.pyplot.subplot(212)
   matplotlib.pyplot.imshow(resized)
   matplotlib.pyplot.show()

The following screenshot shows the result with the original image (first) and the resized image (second).

The following is the complete code for this recipe:

import scipy.misc
import sys
import matplotlib.pyplot
import numpy.testing

# This script resizes the Lena image from Scipy.
if(len(sys.argv) != 3):
    print "Usage python %s yfactor xfactor" % (sys.argv[0])
    sys.exit()

# Loads the Lena image into an array
lena = scipy.misc.lena()

# Lena's dimensions
LENA_X = 512
LENA_Y = 512

# Check the shape of the Lena array
numpy.testing.assert_equal((LENA_X, LENA_Y), lena.shape)

# Get the resize factors
yfactor = float(sys.argv[1])
xfactor = float(sys.argv[2])

# Resize the Lena array
resized = lena.repeat(yfactor, axis=0).repeat(xfactor, axis=1)

# Check the shape of the resized array
numpy.testing.assert_equal((yfactor * LENA_Y, xfactor * LENA_Y), resized.shape)

# Plot the Lena array
matplotlib.pyplot.subplot(211)
matplotlib.pyplot.imshow(lena)

# Plot the resized array
matplotlib.pyplot.subplot(212)
matplotlib.pyplot.imshow(resized)
matplotlib.pyplot.show()

How it works...

The repeat function repeats arrays, which, in this case, resulted in changing the size of the original image. The Matplotlib subplot function creates a subplot. The imshow function shows images.
Finally, the show function displays the end result.

See also

The Installing SciPy recipe
The Installing PIL recipe
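One practical footnote on the lena loading step used throughout this recipe: given the version note above, you can make the import tolerant of both old and new SciPy releases. A small sketch:

try:
    from scipy.misc import lena   # SciPy 0.10 and later
except ImportError:
    from scipy import lena        # older SciPy releases

image = lena()                    # 512 x 512 grayscale array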


Testing and Debugging in Grok 1.0: Part 2

Packt
11 Feb 2010
8 min read
Adding unit tests

Apart from functional tests, we can also create pure Python test cases, which the test runner can find. While functional tests cover application behavior, unit tests focus on program correctness. Ideally, every single Python method in the application should be tested. The unit test layer does not load the Grok infrastructure, so tests should not take anything that comes with it for granted; just the basic Python behavior. To add our unit tests, we'll create a module named unit_tests.py. Remember, in order for the test runner to find our test modules, their names have to end with 'tests'. Here's what we will put in this file:

"""
Do a Python test on the app.

:unittest:
"""
import unittest
from todo.app import Todo


class InitializationTest(unittest.TestCase):

    todoapp = None

    def setUp(self):
        self.todoapp = Todo()

    def test_title_set(self):
        self.assertEqual(self.todoapp.title, u'To-do list manager')

    def test_next_id_set(self):
        self.assertEqual(self.todoapp.next_id, 0)

The :unittest: comment at the top is very important. Without it, the test runner will not know in which layer your tests should be executed, and will simply ignore them. Unit tests are composed of test cases, and in theory, each should contain several related tests based on a specific area of the application's functionality. The test cases use the TestCase class from the Python unittest module. In these tests, we define a single test case that contains two very simple tests. We are not getting into the details here. Just notice that the test case can include a setUp and a tearDown method that can be used to perform any common initialization and destruction tasks which are needed to get the tests working and finishing cleanly. Every test inside a test case needs to have the prefix 'test' in its name, so we have exactly two tests that fulfill this condition. Both of the tests need an instance of the Todo class to be executed, so we assign it as a class variable to the test case and create it inside the setUp method. The tests are very simple and just verify that the default property values are set on instance creation. Both of the tests use the assertEqual method to tell the test runner that if the two values passed are different, the test should fail. To see them in action, we just run the bin/test command once more:

$ bin/test
Running tests at level 1
Running todo.FunctionalLayer tests:
  Set up in 2.691 seconds.
  Running:
    .......
2009-09-30 22:00:50,703 INFO sqlalchemy.engine.base.Engine.0x...684c PRAGMA table_info("users")
2009-09-30 22:00:50,703 INFO sqlalchemy.engine.base.Engine.0x...684c ()
  Ran 7 tests with 0 failures and 0 errors in 0.420 seconds.
Running zope.testing.testrunner.layer.UnitTests tests:
  Tear down todo.FunctionalLayer ... not supported
  Running in a subprocess.
  Set up zope.testing.testrunner.layer.UnitTests in 0.000 seconds.
  Ran 2 tests with 0 failures and 0 errors in 0.000 seconds.
  Tear down zope.testing.testrunner.layer.UnitTests in 0.000 seconds.
Total: 9 tests, 0 failures, 0 errors in 5.795 seconds

Now, both the functional and unit test layers contain some tests, and both are run one after the other. We can see the subtotal for each layer at the end of its tests, as well as the grand total of the nine passed tests when the test runner finishes its work.

Extending the test suite

Of course, we have just scratched the surface of the tests that should be added to our application. If we continue to add tests, there may be hundreds by the time we finish.
However, this article is not the place to do so. As mentioned earlier, it's much easier to have tests for each part of our application if we add them as we code. There's no hiding from the fact that testing is a lot of work, but there is great value in having a complete test suite for our applications. More so, when third parties might use our work product independently.

Debugging

We will now take a quick look at the debugging facilities offered by Grok. Even if we have a very thorough test suite, chances are that we will find a fair number of bugs in our application. When that happens, we need a quick and effective way to inspect the code as it runs and find the problem spots easily. Often, developers will use print statements placed at key lines throughout the code, in the hopes of finding the problem spot. While this is usually a good way to begin locating sore spots in the code, we often need some way to follow the code line by line to really find out what's wrong. In the next section, we'll see how to use the Python debugger to step through the code and find the problem spots. We'll also take a quick look at how to do post-mortem debugging in Grok, which means jumping into the debugger to analyze program state immediately after an exception has occurred.

Debugging in Grok

For regular debugging, where we need to step through the code to see what's going on inside, the Python debugger is an excellent tool. To use it, you just have to add the next line at the point where you wish to start debugging:

import pdb; pdb.set_trace()

Let's try it out. Open the app.py module and change the add method of the AddProjectForm class (line 108) to look like this:

@grok.action('Add project')
def add(self, **data):
    import pdb; pdb.set_trace()
    project = Project()
    project.creator = self.request.principal.title
    project.creation_date = datetime.datetime.now()
    project.modification_date = datetime.datetime.now()
    self.applyData(project, **data)
    id = str(self.context.next_id)
    self.context.next_id = self.context.next_id + 1
    self.context[id] = project
    return self.redirect(self.url(self.context[id]))

Notice that we invoke the debugger at the beginning of the method. Now, start the instance, go to the 'add project' form, fill it up, and submit it. Instead of seeing the new project view, the browser will stay at the 'add form' page and display the waiting for... message. This is because control has been transferred to the console for the debugger to act. Your console will look like this:

> /home/cguardia/work/virtual/grok1/todo/src/todo/app.py(109)add()
-> project = Project()
(Pdb)

The debugger is now active and waiting for input. Notice that the line number where debugging started appears right beside the path of the module where we are located. After the line number comes the name of the method, add(). Below that, the next line of code to be executed is shown. The debugger commands are simple.
To execute the current line, type n:

(Pdb) n
> /home/cguardia/work/virtual/grok1/todo/src/todo/app.py(110)add()
-> project.creator = self.request.principal.title
(Pdb)

You can see the available commands if you type h:

(Pdb) h

Documented commands (type help <topic>):
========================================
EOF    break  condition  disable  help    list  q       step    w
a      bt     cont       down     ignore  n     quit    tbreak  whatis
alias  c      continue   enable   j       next  r       u       where
args   cl     d          exit     jump    p     return  unalias
b      clear  debug      h        l       pp    s       up

Miscellaneous help topics:
==========================
exec  pdb

Undocumented commands:
======================
retval  rv

(Pdb)

The list command is used for getting a bird's eye view of where in the code we are:

(Pdb) list
105
106         @grok.action('Add project')
107         def add(self,**data):
108             import pdb; pdb.set_trace()
109             project = Project()
110  ->         project.creator = self.request.principal.title
111             project.creation_date = datetime.datetime.now()
112             project.modification_date = datetime.datetime.now()
113             self.applyData(project,**data)
114             id = str(self.context.next_id)
115             self.context.next_id = self.context.next_id+1
(Pdb)

As you can see, the current line is shown with an arrow. It's possible to type in the names of objects within the current execution context and find out their values:

(Pdb) project
<todo.app.Project object at 0xa0ef72c>
(Pdb) data
{'kind': 'personal', 'description': u'Nothing', 'title': u'Project about nothing'}
(Pdb)

We can, of course, continue stepping line by line through all of the code in the application, including Grok's own code, checking values as we proceed. When we are through reviewing, we can type c to return control to the browser. At this point, we will see the project view. The Python debugger is very easy to use, and it can be invaluable for finding obscure bugs in your code.
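One last pointer on the post-mortem style of debugging mentioned at the start of this section: plain pdb supports it directly, with no Grok-specific machinery. A minimal sketch (the failing call is hypothetical, and the traceback is passed explicitly for compatibility with older Python versions):

import sys
import pdb

try:
    do_something_risky()   # hypothetical call that raises an exception
except Exception:
    # Drop into the debugger with the stack exactly as it was
    # when the exception was raised.
    pdb.post_mortem(sys.exc_info()[2])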


Enabling Apache Axis2 clustering

Packt
25 Feb 2011
6 min read
Clustering for high availability and scalability is one of the main requirements of any enterprise deployment, and this is also true for Apache Axis2. High availability refers to the ability to serve client requests by tolerating failures. Scalability is the ability to serve a large number of clients sending a large number of requests without any degradation in performance. Many large-scale enterprises are adopting web services as the de facto middleware standard. These enterprises have to process millions of transactions per day, or even more. A large number of clients, both human and computer, connect simultaneously to these systems and initiate transactions. Therefore, the servers hosting the web services for these enterprises have to support that level of performance and concurrency. In addition, almost all the transactions happening in such enterprise deployments are critical to the business of the organization. This imposes another requirement on production-ready web services servers, namely, to maintain very low downtime. It is impossible to support that level of scalability and high availability from a single server, no matter how powerful the server hardware or how efficient the server software is. Web services clustering is needed to solve this. It allows you to deploy and manage several instances of identical web services across multiple web services servers running on different server machines. We can then distribute client requests among these machines using a suitable load balancing system to achieve the required level of availability and scalability.

Setting up a simple Axis2 cluster

Enabling Axis2 clustering is a simple task. Let us look at setting up a simple two-node cluster:

1. Extract the Axis2 distribution into two different directories and change the HTTP and HTTPS ports in the respective axis2.xml files.
2. Locate the "Clustering" element in the axis2.xml files and set the enable attribute to true (a sketch of this element appears at the end of this section).
3. Start the two Axis2 instances using Simple Axis Server. You should see some messages indicating that clustering has been enabled.

That is it! Wasn't that extremely simple? In order to verify that state replication is working, we can deploy a stateful web service on both instances. This web service should set a value in the ConfigurationContext in one operation and try to retrieve that value in another operation. We can call the set-value operation on one node, and then call the retrieve operation on the other node. The value set and the value retrieved should be equal. Next, we will look at the clustering configuration language in detail.

Writing a highly available clusterable web service

In general, you do not have to do anything extra to make your web service clusterable; any regular web service is clusterable. In the case of stateful web services, you need to store the Java-serializable, replicable properties in the Axis2 ConfigurationContext, ServiceGroupContext, or ServiceContext. Please note that stateful variables you maintain elsewhere will not be replicated. If you have properly configured Axis2 clustering for state replication, the Axis2 infrastructure will replicate these properties for you. In the next section, you will be able to look at the details of configuring a cluster for state replication.
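As a taste of that configuration, here is roughly what the clustering switch from step 2 looks like in axis2.xml. This is a sketch: the element ships in the stock axis2.xml with a Tribes-based agent class already set, and the exact class name varies between Axis2 releases, so normally only the enable attribute needs editing:

<clustering class="org.apache.axis2.clustering.tribes.TribesClusteringAgent"
            enable="true">
    <!-- membership, domain, and state replication parameters live here;
         the defaults in the stock file are a reasonable starting point -->
</clustering>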
Let us look at a simple stateful Axis2 web service deployed in the soapsession scope:

public class ClusterableService {

    private static final String VALUE = "value";

    public void setValue(String value) {
        ServiceContext serviceContext =
            MessageContext.getCurrentMessageContext().getServiceContext();
        serviceContext.setProperty(VALUE, value);
    }

    public String getValue() {
        ServiceContext serviceContext =
            MessageContext.getCurrentMessageContext().getServiceContext();
        return (String) serviceContext.getProperty(VALUE);
    }
}

You can deploy this service on two Axis2 nodes in a cluster. You can write a client that will call the setValue operation on the first node and then call the getValue operation on the second node. You will be able to see that the value you set in the first node can be retrieved from the second node. What happens is that when you call the setValue operation on the first node, the value is set in the respective ServiceContext and replicated to the second node. Therefore, when you call getValue on the second node, the replicated value has been properly set in the respective ServiceContext. As you may have already noticed, you do not have to do anything additional to make a web service clusterable; Axis2 does the state replication transparently. However, if you require control over state replication, Axis2 provides that option as well. Let us rewrite the same web service, this time taking control of the state replication:

public class ClusterableService {

    private static final String VALUE = "value";

    public void setValue(String value) {
        ServiceContext serviceContext =
            MessageContext.getCurrentMessageContext().getServiceContext();
        serviceContext.setProperty(VALUE, value);
        Replicator.replicate(serviceContext);
    }

    public String getValue() {
        ServiceContext serviceContext =
            MessageContext.getCurrentMessageContext().getServiceContext();
        return (String) serviceContext.getProperty(VALUE);
    }
}

Replicator.replicate() will immediately replicate any property changes in the provided Axis2 context. So, how does this setup increase availability? Say you sent a setValue request to node 1, and node 1 failed soon after replicating that value to the cluster. Node 2 will still have the originally set value, so the web service clients can continue unhindered.

Stateless Axis2 web services

Stateless Axis2 web services give the best performance, as no state replication is necessary for such services. These services can still be deployed on a load-balancer-fronted Axis2 cluster to achieve horizontal scalability. Again, no code change or special coding is necessary to deploy such web services on a cluster. Stateless web services may be deployed in a cluster either to achieve failover behavior or scalability.

Setting up a failover cluster

A failover cluster is generally fronted by a load balancer, with one or more nodes designated as primary nodes and some other nodes designated as backup nodes. Such a cluster can be set up with or without high availability. If all the state is replicated from the primaries to the backups, then when a failure occurs, the clients can continue without a hitch, ensuring high availability. However, this state replication has its overhead. If you are deploying only stateless web services, you can run a setup without any state replication. In a pure failover cluster (that is, without any state replication), if the primary fails, the load balancer will route all subsequent requests to the backup node, but some state may be lost, so the clients will have to handle some degree of that failure.
The load balancer can be configured in such a way that all requests are generally routed to the primary node, with a failover node standing by in case the primary fails, as shown in the following figure:

Increasing horizontal scalability

As shown in the figure below, to achieve horizontal scalability, an Axis2 cluster will be fronted by a load balancer (depicted by LB in the following figure). The load balancer will spread the load across the Axis2 cluster according to some load balancing algorithm. The round-robin load balancing algorithm is one such popular and simple algorithm, and it works well when all hardware and software on the nodes are identical. Generally, a horizontally scalable cluster will maintain its response time and will not degrade in performance under increasing load. Throughput will also increase when the load increases in such a setup. Generally, the number of nodes in the cluster is a function of the expected maximum peak load. In such a cluster, all nodes are active.
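As a concrete illustration of the load-balancer front described in this section, here is a minimal sketch using nginx's default round-robin upstream. nginx is only one of many options here, and the hostnames and ports are hypothetical:

upstream axis2_cluster {
    # round-robin by default; both nodes are active
    server node1.example.com:8080;
    server node2.example.com:8080;
}

server {
    listen 80;
    location /axis2/ {
        proxy_pass http://axis2_cluster;
    }
}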


Web Services Testing and soapUI

Packt
16 Nov 2012
8 min read
(For more resources related to this topic, see here.)

SOA and web services

SOA is a distinct approach for separating concerns and building business solutions utilizing loosely coupled and reusable components. SOA is no longer a nice-to-have feature for most enterprises, and it is widely used in organizations to achieve a lot of strategic advantages. By adopting SOA, organizations can enable their business applications to quickly and efficiently respond to business, process, and integration changes, which usually occur in any enterprise environment.

Service-oriented solutions

If a software system is built by following the principles associated with SOA, it can be considered a service-oriented solution. Organizations generally tend to build service-oriented solutions in order to leverage flexibility in their businesses, merge or acquire new businesses, and achieve competitive advantages. To understand the use and purpose of SOA and service-oriented solutions, let's have a look at a simplified case study.

Case study

Smith and Co. is a large motor insurance policy provider located in North America. The company uses a software system to perform all their operations associated with insurance claim processing. The system consists of various modules, including the following:

Customer enrollment and registration
Insurance policy processing
Insurance claim processing
Customer management
Accounting
Service providers management

With the enormous success and client satisfaction of the insurance claims processed by the company during the recent past, Smith and Co. acquired InsurePlus Inc., one of its competing insurance providers, a few months back. InsurePlus has also provided some insurance motor claim policies which are similar to those that Smith and Co. provides to their clients. Therefore, the company management has decided to integrate the insurance claim processing systems used by both companies and deliver one solution to their clients. Smith and Co. uses a lot of Microsoft(TM) technologies, and all of their software applications, including the overall insurance policy management system, are built on the .NET framework. On the other hand, InsurePlus uses J2EE heavily, and their insurance processing applications are all based on Java technologies. To worsen the problem of integration, InsurePlus includes a legacy customer management application component as well, which runs on an AS-400 system. The IT departments of both companies faced numerous difficulties when they tried to integrate the software applications of Smith and Co. and InsurePlus Inc. They had to write a lot of adapter modules so that both applications would communicate with each other and do the protocol conversions as needed. In order to overcome these and future integration issues, the IT management of Smith and Co. decided to adopt SOA into their business application development methodology and convert the insurance processing system into a service-oriented solution. As the first step, a lot of wrapper services (web services which encapsulate the logic of different insurance processing modules) were built and exposed as web services. Therefore, the individual modules were able to communicate with each other with minimal integration concerns. By adopting SOA, their applications used a common language, XML, in message transmission, and hence a heterogeneous system such as the .NET-based insurance policy handling system in Smith and Co.
was able to communicate with the Java-based applications running on InsurePlus Inc. By implementing a service-oriented solution, the system at Smith and Co. was able to merge with a lot of other legacy systems with minimal integration overhead.

Building blocks of SOA

When studying typical service-oriented solutions, we can identify three major building blocks, as follows:

Web services
Mediation
Composition

Web services

Web services are the individual units of business logic in SOA. Web services communicate with each other and with other programs or applications by sending messages. A web service consists of a public interface definition, which is a central piece of information that assigns the service an identity and enables its invocation. The service container is the SOA middleware component where the web service is hosted for the consuming applications to interact with it. It allows developers to build, deploy, and manage web services, and it also represents the server-side processor role in web service frameworks. A list of commonly used web service frameworks can be found at http://en.wikipedia.org/wiki/List_of_web_service_frameworks; here you can find some popular web service middleware such as Windows Communication Foundation (WCF), Apache CXF, Apache Axis2, and so on. Apache Axis2 can be found at http://axis.apache.org/. The service container contains the business logic, which interacts with the service consumer via a service interface. This is shown in the following diagram:

Mediation

Usually, the message transmission between nodes in a service-oriented solution does not just occur via typical point-to-point channels. Instead, once a message is received, it can flow through multiple intermediaries and be subjected to various transformations and conversions as necessary. This behavior is commonly referred to as message mediation and is another important building block in service-oriented solutions. Similar to how the service container is used as the hosting platform for web services, a broker is the corresponding SOA middleware component for message mediation. Usually, an enterprise service bus (ESB) acts as the broker in service-oriented solutions.

Composition

In service-oriented solutions, we cannot expect individual web services running alone to provide the desired business functionality. Instead, multiple web services work together and participate in various service compositions. Usually, the web services are pulled together dynamically at runtime, based on the rules specified in business process definitions. The management or coordination of these business processes is governed by the process coordinator, which is the SOA middleware component associated with web service compositions.

Simple Object Access Protocol

Simple Object Access Protocol (SOAP) can be considered the foremost messaging standard for use with web services. It is defined by the World Wide Web Consortium (W3C) at http://www.w3.org/TR/2000/NOTE-SOAP-20000508/ as follows:

SOAP is a lightweight protocol for exchange of information in a decentralized, distributed environment. It is an XML based protocol that consists of three parts: an envelope that defines a framework for describing what is in a message and how to process it, a set of encoding rules for expressing instances of application-defined datatypes, and a convention for representing remote procedure calls and responses.

The SOAP specification has been universally accepted as the standard transport protocol for messages processed by web services.
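Before comparing the specification versions discussed next, it helps to see the shape of an actual message. Here is a minimal sketch of a SOAP 1.1 message; the envelope namespace comes from the specification, while the payload element and its namespace are illustrative, anticipating the echoString example discussed below:

<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Header>
    <!-- optional header blocks, e.g. WS-Addressing elements -->
  </soap:Header>
  <soap:Body>
    <p:echoString xmlns:p="http://example.org/echo">
      <p:text>Hello, world</p:text>
    </p:echoString>
  </soap:Body>
</soap:Envelope>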
There are two different versions of the SOAP specification, and both of them are widely used in service-oriented solutions: SOAP v1.1 and SOAP v1.2. Regardless of the specification version, the message format of a SOAP message remains intact. A SOAP message is an XML document that consists of a mandatory SOAP envelope, an optional SOAP header, and a mandatory SOAP body. The structure of a SOAP message is shown in the following diagram:

The SOAP Envelope is the wrapper element which holds all child nodes inside a SOAP message. The SOAP Header element is an optional block where meta information is stored. Using the headers, SOAP messages are capable of carrying different types of supplemental information related to the delivery and processing of messages. This indirectly provides statelessness for web services, as by maintaining SOAP headers, services do not necessarily need to store message-specific logic. Typically, SOAP headers can include the following:

Message processing instructions
Security policy metadata
Addressing information
Message correlation data
Reliable messaging metadata

The SOAP body is the element where the actual message contents are hosted. These contents of the body are usually referred to as the message payload. Let's have a look at a sample SOAP message and relate the preceding concepts through the following diagram:

In this example SOAP message, we can clearly identify the three elements: envelope, body, and header. The header element includes a set of child elements, such as <wsa:To>, <wsa:ReplyTo>, <wsa:Address>, <wsa:MessageID>, and <wsa:Action>. These header blocks are part of the WS-Addressing specification. Similarly, any header element associated with the WS-* specifications can be included inside the SOAP header element. The <s:Body> element carries the actual message payload. In this example, it is the <p:echoString> element with one child element. When working with SOAP messages, identifying the version of the SOAP message is one of the important requirements. At first glance, you can determine the version of the specification used in the SOAP message through the namespace identifier of the <Envelope> element. If the message conforms to the SOAP 1.1 specification, it will be http://schemas.xmlsoap.org/soap/envelope/; otherwise, http://www.w3.org/2003/05/soap-envelope is the namespace identifier of SOAP 1.2 messages.

Alternatives to SOAP

Though SOAP is considered the standard protocol for web services communication, it is not the only possible transport protocol in use. SOAP was designed to be extensible so that other standards could be integrated into it. The WS-* extensions, such as WS-Security, WS-Addressing, and WS-ReliableMessaging, are associated with SOAP messaging due to this extensible nature. In addition to platform and language agnosticism, SOAP messages can be transmitted over various transports, such as HTTP, HTTPS, JMS, and SMTP, among others. However, there are a few drawbacks associated with SOAP messaging. The performance degradation due to heavy XML processing and the complexities associated with the usage of the various WS-* specifications are two of the most common disadvantages of the SOAP messaging model. Because of these concerns, we can identify some alternative approaches to SOAP.

Deployment and Maintenance

Packt
20 Jul 2015
21 min read
In this article by Sandro Pasquali, author of Deploying Node.js, we will learn about the following:

Automating the deployment of applications, including a look at the differences between continuous integration, delivery, and deployment
Using Git to track local changes and triggering deployment actions via webhooks when appropriate
Using Vagrant to synchronize your local development environment with a deployed production server
Provisioning a server with Ansible

Note that application deployment is a complex topic with many dimensions that are often considered within unique sets of needs. This article is intended as an introduction to some of the technologies and themes you will encounter. Also, note that scaling issues are part and parcel of deployment.

(For more resources related to this topic, see here.)

Using GitHub webhooks

At the most basic level, deployment involves automatically validating, preparing, and releasing new code into production environments. One of the simplest ways to set up a deployment strategy is to trigger releases whenever changes are committed to a Git repository through the use of webhooks. Paraphrasing the GitHub documentation, webhooks provide a way for notifications to be delivered to an external web server whenever certain actions occur on a repository. In this section, we'll use GitHub webhooks to create a simple continuous deployment workflow, adding more realistic checks and balances. We'll build a local development environment that lets developers work with a clone of the production server code, make changes, and see the results of those changes immediately. As this local development build uses the same repository as the production build, the build process for a chosen environment is simple to configure, and multiple production and/or development boxes can be created with no special effort. The first step is to create a GitHub (www.github.com) account if you don't already have one. Basic accounts are free and easy to set up. Now, let's look at how GitHub webhooks work.

Enabling webhooks

Create a new folder and insert the following package.json file:

{
  "name": "express-webhook",
  "main": "server.js",
  "dependencies": {
    "express": "~4.0.0",
    "body-parser": "^1.12.3"
  }
}

This ensures that Express 4.x is installed and includes the body-parser package, which is used to handle POST data. Next, create a basic server called server.js:

var express = require('express');
var app = express();
var bodyParser = require('body-parser');
var port = process.env.PORT || 8082;

app.use(bodyParser.json());

app.get('/', function(req, res) {
    res.send('Hello World!');
});

app.post('/webhook', function(req, res) {
    // We'll add this next
});

app.listen(port);
console.log('Express server listening on port ' + port);

Enter the folder you've created, and build and run the server with npm install; npm start. Visit localhost:8082/ and you should see "Hello World!" in your browser. Whenever any file changes in a given repository, we want GitHub to push information about the change to /webhook. So, the first step is to create a GitHub repository for the Express server mentioned in the code. Go to your GitHub account and create a new repository with the name 'express-webhook'. The following screenshot shows this:

Once the repository is created, enter your local repository folder and run the following commands:

git init
git add .
git commit -m "first commit"
git remote add origin git@github.com:<your username>/express-webhook

You should now have a new GitHub repository and a local linked version.
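To confirm that the remote is wired up correctly before moving on, a quick check with standard Git (the output shown is illustrative):

git remote -v
origin  git@github.com:<your username>/express-webhook (fetch)
origin  git@github.com:<your username>/express-webhook (push)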
The next step is to configure this repository to broadcast the push event on the repository. Navigate to the following URL:

https://github.com/<your_username>/express-webhook/settings

From here, navigate to Webhooks & Services | Add webhook (you may need to enter your password again). You should now see the following screen:

This is where you set up webhooks. Note that the push event is already set as the default, and, if asked, you'll want to disable SSL verification for now. GitHub needs a target URL to use POST on change events. If you have your local repository in a location that is already web accessible, enter that now, remembering to append the /webhook route, as in http://www.example.com/webhook. If you are building on a local machine or on another limited network, you'll need to create a secure tunnel that GitHub can use. A free service to do this can be found at http://localtunnel.me/. Follow the instructions on that page, and use the custom URL provided to configure your webhook. Other good forwarding services can be found at https://forwardhq.com/ and https://meetfinch.com/. Now that webhooks are enabled, the next step is to test the system by triggering a push event. Create a new file called readme.md (add whatever you'd like to it), save it, and then run the following commands:

git add readme.md
git commit -m "testing webhooks"
git push origin master

This will push changes to your GitHub repository. Return to the Webhooks & Services section for the express-webhook repository on GitHub. You should see something like this:

This is a good thing! GitHub noticed your push and attempted to deliver information about the changes to the webhook endpoint you set, but the delivery failed as we haven't configured the /webhook route yet—that's to be expected. Inspect the failed delivery payload by clicking on the last attempt—you should see a large JSON file. In that payload, you'll find something like this:

"committer": {
    "name": "Sandro Pasquali",
    "email": "spasquali@gmail.com",
    "username": "sandro-pasquali"
},
"added": ["readme.md"],
"removed": [],
"modified": []

It should now be clear what sort of information GitHub will pass along whenever a push event happens. You can now configure the /webhook route in the demonstration Express server to parse this data and do something with that information, such as sending an e-mail to an administrator. For example, use the following code:

app.post('/webhook', function(req, res) {
    console.log(req.body);
});

The next time your webhook fires, the entire JSON payload will be displayed. Let's take this to another level, breaking down the autopilot application to see how webhooks can be used to create a build/deploy system.
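Before we do, one refinement worth noting: when you set a secret on a webhook, GitHub signs each delivery with an X-Hub-Signature header containing an HMAC-SHA1 of the raw request body. A sketch of verifying it in our demonstration server (the verify option is body-parser's hook for capturing the raw bytes, the secret value is a placeholder, and this replaces the plain bodyParser.json() line from server.js):

var crypto = require('crypto');

// Capture the raw body so the signature can be checked against
// exactly the bytes GitHub signed.
app.use(bodyParser.json({
    verify: function(req, res, buf) {
        req.rawBody = buf;
    }
}));

app.post('/webhook', function(req, res) {
    var expected = 'sha1=' + crypto
        .createHmac('sha1', 'my-webhook-secret')  // placeholder secret
        .update(req.rawBody)
        .digest('hex');
    if (req.headers['x-hub-signature'] !== expected) {
        return res.status(403).end();  // reject unsigned or forged deliveries
    }
    console.log(req.body);
    res.status(200).end();
});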
Implementing a build/deploy system using webhooks

To demonstrate how to build a webhook-powered deployment system, we're going to use a starter kit for application development. Go ahead and use fork on the repository at https://github.com/sandro-pasquali/autopilot.git. You now have a copy of the autopilot repository, which includes scaffolding for common Gulp tasks, tests, an Express server, and a deploy system that we're now going to explore. The autopilot application implements special features depending on whether you are running it in production or in development. While autopilot is a little too large and complex to fully document here, we're going to take a look at how the major components of the system are designed and implemented so that you can build your own or augment existing systems. Here's what we will examine:

How to create webhooks on GitHub programmatically
How to catch and read webhook payloads
How to use payload data to clone, test, and integrate changes
How to use PM2 to safely manage and restart servers when code changes

If you haven't already used fork on the autopilot repository, do that now. Clone the autopilot repository onto a server or someplace else where it is web-accessible. Follow the instructions on how to connect and push to the fork you've created on GitHub, and get familiar with how to pull and push changes, commit changes, and so on. PM2 delivers a basic deploy system that you might consider for your project (https://github.com/Unitech/PM2/blob/master/ADVANCED_README.md#deployment). Install the cloned autopilot repository with npm install; npm start. Once npm has installed the dependencies, an interactive CLI application will lead you through the configuration process. Just hit the Enter key for all the questions, which will set defaults for a local development build (we'll build in production later). Once the configuration is complete, a new development server process controlled by PM2 will have been spawned. You'll see it listed in the PM2 manifest under autopilot-dev in the following screenshot:

You will make changes in the /source directory of this development build. When you eventually have a production server in place, you will use git push on the local changes to push them to the autopilot repository on GitHub, triggering a webhook. GitHub will use POST on the information about the change to an Express route that we will define on our server, which will trigger the build process. The build runner will pull your changes from GitHub into a temporary directory, install, build, and test the changes, and, if all is well, it will replace the relevant files in your deployed repository. At this point, PM2 will restart, and your changes will be immediately available. Schematically, the flow looks like this:

To create webhooks on GitHub programmatically, you will need to create an access token. The following diagram explains the steps from A to B to C:

We're going to use the Node library at https://github.com/mikedeboer/node-github to access GitHub. We'll use this package to create hooks on GitHub using the access token you've just created. Once you have an access token, creating a webhook is easy:

var GitHubApi = require("github");

github.authenticate({
    type: "oauth",
    token: <your token>
});

github.repos.createHook({
    "user": <your github username>,
    "repo": <github repo name>,
    "name": "web",
    "secret": <any secret string>,
    "active": true,
    "events": ["push"],
    "config": {
        "url": "http://yourserver.com/git-webhook",
        "content_type": "json"
    }
}, function(err, resp) {
    ...
});

Autopilot performs this on startup, removing the need for you to manually create a hook. Now, we are listening for changes. As we saw previously, GitHub will deliver a payload indicating what has been added, what has been deleted, and what has changed. The next step for the autopilot system is to integrate these changes. It is important to remember that, when you use webhooks, you do not have control over how often GitHub will send changesets—if more than one person on your team can push, there is no predicting when those pushes will happen. The autopilot system uses Redis to manage a queue of requests, executing them in order. You will need a way to manage multiple changes. For now, let's look at a straightforward way to build, test, and integrate changes.
In your code bundle, visit autopilot/swanson/push.js. This is a process runner on which fork has been used by buildQueue.js in that same folder. The following information is passed to it:

The URL of the GitHub repository that we will clone
The directory to clone that repository into (<temp directory>/<commit hash>)
The changeset
The location of the production repository that will be changed

Go ahead and read through the code. Using a few shell scripts, we will clone the changed repository and build it using the same commands you're used to—npm install, npm test, and so on. If the application builds without errors, we need only run through the changeset and replace the old files with the changed files. The final step is to restart our production server so that the changes reach our users. Here is where the real power of PM2 comes into play. When the autopilot system is run in production, PM2 creates a cluster of servers (similar to the Node cluster module). This is important, as it allows us to restart the production server incrementally. As we restart one server node in the cluster with the newly pushed content, the other nodes continue to serve old content. This is essential to keeping a zero-downtime production running. Hopefully, the autopilot implementation will give you a few ideas on how to improve this process and customize it to your own needs.
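If you want to experiment with this rolling-restart behavior outside of autopilot, PM2 exposes it directly from its CLI. A quick sketch (the process name is arbitrary, and the flags reflect the PM2 releases current at the time of writing):

pm2 start server.js -i 4 --name autopilot-demo
pm2 reload autopilot-demo

The -i flag spawns a four-process cluster, and reload (unlike restart) cycles the processes one at a time, which is the same idea autopilot relies on to keep serving requests during a deploy.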
Synchronizing local and deployed builds

One of the most important (and often difficult) parts of the deployment process is ensuring that the environment an application is being developed, built, and tested within perfectly simulates the environment that application will be deployed into. In this section, you'll learn how to emulate, or virtualize, the environment your deployed application will run within using Vagrant. After demonstrating how this setup can simplify your local development process, we'll use Ansible to provision a remote instance on DigitalOcean.

Developing locally with Vagrant

For a long while, developers would work directly on running servers or cobble together their own version of the production environment locally, often writing ad hoc scripts and tools to smoothen their development process. This is no longer necessary in a world of virtual machines. In this section, we will learn how to use Vagrant to emulate a production environment within your development environment, giving you a realistic box to work on for testing production code and isolating your development process from your local machine processes. By definition, Vagrant is used to create a virtual box emulating a production environment. So, we need to install Vagrant, a virtual machine, and a machine image. Finally, we'll need to write the configuration and provisioning scripts for our environment. Go to http://www.vagrantup.com/downloads and install the right Vagrant version for your box. Do the same with VirtualBox at https://www.virtualbox.org/wiki/Downloads. You now need to add a box to run. For this example, we're going to use CentOS 7.0, but you can choose whichever you'd prefer. Create a new folder for this project, enter it, and run the following command:

vagrant box add chef/centos-7.0

Usefully, the creators of Vagrant, HashiCorp, provide a search service for Vagrant boxes at https://atlas.hashicorp.com/boxes/search. You will be prompted to choose your virtual environment provider—select virtualbox. All relevant files and machines will now be downloaded. Note that these boxes are very large and may take time to download. You'll now create a configuration file for Vagrant called Vagrantfile. As with npm, the init command quickly sets up a base file. Additionally, we'll need to inform Vagrant of the box we'll be using:

vagrant init chef/centos-7.0

Vagrantfile is written in Ruby and defines the Vagrant environment. Open it up now and scan it. There is a lot of commentary, and it makes a useful read. Note the config.vm.box = "chef/centos-7.0" line, which was inserted during the initialization process. Now you can start Vagrant:

vagrant up

If everything went as expected, your box has been booted within VirtualBox. To confirm that your box is running, use the following command:

vagrant ssh

If you see a prompt, you've just set up a virtual machine. You'll see that you are in the typical home directory of a CentOS environment. To destroy your box, run vagrant destroy. This deletes the virtual machine by cleaning up captured resources. However, the next vagrant up command will need to do a lot of work to rebuild. If you simply want to shut down your machine, use vagrant halt. Vagrant is useful as a virtualized, production-like environment for developers to work within. To that end, it must be configured to emulate a production environment. In other words, your box must be provisioned by telling Vagrant how it should be configured and what software should be installed whenever vagrant up is run. One strategy for provisioning is to create a shell script that configures our server directly and point the Vagrant provisioning process to that script. Add the following line to Vagrantfile:

config.vm.provision "shell", path: "provision.sh"

Now, create that file with the following contents in the folder hosting Vagrantfile:

# install nvm
curl https://raw.githubusercontent.com/creationix/nvm/v0.24.1/install.sh | bash

# restart your shell with nvm enabled
source ~/.bashrc

# install the latest Node.js
nvm install 0.12

# ensure server default version
nvm alias default 0.12

Destroy any running Vagrant boxes. Run Vagrant again, and you will notice in the output the execution of the commands in our provisioning shell script. When this has been completed, enter your Vagrant box as the root (Vagrant boxes are automatically assigned the root password "vagrant"):

vagrant ssh
su

You will see that Node v0.12.x is installed:

node -v

It's standard to allow password-less sudo for the Vagrant user. Run visudo and add the following line to the sudoers configuration file:

vagrant ALL=(ALL) NOPASSWD: ALL

Typically, when you are developing applications, you'll be modifying files in a project directory. You might bind a directory in your Vagrant box to a local code editor and develop that way. Vagrant offers a simpler solution. Within your VM, there is a /vagrant folder that maps to the folder that Vagrantfile exists within, and these two folders are automatically synced. So, if you add the server.js file to the right folder on your local machine, that file will also show up in your VM's /vagrant folder. Go ahead and create a new test file either in your local folder or in your VM's /vagrant folder. You'll see that file synchronized to both locations, regardless of where it was originally created. Let's clone our express-webhook repository from earlier in this article into our Vagrant box.
Add the following lines to provision.sh:

# install various packages, particularly for git
yum groupinstall "Development Tools" -y
yum install gettext-devel openssl-devel perl-CPAN perl-devel zlib-devel -y
yum install git -y

# Move to shared folder, clone and start server
cd /vagrant
git clone https://github.com/sandro-pasquali/express-webhook
cd express-webhook
npm i; npm start

Add the following to Vagrantfile, which will map port 8082 on the Vagrant box (a guest port representing the port our hosted application listens on) to port 8000 on our host machine:

config.vm.network "forwarded_port", guest: 8082, host: 8000

Now, we need to restart the Vagrant box (loading this new configuration) and re-provision it:

vagrant reload
vagrant provision

This will take a while as yum installs various dependencies. When provisioning is complete, you should see this as the last line:

==> default: Express server listening on port 8082

Remembering that we bound the guest port 8082 to the host port 8000, go to your browser and navigate to localhost:8000. You should see "Hello World!" displayed. Also note that in our provisioning script, we cloned to the (shared) /vagrant folder. This means the clone of express-webhook should be visible in the current folder, which will allow you to work on the more easily accessible codebase, knowing it will be automatically synchronized with the version on your Vagrant box.

Provisioning with Ansible

Configuring your machines by hand, as we've done previously, doesn't scale well. For one, it can be overly difficult to set and manage environment variables. Also, writing your own provisioning scripts is error-prone and no longer necessary given the existence of provisioning tools such as Ansible. With Ansible, we can define server environments using an organized syntax rather than ad hoc scripts, making it easier to distribute and modify configurations. Let's recreate the provision.sh script developed earlier using Ansible playbooks:

Playbooks are Ansible's configuration, deployment, and orchestration language. They can describe a policy you want your remote systems to enforce or a set of steps in a general IT process.

Playbooks are expressed in the YAML format (a human-readable data serialization language). To start with, we're going to change Vagrantfile's provisioner to Ansible. First, create the following subdirectories in your Vagrant folder:

provisioning
common
tasks

These will be explained as we proceed through the Ansible setup. Next, create the following configuration file and name it ansible.cfg:

[defaults]
roles_path = provisioning
log_path = ./ansible.log

This indicates that Ansible roles can be found in the /provisioning folder, and that we want to keep a provisioning log in ansible.log. Roles are used to organize tasks and other functions into reusable files. These will be explained shortly. Modify the config.vm.provision definition to the following:

config.vm.provision "ansible" do |ansible|
  ansible.playbook = "provisioning/server.yml"
  ansible.verbose = "vvvv"
end

This tells Vagrant to defer to Ansible for provisioning instructions, and that we want the provisioning process to be verbose—we want to get feedback when the provisioning step is running. Also, we can see that the playbook definition, provisioning/server.yml, is expected to exist.
Create that file now:

---
- hosts: all
  sudo: yes
  roles:
    - common
  vars:
    env:
      user: 'vagrant'
    nvm:
      version: '0.24.1'
      node_version: '0.12'
    build:
      repo_path: 'https://github.com/sandro-pasquali'
      repo_name: 'express-webhook'

Playbooks can contain very complex rules. This simple file indicates that we are going to provision all available hosts using a single role called common. In more complex deployments, an inventory of IP addresses could be set under hosts, but, here, we just want to use a general setting for our one server. Additionally, the provisioning step will be provided with certain environment variables following the forms env.user, nvm.node_version, and so on. These variables will come into play when we define the common role, which will be to provision our Vagrant server with the programs necessary to build, clone, and deploy express-webhook. Finally, we assert that Ansible should run as an administrator (sudo) by default—this is necessary for the yum package manager on CentOS. We're now ready to define the common role. With Ansible, folder structures are important and are implied by the playbook. In our case, Ansible expects the role location (./provisioning, as defined in ansible.cfg) to contain the common folder (reflecting the common role given in the playbook), which itself must contain a tasks folder containing a main.yml file. These last two naming conventions are specific and required. The final step is creating the main.yml file in provisioning/common/tasks. First, we replicate the yum package loaders (see the file in your code bundle for the full list):

---
- name: Install necessary OS programs
  yum: name={{ item }} state=installed
  with_items:
    - autoconf
    - automake
    ...
    - git

Here, we see a few benefits of Ansible. A human-readable description of yum tasks is provided to a looping structure that will install every item in the list. Next, we run the nvm installer, which simply executes the auto-installer for nvm:

- name: Install nvm
  sudo: no
  shell: "curl https://raw.githubusercontent.com/creationix/nvm/v{{ nvm.version }}/install.sh | bash"

Note that, here, we're overriding the playbook's sudo setting. This can be done on a per-task basis, which gives us the freedom to move between different permission levels while provisioning. We are also able to execute shell commands while at the same time interpolating variables:

- name: Update .bashrc
  sudo: no
  lineinfile: >
    dest="/home/{{ env.user }}/.bashrc"
    line="source /home/{{ env.user }}/.nvm/nvm.sh"

Ansible provides extremely useful tools for file manipulation, and we see here a very common one—updating the .bashrc file for a user. The lineinfile directive makes the addition of aliases, among other things, straightforward. The remainder of the commands follow a similar pattern to implement, in a structured way, the provisioning directives we need for our server. All the files you will need are in your code bundle in the vagrant/with_ansible folder. Once you have them installed, run vagrant up to see Ansible in action. One of the strengths of Ansible is the way it handles contexts. When you start your Vagrant build, you will notice that Ansible gathers facts, as shown in the following screenshot:

Simply put, Ansible analyzes the context it is working in and only executes what is necessary to execute. If one of your tasks has already been run, the next time you try vagrant provision, that task will not run again. This is not true for shell scripts!
In this way, editing playbooks and reprovisioning does not consume time redundantly changing what has already been changed. Ansible is a powerful tool that can be used for provisioning and much more complex deployment tasks. One of its great strengths is that it can run remotely—unlike most other tools, Ansible uses SSH to connect to remote servers and run operations. There is no need to install it on your production boxes. You are encouraged to browse the Ansible documentation at http://docs.ansible.com/index.html to learn more.

Summary

In this article, you learned how to deploy a local build into a production-ready environment, and the powerful GitHub webhook tool was demonstrated as a way of creating a continuous integration environment.


Introduction to Moodle

Packt
28 Sep 2011
5 min read
(For more resources on Moodle, see here.)

The Moodle philosophy

Moodle is designed to support a style of learning called Social Constructionism. This style of learning is interactive. The social constructionist philosophy believes that people learn best when they interact with the learning material, construct new material for others, and interact with other students about the material. The difference between a traditional class and a class following the social constructionist philosophy is the difference between a lecture and a discussion. Moodle does not require you to use the social constructionist method for your courses; however, it best supports this method. For example, Moodle allows you to add several kinds of static course material. This is course material that a student reads, but does not interact with:

Web pages
Links to anything on the Web (including material on your Moodle site)
A directory of files
A label that displays any text or image

However, Moodle also allows you to add interactive course material. This is course material that a student interacts with by answering questions, entering text, or uploading files:

Assignment (uploading files to be reviewed by the teacher)
Choice (a single question)
Lesson (a conditional, branching activity)
Quiz (an online test)

Moodle also offers activities where students interact with each other. These are used to create social course material:

Chat (live online chat between students)
Forum (you can have zero or more online bulletin boards for each course)
Glossary (students and/or teachers can contribute terms to site-wide glossaries)
Wiki (a familiar collaboration tool for most younger students and many older students)
Workshop (supports the peer review and feedback of assignments that students upload)

In addition, some of Moodle's add-on modules add even more types of interaction. For example, one add-on module enables students and teachers to schedule appointments with each other.

The Moodle experience

Because Moodle encourages interaction and exploration, your students' learning experience will often be non-linear. Moodle can be used to enforce a specific order upon a course, using something called conditional activities. Conditional activities can be arranged in a sequence. Your course can contain a mix of conditional and non-linear activities. In this section, I'll take you on a tour of a Moodle learning site. You will see the student's experience from the time the student arrives at the site, through entering a course, to working through some material in the course. You will also see some student-to-student interaction, and some functions used by the teacher to manage the course.

The Moodle Front Page

The Front Page of your site is the first thing that most visitors will see. This section takes you on a tour of the Front Page of my demonstration site. Probably the best Moodle demo sites are http://demo.moodle.net/ and http://school.demo.moodle.net/.

Arriving at the site

When a visitor arrives at a learning site, the visitor sees the Front Page. You can require the visitor to register and log in before seeing any part of your site, or you can allow an anonymous visitor to see a lot of information about the site on the Front Page, which is what I have done. One of the first things that a visitor will notice is the announcement at the top and centre of the page, Moodle 2.0 Book Almost Ready!.
Below the announcement are two activities: a quiz, Win a Prize: Test Your Knowledge of E-mail History, and a chat room, Global Chat Room. Selecting either of these activities will require the visitor to register with the site, as shown in the following screenshot:

Anonymous, guest, and registered access

Notice the line Some courses may allow guest access in the middle of the page. You can set three levels of access for your site, and for individual courses:

Anonymous access allows anyone to see the contents of your site's Front Page. Notice that there is no anonymous access for courses. Even if a course is open to guests, the visitor must either manually log in as the user Guest, or you must configure the site to automatically log in a visitor as Guest.
Guest access requires the user to log in as Guest. This allows you to track usage by looking at the statistics for the user Guest. However, as everyone is logged in as the user Guest, you can't track individual users.
Registered access requires the user to register on your site. You can allow people to register with or without e-mail confirmation, require a special code for enrolment, manually create their accounts yourself, import accounts from another system, or use an outside system (like an LDAP server) for your accounts.

The Main menu

Returning to the Front Page, notice the Main menu in the upper-left corner. This menu consists of two documents that tell the user what the site is about and how to use it. In Moodle, icons tell the user what kind of resource will be accessed by a link. In this case, the icons tell the user that the first resource is a PDF (Adobe Acrobat) document, and the second is a web page. Course materials that students observe or read, such as web or text pages, hyperlinks, and multimedia files, are called Resources.

Self-service Business Intelligence, Creating Value from Data

Packt
20 Sep 2013
15 min read
(For more resources related to this topic, see here.)

Over the years most businesses have spent a considerable amount of time, money, and effort on building databases, reporting systems, and Business Intelligence (BI) systems. IT often thinks that it is providing the necessary information for business users to make the right decisions. However, when I meet the users they tell me a different story. Most often they say that they do not have the information they need to do their job, or that they have to spend a lot of time getting the relevant information. Many users state that they spend more time getting access to the data than understanding the information.

This divide between IT and business is very common. It causes a lot of frustration and can cost a lot of money, and it is a real issue that companies need to solve if they are to be profitable in the future. Research shows that by 2015 companies that build a good information management system will be 20 percent more profitable compared to their peers. You can read the entire research publication at http://download.microsoft.com/download/7/B/8/7B8AC938-2928-4B65-B1B3-0B523DDFCDC7/Big%20Data%20Gartner%20information_management_in_the_21st%20Century.pdf.

So how can an organization avoid the pitfalls in business intelligence systems and create an effective way of working with information? This article will cover the following topics:

Common user requirements related to BI
Understanding how these requirements can be solved by Analysis Services
An introduction to self-service reporting

Identifying common user requirements for a business intelligence system

In many cases, companies that struggle with information delivery do not have a dedicated reporting system or data warehouse. Instead the users have access only to the operational reports provided by each line-of-business application. This is extremely troublesome for users who want to compare information from different systems. As an example, think of a salesperson who wants a report that shows the sales pipeline from the Customer Relationship Management (CRM) system together with the actual sales figures from the Enterprise Resource Planning (ERP) system. Without a common reporting system, the users have to combine the information themselves with whatever tools are available to them. Most often this tool is Microsoft Excel. While Microsoft Excel is an application that can be used to effectively display information to the users, it is not the best system for data integration. To perform the steps of extracting, transforming, and loading data (ETL) from the source systems, the users have to write tedious formulas and macros to clean the data before they can start comparing the numbers and taking actual decisions based on the information.

Lack of a dedicated reporting system can also cause trouble with the performance of the Online Transaction Processing (OLTP) system. When I worked in the SQL Server support group at Microsoft, we often had customers contacting us about performance issues caused by users running heavy reports directly on the production system. To solve this problem, many companies invest in a dedicated reporting system or a data warehouse. The purpose of this system is to contain a database customized for reporting, where the data can be transformed and combined once and for all from all source systems. The data warehouse also serves another purpose, and that is to act as the storage of historic data.
Many companies that have invested in a common reporting database or data warehouse still require a person with IT skills to create a report. The main reason for this is that the organizations that have invested in a reporting system have had the expert users define the requirements for the system. Expert users have totally different requirements from the majority of the users in the organization, and an expert tool is often very hard to learn. An expert tool that is too hard for the normal users will put a strain on the IT department, which will have to produce all the reports. This results in end users waiting weeks and even months for their reports. One large corporation that I worked with had invested millions of dollars in a reporting solution, but to get a new report the users had to wait between nine and 12 months before they got the report in their hands. Imagine the frustration and the grief that waiting this long for the right information causes the end users.

To many users, business intelligence means simple reports with only the ability to filter data in a limited way. While simple reports such as the one in the preceding screenshot can provide valuable information, they do not give the users the possibility to examine the data in detail. The users cannot slice-and-dice the information and they cannot drill down to the details if the aggregated level that the report shows is insufficient for decision making. If users would like to have these capabilities, they need to export the information into a tool that enables them to easily do so. In general, this means that the users bring the information into Excel to be able to pivot the information and add their own measures. This often results in a situation where there are thousands of Excel spreadsheets floating around in the organization, all with their own data, and with different formulas calculating the same measures.

When analyzing data, the data itself is the most important thing. But if you cannot understand the values, the data is of no benefit to you. Many users find that it is easier to understand information if it is presented in a way that they can consume efficiently. This means different things to different users. If you are a CEO, you probably want to consume aggregated information in a dashboard such as the one you can see in the following screenshot. On the other hand, if you are a controller, you want to see the numbers on a very detailed level that enables you to analyze the information. A controller needs to be able to find the root cause, which in most cases includes analyzing information on a transaction level. A sales representative probably does not want to analyze the information at all. Instead, he or she would like a pre-canned report, filtered on customers and time, that shows what goods the customers have bought in the past, and maybe some suggested products that could be recommended to the customers.

Creating a flexible reporting solution

What the companies need is a way for the end users to access information in a user-friendly interface, where they can create their own analytical reports. Analytical reporting gives the user the ability to see trends, look at information on an aggregated level, and drill down to the detailed information with a single click. In most cases this will involve building a data warehouse of some kind, especially if you are going to reuse the information in several reports.
The reason for creating a data warehouse is mainly the ability to combine different sources into one infrastructure once. If you build reports that do the integration and cleaning of the data in the reporting layer, then you will end up doing the same tasks of data modification in every report. This is both tedious and error-prone, as the developer would have to repeat all the integration efforts in every report that needs to access the data. If you do it in the data warehouse, you can create an ETL program that moves the data and prepares it for the reports once, and all the reports can access this data.

A data warehouse is also beneficial from many other angles. With a data warehouse, you have the ability to offload the burden of running the reports from the transactional system, a system that is built mainly for high transaction rates at high speed, and not for providing summarized data in a report to the users. From a report authoring perspective, a data warehouse is also easier to work with. Consider the simple static report shown in the first screenshot. This report is built against a data warehouse that has been modeled using dimensional modeling. This means that the query used in the report is very simple compared to getting the information from a transactional system. In this case, the query is a join between six tables containing all the information that is available about dates, products, sales territories, and sales:

select
    f.SalesOrderNumber,
    s.EnglishProductSubcategoryName,
    SUM(f.OrderQuantity) as OrderQuantity,
    SUM(f.SalesAmount) as SalesAmount,
    SUM(f.TaxAmt) as TaxAmt
from FactInternetSales f
join DimProduct p on f.ProductKey = p.ProductKey
join DimProductSubcategory s on p.ProductSubcategoryKey = s.ProductSubcategoryKey
join DimProductCategory c on s.ProductCategoryKey = c.ProductCategoryKey
join DimDate d on f.OrderDateKey = d.DateKey
join DimSalesTerritory t on f.SalesTerritoryKey = t.SalesTerritoryKey
where c.EnglishProductCategoryName = @ProductCategory
  and d.CalendarYear = @Year
  and d.EnglishMonthName = @MonthName
  and t.SalesTerritoryCountry = @Country
group by f.SalesOrderNumber, s.EnglishProductSubcategoryName

You can download the example code files for all Packt books you have purchased from your account at http://www.packtpub.com. If you purchased this book elsewhere, you can visit http://www.packtpub.com/support and register to have the files e-mailed directly to you.

The preceding query is included for illustrative purposes. As you can see, it is very simple to write for someone who is well versed in Transact-SQL. Compare this to getting all the information necessary to produce this report from the operational system, where the information is spread across many more tables. It would be a daunting task. Even though the AdventureWorks sample database is very simple, we still need to query a lot of tables to get to the information. The following figure shows the tables from the OLTP system you would need to query to get the information available in the six tables of the data warehouse. Now imagine creating the same query against a real system; it could easily involve hundreds of tables to extract the data that is stored in a simple data model used for sales reporting. As you can clearly see, working against a model that has been optimized for reporting is much simpler when creating reports. Even with a well-structured data warehouse, many users would struggle with writing the select query driving the report shown earlier.
The users, in general, do not know SQL. They typically do not understand the database schema, since the table and column names usually consist of abbreviations that can be cryptic to the casual user. What if a user would like to change the report so that it shows data in a matrix with the ability to drill down to lower levels? They would most probably need to contact IT. IT would need to rewrite the query and change the entire report layout, causing a delay between the need for the data and its availability.

What is needed is a tool that enables the users to work with the business attributes instead of the tables and columns, with simple understandable objects instead of a complex database engine. Fortunately for us, SQL Server contains this functionality; it is just for us database professionals to learn how to bring these capabilities to the business. That is what this article is all about: creating a flexible reporting solution allowing the end users to create their own reports. I have assumed that you as the reader have knowledge of databases and are well versed with your data. What you will learn in this article is how to use a component of SQL Server 2012 called SQL Server Analysis Services to create a cube or semantic model, exposing data as simple business attributes and allowing the users to use different tools to create their own ad hoc reports.

Think of the cube as a PivotTable spreadsheet in Microsoft Excel. From the user's perspective, they have full flexibility when analyzing the data. You can drag-and-drop whichever column you want into either the rows, columns, or filter boxes, and the PivotTable spreadsheet summarizes the information depending on the different attributes added to it. The same capabilities are provided through the semantic model or the cube. When you are using the semantic model, the data is not stored locally within the PivotTable spreadsheet, as it is when you are using the normal PivotTable functionality in Microsoft Excel. This means that you are not limited to the number of rows that Microsoft Excel is able to handle. Since the semantic model sits in a layer between the database and the end user reporting tool, you have the ability to rename fields, add calculations, and enhance your data. It also means that whenever new data is available in the database and you have processed your semantic model, all the reports accessing the model will be updated.

The semantic model is available in SQL Server Analysis Services. It has been part of the SQL Server package since Version 7.0 and has had major revisions in the SQL Server 2005, 2008 R2, and 2012 versions. This article will focus on how to create semantic models or cubes through practical step-by-step instructions.
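To give a flavor of what querying business attributes rather than tables looks like, the following is a minimal MDX sketch. MDX is the query language that Analysis Services multidimensional cubes expose; the cube, measure, and attribute names below assume the standard Adventure Works sample cube rather than anything from this article, and end users would normally never write this by hand, because tools such as the Excel PivotTable generate it for them:

-- a minimal MDX sketch against the Adventure Works sample cube
-- (cube, measure, and member names are those of the Microsoft sample;
-- adjust them to your own model)
SELECT
    [Measures].[Sales Amount] ON COLUMNS,
    [Product].[Category].Members ON ROWS
FROM [Adventure Works]
WHERE ([Date].[Calendar Year].&[2008])

Note how the query talks about products, categories, and years, not about join keys or cryptic table abbreviations.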
Getting user value through self-service reporting

SQL Server Analysis Services is an application that allows you to create a semantic model that can be used to analyze very large amounts of data with great speed. The models can either be created by the users themselves, or created and maintained by IT. If the users want to create their own models, they can do so using a component of Microsoft Excel 2010 and later called PowerPivot. In Microsoft Excel 2013, it is included in the installed product, and you just need to enable it. In Microsoft Excel 2010, you have to download it as a separate add-in that you can find either on the Microsoft homepage or on the site http://www.powerpivot.com.

PowerPivot creates and uses a client-side semantic model that runs in the context of the Microsoft Excel process; you can only use Microsoft Excel as a way of analyzing the data. If you just want to run a user-created model, you do not need SQL Server at all; you just need Microsoft Excel. On the other hand, if you want to maintain user-created models centrally, then you need both SQL Server 2012 and SharePoint.

If, instead, you would like IT to create and maintain a central semantic model, then IT needs to install SQL Server Analysis Services. IT will, in most cases, not use Microsoft Excel to create the semantic models. Instead, IT will use Visual Studio as their tool. Visual Studio is much more suitable for IT than Microsoft Excel. Not only will they use it to create and maintain SQL Server Analysis Services semantic models, they will also use it for other database-related tasks. It is a tool that can be connected to a source control system, allowing several developers to work on the same project. The semantic models that they create from Visual Studio will run on a server that several clients can connect to simultaneously. The benefit of running a server-side model is that you can use the computational power of the server, which means that you can access more data. It also means that you can use a variety of tools to display the information.

Both approaches enable users to do their own self-service reporting. In the case where PowerPivot is used, they have complete freedom, but they also need the necessary knowledge to extract the data from the source systems and build the model themselves. In the case where IT maintains the semantic model, the users only need the knowledge to connect an end user tool such as Microsoft Excel to query the model. The users are, in this case, limited to the data that is available in the predefined model, but on the other hand, it is much simpler for them to do their own reporting. This can be seen in the preceding figure, which shows Microsoft Excel 2013 connected to a semantic model.

SQL Server Analysis Services is available in the Standard edition with limited functionality, and in the BI and Enterprise editions with full functionality. For smaller departmental solutions the Standard edition can be used, but in many cases you will find that you need either the BI or the Enterprise edition of SQL Server. If you would like to create in-memory models, you definitely cannot run the Standard edition of the software, since this functionality is not available in that edition.

Summary

In this article, you learned about the requirements that most organizations have when it comes to an information management platform. You were introduced to SQL Server Analysis Services, which provides the capabilities needed to create a self-service platform that can serve as the central place for all information handling. SQL Server Analysis Services allows users to work with the data in the form of business entities, instead of accessing a database schema. It allows users to use easy-to-learn query tools such as Microsoft Excel to analyze large amounts of data with subsecond response times. The users can easily create different kinds of reports and dashboards with the semantic model as the data source.
Resources for Article:

Further resources on this subject:
MySQL Linked Server on SQL Server 2008 [Article]
Connecting to Microsoft SQL Server Compact 3.5 with Visual Studio [Article]
FAQs on Microsoft SQL Server 2008 High Availability [Article]
Overview of REST Concepts and Developing your First Web Script using Alfresco

Packt
30 Aug 2010
10 min read
(For more resources on Alfresco, see here.)

Web Scripts allow you to develop entire web applications on Alfresco using just a scripting language (JavaScript) and a templating language (FreeMarker). They offer a lightweight framework for quickly developing even complex interfaces such as Alfresco Share and Web Studio. Besides this, Web Scripts can be used to develop Web Services that give external applications access to the features of the Alfresco repository. Your Web Services, implemented according to the principles of the REST architectural style, can easily be reused by disparate, heterogeneous systems. Specifically, in this article, you will learn:

What REST means and how it compares to SOAP
What elements are needed to implement a Web Script

A lightweight alternative to SOAP Web Services

The term Web Services generally denotes a large family of specifications and protocols, of which SOAP is only a small part, that are often employed to let applications provide and consume services over the World Wide Web (WWW). This basically means exchanging XML messages over HTTP. The main problem with the traditional approach to Web Services is that any implementation has to be compliant with a huge and complicated set of specifications. This makes the application itself complex and typically hard to understand, debug, and maintain. A whole cottage industry has grown up with the purpose of providing the tools necessary for letting developers abstract away this complexity. It is virtually impossible to develop any non-trivial SOAP-based application without these tools, and in addition, one or more of the other Web Services standards such as WS-Security, WS-Transaction, or WS-Coordination are often required. It is also impossible for any one person to have a reasonably in-depth knowledge of a meaningful portion of the whole Web Services stack (sometimes colloquially referred to as WS-*).

Recently, a backlash against this heavyweight approach to providing services over the Web has begun, and some people have started pushing for a different paradigm, one that does not completely ignore and disrupt the architecture of the World Wide Web. The main objection that the proponents of the REST architectural style, as this paradigm is called, raise with respect to WS-* is that the use of the term Web in Web Services is fraudulent and misleading. The World Wide Web, they claim, was designed in accordance with REST principles, and this is precisely why it was able to become the largest, most scalable information architecture ever realized. WS-*, on the other hand, is nothing more than a revamped, RPC-style message exchange paradigm. It's just CORBA once again, only this time over HTTP and using XML, to put it bluntly. As has purportedly been demonstrated, this approach will never scale to the size of the World Wide Web, as it gets in the way of important web concerns such as cacheability, the proper usage of the HTTP protocol methods, and the use of well-known MIME types to decouple clients from servers.

Of course, you don't have to buy totally into the REST philosophy, which will be described in the next section, in order to appreciate the elegance, simplicity, and usefulness of Alfresco Web Scripts. After all, Alfresco gives you the choice to use either Web Scripts or the traditional, SOAP-based Web Services. But you have to keep in mind that the newer and cooler pieces of Alfresco, such as Surf, Share, Web Studio, and the CMIS service, are being developed using Web Scripts. It is, therefore, mandatory that you know how Web Scripts work, how to develop them, and how to interact with them, if you want to be part of this brave new world of RESTful services.
REST concepts

The term REST was introduced by Roy T. Fielding, one of the architects of the HTTP protocol, in his Ph.D. dissertation titled Architectural Styles and the Design of Network-based Software Architectures (available online at http://www.ics.uci.edu/~fielding/pubs/dissertation/top.htm).

Constraints

In his work, Dr. Fielding introduces an "architectural style for distributed hypermedia systems" called Representational State Transfer (REST). He does so by starting from an architectural style that does not impose any constraints on implementations (called the Null Style) and progressively adding new constraints that together define what REST is. Those constraints are:

Client-Server interaction
Statelessness
Cacheability
Uniform Interface
Layered System
Code-On-Demand (optional)

Fielding then goes on to define the main elements of the REST architectural style. Foremost among these are resources and representations. In contrast with distributed object systems, where data is always hidden behind an interface that only exposes operations that clients may perform on said data, "REST components communicate by transferring a representation of a resource in a format matching one of an evolving set of standard data types, selected dynamically based on the capabilities or desires of the recipient and the nature of the resource."

Resources

It is important to understand what a resource is and what it isn't. A resource is some information that can be named. It can correspond to a specific entity on a data management system, such as a record in a database or a document in a DMS such as Alfresco. However, it can also map to a set of entities, such as a list of search results, or a non-virtual object like a person in the physical world. In any case, a resource is not the underlying entity. Resources need to be named, and in a globally distributed system such as the World Wide Web, they must be identified in a way that guarantees the universality and possibly the univocity of identifiers. On the Web, resources are identified using Uniform Resource Identifiers (URI). A specific category of URIs are Uniform Resource Locators (URL), which provide a way for clients to locate, that is to find, a resource anywhere on the Web, in addition to identifying it. It is also assumed that URIs never change over the lifetime of a resource, no matter how much the internal state of the underlying entities changes over time. This allows the architecture of the Web to scale immensely, as the system does not need to rely on centralized link servers that maintain references separated from the content.

Representations

Representations are sequences of bytes intended to capture the current or intended state of a resource, as well as metadata (in the form of name/value pairs) about the resource or the representation itself. The format of a representation is called its media type. Examples of media types are plain text, HTML, XML, JPEG, PDF, and so on. When servers and clients use a set of well-known, standardized media types, interoperability between systems is greatly simplified. Sometimes, it is possible for clients and servers to negotiate a specific format from a set that is supported by both.
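This negotiation happens through ordinary HTTP headers. The following is a hedged sketch of such an exchange, not taken from the original article (the host, resource, and JSON body are illustrative): the client states the media types it can accept, and the server labels the representation it returns.

GET /widgets/1 HTTP/1.1
Host: example.com
Accept: application/json

HTTP/1.1 200 OK
Content-Type: application/json

{"id": 1, "name": "sprocket"}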
Control data, which is exchanged between systems together with the representation, is used to determine the purpose of a message or the behavior of any intermediaries. Control data can be used by the client, for instance, to inform the server that the representation being transferred is meant to be the intended new state of the resource, or it can be used by the server to control how proxies, or the client itself, may cache representations. The most obvious examples of control data on the Web are HTTP methods and result codes. By using the PUT method, for example, a client usually signals to a server that it is sending an updated representation of the resource.

REST in practice

As we mentioned, REST is really just an abstract architectural style, not a specific architecture, network protocol, or software system. While no existing system exactly adheres to the full set of REST principles, the World Wide Web is probably the most well-known and successful implementation of them. Developing Web Services that follow the REST paradigm boils down to following a handful of rules and using HTTP the way it was meant to be used. The following sections detail some of those rules.

Use URLs to identify resources

It is important that you design the URLs for your Web Service in such a way that they identify resources and do not describe the operations performed on those resources. It is a common mistake to use URLs such as the following whenever, for instance, you want to design a web service for doing CRUD operations on widgets:

/widgetService/createNewWidget
/widgetService/readWidget?id=1
/widgetService/updateWidget?id=1
/widgetService/deleteWidget?id=1

A proper, RESTful URL space for this kind of usage scenario could instead be something like the following:

/widgets/ to identify a collection of widgets
/widgets/id to identify a single widget

A RESTful interaction with a server that implements the previous service would then be along the lines of the following (where we have indicated the HTTP verb together with the URL):

POST /widgets/ to create a new widget, whose representation is contained in the body of the request
GET /widgets/ to obtain a representation (listing) of all widgets in the collection
GET /widgets/1 to obtain a representation of the widget having id=1
POST /widgets/1 to update a widget by sending a new representation (the PUT verb could be used here as well)
DELETE /widgets/1 to delete a widget

You can see here how URLs representing resources, and the appropriate usage of HTTP methods, can be used to implement a correctly designed RESTful Web Service for CRUD operations on server-side objects.
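As a concrete illustration, here is a minimal Ruby sketch of this interaction using the net/http standard library. It is not from the original article; the host name is illustrative, and a server implementing the widget service above is assumed to exist.

# a hedged sketch of the RESTful widget interaction above,
# using only Ruby's standard library (host is illustrative)
require 'net/http'

Net::HTTP.start('example.com') do |http|
  # POST /widgets/ -- ask the server to create a new widget
  http.post('/widgets/', 'name=sprocket')

  # GET /widgets/1 -- retrieve a representation; safe and idempotent
  puts http.get('/widgets/1').body

  # PUT /widgets/1 -- send an updated representation; idempotent
  http.put('/widgets/1', 'name=sprocket-v2')

  # DELETE /widgets/1 -- remove the resource; also idempotent
  http.delete('/widgets/1')
end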
Use HTTP methods properly

There are four main methods that a client can use to tell a server which kind of operation to perform. You can call them commands, if you like. These are GET, POST, PUT, and DELETE. The HTTP 1.1 specification lists some other methods, such as HEAD, TRACE, and OPTIONS, but we can ignore them as they are not frequently used.

GET

GET is meant to be used for requests that are not intended to modify the state of a resource. This does not mean that the processing of a GET request by the server must be free of side effects; it is perfectly legal, for instance, to increment a counter of page views. GET requests, however, should be idempotent. The property of idempotency means that a sequence of N identical requests should have the same side effects as a single request. The methods GET, HEAD, PUT, and DELETE share this property. Basically, by using GET, a client signals that it intends to retrieve the representation of a resource. The server can perform any operation that causes side effects as part of the execution of the method, but the client cannot be held accountable for them.

PUT

PUT is generally used to send the modified representation of a resource. It is idempotent as well; multiple identical PUT requests have the same effect as a single request.

DELETE

DELETE can be used to request the removal of a resource. This is another idempotent method.

POST

The POST method is used to request that the server accepts the entity enclosed in the request as a new subordinate of the resource identified by the URI named in the request. POST is a bit like the Swiss army knife of HTTP and can be used for a number of purposes, including:

Annotation of existing resources
Posting a message to a bulletin board, newsgroup, or mailing list
Providing a block of data, such as the result of submitting a form, to a data-handling process
Extending a database through an append operation

POST is not an idempotent method. One of the main objections proponents of REST raise with respect to traditional Web Service architectures is that, with the latter, POST is used for everything. While you shouldn't feel compelled to use every possible HTTP method in your Web Service (it is perfectly RESTful to use only GET and POST), you should at least know the expectations behind them and use them accordingly.
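The excerpt stops here, but it helps to see where this leads. An Alfresco Web Script is conventionally made up of a descriptor, an optional JavaScript controller, and a FreeMarker template, named after the HTTP method and response format they serve. The following is a hedged sketch of a minimal read-only Web Script; the file names and the /demo/hello URL are illustrative, not taken from the article.

hello.get.desc.xml (the descriptor binds a URL and HTTP method to the script):

<webscript>
  <shortname>Hello</shortname>
  <description>Minimal sample Web Script</description>
  <url>/demo/hello</url>
  <authentication>guest</authentication>
</webscript>

hello.get.js (the optional controller populates the model):

// put a value into the model for the template to render
model.greeting = "Hello from a Web Script";

hello.get.html.ftl (the FreeMarker template renders the response):

${greeting}

Once registered, a GET request to /alfresco/service/demo/hello would return the rendered greeting.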

Nokogiri

Packt
27 Aug 2013
8 min read
(For more resources related to this topic, see here.)

Spoofing browser agents

When you request a web page, you send meta-information along with your request in the form of headers. One of these headers, User-Agent, informs the web server which web browser you are using. By default open-uri, the library we are using to scrape, will report your browser as Ruby. There are two issues with this. First, it makes it very easy for an administrator to look through their server logs and see if someone has been scraping the server, as Ruby is not a standard web browser. Second, some web servers will deny requests that are made by a non-standard browsing agent. We are going to spoof our browser agent so that the server thinks we are just another Mac using Safari. An example is as follows:

# import nokogiri to parse and open-uri to scrape
require 'nokogiri'
require 'open-uri'

# this string is the browser agent for Safari running on a Mac
browser = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_4) AppleWebKit/536.30.1 (KHTML, like Gecko) Version/6.0.5 Safari/536.30.1'

# create a new Nokogiri HTML document from the scraped URL and pass in
# the browser agent as a request header
doc = Nokogiri::HTML(open('http://nytimes.com', 'User-Agent' => browser))

# you can now go along with your request as normal
# you will show up as just another Safari user in the logs
puts doc.at_css('h2 a').to_s
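open-uri accepts any request header the same way, so you can make the request look even more like a normal browsing session. A small sketch, with header values that are illustrative rather than from the original article:

# send additional headers alongside the user agent
# (the Referer and Accept-Language values are illustrative)
require 'nokogiri'
require 'open-uri'

browser = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_4) AppleWebKit/536.30.1 (KHTML, like Gecko) Version/6.0.5 Safari/536.30.1'

doc = Nokogiri::HTML(open('http://nytimes.com',
  'User-Agent' => browser,
  'Referer' => 'http://www.google.com/',
  'Accept-Language' => 'en-US'))

puts doc.at_css('h2 a').to_s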
Caching

It's important to remember that every time we scrape content, we are using someone else's server's resources. While it is true that we are not using any more resources than a standard web browser request, the automated nature of our requests leaves the potential for abuse. In the previous examples we have searched for the top headline on The New York Times website. What if we took this code and put it in a loop because we always want to know the latest top headline? The code would work, but we would be launching a mini denial of service (DoS) attack on the server by hitting their page potentially thousands of times every minute. Many servers, Google being one example, have automatic blocking set up to prevent these rapid requests. They ban IP addresses that access their resources too quickly. This is known as rate limiting.

To avoid being rate limited and, in general, be a good netizen, we need to implement a caching layer. Traditionally in a large app this would be implemented with a database. That's a little out of scope for this article, so we're going to build our own caching layer with a simple TXT file. We will store the headline in the file and then check the file modification date to see if enough time has passed before checking for new headlines. Start by creating the cache.txt file in the same directory as your code:

$ touch cache.txt

We're now ready to craft our caching solution:

# import nokogiri to parse and open-uri to scrape
require 'nokogiri'
require 'open-uri'

# set how long in minutes until our data is expired
# multiplied by 60 to convert to seconds
expiration = 1 * 60

# file to store our cache in
cache = "cache.txt"

# Calculate how old our cache is by subtracting its modification time
# from the current time.
# Time.new gets the current time
# The mtime method gets the modification time on a file
cache_age = Time.new - File.new(cache).mtime

# if the cache age is greater than our expiration time
if cache_age > expiration
  # our cache has expired
  puts "cache has expired. fetching new headline"

  # we will now use our code from the quick start to snag a new headline
  # scrape the web page
  data = open('http://nytimes.com')

  # create a Nokogiri HTML Document from our data
  doc = Nokogiri::HTML(data)

  # parse the top headline and clean it up
  headline = doc.at_css('h2 a').content.gsub(/\n/, " ").strip

  # we now need to save our new headline
  # the second File.open parameter "w" tells Ruby to overwrite the old file
  File.open(cache, "w") do |file|
    # we then simply puts our text into the file
    file.puts headline
  end

  puts "cache updated"
else
  # we should use our cached copy
  puts "using cached copy"

  # read cache into a string using the read method
  headline = IO.read("cache.txt")
end

puts "The top headline on The New York Times is ..."
puts headline

Our cache is set to expire in one minute, so assuming it has been one minute since you created your cache.txt file, let's fire up our Ruby script:

$ ruby cache.rb
cache has expired. fetching new headline
cache updated
The top headline on The New York Times is ...
Supreme Court Invalidates Key Part of Voting Rights Act

If we run our script again before another minute passes, it should use the cached copy:

$ ruby cache.rb
using cached copy
The top headline on The New York Times is ...
Supreme Court Invalidates Key Part of Voting Rights Act
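The cache keeps repeated runs of the same script from hammering the server; if a single run has to fetch many pages, pausing between requests serves the same good-netizen goal. A minimal sketch, with illustrative URLs and an arbitrary five-second delay:

# throttle a scraping loop by sleeping between requests
# (the URLs and the delay length are illustrative)
require 'nokogiri'
require 'open-uri'

urls = ['http://example.com/page1', 'http://example.com/page2']

urls.each do |url|
  doc = Nokogiri::HTML(open(url))
  puts doc.at_css('title').content
  # wait five seconds before the next request
  sleep 5
end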
SSL

By default, open-uri does not support scraping a page with SSL. This means any URL that starts with https will give you an error. We can get around this by adding one line below our require statements:

# import nokogiri to parse and open-uri to scrape
require 'nokogiri'
require 'open-uri'

# disable SSL certificate checking to allow scraping
OpenSSL::SSL::VERIFY_PEER = OpenSSL::SSL::VERIFY_NONE

Mechanize

Sometimes you need to interact with a page before you can scrape it. The most common examples are logging in or submitting a form. Nokogiri is not set up to interact with pages. Nokogiri doesn't even scrape or download the page; that duty falls on open-uri. If you need to interact with a page, there is another gem you will have to use: Mechanize. Mechanize is created by the same team as Nokogiri and is used for automating interactions with websites. Mechanize includes a functioning copy of Nokogiri.

To get started, install the mechanize gem:

$ gem install mechanize
Successfully installed mechanize-2.7.1

We're going to recreate the code sample from the installation where we parsed the top Google results for "packt", except this time we are going to start by going to the Google home page and submitting the search form:

# mechanize takes the place of Nokogiri and open-uri
require 'mechanize'

# create a new mechanize agent
# think of this as launching your web browser
agent = Mechanize.new

# open a URL in your agent / web browser
page = agent.get('http://google.com/')

# the google homepage has one big search box
# if you inspect the HTML, you will find a form with the name 'f'
# inside of the form you will find a text input with the name 'q'
google_form = page.form('f')

# tell the page to set the q input inside the f form to 'packt'
google_form.q = 'packt'

# submit the form
page = agent.submit(google_form)

# loop through an array of objects matching a CSS selector.
# mechanize uses the search method instead of xpath or css;
# search supports both xpath and css selectors
# you can use the search method in Nokogiri too if you like it
page.search('h3.r').each do |link|
  # print the link text
  puts link.content
end

Now execute the Ruby script and you should see the titles for the top results:

$ ruby mechanize.rb
Packt Publishing: Home
Books
Latest Books
Login/register
PacktLib
Support
Contact
Packt - Wikipedia, the free encyclopedia
Packt Open Source (PacktOpenSource) on Twitter
Packt Publishing (packtpub) on Twitter
Packt Publishing | LinkedIn
Packt Publishing | Facebook

For more information refer to the site: http://mechanize.rubyforge.org/

People and places you should get to know

If you need help with Nokogiri, here are some people and places that will prove invaluable.

Official sites

The following are the sites you can refer to:
Homepage and documentation: http://nokogiri.org
Source code: https://github.com/sparklemotion/nokogiri/

Articles and tutorials

The top five Nokogiri resources are as follows:
Nokogiri History, Present, and Future presentation slides from Nokogiri co-author Mike Dalessio: http://bit.ly/nokogiri-goruco-2013
An in-depth tutorial covering Ruby, Nokogiri, Sinatra, and Heroku, complete with a 90-minute behind-the-scenes screencast, written by me: http://hunterpowers.com/data-scraping-and-more-with-ruby-nokogiri-sinatra-and-heroku
RailsCasts episode 190: Screen Scraping with Nokogiri – an excellent Nokogiri quick start video: http://railscasts.com/episodes/190-screen-scraping-with-nokogiri
RailsCasts episode 191: Mechanize – an excellent Mechanize quick start video: http://railscasts.com/episodes/191-mechanize
Nokogiri co-author Mike Dalessio's blog: http://blog.flavorjon.es

Community

The community sites are as follows:
Listserve: http://groups.google.com/group/nokogiri-talk
GitHub: https://github.com/sparklemotion/nokogiri/
Wiki: http://github.com/sparklemotion/nokogiri/wikis
Known issues: http://github.com/sparklemotion/nokogiri/issues
Stackoverflow: http://stackoverflow.com/search?q=nokogiri

Twitter

Nokogiri leaders on Twitter are:
Nokogiri co-author Mike Dalessio: @flavorjones
Nokogiri co-author Aaron Patterson: @tenderlove
Me: @TheHunter
For more information on open source, follow Packt Publishing: @PacktOpenSource

Summary

Thus, we learned about the Nokogiri open source library in this article.

Resources for Article:

Further resources on this subject:
URL Shorteners – Designing the TinyURL Clone with Ruby [Article]
Introducing RubyMotion and the Hello World app [Article]
Building the Facebook Clone using Ruby [Article]