
How-To Tutorials - Web Development

1802 Articles

Showcasing Personnel with Faculty/Staff Directory using Plone 3

Packt
18 Jan 2010
6 min read
Faculty/Staff Directory is practically a departmental web site in a box, but it doesn't stop there. The product is general enough to serve in non-academic settings as well. It is also not limited to people; it has been repurposed for such diverse applications as cataloguing tea leaves. It really is a general-purpose taxonomy engine—albeit with an academic bent in many of its naming choices. This article is an exploration of a product that will likely provide the framework for much of your site. We'll tour its features, get ideas on how they're commonly used in a school setting, and, finally, get a sneak peek into the future of the product.

Install the product

Faculty/Staff Directory depends on several other products: membrane, Relations, and archetypes.schemaextender. Thus, the easiest way to get it, as documented in its readme, is by adding Products.FacultyStaffDirectory to your buildout.cfg, like so:

[buildout]
eggs =
    ...(other eggs)...
    Products.FacultyStaffDirectory

Then, shut down Zope, and run buildout. After starting up Zope again, install FacultyStaffDirectory in the Add-on Products control panel, and you're ready to go.

Test drive Faculty/Staff Directory

Faculty/Staff Directory (FSD) is tremendously flexible; you can use it as a simple list of phone numbers or as the foundation supporting the rest of your site. In this section, we take FSD for a spin, with ample pit stops along the way to discuss alternative design possibilities. Keep in mind that FSD's features are given to creative repurposing. We'll see not only their typical uses but some more inventive ones as well.

Create a directory and choose global roles

Faculty/Staff Directory centers around the idea of collecting people in a central folder called, fittingly, a faculty/staff directory. Our first step is to create that folder. Once you've installed FSD in the Add-on Products control panel, choose a place in your site where you would like to place a personnel directory, and add a faculty/staff directory there. You'll be prompted for two things: Title (which is usually something like People) and a selection of Roles. These roles, part of FSD's optional user-and-group integration, are granted site-wide to all people in the directory. Thus, as with the similarly scoped roles in the Users and Groups control panel, it's best to use them sparingly: choose only Member or, if you don't intend the people represented in your directory to log in, no roles at all. Most roles should be assigned on a per-folder basis using the Sharing tab, and FSD provides facilities for that as well, as we discuss later in the Integrate users and groups section.

Add people

Now that you have a place to keep them, it's time to add actual personnel. FSD represents people through the aptly named Person type. A fully tricked-out person might look like this:

People, without exception, live directly inside the faculty/staff directory—though they often appear to be elsewhere, as we will soon see. Each person can track the following information:

Basic Information

Access Account ID: Doubling as the object ID (or "short name") of the person, this field should contain the person's institution-wide login name. If you choose to take advantage of FSD's user-and-group integration, this is how logged-in users are matched with their Person objects. The name of this field and its validation requirements can be changed (and should be, in most deployments) in the Faculty/Staff Directory control panel under Site Setup.

Name: First, middle, and last names, as well as a suffix such as Ph.D. or Jr.

Image: A picture of the person in GIF, JPEG, or PNG format, for use on their page and in some listings. It will be automatically scaled as necessary.

Classifications: Classifications are FSD's most prominent way to group people. This is a convenient place to assign one or more when creating a person.

Departments: Another way of grouping people. See Group People, below, for an in-depth comparison of all the various types of groupings.

Password: If the Person objects provide user passwords option in the FSD control panel is on, one can log into the Plone site using the Access Account ID as a username and the contents of this field as a password.

Personal Assistant(s): Other people who should have access to edit this person's information (excluding the password and the fields under User Settings).

Contact Information

Email: This email address is run through Plone's spam armoring, as yet not widely cracked, before being displayed to visitors. An alternative armoring method ("somebody AT here DOT edu") is available from the FSD control panel under Site Setup.

Street Address, City, State, Postal Code

Phone: The required format of the Office Phone field, along with its example text, can be set in the FSD control panel (a must for installations outside the United States and Canada).

Professional Information

Job Titles: As many job titles as you care to provide, one per line. These are all listed on the person's page.

Biography: A rich text field for the person's biographical information. Since this field can contain all the formatting you can muster (headings, images, and all), it is a wonderful thing to abuse. Fill it with lists of published journal articles, current projects, and anything else appropriate to show on the person's page.

Education: A plain text field where every line represents a conferred degree, certification, or such.

Web Sites: A list of URLs, one per line, to display on the person's page. There is no option to provide link text for these URLs, so you may wish to embed links in the Biography field instead.

Committees, Specialties: The committees and specialties to which this person belongs. See the full discussion of FSD's various methods of grouping people under Group People, below.

User Settings

Language, Content Editor: If you leave FSD's user-and-group integration enabled for the Person type, these duplications of the standard Plone options will apply. Integration can be disabled on a type-by-type basis in the FSD control panel under Site Setup.

That's a lot of information, but you can fill out only what you'll use. Fields left blank will be omitted from directory pages, labels and all. For more thorough customization, you can write a Faculty/Staff Directory extender product to hide inapplicable fields from your content providers or to collect information that FSD doesn't ordinarily track.

Here's how to add people to the directory: From within the directory, pull down the Add new… menu, and choose Person. Fill out some of the person's attributes. Be sure to assign at least one classification to each person, but don't worry about departments, specialties, or committees yet, as we're about to cover them in detail.


Setting up Node

Packt
07 Aug 2013
10 min read
System requirements

Node runs on POSIX-like operating systems, the various UNIX derivatives (Solaris, and so on) or workalikes (Linux, Mac OS X, and so on), as well as on Microsoft Windows, thanks to extensive assistance from Microsoft. Indeed, many of Node's built-in functions are direct corollaries to POSIX system calls. It can run on machines both large and small, including tiny ARM devices such as the Raspberry Pi microscale embeddable computer for DIY software/hardware projects.

Node is now available via package management systems, limiting the need to compile and install from source. Installing from source requires having a C compiler (such as GCC) and Python 2.7 (or later). If you plan to use encryption in your networking code, you will also need the OpenSSL cryptographic library. The modern UNIX derivatives almost certainly come with these, and Node's configure script (see later, when we download and configure the source) will detect their presence. If you should have to install them, Python is available at http://python.org and OpenSSL is available at http://openssl.org.

Installing Node using package managers

The preferred method for installing Node, now, is to use the versions available in package managers such as apt-get or MacPorts. Package managers simplify your life by helping to maintain the current version of the software on your computer and ensuring that dependent packages are updated as necessary, all by typing a simple command such as apt-get update. Let's go over this first.

Installing on Mac OS X with MacPorts

The MacPorts project (http://www.macports.org/) has for years been packaging a long list of open source software packages for Mac OS X, and they have packaged Node. After you have installed MacPorts using the installer on their website, installing Node is pretty much this simple:

$ sudo port search nodejs
nodejs @0.10.6 (devel, net)
    Evented I/O for V8 JavaScript
nodejs-devel @0.11.2 (devel, net)
    Evented I/O for V8 JavaScript
Found 2 ports.
--
npm @1.2.21 (devel)
    node package manager
$ sudo port install nodejs npm
.. long log of downloading and installing prerequisites and Node

Installing on Mac OS X with Homebrew

Homebrew is another open source software package manager for Mac OS X, which some say is the perfect replacement for MacPorts. It is available through their home page at http://mxcl.github.com/homebrew/. After installing Homebrew using the instructions on their website, using it to install Node is as simple as this:

$ brew search node
leafnode    node
$ brew install node
==> Downloading http://nodejs.org/dist/v0.10.7/node-v0.10.7.tar.gz
######################################################################## 100.0%
==> ./configure --prefix=/usr/local/Cellar/node/0.10.7
==> make install
==> Caveats
Homebrew installed npm. We recommend prepending the following path to your
PATH environment variable to have npm-installed binaries picked up:
    /usr/local/share/npm/bin
==> Summary
/usr/local/Cellar/node/0.10.7: 870 files, 16M, built in 21.9 minutes

Installing on Linux from package management systems

While it's still premature for Linux distributions or other operating systems to prepackage Node with their OS, that doesn't mean you cannot install it using the package managers. Instructions on the Node wiki currently list packaged versions of Node for Debian, Ubuntu, openSUSE, and Arch Linux. See: https://github.com/joyent/node/wiki/Installing-Node.js-via-package-manager

For example, on Debian sid (unstable):

# apt-get update
# apt-get install nodejs # Documentation is great.

And on Ubuntu:

# sudo apt-get install python-software-properties
# sudo add-apt-repository ppa:chris-lea/node.js
# sudo apt-get update
# sudo apt-get install nodejs npm

We can expect in due course that the Linux distros and other operating systems will routinely bundle Node into the OS, like they do with other languages today.

Installing the Node distribution from nodejs.org

The nodejs.org website offers prebuilt binaries for Windows, Mac OS X, Linux, and Solaris. You simply go to the website, click on the Install button, and run the installer. For systems with package managers, such as the ones we've just discussed, it's preferable to use that installation method, because you'll find it easier to stay up-to-date with the latest version. However, on Windows this method may be preferred. For Mac OS X, the installer is a PKG file giving the typical installation process. For Windows, the installer simply takes you through the typical install wizard process. Once finished with the installer, you have a command-line tool with which to run Node programs. The pre-packaged installers are the simplest ways to install Node, for those systems for which they're available.

Installing Node on Windows using Chocolatey Gallery

Chocolatey Gallery is a package management system built on top of NuGet. Using it requires a Windows machine modern enough to support PowerShell and the .NET Framework 4.0. Once you have Chocolatey Gallery (http://chocolatey.org/) installed, installing Node is as simple as this:

C:\> cinst nodejs

Installing the StrongLoop Node distribution

StrongLoop (http://strongloop.com) has put together a supported version of Node that is prepackaged with several useful tools. This is a Node distribution in the same sense in which Fedora or Ubuntu are Linux distributions. StrongLoop brings together several useful packages, some of which were written by StrongLoop. StrongLoop tests the packages together, and distributes installable bundles through their website. The packages in the distribution include Express, Passport, Mongoose, Socket.IO, Engine.IO, Async, and Request. We will use all of those modules in this book.

To install, navigate to the company home page and click on the Products link. They offer downloads of precompiled packages for both RPM and Debian Linux systems, as well as Mac OS X and Windows. Simply download the appropriate bundle for your system.

For the RPM bundle, type the following:

$ sudo rpm -i bundle-file-name

For the Debian bundle, type the following:

$ sudo dpkg -i bundle-file-name

The Windows and Mac bundles are the usual sort of installable packages for each system. Simply double-click on the installer bundle, and follow the instructions in the install wizard. Once StrongLoop Node is installed, it provides not only the node and npm commands (we'll go over these in a few pages), but also the slnode command. That command offers a superset of the npm commands, such as boilerplate code for modules, web applications, or command-line applications.

Installing from source on POSIX-like systems

Installing the pre-packaged Node distributions is currently the preferred installation method. However, installing Node from source is desirable in a few situations:

- It could let you optimize the compiler settings as desired
- It could let you cross-compile, say for an embedded ARM system
- You might need to keep multiple Node builds for testing
- You might be working on Node itself

Now that you have the high-level view, let's get our hands dirty mucking around in some build scripts. The general process follows the usual configure, make, and make install routine that you may already have performed with other open source software packages. If not, don't worry, we'll guide you through the process. The official installation instructions are in the Node wiki at https://github.com/joyent/node/wiki/Installation.

Installing prerequisites

As noted a minute ago, there are three prerequisites: a C compiler, Python, and the OpenSSL libraries. The Node installation process checks for their presence and will fail if the C compiler or Python is not present. The specific method of installing these depends on your operating system. These commands will check for their presence:

$ cc --version
i686-apple-darwin10-gcc-4.2.1 (GCC) 4.2.1 (Apple Inc. build 5666) (dot 3)
Copyright (C) 2007 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
$ python
Python 2.6.6 (r266:84292, Feb 15 2011, 01:35:25)
[GCC 4.2.1 (Apple Inc. build 5664)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>>

Installing developer tools on Mac OS X

The developer tools (such as GCC) are an optional installation on Mac OS X. There are two ways to get those tools, both of which are free. On the OS X installation DVD is a directory labeled Optional Installs, in which there is a package installer for—among other things—the developer tools, including Xcode. The other method is to download the latest copy of Xcode (for free) from http://developer.apple.com/xcode/. Most other POSIX-like systems, such as Linux, include a C compiler with the base system.

Installing from source for all POSIX-like systems

First, download the source from http://nodejs.org/download. One way to do this is with your browser, and another way is as follows:

$ mkdir src
$ cd src
$ wget http://nodejs.org/dist/v0.10.7/node-v0.10.7.tar.gz
$ tar xvfz node-v0.10.7.tar.gz
$ cd node-v0.10.7

The next step is to configure the source so that it can be built. It is done with the typical sort of configure script, and you can see its long list of options by running the following:

$ ./configure --help

To cause the installation to land in your home directory, run it this way:

$ ./configure --prefix=$HOME/node/0.10.7
..output from configure

If you want to install Node in a system-wide directory, simply leave off the --prefix option, and it will default to installing in /usr/local. After a moment it'll stop, having most likely configured the source tree for installation in your chosen directory. If this doesn't succeed, it will print a message about something that needs to be fixed. Once the configure script is satisfied, you can go on to the next step.

With the configure script satisfied, compile the software:

$ make
.. a long log of compiler output is printed
$ make install

If you are installing into a system-wide directory, do the last step this way instead:

$ make
$ sudo make install

Once installed, you should make sure to add the installation directory to your PATH variable, as follows:

$ echo 'export PATH=$HOME/node/0.10.7/bin:${PATH}' >>~/.bashrc
$ . ~/.bashrc

For csh users, use this syntax to make an exported environment variable:

$ echo 'setenv PATH $HOME/node/0.10.7/bin:${PATH}' >>~/.cshrc
$ source ~/.cshrc

This should result in some directories like this:

$ ls ~/node/0.10.7/
bin include lib share
$ ls ~/node/0.10.7/bin
node node-waf npm

Maintaining multiple Node installs simultaneously

Normally you won't have multiple versions of Node installed, and doing so adds complexity to your system. But if you are hacking on Node itself, or are testing against different Node releases, or any of several similar situations, you may want to have multiple Node installations. The method to do so is a simple variation on what we've already discussed. As you may have noticed in the instructions discussed earlier, the --prefix option was used in a way that directly supports installing several Node versions side-by-side in the same directory:

$ ./configure --prefix=$HOME/node/0.10.7

And:

$ ./configure --prefix=/usr/local/node/0.10.7

This initial step determines the install directory. Clearly, when Version 0.12.15, or whichever version comes next, is released, you can change the install prefix to have the new version installed side-by-side with the previous versions. Switching between Node versions is simply a matter of changing the PATH variable (on POSIX systems), as follows:

$ export PATH=/usr/local/node/0.10.7/bin:${PATH}

It starts to be a little tedious to maintain this after a while. For each release, you have to set up Node, npm, and any third-party modules you desire in your Node install; also, the command shown to change your PATH is not quite optimal. Inventive programmers have created several version managers to make this easier by automatically setting up not only Node, but npm also, and providing commands to change your PATH the smart way:

- Node version manager: https://github.com/visionmedia/n
- Nodefront, aids in rapid frontend development: http://karthikv.github.io/nodefront/
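Whichever installation route you take, a quick smoke test will confirm that the node command works. The following is a minimal sketch; the file name and port number are arbitrary choices for illustration, not anything Node requires:

// hello.js - a tiny HTTP server for verifying the installation
var http = require('http');

http.createServer(function (req, res) {
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('Hello from Node\n');
}).listen(8124, function () {
  // The port (8124) is an arbitrary choice for local testing
  console.log('Server running at http://127.0.0.1:8124/');
});

Run it with node hello.js and browse to http://127.0.0.1:8124/. If the greeting appears, both the runtime and your PATH setup are working; press Ctrl-C to stop the server.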


AJAX/Dynamic Content and Interactive Forms in Joomla!

Packt
16 Oct 2009
13 min read
AJAX: an acronym that Jesse James Garrett of AdaptivePath.com came up with in 2005. Just a few short years later, it seems like every site has a "taste" of AJAX in it. If you're totally new to AJAX, I'll just point out that, at its core, AJAX is nothing very scary or horrendous. AJAX isn't even a new technology or language. Essentially, AJAX stands for Asynchronous JavaScript and XML, and it is the technique of using JavaScript and XML to send and receive data between a web browser and a web server. The biggest advantage this technique has is that you can dynamically update a piece of content on your web page or web form with data from the server (preferably formatted in XML), without forcing the entire page to reload. The implementation of this technique has made it obvious to many web developers that they can start making advanced web applications (sometimes called RIAs—Rich Interface Applications) that work and feel more like software applications than web pages.

Keep in mind that the word AJAX is starting to have its own meaning (as you'll also note its occasional use here, as well as all over the Web, as a proper noun rather than an all-cap acronym). For example, a Microsoft web developer may use VBScript instead of JavaScript to serve up Microsoft Access database data that is transformed into JSON (not XML) using a .NET server-side script. Today, that site would still be considered an AJAX site rather than an "AVAJ" site (yep, AJAX just sounds cooler). In fact, it's getting to the point where just about anything on a web site (that isn't in Flash) that slides, moves, fades, or pops up without rendering a new browser window is considered an "Ajaxy" site. In truth, a large portion of these sites don't truly qualify as using AJAX; they're just using straight-up JavaScripting. Generally, if you use cool JavaScripts in your Joomla! site, it will probably be considered Ajaxy, despite not being asynchronous or using any XML.

Want more info on this AJAX business? The w3schools site has an excellent introduction to AJAX, explaining it in straightforward, simple terms. They even have a couple of great tutorials that are fun and easy to accomplish, even if you only have a little HTML, JavaScript, and server-side script (PHP or ASP) experience (no XML experience required): http://w3schools.com/ajax/.

Preparing for dynamic content and interactive forms

Gone are the days of clicking, submitting, and waiting for the next page to load, or manually compiling your own content from all your various online identities to post in your site. A web page using AJAX techniques (if applied properly) will give the user a smoother and leaner experience. Click on a drop-down option, and check-box menus underneath are immediately updated with the relevant choices—no submitting, no waiting. Complicated forms that, in the past, took two or three screens to process can be reduced to one convenient screen by implementing the form with AJAX.

As wonderful as this all sounds, I must again offer a quick disclaimer: I understand that, as with drop-down menus and Flash, you may want AJAX to be in your site, or your clients may be demanding that AJAX be in their sites. Just keep in mind that AJAX techniques are best used in situations where they truly benefit a user's experience of a page; for example, being able to painlessly add relevant content via an extension, or cutting a lengthy web process form down from three pages to one. In a nutshell, using an AJAX technique simply to say your site is an AJAX site is probably not a good idea.

You should be aware that, if not implemented properly, some uses of AJAX can compromise the security of your site. You may inadvertently end up disabling key web browser features (such as back buttons or the history manager). Then there are all the basic usability and accessibility issues that JavaScript in general can bring to a site. Some screen readers may not be able to read a new screen area that's been generated by JavaScript. If you cater to users who rely on tabbing through content, navigation may be compromised once new content is updated. There are also interface design problems that AJAX brings to the table (and Flash developers can commiserate). Many times, in trying to limit screen real estate and simplify a process, developers actually end up creating a form or interface that is unnecessarily complex and confusing, especially when your user is expecting a web page to, well, act like a normal web page.

Remember to check in with Don't Make Me Think: this is the Steve Krug book I recommend for help with any interface usability questions you may run into. Really interested in taking on AJAX? For you programmers, I highly recommend "AJAX and PHP: Building Responsive Web Applications" by Cristian Darie, Bogdan Brinzarea, Filip Chereches-Tosa, and Mihai Bucica, Packt Publishing. In it, you'll learn the ins and outs of AJAX development, including handling security issues. You'll also do some very cool stuff, such as make your own Google-style auto-suggest form and a drag-and-drop sortable list (and that's just two of the many fun things to learn in the book).

So, that said, you're now all equally warned and armed with all the knowledgeable resources I can think to throw at you. Let's get to it: how exactly do you go about getting something Ajaxy into your Joomla! site?

Joomla! extensions

Keep in mind, extensions are not part of your template. They are additional files with Joomla!-compatible PHP code, which are installed separately into their own directories in your Joomla! 1.5 installation. Once installed, they are available to be used with any template that is also installed in your Joomla! installation. Even though these are not part of your template, you might have to prepare your template to be fully compatible with them. Some extensions may have their own stylesheets, which are installed in their extension directory. Once you've installed an extension, you may want to go into your own template's stylesheet so that it nicely displays XHTML objects and content that the extension may output into your site.

Extensions are any component, module, or plugin that you install into your Joomla! 1.5 installation. Components control content that displays in the main type="component" jdoc tag in your template. Note that components may also have module settings and the ability to display content in assigned module positions. The poll component is a good example of a component that also has module settings. Modules are usually smaller and lighter and only display in module positions. Plugins generally help you out more on the backend of your site, say to switch WYSIWYG editors or to enable OpenID logins, but as we'll see, some plugins can affect the display of your site to users as well.

Deciding where AJAX is best used

On the whole, we're going to look at the most popular places where AJAX can really aid and enrich your site's user experience. We'll start with users adding comments to articles and pages, and streamlining that process. We'll then take a look at a nice plugin that can enhance pagination for people reading long articles on your site. We'll then move on to the RSS Reader module, which can enhance the content in your modules (and even make your users have fun arranging them). Finally, we'll realize that AJAX isn't just for impressing your site users. You, as an administrator, can (and do) take advantage of AJAX as well.

Please note: these extensions were chosen by me based on the following criteria:

1. They provided some useful enhancement to a basic site.
2. They, at the time of this writing, were free and received very good feedback on Joomla!.org's extensions site: http://extensions.Joomla.org.

In the next few pages, I'll walk you through installing these extensions and discuss any interesting insights for doing so, as well as the benefits of their enhancements (and some drawbacks). But you must use the extension links provided to make sure you download the latest stable versions of these extensions, and follow the extension author's installation guides when installing them into your Joomla! site. If you run into any problems installing these extensions, please contact the extension's author for support. Always be sure to take the normal precaution of backing up your site before installation, at least for any non-stable extensions you may decide to try.

Installing the Joomla! comment component

Chances are, if you've invested in Joomla! 1.5 as your CMS, you need some powerful capabilities. Easy commenting with "captcha" images to reduce spam is always helpful: http://extensions.Joomla.org/extensions/contacts-&-feedback/comments/4389/details

To install this extension (and the other few coming up), you basically go to Extensions | Install/Uninstall and upload the extension's ZIP file. You'll then proceed to the plugin, component, and/or modules panel and activate the extension so that it is ready to be implemented on your site.

Upon installing this comment component, to my surprise, it told me that it was for an older version of Joomla!, even though everything on the download page seemed to indicate it worked with 1.5. The installation error did mention that I just needed to activate the System Legacy plugin and it would work. So I did, and the comment form appeared on all my article pages. This may seem like a step backward, but for extensions like this, which are very useful, if they work well and stay stable in Legacy Mode, a developer may have made the decision to leave well enough alone. The developer will most likely eventually upgrade the extension (especially if Legacy Mode goes away in future versions of Joomla!). Just be sure to sign up for updates or check back on any extensions you use if you do upgrade your site. You should do this regardless of whether your extensions run natively or in Legacy Mode.

The advantage of AJAX in a comment form is that a user isn't distracted, and comments post smoothly and right away (a bit of instant gratification for the user, even if you never "confirm" the post and it never actually gets published for other viewers). This extension outputs tables, but for the ease of handling robust comments and having a great admin area to manage them, I'll make do. The following screenshot shows the Joomla! comment component appearing in an article page:

As you can see in the previous image, I have some strong styles that are trying to override the component's styles. A closer look at the output HTML will give me some class names and objects that I can target with CSS. The administration panel's Component | Joomla! Comment | Other Component settings page also allows quite a few customization options. The Layout tab offers several included style sheets to select from, as well as the option to copy the CSS sheet out to my template's directory (the component will do this automatically). This way, I can amend it with my own specific CSS, giving my comment form a better fit with my template's design.

Installing the Core Design Ajax Pagebreak plugin

If your site has long articles that get broken down regularly into three or more pages, Pagebreak is a nice plugin that uses AJAX to smoothly load the next page. It's a useful feature that will also leave your site users with a little "oh wow" expression. http://www.greatJoomla.com/news/plugins/demo-core-design-ajaxpagebreak-plugin.html

After successfully installing this plugin, I headed over to Extensions | Plugin Manager and activated it. I then beefed out an article (with Lorem Ipsum) and added page breaks to it on the Home Page. It's hard to see in a screenshot, but the new page content appears below the Prev and Next links without a full browser redraw. I've set my site up with SEO-friendly URLs, and this plugin does amend the URLs with a query string; that is, http://yoururl.com/1.5dev/menu-item-4?start=1. I'm not sure how this will really affect the SEO "friendliness" value of my URL, but it does give me a specific URL to give to people if I want to send them to a targeted page, which is very good for accessibility. One thing to note: the first page of the article is the original URL; that is, http://yoururl.com/1.5dev/menu-item-4. The second page then appends ?start=1, the third page becomes ?start=2, and so on. Just be aware that when sending links out to people, it is always best to pull the URL directly from the site so that you know it's correct!

Installing the AJAX RSS Reader Version 3 with Draggable Divs module

RSS feeds are a great way to bring together a wide variety of content, as well as bring all of your or your organization's "social network happenings" into one place on your own site. I like to use RSS feeds to get people interested in knowing what an organization is doing (or tweeting), or reading, and so on. Having links and lists of what's currently going on can compel users to link to you, join your group, follow you, and become a friend, a fan, or whatever. This AJAX-powered module has the extra feature of being draggable and somewhat editable. This is a nice way to draw users in to the feeds and let them play with them and arrange the information to their taste. Sometimes, sorting and reorganizing makes you see connections and possibilities that you didn't see before. The next image may seem confusing, but it's a screenshot of the top div box being dragged and dropped: http://extensions.Joomla.org/extensions/394/details

AJAX: It's not just for your site's users

I've already mentioned how, when applied properly, AJAX can aid interface usability. Joomla! attempts to take advantage of this within its Administration panel by enhancing it with relevant information and compressing multiple-page forms into one single screen area. Here's a quick look at how Joomla! already uses AJAX to enhance its Administration panel forms. The following image shows how the image uploader uses a "lightbox" div layer effect so that you can keep track of where you are in the content editor. In the next image, you can see how Joomla! helps keep the administration area cleared up by using smooth-sliding accordion panels. This helps you see everything on one page and have access to just what you need, when you need it.
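None of these extensions require you to write JavaScript yourself, but it helps to see what they are doing under the hood. The following is a bare-bones sketch of the core technique described at the start of this article, not code from any particular extension; the URL and the element ID are hypothetical:

// Fetch a fragment from the server and update one region of the page,
// without reloading the whole page.
var xhr = new XMLHttpRequest(); // very old IE versions would need ActiveXObject instead
xhr.onreadystatechange = function () {
  if (xhr.readyState === 4 && xhr.status === 200) {
    // Replace just one region of the page with the server's response
    document.getElementById('comments').innerHTML = xhr.responseText;
  }
};
xhr.open('GET', 'index.php?option=com_example&task=latest', true); // hypothetical Joomla! URL
xhr.send(null);

The true passed to open() is what makes the request asynchronous: the rest of the page stays responsive while the response arrives, which is the behavior all of the extensions above rely on.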


Developing with Entity Metadata Wrappers

Packt
07 Aug 2013
8 min read
Introducing entity metadata wrappers

Entity metadata wrappers, or wrappers for brevity, are PHP wrapper classes that simplify code dealing with entities. They abstract structure so that a developer can write code in a generic way when accessing entities and their properties. Wrappers also implement PHP iterator interfaces, making it easy to loop through all properties of an entity or all values of a multiple-value property.

The magic of wrappers is in their use of the following three classes:

- EntityStructureWrapper
- EntityListWrapper
- EntityValueWrapper

The first has a subclass, EntityDrupalWrapper, and is the entity structure object that you'll deal with the most. Entity property values are either data, an array of values, or an array of entities. The EntityListWrapper class wraps an array of values or entities. As a result, generic code must inspect the value type before doing anything with a value, in order to prevent exceptions from being thrown.

Creating an entity metadata wrapper object

Let's take a look at two hypothetical entities that expose data from the following two database tables:

- ingredient
- recipe_ingredient

The ingredient table has two fields: iid and name. The recipe_ingredient table has four fields: riid, iid, qty, and qty_unit. The schema would be as follows:

Schema for ingredient and recipe_ingredient tables

To load and wrap an ingredient entity with an iid of 1, we would use the following line of code:

$wrapper = entity_metadata_wrapper('ingredient', 1);

To load and wrap a recipe_ingredient entity with an riid of 1, we would use this line of code:

$wrapper = entity_metadata_wrapper('recipe_ingredient', 1);

Now that we have a wrapper, we can access the standard entity properties.

Standard entity properties

The first argument of the entity_metadata_wrapper function is the entity type, and the second argument is the entity identifier, which is the value of the entity's identifying property. Note that it is not necessary to supply the bundle, as identifiers are properties of the entity type. When an entity is exposed to Drupal, the developer selects one of the database fields to be the entity's identifying property and another field to be the entity's label property. In our previous hypothetical example, a developer would declare iid as the identifying property and name as the label property of the ingredient entity. These two abstract properties, combined with the type property, are essential for making our code apply to multiple data structures that have different identifier fields.

Notice how the phrase "type property" does not format the word "property"? That is not a typographical error. It is indicating to you that type is in fact the name of the property storing the entity's type. The other two, identifying property and label property, are metadata in the entity declaration. The metadata is used by code to get the correct name for the properties on each entity in which the identifier and label are stored. To illustrate this, consider the following code snippet:

$info = entity_get_info($entity_type);
$key = isset($info['entity keys']['name']) ? $info['entity keys']['name'] : $info['entity keys']['id'];
return isset($entity->$key) ? $entity->$key : NULL;

Shown here is a snippet of the entity_id() function in the entity module. As you can see, the entity information is retrieved at the first highlight, then the identifying property name is retrieved from that information at the second highlight. That name is then used to retrieve the identifier from the entity. Note that it's possible to use a non-integer identifier, so remember to take that into account in any generic code.

The label property can either be a database field name or a hook. The developer exposing the entity can declare a hook that generates a label for their entity when the label is more complicated, such as what we would need for recipe_ingredient. For that, we would need to combine the qty and qty_unit properties with the name property of the referenced ingredient.

Entity introspection

In order to see the properties that an entity has, you can call the getPropertyInfo() method on the entity wrapper. This may save you time when debugging. You can have a look by sending the result to the devel module's dpm() function or to var_dump():

dpm($wrapper->getPropertyInfo());
var_dump($wrapper->getPropertyInfo());

Using an entity metadata wrapper

The standard operations for entities are CRUD: create, retrieve, update, and delete. Let's look at each of these operations in some example code. The code is part of the pde module's Drush file: sites/all/modules/pde/pde.drush.inc. Each CRUD operation is implemented in a Drush command, and the relevant code is given in the following subsections. Before each code example, there are two example command lines: the first shows how to execute the Drush command for the operation; the second is the help command.

Create

Creation of entities is implemented in the drush_pde_entity_create function.

Drush commands

The following examples show the usage of the entity-create (ec) Drush command and how to obtain help documentation for the command:

$ drush ec ingredient '{"name": "Salt, pickling"}'
$ drush help ec

Code snippet

$entity = entity_create($type, $data);
// Can call $entity->save() here or wrap to play and save
$wrapper = entity_metadata_wrapper($type, $entity);
$wrapper->save();

In the highlighted lines we create an entity, wrap it, and then save it. The first line uses entity_create, to which we pass the entity type and an associative array having property names as keys and their values. The function returns an object that has Entity as its base class. The save() method does all the hard work of storing our entity in the database. No more calls to db_insert are needed! Whether you use the save() method on the wrapper or on the Entity object really depends on what you need to do before and after the save() method call. For example, if you need to plug values into fields before you save the entity, it's handy to use a wrapper.

Retrieve

The retrieving (reading) of entities is implemented in the drush_pde_print_entity() function.

Drush commands

The following examples show the usage of the entity-read (er) Drush command and how to obtain help documentation for the command:

$ drush er ingredient 1
$ drush help er

Code snippet

$header = ' Entity (' . $wrapper->type();
$header .= ') - ID# ' . $wrapper->getIdentifier() . ':';
// equivalents: $wrapper->value()->entityType()
//              $wrapper->value()->identifier()
$rows = array();
foreach ($wrapper as $pkey => $property) {
  // $wrapper->$pkey === $property
  if (!($property instanceof EntityValueWrapper)) {
    $rows[$pkey] = $property->raw() . ' (' . $property->label() . ')';
  }
  else {
    $rows[$pkey] = $property->value();
  }
}

On the first highlighted line, we call the type() method of the wrapper, which returns the wrapped entity's type. The wrapped Entity object is returned by the value() method of the wrapper. Using wrappers gives us the wrapper benefits, and we can still use the entity object directly! The second highlighted line calls the getIdentifier() method of the wrapper. This is the way in which you retrieve the entity's ID without knowing the identifying property name. We'll discuss more about the identifying property of an entity in a moment.

Thanks to our wrapper object implementing the IteratorAggregate interface, we are able to use a foreach statement to iterate through all of the entity properties. Of course, it is also possible to access a single property by using its key. For example, to access the name property of our hypothetical ingredient entity, we would use $wrapper->name.

The last three highlights are the raw(), label(), and value() method calls. The distinction between these is very important, and is as follows:

- raw(): This returns the property's value straight from the database.
- label(): This returns the value of an entity's label property. For example, name.
- value(): This returns a property's wrapped data: either a value or another wrapper.

Finally, the highlighted raw() and value() methods retrieve the property values for us. These methods are interchangeable when simple entities are used, as there's no difference between the storage value and the property value. However, for complex properties such as dates, there is a difference. Therefore, as a rule of thumb, always use the value() method unless you absolutely need to retrieve the storage value. The example code is using the raw() method only so we can explore it, and all remaining examples in this book will stick to the rule of thumb. I promise!

Storage value: This is the value of a property in the underlying storage medium, for example, a database.
Property value: This is the value of a property at the entity level, after the value is converted from its storage value to something more pleasing. For example, date formatting of a Unix timestamp.

Multi-valued properties need a quick mention here. Reading these is quite straightforward, as they are accessible as an array. You can use array notation to get an element, and use a foreach to loop through them! The following is a hypothetical code snippet to illustrate this:

$output = 'First property: ';
$output .= $wrapper->property[0]->value();
foreach ($wrapper->property as $vwrapper) {
  $output .= $vwrapper->value();
}

Summary

This article delved into development using entity metadata wrappers for safe CRUD operations and entity introspection.
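As a closing addendum to this excerpt: it shows the create and retrieve operations, and update and delete round out CRUD with the same wrapper pattern. The snippet below is a hedged sketch for the hypothetical ingredient entity rather than code from the pde module; it assumes the Entity API module's documented set() and save() wrapper methods and its entity_delete() function:

// Update: wrap the entity, change a property through the wrapper, and save.
$wrapper = entity_metadata_wrapper('ingredient', 1);
$wrapper->name->set('Salt, sea'); // set() assigns a new value to the wrapped property
$wrapper->save();                 // writes the change back to storage

// Delete: entity_delete() takes the entity type and the identifier.
entity_delete('ingredient', $wrapper->getIdentifier());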


Agile Works Best in PHP Projects

Packt
30 Sep 2009
8 min read
What is agility

Agility includes effective, that is, rapid and adaptive, response to change. This requires effective communication among all of the stakeholders. Stakeholders are those who are going to benefit from the project in some form or another. The key stakeholders of the project include the developers and the users. Leaders of the customer organization, as well as leaders of the software development organization, are also among the stakeholders.

Rather than keeping the customer away, drawing the customer into the team helps the team to be more effective. There can be various types of customers: some are annoying, some tend to forget what they once said, and there are also those who will help steer the project in the right direction. The idea of drawing the customer into the team is not to let them micromanage the team; rather, it is for them to help the team understand the user requirements better. This needs to be explained to the customers up front if they seem to hinder the project rather than help it. After all, it is the team that consists of the technical experts, so the customer should understand this. Organizing a team in such a manner that it is in control of the work performed is also an important part of being able to adapt to change effectively. The team dynamics will help us respond to changes in a short period of time without any major friction.

Agile processes are based on three key assumptions. These assumptions are as follows:

1. It is difficult to predict in advance which requirements or customer priorities will change and which will not.
2. For many types of software, design and construction activities are interweaved. We can use construction to prove the design.
3. Analysis, design, and testing are not as predictable, from the planning perspective, as we software developers would like them to be.

To manage unpredictability, the agile process must be adapted incrementally by the project's team. Incremental adaptation requires customer feedback, based on the evaluation of delivered software increments or executable prototypes, over short time periods. The length of the time periods should be selected based on the nature of the user requirements. Ideally, the length of each delivered increment is restricted to two or three weeks. Agility yields rapid, incremental delivery of software. This makes sure that the client will get to see real, up-and-running software in quick time.

Characteristics of an agile process

An agile process is driven by customer demand. In other words, what is delivered is based on the users' descriptions of what is required. What the project's team builds is based on user-given scenarios. The agile process also recognizes that plans are short-lived. What is more important is to meet the users' requirements. Because the real world keeps on changing, plans have little meaning. Still, we cannot eliminate the need for planning. Constant planning will make sure that we are always sensitive to where we're going, compared to where we are.

Developing software iteratively, with a greater emphasis on construction activities, is another characteristic of the agile process. Construction activities make sure that we have something working all of the time. Activities such as requirements gathering and system modeling are not construction activities. Those activities, even though they're useful, do not deliver something tangible to the users. On the other hand, activities such as design, design prototyping, implementation, unit testing, and system testing are activities that deliver useful working software to the users. When our focus is on construction activities, it is a good practice to deliver the software in multiple software increments. This gives us more time to incorporate user feedback as we go deeper into implementing the product. This ensures that the team will deliver a high-quality product at the end of the project's life cycle, because the later increments of software are based on clearly understood requirements, as opposed to those which would have been delivered with partially understood requirements in the earlier increments. As we go deeper into the project's life cycle, we can adapt the project's team, as well as the designs and the PHP code that we implement, as changes occur.

Principles of agility

Our highest priority is to satisfy the customer through early and continuous delivery of useful and valuable software. To meet this requirement, we need to be able to embrace changes. We welcome changing requirements, even late in the development life cycle. Agile processes leverage change for the customer's competitive advantage. In order to attain and sustain competitive advantage over the competitors, the customer needs to be able to change, at will, the software system that he or she uses for the business. If the software is too rigid, there is no way that we can accommodate agility in the software that we develop. Therefore, not only the process, but also the product, needs to be equipped with agile characteristics.

In addition, the customer will need to have new features of the software within a short period of time. This is required to beat the competitors with a state-of-the-art software system that facilitates the latest business trends. Therefore, deliver working software as soon as possible; a couple of weeks to a couple of months is always welcome. For example, the customer might want to improve the reports that are generated at the presentation layer based on the business data. Moreover, some of this business data will not have been captured in the data model in the initial design. Still, as the software development team, we need to be able to upgrade the design and implement the new set of reports using PHP in a very short period of time. We cannot afford to take months to improve the reports. Also, our process should be such that we can accommodate this change and deliver it within a short period of time.

In order to make sure that we can understand these types of changes, we need the business people and the developers to work together daily throughout the project. When these two parties work together, it becomes very easy for them to understand each other.

The team members are the most important resource in a software project. The motivation and attitude of these team members can be considered the most important aspects that determine the success of the project. If we build the project around motivated individuals, give them the environment and support they need, and trust them to get the job done, the project will be a definite success. Obviously, the individual team members need to work with each other in order to make the project a success. The most efficient and effective method of conveying information to and within a development team is a face-to-face conversation. Even though various electronic forms of communication, such as instant messaging, emails, and forums, make effective communication possible, there is nothing comparable to face-to-face communication.

When it comes to evaluating progress, working software should be the primary measure of progress. We need to make sure that we clearly communicate this to all of the team members. They should always focus on making sure that the software they develop is in a working state at all times. It is not a bad idea to tie their performance reviews and evaluations to the effort they have put into making sure that whatever they deliver is working all of the time.

An agile process promotes sustainable development. This means that people are not overworked, and they are not under stress in any condition. The sponsors, managers, developers, and users should be able to maintain a constant pace of development, testing, evaluation, and evolution, indefinitely.

The team should pay continuous attention to technical excellence, because good design enhances agility. Technical reviews with peers and non-technical reviews with users will allow corrective action for any deviations from the expected result. Aggressively seeking technical excellence will make sure that the team stays open-minded and ready to take corrective action based on feedback.

With PHP, simplicity is paramount. Simplicity should be used as the art of maximizing the amount of work that is not done. In other words, it is essential that we prevent unwanted, wasteful work, as well as rework, at all costs. PHP is a very good vehicle to achieve this.

The team members that we have should be smart and capable. If we can get those members to reflect, at regular intervals, on how to become more effective, we can get the team to tune and adjust its behavior to enhance the process over time. The best architectures, requirements, and designs emerge from self-organizing teams. Therefore, for a high-quality product, the formation of the team can have a direct impact.


Debugging AJAX using Microsoft AJAX Library, Internet Explorer and Mozilla Firefox

Packt
22 Oct 2009
8 min read
AJAX Debugging Overview Unfortunately, today’s tools for client-side debugging and tracing aren’t as evolved as their server-side counterparts. For example, things such as capturing ongoing communication traffic between the client and the server, or client-side debugging, aren’t usually supported by today’s IDEs (Integrated Development Environments) such as Microsoft Visual Studio 2005. The next version of Visual Studio (code-named Orcas at the time of writing) promises a lot of improvements in this area: Improved IntelliSense technology with support for JavaScript code, which provides coding hints based on specially-formatted comments in the code Breakpoints in inline JavaScript code These are only the most important new coming features; there are others as well. For more information we suggest that you browse and keep an eye on Scott Guthrie’s blog at http://weblogs.asp.net/scottgu/, the JScript blog at http://blogs.msdn.com/jscript/, Bertrand Le Roy’s blog at http://weblogs.asp.net/bleroy/. Until this new edition of Visual Studio is released, we can rely on third-party tools that can do a very good job at helping us develop our AJAX applications. You’ll find a number of tools for Internet Explorer and Mozilla Firefox for this purpose. Debugging and Tracing with Microsoft AJAX Library The common practices for debugging JavaScript code are: Putting alert messages throughout the code to get notified about the values of the variables Logging tracing messages in a <div> element While the first option is straightforward, the second option offers a centralized place for storing different messages and could be considered more appropriate. Nevertheless both options can come in quite handy depending on the circumstances. Microsoft AJAX Library offers the Sys.Debug object that has a series of methods that you can use for debugging and tracing. The diagram of this class is presented in figure below. The Debug class As we can easily see in the diagram, Sys.Debug offers the most common features that we can find also in other languages: assert(), trace(), traceDump(), fail(), and clearTrace(). assert(), trace(), and fail() automatically send the messages to the debugging console of the browser. To see the messages in IE you need to have the Web Development Helper, and for Firefox you need the Firebug plugin. Both of these tools are presented later in this article. Internally assert() calls fail() if the expression evaluates to false. fail() simply logs the message passed in by assert to the debugging console. trace() offers an interesting feature beside logging to the debugging console: it offers the possibility to log the trace message in a <textarea> element with the ID TraceConsole. If such an element is found, trace() will log this message in this element too. The clearTrace() function simply clears the TraceConsole element, if found. The traceDump() function traces all the information about an object including its properties. Internally this function uses the trace() function so that we can have the information logged in the browser’s debugging console, and in the TraceConsole element (if it exists). MicrosoftAjax.debug.js You might have wondered why the Microsoft AJAX Library comes with both release and debug version of the JavaScript file. The major features of the debug version of the library files are:  The script is nicely formatted. The names of variables are not obfuscated. The script contains code comments. 
MicrosoftAjax.debug.js

You might have wondered why the Microsoft AJAX Library comes with both a release and a debug version of each JavaScript file. The major features of the debug version of the library files are:

- The script is nicely formatted.
- The names of variables are not obfuscated.
- The script contains code comments.
- Some of the functions have the optional summary data that will be used by Visual Studio "Orcas" for code auto-completion.
- The script outputs debugging-friendly information.
- Parameters are validated.

Once the development stage is finished, you can switch your application to the release version of the script (MicrosoftAjax.js), which is smaller and doesn't contain the debugging features presented above. Perhaps the most interesting features of the debug version are the last two: debugging-friendly information and parameter validation.

Anonymous Functions vs. Pseudo-Named Functions

We will explain these two concepts by taking a look at how different functions are defined in the debug and release versions of the library. The debug version of the library contains:

    function Sys$_Debug$assert(condition, message, displayCaller) {
      ...
    }
    Sys._Debug.prototype = {
      assert: Sys$_Debug$assert,
      ...
    }

and:

    String.format = function String$format(format, args) {...}

In the release version of the library we have:

    Sys._Debug.prototype = {
      assert: function(c, a, b) {
      ...
    }

and:

    String.format = function() {...}

In the release version, the methods are anonymous functions. This means that within a debugger stack trace the method name will read "JScript anonymous function", which is not very useful for debugging purposes.

(Figure: Call Stack showing anonymous functions)

The debug version of the library, however, uses the dollar-syntax to provide alias names for the functions: String$format for String.format and Sys$_Debug$assert for Sys._Debug.assert. When using the debug version of the file, the stack trace looks like this instead:

(Figure: Call Stack showing named functions)

We can still notice some anonymous functions, as they are the result of creating callback or delegate functions. The example shows two different ways of coding:

- In the debug version, the function is declared outside the prototype and then referenced in the prototype declaration.
- In the release version, an anonymous function is defined inline, at the point where it is assigned (outside or inside the prototype).

Parameters Validation

Another interesting feature, one that is not documented in the Microsoft AJAX Library documentation, is parameter validation. Type safety is one of the typical problems when it comes to using JavaScript. Although the dynamic typing features are really useful, sometimes we want to make sure that a parameter or object is of a certain type. To check the data type of an object, you can try converting the object to the desired type, or use the methods defined by Type. Fortunately, the Microsoft AJAX Library has a function that does the dirty work for us: Function._validateParams(). The class diagram in the figure below shows the _validateParameter() and _validateParams() methods of the Function class.

(Figure: The Function class)

The Function._validateParams() function, even though it is declared as private (by convention, using the leading underscore), can be used by our own scripts, as it is used throughout the debug version of the Microsoft AJAX Library. Here's an example of its use:

    function Sys$_Debug$fail(message) {
      /// <param name="message" type="String" mayBeNull="true"></param>
      var e = Function._validateParams(arguments,
              [ {name: "message", type: String, mayBeNull: true} ]);
      if (e) throw e;

This shows how the parameter for the fail() function is validated as a String.
We can also see the additional code comments inside the function, which are meant to be used by the IntelliSense feature in Visual Studio "Orcas" to check the correctness of the parameter types. While the first parameter of _validateParams() is the array of parameters to be checked, the second parameter is an array of JSON objects describing the validation rules for those parameters. Each JSON object is a dictionary of keys and values holding the validation rule for one parameter. The keys that can be used are described below.

- name: The name of the parameter.
- type: The allowed type for this parameter (for example: String, Array, Function, Sys.Component, and so on).
- mayBeNull: Boolean value indicating whether this parameter can be passed as null.
- domElement: Boolean value indicating whether this parameter is a DOM element.
- integer: Boolean value indicating whether this parameter should have an integer value.
- optional: Boolean value indicating whether this parameter is optional.
- parameterArray: Boolean value indicating whether this parameter should be an Array.
- elementType: The allowed type for each element of an array (type must be Array).
- elementMayBeNull: Boolean value indicating whether an array element can be null (type must be Array).
- elementDomElement: Boolean value indicating whether each element of an array is a DOM element (type must be Array).
- elementInteger: Boolean value indicating whether each element of an array should have an integer value (type must be Array).

The function returns an error message if the parameters don't validate, and the error is typically thrown like this:

    if (e) throw e;

This exception can be caught and the appropriate measures taken programmatically. If the exception is not caught, the error will pop up in the debugging console of the browser.
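As the text notes, nothing stops our own code from borrowing this mechanism. Here is a small sketch of our own (not from the article); the setRating function and its parameters are illustrative, and the example assumes the debug version of the library is loaded:

    function setRating(rating, comment) {
        // rating must be an integer Number; comment is an optional String
        var e = Function._validateParams(arguments, [
            { name: "rating", type: Number, integer: true },
            { name: "comment", type: String, mayBeNull: true, optional: true }
        ]);
        if (e) throw e;
        Sys.Debug.trace("rating accepted: " + rating);
    }

    setRating(4, "nice");   // passes validation
    setRating("four");      // validation fails; the returned Error is thrown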

Error Handling in PHP 5 CMS

Packt
17 Aug 2010
10 min read
(For more resources on PHP, see here.)

The problem

Errors will happen whether we like it or not. Ideally, the framework can help in their discovery, recording, and handling by:

- Trapping different kinds of errors
- Making a record of errors with sufficient detail to aid analysis
- Supporting a structure that mitigates the effect of errors

Discussion

There are three main kinds of errors that can arise. Many situations can crop up within PHP code that count as errors, such as an attempt to use a method on a variable that turns out not to be an object, or is an object but does not implement the specified method. The database will sometimes report errors, such as an attempt to retrieve information from a non-existent table, or to ask for a field that has not been defined for a table. And the logic of applications can often lead to situations that can only be described as errors. What resources do we have to handle these error situations?

PHP error handling

If nothing else is done, PHP has its own error handler. But developers are free to build their own handlers, so that is the first item on our to-do list. Consistent with our generally object-oriented approach, the natural thing to do is to build an error recording class, and then tell PHP that one of its methods is to be called whenever PHP detects an error. Once that is done, the error handler must deal with whatever PHP passes, as it has taken over full responsibility for error handling.

It has been a common practice to suppress the lowest levels of PHP error, such as notices and warnings, but this is not really a good idea. Even these relatively unimportant messages can reveal more serious problems, and it is not difficult to write code that avoids them, so that if a warning or notice does arise, it indicates something unexpected and therefore worth investigating. For example, the PHP foreach statement expects to work on something iterable and will generate a warning if it is given, say, a null value. But this is easily avoided, either by making sure that methods which return arrays always return an array (even if it is an array of zero items) rather than a null value, or by protecting the foreach with a preceding test. So it is safest to assume that a low-level error may be a symptom of a bigger problem, and have our error handler record every error that is passed to it. The database is the obvious place to put the error, and the handler receives enough information to make it possible to save only the latest occurrence of the same error, thus avoiding a bloated table of many more or less identical errors.

The other important mechanism offered by PHP, new to version 5, is the try, catch, and throw construct. A section of code can be put within a try and followed by one or more catch specifications that define what is to be done if a particular kind of problem arises. The problems are triggered by using throw. This is a valuable mechanism for errors that need to break the flow of program execution, and it is particularly helpful for dealing with database errors. It also has the advantage that try sections can be nested, so if a large area of code, such as an entire component, is covered by a try, it is still possible to write a try of narrower scope within that code.

In general, it is better to be cautious about giving information about errors to users. For one thing, ordinary users are simply irritated by technically oriented error messages that mean nothing to them.
Equally important is the issue of cracking, and the need to avoid displaying any weaknesses too clearly. It is bad enough that an error has occurred, without giving away details of what is going wrong. So a design assumption for error handling should be that the detail of errors is recorded for later analysis, but only a very simple indication of the presence of an error is given to the user, with a message that it has been noted for rectification.

Database errors

Errors in database operations are a particular problem for developers. Within the actual database handling code, it would be negligent to ignore the error indications that are available through the PHP interfaces to database systems. Yet within applications, it is hard to know what to do with such errors. SQL is very flexible, and a developer has no reason to expect any errors, so in the nature of things, any error that does arise is unexpected, and therefore difficult to handle. Furthermore, if there have to be several lines of error handling code every time the database is accessed, the overhead in code size and loss of clarity is considerable.

The best solution therefore seems to be to utilize the PHP try, catch, and throw structure. A special database error exception can be created by writing a suitable class, and the database handling code then deals with an error situation by "throwing" a new error with an exception of that class. The CMS framework can have a default try and catch in place around most of its operation, so that individual applications within the CMS are not obliged to take any action. But if an application developer wants to handle database errors, it is always possible to do so by coding a nested try and catch within the application.

One thing that must still be remembered by developers is that SQL easily allows some kinds of error situation to go unnoticed. For example, a DELETE or UPDATE SQL statement will not generate any error if nothing is deleted or updated. It is up to the developer to check how many rows, if any, were affected. This may not always be worth doing, but issues of this kind need to be kept in mind when considering how software will work. A good error handling framework makes it easier for a developer to choose between different checking options.
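A minimal sketch of this pattern might look like the following. The class and function names, and the use of the mysqli interface, are our own illustration, not the framework's actual code:

    <?php
    // Illustrative only: a special exception class for database errors
    class DatabaseException extends Exception
    {
        public $query;

        public function __construct($message, $query = '')
        {
            parent::__construct($message);
            $this->query = $query;
        }
    }

    // Database handling code throws rather than handling errors inline
    function fetchRows($link, $sql)
    {
        $result = mysqli_query($link, $sql);
        if ($result === false) {
            throw new DatabaseException(mysqli_error($link), $sql);
        }
        return $result;
    }

    // The framework's outer try/catch records full detail for analysis
    // and tells the user very little
    try {
        $rows = fetchRows($link, 'SELECT * FROM content');
    } catch (DatabaseException $e) {
        error_log($e->getMessage() . ' in query: ' . $e->query);
        echo 'An error occurred and has been noted for rectification.';
    }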
Application errors

Even without there being a PHP or database error, an application may decide that an error situation has arisen: for some reason, normal processing is impossible, and the user cannot be expected to solve the problem. There are two main choices that fit with the error handling framework we are considering.

One is to use the PHP trigger_error statement. It raises a user error and allows an error message to be specified. The error that is created will be trapped and passed to the error handler, since we have decided to have our own handler. This mechanism is best used for wholly unexpected errors that nonetheless could arise out of the logic of the application.

The other choice is to use a complete try, catch, and throw structure within the application. This is most useful when there are a number of fatal errors that can arise and are somewhat expected. The CMS extension installer uses this approach to deal with the various possible fatal errors that can occur during an attempt to install an extension; they are mostly related to errors in the XML packaging file, or to problems with accessing the file system. These are errors that need to be reported to help the user in resolving the problem, but they also involve abandoning the installation process. Whenever a situation of this kind arises, try, catch, and throw is a good way to deal with it.

Exploring PHP—Error handling

PHP provides quite a lot of control over error handling in its configuration. One question to be decided is whether to allow PHP to send any errors to the browser. This is determined by setting the value of display_errors in the php.ini configuration file. It is also possible to determine whether errors will be logged, by setting log_errors, and to decide where they should be logged, by setting error_log. (Often there are several copies of the php.ini file, and it is important to find the one that is actually used by the system.)

The case against sending errors to the browser is that it may give away information useful to crackers, or it may look bad to users. On the other hand, it makes development and bug fixing harder if errors have to be looked up in a log file rather than being visible on the screen. And if errors are not sent to the screen, then in the event of a fatal error the user will simply see a blank screen, which is not a good outcome either. Although the general advice is that errors should not be displayed on production systems, I am still rather inclined to show them. It seems to me that an error message, even a technical one that is meaningless to the user, is rather better than a totally blank screen. The information given is only a bare description of the error, with the name and line number of the file containing the error. It is unlikely to be of great use to a cracker, especially since the PHP script simply terminates on a fatal error, not leaving any clear opportunity for intrusion. You should make your own decision on which approach is preferable.

Without any special action in the PHP code, an error will be reported by PHP with details of where it occurred. Providing our own error handler by using the PHP set_error_handler function gives us far more flexibility to decide what information will be recorded and what will be shown to the user. A limitation is that PHP will still immediately terminate on a fatal error, such as attempting a method on something that is not an object. Termination also occurs whenever a parsing error is found, that is to say, when the PHP program code is badly formed. It is not possible to have control transferred to a user-provided error handler in these cases, which is an unfortunate limitation on what can be achieved.

However, an error handler can take advantage of knowledge of the framework to capture relevant information. Quite apart from special information on the framework, the handler can make use of the useful PHP debug_backtrace function to find out the route that was followed before the error was reached. This gives information about what called the current code; it can then be used again to find what called that, and so on, until no further trace information is available. A trace greatly increases the value of error reporting, as it makes it much easier to find out the route that led to the error. When an error is trapped using PHP's try and catch, it is best to trace the route to the error at the point the exception is thrown; otherwise, the trace will only show the chain of events from the exception to the error handler.
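Pulling these pieces together, a skeletal handler might look like this. The ErrorRecorder class is our own illustration of the approach described above, not the book's actual code:

    <?php
    class ErrorRecorder
    {
        public function handle($errno, $errstr, $errfile, $errline)
        {
            // Record every error passed to us, however minor, together
            // with the route that led here
            $route = array();
            foreach (debug_backtrace() as $step) {
                if (isset($step['file'])) {
                    $route[] = $step['file'] . ':' . $step['line'];
                }
            }
            error_log(sprintf('[%d] %s at %s:%d via %s',
                $errno, $errstr, $errfile, $errline,
                implode(' < ', $route)));
            // Returning true tells PHP the error has been handled
            return true;
        }
    }

    $recorder = new ErrorRecorder();
    set_error_handler(array($recorder, 'handle'));

    // Anything that raises a PHP error is now routed to our handler,
    // including errors we raise ourselves:
    trigger_error('Unexpected application condition', E_USER_WARNING);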
There are a number of other PHP options that can further refine how errors are handled, but those just described form the primary tool box that we need for building a solid framework.


EJB 3.1: Working with Interceptors

Packt
06 Jul 2011
3 min read
EJB 3.1 Cookbook: build real-world EJB solutions with a collection of simple but incredibly effective recipes.

The recipes in this article are based largely around a conference registration application, as developed in the first recipe of the previous article on Introduction to Interceptors. It will be necessary to create this application before the other recipes in this article can be demonstrated.

Using interceptors to enforce security

While security is an important aspect of many applications, the use of programmatic security can clutter up business logic. The use of declarative annotations has come a long way in making security easier to use and less intrusive. However, there are still times when programmatic security is necessary. When it is, the use of interceptors can help remove the security code from the business logic.

Getting ready

The process for using an interceptor to enforce security involves:

1. Configuring and enabling security for the application server
2. Adding a @DeclareRoles annotation to the target class and the interceptor class
3. Creating a security interceptor

How to do it...

1. Configure the application to handle security as detailed in the Configuring the server to handle security recipe.
2. Add @DeclareRoles("employee") to the RegistrationManager class.
3. Add a SecurityInterceptor class to the packt package. Inject a SessionContext object into the class; we will use this object to perform programmatic security. Also use the @DeclareRoles annotation.
4. Next, add an interceptor method, verifyAccess, to the class. Use the SessionContext object and its isCallerInRole method to determine whether the user is in the "employee" role. If so, invoke the proceed method and display a message to that effect; otherwise, throw an EJBAccessException.

    @DeclareRoles("employee")
    public class SecurityInterceptor {
        @Resource
        private SessionContext sessionContext;

        @AroundInvoke
        public Object verifyAccess(InvocationContext context) throws Exception {
            System.out.println("SecurityInterceptor: Invoking method: " +
                context.getMethod().getName());
            if (sessionContext.isCallerInRole("employee")) {
                Object result = context.proceed();
                System.out.println("SecurityInterceptor: Returned from method: " +
                    context.getMethod().getName());
                return result;
            } else {
                throw new EJBAccessException();
            }
        }
    }
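One detail the excerpt leaves implicit is how SecurityInterceptor gets bound to the target bean. A common way to do this in EJB 3.1 is the @Interceptors annotation on the target class; the sketch below is our assumption of how RegistrationManager might be annotated, not code from the recipe:

    package packt;

    import javax.annotation.security.DeclareRoles;
    import javax.ejb.Stateless;
    import javax.interceptor.Interceptors;

    @Stateless
    @DeclareRoles("employee")
    @Interceptors(SecurityInterceptor.class)
    public class RegistrationManager {
        public void register(String attendee) {
            // Business logic; verifyAccess wraps this call at runtime
        }
    }

Interceptors can also be bound in the ejb-jar.xml deployment descriptor, or registered as default interceptors for an entire module. With the binding in place, we can run the recipe.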
5. Execute the application. The user should be prompted for a username and password. Provide a user in the employee role, and the application should execute to completion. Depending on the interceptors in place, you will see console output similar to the following:

    INFO: Default Interceptor: Invoking method: register
    INFO: SimpleInterceptor entered: register
    INFO: SecurityInterceptor: Invoking method: register
    INFO: InternalMethod: Invoking method: register
    INFO: register
    INFO: Default Interceptor: Invoking method: create
    INFO: Default Interceptor: Returned from method: create
    INFO: InternalMethod: Returned from method: register
    INFO: SecurityInterceptor: Returned from method: register
    INFO: SimpleInterceptor exited: register
    INFO: Default Interceptor: Returned from method: register

How it works...

The @DeclareRoles annotation was used to specify that users in the employee role are associated with the class. The isCallerInRole method checked whether the current user is in the employee role. When the target method is called, if the user is authorized, then the InvocationContext's proceed method is executed. If the user is not authorized, the target method is not invoked and an exception is thrown.

See also: EJB 3.1: Controlling Security Programmatically Using JAAS


Looking into Apache Axis2

Packt
09 Oct 2009
11 min read
(For more resources on Axis2, see here.)

Axis2 Architecture

Axis2 is built upon a modular architecture that consists of core modules and non-core modules. The core engine is said to be a pure SOAP processing engine (there is no JAX-RPC concept burnt into the core). Every message coming into the system has to be transformed into a SOAP message before it is handed over to the core engine. An incoming message can be either a SOAP message or a non-SOAP message (for example, REST or JSON), but at the transport level it will be converted into a SOAP message.

When Axis2 was designed, the following key rules were incorporated into the architecture. These rules were mainly applied to achieve a highly flexible and extensible SOAP processing engine:

- Separation of logic and state, to provide a stateless processing mechanism (this is because Web Services are stateless).
- A single information model, in order to enable the system to suspend and resume.
- Ability to extend support to newer Web Service specifications, with minimal changes made to the core architecture.

The figure below shows all the key components in the Axis2 architecture (core components as well as non-core components).

Core Modules

- XML Processing Model: Managing or processing the SOAP message is the most difficult part of the execution of a message. The efficiency of message processing is the single most important factor that decides the performance of the entire system. Axis1 uses DOM as its message representation mechanism. However, Axis2 introduced a fresh XML InfoSet-based representation for SOAP messages, known as AXIOM (AXIs Object Model). AXIOM encapsulates the complexities of efficient XML processing within the implementation.
- SOAP Processing Model: This model involves the processing of an incoming SOAP message. The model defines the different stages (phases) that the execution will walk through. The user can then extend the processing model in specific places.
- Information Model: This keeps both static and dynamic states and has the logic to process them. The information model consists of two hierarchies to keep static and run-time information separate. Service life cycle and service session management are two objectives of the information model.
- Deployment Model: The deployment model allows the user to easily deploy services, configure transports, and extend the SOAP processing model. It also introduces newer deployment mechanisms in order to handle hot deployment, hot updates, and J2EE-style deployment.
- Client API: This provides a convenient API for users to interact with Web Services using Axis2. The API consists of two sub-APIs, for average and advanced users. The Axis2 default implementation supports all eight MEPs (Message Exchange Patterns) defined in WSDL 2.0. The API also allows easy extension to support custom MEPs.
- Transports: Axis2 defines a transport framework that allows the user to use and expose the same service over multiple transports. The transports fit into specific places in the SOAP processing model. The implementation provides a few common transports by default (HTTP, SMTP, JMS, TCP, and so on), but the user can write or plug in custom transports if needed.

XML Processing Model

Axis2 is built on a completely new architecture as compared to Axis 1.x. One of the key reasons for introducing Axis2 was to have a better and more efficient XML processing model. Axis 1.x used DOM as its XML representation mechanism, which required the complete object hierarchy (corresponding to the incoming message) to be kept in memory. This is not a problem for a message of small size, but it becomes an issue for a message of large size. To overcome this problem, Axis2 introduced a new XML representation: AXIOM (AXIs Object Model) forms the basis of the XML representation for every SOAP-based message in Axis2.

The advantage of AXIOM over other XML InfoSet representations is that it is based on the PULL parser technique, whereas most others are based on the PUSH parser technique. The main advantage of PULL over PUSH is that in the PULL technique the invoker has full control over the parser and can request the next event and act upon it, whereas in the PUSH technique the parser has the control and delegates most of the functionality to handlers that respond to the events fired during its processing of the document. Since AXIOM is based on the PULL parser technique, it has on-demand-building capability, whereby it builds an object model only if it is asked to do so. If required, one can directly access the underlying PULL parser from AXIOM and use that rather than build an OM (Object Model).
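To give a feel for this on-demand building, here is a small self-contained sketch of our own (not from the article). It uses the StAXOMBuilder class found in the Axiom 1.2.x releases of that era; message.xml is a placeholder file name:

    import java.io.FileInputStream;

    import org.apache.axiom.om.OMElement;
    import org.apache.axiom.om.impl.builder.StAXOMBuilder;

    public class AxiomDemo {
        public static void main(String[] args) throws Exception {
            // The builder wraps a StAX pull parser; no tree is built yet
            StAXOMBuilder builder =
                new StAXOMBuilder(new FileInputStream("message.xml"));

            // Object model nodes are materialized only as they are navigated to
            OMElement root = builder.getDocumentElement();
            System.out.println(root.getLocalName());
        }
    }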
SOAP Processing Model

Sending and receiving SOAP messages can be considered two of the key jobs of the SOAP processing engine. The architecture of Axis2 provides two pipes ("flows") to perform these two basic actions. The AxisEngine, or driver of Axis2, defines two methods, send() and receive(), to implement these two pipes. The two pipes are named InFlow and OutFlow. The complex Message Exchange Patterns (MEPs) are constructed by combining these two types of pipes. It should be noted that, in addition to these two pipes, there are two other pipes as well, which handle incoming Fault messages and send Fault messages.

Extensibility of the SOAP processing model is provided through handlers. When a SOAP message is being processed, the registered handlers are executed. Handlers can be registered in global, service, or operation scope, and the final handler chain is calculated by combining the handlers from all the scopes. The handlers act as interceptors: they process parts of the SOAP message and provide quality-of-service features (good examples of quality of service are security and reliability). Usually handlers work on the SOAP headers, but they may access or change the SOAP body as well.

The concept of a flow is very simple: it constitutes a series of phases, where a phase is a collection of handlers. Depending on the MEP for a given method invocation, the number of flows associated with it may vary. In the case of an in-only MEP, the corresponding method invocation has only one pipe; the message will only go through the in pipe (InFlow). In the case of an in-out MEP, the message goes through two pipes: the in pipe (InFlow) and the out pipe (OutFlow).

When a SOAP message is being sent, an OutFlow begins. The OutFlow invokes the handlers and ends with a Transport Sender that sends the SOAP message to the target endpoint. The SOAP message is received by a Transport Receiver at the target endpoint, which reads the SOAP message and starts the InFlow. The InFlow consists of handlers and ends with the Message Receiver, which handles the actual business logic invocation.
A phase is a logical collection of one or more handlers, and sometimes a phase itself acts as a handler. Axis2 introduced the phase concept as an easy way of extending core functionality. In Axis 1.x, we needed to change the global configuration files to add a handler to a handler chain; Axis2 makes this easier with the concepts of phases and phase rules. Phase rules specify how a given set of handlers inside a particular phase is ordered. The figure below illustrates a flow and its phases.

If the message has gone through the execution chain without any problem, the engine hands the message over to the message receiver in order to perform the business logic invocation. After this, it is up to the message receiver to invoke the service and send the response, if necessary. The figure below shows how the Message Receiver fits into the execution chain.

The two pipes do not differentiate between the server and the client. The SOAP processing model handles the complexity and provides two abstract pipes to the user. The different areas, or stages, of the pipes are named "phases" in Axis2. A handler always runs inside a phase, and the phase provides a mechanism to specify the ordering of handlers. Both pipes have built-in phases, and both define areas for user phases, which can be defined by the user as well.

Information Model

As shown in the figure below, the information model consists of two hierarchies: the Description hierarchy and the Context hierarchy. The Description hierarchy represents static data that may come from different deployment descriptors. If hot deployment is turned off, the description hierarchy is not likely to change. If hot deployment is turned on, we can deploy a service while the system is up and running, in which case the description hierarchy is updated with the corresponding data of the service. The Context hierarchy keeps run-time data. Unlike the description hierarchy, the context hierarchy keeps changing once the server starts receiving messages.

These two hierarchies create a model that provides the ability to search for key-value pairs. When a value is searched for at a given level, the search moves up the hierarchy until a match is found. In the resulting model, the lower levels override the values present in the upper levels. For example, when a value is searched for in the Message Context and is not found, it is then searched for in the Operation Context, and so on. The search is first done up the Context hierarchy, and if the starting point is a Context, the Description hierarchy is searched as well. This allows the user to declare and override values, resulting in a very flexible configuration model. This flexibility could be the Achilles' heel of the system, though, as the search is expensive, especially for something that does not exist.

Deployment Model

The previous versions of Axis failed to address the usability factor involved in the deployment of a Web Service. This was because Axis 1.x was released mainly to prove the Web Service concepts. In Axis 1.x, the user had to manually invoke the admin client, update the server classpath, and then restart the server in order to apply the changes. This burdensome deployment model was a definite barrier for beginners. Axis2 is engineered to overcome this drawback and to provide a flexible, user-friendly, easily configurable deployment model.
Axis2 deployment introduced a J2EE-like deployment mechanism, wherein the developer can bundle all the class files, library files, resource files, and configuration files together as an archive file, and drop it into a specified location in the file system.

The concepts of hot deployment and hot update are not new technical paradigms, particularly for the Web Service platform, but for Apache Axis they are new features. Therefore, when Axis2 was developed, hot deployment features were added to the feature list:

- Hot deployment: This refers to the capability to deploy services while the system is up and running. In a real-time system or a business environment, the availability of the system is very important; if the processing of the system is interrupted, even for a moment, the loss might be substantial and may affect the viability of the business. Meanwhile, it may be necessary to add a new service to the system, and being able to do so without shutting down the servers is a great advantage. Axis2 addresses this issue and provides Web Service hot deployment, whereby we need not shut down the system to deploy a new Web Service. All that needs to be done is to drop the required Web Service archive into the services directory in the repository; the deployment model will automatically deploy the service and make it available.
- Hot update: This refers to the ability to make changes to an existing Web Service without shutting down the system. This is an essential feature, best suited to a testing environment. It is not advisable to use hot updates in a real-time system, because a hot update could lead the system into an unknown state. Additionally, there is the possibility of losing the existing service data of that service. To prevent this, Axis2 ships with the hot update parameter set to FALSE by default.
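For context, the archive file mentioned above is a .aar file whose deployment descriptor is META-INF/services.xml. The sketch below is a minimal illustrative example of such a descriptor; the service and class names are placeholders, not taken from the article:

    <!-- META-INF/services.xml inside HelloService.aar; names are placeholders -->
    <service name="HelloService">
        <parameter name="ServiceClass">demo.HelloService</parameter>
        <operation name="sayHello">
            <messageReceiver
                class="org.apache.axis2.rpc.receivers.RPCMessageReceiver"/>
        </operation>
    </service>

Dropping HelloService.aar into the repository's services directory then hot-deploys the service as described above.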


Managing orders in Joomla! and VirtueMart

Packt
26 Oct 2009
6 min read
Our shop is now ready for customers. They can register with the shop and get permission to browse products and place orders. After building the catalog, one of the big tasks of the shop administrator is to manage the orders. Managing an order includes viewing the order, ensuring that payment is made, shipping the product to the customer's ship-to address, and setting the appropriate status of the order. Whenever the status of the order changes, the shop administrator can notify the customer about it. When the product is delivered, the status should be changed again. Sometimes you also need to change the status when the customer is refunded for some reason.

Viewing the orders

To view the list of orders placed, click on Orders | List Orders to get the Order List screen. The Order List screen shows all of the orders placed so far in the store, latest order first. It shows the order number, name of the customer, date of order, date last modified, status of the order, and total price of the order. As there may be hundreds of orders per day, you need to filter the orders to see which ones need urgent attention. You can filter the orders by their status; for example, clicking on the Pending link shows all of the orders that are pending. Viewing the list of pending orders, you may enquire why those are pending. Some may be pending because payment has not been made, or because you are waiting for an offline payment. For example, when the Money Order payment method is used, the order needs to remain Pending until you receive the money order. Once you get the payment, you can change the order status to Confirmed.

Viewing an order's details

The Order List screen gives an overview of each order, but sometimes it is necessary to view the details of an order. To do so, click on the order number link under the Order Number column in the Order List screen. This shows the details of the order.

In the Order Details page, you first see the order number, order date, current order status, and the IP address from which the order was placed. There is a box section from which you can update the order's status and view the order's history. Then come the Bill To and Ship To addresses. After these, you get the list of ordered items and their prices; you can also add a new product to the order from this section. This section also shows taxes added, and shipping and handling fees.

After the product items, another section shows the shipping information and the payment method used. In the Shipping Information section, you get the carrier used, shipping mode applied, shipping price, shipping and handling fees, and shipping taxes. The payment section shows which method was used and when the payment was made; it shows the payment history for this order, and also how much of a coupon discount was applied. As the administrator of the shop, you can change the values in any field where an update icon is displayed.

At the bottom, you see the customer's comment. Customers may provide comments while placing the order, and these can be very valuable for the shop owner; for example, a customer who wants the product to be delivered in a special way can say so in this comment.

For printing purchase orders, you may use a printer-friendly view. To see the purchase order in a printer-friendly view, click on the Print View link at the top.
This formats the purchase order as a plain document and also shows a printer icon; click on that icon to print the purchase order.

Understanding an order's status

How is the order management workflow maintained? Mainly, it is based on the order status. After an order is received from the customer, it passes through several statuses over its life cycle. These order status types are defined in advance; at the very outset of starting the shop, the workflow should be clearly defined.

Managing order status types

You can view the existing order status types from Orders | List Order Status Types, which shows the List Order Status Types screen. There are five status types by default. We may add another status type, Delivered. To add a new order status type, click on the New icon in the toolbar, or on Orders | Add Order Status Type; both bring up the Order Status screen.

In the Order Status screen, first type the Order Status Code; for the Delivered status, assign D as the code. Then type the name of the status type in the Order Status Name text box. In the Description text area, you may provide a brief description of the order status type. At the end, specify a list order value. Then click on the Save icon in the toolbar. This adds the new Delivered order status type. You can create as many order status types as you need.

Changing an order's status

As indicated earlier, while fulfilling an order, the shop owner needs to update the status of the order and communicate that status change to the customer. You can change an order's status from two places. In the Order List screen, you can see the orders and also change their status: select an order status type from the drop-down list in the Status column, then click on the Update Status button to save the change. If you want to notify the customer about this status change, select the Notify Customer? checkbox.

One disadvantage of updating the order status from the Order List screen is that you cannot add a note while changing the status. The other way of updating the order status provides this advantage: click on the order number link in the Order List screen to open the order details page. On the right side, you will see a box from which you can update the order status. From the Order Status Change tab, you can change the status, write a comment, notify the customer about the status change, and also include the comment in that notification.

Snap – The Code Snippet Sharing Application

Packt
24 Sep 2015
8 min read
In this article by Joel Perras, author of the book Flask Blueprints, we will build our first fully functional, database-backed application. This application, with the codename Snap, will allow users to create an account with a username and password. With this account, users will be allowed to add, update, and delete so-called semi-private snaps of text (with a focus on lines of code) that can be shared with others.

For this, you should be familiar with at least one of the following relational database systems: PostgreSQL, MySQL, or SQLite. Additionally, some knowledge of the SQLAlchemy Python library, which acts as an abstraction layer and object-relational mapper for these (and several other) databases, will be an asset. If you are not well versed in the usage of SQLAlchemy, fear not: we will have a gentle introduction to the library that will bring new developers up to speed and serve as a refresher for more experienced folks.

The SQLite database will be our relational database of choice due to its very simple installation and operation. The other database systems that we listed are all client/server-based, with a multitude of configuration options that may need adjustment depending on the system they are installed on, while SQLite's default mode of operation is self-contained, serverless, and zero-configuration. That said, any major relational database supported by SQLAlchemy as a first-class citizen will do.

(For more resources related to this topic, see here.)

Diving In

To make sure things start correctly, let's create a folder where this project will exist and a virtual environment to encapsulate any dependencies that we will require:

    $ mkdir -p ~/src/snap && cd ~/src/snap
    $ mkvirtualenv snap -i flask

This will create a folder called snap at the given path and take us to this newly created folder. It will then create the snap virtual environment and install Flask in this environment. Remember that the mkvirtualenv tool creates the virtual environment (which will be the default set of locations for packages installed from pip), but it does not create the project folder for you; this is why we run a command to create the project folder first and then create the virtual environment. Virtual environments, by virtue of the $PATH manipulation performed once they are activated, are completely independent of where in your filesystem your project files exist.

We will then create our basic blueprint-based project layout with an empty users blueprint:

    application
    ├── __init__.py
    ├── run.py
    └── users
        ├── __init__.py
        ├── models.py
        └── views.py

Flask-SQLAlchemy

Once this has been established, we need to install the next important set of dependencies: SQLAlchemy, and the Flask extension that makes interacting with this library a bit more Flask-like, Flask-SQLAlchemy:

    $ pip install flask-sqlalchemy

This will install the Flask extension to SQLAlchemy, along with the base distribution of the latter and several other necessary dependencies, in case they are not already present.

Now, if we were using a relational database system other than SQLite, this is the point where we would create the database entity in, say, PostgreSQL, along with the proper users and permissions, so that our application can create tables and modify their contents. SQLite, however, does not require any of that. Instead, it assumes that any user with access to the filesystem location where the database is stored should also have permission to modify the contents of the database.
For the sake of completeness, however, here is how one would create an empty database in the current folder of your filesystem:

    $ sqlite3 snap.db
    # hit control-D to escape out of the interactive SQL console if necessary.

As mentioned previously, we will be using SQLite as the database for our example applications, and the directions given assume that SQLite is being used; the exact name of the binary may differ on your system. You can substitute the equivalent commands to create and administer the database of your choice if anything other than SQLite is being used.

Now we can begin the basic configuration of the Flask-SQLAlchemy extension.

Configuring Flask-SQLAlchemy

First, we must register the Flask-SQLAlchemy extension with the application object in application/__init__.py:

    from flask import Flask
    from flask.ext.sqlalchemy import SQLAlchemy

    app = Flask(__name__)
    app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///../snap.db'
    db = SQLAlchemy(app)

The value of app.config['SQLALCHEMY_DATABASE_URI'] is the escaped relative path to the snap.db SQLite database that we created previously. Once this simple configuration is in place, we will be able to create the SQLite database automatically via the db.create_all() method, which can be invoked in an interactive Python shell:

    $ python
    >>> from application import db
    >>> db.create_all()

This is an idempotent operation: nothing changes if the database already exists, but if the local database file does not exist, it is created. This also applies to adding new data models: running db.create_all() will add their definitions to the database, ensuring that the relevant tables have been created and are accessible. It does not, however, take into account the modification of an existing model/table definition that already exists in the database. For this, you will need to use the relevant tools (for example, the sqlite CLI) to modify the corresponding table definitions to match those that have been updated in your models, or use a more general schema tracking and updating tool, such as Alembic, to do the majority of the heavy lifting for you.
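To give a concrete picture of what such a data model looks like, here is a minimal sketch; the User class and its columns are our own illustrative assumptions, not the book's actual Snap models:

    # for example in application/users/models.py, using the db object
    # configured above
    from application import db

    class User(db.Model):
        id = db.Column(db.Integer, primary_key=True)
        username = db.Column(db.String(40), unique=True, nullable=False)
        password = db.Column(db.String(64), nullable=False)

        def __repr__(self):
            return '<User {0}>'.format(self.username)

After importing a module like this, running db.create_all() would create the corresponding table.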
SQLAlchemy basics

SQLAlchemy is, first and foremost, a toolkit for interacting with relational databases in Python. While it provides an incredible number of features (including SQL connection handling and pooling for various database engines, the ability to handle custom datatypes, and a comprehensive SQL expression API), the one feature that most developers are familiar with is the Object Relational Mapper. This mapper allows a developer to connect a Python object definition to a SQL table in the database of their choice, giving them the flexibility to control the domain models in their own application with only minimal coupling to the database product and the engine-specific SQLisms that each of them exposes.

While debating the usefulness (or lack thereof) of an object relational mapper is outside the scope of this article, for those who are unfamiliar with SQLAlchemy we will provide a list of the benefits that using this tool brings to the table:

- Your domain models are written to interface with one of the most well-respected, tested, and deployed Python packages ever created: SQLAlchemy.
- Onboarding new developers to a project becomes an order of magnitude easier, due to the extensive documentation, tutorials, books, and articles that have been written about using SQLAlchemy.
- Queries written using the SQLAlchemy expression language are validated at import time, instead of having to execute each query string against the database to determine whether a syntax error is present. The expression language is in Python and can thus be validated with your usual set of tools and IDE.
- Thanks to the implementation of design patterns such as the Unit of Work, the Identity Map, and various lazy-loading features, the developer can often be saved from performing more database/network round trips than necessary. Considering that the majority of a request/response cycle in a typical web application can easily be attributed to network latency of one form or another, minimizing the number of database queries in a typical response is a net performance win on many fronts.
- While many successful, performant applications can be built entirely on the ORM, SQLAlchemy does not force it upon you. If, for some reason, it is preferable to write raw SQL query strings or to use the SQLAlchemy expression language directly, then you can do that and still benefit from the connection pooling and the Python DBAPI abstraction functionality that is the core of SQLAlchemy itself.

Now that we've given you several reasons why you should be using this database query and domain data abstraction layer, let's look at how we would go about defining a basic data model, along the lines sketched in the example above.

Summary

Having gone through this article, we have seen several facets of how Flask may be augmented with the use of extensions. While Flask itself is relatively spartan, the ecology of extensions available makes it possible to build a fully fledged user-authenticated application quickly and relatively painlessly.

Resources for Article:

Further resources on this subject:

- Creating Controllers with Blueprints
- Deployment and Post Deployment
- Man, Do I Like Templates!


Designing the Facebook Clone and Creating Colony using Ruby

Packt
25 Aug 2010
14 min read
(For more resources on Ruby, see here.)

Main features

Online social networking services are complex applications with a large number of features. However, these features can be roughly grouped into a few common categories:

- User
- Community
- Content sharing
- Developer

User features are features that relate directly to the user: for example, the ability to create and share a profile, and the ability to share status and activities. Community features are features that connect users with each other; an example is the friends list, which shows the number of friends a user has connected with in the social network. Content sharing features are quite easy to understand: these allow a user to share self-created content with other users, for example photo sharing or blogging. Social bookmarking features allow users to share content they have discovered with other users, such as sharing links and tagging items with labels. Finally, developer features allow external developers to access the services and data in the social network.

While the social networking services out in the market often try to differentiate themselves from each other to gain an edge over their competition, in this article we will be building a stereotypical online social networking service. We will choose only a few of the more common features in each category, except for developer features, which for practical reasons will not be implemented here. Let's look at the features we will implement in Colony, by category.

User

User features are features that relate directly to users:

- Users' activities on the system are broadcast to friends as an activity feed.
- Users can post brief status updates to all users.
- Users can add or remove friends by inviting them to link up. Friendship in both directions must be approved by the recipient of the invitation.

Community

Community features connect users with each other:

- Users can post to a wall belonging to a user, group, or event. A wall is a place where any user can post, and which can be viewed by all users.
- Users can send private messages to other users.
- Users can create events that represent actual events in the real world. Events pull together users and content, and provide basic event management capabilities, such as RSVP.
- Users can form and join groups. Groups represent a grouping of like-minded people, pull together users and content, and are permanent.
- Users can comment on various types of shared and created content, including photos, pages, statuses, and activities. Comments are textual only.
- Users can indicate that they like most types of shared and created content, including photos, pages, statuses, and activities.

Content sharing

Content sharing features allow users to share content, either self-generated or discovered, with other users:

- Users can create albums and upload photos to them.
- Users can create standalone pages belonging to themselves, or attached pages belonging to events and groups.

Online social networking services grew from existing communications and community services, often evolving and incorporating features and capabilities from those services.

Designing the clone

Now that we have the list of features that we want to implement for Colony, let's start designing the clone.
Authentication, access control, and user management

Authentication is done through RPX, which means we delegate authentication to a third-party provider such as Google, Yahoo!, or Facebook. Access control, however, is still done by Colony, while user management functions are shared between the authentication provider and Colony.

Access control in Colony is done on all data, which prevents users from accessing data that they are not allowed to see. This is done through control of the user account, to which all other data for a user belongs. In most cases a user is not allowed access to any data that does not belong to him or her (that is, data not shared with everyone). In some cases, though, access is implicit; for example, an event is accessible for viewing only if you are the organizer of the event. As before, user management is a shared responsibility between the third-party provider and the clone: the provider handles password management and general security, while Colony stores a simple set of profile information for the user.

Status updates

Allowing you to send status updates about yourself is a major feature of all social networking services. This feature allows the user, a member of the social networking service, to announce and define his presence, as well as his state of mind, to his network. In Colony, only the user's friends can read the statuses. Remember that a user's friend is someone validated and approved by the user, and not just anyone off the street who happens to follow that user. Status updates belong to a single user but are viewable by all friends as part of the user's activity feed.

User activity feeds and news feeds

Activity feeds, activity streams, or life streams are continuous streams of information on a user's activities. Activity feeds go beyond just status updates; they are a digital trace of a user's activity in the social network, which includes his status updates. This covers public actions like posting to a wall, uploading photos, and commenting on content, but not private actions like sending messages to individuals. The user's activity feed is visible to all users who visit his user page.

News feeds are an aggregate of the activity feeds of the user and his network; an activity feed is thus a subset of a news feed. News feeds give an insight into the user's activities as well as the activities of his network. In the design of our clone, the user's activity feed is what you see when you visit a user page, for example http://colony.saush.com/user/sausheong, while the news feed is what you see when you first log in to Colony, that is, the landing page. This design is quite common to many social networking services.

Friends list and inviting users to join

One of the reasons why social networking services are so wildly successful is the ability to reach out to old friends or colleagues, and also to see friends of your friends. To clone this feature, we provide a standard friends list and an option to search for friends. Searching for friends allows you to find other users in the system by their nicknames or their full names. By viewing a user's page, we are able to see his friends, and therefore see his friends' user pages as well.

Another critical feature in social networking services is the ability to invite friends and spread the word around. In Colony we tap the capabilities of Facebook and invite friends who are already on Facebook to use Colony.
While there is a certain irony in using another social networking service to implement a feature of your own social networking service, it makes a lot of practical sense, as Facebook is already one of the most popular social networking services on the planet. To implement this, we will use Facebook Connect. However, this means that if the user wants to reach out and get others to join him in Colony, he will need to log into Facebook to do so. As with most features, the implementation can be done in many ways, and Facebook Connect (or any other type of third-party integration, for that matter) is only one of them. Another popular strategy is to use web mail clients such as Yahoo! Mail or Gmail and, with the permission of the user, extract user contacts. The e-mails extracted this way can be used as a mailing list for reaching potential users. This is, in fact, a strategy used by Facebook.

Posting to the wall

A wall is a place where users can post messages. Walls are meant to be publicly read by all visitors; in a way, a wall is like a virtual cork bulletin board on which users can pin messages to be read by anyone. Wall posts are meant to be short public messages (the Messages feature can be used to send private messages). A wall can belong to a user, an event, or a group, and each of these owning entities can have only one wall. This means any post sent to a user, event, or group is automatically placed on its one and only wall. A message on a wall is called a post, which in Colony is just a text message (Facebook's original implementation was text-only but was later extended to other types of media). Posts can be remarked on and are not threaded. Posts are placed on the wall in reverse chronological order, so that the latest post remains at the top of the wall.

Sending messages

The messaging feature of Colony is a private messaging mechanism. Messages are sent by senders and received by recipients. Messages received by a user are placed into an inbox, while messages the user sent are placed into a sent box. For Colony we will not be implementing folders, so these are the only two message folders every user has. Messages sent to and received from users are threaded and ordered by time. We thread the messages in order to group the messages sent back and forth as part of an ongoing conversation. Threaded messages are sorted in chronological order, with the last received message at the bottom of the thread.

Attending events

Events can be thought of as locations in time where people come together for an activity. Social networking services often act as a nexus for a community, so organizing and attending events is a natural extension of a social networking service's features. Events have a wall, a venue, and a date and time when the event is happening, and they can have event-specific pages that allow users to customize and market their event.

In Colony we categorize users who attend events by their attendance status. Confirmed users are users who have confirmed their attendance. Pending users are users who haven't yet decided to attend the event. Declined users are users who have declined to attend the event after being invited. Declinations are explicit; there is an invisible group of users who fall into none of the above three categories.

Attracting users to events, or simply keeping them informed, is a critical part of making this (or any) feature successful. To do so, we suggest events to users and display the suggested events on the user's landing page. The suggestion algorithm is simple: we just go through each of the user's friends, see which other events they have confirmed attending, and then suggest those events to the user, as shown in the sketch below.
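Here is a plain-Ruby sketch of that suggestion algorithm. The method and the association names (friends, confirmed_events) are our own assumptions for illustration, not Colony's actual model code:

    # Suggest events that the user's friends are attending but the user
    # has not already confirmed
    def suggested_events(user)
      suggestions = []
      user.friends.each do |friend|
        friend.confirmed_events.each do |event|
          suggestions << event unless user.confirmed_events.include?(event)
        end
      end
      suggestions.uniq
    end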
Besides suggestions, the other means of discovering events are through activity feeds (whenever an event is created, it is logged as an activity and published on the activity feed) and through user pages, where the list of a user's events is also displayed. All events are public, as is content created within events, such as wall posts and pages.

Forming groups

Social networking services are made of people, and people have a tendency to form groups or categories based on common characteristics or interests. The idea of groups in Colony is to facilitate such grouping of people with a simple set of features. Conceptually, groups and events are very similar to each other, except that groups are not time-based the way events are and have no concept of attendance. Groups have members and a wall, and can have specific pages created by the group. Colony's capabilities for attracting users to groups are slightly weaker than for events: Colony only suggests groups on the groups page rather than on the landing page. However, groups can also be discovered through activity feeds and user pages. Colony has only public groups and places no restriction on who can join them.

Commenting on and liking content

Two popular and common features in many consumer-focused web applications are reviews and ratings. Reviews and ratings allow users to comment on or rate editorial or user-generated content. The stereotypical example is Amazon.com's book reviews and ratings, which let users review a book as well as rate it from one to five stars. Colony's review feature is called comments. Comments apply to all user-generated content, such as statuses, wall posts, photos, activities, and pages, and provide a means for users to review the content and give critique or encouragement to its creator. Colony's rating feature is simple and follows Facebook's popular rating feature, called likes. While many rating features offer a range of one to five stars, Colony (like Facebook) asks the user to indicate whether he likes the content. There is no dislike, though, so the fewer likes a piece of content has, the less popular it is. Like comments, likes apply to all user-generated content.

Sharing photos

Photos are one of the most popular types of user-generated content shared online, with users uploading 3 billion photos a month on Facebook alone, so photo sharing is an important feature to include in Colony. The basic concept of photo sharing in Colony is that each user can have one or more albums, and each album can have one or more photos. Photos can be commented on, liked, and annotated.

Blogging with pages

Colony's pages are a means of allowing users to create their own full-page content and attach it to their own account, an event, or a group. A user, event, or group can own one or more pages. Pages are meant to be user-generated content, so the entire content of a page is written by the user. However, in order to keep the look and feel consistent throughout the site, each page is styled according to Colony's look and feel. To do this we only allow users to enter Markdown, a lightweight markup language that takes many cues from existing conventions for marking up plain text in e-mail. Markdown converts its marked-up text input to valid, well-formed XHTML.
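In Ruby, the conversion can be done with a Markdown library. The sketch below uses the RDiscount gem, which is an assumption for illustration rather than a statement of which library Colony itself uses:

require 'rdiscount' # gem install rdiscount

# Convert user-entered Markdown into XHTML for display on a page.
def render_page(markdown_text)
  RDiscount.new(markdown_text).to_html
end

# Hypothetical usage:
# render_page("# My page\n\nSome *emphasized* text.")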
We use it here in Colony to let users write content easily, without worrying about layout, while keeping a consistent look and feel.

Technologies and platforms used

We use a number of technologies in this article, mainly revolving around the Ruby programming language and its various libraries. In addition to Ruby and its libraries, we also use mashups, which are described next.

Mashups

While the main features of the application are all provided for, we still depend on some services from external providers. In this article we use four such external services: RPX for user web authentication, Gravatar for avatar services, Amazon Web Services S3 for photo storage, and Facebook Connect for reaching out to users on Facebook.

Facebook Connect

Facebook has a number of technologies and APIs used to interact and integrate with its platform, and Facebook Connect is one of them. Facebook Connect is a set of APIs that lets users bring their identity and information into the application itself. We use Facebook Connect to send out requests to a user's friends, inviting them to join our social network. Note that for the user invitation feature, once a user has logged in through Facebook with RPX, he is considered to have logged into Facebook Connect and can therefore send invitations immediately without logging in again.

Summary

In this article, we described some of the essential features of Facebook, categorized into User, Community, and Content sharing features. We then discussed, at a high level, how we implement these features in our Facebook clone, Colony, and briefly covered the various technologies used in the clone. In the next article, we will build the Facebook clone using Ruby.

Further resources on this subject:
Building the Facebook Clone using Ruby [article]
URL Shorteners – Designing the TinyURL Clone with Ruby [article]

WordPress 3 Security: Risks and Threats

Packt
04 Jul 2011
(For more resources on WordPress, see here.)

You may think that most of this is irrelevant to WordPress security. Sadly, you'd be wrong. Your site is only as safe as its weakest link, whether that is one of the devices that assist in administering it or its server, your physical security, or your computing and online discipline. To sharpen the point with a simple example: whether you have an Automattic-managed wordpress.com blog or unmanaged dedicated site hosting, if a hacker grabs a password on your local PC, then all bets are off. If a hacker can borrow your phone, then all bets are off. If a hacker can coerce you to a malicious site, then all bets are off. And so on.

Let's get one thing clear: there is no such thing as total security, and anyone who says any different is selling something. What we can achieve, given ongoing attention, is to boost our understanding, to lock our locations, to harden our devices, to consolidate our networks, to screen our sites and, certainly not least of all, to discipline our computing practice. Even this carries no guarantee. Tell you what though, it's pretty darned tight. Let's jump in and, who knows, maybe even have a laugh here and there to keep us awake.

Calculated risk

So what is the risk? Here's one way to look at the problem:

RISK = VULNERABILITY x THREAT

A vulnerability is a weakness, a crack in your armour. That could be a dodgy wireless setup or a poorly coded plugin, a password-bearing sticky note, or an unencrypted e-mail. It could just be the tired security guy. It could be 1001 things, and then more besides. The bottom-line vulnerability though, respectfully, is our ignorance.

A threat, on the other hand, is an exploit, some means of hacking the flaw, in turn compromising an asset such as a PC, a router, a phone, or your site. That's the sniffer tool that intercepts your wireless, the code that manipulates the plugin, the colleague who reads the sticky note, whoever reads your mail, or the social engineer who tiptoes around security.

The risk is the likelihood of getting hacked. If you update the flawed plugin, for instance, then the threat is redundant, reducing the risk. Some risk remains because, when a further vulnerability is found, there will be someone, somewhere, who will tailor an exploit to threaten it. This ongoing struggle to minimize risk is the cat and mouse that is security. To minimize risk, we defend vulnerabilities against threats.

You may be wondering, why bother calculating risk? After all, any vulnerability requires attention. You'd not be wrong but, such is the myriad complexity of securing multiple assets, any of which can add risk to our site, and given that budgets or our time are at issue, we need to prioritize. Risk factoring helps by initially flagging glaring concerns and, ideally assisted by a security policy, ensuring sensible ongoing maintenance. Securing a site isn't a one-time deal. Such is the threatscape, it's an ongoing discipline.

An overview of our risk

Let's take a WordPress site, highlight potential vulnerabilities, and chew over the threats. WordPress is an interactive blogging application written in PHP and working in conjunction with a SQL database to store data and content. The size and complexity of this content manager are extended with third-party code such as plugins and themes.
The framework and WordPress sites are installed on a web server, and that, the platform, and its file system are administered remotely.

- WordPress. Powering many millions of standalone sites plus another 20 million blogs at wordpress.com, Automattic's platform is an attack target coveted by hackers. According to wordpress.org, 40% of self-hosted sites run the gauntlet with versions 2.3 to 2.9.
- Interactive. Just by being online, let alone offering interaction, sites are targets. A website, after all, is effectively an open drawer in an otherwise lockable filing cabinet, the server. Now, we're inviting people server-side not just to read but to manipulate files and data.
- Application, size, and complexity. Not only do applications require security patching but, given the sheer size and complexity of WordPress, there are more holes to plug. Then again, being a mature beast, a non-custom, hardened WordPress site is in itself robust.
- PHP, third-party code, plugins, and themes. Here's a whole new dynamic. The use of poorly written or badly maintained PHP and other code adds a slew of attack vectors.
- SQL database. Containing our most valuable assets, content and data, MySQL and other database apps are directly available to users, making them immediate targets for hackers.
- Data. User data, from e-mails to banking information, is craved by cybercriminals, and its compromise, or that of our content, costs sites anything from reputation to a drop or ban in search results, as well as carrying the remedial cost of time and money.
- Content and media. Content is regularly copied without permission. Likewise with media, which can also be linked to and displayed on other sites while you pay for its storage and bandwidth. Upload, FTP, and private areas provide further opportunities for mischief.
- Sites. Sites-plural adds risk, because a compromise to one can be a compromise to all.
- Web server. Server technologies and wider networks may be hacked directly or via WordPress, jeopardizing sites and data, and being used as springboards for wider attacks.
- File system. Inadequately secured files provide a means of site and server penetration.
- Administered remotely. Casual or unsecured content, site, server, and network administration allows for multi-faceted attacks and, conversely, requires discipline, a secure local working environment, and impenetrable local-to-remote connectivity.

Meet the hackers

This isn't some cunning ploy by yours truly to see for how many readers I can attain visitor's rights, you understand. The fact is, to catch a thief one has to think like one. Besides, not all hackers are such bad hats. Far from it. Overall there are three types, white hat, grey hat, and black hat, each with their sub-groups.

White hat

One important precedent sets white hats above and beyond other groups: permission. Also known as ethical hackers, these decent upstanding folks are motivated:

- To learn about security
- To test for vulnerabilities
- To find and monitor malicious activity
- To report issues
- To advise others
- To do nothing illegal
- To abide by a set of ethics to not harm anyone

So when we're testing our security to the limit, that should include us. Keep that in mind.

Black hat

Out-and-out dodgy dealers. They have nefarious intent and are loosely sub-categorized:

Botnets

A botnet is a network of automated robots, or scripts, often involved in malicious activity such as spamming or data-mining. The network tends to be comprised of zombie machines, such as your server, which are called upon at will to cause general mayhem.
Botnet operators, the actual black hats, have no interest in damaging most sites. Instead they want quiet control of the underlying server resources so that their malbots can, by way of more examples, spread malware or mount Denial of Service (DoS) attacks, the latter using multiple zombies to shower queries on a server to saturate resources and drown out a site.

Cybercriminals

These are hackers and gangs whose activity ranges from writing and automating malware to data-mining, the extraction of sensitive information to extort or sell for profit. They tend not to make nice enemies, so I'll just add that they're awfully clever.

Hacktivists

Politically minded and often inclined towards freedom of information, hacktivists may fit into one of the previous groups, but would argue that they have a justifiable cause.

Scrapers

While not technically hackers, scrapers steal content, often on an automated basis from site feeds, for the benefit of their generally charmless blog or blog farms.

Script kiddies

This broad group ranges from well-intentioned novices (white hat) to online graffiti artists who, when successfully evading community service, deface sites for kicks. Armed with tutorials galore and a share full of malicious warez, the hell-bent are a great threat because, seeking bragging rights, they spew as much damage as they possibly can.

Spammers

Again not technically hackers, but this vast group leeches off blogs and mailing lists to promote their businesses, which frequently seem to revolve around exotic pharmaceutical products. They may automate bomb marketing or embed hidden links but, however educational their comments may be, spammers are generally, but not always, just a nuisance and a benign threat.

Misfits

Not jargon this time; this miscellaneous group includes disgruntled employees, the generally unloved, and that guy over the road who never really liked you.

Grey hat

Grey hatters may have good intentions, but seem to have a knack for misplacing their moral compass, so there's a qualification for going into politics. One might argue, for that matter, that government intelligence departments provide a prime example.

Hackers and crackers

Strictly speaking, hackers are white hat folks who just like pulling things apart to see how they work. Most likely, as kids, they preferred Meccano to Lego. Crackers are black or grey hat. They probably borrowed someone else's Meccano, then built something explosive. Over the years, the lines between hacker and cracker have become blurred to the point that put-out hackers often classify themselves as ethical hackers. This author would argue the point but, largely in the spirit of living language, won't, instead referring to all those trying to break in, for good or bad, as hackers. Let your conscience guide you as to which is which in each instance and, failing that, find a good priest.

Physically hacked off

So far, we have tentatively flagged the importance of a safe working environment and of a secure network, from fingertips to page query. We'll begin to tuck in now, first looking at the physical risks to consider along our merry way. Risk falls into the broad categories of physical and technical, and this tome is mostly concerned with the latter. Then again, with physical weaknesses being so commonly exploited by hackers, often as an information-gathering preface to a technical attack, it would be lacking not to mention this security aspect and, moreover, not to sweet-talk the highly successful area of social engineering.
Physical risk boils down to the loss or unauthorized use of (materials containing) data:

- Break-in or, more likely still, a cheeky walk-in
- Dumpster diving, or collecting valuable information, literally, from the trash
- Inside jobs, because a disgruntled (ex-)employee can be a dangerous sort
- Lost property, when you leave the laptop on the train
- Social engineering, which is a topic we'll cover separately, so that's ominous
- Something just breaks... such as the hard drive

Password-strewn sticky notes aside, here are some more specific red flags to consider when trying to curtail physical risk:

- Building security, whether it's attended or not. By the way, who's got the keys? A cleaner, a doorman, the guy you sacked?
- Discarded media or paper clues that haven't been criss-cross shredded. Your rubbish is your competitor's profit.
- Logged-on PCs left unlocked, unsecured, and unattended, or with hard drives unencrypted and lacking strong admin and user passwords for the BIOS and OS.
- Media, devices, PCs and their internal/external hardware. Everything should be pocketed or locked away, perhaps in a safe.
- No Ethernet jack point protection and no idea about the accessibility of the cable beyond the building.
- No power-surge protection, which could be a false economy too.

This list is not exhaustive. For mid-sized to larger enterprises, it barely scratches the surface, and you, at least, do need to employ physical security consultants to advise on anything from office location to layout, as well as to train staff to create a security culture. Otherwise, if you work in a team at least, you need a policy detailing each and every one of these elements, whether they impact your work directly or indirectly. You may consider designating and sub-designating who is responsible for what and policing, for example, kit that leaves the office. Don't forget cell and smart phones, and even diaries.

Moodle 1.9: Working with Mind Maps

Packt
30 Jun 2010
In this virtual classroom, we are going to enrich the use of vocabulary, because in creating these techniques we have to use keywords, which then have to be used in a piece of writing. Mind maps are going to be designed according to the facilities that the different software packages provide us to exploit them.

Pictures in mind maps—using Buzan's iMindMap V4

In this task, we are going to use the software of the inventor of mind maps: Buzan's iMindMap V4. We are going to work on the topic of robots, and afterwards students are going to write an article about them. We are going to provide students with images of different robots, taking into account that a robot is not just a silver, rectangular, human look-alike. Robots may have several shapes and can be used for different purposes. Read the next screenshot, which is taken from Buzan's iMindMap V4 software, about inserting images in a mind map:

Getting ready

Let's create a mind map related to robots with pictures. After creating the mind map, students are going to look at it and write an article about the topic. In this case, the mind map will be designed with images only, so as to "trigger associations within the brain" of our students. You can download a free trial of this software from the following webpage: http://www.thinkbuzan.com/uk/.

How to do it...

After downloading the free trial (you may also buy the software), create a new file. Then follow these steps to create a mind map with images using the previously mentioned software:

1. Choose a central image in order to write the name of the topic in the middle, as shown in the next screenshot:
2. In Enter some text for your central idea, enter Robots, as shown in the previous screenshot, and click on Create.
3. Click on Draw and select Organic, and draw the lines of the mind map, as shown in the following screenshot:
4. To add images to the mind map, click on Insert and select Floating image, as shown in the next screenshot:
5. Click on View, select Image Library, and search for images, as shown in the next screenshot:
6. Another option is to look for an image in Microsoft Word and copy and paste the image into the mind map.
7. Save the file.

How it works...

We are going to select the Weekly outline section where we want to insert the activity. Then we are going to create a link to a file. Later, we will ask students to upload a single file in order to carry out the writing activity. Follow these steps:

1. Click on Add a resource and select Link to a file or website.
2. Complete the Name block.
3. Complete the Summary block.
4. Click on Choose or upload a file.
5. Click on Upload a file.
6. Click on Browse, search for the file, and then click on Open.
7. Click on Upload this file and then select Choose.
8. In the Target block, select New window.
9. Click on Save and return to course.

The mind map appears as shown in the following screenshot:

There's more...

We saw how to create a mind map related to robots previously; now we will see how to upload this mind map as an image in your course.

Uploading the mind map as a .png file

If your students do not have this software and cannot open the file, you may upload the mind map to the Moodle course as an image. These are the steps to follow:

1. Open the file and fit the mind map in the screen.
2. Press the Prt Scr key.
3. Paste (Ctrl + V) the image into Paint or Inkscape (or any similar software).
4. Select the section of the mind map only, as shown in the next screenshot:
5. Save the image as .png so that you can upload the image of the mind map to the Moodle course.
Drawing pictures using a pen sketch

It is also possible to use a digital pen, also known as a pen sketch, to draw elements for the mind map. For example, as we are dealing with robots in this mind map, you can draw a robot's face and add it to the mind map, as shown in the next screenshot:

Creating a writing activity

You may add the mind map as a resource in the Moodle course or you may insert an image of it. In both cases, students can write an article about robots. If you upload the mind map in the Moodle course, you can do it in the Description block of Upload a single file, and you do not have to split the activity in two.

Adding data to pictures—creating a mind map using MindMeister

In this recipe, we are going to work with MindMeister software, which is free and open source. We are going to create a mind map, inserting links to websites that contain information as well as pictures. Why? Because if we include more information in the mind map, we are going to guide our students in how to write. Apart from that, they are going to read more before writing, so we are also exercising reading comprehension. They may also summarize the information if we create a link to a website. So let's get ready!

Getting ready

We are going to enter http://www.mindmeister.com/ and then Sign up for free. There is one version which is free to use, or you may choose one of the other two, which are commercial. After signing up, we are going to develop a mind map for our students to work with. There is a video tutorial that explains, in a very simple and easy way, how to design a mind map using this software, so it is worth watching.

How to do it...

We are going to enter the previously mentioned website and start working on this new mind map. In this case, I have chosen the topic "Special days around the world". Follow these steps:

1. Click on My New Mind Map and write the name of the topic in the block in the middle.
2. Click on Connect and draw arrows, adding as many New node blocks as you wish.
3. Add a website giving information for each special occasion.
4. Click on the Node, then click on Extras–Links | Links and complete the URL block, as shown in the next screenshot:
5. Then click on the checkmark icon.
6. Repeat the same process for each occasion.
7. You can add icons or images to the nodes of the mind map.
8. Click on Share Map at the bottom of the page, as shown in the next screenshot:
9. Click on Publish and change the button to ON, as shown in the next screenshot:
10. Select Allow edit for everybody (WikiMap), as shown in the previous screenshot.
11. You can also embed the mind map. When you click on Embed map, the next screenshot will appear:
12. Copy the Embed code and click on Close.
13. Click on OK.

How it works...

After creating the mind map about special occasions around the world, we will either embed it or create a link to a website for our students to work on a writing activity. Here the proposal is to work through a wiki because, in Map Properties, we have clicked on Allow edit for everybody (WikiMap) so that students can modify the mind map with their own ideas. Select the Weekly outline section where you want to insert the activity; these are the steps to follow:

1. Click on Add an activity and select Wiki.
2. Complete the Name block.
3. Complete the Summary block.
4. You may either embed the mind map or create a link to a website, as shown in the next screenshot:
5. Click on Save and return to course.

Building a Remote-controlled TV with Node-Webkit

Roberto González
04 Dec 2014
Node-webkit is one of the most promising technologies to have come out in the last few years. It lets you ship a native desktop app for Windows, Mac, and Linux using just HTML, CSS, and some JavaScript: the exact same languages you use to build any web app. You basically get your very own frameless WebKit to build your app, which is then supercharged with Node.js, giving you access to some powerful libraries that are not available in a typical browser.

As a demo, we are going to build a remote-controlled YouTube app. This involves creating a native app that displays YouTube videos on your computer, as well as a mobile client that will let you search for and select the videos you want to watch straight from your couch.

You can download the finished project from https://github.com/Aerolab/youtube-tv. You need to follow the first part of this guide (Getting started) to set up the environment and then run run.sh (on Mac) or run.bat (on Windows) to start the app.

Getting started

First of all, you need to install Node.js (a JavaScript platform), which you can download from http://nodejs.org/download/. The installer comes bundled with NPM (the Node.js package manager), which lets you install everything you need for this project.

Since we are going to be building two apps (a desktop app and a mobile app), it's better if we get the boring HTML+CSS part out of the way first, so we can concentrate on the JavaScript part of the equation. Download the project files from https://github.com/Aerolab/youtube-tv/blob/master/assets/basics.zip and put them in a new folder. You can name the project's folder youtube-tv or whatever you want. The folder should look like this:

- index.html   // This is the starting point for our desktop app
- css          // Our desktop app styles
- js           // This is where the magic happens
- remote       // This is where the magic happens (Part 2)
- libraries    // FFMPEG libraries, which give you H.264 video support in Node-Webkit
- player       // Our youtube player
- Gruntfile.js // Build scripts
- run.bat      // run.bat runs the app on Windows
- run.sh       // sh run.sh runs the app on Mac

Now open the Terminal (on Mac or Linux) or a new command prompt (on Windows) right in that folder. We'll install a couple of dependencies we need for this project, so type these commands to install node-gyp and grunt-cli. Each one will take a few seconds to download and install:

On Mac or Linux:

sudo npm install node-gyp -g
sudo npm install grunt-cli -g

On Windows:

npm install node-gyp -g
npm install grunt-cli -g

Leave the Terminal open. We'll be using it again in a bit.

All Node.js apps start with a package.json file (our manifest), which holds most of the settings for your project, including which dependencies you are using. Go ahead and create your own package.json file (right inside the project folder) with the following contents. Feel free to change anything you like, such as the project name, the icon, or anything else. Check out the documentation at https://github.com/rogerwang/node-webkit/wiki/Manifest-format:

{
  "//": "The // keys in package.json are comments.",

  "//": "Your project's name. Go ahead and change it!",
  "name": "Remote",
  "//": "A simple description of what the app does.",
  "description": "An example of node-webkit",
  "//": "This is the first html the app will load. Just leave this this way",
  "main": "app://host/index.html",
  "//": "The version number. 0.0.1 is a good start :D",
  "version": "0.0.1",

  "//": "This is used by Node-Webkit to set up your app.",
  "window": {
    "//": "The Window Title for the app",
    "title": "Remote",
    "//": "The Icon for the app",
    "icon": "css/images/icon.png",
    "//": "Do you want the File/Edit/Whatever toolbar?",
    "toolbar": false,
    "//": "Do you want a standard window around your app (a title bar and some borders)?",
    "frame": true,
    "//": "Can you resize the window?",
    "resizable": true
  },
  "webkit": {
    "plugin": false,
    "user-agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_5) AppleWebKit/537.36 (KHTML, like Gecko) Safari/537.36"
  },

  "//": "These are the libraries we'll be using:",
  "//": "Express is a web server, which will handle the files for the remote",
  "//": "Socket.io lets you handle events in real time, which we'll use with the remote as well.",
  "dependencies": {
    "express": "^4.9.5",
    "socket.io": "^1.1.0"
  },

  "//": "And these are just task handlers to make things easier",
  "devDependencies": {
    "grunt": "^0.4.5",
    "grunt-contrib-copy": "^0.6.0",
    "grunt-node-webkit-builder": "^0.1.21"
  }
}

You'll also find Gruntfile.js, which takes care of downloading all of the node-webkit assets and building the app once we are ready to ship. Feel free to take a look at it, but it's mostly boilerplate code.

Once you've set everything up, go back to the Terminal and install everything you need by typing:

npm install
grunt nodewebkitbuild

You may run into some issues when doing this on Mac or Linux. In that case, try using sudo npm install and sudo grunt nodewebkitbuild.

npm install installs all of the dependencies you mentioned in package.json, both the regular dependencies and the development ones, like grunt and grunt-node-webkit-builder, which downloads the Windows and Mac versions of node-webkit, sets them up so they can play videos, and builds the app. Wait a bit for everything to install properly and we're ready to get started. Note that if you are using Windows, you might get a scary error related to Visual C++ when running npm install. Just ignore it.

Building the desktop app

All web apps (or websites, for that matter) start with an index.html file. We are going to create just that to get our app to run:

<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8" />
  <title>Youtube TV</title>

  <link href='http://fonts.googleapis.com/css?family=Roboto:500,400' rel='stylesheet' type='text/css' />
  <link href="css/normalize.css" rel="stylesheet" type="text/css" />
  <link href="css/styles.css" rel="stylesheet" type="text/css" />
</head>
<body>

  <div id="serverInfo">
    <h1>Youtube TV</h1>
  </div>

  <div id="videoPlayer">
  </div>

  <script src="js/jquery-1.11.1.min.js"></script>
  <script src="js/youtube.js"></script>
  <script src="js/app.js"></script>

</body>
</html>

As you may have noticed, we are using three scripts in our app: jQuery (pretty well known at this point), a YouTube video player, and finally app.js, which contains our app's logic. Let's dive into that!

First of all, we need to create the basic elements for our remote control. The easiest way of doing this is to create a basic web server and serve a small web app that can search YouTube, select a video, and provide some play/pause controls so we don't have any good reasons to get up from the couch. Open js/app.js and type the following:
// Show the Developer Tools. And yes, Node-Webkit has developer tools built in!
// Uncomment the next line to open them automatically
//require('nw.gui').Window.get().showDevTools();

// Express is a web server, which will allow us to create a small web app
// with which to control the player
var express = require('express');
var app = express();
var server = require('http').Server(app);
var io = require('socket.io')(server);

// We'll be opening up our web server on port 8080 (which doesn't require root privileges)
// You can access this server at http://127.0.0.1:8080
var serverPort = 8080;
server.listen(serverPort);

// All the static files (css, js, html) for the remote will be served using Express.
// These assets are in the /remote folder
app.use('/', express.static('remote'));

With those seven lines of code (not counting comments) we just got a neat web server working on port 8080. If you were paying attention to the code, you may have noticed that we required something called socket.io. This lets us use websockets with minimal effort, which means we can communicate with, from, and to our remote instantly. You can learn more about socket.io at http://socket.io/. Let's set that up next in app.js:

// Socket.io handles the communication between the remote and our app in real time,
// so we can instantly send commands from a computer to our remote and back
io.on('connection', function (socket) {

  // When a remote connects to the app, let it know immediately
  // the current status of the video (play/pause)
  socket.emit('statusChange', Youtube.status);

  // This is what happens when we receive the watchVideo command
  // (picking a video from the list)
  socket.on('watchVideo', function (video) {
    // video contains a bit of info about our video (id, title, thumbnail)
    // Order our Youtube Player to watch that video
    Youtube.watchVideo(video);
  });

  // These are playback controls. They receive the "play" and "pause" events from the remote
  socket.on('play', function () {
    Youtube.playVideo();
  });
  socket.on('pause', function () {
    Youtube.pauseVideo();
  });

});

// Notify all the remotes when the playback status changes (play/pause)
// This is done with io.emit, which sends the same message to all the remotes
Youtube.onStatusChange = function (status) {
  io.emit('statusChange', status);
};

That's the desktop part done! In a few dozen lines of code we got a web server running at http://127.0.0.1:8080 that can receive commands from a remote to watch a specific video, as well as handle some basic playback controls (play and pause). We are also notifying the remotes of the player's status as soon as they connect, so they can update their UI with the correct buttons (if it's playing, show the pause button and vice versa).

Now we just need to build the remote.

Building the remote control

The server is just half of the equation. We also need to add the corresponding logic on the remote control, so it's able to communicate with our app.
In remote/index.html, add the following HTML:

<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8" />
  <title>TV Remote</title>

  <meta name="viewport" content="width=device-width, initial-scale=1, maximum-scale=1" />

  <link rel="stylesheet" href="/css/normalize.css" />
  <link rel="stylesheet" href="/css/styles.css" />
</head>
<body>

  <div class="controls">
    <div class="search">
      <input id="searchQuery" type="search" value="" placeholder="Search on Youtube..." />
    </div>
    <div class="playback">
      <button class="play">&gt;</button>
      <button class="pause">||</button>
    </div>
  </div>

  <div id="results" class="video-list">
  </div>

  <div class="__templates" style="display:none;">
    <article class="video">
      <figure><img src="" alt="" /></figure>

      <div class="info">
        <h2></h2>
      </div>
    </article>
  </div>

  <script src="/socket.io/socket.io.js"></script>
  <script src="/js/jquery-1.11.1.min.js"></script>

  <script src="/js/search.js"></script>
  <script src="/js/remote.js"></script>

</body>
</html>

Again, we have a few libraries: socket.io is served automatically by our desktop app at /socket.io/socket.io.js and manages the communication with the server, jQuery is somehow always there, search.js manages the integration with the YouTube API (you can take a look if you want), and remote.js handles the logic for the remote.

The remote itself is pretty simple. It can look for videos on YouTube, and when we click on a video it connects with the app, telling it to play the video with socket.emit. Let's dive into remote/js/remote.js to make this thing work:

// First of all, connect to the server (our desktop app)
var socket = io.connect();

// Search youtube when the user stops typing. This gives us an automatic search.
var searchTimeout = null;
$('#searchQuery').on('keyup', function (event) {
  clearTimeout(searchTimeout);
  searchTimeout = setTimeout(function () {
    searchYoutube($('#searchQuery').val());
  }, 500);
});

// When we click on a video, watch it on the App
$('#results').on('click', '.video', function (event) {
  // Send an event to notify the server we want to watch this video
  socket.emit('watchVideo', $(this).data());
});

// When the server tells us that the player changed status (play/pause),
// alter the playback controls
socket.on('statusChange', function (status) {
  if (status === 'play') {
    $('.playback .pause').show();
    $('.playback .play').hide();
  } else if (status === 'pause' || status === 'stop') {
    $('.playback .pause').hide();
    $('.playback .play').show();
  }
});

// Notify the app when we hit the play button
$('.playback .play').on('click', function (event) {
  socket.emit('play');
});

// Notify the app when we hit the pause button
$('.playback .pause').on('click', function (event) {
  socket.emit('pause');
});

This is very similar to our server, except that we are using socket.emit a lot more often to send commands back to our desktop app, telling it which videos to play and handling our basic play/pause controls.

The only thing left to do is make the app run. Ready? Go to the terminal again and type:

If you are on a Mac:

sh run.sh

If you are on Windows:

run.bat

If everything worked properly, you should see the app, and if you open a web browser to http://127.0.0.1:8080 the remote client will open up. Search for a video, pick anything you like, and it'll play in the app. This also works if you point any other device on the same network to your computer's IP, which brings me to the next (and last) point.

Finishing touches

There is one small improvement we can make: print out the computer's IP to make it easier to connect to the app from any other device on the same Wi-Fi network (like a smartphone).
On js/app.js add the following code to find out the IP and update our UI so it's the first thing we see when we open the app:

// Find the local IP
function getLocalIP(callback) {
  require('dns').lookup(require('os').hostname(), function (err, add, fam) {
    typeof callback == 'function' ? callback(add) : null;
  });
}

// To make things easier, find out the machine's IP and communicate it
getLocalIP(function (ip) {
  $('#serverInfo h1').html('Go to<br/><strong>http://' + ip + ':' + serverPort + '</strong><br/>to open the remote');
});

The next time you run the app, the first thing you'll see is the IP for your computer, so you just need to type that URL into your smartphone to open the remote and control the player from any computer, tablet, or smartphone (as long as they are on the same Wi-Fi network).

That's it! You can start expanding on this to improve the app. Why not open the app fullscreen by default? Why not get rid of the horrible default frame and create your own? You can actually designate any div as a window handle with CSS (using -webkit-app-region: drag), so you can drag the window by that div and create your own custom title bar.

Summary

While the app has a lot of interlocking parts, it's a good first project to find out what you can achieve with node-webkit in just a few minutes. I hope you enjoyed this post!

About the author

Roberto González is the co-founder of Aerolab, "an awesome place where we really push the barriers to create amazing, well-coded designs for the best digital products". He can be reached at @robertcode.