
How-To Tutorials - Web Development

Packt
26 Jun 2010
3 min read

JasperReports 3.6: Creating a Report from Model Beans of Java Applications

(For more resources on JasperReports, see here.)

Getting ready

You need a Java JAR file that contains class files for the JavaBeans required for this recipe. A custInvoices.jar file is included in the source code (chap4). Unzip the source code file for this article and copy the Task5 folder from the unzipped source code to a location of your choice.

How to do it...

Let's start using Java objects as data storage units.

1. Open the ModelBeansReport.jrxml file from the Task5 folder of the source code for this article (chap4). The Designer tab of iReport shows a report containing data in the Title, Column Header, Customer Group Header1, and Detail 1 sections, as shown in the following screenshot.
2. If you have not made any database connection so far in your iReport installation, you will see an Empty datasource selected in a drop-down list just below the main menu. Click on the Report Datasources icon, shown encircled to the right of the drop-down list in the following screenshot.
3. A new window named Connections / Datasources will open. This window lists an Empty data source as well as the datasources you have made so far. Click the New button at the top-right of the Connections / Datasources window. This will open a new Datasource selection window, as shown in the following screenshot.
4. Select JavaBeans set datasource from the datasource types and click the Next button. A new window named JavaBeans set datasource will open, as shown in the following screenshot.
5. Enter CustomerInvoicesJavaBeans as the name of your new connection in the text box beside the Name field.
6. Enter com.CustomerInvoicesFactory as the name of the factory class in the text box beside the Factory class field. This com.CustomerInvoicesFactory class provides iReport with access to the JavaBeans that contain your data.
7. Enter getBeanCollection as the name of the static method in the text box beside The static method... field. Leave the rest of the fields at their default values.
8. Click the Test button to test your new connection to the JavaBeans datasource. You will see an Exception message dialog; this exception occurs because iReport can't find your factory class yet. Dismiss the message box by clicking OK.
9. Click the Save button at the bottom of the JavaBeans set datasource window, and close the Connections / Datasources window as well.

Packt
14 Jan 2016
6 min read

Setup Routine for an Enterprise Spring Application

In this article by Alex Bretet, author of the book Spring MVC Cookbook, you will learn how to install Eclipse for Java EE developers and Java SE 8. (For more resources related to this topic, see here.)

Introduction

The choice of the Eclipse IDE needs to be discussed, as there is some competition in this domain. Eclipse is popular in the Java community for being an active open source product; it is consequently accessible online to anyone with no restrictions. It also provides, among other things, very good support for web implementations, particularly MVC approaches.

Why use the Spring Framework? The Spring Framework and its community have contributed to pulling the Java platform forward for more than a decade. Presenting the whole framework in detail would require more than an article. However, the core functionality, based on the principles of Inversion of Control and Dependency Injection through performant access to the bean repository, allows massive reusability. Staying lightweight, the Spring Framework secures great scaling capabilities and could probably suit all modern architectures.

The following recipe covers downloading and installing the Eclipse IDE for JEE developers, and downloading and installing the JDK 8 Oracle Hotspot.

Getting ready

This first sequence may appear redundant or unnecessary given your education or experience, but following it has real benefits: you will stay away from unidentified bugs (integration or development), you will see the same interfaces as the presented screenshots and figures, and, because third-party products evolve, you will not face the surprise of encountering unexpected screens or windows.

How to do it...

You need to perform the following steps to install the Eclipse IDE:

1. Download a distribution of the Eclipse IDE for Java EE developers. We will be using an Eclipse Luna distribution in this article. We recommend installing this version, which can be found at https://www.eclipse.org/downloads/packages/eclipse-ide-java-ee-developers/lunasr1, so that you can follow our guidelines and screenshots exactly. Download a Luna distribution for the OS and environment of your choice. The product to be downloaded is not a binary installer but a ZIP archive. If you feel confident enough to use another (more recent) version of the Eclipse IDE for Java EE developers, all of them can be found at https://www.eclipse.org/downloads.

For the upcoming installations, on Windows, a few target locations are suggested at the root directory C:\. To avoid permission-related issues, it is better if your Windows user is configured as a local administrator. If you can't be part of this group, feel free to target installation directories you have write access to.

2. Extract the downloaded archive into an eclipse directory:
   - On Windows, extract it into the C:\Users\{system.username}\eclipse directory
   - On Linux, extract it into the /home/usr/{system.username}/eclipse directory
   - On Mac OS X, extract it into the /Users/{system.username}/eclipse directory

3. Select and download a JDK 8. We suggest downloading the Oracle Hotspot JDK. Hotspot is a performant JVM implementation originally built by Sun Microsystems. Now owned by Oracle, the Hotspot JRE and JDK can be downloaded for free. Choose the product corresponding to your machine through the Oracle website's link, http://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html. To avoid a compatibility issue later on, stay consistent with the architecture choice (32 or 64 bits) you made earlier for the Eclipse archive.

4. Install the JDK 8. On Windows, perform the following steps:
   - Execute the downloaded file and wait until you reach the next installation step.
   - On the installation-step window, pay attention to the destination directory and change it to C:\java\jdk1.8.X_XX (X_XX being the latest current version; we will be using jdk1.8.0_25 in this article).
   - It won't be necessary to install an external JRE, so uncheck the Public JRE feature.

   On Linux/Mac OS, perform the following steps:
   - Download the tar.gz archive corresponding to your environment.
   - Change the current directory to where you want to install Java. For easier instructions, let's agree on the /usr/java directory.
   - Move the downloaded tar.gz archive to this current directory.
   - Unpack the archive with the following command line, targeting the name of your archive: tar zxvf jdk-8u25-linux-i586.tar.gz (this example is for a binary archive corresponding to a Linux x86 machine). You must end up with the /usr/java/jdk1.8.0_25 directory structure containing the subdirectories /bin, /db, /jre, /include, and so on.

How it works...

Eclipse for Java EE developers

We have installed the Eclipse IDE for Java EE developers. Compared to the Eclipse IDE for Java developers, it ships with some additional packages, such as Java EE Developer Tools, Data Tools Platform, and JavaScript Development Tools. This version is appreciated for its capability to manage development servers as part of the IDE itself, to customize Project Facets, and to support JPA. The Luna version is officially Java SE 8 compatible, which was a decisive factor here.

Choosing a JVM

The choice of JVM implementation could be discussed in terms of performance, memory management, garbage collection, and optimization capabilities. There are many different JVM implementations, among them a couple of open source solutions such as OpenJDK and IcedTea (RedHat). The right choice really depends on the application requirements. We have chosen Oracle Hotspot from experience and from reference implementations deployed in production; it can be trusted for a wide range of generic purposes. Hotspot also behaves very well when running Java UI applications, and Eclipse is one of them.

Java SE 8

If you haven't already played with Scala or Clojure, it is time to take the functional programming train! With Java SE 8, lambda expressions dramatically reduce the amount of code, with improved readability and maintainability. Lambdas are not the only Java 8 feature we will use but, being probably the most popular one, they must be highlighted for the massive credit they have given to this paradigm change. It is important nowadays to be familiar with these patterns.

Summary

In this article, you learned how to install Eclipse for Java EE developers and Java SE 8.

Resources for Article:

Further resources on this subject:
- Support for Developers of Spring Web Flow 2 [article]
- Design with Spring AOP [article]
- Using Spring JMX within Java Applications [article]

Packt
03 Nov 2010
6 min read

Logos in Inkscape

Inkscape 0.48 Essentials for Web Designers: use the fascinating Inkscape graphics editor to create attractive layout designs, images, and icons for your website.
- The first book on the newly released Inkscape version 0.48, with an exclusive focus on web design
- Comprehensive coverage of all aspects of Inkscape required for web design
- Incorporate eye-catching designs, patterns, and other visual elements to spice up your web pages
- Learn how to create your own Inkscape templates in addition to using the built-in ones
- Written in a simple illustrative manner, which will appeal to web designers and experienced Inkscape users alike

Here's an example of a web page with a logo as a major design element:

Logos as the cornerstone of the design

Logos are the graphical representation or emblem of a company or organization; sometimes even individuals use them to promote instant recognition. They can be a graphic (a combination of symbols or icons), a combination of graphics and text, or a graphical form of text. Why are they important in web design? Since most companies want to be recognized by their logo alone, the logo is the critical piece of the design. It needs prominent placement and must work flawlessly with the design.

Best practices for creating logos

There are many guidelines and principles behind the best logo designs. They start with some simple ideas that have been reworked and discussed intensely since the start of the Internet, but it never hurts to review the best practices. You want your company logos to be:

- Simple: That's right, you want to keep them clean, simple, neat, and intensely easy to recreate. If you nail this attribute, the others listed below will be easy to achieve.
- Memorable: Think of all the great company logos. You remember them in your mind's eye very easily, right? That's because they are unique and, in essence, simple. These two attributes together make some of the best company logos today.
- Timeless: These logos will last many years. This not only saves the company or individual money, but also increases the memorability of the logo and the company's brand.
- Versatile: Any logo that can be used in print (color and black and white), digital media, television, any size, letterhead, billboards, and small iconic placements along the bottom of web pages or promotional materials is a successful logo. You never know where a logo might be placed, especially on the web. You want something that can be used in a prominent location on the company website itself, but also something that works in a small thumbnail space for social media or cell phone applications.
- Appropriate: The logo should be appropriate for the company it represents. The right colors, images, and more will go a long way in giving the company immediate credibility at first glance from any consumer or potential client. It can also be a great indication of the services one can expect from the company itself.

Seems easy enough, right? It is, after some practice and once some processes are in place. It never hurts to have a loose process for working with clients to determine their needs and wants in a logo. Some clients may already have a logo and want to keep parts of the design while revamping others, while other clients might be so new they have never had a logo before. As a start, here's a brief process for working with clients and discussing logos.

Information gathering

There's no better place to start than to open the floor for discussion. Here's just the start of what you can ask or gather from your client in an initial information-gathering meeting:

- Does the client already have a logo? If yes, do they intend to keep that logo for use in the web design? Again, if yes, get the source files. Hopefully they are in a vector graphic format so they are scalable and usable right away in the web design.
- Are they interested in a logo redesign? This can be beneficial if they are rebranding themselves as a business or having a 'grand re-opening' of some sort. It can breathe life into a stale business and sometimes garner new interest.
- If yes, is it a complete (open to anything) redesign? Or are there certain elements that need to stay? Sometimes a color is important, or a certain font or a certain graphical element needs to stay within the logo. Listen and take notes; it is important to work with the client and try to fulfill their needs as much as possible.
- If the client is open to a complete redesign, brainstorm a bit with them about their needs and wants: colors, fonts, graphical ideas. Don't be afraid to bring out some paper and pencils and start sketching ideas. Sometimes it can be most productive to work through rough ideas this way to get a feel for what the client likes most and least. Consider it a working session.
- Try to understand where the client wants to use this logo most prominently. Even though you will design something versatile that could be used in most mediums, you still want to know where they plan to use it the most. That way, you can tailor the logo as much as possible for that space, especially if you can use more color.
- What are the primary goals of the company? What is their mission statement?
- Does the client already have brand guidelines to consider?

Creating initial designs

After the initial informational session, it is your turn to start designing. Take the paper-and-pencil sketches (if you have any) from the initial meeting and expand on them. In fact, spend a bit more time with your team and flesh out a few more of those ideas in a true brainstorming session. It can be beneficial to start this way before jumping onto the computer and getting caught up in details like typeface and effects. Once you have some solid ideas, bring them over to the computer and start designing. Focus on only three of your best ideas, so that you bring only your best to the client to review and discuss. Much like the web design process, the logo design process takes a very similar route: you bring design mock-ups to your client to review, get feedback, redesign, and then design some more, all until you get approval. And then you, being an Inkscape expert, can build and export the logo in any number of vector formats for use in almost any medium.

Packt
05 Feb 2015
11 min read

Google App Engine

In this article by Massimiliano Pippi, author of the book Python for Google App Engine, you will learn how to write a web application and see the platform in action. Web applications commonly provide a set of features such as user authentication and data storage, and App Engine provides the services and tools needed to implement them. (For more resources related to this topic, see here.)

In this article, we will see:
- Details of the webapp2 framework
- How to authenticate users
- Storing data on Google Cloud Datastore
- Building HTML pages using templates

Experimenting on the Notes application

To better explore App Engine and Cloud Platform capabilities, we need a real-world application to experiment on; something that's not trivial to write, with a reasonable list of requirements. A good candidate is a note-taking application; we will name it Notes. Notes enables users to add, remove, and modify a list of notes; a note has a title and a body of text. Users can only see their personal notes, so they must authenticate before using the application. The main page of the application will show the list of notes for logged-in users and a form to add new ones. The code from the helloworld example is a good starting point: we can simply change the name of the root folder and the application field in the app.yaml file to match the new name we chose for the application, or we can start a new project from scratch named notes.
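As a rough sketch of that rename, a first-generation App Engine app.yaml might look like this (the article only mentions changing the application field; the other fields shown here are typical defaults of the era and are assumptions, not taken from the article):

```yaml
application: notes   # renamed from the helloworld example
version: 1
runtime: python27
api_version: 1
threadsafe: true

handlers:
- url: /.*
  script: main.app
```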
Authenticating users

The first requirement for our Notes application is to show the home page only to users who are logged in and redirect others to the login form; the users service provided by App Engine is exactly what we need, and adding it to our MainHandler class is quite simple:

import webapp2
from google.appengine.api import users


class MainHandler(webapp2.RequestHandler):
    def get(self):
        user = users.get_current_user()
        if user is not None:
            self.response.write('Hello Notes!')
        else:
            login_url = users.create_login_url(self.request.uri)
            self.redirect(login_url)


app = webapp2.WSGIApplication([
    ('/', MainHandler)
], debug=True)

The users package we import on the second line of the previous code provides access to the users service functionality. Inside the get() method of the MainHandler class, we first check whether the user visiting the page is logged in. If they are, the get_current_user() method returns an instance of the User class provided by App Engine, representing an authenticated user; otherwise, it returns None. If the user is valid, we provide the response as we did before; otherwise, we redirect them to the Google login form. The URL of the login form is returned by the create_login_url() method, which we call passing as a parameter the URL we want to redirect users to after a successful authentication. In this case, we want to redirect users to the same URL they are visiting, provided by webapp2 in the self.request.uri property. The webapp2 framework also provides handlers with a redirect() method we can use to conveniently set the right status and location properties of the response object so that client browsers will be redirected to the login page.
HTML templates with Jinja2

Web applications provide rich and complex HTML user interfaces, and Notes is no exception; but so far, the response objects in our application have contained just small pieces of text. We could include HTML tags as strings in our Python modules and write them in the response body, but we can imagine how easily the code would become messy and hard to maintain. We need to completely separate the Python code from the HTML pages, and that's exactly what a template engine does. A template is a piece of HTML code living in its own file and possibly containing additional, special tags; with the help of a template engine, we can load this file from the Python script, properly parse any special tags, and return valid HTML code in the response body. App Engine includes a well-known template engine in the Python runtime: the Jinja2 library.

To make the Jinja2 library available to our application, we need to add this code to the app.yaml file under the libraries section:

libraries:
- name: webapp2
  version: "2.5.2"
- name: jinja2
  version: latest

We can put the HTML code for the main page in a file called main.html inside the application root. We start with a very simple page:

<!DOCTYPE html>
<html>
<head lang="en">
    <meta charset="UTF-8">
    <title>Notes</title>
</head>
<body>
<div class="container">
    <h1>Welcome to Notes!</h1>
    <p>
        Hello, <b>{{user}}</b> - <a href="{{logout_url}}">Logout</a>
    </p>
</div>
</body>
</html>

Most of the content is static, which means that it will be rendered as standard HTML exactly as we see it, but one part is dynamic and its content depends on the data passed to the rendering process at runtime. This data is commonly referred to as the template context. What has to be dynamic here is the username of the current user and the link used to log out from the application. The HTML code contains two special elements written in the Jinja2 template syntax, {{user}} and {{logout_url}}, that will be substituted before the final output is produced.
Back in the Python script, we need to add the code that initializes the template engine before the MainHandler class definition:

import os
import jinja2

jinja_env = jinja2.Environment(
    loader=jinja2.FileSystemLoader(os.path.dirname(__file__)))

The environment instance stores the engine configuration and global objects, and it is used to load template instances; in our case, templates are loaded from HTML files on the filesystem in the same directory as the Python script. To load and render our template, we add the following code to the MainHandler.get() method:

class MainHandler(webapp2.RequestHandler):
    def get(self):
        user = users.get_current_user()
        if user is not None:
            logout_url = users.create_logout_url(self.request.uri)
            template_context = {
                'user': user.nickname(),
                'logout_url': logout_url,
            }
            template = jinja_env.get_template('main.html')
            self.response.out.write(
                template.render(template_context))
        else:
            login_url = users.create_login_url(self.request.uri)
            self.redirect(login_url)

Similar to how we get the login URL, the create_logout_url() method provided by the users service returns the absolute URI of the logout procedure, which we assign to the logout_url variable. We then create the template_context dictionary containing the context values we want to pass to the template engine for the rendering process: we assign the nickname of the current user to the user key and the logout URL string to the logout_url key. The get_template() method of the jinja_env instance takes the name of the file containing the HTML code and returns a Jinja2 template object. To obtain the final output, we call the render() method on the template object, passing in the template_context dictionary; its values are accessed by specifying their respective keys in the HTML file with the template syntax elements {{user}} and {{logout_url}}.
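The rendering flow can also be tried standalone, outside App Engine, as long as the Jinja2 library is installed; the template string and context values below are illustrative, not taken from the article:

```python
import jinja2

# Load the template from a string instead of a file, so this snippet
# runs anywhere Jinja2 is available (no filesystem layout needed).
env = jinja2.Environment(loader=jinja2.BaseLoader())
template = env.from_string(
    'Hello, <b>{{user}}</b> - <a href="{{logout_url}}">Logout</a>')

# The template context maps the names used in the template to values.
html = template.render({'user': 'alice', 'logout_url': '/logout'})
print(html)  # Hello, <b>alice</b> - <a href="/logout">Logout</a>
```

Each {{...}} element is replaced with the value stored under the matching key of the context dictionary, which is exactly what happens with {{user}} and {{logout_url}} in main.html.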
Handling forms

The main page of the application is supposed to list all the notes belonging to the current user, but there isn't any way to create such notes at the moment. We need to display a web form on the main page so that users can submit details and create a note. To display a form that collects data and creates notes, we put the following HTML code right below the username and the logout link in the main.html template file:

{% if note_title %}
<p>Title: {{note_title}}</p>
<p>Content: {{note_content}}</p>
{% endif %}

<h4>Add a new note</h4>
<form action="" method="post">
    <div class="form-group">
        <label for="title">Title:</label>
        <input type="text" id="title" name="title" />
    </div>
    <div class="form-group">
        <label for="content">Content:</label>
        <textarea id="content" name="content"></textarea>
    </div>
    <div class="form-group">
        <button type="submit">Save note</button>
    </div>
</form>

Before showing the form, a message is displayed, but only when the template context contains a variable named note_title. To do this, we use an if statement, executed between the {% if note_title %} and {% endif %} delimiters; similar delimiters are used to perform for loops or assign values inside a template. The action property of the form tag is empty; this means that upon form submission, the browser will perform a POST request to the same URL, which in this case is the home page URL.
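The conditional delimiters described above can be exercised in isolation (assuming the Jinja2 library is installed; the template string and values here are illustrative):

```python
import jinja2

# A minimal template using the {% if %} / {% endif %} delimiters.
env = jinja2.Environment(loader=jinja2.BaseLoader())
template = env.from_string('{% if note_title %}Title: {{note_title}}{% endif %}')

without_note = template.render({})                        # condition is false
with_note = template.render({'note_title': 'Groceries'})  # condition is true
print(repr(without_note))  # ''
print(repr(with_note))     # 'Title: Groceries'
```

When note_title is missing from the context, the whole block between the delimiters is omitted from the output, which is why the message only appears after a form submission.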
As our WSGI application maps the home page to the MainHandler class, we need to add a method to this class so that it can handle POST requests:

class MainHandler(webapp2.RequestHandler):
    def get(self):
        user = users.get_current_user()
        if user is not None:
            logout_url = users.create_logout_url(self.request.uri)
            template_context = {
                'user': user.nickname(),
                'logout_url': logout_url,
            }
            template = jinja_env.get_template('main.html')
            self.response.out.write(
                template.render(template_context))
        else:
            login_url = users.create_login_url(self.request.uri)
            self.redirect(login_url)

    def post(self):
        user = users.get_current_user()
        if user is None:
            self.error(401)
            return
        logout_url = users.create_logout_url(self.request.uri)
        template_context = {
            'user': user.nickname(),
            'logout_url': logout_url,
            'note_title': self.request.get('title'),
            'note_content': self.request.get('content'),
        }
        template = jinja_env.get_template('main.html')
        self.response.out.write(
            template.render(template_context))

When the form is submitted, the handler is invoked and the post() method is called. We first check whether a valid user is logged in; if not, we raise an HTTP 401 Unauthorized error and return without serving any content in the response body. Since the HTML template is the same one served by the get() method, we still need to add the logout URL and the username to the context. In this case, we also store the data coming from the HTML form in the context; to access the form data, we call the get() method on the self.request object. The last three lines are boilerplate code to load and render the home page template.
We can move this code into a separate method to avoid duplication:

    def _render_template(self, template_name, context=None):
        if context is None:
            context = {}
        template = jinja_env.get_template(template_name)
        return template.render(context)

In the handler class, we will then use something like this to output the template rendering result:

        self.response.out.write(
            self._render_template('main.html', template_context))

We can try to submit the form and check whether the note title and content are actually displayed above the form.

Summary

Thanks to App Engine, we have already implemented a rich set of features with relatively little effort. We have discovered some more details about the webapp2 framework and its capabilities by implementing a nontrivial request handler. We have learned how to use the App Engine users service to provide user authentication. We have delved into some fundamental details of the Datastore, and now we know how to structure data in grouped entities and how to effectively retrieve data with ancestor queries. In addition, we have created an HTML user interface with the help of the Jinja2 template library, learning how to serve static content such as CSS files.

Resources for Article:

Further resources on this subject:
- Machine Learning in IPython with scikit-learn [Article]
- Introspecting Maya, Python, and PyMEL [Article]
- Driving Visual Analyses with Automobile Data (Python) [Article]

Packt
19 Feb 2010
5 min read

Drupal 6 Performance Optimization Using Views and Panels Caching

Views caching

The Views 2 module allows you to cache your Views data and content, per View. We're going to enable caching on one of our existing Views, and then create a brand new View and set caching for it as well using the test content we just generated. This will show you a nice integration of the Devel functionality with the Views module and how caching works with Views.

Go to your Site building | Views configuration page and you'll see many of your default and custom Views listed. We have a View on this site for our main photo gallery, named photo_gallery in our View listing. Go ahead and click on one of your Views' edit links to get into edit mode for a View. In the Views 2 interface, we'll see tabs for the Default, Page, and/or Block View displays. I'm going to click on my Page tab to see my View's page settings. Under the Basic settings configuration, I'll see a link for Caching. Currently, the Caching link states None, meaning that no caching has been configured for this View.

Click on the None link and select the Time-based radio button. This enables Time-based caching for our View page. Click the Update default display button. The next caching options screen asks you to set the amount of time for both your View Query results and your View Rendered output. Query results refers to how long raw queries should be cached; Rendered output is how long the View's HTML output should be cached. So basically, you can cache both your data and your frontend HTML output. Set them both to the default of 1 hour. You can also set one to a specific time and the other to None; go ahead and tweak these settings to your own requirements. I'm leaving both set to the default of 1 hour. Click on the Update button to save your caching options settings. You are now caching your View. Save your View by clicking on the Save button.
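As a language-neutral illustration of the idea (a Python sketch, not Drupal's actual PHP implementation), time-based caching stores a computed value with a timestamp and recomputes it only after the lifetime expires; Views keeps two such layers, one for raw query results and one for rendered output:

```python
import time

class TimeBasedCache:
    """Toy time-based cache: serve a stored value until its TTL expires."""
    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self.store = {}

    def get(self, key, compute):
        hit = self.store.get(key)
        if hit is not None and time.time() - hit[1] < self.ttl:
            return hit[0]          # still fresh: serve the cached value
        value = compute()          # expired or missing: recompute
        self.store[key] = (value, time.time())
        return value

# Views keeps two layers, each with its own lifetime
# (both set to 1 hour here, as in the recipe above).
query_cache = TimeBasedCache(3600)   # raw query results
output_cache = TimeBasedCache(3600)  # rendered HTML output

rows = query_cache.get('photo_gallery:page', lambda: ['node 1', 'node 2'])
html = output_cache.get(
    'photo_gallery:page',
    lambda: '<ul><li>' + '</li><li>'.join(rows) + '</li></ul>')
print(html)  # <ul><li>node 1</li><li>node 2</li></ul>
```

Within the TTL window, repeated requests skip both the database query and the HTML rendering, which is why the Devel build times drop once caching is enabled.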
The next time you look at your View interface, you should see the caching time noted under your Basic settings; it will say 1 hour/1 hour for this setting. Once you enable Views caching, if you make a change to your View settings and configuration, the results and output of the View may not update while caching is enabled. So, while developing a View, you may want to disable caching by setting it back to None.

To see the performance results of this, you can use the Devel module's functionality again. When you load your View after enabling caching, you should see a decrease in the number of milliseconds (ms) needed to build your Views plugin, data, and handlers. So, if your Views plugin build loaded in 27.1 ms before you enabled caching, you may notice that it changes to something less; for example, in my case it now loads in 2.8 ms. You can immediately see a slight performance increase in your View build.

Let's go ahead and build a brand new View using the test content that we generated with the Devel module and then enable caching for this View as well. Go to your Views admin and follow these steps:

1. Add a new View.
2. Name the View, and add a description and a tag if applicable. Click on Next. I'm going to create a View that filters my blog entries and lists the new blog entries in post-date order using the Devel content I generated.
3. Add a Page display to your new View.
4. Name the page View and give it a title.
5. Give your View an HTML list style.
6. Set the View to display 5 posts and to use a full pager.
7. Set your caching to Time-based (following the instructions above from the first View we edited).
8. Give the View a path.
9. Add a Node: Title field and set the field to be linked to its node.
10. Add a filter in order to filter by Node: Type, and then select Blog entry.
11. Set your Sort criteria to sort by Node: Post date in ascending order by hour.
Your settings should look similar to this:

Save your View by clicking on the Save button. Your new View will be visible at the Page path you gave it and it will also be caching the content and data it presents. Again, if you refresh your View page each time, you should notice that the plugins, data, and handlers build times decrease or stay very similar and consistent in load times. You should also notice that the Devel database queries status is telling you that it's using the cached results and cached output for the View build times and the MySQL statements. You should see the following code sitting below your page content on the View page you are looking at. It will resemble this:

Views plugins build time: 23.509979248 ms
Views data build time: 55.7069778442 ms
Views handlers build time: 1.95503234863 ms

SELECT node.nid AS nid,
  node_data_field_photo_gallery_photo.field_photo_gallery_photo_fid AS node_data_field_photo_gallery_photo_field_photo_gallery_photo_fid,
  node_data_field_photo_gallery_photo.field_photo_gallery_photo_list AS node_data_field_photo_gallery_photo_field_photo_gallery_photo_list,
  node_data_field_photo_gallery_photo.field_photo_gallery_photo_data AS node_data_field_photo_gallery_photo_field_photo_gallery_photo_data,
  node.type AS node_type,
  node.vid AS node_vid,
  node.title AS node_title,
  node.created AS node_created
FROM {node} node
LEFT JOIN {content_type_photo} node_data_field_photo_gallery_photo ON node.vid = node_data_field_photo_gallery_photo.vid
WHERE (node.status <> 0) AND (node.type in ('%s'))
ORDER BY node_created ASC

Used cached results
Used cached output
Packt
16 Dec 2010
3 min read

Creating Skeleton Apps with Coily in Spring Python

Spring Python 1.1
Create powerful and versatile Spring Python applications using pragmatic libraries and useful abstractions

Maximize the use of Spring features in Python and develop impressive Spring Python applications
Explore the versatility of Spring Python by integrating it with frameworks, libraries, and tools
Discover the non-intrusive Spring way of wiring together Python components
Packed with hands-on examples, case studies, and clear explanations for better understanding

(For more resources on this subject, see here.)

Plugin approach of Coily

coily is a Python script designed from the beginning to provide a plugin-based platform for building Spring Python apps. Another important feature is version control of the plugins: developers should not have to worry about installing an out-of-date plugin that was designed for an older version of Spring Python. coily allows different users on a system to have different sets of plugins installed. It also requires no administrative privileges to install a plugin.

Key functions of coily

coily is included in the standard installation of Spring Python. To see the available commands, just ask for help. The following table elaborates these commands.

Required parts of a plugin

A coily plugin closely resembles a Python package with some slight tweaks. This doesn't mean that a plugin is meant to be installed as a Python package; it is only a description of the folder structure. Let's look at the layout of the gen-cherrypy-app plugin as an example. Some parts of this layout are required, and other parts are not.

The top folder is the name of the plugin.
A plugin requires a __init__.py file inside the top directory.
__init__.py must include a __description__ variable. This description is shown when we run the coily --help command.
__init__.py must include a command function, which is either a create or apply function.
create is used when the plugin needs one argument from the user.
apply is used when no argument is needed from the user. Let's look at how gen-cherrypy-app meets each of these requirements. We can already see from the diagram that the top level folder has the same name as our plugin. Inside __init__.py, we can see the following help message defined. __description__ = "plugin to create skeleton CherryPy applications" gen-cherrypy-app is used to create a skeleton application. It needs the user to supply the name of the application it will create. Again, looking inside __init__.py, the following method signature can be found. def create(plugin_path, name) plugin_path is an argument provided to gen-cherrypy-app by coily, which points at the base directory of gen-cherrypy-app. This argument is also provided for plug-ins that use the apply command function. name is the name of the application provided by the user.
Packt
07 Jan 2010
8 min read

CXF architecture

The following figure shows the overall architecture:

Bus

Bus is the backbone of the CXF architecture. The CXF bus is comprised of a Spring-based configuration file, namely cxf.xml, which is loaded upon servlet initialization through SpringBusFactory. It defines a common context for all the endpoints. It wires all the runtime infrastructure components and provides a common application context. The SpringBusFactory scans and loads the relevant configuration files in the META-INF/cxf directory placed in the classpath and accordingly builds the application context. It builds the application context from the following files:

META-INF/cxf/cxf.xml
META-INF/cxf/cxf-extension.xml
META-INF/cxf/cxf-property-editors.xml

These files are part of the installation bundle's core CXF library JAR. Now, we know that CXF internally uses Spring for its configuration. The following XML fragment shows the bus definition in the cxf.xml file:

<bean id="cxf" class="org.apache.cxf.bus.CXFBusImpl" />

The core bus component is CXFBusImpl. The class acts more as an interceptor provider for incoming and outgoing requests to a web service endpoint. These interceptors, once defined, are available to all the endpoints in that context. The cxf.xml file also defines other infrastructure components such as BindingFactoryManager, ConduitFactoryManager, and so on. These components are made available as bus extensions. One can access these infrastructure objects using the getExtension method. These infrastructure components are registered so as to get and update various service endpoint level parameters such as service binding, transport protocol, conduits, and so on.

CXF bus architecture can be overridden, but one must apply caution when overriding the default bus behavior. Since the bus is the core component that loads the CXF runtime, many shared objects are also loaded as part of this runtime. You want to make sure that these objects are loaded when overriding the existing bus implementation.
You can extend the default bus to include your own custom components or service objects such as factory managers. You can also add interceptors to the bus bean. These interceptors defined at the bus level are available to all the endpoints. The following code shows how to create a custom bus:

SpringBusFactory.createBus("mycxf.xml")

The SpringBusFactory class is used to create a bus. You can complement or overwrite the bean definitions that the original cxf.xml file would use. For CXF to load the mycxf.xml file, it has to be in the classpath, or you can use a factory method to load the file. The following code illustrates the use of interceptors at the bus level:

<bean id="cxf" class="org.apache.cxf.bus.spring.SpringBusImpl">
    <property name="outInterceptors">
        <list>
            <ref bean="myLoggingInterceptor"/>
        </list>
    </property>
</bean>
<bean id="myLoggingInterceptor" class="org.mycompany.com.cxf.logging.LoggingInterceptor">
    ...
</bean>

The preceding bus definition adds the logging interceptor that will perform logging for all outgoing messages.

Frontend

CXF provides the concept of frontend modeling, which lets you create web services using different frontend APIs. The APIs let you create a web service using simple factory beans and JAX-WS implementation. It also lets you create dynamic web service clients. The primary frontend supported by CXF is JAX-WS.

JAX-WS

JAX-WS is a specification that establishes the semantics to develop, publish, and consume web services. JAX-WS simplifies web service development. It defines Java-based APIs that ease the development and deployment of web services. The specification supports WS-I Basic Profile 1.1, which addresses web service interoperability. It effectively means a web service can be invoked or consumed by a client written in any language. JAX-WS also builds on standards such as JAXB and SAAJ. CXF provides support for the complete JAX-WS stack.
JAXB provides data binding capabilities by providing a convenient way to map XML schema to a representation in Java code. JAXB shields developers from XML and SOAP parsing by handling the conversion between the XML messages carried in SOAP and Java code. The JAXB specification defines the binding between Java and XML Schema. SAAJ provides a standard way of dealing with XML attachments contained in a SOAP message.

JAX-WS also speeds up web service development by providing a library of annotations to turn plain old Java classes into web services, and specifies a detailed mapping from a service defined in WSDL to the Java classes that will implement that service. Any complex types defined in WSDL are mapped into Java classes following the mapping defined by the JAXB specification.

As discussed earlier, two approaches for web service development exist: Code-first and Contract-first. With JAX-WS, you can perform web service development using either approach, depending on the nature of the application.

With the Code-first approach, you start by developing a Java class and interface and annotating the same as a web service. The approach is particularly useful where Java implementations are already available and you need to expose the implementations as services. You typically create a Service Endpoint Interface (SEI) that defines the service methods and an implementation class that implements the SEI methods. The consumer of a web service uses the SEI to invoke the service functions. The SEI directly corresponds to a wsdl:portType element. The methods defined by the SEI correspond to the wsdl:operation element.

@WebService
public interface OrderProcess {
    String processOrder(Order order);
}

JAX-WS makes use of annotations to convert an SEI or a Java class to a web service. In the above example, the @WebService annotation defined above the interface declaration signifies an interface as a web service interface or Service Endpoint Interface.
In the Contract-first approach, you start with the existing WSDL contract and generate Java classes to implement the service. The advantage is that you are sure about what to expose as a service, since you define the appropriate WSDL contract first. The contract definitions can also be made consistent with respect to data types so that they can be easily converted into Java objects without any portability issue.

WSDL contains different elements that can be directly mapped to the Java classes that implement the service. For example, the wsdl:portType element is directly mapped to the SEI, type elements are mapped to Java class types through the use of Java Architecture for XML Binding (JAXB), and the wsdl:service element is mapped to a Java class that is used by a consumer to access the web service.

The WSDL2Java tool can be used to generate a web service from WSDL. It has various options to generate the SEI and the implementation web service class. As a developer, you need to provide the method implementation for the generated class. If the WSDL includes custom XML Schema types, then they are converted into equivalent Java classes.

Simple frontend

Apart from the JAX-WS frontend, CXF also supports what is known as the 'simple frontend'. The simple frontend provides simple components or Java classes that use reflection to build and publish web services. It is simple because we do not use any annotation to create web services. In JAX-WS, we have to annotate a Java class to denote it as a web service and use tools to convert between a Java object and WSDL. The simple frontend uses factory components to create a service and the client. It does so by using the Java reflection API.
The following code shows a web service created using the simple frontend:

// Build and publish the service
OrderProcessImpl orderProcessImpl = new OrderProcessImpl();
ServerFactoryBean svrFactory = new ServerFactoryBean();
svrFactory.setServiceClass(OrderProcess.class);
svrFactory.setAddress("http://localhost:8080/OrderProcess");
svrFactory.setServiceBean(orderProcessImpl);
svrFactory.create();

Messaging and Interceptors

One of the important elements of CXF architecture is the Interceptor component. Interceptors are components that intercept the messages exchanged or passed between web service clients and server components. In CXF, this is implemented through the concept of interceptor chains. The concept of interceptor chaining is the core functionality of the CXF runtime.

The interceptors act on the messages which are sent and received from the web service, and are processed in chains. Each interceptor in a chain is configurable, and the user has the ability to control its execution.

The core of the framework is the Interceptor interface. It defines two abstract methods: handleMessage and handleFault. Each of the methods takes an object of type Message as a parameter. A developer implements handleMessage to process or act upon the message. The handleFault method is implemented to handle the error condition. Interceptors are usually processed in chains, with every interceptor in the chain performing some processing on the message in sequence, and the chain moves forward. Whenever an error condition arises, handleFault is invoked on each interceptor, and the chain unwinds or moves backwards.

Interceptors are often organized or grouped into phases. Interceptors providing common functionality can be grouped into one phase. Each phase performs specific message processing. Each phase is then added to the interceptor chain. The chain, therefore, is a list of ordered interceptor phases. The chain can be created for both inbound and outbound messages.
A typical web service endpoint will have three interceptor chains:

Inbound messages chain
Outbound messages chain
Error messages chain

There are built-in interceptors such as logging, security, and so on, and developers can also choose to create custom interceptors.
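The chain behaviour described above — forward handleMessage calls, then backward handleFault unwinding when an error arises — can be sketched in a few lines. This is a language-neutral Python illustration of the pattern only, not CXF's actual Java API:

```python
class Interceptor:
    """One link in a chain: processes a message, or cleans up on fault."""
    def __init__(self, name, log, fail=False):
        self.name, self.log, self.fail = name, log, fail

    def handle_message(self, message):
        self.log.append("msg:" + self.name)
        if self.fail:
            raise RuntimeError("fault in " + self.name)

    def handle_fault(self, message):
        self.log.append("fault:" + self.name)

def run_chain(chain, message):
    invoked = []
    try:
        for icpt in chain:               # the chain moves forward
            icpt.handle_message(message)
            invoked.append(icpt)
    except RuntimeError:
        for icpt in reversed(invoked):   # on error, the chain unwinds backwards
            icpt.handle_fault(message)
```

Running a chain of three interceptors where the second one faults shows the first interceptor's handle_fault being called as the chain unwinds, while the third is never reached.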
Packt
03 Mar 2011
11 min read

BlackBerry Enterprise Server 5: Activating Devices and Users

BlackBerry Enterprise Server 5 Implementation Guide
Simplify the implementation of BlackBerry Enterprise Server in your corporate environment

Install, configure, and manage a BlackBerry Enterprise Server
Use Microsoft Internet Explorer along with ActiveX plugins to control and administer the BES with the help of the BlackBerry Administration Service
Troubleshoot, monitor, and offer high availability of the BES in your organization
Updated to the latest version – BlackBerry Enterprise Server 5 Implementation Guide

BlackBerry Enterprise users must already exist on the Microsoft Exchange Server. As with the administrative users, to make tasks and management of device users easier, we can create groups, add users to the groups, and then assign policies to the whole group rather than to individual users. Again, users can be part of multiple groups, and we will see how the policies are affected and applied when users are in more than one group.

Creating users on the BES 5.0

We will go through the following steps to create users on the BES 5.0:

Within the BlackBerry Administration Service, navigate to the BlackBerry solution management section.
Expand User and select Create a user.
We can now search for the user we want to add, either by typing the user's display name or e-mail address. Enter the search criteria and select Search.
We then have the ability to add the user to any group we have already created; in our case we only have an administrative group.
We have three options for how the user will be created, with regards to how the device for the user will be activated:

With activation password: This will allow us to set an activation password, along with the expiry time of the activation password, for the user
With generated activation password: The system will autogenerate an activation password, based on the settings we have made in our BlackBerry Server (shown further on in this article)
Without activation password: This will create just a user, who will have no pre-configured method for assigning a device

For this example, we will select Create a user without activation password. Once we have covered the theory and explored the settings within this article regarding activating devices, we will return to the other two options.

We can create a user even if the search results do not display the user; generally this occurs when the Exchange Server has not yet synched the user account to the BlackBerry Configuration Database, typically when new users are added. This method is shown in Lab.

Groups can be created to help manage users within our network and simplify tasks. Next we are going to look at creating a group that will house users, all belonging to our Sales Team.

Creating a user-based group

To create a user-based group, go through the following steps:

Expand Group and select Create a group. In the Name field enter Sales Team, and click on Save.
Select View group list.
Click on Sales Team.
Select Add users to group membership.
Select the user we have just created by placing a tick in the checkbox next to the user's name, and click on Add to group membership.
We can click on View group membership to confirm the addition of our user to the group.

We will be adding more users to this group later on in the Lab, when we import the users via a text file.
Preparing to distribute a BlackBerry device

Before we can distribute a BlackBerry device to a user using various methods, we need to address a few more settings that will affect how the device will initially be populated. By default, when a device is activated for a user, the BlackBerry Enterprise Server will prepopulate/synchronize the BlackBerry device with the headers of 200 e-mail messages from the previous five days. We can alter these settings so that headers and the full body of the e-mail message can be synched to the device, up to a maximum of 750 messages over the past 14 days.

In the BlackBerry Administration Service, under Servers and components, expand BlackBerry Domain | Component view | Email and select the BES instance.
On the right-hand pane select the Messaging tab.
Scroll down and select Edit instance.
To ensure that both headers and the full e-mail message are populated to the BlackBerry device, in the Message prepopulation settings, change the Send headers only drop-down to False.
Change the Prepopulation by message age to a maximum of 14 days, by entering 14.
We can change the number of e-mails that are prepopulated on the device by changing the number of Prepopulation by message count, again a maximum of 750.

By setting the preceding two values to zero, we can ensure that no previous e-mails are populated on the device. Within the same tab, we can set our Messaging options, which we will examine next. We have the ability to set:

A Prepended disclaimer (goes before the body of the message)
An Appended disclaimer (goes after the user's signature)

We can enter the text of our disclaimer in the space provided, then choose what happens if there is a conflict. The majority of these settings can also be set at a user level (settings made on the server override any settings made by the user, which is why it is best practice to have these set at the server level), as we will see later in Lab.
If a user setting exists, then we need to tell the server how to deal with a potential conflict. The default setting is to use the user's disclaimer first, then the one set on the server. Bear in mind that the default setting will show both the user's disclaimer and then the server disclaimer on the e-mail message.

Wireless message reconciliation should be set to True: the BlackBerry Enterprise Server synchronizes e-mail message status changes between the BlackBerry device and Outlook on the user's computer. The BES reconciles e-mail messages that are moved from one folder to another, deleted messages, and changes in the status of read and unread messages. By default the BES performs a reconcile every 30 minutes; the reconcile is in effect checking that, for a particular user, Outlook and the BlackBerry have the same information in their databases. If this is set to False, then the above-mentioned changes will only take effect when the device is plugged in to Desktop Manager or Web Desktop Access.

We have the option of setting the maximum size for a single attachment or multiple attachments in KB. We can also specify the maximum download size for a single attachment.

Rich content turned on set to True allows e-mail messages that contain HTML and rich content to be delivered to BlackBerry devices; having it set to False means all messages are delivered in plain text, which saves a lot of resources on the server(s) housing the BES components. We can apply the same principle to downloading inline images.

Remote search turned on set to True will allow users to search the Microsoft Exchange server for e-mails from their BlackBerry devices.

In BES 5, we have a new feature that allows the user on his device, prior to sending out a meeting request, to check whether a potential participant is available at that time.
(Microsoft Exchange 2007 users need to make some changes to support this feature; see the BlackBerry website for further details on the hot fixes required.) Free busy lookup turned on is set to True if you want the above service. If system resources are being utilized heavily, this feature can be turned off by selecting False.

Hard deletes reconciliation allows users to delete e-mail messages permanently in Microsoft Outlook (by holding the Shift + Del keys). You can also configure the BES to remove permanently deleted messages from the user's BlackBerry device. You must have wireless reconciliation turned on for this to work.

Now that we have prepared our messaging environment, we are ready to activate our first user.

Activating users

When it comes to activating users, we have five options to choose from:

BlackBerry Administration Service: We can connect the device to a computer and log on to the BAS to assign and activate a device for a user
Over the Wireless Network (OTA): We can activate a BlackBerry to join our BES without needing it to be physically connected to our organization
Over the LAN: A user who has BlackBerry Desktop Manager running on his or her computer in the corporate LAN can activate the device by plugging the device into his or her machine and running the BlackBerry Desktop Manager
BlackBerry Web Desktop Manager: This is a new feature of BES 5 that allows users to connect the device to a computer and log in to the BlackBerry Web Desktop Manager to activate the device, with no other software required
Over your corporate organization's Wi-Fi network: You can activate Wi-Fi-enabled BlackBerry devices over your corporate Wi-Fi network

Before we look at each of the options available to us, let's examine what enterprise activation is and how it works, along with its settings; this will also help us choose the best option for activating devices for users and avoid errors during the enterprise activation.
Understanding enterprise activation

To allow a user's device to join the BlackBerry Enterprise Server, we activate the device for the user when we create the user and assign the user an activation password. The user will enter his or her corporate e-mail address and the activation password into the device in the Enterprise Activation screen, which can be reached on the device by going to Options | Advanced Options | Enterprise Activation.

Once the user types in the information and selects Activate, the BlackBerry device will generate an ETP.dat message. It is important that if you have any virus scanning or e-mail sweeping systems running in your organization, you ensure that this filename and extension is added to the safe list. Please note that the ETP.dat message is only generated when we activate a device over the air. If we use other methods where the device is plugged in via a cable to activate it, NO ETP.dat file is generated.

The ETP.dat message is then sent to the user's mailbox on the Exchange Server over the wireless network. To ensure that the activation occurs smoothly, make sure the device has good battery life and that the wireless coverage on the device is less than 100db. This can be checked by pressing the following combination on the device: Alt + NMLL.

The BlackBerry Enterprise Server then confirms that the activation password is correct, generates a new permanent encryption key, and sends it to the BlackBerry device. The BlackBerry Policy service then receives a request to send out an IT policy. Service books control the wireless synchronization data. Data is now transferred between the BlackBerry device and the user's mailbox using a slow synch process. The information that is sent to the BlackBerry device is stored in databases on the device, and each application database is shown with a percentage completed next to it during the slow synch.
Once the activation is complete, a message will pop up on the device stating 'Activation complete'. The device is now fully in synch with the user's mailbox and is ready to send and receive data. Now that we have a general grasp of the device activation process, we are going to look at the five options mentioned previously in more detail.

Activating a device using BlackBerry Administration Service

This method provides a higher level of control over the device, but is more labor-intensive for the administrator, as it requires no user interaction:

Connect the device to a computer that can access the BlackBerry Administration Service, and log in to the service using an account that has permissions to assign devices.
Under the Devices section, expand Attached devices.
Click on Manage current device and then select Assign current device.
This will then prompt you to search for the user's account that we want to assign the device to. Once we have found the user, we can click on User, then select Associate user, and finally click on Assign current device.
Packt
20 Aug 2013
5 min read

Highcharts

(For more resources related to this topic, see here.) Creating a line chart with a time axis and two Y axes We will now create the code for this chart: You start the creation of your chart by implementing the constructor of your Highcharts' chart: var chart = $('#myFirstChartContainer').highcharts({}); We will now set the different sections inside the constructor. We start by the chart section. Since we'll be creating a line chart, we define the type element with the value line. Then, we implement the zoom feature by setting the zoomType element. You can set the value to x, y, or xy depending on which axes you want to be able to zoom. For our chart, we will implement the possibility to zoom on the x-axis: chart: {type: 'line',zoomType: 'x'}, We define the title of our chart: title: {text: 'Energy consumption linked to the temperature'}, Now, we create the x axis. We set the type to datetime because we are using time data, and we remove the title by setting the text to null. You need to set a null value in order to disable the title of the xAxis: xAxis: {type: 'datetime',title: {text: null}}, We then configure the Y axes. As defined, we add two Y axes with the titles Temperature and Electricity consumed (in KWh), which we override with a minimum value of 0. We set the opposite parameter to true for the second axis in order to have the second y axis on the right side: yAxis: [{title: {text: 'Temperature'},min:0},{title: {text: 'Energy consumed (in KWh)'},opposite:true,min:0}], We will now customize the tooltip section. We use the crosshairs option in order to have a line for our tooltip that we will use to follow values of both series. Then, we set the shared value to true in order to have values of both series on the same tooltip. tooltip: {crosshairs: true,shared: true}, Further, we set the series section. For the datetime axes, you can set your series section by using two different ways. 
You can use the first way when your data follow a regular time interval and the second way when your data don't necessarily follow a regular time interval. We will use both the ways by setting the two series with two different options. The first series follows a regular interval. For this series, we set the pointInterval parameter where we define the data interval in milliseconds. For our chart, we set an interval of one day. We set the pointStart parameter with the date of the first value. We then set the data section with our values. The tooltip section is set with the valueSuffix element, where we define the suffix to be added after the value inside our tool tip. We set our yAxis element with the axis we want to associate with our series. Because we want to set this series to the first axis, we set the value to 0(zero). For the second series, we will use the second way because our data is not necessarily following the regular intervals. But you can also use this way, even if your data follows a regular interval. We set our data by couple, where the first element represents the date and the second element represents the value. We also override the tooltip section of the second series. We then set the yAxis element with the value 1 because we want to associate this series to the second axis. For your chart, you can also set your date values with a timestamp value instead of using the JavaScript function Date.UTC. 
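If you want to check the equivalence of the two ways of supplying time data, the arithmetic can be reproduced outside JavaScript. The Python sketch below mimics Date.UTC in producing millisecond timestamps (the date_utc helper is invented for this example; note that JavaScript months are zero-based, so month 0 is January) and steps a one-day pointInterval from the pointStart used in the chart:

```python
from datetime import datetime, timezone

def date_utc(year, js_month, day):
    """Millisecond UTC timestamp, mimicking JavaScript's Date.UTC (zero-based month)."""
    dt = datetime(year, js_month + 1, day, tzinfo=timezone.utc)
    return int(dt.timestamp() * 1000)

point_start = date_utc(2013, 0, 1)      # 1 January 2013, as in the Temperature series
point_interval = 24 * 3600 * 1000       # one day, in milliseconds
# The seven regular-interval points Highcharts derives for the first series
timestamps = [point_start + i * point_interval for i in range(7)]
```

Each derived timestamp matches what you would pass explicitly as the first element of a [date, value] pair in the second series.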
series: [{
    name: 'Temperature',
    pointInterval: 24 * 3600 * 1000,
    pointStart: Date.UTC(2013, 0, 1),
    data: [17.5, 16.2, 16.1, 16.1, 15.9, 15.8, 16.2],
    tooltip: {
        valueSuffix: ' °C'
    },
    yAxis: 0
}, {
    name: 'Electricity consumption',
    data: [
        [Date.UTC(2013, 0, 1), 8.1],
        [Date.UTC(2013, 0, 2), 6.2],
        [Date.UTC(2013, 0, 3), 7.3],
        [Date.UTC(2013, 0, 5), 7.1],
        [Date.UTC(2013, 0, 6), 12.3],
        [Date.UTC(2013, 0, 7), 10.2]
    ],
    tooltip: {
        valueSuffix: ' KWh'
    },
    yAxis: 1
}]

You should have this as the final code:

$(function () {
    var chart = $('#myFirstChartContainer').highcharts({
        chart: {
            type: 'line',
            zoomType: 'x'
        },
        title: {
            text: 'Energy consumption linked to the temperature'
        },
        xAxis: {
            type: 'datetime',
            title: {
                text: null
            }
        },
        yAxis: [{
            title: {
                text: 'Temperature'
            },
            min: 0
        }, {
            title: {
                text: 'Energy consumed (in KWh)'
            },
            opposite: true,
            min: 0
        }],
        tooltip: {
            crosshairs: true,
            shared: true
        },
        series: [{
            name: 'Temperature',
            pointInterval: 24 * 3600 * 1000,
            pointStart: Date.UTC(2013, 0, 1),
            data: [17.5, 16.2, 16.1, 16.1, 15.9, 15.8, 16.2],
            tooltip: {
                valueSuffix: ' °C'
            },
            yAxis: 0
        }, {
            name: 'Electricity consumption',
            data: [
                [Date.UTC(2013, 0, 1), 8.1],
                [Date.UTC(2013, 0, 2), 6.2],
                [Date.UTC(2013, 0, 3), 7.3],
                [Date.UTC(2013, 0, 5), 7.1],
                [Date.UTC(2013, 0, 6), 12.3],
                [Date.UTC(2013, 0, 7), 10.2]
            ],
            tooltip: {
                valueSuffix: ' KWh'
            },
            yAxis: 1
        }]
    });
});

You should have the expected result as shown in the following screenshot:

Summary

In this article, we learned how to perform a task with the most important features of Highcharts. We created a line chart with a time axis and two Y axes, and saw that there is a wide variety of things that you can do with it. We also covered the most commonly performed tasks and most commonly used features in Highcharts.

Resources for Article:

Further resources on this subject:
Converting tables into graphs (Advanced) [Article]
Line, Area, and Scatter Charts [Article]
Data sources for the Charts [Article]
FuelPHP

Packt
15 Nov 2013
11 min read
(For more resources related to this topic, see here.) Since it is community-driven, everyone is in an equal position to spot bugs, provide fixes, or add new features to the framework. This has led to the creation of features such as the new temporal ORM (Object Relation Mapper), which is a first for any PHP-based ORM. It also means that everyone can help build tools that make development easier, more straightforward, and quicker. The framework is lightweight and allows developers to load only what they need: a configuration over convention approach. Instead of enforcing conventions, they act as recommendations and best practices. This allows new developers to join a project and get up to speed more quickly. It also helps when we want to find extra team members for projects.

A brief history of FuelPHP

FuelPHP started out with the goal of adopting the best practices from other frameworks to form a thoroughly modern starting point, one which makes full use of PHP version 5.3 features, such as namespaces. It has little in the way of the legacy and compatibility issues that can affect older frameworks. The framework was started in 2010 by Dan Horrigan. He was joined by Phil Sturgeon, Jelmer Schreuder, Harro Verton, and Frank de Jonge. FuelPHP was a break from other frameworks such as CodeIgniter, which was basically still a PHP 4 framework. This break allowed for the creation of a more modern framework for PHP 5.3, and it brings together decades of experience from other languages and frameworks, such as Ruby on Rails and Kohana. After a period of community development and testing, version 1.0 of the FuelPHP framework was released in July 2011. This marked a version ready for use on production sites and the start of the growth of the community. The community provides periodic releases (at the time of writing, it is up to version 1.7) with a clear roadmap (http://fuelphp.com/roadmap) of features to be added. This also includes a good guide to the progress made to date.
The development of FuelPHP is an open process and all the code is hosted on GitHub at https://github.com/fuel/fuel; the main core packages can be found in other repositories on the Fuel GitHub account, and a full list of these can be found at https://github.com/fuel/.

Features of FuelPHP

Using bespoke PHP or a custom-developed framework could give you greater performance, but FuelPHP provides many features, good documentation, and a great community. The following sections describe some of the most useful features.

(H)MVC

Although FuelPHP is a Model-View-Controller (MVC) framework, it was built to support the HMVC variant of MVC. Hierarchical Model-View-Controller (HMVC) is a way of separating logic and then reusing the controller logic in multiple places. This means that when a web page is generated using a theme or a template section, it can be split into multiple sections or widgets. Using this approach, it is possible to reuse components or functionality throughout a project or in multiple projects. In addition to the usual MVC structure, FuelPHP allows the use of presentation models (ViewModels). These are a powerful layer that sits between the controller and the views, allowing for a smaller controller while still separating the view logic from both the controller and the views. If this isn't enough, FuelPHP also supports a router-based approach, where you can route directly to a closure that then handles the execution of the request for the input URI.

Modular and extendable

The core of FuelPHP has been designed so that it can be extended without changing any code in the core itself. It introduces the notion of packages: self-contained pieces of functionality that can be shared between projects and people. Like the core, in newer versions of FuelPHP these can be installed via the Composer tool. Just like packages, functionality can also be divided into modules. For example, a full user-authentication module can be created to handle user actions, such as registration.
Modules can include both logic and views, and they can be shared between projects. The main difference between packages and modules is that packages can extend the core functionality but are not routable, while modules are routable.

Security

Everyone wants their applications to be as secure as possible; to this end, FuelPHP handles some of the basics for you. Views in FuelPHP will encode all output to ensure that it is secure and capable of avoiding cross-site scripting (XSS) attacks. This behavior can be overridden, or the output can be cleaned with the included htmLawed library. The framework also supports cross-site request forgery (CSRF) prevention with tokens, input filtering, and a query builder that helps prevent SQL injection attacks. PHPSecLib is used to offer some of the security features in the framework.

Oil – the power of the command line

If you are familiar with CakePHP, the Zend Framework, or Ruby on Rails, then you will be comfortable with FuelPHP Oil. It is the command-line utility at the heart of FuelPHP, designed to speed up development and improve efficiency. It also helps with testing and debugging. Although not essential, it proves indispensable during development. Oil provides a quick way to perform code generation, scaffolding, running database migrations, debugging, and cron-like tasks for background operations. It can also be used for custom tasks and background processes. Oil is a package and can be found at https://github.com/fuel/oil.

ORM

FuelPHP also comes with an Object Relation Mapper (ORM) package that helps in working with various databases through an object-oriented approach. It is relatively lightweight and is not supposed to replace more complex ORMs such as Doctrine or Propel. The ORM supports data relations such as:

belongs-to
has-one
has-many
many-to-many

Another nice feature is cascading deletions; in this case, the ORM will delete all the data associated with a single entry.
The ORM package is available separately from FuelPHP and is hosted on GitHub at https://github.com/fuel/orm.

Base controller classes and model classes

FuelPHP includes several classes to give you a head start on projects. These include controllers that help with templates, one for constructing RESTful APIs, and another that combines both templates and RESTful APIs. On the model side, base classes include CRUD (Create, Read, Update, and Delete) operations. There is a model for soft deletion of records, one for nested sets, and lastly a temporal model, which is an easy way of keeping revisions of data.

The authentication package

The authentication framework gives a good basis for user authentication and login functionality. It can be extended using drivers for new authentication methods. Some of the basics, such as groups, basic ACL functions, and password hashing, can be handled directly in the authentication framework. Although the authentication package is included when installing FuelPHP, it can be upgraded separately from the rest of the application. The code can be obtained from https://github.com/fuel/auth.

Template parsers

The parser package makes it even easier to separate logic from views, instead of embedding basic PHP in the views. FuelPHP supports many template and markup languages, such as Twig, Markdown, Smarty, and HTML Abstraction Markup Language (Haml).

Documentation

Although not strictly a feature of the framework itself, the documentation for FuelPHP is one of the best available. It is kept up to date for each release and can be found at http://fuelphp.com/docs/.

What to look forward to in Version 2.0

Although this book focuses on FuelPHP 1.6 and newer, it is worth looking forward to the next major release of the framework. It brings significant improvements but also makes some changes to the way the framework functions.
Global scope and moving to dependency injection

One of the nice features of FuelPHP is the global scope that allows easy static syntax, with instances created when needed. One of the biggest changes in Version 2 is the move away from static syntax and globally scoped instances. The framework used the Multiton design pattern rather than the Singleton design pattern; now, the majority of Multitons will be replaced with the Dependency Injection Container (DiC) design pattern, although this depends on the class in question. The reason for these changes is to allow the unit testing of core files and to dynamically swap and/or extend classes depending upon the needs of the application. The move to dependency injection will allow all the core functionality to be tested in isolation. Before detailing the next feature, let's run through the design patterns in more detail.

Singleton

Ensures that a class has only a single instance and provides a global point of access to it. The thinking is that a single instance of a class or object can be more efficient, but it can add unnecessary restrictions to classes that may be better served using a different design pattern.

Multiton

This is similar to the Singleton pattern, but expands upon it to include a way of managing a map of named instances as key-value pairs. So instead of having a single instance of a class or object, this design pattern ensures that there is a single instance for each key. The Multiton is often known as a registry of singletons.

Dependency injection container

This design pattern aims to remove hard-coded dependencies and makes it possible to change them at either runtime or compile time. One example is ensuring that dependencies have default values while also allowing them to be overridden, or allowing other objects to be passed into a class for manipulation. It allows mock objects to be used while testing functionality.

Coding standards

One of the far-reaching changes will be the difference in coding standards.
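FuelPHP implements these patterns in PHP, but the ideas are language-neutral. Purely as an illustration (the Connection class and the forge function are invented for this sketch, loosely echoing FuelPHP's forge() naming convention; they are not FuelPHP APIs), a Multiton, that is, a registry of singletons, looks like this in JavaScript:

```javascript
// A Multiton: one shared instance per key, created lazily on first request.
class Connection {
  constructor(name) {
    this.name = name;
  }
}

// The registry of named instances.
const instances = new Map();

// forge() returns the single instance registered under a given key,
// creating it on first use.
function forge(name = 'default') {
  if (!instances.has(name)) {
    instances.set(name, new Connection(name));
  }
  return instances.get(name);
}

// Repeated calls with the same key yield the same object...
console.log(forge('db') === forge('db'));      // true
// ...while different keys map to independent instances.
console.log(forge('db') === forge('cache'));   // false
```

Replacing the internal new Connection(name) call with a factory supplied from outside is essentially the step from a Multiton toward the dependency injection container described above, which is what makes the Version 2 classes testable in isolation.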
FuelPHP Version 2.0 will conform to both PSR-0 and PSR-1. This allows a more standard autoloading mechanism and the ability to use Composer. Although Composer compatibility was introduced in Version 1.5, this move to the PSRs is for better consistency. It means that method names will follow the camelCase convention rather than the current snake_case names. Although a simple change, this is likely to have a large effect on existing projects and APIs. As other PHP frameworks make a similar move to more standardized coding styles, there will be more opportunities to reuse functionality from other frameworks.

Package management and modularization

Package management for other languages such as Ruby (and Ruby on Rails) has made sharing pieces of code and functionality easy and commonplace. The PHP world is much larger, and this same sharing of functionality is not as common. PHP Extension and Application Repository (PEAR) was a precursor of most package managers. It is a framework and distribution system for reusable PHP components. Although infinitely useful, it is not well supported by the more popular PHP frameworks. Starting with FuelPHP 1.6 and leading into FuelPHP 2.0, dependency management is possible through Composer (http://getcomposer.org). This deals not only with single packages, but also with their dependencies. It allows projects to be consistently set up with known versions of the libraries required by each project. This helps not only with development, but also with the testability and maintainability of the project, and it protects against API changes. The core of FuelPHP and other modules will be installed via Composer, and there will be a gradual migration of some Version 1 packages.

Backwards compatibility

A legacy package will be released for FuelPHP that will provide aliases for the function names changed as part of the new coding standards.
It will also allow the current style of static function calls to continue working, while making it possible to properly unit test the core functionality.

Speed boosts

Although slower during the initial alpha phases, Version 2.0 is shaping up to be faster than Version 1.0. Currently, the beta version (at the time of writing) is 7 percent faster while requiring 8 percent less memory. This might not sound like much, but it can equate to a large saving when running a large website over multiple servers. These figures may improve further in the final release of Version 2.0, after the remaining optimizations are complete.

Summary

We now know a little more about the history of FuelPHP and some of its useful features, such as the ORM, authentication, modules, (H)MVC, and Oil (the command-line interface). We have also listed some useful links, including the official API documentation (http://fuelphp.com/docs/) and the FuelPHP home page (http://fuelphp.com). This article also touched upon some of the new features and changes due in Version 2.0 of FuelPHP.

Resources for Article:

Further resources on this subject:
Installing PHP-Nuke [Article]
Installing phpMyAdmin [Article]
Integrating phpList 2 with Drupal [Article]

Tips for Deploying Sakai

Packt
19 Jul 2011
10 min read
Sakai CLE Courseware Management: The Official Guide

The benefits of knowing that frameworks exist

Sakai is built on top of numerous third-party open source libraries and frameworks. Why write code for converting XML text files to Java objects, or for connecting to and managing databases, when others have specialized in these technical problems and found appropriate and consistent solutions? This reuse of code saves effort and decreases the complexity of creating new functionality. Using third-party frameworks has other benefits as well; you can choose the best from a series of external libraries, increasing the quality of your own product. The external frameworks have their own communities, who test them actively. Outsourcing generic requirements, such as the rudiments of generating indexes for searching, allows the Sakai community to concentrate on higher-level goals, such as building new tools. For developers, but also for course instructors and system administrators, it is useful background to know, roughly, what the underlying frameworks do:

For a developer, it makes sense to look at reuse first. Why re-invent the wheel? Why write with external framework X for manipulating XML files when other developers have already extensively tried, tested, and are running framework Y? Knowing what others have done saves time. This knowledge is especially handy for new-to-Sakai developers, who could be tempted to write from scratch.
For the system administrator, each framework has its own strengths, weaknesses, and terminology. Understanding the terminology and technologies gives you a head start in debugging glitches and communicating with the developers.
For a manager, knowing that Sakai has chosen solid and well-respected open source libraries should help influence buying decisions in favor of this platform.
For the course instructor, knowing which frameworks exist and what their potential is helps inform the debate about adding interesting new features.
Knowing what Sakai uses and what is possible sharpens the instructors' focus and the ability to define realistic requirements.
For the software engineering student, Sakai represents a collection of best practices and frameworks that will make the student more saleable in the labor market.

Using the third-party frameworks

This section details the frameworks that Sakai is heavily dependent on: Spring (http://www.springsource.org/), Hibernate (http://www.hibernate.org/), and numerous Apache projects (http://www.apache.org/). Generally, Java application builders understand these frameworks, which makes it relatively easy to hire programmers with experience. All the projects are open source, and their use does not clash with Sakai's open source license (http://www.opensource.org/licenses/ecl2.php).

The benefit of using Spring

Spring is a tightly architected set of frameworks designed to support the main goals of building modern business applications. Spring has a broad set of abilities, from connecting to databases, to transactions, managing business logic, validation, security, and remote access. It fully supports the most modern architectural design patterns. The framework takes away a lot of drudgery for a programmer and enables pieces of code to be plugged in or removed by editing XML configuration files rather than refactoring the raw code base itself. To see this in practice, consider the user provider within Sakai. When a user logs in, you may want to validate the user's credentials using a piece of code that connects to a directory service such as LDAP, or replace that code with another piece that gets credentials from an external database or even reads them from a text file. This flexibility comes from Sakai's services relying on Spring: you can hand (the term is inject) the wanted code to a service manager, which then calls the code when needed.
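In Sakai this wiring is done in Java with Spring's XML configuration; the following JavaScript sketch is only an illustration of the injection idea (makeUserDirectoryService and the provider objects are invented names for this example, not Sakai or Spring APIs):

```javascript
// The service manager receives its user provider from outside instead of
// constructing one itself -- that is the whole trick of injection.
function makeUserDirectoryService(provider) {
  return {
    authenticate(user, pass) {
      return provider.authenticate(user, pass);
    }
  };
}

// Two interchangeable providers. In Sakai these would be, for example,
// an LDAP-backed provider and a database-backed one.
const ldapProvider = {
  authenticate: (u, p) => u === 'admin' && p === 'secret'
};
const fileProvider = {
  authenticate: () => false
};

// Swapping behaviour is a configuration change, not a code change.
const service = makeUserDirectoryService(ldapProvider);
console.log(service.authenticate('admin', 'secret')); // true

const service2 = makeUserDirectoryService(fileProvider);
console.log(service2.authenticate('admin', 'secret')); // false
```

Because the service receives its provider from outside, a test can pass in a mock provider, which is exactly the testability benefit that Spring wiring brings to Sakai's service managers.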
In Sakai terminology, within a running application, a service manager manages services for a particular type of data. For example, a course service manager allows programmers to add, modify, or delete courses, and a user service manager does the same for users. Spring is responsible for deciding which pieces of code it injects into which service manager, so developers do not need to program the heavy lifting, only the configuration. The advantage is that later, as part of adapting Sakai to a specific organization, system administrators can also reconfigure authentication or many other services to suit local preferences, without recompilation. Spring abstracts away the underlying differences between databases. This allows you to program once and run against MySQL, Oracle, and so on, without taking the databases' differences into account. Spring can sit on top of Hibernate and more limited frameworks, such as JDBC (yet another standard for connecting to databases). This adaptability gives architects more freedom to change and refactor (the process of changing the structure of the code to improve it) without affecting other parts of the code. As Sakai grows in code size, Spring and good architectural design patterns diminish the chance of breaking older code. To sum up, the Spring framework makes programming more efficient, and Sakai relies heavily on it. Many tasks that programmers would previously have hardcoded are now delegated to XML configuration files.

Hibernate for database coupling

Hibernate is all about coupling databases to the code. Hibernate is a powerful, high-performance object/relational persistence and query service. That is to say, a designer describes Java objects in a specific structure within XML files. After reading these files, Hibernate gains the ability to save or load instances of the objects from the database. Hibernate supports complex data structures, such as Java collections and arrays of objects.
Again, this is a choice of an external framework that does the programmer's grunt work, mostly via XML configuration.

The many Apache frameworks

Sakai is rightfully biased towards projects associated with the Apache Software Foundation (ASF) (http://www.apache.org/). Sakai instances run within a Tomcat server, and many institutes place an Apache web server in front of the Tomcat server to deal with dishing out static content (content that does not change, such as an ordinary web page), SSL/TLS, ease of configuration, and log parsing. Further, individual internal and external frameworks make use of the Apache Commons frameworks (http://commons.apache.org), which have reusable libraries for all kinds of specific needs, such as validation, encoding, e-mailing, uploading files, and so on. Even if a developer does not use the Commons libraries directly, they are often called by other frameworks and have a significant impact on the well-being (for example, the security) of a Sakai instance. To ensure look-and-feel consistency, designers used common technologies, such as Apache Velocity, Apache Wicket, Apache MyFaces (an implementation of Java Server Faces), Reasonable Server Faces (RSF), and plain old Java Server Pages (JSP). Apache Velocity places much of the look and feel in text templates that non-programmers can manipulate with text editors. The use of Velocity has mostly been superseded by JSF. However, as Sakai moves forward, technologies such as RSF and Wicket (http://wicket.apache.org/) are playing a predominant role. Sakai uses XML as the format of choice to support much of its functionality, from configuration files, to the backing up of sites, the storage of internal data representations, RSS feeds, and so on. There is a lot of runtime effort in converting to and from XML and translating XML into other formats.
Here are the gory technical details: there are two main methods for parsing XML. You can parse (another word for process) XML into a Document Object Model (DOM) in memory, which you can later traverse and manipulate programmatically. Alternatively, XML can be parsed via an event-driven mechanism, where Java methods are called, for example, when an XML tag begins or ends, or when a tag has a body. Simple API for XML (SAX) libraries support the second approach in Java. Generally, it is easier to program with DOM than SAX, but as you need a model of the XML in memory, DOM is by its nature more memory intensive. Why would that matter? In large-scale deployments, the amount of memory tends to limit a Sakai instance's performance, rather than the computational power of the servers. Therefore, as Sakai uses XML heavily, whenever possible a developer should consider using SAX and avoid keeping the whole model of an XML document in memory.

Looking at dependencies

As Sakai adapts and expands its feature set, expect the range of external libraries to expand. The following table mentions the libraries used, links to the relevant home pages, and a very brief description of their functionality.

Apache-Axis (http://ws.apache.org/axis/): SOAP web services
Apache-Axis2 (http://ws.apache.org/axis2): SOAP and REST web services; a total rewrite of Apache-Axis, though not currently used within Entity Broker, a Sakai-specific component
Apache Commons (http://commons.apache.org): Lower-level utilities
Batik (http://xmlgraphics.apache.org/batik/): A Java-based toolkit for applications or applets that want to use images in the Scalable Vector Graphics (SVG) format
Commons-beanutils (http://commons.apache.org/beanutils/): Methods for Java bean manipulation
Commons-codec (http://commons.apache.org/codec): Implementations of common encoders and decoders, such as Base64, Hex, Phonetic, and URLs
Commons-digester (http://commons.apache.org/digester): Common methods for initializing objects from XML configuration
Commons-httpclient (http://hc.apache.org/httpcomponents-client): Supports HTTP-based standards with the client side in mind
Commons-logging (http://commons.apache.org/logging/): Logging support
Commons-validator (http://commons.apache.org/validator): Support for verifying the integrity of received data
Excalibur (http://excalibur.apache.org): Utilities
FOP (http://xmlgraphics.apache.org/fop): Print formatting ready for conversion to PDF and a number of other formats
Hibernate (http://www.hibernate.org): ORM database framework
Log4j (http://logging.apache.org/log4j): Logging
Jackrabbit (http://jackrabbit.apache.org, http://jcp.org/en/jsr/detail?id=170): Content repository; a hierarchical content store with support for structured and unstructured content, full-text search, versioning, transactions, observation, and more
James (http://james.apache.org): A mail server
Java Server Faces (http://java.sun.com/javaee/javaserverfaces): Simplifies building user interfaces for JavaServer applications
Lucene (http://lucene.apache.org): Indexing
MyFaces (http://myfaces.apache.org): JSF implementation with implementation-specific widgets
Pluto (http://portals.apache.org/pluto): The reference implementation of the Java Portlet Specification
Quartz (http://www.opensymphony.com/quartz): Scheduling
Reasonable Server Faces (RSF) (http://www2.caret.cam.ac.uk/rsfwiki): Built on the Spring framework; simplifies the building of views via XHTML
ROME (https://rome.dev.java.net): A set of open source Java tools for parsing, generating, and publishing RSS and Atom feeds
SAX (http://www.saxproject.org): Event-based XML parser
STRUTS (http://struts.apache.org/): Heavyweight MVC framework; not used in the core of Sakai, but some components are used as part of the occasional tool
Spring (http://www.springsource.org): Used extensively within the code base of Sakai; a broad framework designed to make building business applications simpler
Tomcat (http://tomcat.apache.org): Servlet container
Velocity (http://velocity.apache.org): Templating
Wicket (http://wicket.apache.org): Web app development framework
Xalan (http://xml.apache.org/xalan-j): An XSLT (Extensible Stylesheet Language Transformation) processor for transforming XML documents into HTML, text, or other XML document types
Xerces (http://xerces.apache.org/xerces-j): XML parser

For the reader who has downloaded and built Sakai from source code, you can automatically generate a list of the current external dependencies via Maven. First, you will need to build the binary version and then print out the dependency report. To achieve this from within the top-level directory of the source code, run the following commands:

mvn -Ppack-demo install
mvn dependency:list

The table is based on an abbreviated version of the dependency list, generated from the source code in March 2009. For those of you wishing to dive into the depths of Sakai, you can search the home pages mentioned in the table. In summary, Spring is the most important underlying third-party framework, and Sakai spends a lot of its time manipulating XML.

Creating a Simple Skin using DotNetNuke

Packt
08 Sep 2010
5 min read
Introduction

In DNN, skinning is a term that refers to the process of customizing the look and feel of the DNN portal. One of the powerful features of DNN is that the functionality of the portal is separated from its presentation. This means we can change the appearance of the portal without affecting how the portal works. To create a skin in DNN, we will work with three kinds of files: HTML, ASCX, and CSS. The HTML or ASCX file describes the layout of the page and the CSS file provides the styling. If you have worked with HTML and CSS before, then you will be able to get started immediately. However, if you are familiar with ASCX (and as a DNN developer that is likely), you can achieve the same results faster than with HTML. In the recipes, we will show primarily ASCX skinning, with some brief examples of HTML skinning.

Skin Objects

Before we start looking at the recipes, we need a quick word about Skin Objects. Skin Objects are used in both HTML and ASCX skin files as placeholders for different kinds of dynamic functionality. In HTML skins, you place text tokens such as [CURRENTDATE] in your code, and when the code is parsed by the skin engine it will insert the matching skin object. If you are working in ASCX, you register skin objects as controls that you place directly in your code. DNN offers many different skin objects, such as CurrentDate, Logo, and the Login link, and we'll see many of these in action in the recipes of this article.

Downloading and installing a skin

Often the easiest way to start skinning is to download an existing skin package and see the different files used for skinning. In this recipe, we will download an excellent skin created by Jon Henning, hosted on a site called CodePlex, that demonstrates the most common skin objects and layouts. Another reason for starting with an existing skin is that it allows incremental development.
We can start with a fully functional skin, deploy it to our DNN portal, and then edit the source files right on the server. In this way, the changes we make are immediately displayed, and problems are easily spotted and fixed. However, as applying a skin can affect the entire site, it is best to create and test skins on a development DNN site before using them on a production site. Finally, it should also be noted that as a skin is really just another type of extension in DNN, you are already familiar with some of these steps.

How to do it...

Open your favorite web browser and go to the site http://codeendeavortemplate.codeplex.com/. Click on Downloads in the toolbar. Scroll down a little and click on the CodeEndeavors Template Skin link. When prompted with the License Agreement, click I Agree. The File download dialog will ask if you want to Open or Save. Click on Save and select a temporary folder to hold the ZIP file. That's all we need from the CodePlex site, so close the browser. To install the skin on the DNN site, begin by logging in as the SuperUser. Look at the Control Panel and make sure you're in Edit mode. Look under the Host menu and select Extensions. Scroll to the bottom and click on the link Install Extension Wizard. The wizard will prompt for the ZIP file (called the extension package). Click on the Browse button and select the file you just downloaded (for example, CodeEndeavors.TemplateSkin.install.v01.01.07.00.zip). Click on Open, then click on Next. The wizard will display the Extension information. Click on Next. The wizard will display the Release Notes. Click on Next. On the license page, check Accept License? and click on Next. Now the install script will run, creating the skin. At the end, you should see the message "Installation successful". Click on Return. To make the skin active, select Skins under the Admin menu. From the Skins drop-down list, select CodeEndeavors.TemplateSkin.
For this article, we will use the Index skin for our examples. Click on the Apply link under the index skin to make it active. To see the skin files, you can look in the root folder of the DNN instance under Portals\_default\Skins\CodeEndeavors.TemplateSkin. Here is a summary of the key files you are likely to see in a skin like this:

animated.ascx: An ASCX skin file
container.ascx: An ASCX container file
index.html: An HTML skin file
skin.css: The stylesheet for the skin
container.css: The stylesheet for the container
TemplateSkin.dnn: The manifest file for the skin package
thumbnail_animated.jpg: A preview image of the ASCX skin
thumbnail_container.jpg: A preview image of the ASCX container
thumbnail_index.jpg: A preview image of the HTML skin
license.txt: The text of the license agreement
releasenotes.txt: The text of the release notes
version.txt: The version number
Images folder: A folder holding the graphic images supporting a skin or container
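Tying this back to the Skin Objects covered at the start of this article, an HTML skin file such as index.html is built from token placeholders and pane markup. The following is a hypothetical minimal layout, not the contents of the downloaded file; the tokens shown are standard DNN skin objects, but the arrangement is invented for illustration:

```html
<!-- Minimal, hypothetical HTML skin layout. Each [TOKEN] is replaced with
     the matching skin object when the skin engine parses the file. -->
<div id="header">
  [LOGO] [LOGIN] [CURRENTDATE]
</div>
[MENU]
<!-- A pane: a named region that holds modules once the skin is applied. -->
<div id="ContentPane" runat="server"></div>
<div id="footer">[COPYRIGHT]</div>
```

When the skin engine converts the HTML file into a working skin, the tokens become live controls and the pane divs become the drop targets for modules, which is why the downloaded package can ship the same layout as both index.html and an ASCX equivalent.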

Creating a real-time widget

Packt
22 Apr 2014
11 min read
(For more resources related to this topic, see here.) The configuration options and well-thought-out methods of socket.io make for a highly versatile library. Let's explore the dexterity of socket.io by creating a real-time widget that can be placed on any website and instantly interfaced with a remote Socket.IO server. We're doing this to provide a constantly updated total of all users currently on the site. We'll name it the live online counter (loc for short). Our widget is for public consumption and should require only basic knowledge, so we want a very simple interface. Loading our widget through a script tag and then initializing the widget with a prefabricated init method would be ideal (this allows us to predefine properties before initialization if necessary).

Getting ready

We'll need to create a new folder with some new files: widget_server.js, widget_client.js, server.js, and index.html.

How to do it...

Let's create the index.html file to define the kind of interface we want, as follows:

<html>
<head>
  <style>
    #_loc {color:blue;} /* widget customization */
  </style>
</head>
<body>
  <h1>My Web Page</h1>
  <script src="http://localhost:8081"></script>
  <script>
    locWidget.init();
  </script>
</body>
</html>

The localhost:8081 domain is where we'll be serving a concatenated script of both the client-side socket.io code and our own widget code. By default, Socket.IO hosts its client-side library over HTTP while simultaneously providing a WebSocket server at the same address, in this case localhost:8081. See the There's more… section for tips on how to configure this behavior.
Let's create our widget code, saving it as widget_client.js:

;(function() {
  window.locWidget = {
    style : 'position:absolute;bottom:0;right:0;font-size:3em',
    init : function () {
      var socket = io.connect('http://localhost:8081'),
        style = this.style;
      socket.on('connect', function () {
        var head = document.head,
          body = document.body,
          loc = document.getElementById('_lo_count');
        if (!loc) {
          head.innerHTML += '<style>#_loc{' + style + '}</style>';
          loc = document.createElement('div');
          loc.id = '_loc';
          loc.innerHTML = '<span id=_lo_count></span>';
          body.appendChild(loc);
        }
        socket.on('total', function (total) {
          loc.innerHTML = total;
        });
      });
    }
  }
}());

We need to test our widget from multiple domains. We'll just implement a quick HTTP server (server.js) to serve index.html so we can access it via http://127.0.0.1:8080 and http://localhost:8080, as shown in the following code:

var http = require('http');
var fs = require('fs');
var clientHtml = fs.readFileSync('index.html');

http.createServer(function (request, response) {
  response.writeHead(200, {'Content-type' : 'text/html'});
  response.end(clientHtml);
}).listen(8080);

Finally, for the server for our widget, we write the following code in the widget_server.js file:

var io = require('socket.io')(),
  totals = {},
  clientScript = Buffer.concat([
    require('socket.io/node_modules/socket.io-client').source,
    require('fs').readFileSync('widget_client.js')
  ]);

io.static(false);
io.attach(require('http').createServer(function(req, res){
  res.setHeader('Content-Type', 'text/javascript; charset=utf-8');
  res.end(clientScript);
}).listen(8081));

io.on('connection', function (socket) {
  var origin = socket.request.socket.domain || 'local';
  totals[origin] = totals[origin] || 0;
  totals[origin] += 1;
  socket.join(origin);
  io.sockets.to(origin).emit('total', totals[origin]);
  socket.on('disconnect', function () {
    totals[origin] -= 1;
    io.sockets.to(origin).emit('total', totals[origin]);
  });
});

To test it, we
need two terminals; in the first one, we execute the following command:

node widget_server.js

In the other terminal, we execute the following command:

node server.js

We point our browser to http://localhost:8080 and see the counter showing one connection. If we open a new tab or window and navigate to http://localhost:8080 again, we will see the counter rise by one. If we close either window, it will drop by one. We can also navigate to http://127.0.0.1:8080 to emulate a separate origin. The counter at this address is independent from the counter at http://localhost:8080.

How it works...

The widget_server.js file is the powerhouse of this recipe. We start by using require with socket.io and calling it (note the empty parentheses following require); this becomes our io instance. Below this is our totals object; we'll be using this later to store the total number of connected clients for each domain. Next, we create our clientScript variable; it contains both the Socket.IO client code and our widget_client.js code. We'll be serving this to all HTTP requests. Both scripts are stored as buffers, not strings. We could simply concatenate them with the plus (+) operator; however, this would force a string conversion first, so we use Buffer.concat instead. Anything that is passed to res.write or res.end is converted to a Buffer before being sent across the wire. Using the Buffer.concat method means our data stays in buffer format the whole way through, instead of being a buffer, then a string, then a buffer again. When we require socket.io at the top of widget_server.js, we call it to create an io instance. Usually, at this point, we would pass in an HTTP server instance or a port number, and optionally pass in an options object. To keep our top variables tidy, however, we use some configuration methods available on the io instance after all our requires. 
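The buffer-versus-string point can be demonstrated in isolation. The snippet below is a standalone sketch (the script contents are made up) showing that Buffer.concat keeps the data as a Buffer, while the + operator silently coerces both operands to strings:

```javascript
// Why widget_server.js uses Buffer.concat: adding buffers with +
// forces a string conversion, while Buffer.concat stays binary-safe.
const a = Buffer.from('console.log("socket.io client");\n');
const b = Buffer.from('console.log("widget code");\n');

const concatenated = Buffer.concat([a, b]); // remains a Buffer
const coerced = a + b;                      // + converts both to strings

console.log(Buffer.isBuffer(concatenated)); // true
console.log(typeof coerced);                // 'string'
```

Either value would produce the same bytes on the wire here, but staying in Buffer form avoids a pointless decode/encode round trip on every request.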
The io.static(false) call prevents Socket.IO from providing its client-side code (because we're providing our own concatenated script file that contains both the Socket.IO client-side code and our widget code). Then we use the io.attach call to hook up our Socket.IO server with an HTTP server. All requests that use the http:// protocol will be handled by the server we pass to io.attach, and all ws:// protocols will be handled by Socket.IO (whether or not the browser supports the ws:// protocol). We're only using the http module once, so we require it within the io.attach call; we use its createServer method to serve all requests with our clientScript variable. Now, the stage is set for the actual socket action. We wait for a connection by listening for the connection event on io.sockets. Inside the event handler, we use a few as yet undiscussed Socket.IO qualities. A WebSocket is formed when a client initiates a handshake request over HTTP and the server responds affirmatively. We can access the original request object with socket.request. The request object itself has a socket (this is the underlying HTTP socket, not our Socket.IO socket), which we can access via socket.request.socket. The socket contains the domain a client request came from. We load socket.request.socket.domain into our origin variable unless it's null or undefined, in which case we say the origin is 'local'. We extract (and simplify) the origin because it allows us to distinguish between websites that use the widget, enabling site-specific counts. To keep count, we use our totals object and add a property for every new origin with an initial value of 0. On each connection, we add 1 to totals[origin]; in the socket's disconnect event, we subtract 1 from totals[origin]. If these values were exclusively for server use, our solution would be complete. However, we need a way to communicate the total connections to the client, on a site-by-site basis. 
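The counting logic itself can be modeled in isolation, with connect and disconnect as plain function calls standing in for the socket events (the origin names here are made up):

```javascript
// Minimal sketch of the per-origin bookkeeping in widget_server.js.
const totals = {};

function onConnect(origin) {
  totals[origin] = totals[origin] || 0; // create the counter on first sight
  totals[origin] += 1;
  return totals[origin];                // value that would be emitted
}

function onDisconnect(origin) {
  totals[origin] -= 1;
  return totals[origin];
}

onConnect('example.com');
onConnect('example.com');
onConnect('local');
onDisconnect('example.com');

console.log(totals); // { 'example.com': 1, local: 1 }
```

Each site's counter moves independently, which is exactly why the server needs a way to emit each total only to the clients of that site.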
Socket.IO has had a handy feature since version 0.7 that allows us to group sockets into rooms by using the socket.join method. We make each socket join a room named after its origin, then we use the io.sockets.to(origin).emit method to instruct Socket.IO to emit only to sockets that belong to the originating site's room. In both the io.sockets connection event and the socket disconnect event, we emit our specific totals to the corresponding sockets, updating each client with the total number of connections to the site the user is on. The widget_client.js file simply creates a div element called #_loc and updates it with any new totals it receives from widget_server.js.

There's more...

Let's look at how our app could be made more scalable, as well as at another use for WebSockets.

Preparing for scalability

If we were to serve thousands of websites, we would need scalable memory storage, and Redis would be a perfect fit. It operates in memory but also allows us to scale across multiple servers. We'll need Redis installed along with the redis module. We'll alter our totals variable so it contains a Redis client instead of a JavaScript object:

var io = require('socket.io')(),
  totals = require('redis').createClient(),
  //other variables

Now, we modify our connection event handler as shown in the following code:

io.sockets.on('connection', function (socket) {
  var origin = (socket.handshake.xdomain)
    ? url.parse(socket.handshake.headers.origin).hostname
    : 'local';
  socket.join(origin);
  totals.incr(origin, function (err, total) {
    io.sockets.to(origin).emit('total', total);
  });
  socket.on('disconnect', function () {
    totals.decr(origin, function (err, total) {
      io.sockets.to(origin).emit('total', total);
    });
  });
});

Instead of adding 1 to totals[origin], we use the Redis INCR command to increment a Redis key named after origin. Redis automatically creates the key if it doesn't exist. When a client disconnects, we do the reverse and readjust totals using DECR. 
WebSockets as a development tool

When developing a website, we often change something small in our editor, upload our file (if necessary), refresh the browser, and wait to see the results. What if the browser refreshed automatically whenever we saved any file relevant to our site? We can achieve this with the fs.watch method and WebSockets. The fs.watch method monitors a directory, executing a callback whenever a change to any file in the folder occurs (but it doesn't monitor subfolders). The fs.watch method is dependent on the operating system. To date, fs.watch has also been historically buggy (mostly under Mac OS X). Therefore, until further advancements, fs.watch is suited purely to development environments rather than production (you can monitor how fs.watch is doing by viewing the open and closed issues at https://github.com/joyent/node/search?q=fs.watch&ref=cmdform&state=open&type=Issues). Our development tool could be used alongside any framework, from PHP to static files. For the server counterpart of our tool, we'll configure watcher.js:

var io = require('socket.io')(),
  fs = require('fs'),
  watcher = function () {
    var socket = io.connect('ws://localhost:8081');
    socket.on('update', function () {
      location.reload();
    });
  },
  clientScript = Buffer.concat([
    require('socket.io/node_modules/socket.io-client').source,
    Buffer(';(' + watcher + '());')
  ]);

io.static(false);
io.attach(require('http').createServer(function(req, res){
  res.setHeader('Content-Type', 'text/javascript; charset=utf-8');
  res.end(clientScript);
}).listen(8081));

fs.watch('content', function (e, f) {
  if (f[0] !== '.') {
    io.sockets.emit('update');
  }
});

Most of this code is familiar. We make a Socket.IO server (on a different port to avoid clashing), generate a concatenated socket.io.js plus client-side watcher code file, and deliver it via our attached server. 
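The trick of shipping the watcher function as client-side source relies on JavaScript's string conversion of functions, and it can be sketched on its own (the function body here is a stand-in, and Buffer.from replaces the older Buffer constructor used above):

```javascript
// Coercing a function into deliverable client source, as watcher.js does:
// concatenating a function with a string invokes its toString(), giving
// us its source code, which we wrap in a self-invoking expression.
const watcher = function () {
  return 'reload';
};

const clientSource = ';(' + watcher + '());'; // string conversion via +
const asBuffer = Buffer.from(clientSource);   // Buffer-compatible for concat

console.log(clientSource.startsWith(';(function')); // true
console.log(clientSource.endsWith('());'));         // true
```

The leading semicolon is a defensive touch so the expression stays intact even if it is concatenated after a script that forgot its own terminating semicolon.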
Since this is a quick tool for our own development use, our client-side code is written as a normal JavaScript function (our watcher variable), converted to a string while being wrapped in self-calling function code, and then changed to a Buffer so it's compatible with Buffer.concat. The last piece of code calls the fs.watch method, where the callback receives the event name (e) and the filename (f). We check that the filename isn't a hidden dotfile. During a save event, some filesystems or editors will change the hidden files in the directory, triggering multiple callbacks and sending several messages at high speed, which can cause issues for the browser. To use it, we simply place it as a script within every page that is served (probably using server-side templating). However, for demonstration purposes, we simply place the following code into content/index.html:

<script src="http://localhost:8081/socket.io/watcher.js"></script>

Once we fire up server.js and watcher.js, we can point our browser to http://localhost:8080 and see the familiar excited Yay!. Any changes we make and save (either to index.html, styles.css, script.js, or the addition of new files) will be almost instantly reflected in the browser. The first change we can make is to get rid of the alert box in the script.js file so that the changes can be seen fluidly.

Summary

We saw how to create a real-time widget in this article. We also used some third-party modules to explore some of the potential of the powerful combination of Node and WebSockets.
Integrating Facebook with Magento

Packt
21 Jan 2011
4 min read
Magento 1.4 Themes Design
Customize the appearance of your Magento 1.4 e-commerce store with Magento's powerful theming engine
  • Install and configure Magento 1.4 and learn the fundamental principles behind Magento themes
  • Customize the appearance of your Magento 1.4 e-commerce store by changing Magento templates, skin files, and layout files
  • Change the basics of your Magento theme, from the logo of your store to the color scheme of your theme
  • Integrate popular social media aspects such as Twitter and Facebook into your Magento store

Facebook (http://www.facebook.com) is a social networking website that allows people to add each other as 'friends' and to send messages and share content. As with Twitter, there are two options for integrating Facebook with your Magento store:
  • Adding a 'Like' button to your store's product pages to allow your customers to show their appreciation for individual products on your store.
  • Integrating a widget of the latest news from your store's Facebook profile.

Adding a 'Like' button to your Magento store's product pages

The Facebook 'Like' button allows Facebook users to show that they approve of a particular web page, and you can put this to use on your Magento store. Getting the 'Like' button markup To get the markup required for your store's 'Like' button, go to the Facebook Developers website at http://developers.facebook.com/docs/reference/plugins/like. 
Fill in the form below the description text with relevant values, leaving the URL to like field as URLTOLIKE for now, and setting the Width to 200. Click on the Get Code button at the bottom of the form and then copy the code that is presented in the iframe field. The generated markup should look like the following:

<iframe src="http://www.facebook.com/plugins/like.php?href=URLTOLIKE&amp;layout=standard&amp;show_faces=true&amp;width=200&amp;action=like&amp;colorscheme=light&amp;height=80" scrolling="no" frameborder="0" style="border:none; overflow:hidden; width:200px; height:80px;" allowTransparency="true"></iframe>

You now need to replace the URLTOLIKE in the previous markup with the URL of the current page in your Magento store. The PHP required to output this URL in Magento looks like the following:

<?php echo $this->helper('core/url')->getCurrentUrl(); ?>

The new Like button markup for your Magento store should now look like the following:

<iframe src="http://www.facebook.com/plugins/like.php?href=<?php echo $this->helper('core/url')->getCurrentUrl(); ?>
&amp;layout=standard&amp;show_faces=true&amp;width=200&amp;action=like&amp;colorscheme=light&amp;height=80" scrolling="no" frameborder="0" style="border:none; overflow:hidden; width:200px; height:80px;" allowTransparency="true"></iframe>

Open your theme's view.phtml file in the /app/design/frontend/default/m2/template/catalog/product directory and locate the lines that read:

<div class="std"><?php echo $_helper->productAttribute($_product, nl2br($_product->getShortDescription()), 'short_description') ?></div></div>

Insert the code generated by Facebook here, so that it now reads the following:

<div class="std"><?php echo $_helper->productAttribute($_product, nl2br($_product->getShortDescription()), 'short_description') ?></div>
<iframe src="http://www.facebook.com/plugins/like.php?href=<?php echo $this->helper('core/url')->getCurrentUrl();?>&amp;layout=standard&amp;show_faces=true&amp;width=200&amp;action=like&amp;colorscheme=light&amp;height=80" scrolling="no" frameborder="0" style="border:none; overflow:hidden; width:200px; height:80px;" allowTransparency="true"></iframe>
</div>

Save and upload this file back to your Magento installation and then visit a product page within your store to see the button appear below the brief description of the product. That's it, your product pages can now be liked on Facebook!
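One detail worth checking: the href value embedded in the iframe src should be URL-encoded, since product URLs can themselves contain characters such as ? and &. The Magento code above emits PHP, so the following is purely an illustrative JavaScript sketch of the same URL construction (pageUrl stands in for whatever getCurrentUrl() would return):

```javascript
// Building the Like iframe src with a properly encoded href parameter.
const pageUrl = 'http://example.com/catalog/product/view?id=42&cat=3';

const src = 'http://www.facebook.com/plugins/like.php' +
  '?href=' + encodeURIComponent(pageUrl) +
  '&layout=standard&show_faces=true&width=200' +
  '&action=like&colorscheme=light&height=80';

// The ?, = and & inside the product URL are escaped, so they cannot
// be mistaken for separators of the like.php query string itself.
console.log(src.includes('%3Fid%3D42%26cat%3D3')); // true
```

Without the encoding, everything after the first raw & in the product URL would be parsed as extra parameters to like.php rather than as part of the liked page's address.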
Layout in Dojo: Part 1

Packt
15 Oct 2009
17 min read
Basic Dojo layout facts

The Layout widgets in Dojo are varied in nature, but their most common use is as 'windows' or areas which organize and present other widgets or information. Several use the same kind of child element: the ContentPane. The ContentPane is a widget which can contain other widgets and plain HTML, reload content using Ajax, and so on. The ContentPane can also be used stand-alone in a page, but is more useful inside a layout container of some sort. And what is a layout container? Well, it's a widget which contains ContentPanes, of course. A layout container can often contain other widgets as well, but most containers work very well with a configuration of ContentPanes, which properly insulates the further contents. Take the TabContainer, for example. It is used to organize two or more ContentPanes, where each gets its own tab. When a user clicks on one of the tabs, the ContentPane inside it is shown and all others are hidden. Using BorderContainer can bring the necessary CSS styling down to a minimum, while giving a simple interface for managing dynamic changes of child widgets and elements.

ContentPane

A ContentPane can look like anything, of course, so it doesn't really help putting a screen-dump of one on the page. However, the interface is very good to know. The following arguments are detected by ContentPane and can be used when creating one either programmatically or by markup:

//href: String
//The href of the content that displays now.
//Set this at construction if you want to load data externally
//when the pane is shown. (Set preload=true to load it immediately.)
//Changing href after creation doesn't have any effect;
//see setHref();
href: "",

//extractContent: Boolean
//Extract visible content from inside of <body> ....
</body>
extractContent: false,

//parseOnLoad: Boolean
//parse content and create the widgets, if any
parseOnLoad: true,

//preventCache: Boolean
//Prevent caching of content retrieved externally
preventCache: false,

//preload: Boolean
//Force load of data even if pane is hidden.
preload: false,

//refreshOnShow: Boolean
//Refresh (re-download) content when pane goes from hidden to shown
refreshOnShow: false,

//loadingMessage: String
//Message that shows while downloading
loadingMessage: "<span class='dijitContentPaneLoading'>${loadingState}</span>",

//errorMessage: String
//Message that shows if an error occurs
errorMessage: "<span class='dijitContentPaneError'>${errorState}</span>",

You don't need any of these, of course. A simple way to create a ContentPane would be:

var pane = new dijit.layout.ContentPane({});

And a more common example would be the following:

var panediv = dojo.byId('panediv');
var pane = new dijit.layout.ContentPane({
  href: "/foo/content.html",
  preload: true
}, panediv);

where we would have an element already in the page with the id 'panediv'. As you see, there are also a couple of properties that manage caching and parsing of contents. At times, you want your ContentPane to parse and render any content inside it (if it contains other widgets), whereas other times you might not (if it contains a source code listing, for instance). You will see additional properties being passed in the creation of a ContentPane which are not part of the ContentPane itself, but are properties that give information specific to the surrounding container. For example, the TabContainer wants to know which tab this is, and so on.

Container functions

All container widgets arrange other widgets, and so have a lot of common functionality defined in the dijit._Container class. The following functions are provided for all Container widgets:

addChild: Adds a child widget to the container.
removeChild: Removes a child widget from the container.
destroyDescendants: Iterates over all children, calling destroy on each.
getChildren: Returns an array containing references to all children.
hasChildren: Returns a boolean.

LayoutContainer

The LayoutContainer is a widget which lays out child widgets according to one of five alignments: right, left, top, bottom, or client. Client means "whatever is left", basically. The widgets being organized need not be ContentPanes, but this is normally the case. Each widget then gets to set a layoutAlign property, like this: layoutAlign="left". The normal way to use LayoutContainer is to define it using markup in the page, and then define the widgets to be laid out inside it. LayoutContainer has been superseded by BorderContainer, and will be removed in Dojo version 2.0.

SplitContainer

The SplitContainer creates a horizontal or vertical split bar between two or more child widgets. A markup declaration of a SplitContainer can look like this:

<div dojoType="dijit.layout.SplitContainer"
  orientation="vertical"
  sizerWidth="7"
  activeSizing="false"
  style="border: 1px solid #bfbfbf; float: left; margin-right: 30px; width: 400px; height: 300px;">

The SplitContainer must have a defined height and width. The orientation property is self-explanatory, as is sizerWidth. The activeSizing property means, if set to true, that the child widgets will be continually resized as the user changes the position of the sizer. This can be bad if the child widgets are complex or access remote information to render themselves, in which case the setting can be set to false, as in the above example. Then the resize event will only be sent to the child widgets when the user stops. Each child widget needs to define the sizeMin and sizeShare attributes. The sizeMin attribute defines the minimum size for the widget in pixels, while the sizeShare attribute is a relative value for the share of space this widget takes in relation to the other widgets' sizeShare values. 
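The sizeShare arithmetic just described can be sketched in plain JavaScript; each pane receives its share of the container divided by the sum of all shares (the container size and share values below are made up for illustration):

```javascript
// Computing pixel sizes from SplitContainer-style sizeShare values.
// Each child gets (its share / total shares) of the available space.
function splitSizes(containerPx, shares) {
  const total = shares.reduce((sum, s) => sum + s, 0);
  return shares.map(s => containerPx * s / total);
}

console.log(splitSizes(400, [10, 40, 50])); // [ 40, 160, 200 ]
```

Because only the ratios matter, shares of 10, 40, and 50 produce the same layout as 1, 4, and 5.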
If we have three widgets inside the SplitContainer with sizeShare values of 10, 40, and 50, they will have the same size ratios as if the values had been 1:4:5.

StackContainer

The StackContainer hides all child widgets but one at any given time, and is one of the base classes for both the AccordionContainer and the TabContainer. StackContainer exists as a separate widget to allow you to define how and when the child widgets are shown. Maybe you would like to define a special kind of control for changing between child widget views, or maybe you want other events in your application to make the container show specific widgets. Either way, the StackContainer is one of the most versatile containers, along with the BorderContainer. The following functions are provided for interacting with the StackContainer:

back: Selects and shows the previous child widget.
forward: Selects and shows the next child widget.
getNextSibling: Returns a reference to the next child widget.
getPreviousSibling: Returns a reference to the previous child widget.
selectChild: Takes a reference to the child widget to select and show.

Here is the slightly abbreviated markup for the test shown above (test_StackContainer.html):

<div id="myStackContainer" dojoType="dijit.layout.StackContainer" style="width: 90%; border: 1px solid #9b9b9b; height: 20em; margin: 0.5em 0 0.5em 0; padding: 0.5em;">
  <p id="page1" dojoType="dijit.layout.ContentPane" title="page 1">IT WAS the best of times, ....</p>
  <p id="page2" dojoType="dijit.layout.ContentPane" title="page 2">There were a king with a large jaw ...</p>
  <p id="page3" dojoType="dijit.layout.ContentPane" title="page 3">It was the year of Our Lord one thousand seven hundred and seventy-five. ...</p>
</div>

The StackContainer also publishes topics on certain events, which can be caught using the messaging system. 
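The topic-based publish/subscribe mechanism referred to here can be sketched in a few lines of plain JavaScript. This illustrates the pattern only, not Dojo's actual implementation, and the topic name mirrors the "[widgetId]-addChild" convention described below:

```javascript
// Minimal topic-based publish/subscribe, in the style of
// dojo.publish / dojo.subscribe.
const topics = {};

function subscribe(topic, handler) {
  (topics[topic] = topics[topic] || []).push(handler);
}

function publish(topic, args) {
  (topics[topic] || []).forEach(handler => handler(args));
}

const received = [];
subscribe('myStackContainer-addChild', function (arg) {
  const child = arg[0];
  const index = arg[1];
  received.push(child + '@' + index);
});

publish('myStackContainer-addChild', ['page1', 0]);
console.log(received); // [ 'page1@0' ]
```

The decoupling is the point: the publisher never needs a reference to its listeners, which is what lets a separate controller widget react to a container it has no direct handle on.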
The topics are:

[widgetId]-addChild
[widgetId]-removeChild
[widgetId]-selectChild
[widgetId]-containerKeyPress

where [widgetId] is the id of this widget. So if you had a StackContainer defined in the following manner:

<div id="myStackContainer" dojoType="dijit.layout.StackContainer">...</div>

you can use the following code to listen to events from your StackContainer:

dojo.subscribe("myStackContainer-addChild", this, function(arg){
  var child = arg[0];
  var index = arg[1];
});

Compare with the following code from the StackContainer class itself:

addChild: function(/*Widget*/ child, /*Integer?*/ insertIndex) {
  // summary: Adds a widget to the stack
  this.inherited(arguments);
  if(this._started) {
    // in case the tab titles have overflowed from one line
    // to two lines
    this.layout();
    dojo.publish(this.id+"-addChild", [child, insertIndex]);
    // if this is the first child, then select it
    if(!this.selectedChildWidget) {
      this.selectChild(child);
    }
  }
},

Also declared in the class file for the StackContainer is dijit.layout.StackController. This is a sample implementation of a separate widget which presents user controls for stepping forward, backward, and so on in the widget stack. What differentiates this widget from the tabs in the TabContainer, for example, is that the widget is completely separate and uses the message bus to listen to events from the StackContainer. You can use it as-is, or subclass it as a base for your own controllers. But naturally, you can build whatever you want and connect the events to the forward() and back() functions on the StackContainer. It's interesting to note that at the end of the file that defines StackContainer, the _Widget base class for all widgets is extended in the following way:

//These arguments can be specified for the children of a
//StackContainer.
//Since any widget can be specified as a StackContainer child,
//mix them into the base widget class.
//(This is a hack, but it's effective.)
dojo.extend(dijit._Widget, {
  //title: String
  //Title of this widget. Used by TabContainer to name the tab, etc.
  title: "",

  //selected: Boolean
  //Is this child currently selected?
  selected: false,

  //closable: Boolean
  //True if user can close (destroy) this child, such as
  //(for example) clicking the X on the tab.
  closable: false,  //true if user can close this tab pane

  onClose: function(){
    //summary: Callback if someone tries to close the child; child
    //will be closed if func returns true
    return true;
  }
});

This means that all child widgets inside a StackContainer (or TabContainer or AccordionContainer) can define the above properties, which will be respected and used accordingly. However, since the properties are applied to the _Widget superclass, they are of course now generic to all widgets, even those not used inside any containers at all. The most commonly used properties are closable, which adds a close icon to the widget, and title, which defines a title for the tab. A lot of Dijits respond to keypress events, according to WAI rules. Let's look at the code that is responsible for managing key events in StackContainer and all its descendants:

onkeypress: function(/*Event*/ e){
  //summary:
  //Handle keystrokes on the page list, for advancing to next/previous button
  //and closing the current page if the page is closable.
  if(this.disabled || e.altKey){ return; }
  var forward = null;
  if(e.ctrlKey || !e._djpage){
    var k = dojo.keys;
    switch(e.charOrCode){
      case k.LEFT_ARROW:
      case k.UP_ARROW:
        if(!e._djpage){ forward = false; }
        break;
      case k.PAGE_UP:
        if(e.ctrlKey){ forward = false; }
        break;
      case k.RIGHT_ARROW:
      case k.DOWN_ARROW:
        if(!e._djpage){ forward = true; }
        break;
      case k.PAGE_DOWN:
        if(e.ctrlKey){ forward = true; }
        break;
      case k.DELETE:
        if(this._currentChild.closable){
          this.onCloseButtonClick(this._currentChild);
        }
        dojo.stopEvent(e);
        break;
      default:
        if(e.ctrlKey){
          if(e.charOrCode == k.TAB){
            this.adjacent(!e.shiftKey).onClick();
            dojo.stopEvent(e);
          }else if(e.charOrCode == "w"){
            if(this._currentChild.closable){
              this.onCloseButtonClick(this._currentChild);
            }
            dojo.stopEvent(e); // avoid browser tab closing.
          }
        }
    }
    // handle page navigation
    if(forward !== null){
      this.adjacent(forward).onClick();
      dojo.stopEvent(e);
    }
  }
},

The code is a very good example of how to handle key press events in Dojo in its own right, but for our purposes we can summarize it in the following way: If UP, LEFT, CTRL+PAGE UP, or CTRL+SHIFT+TAB is pressed, forward is set to false, and the last block of code uses that as an argument to the adjacent function, which returns the prior child widget for false and the next child widget for true; in this case, the former. If DOWN, RIGHT, CTRL+PAGE DOWN, or CTRL+TAB is pressed, forward will be set to true, which declares the next child widget to be activated and shown. If DELETE or CTRL+w is pressed and the current child widget is closable, it will be destroyed.

TabContainer

The TabContainer, which derives from StackContainer, organizes all its children into tabs, which are shown one at a time. As you can see in the picture below, the TabContainer can also manage hierarchical versions of itself. The TabContainer takes an argument property called tabPosition, which controls where the tab icons are displayed for each tab. Possible values are "top", "bottom", "left-h", and "right-h", with "top" as default. 
There are no special functions provided for TabContainer, which adds very little logic to that provided by the StackContainer superclass.

AccordionContainer

The AccordionContainer shows a horizontal bar for each added child widget, which represents its collapsed state. The bar acts as a tab and also holds the title defined for the child widget. When a bar is clicked, an animation hides the currently shown widget and animates in the widget whose bar was clicked. The abbreviated code for the test case above (test_accordionContainer.html) is here:

<div dojoType="dijit.layout.AccordionContainer" style="width: 400px; height: 300px; overflow: hidden">
  <div dojoType="dijit.layout.AccordionPane" title="a">
    Hello World
  </div>
  <div dojoType="dijit.layout.AccordionPane" title="b">
    <p> Nunc consequat nisi ... </p>
    <p> Sed arcu magna... </p>
  </div>
  <div dojoType="dijit.layout.AccordionPane" title="c">
    <p>The quick brown fox jumps over the lazy dog. The quick brown fox jumps over the lazy dog. The quick brown fox jumps over the lazy dog.</p>
  </div>
</div>

A funny thing about the AccordionContainer is that it requires not just any widget as a child node, but its own AccordionPane, as you see in the code above. However, the AccordionPane has ContentPane as its superclass, and defines itself slightly differently due to the special looks of an accordion. Also, the AccordionPane does not currently support nested layout widgets, even though single-level widgets are supported.

BorderContainer

The BorderContainer has replaced the functionality of both LayoutContainer and SplitContainer. Note that the outermost BorderContainer widget does not carry any layout information. This is instead delegated to each individual widget. As each child gets added to the BorderContainer, the layout is recalculated. Using the BorderContainer is a very good alternative to using CSS-based "tableless tables". 
When using the BorderContainer, you don't need any other rules, and the container recalculates positioning automatically, without the need for additional CSS rules (except for the height/width case below), each time you add an element or widget to the area. Since the BorderContainer widget replaces both SplitContainer and LayoutContainer, it lets its child widgets declare where they are in relation to each other and, optionally, adds a resizing splitter between children. Also, instead of optionally declaring one child as "client", one child must now always be declared as "center". Child widgets now use region instead of layoutAlign, so a child widget which would have been defined like this in LayoutContainer:

<div dojoType="dijit.layout.ContentPane" layoutAlign="top">...</div>

is now defined like this instead:

<div dojoType="dijit.layout.ContentPane" region="top">...</div>

All "side" widgets must define a width, in style, by CSS class, or otherwise, and the same applies for top/bottom widgets, but with height. Center widgets must not declare either height or width, since they use whatever is left over from the other widgets. You can also use leading and trailing instead of right and left. The only difference is that when you change locale to one where text runs from right to left (like Arabic and many others), this will arrange the widgets appropriately for the locale. The BorderContainer also takes an optional design property, which defines whether the BorderContainer is a headline or sidebar. The headline design is the default and looks like the picture below; headline means that the top and bottom widgets extend the full width of the container, whereas sidebar means that the right and left (or leading and trailing) widgets extend from top to bottom. The sizeShare attribute used by the ContentPanes in the SplitContainer is deprecated in BorderContainer. All ContentPane sizes are defined using regular techniques (direct styling, classes, and so on). 
From the BorderContainer test located in dijit/tests/layout/test_BorderContainer_nested.html, we find the following layout:

The (abbreviated) source code for the example is here:

<div dojoType="dijit.layout.BorderContainer"
    style="border: 2px solid black; width: 90%; height: 500px; padding: 10px;">
  <div dojoType="dijit.layout.ContentPane" region="left"
      style="background-color: #acb386; width: 100px;">
    left
  </div>
  <div dojoType="dijit.layout.ContentPane" region="right"
      style="background-color: #acb386; width: 100px;">
    right
  </div>
  <div dojoType="dijit.layout.ContentPane" region="top"
      style="background-color: #b39b86; height: 100px;">
    top bar
  </div>
  <div dojoType="dijit.layout.ContentPane" region="bottom"
      style="background-color: #b39b86; height: 100px;">
    bottom bar
  </div>
  <div dojoType="dijit.layout.ContentPane" region="center"
      style="background-color: #f5ffbf; padding: 0px;">
    <div dojoType="dijit.layout.BorderContainer" design="sidebar"
        style="border: 2px solid black; height: 300px;">
      <div dojoType="dijit.layout.ContentPane" region="left"
          style="background-color: #acb386; width: 100px;">
        left
      </div>
      <div dojoType="dijit.layout.ContentPane" region="right"
          style="background-color: #acb386; width: 100px;">
        right
      </div>
      <div dojoType="dijit.layout.ContentPane" region="top"
          style="background-color: #b39b86; height: 100px;">
        top bar
      </div>
      <div dojoType="dijit.layout.ContentPane" region="bottom"
          style="background-color: #b39b86; height: 100px;">
        bottom bar
      </div>
      <div dojoType="dijit.layout.ContentPane" region="center"
          style="background-color: #f5ffbf; padding: 10px;">
        main panel with
        <a href="http://www.dojotoolkit.org/">a link</a>.<br />
        (to check we're copying children around properly).<br />
        <select dojoType="dijit.form.FilteringSelect">
          <option value="1">foo</option>
          <option value="2">bar</option>
          <option value="3">baz</option>
        </select>
        Here's some text that comes AFTER the combo box.
      </div>
    </div>
  </div>
</div>

You see here the recurring theme of using ContentPanes inside containers. Also, the innermost "center" ContentPane wraps a new BorderContainer, which has its own internal top/left layout widgets. Depending on what kind of application you are building, the BorderContainer might be a good starting point. Since you already know that you can change and reload the contents of individual ContentPanes, you are left with a layout in which each element can function as a lightweight Iframe with none of the negative side effects.

DragPane

The DragPane implements a very simple idea: you have a very large area of elements to display, and you want to let the user 'drag' the underlying pane around using the mouse. The DragPane can be used in instances where you have a lot of pictures to present. It can also be used to present text or other widgets that are too numerous to fit in your current designated area of screen real estate. The only property of DragPane is invert, which, if set to true, inverts the axis of the mouse drag. Example:

<div class="hugetext" id="container" invert="false"
    dojoType="dojox.layout.DragPane">
  <p style="color:#666; padding:8px; margin:0;">
    Lorem ipsum dolor sit amet, consectetuer adipiscing elit.
    In porta. Etiam mattis libero nec ante. Nam porta lacus eu
    ligula. Cras mauris. Suspendisse vel augue. Vivamus aliquam
    orci ut eros. Nunc eleifend sagittis turpis. Purus purus in
    nibh. Phasellus in nunc.
  </p>
</div>
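The effect of the invert property can be sketched as a pure function from a mouse drag to a new scroll offset. This is an illustrative sketch with an invented function name, not DragPane's source; the direction convention shown (content follows the mouse by default, and invert flips the axis) is an assumption for the example:

```javascript
// Sketch of how a drag pane might turn a mouse drag into a scroll
// offset. With invert=false the content follows the mouse (dragging
// right scrolls the view left); with invert=true the axis is flipped,
// so the content moves against the direction of the drag.
function dragToScroll(scrollLeft, scrollTop, deltaX, deltaY, invert) {
  var sign = invert ? 1 : -1;
  return {
    scrollLeft: scrollLeft + sign * deltaX,
    scrollTop: scrollTop + sign * deltaY
  };
}
```

For instance, starting at scroll offset (100, 100), a drag of 20px right and 10px up moves the offset to (80, 110) normally, and to (120, 90) with invert set.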