How-To Tutorials - Web Development


RESS – The idea and the Controversies

Packt
29 Oct 2013
10 min read
The RWD concept first appeared in 2010 in an article by Ethan Marcotte (available at http://alistapart.com/article/responsive-web-design). He presented an approach that allows us to progressively enhance page design within different viewing contexts with the help of fluid grids, flexible images, and media queries. This approach was opposed to the one that separates websites geared toward specific devices. Instead of two or more websites (desktop and mobile), we could have one that adapts to all devices.

The technical foundation of RWD (as proposed in Marcotte's article) consists of three things: fluid grids, flexible images, and media queries.

[Illustration: a fluid (and responsive) grid adapts to the device using both column width and column count.]

A fluid grid is basically nothing more than a concept of dividing the monitor width into modular columns, often accompanied by some kind of CSS framework (some of the best-known examples were the 960 grid system, Blueprint, Pure, the 1140px grid, and Elastic), that is, a base stylesheet that simplifies and standardizes writing website-specific CSS. What makes it fluid is the use of relative measurements such as %, em, or rem. As the screen (or window) width changes, the number of these columns changes (thanks to CSS statements enclosed in media queries). This allows us to adjust the design layout to device capabilities (screen width and pixel density in particular). Images in such a layout become fluid by using a simple technique of setting width: x% or max-width: 100% in CSS, which causes the image to scale proportionally.

With those two methods and a little help from media queries, one can radically change the page layout and handle the enormous, up to 800 percent, difference between the narrowest and the widest screens (WQXGA's 2560px versus the iPhone's 320px). This is a big step forward and a good base from which to start creating One Web, that is, to use one URL to deliver content to all devices. Unfortunately, that is not enough to achieve results that would provide an equally great experience and fast-loading websites for everybody.

The RESS idea

Besides screen width, we may need to take into account other things such as bandwidth and pay-per-bandwidth plans, processor speed, available memory, level of HTML/CSS compatibility, monitor color depth, and possible navigation methods (touch screen, buttons, and keyboard). On a practical level, this means we may have to optimize images and navigation patterns, and reduce page complexity for some devices. To make this possible, some server-side solutions need to be engaged. We may use the server side just for optimizing images, or server-side optimization may let us send pages with just some elements adjusted, or a completely changed page; we can even rethink the application structure to build a RESTful web interface and turn our server-side application into a web service. The more responsibility for device optimization we place on the server side, the closer we get to the old way of separate desktop and mobile websites on separate mobile domains, or to platform-specific iPhone, Android, or Windows applications.

There are many ways to build responsive websites, but there is no golden rule to tell you which way is best. It depends on the target audience, technical context, money, and time. Ultimately, the choice depends on the business decisions of the website owner.
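As a quick reference, here is a minimal CSS sketch of the client-side techniques described earlier in this section: a proportionally scaling image and a fluid column whose count changes with the viewport. The selectors and the 480px breakpoint are illustrative only, not taken from the article.

img {
    max-width: 100%;   /* flexible image: scales down with its fluid container */
    height: auto;
}

.column {
    float: left;
    width: 33.333%;    /* three fluid columns on wide screens */
}

@media screen and (max-width: 480px) {
    .column {
        width: 100%;   /* collapse to a single column on narrow screens */
    }
}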
When we decide to employ server-side logic to optimize components of a web page designed in a responsive way, we are going the RESS (Responsive Web Design with Server Side components) way. RESS was proposed by Luke Wroblewski on his blog as a result of his experience with extending RWD with server-side components. Essentially, the idea was based on storing IDs of resources (such as images) and serving different versions of the same resource, optimized for some defined classes of devices. Device detection, and assigning devices to their respective classes, can be based on libraries such as WURFL or YABFDL.

Controversies

It is worth noting that both of these approaches have raised many controversies. Introducing RWD has broken some long-established rules and habits, such as the standard screen width (the famous 960px maximum page width limit). It has called into question long-practiced ways of dealing with the mobile web (such as separate desktop and mobile websites). It is no surprise that it provokes both delight and rage. One can easily find people calling it fool's gold, useless, too difficult, a fad, amazing, future proof, and so on. Each of those opinions has a reason behind it, for better or worse. A glimpse at the following opinions may help us understand some of the key benefits and issues related to RWD.

"Separate mobile websites are a good thing"

You may have heard this line in an article by Jason Grigsby, CSS Media Query for Mobile is Fool's Gold, available at http://blog.cloudfour.com/css-media-query-for-mobile-is-fools-gold/. Separate mobile websites allow a reduction of bandwidth, pages that are less CPU and memory intensive, and at the same time the use of some mobile-only features such as geolocation. Also, not all mobile browsers are wise enough to understand media queries. That is generally true, and media queries are not enough in most scenarios, but with some JavaScript (Peter-Paul Koch's blog, available at http://www.quirksmode.org/blog/archives/2010/08/combining_media.html#more, describes a method to exclude some page elements or change the page structure via JS paired with media queries), it is possible to overcome many of those problems. At the same time, making a separate mobile website introduces its own problems and requires significant additional investment that can easily amount to tens or hundreds of times more than the RWD solution (detecting devices, changing application logic, writing separate templates, integrating, and testing the whole thing). Also, at the end of the day, your visitors may prefer the mobile version, but this doesn't have to be the case. Users often access the same content via various devices, and providing a consistent experience across all of them becomes more and more important.

The preceding controversy is just part of a wider discussion on channels for providing content on the Internet. RWD and RESS are relatively new kids on the block. For years, technologies for providing content to mobile devices have been built and used, from device-detection libraries to platform-specific applications (such as iStore, Google Play, and MS). When, in 2010, US smartphone users started to spend more time using their mobile apps than browsers (Mobile App Usage Further Dominates Web, Spurred by Facebook, at http://blog.flurry.com/bid/80241/Mobile-App-Usage-Further-Dominates-Web-Spurred-by-Facebook), some hailed it as dangerous for the Web (Apps: The Web Is The Platform, available at http://blog.mozilla.org/webdev/2012/09/14/apps-the-web-is-the-platform/).
A closer look at the stats reveals, though, that most of this time was spent playing games. No matter how much time kids can spend playing Angry Birds, now, more than two years later, people still prefer to read the news via a browser rather than via native mobile applications. The Future of Mobile News report from October 2012 reveals that, for accessing news, 61 percent of mobile users prefer a browser while 28 percent would rather use apps (Future of Mobile News, http://www.journalism.org/analysis_report/future_mobile_news). The British government is not keen on apps either; as they say, "Our position is that native apps are rarely justified" (UK Digital Cabinet Office blog, at http://digital.cabinetoffice.gov.uk/2013/03/12/were-not-appy-not-appy-at-all/). Recently, Tim Berners-Lee, the inventor of the Web, criticized closed-world apps such as those released by Apple for threatening the openness and universality that the architects of the Internet saw as central to its design. He explains it the following way: "When you make a link, you can link to anything. That means people must be able to put anything on the Web, no matter what computer they have, what software they use, or which human language they speak and regardless of whether they have a wired or a wireless Internet connection." This kind of thinking is in line with the RWD/RESS philosophy of having one URL for the same content, no matter which way you'd like to access it. Nonetheless, it is just one of the reasons why RWD became so popular during the last year.

"RWD is too difficult"

CSS coupled with JS can get really complex (some would say messy) and requires a lot of testing on all target browsers/platforms. That is, or was, true. Building RWD websites requires good CSS knowledge and some battlefield experience in this field. But hey, learning is the most important skill in this industry. It actually gets easier and easier, with new tools released nearly every week.

"RWD means degrading design"

Fluid layouts break the composition of the page; Mobile First and Progressive Enhancement mean, in fact, reducing design to a few simplistic and naive patterns. Actually, the Mobile First concept contains two ideas. One is design direction and the second is the structure of CSS stylesheets, in particular the order of media queries. With regard to design direction, Mobile First describes the sequence of designs: first the design for mobile should be created, and then the one for desktop. While there are several good reasons for using this approach, one should never forget the basic truth that, at the end of the day, only the quality of the designs matters, not the order they were created in. With regard to the stylesheet structure, Mobile First means that we first write statements for small screens and then add statements for wider screens, such as @media screen and (min-width: 480px). It is a design principle meant to simplify the whole thing. It is assumed here that the CSS for small screens is the simplest version, which will be progressively enhanced for larger screens. The idea is smart and helps to maintain well-structured CSS, but sometimes the opposite, Desktop First, approach seems natural. Typical examples are tables with many columns. The Mobile First principle is not a religious dogma and should not be treated as such. As a side note, it remains an open question why this is still named Mobile First, when the new iPad-related statements should come at the end (min-width: 2000px).
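As a small illustration of the Mobile First stylesheet structure discussed above, the base rules target small screens and wider screens are layered on top with min-width media queries. The selectors and breakpoints below are illustrative only:

.menu {
    width: 100%;               /* small screens first: full-width menu */
}

@media screen and (min-width: 480px) {
    .menu {
        width: 50%;            /* enhance for wider screens */
    }
}

@media screen and (min-width: 1024px) {
    .menu {
        width: 33.333%;        /* enhance again for desktops */
    }
}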
There are some examples of rather poor designs made by RWD celebrities. But there are also examples of great designs that happened thanks to the freedom that RWD gave to the web design world. The rapid increase in Internet access via mobile devices during 2012 made RWD one of the hottest topics in web design. The numbers vary across countries and websites, but no matter what numbers you look at, one thing is certain: mobile is already big and will soon get even bigger (valuable stats on mobile use are available at http://www.thinkwithgoogle.com/mobileplanet/en/). Statistics are not the only reason why Responsive Web Design became popular. Equally important are the benefits for web designers, users, website owners, and developers.

Summary

This article covered the RESS idea, as well as the controversies associated with it.

Resources for Article:

Further resources on this subject:
Introduction to RWD frameworks [Article]
Getting started with Modernizr using PHP IDE [Article]
Understanding Express Routes [Article]


Rendering web pages to PDF using Railo Open Source

Packt
01 Apr 2010
6 min read
As a pre-requisite for this tutorial, you should be familiar with HTML, CSS, and web technologies in general. You do not need to know any CFML (ColdFusion Markup Language). You should have an understanding of databases, particularly MySQL, which we will use in this example.

Server environment

For this tutorial, we are using a virtual machine (using VirtualBox) running Windows 2003 Server Standard, IIS, MySQL 5.1, and Railo 3.1. The code we will be writing is not platform specific. You can just as easily implement this on Linux or Mac OS X Server with MySQL and Apache.

Website architecture

Let's assume that we have the HTML layout code provided to us for our website, and that we need to make it "work" on the web server, pulling its page content from a MySQL database. In addition, we will add the capability for any page to render itself as a PDF for printing purposes. You will want to make sure you have done a few things first:

Make sure Railo and MySQL are running on your server (or VM)
Make sure you have created a MySQL database, and a user that has permission to use your database
For development (as opposed to production web servers), try running Railo Express, which runs in a terminal window. This allows you to see the templates it is running, and the full stack trace of errors when they occur.

Setting up the Railo datasource

The first thing we need to do is set up a Railo datasource. This named datasource will define the database credentials for our website. Open a web browser to the Railo Server Administrator, which is usually located at http://your.server.com/railo-context/admin/server.cfm. Check the Railo documentation if you have trouble opening the administrator. Enter your password to log in. Once logged in, click on the Services / Datasource link in the left navigation. At the bottom of this page is where you can create a new datasource. Enter a datasource name (letters and numbers only, no spaces, and start with a letter) and select MySQL as the database type. On the next screen, enter the MySQL server host name or IP address, and the username and password for the user you created. All of the other settings can be kept at their default values. Scroll to the bottom and click on the "Create" button to complete the datasource configuration. Railo Administrator will test your connection, so you can confirm that it is set up properly.

We created the datasource in the "server" context. When you deploy an application on a production server, you should create your datasources in the "web" context, so that they are only available to the website that needs them. Datasources created in the server context are accessible from any website on the same server.

Setting up the website structure

In the document root of your website (or in the directory you choose for this tutorial), you will need a couple of files:

index.cfm – this is the default document, like index.html or default.htm
header.cfm – this is the top portion of your web pages, what appears above your page content. This is where you would put your HTML "design" code.
footer.cfm – this is the bottom portion of your web pages, what shows up after your page content. This is where you would put your HTML "design" code.
layout.css – the basic CSS layout for our pages
styles.css – the CSS styles for your page content

Let's look first at header.cfm. This file contains what you would expect to see at the top of a standard HTML web page, including head, title, meta tags, and so on. There are a few different tags included in the content, however.
Any tags that begin with <cf…> are CFML language tags, which are processed by Railo and removed from the source code before the page is sent to your web browser. Just like PHP or ASP.NET code, if somebody views the source of your web page, they won't see your CFML code; rather, they will see the HTML code, and any code that your CFML tags generate.

<cfparam name="title" default="Untitled" />

This tag defines and sets a default value for a variable named "title." This value can be overridden using a <cfset …> tag, which we will see later.

<cfoutput>#title#</cfoutput>

Any content inside <cfoutput>…</cfoutput> tags will be included in the output to your web browser. In this case, the title variable is written to the page output. As with all <cf…> tags, the <cfoutput> tags themselves will be removed from the web page output. Anything inside # characters will be evaluated as an expression. So if your code was <cfoutput>title</cfoutput>, then the literal word "title" would be included in your page output. When you put # characters around it, Railo will evaluate the expression and replace it with the result.

header.cfm also includes the two .css files we need for our layout and styles. These are standard CSS, which should be familiar to you, and won't be covered in this article.

Let's next look at index.cfm.

<cfset title="Home Page">

This first tag sets a local variable called "title" and sets its value to "Home Page."

<cfinclude template="header.cfm">

This tag includes the file "header.cfm" into the current page, which means that any page output (the html, head, title, and other tags) will be included in this page's output. Notice that we set the local variable title in our page, and it gets used in the header.cfm file. Local variables are available to any pages that are included in the requested web page.

The "Hello World" content is the main textual content of this web page, and can be any HTML or CFML-generated content you want. After the main content, another <cfinclude…> tag includes footer.cfm, which contains the expected end-of-page tags such as </body> and </html>.

Take a look

If you open your web browser and browse to index.cfm, you will see your basic web page layout, with a page title and Hello World content. If you view the page source, you should see the combined content of header.cfm, index.cfm, and footer.cfm, with no <cf…> tags in the output, only pure HTML. You can easily organize and create static web pages like this, and as with many other programming languages, structuring your website like this has many benefits. If you make a change to your header or footer, then all your pages inherit the change immediately. Likewise, if you design a new layout or style for your site, applying it once to your header and footer applies it to your entire website.
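Based on the descriptions above, a minimal sketch of these templates might look like the following. The markup is illustrative and reconstructed from the text, not copied from the original article:

<!--- header.cfm: defines a default title and opens the page --->
<cfparam name="title" default="Untitled" />
<html>
<head>
    <title><cfoutput>#title#</cfoutput></title>
    <link rel="stylesheet" href="layout.css" />
    <link rel="stylesheet" href="styles.css" />
</head>
<body>

<!--- index.cfm: overrides the title, then pulls in the header and footer --->
<cfset title="Home Page">
<cfinclude template="header.cfm">
<p>Hello World</p>
<cfinclude template="footer.cfm">

<!--- footer.cfm: closes the page --->
</body>
</html>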


Changing Views

Packt
15 Feb 2016
25 min read
In this article by Christopher Pitt, author of the book React Components, we look at how to change sections without reloading the page. We'll use this knowledge to create the public pages of the website our CMS is meant to control.

Location, location, and location

Before we can learn about alternatives to reloading pages, let's take a look at how the browser manages reloads. You've probably encountered the window object. It's a global catch-all object for browser-based functionality and state. It's also the default this scope in any HTML page. We've even accessed window before. When we rendered to document.body or used document.querySelector, the window object was assumed. It's the same as if we were to call window.document.querySelector. Most of the time document is the only property we need. That doesn't mean it's the only property useful to us. Try the following, in the console:

console.log(window.location);

You should see something similar to:

Location {
    hash: ""
    host: "127.0.0.1:3000"
    hostname: "127.0.0.1"
    href: "http://127.0.0.1:3000/examples/login.html"
    origin: "http://127.0.0.1:3000"
    pathname: "/examples/login.html"
    port: "3000"
    ...
}

If we were trying to work out which components to show based on the browser URL, this would be an excellent place to start. Not only can we read from this object, but we can also write to it:

<script>
    window.location.href = "http://material-ui.com";
</script>

Putting this in an HTML page or entering that line of JavaScript in the console will redirect the browser to material-ui.com. It's the same as if you clicked on a link. And if it's to a different page (than the one the browser is pointing at), then it will cause a full page reload.

A bit of history

So how does this help us? We're trying to avoid full page reloads, after all. Let's experiment with this object. Let's see what happens when we add something like #page-admin to the URL: adding #page-admin to the URL leads to the window.location.hash property being populated with that value. What's more, changing the hash value won't reload the page! It's the same as if we clicked on a link that had that hash in the href attribute. We can modify it without causing full page reloads, and each modification will store a new entry in the browser history. Using this trick, we can step through a number of different "states" without reloading the page, and be able to backtrack through each with the browser's back button.

Using browser history

Let's put this trick to use in our CMS.
First, let's add a couple functions to our Nav component: export default (props) => {     // ...define class names       var redirect = (event, section) => {         window.location.hash = `#${section}`;         event.preventDefault();     }       return <div className={drawerClassNames}>         <header className="demo-drawer-header">             <img src="images/user.jpg"                  className="demo-avatar" />         </header>         <nav className={navClassNames}>             <a className="mdl-navigation__link"                href="/examples/login.html"                onClick={(e) => redirect(e, "login")}>                 <i className={buttonIconClassNames}                    role="presentation">                     lock                 </i>                 Login             </a>             <a className="mdl-navigation__link"                href="/examples/page-admin.html"                onClick={(e) => redirect(e, "page-admin")}>                 <i className={buttonIconClassNames}                    role="presentation">                     pages                 </i>                 Pages             </a>         </nav>     </div>; }; We add an onClick attribute to our navigation links. We've created a special function that will change window.location.hash and prevent the default full page reload behavior the links would otherwise have caused. This is a neat use of arrow functions, but we're ultimately creating three new functions in each render call. Remember that this can be expensive, so it's best to move the function creation out of render. We'll replace this shortly. It's also interesting to see template strings in action. Instead of "#" + section, we can use `#${section}` to interpolate the section name. It's not as useful in small strings, but becomes increasingly useful in large ones. Clicking on the navigation links will now change the URL hash. 
We can add to this behavior by rendering different components when the navigation links are clicked: import React from "react"; import ReactDOM from "react-dom"; import Component from "src/component"; import Login from "src/login"; import Backend from "src/backend"; import PageAdmin from "src/page-admin";   class Nav extends Component {     render() {         // ...define class names           return <div className={drawerClassNames}>             <header className="demo-drawer-header">                 <img src="images/user.jpg"                      className="demo-avatar" />             </header>             <nav className={navClassNames}>                 <a className="mdl-navigation__link"                    href="/examples/login.html"                    onClick={(e) => this.redirect(e, "login")}>                     <i className={buttonIconClassNames}                        role="presentation">                         lock                     </i>                     Login                 </a>                 <a className="mdl-navigation__link"                    href="/examples/page-admin.html"                    onClick={(e) => this.redirect(e, "page-admin")}>                     <i className={buttonIconClassNames}                        role="presentation">                         pages                     </i>                     Pages                 </a>             </nav>         </div>;     }       redirect(event, section) {         window.location.hash = `#${section}`;           var component = null;           switch (section) {             case "login":                 component = <Login />;                 break;             case "page-admin":                 var backend = new Backend();                 component = <PageAdmin backend={backend} />;                 break;         }           var layoutClassNames = [             "demo-layout",             "mdl-layout",             "mdl-js-layout",             "mdl-layout--fixed-drawer"         ].join(" ");           ReactDOM.render(             <div className={layoutClassNames}>                 <Nav />                 {component}             </div>,             document.querySelector(".react")         );           event.preventDefault();     } };   export default Nav; We've had to convert the Nav function to a Nav class. We want to create the redirect method outside of render (as that is more efficient) and also isolate the choice of which component to render. Using a class also gives us a way to name and reference Nav, so we can create a new instance to overwrite it from within the redirect method. It's not ideal packaging this kind of code within a component, so we'll clean that up in a bit. We can now switch between different sections without full page reloads. There is one problem still to solve. When we use the browser back button, the components don't change to reflect the component that should be shown for each hash. We can solve this in a couple of ways. 
The first approach we can try is checking the hash frequently: componentDidMount() {     var hash = window.location.hash;       setInterval(() => {         if (hash !== window.location.hash) {             hash = window.location.hash;             this.redirect(null, hash.slice(1), true);         }     }, 100); }   redirect(event, section, respondingToHashChange = false) {     if (!respondingToHashChange) {         window.location.hash = `#${section}`;     }       var component = null;       switch (section) {         case "login":             component = <Login />;             break;         case "page-admin":             var backend = new Backend();             component = <PageAdmin backend={backend} />;             break;     }       var layoutClassNames = [         "demo-layout",         "mdl-layout",         "mdl-js-layout",         "mdl-layout--fixed-drawer"     ].join(" ");       ReactDOM.render(         <div className={layoutClassNames}>             <Nav />             {component}         </div>,         document.querySelector(".react")     );       if (event) {         event.preventDefault();     } } Our redirect method has an extra parameter, to apply the new hash whenever we're not responding to a hash change. We've also wrapped the call to event.preventDefault in case we don't have a click event to work with. Other than those changes, the redirect method is the same. We've also added a componentDidMount method, in which we have a call to setInterval. We store the initial window.location.hash and check 10 times a second to see if it has change. The hash value is #login or #page-admin, so we slice the first character off and pass the rest to the redirect method. Try clicking on the different navigation links, and then use the browser back button. The second option is to use the newish pushState and popState methods on the window.history object. They're not very well supported yet, so you need to be careful to handle older browsers or sure you don't need to handle them. You can learn more about pushState and popState at https://developer.mozilla.org/en-US/docs/Web/API/History_API. Using a router Our hash code is functional but invasive. We shouldn't be calling the render method from inside a component (at least not one we own). So instead, we're going to use a popular router to manage this stuff for us. 
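As a rough sketch of that second option, pushState adds a history entry without reloading, and the popstate event fires when the user navigates back or forward. The renderSection helper below stands in for whatever code mounts the right component; it is not part of the book's code:

var redirect = (section) => {
    // add a history entry without reloading the page
    window.history.pushState({ section: section }, "", `#${section}`);
    renderSection(section);
};

window.addEventListener("popstate", (event) => {
    // called on back/forward; fall back to a default section
    var section = (event.state && event.state.section) || "login";
    renderSection(section);
});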
Download it with the following: $ npm install react-router --save Then we need to join login.html and page-admin.html back into the same file: <!DOCTYPE html> <html>     <head>         <script src="/node_modules/babel-core/browser.js"></script>         <script src="/node_modules/systemjs/dist/system.js"></script>         <script src="https://storage.googleapis.com/code.getmdl.io/1.0.6/material.min.js"></script>         <link rel="stylesheet" href="https://storage.googleapis.com/code.getmdl.io/1.0.6/material.indigo-pink.min.css" />         <link rel="stylesheet" href="https://fonts.googleapis.com/icon?family=Material+Icons" />         <link rel="stylesheet" href="admin.css" />     </head>     <body class="         mdl-demo         mdl-color--grey-100         mdl-color-text--grey-700         mdl-base">         <div class="react"></div>         <script>             System.config({                 "transpiler": "babel",                 "map": {                     "react": "/examples/react/react",                     "react-dom": "/examples/react/react-dom",                     "router": "/node_modules/react-router/umd/ReactRouter"                 },                 "baseURL": "../",                 "defaultJSExtensions": true             });               System.import("examples/admin");         </script>     </body> </html> Notice how we've added the ReactRouter file to the import map? We'll use that in admin.js. First, let's define our layout component: var App = function(props) {     var layoutClassNames = [         "demo-layout",         "mdl-layout",         "mdl-js-layout",         "mdl-layout--fixed-drawer"     ].join(" ");       return (         <div className={layoutClassNames}>             <Nav />             {props.children}         </div>     ); }; This creates the page layout we've been using and allows a dynamic content component. Every React component has a this.props.children property (or props.children in the case of a stateless component), which is an array of nested components. For example, consider this component: <App>     <Login /> </App> Inside the App component, this.props.children will be an array with a single item—an instance of the Login. Next, we'll define handler components for the two sections we want to route: var LoginHandler = function() {     return <Login />; }; var PageAdminHandler = function() {     var backend = new Backend();     return <PageAdmin backend={backend} />; }; We don't really need to wrap Login in LoginHandler but I've chosen to do it to be consistent with PageAdminHandler. PageAdmin expects an instance of Backend, so we have to wrap it as we see in this example. Now we can define routes for our CMS: ReactDOM.render(     <Router history={browserHistory}>         <Route path="/" component={App}>             <IndexRoute component={LoginHandler} />             <Route path="login" component={LoginHandler} />             <Route path="page-admin" component={PageAdminHandler} />         </Route>     </Router>,     document.querySelector(".react") ); There's a single root route, for the path /. It creates an instance of App, so we always get the same layout. Then we nest a login route and a page-admin route. These create instances of their respective components. We also define an IndexRoute so that the login page will be displayed as a landing page. 
We need to remove our custom history code from Nav: import React from "react"; import ReactDOM from "react-dom"; import { Link } from "router";   export default (props) => {     // ...define class names       return <div className={drawerClassNames}>         <header className="demo-drawer-header">             <img src="images/user.jpg"                  className="demo-avatar" />         </header>         <nav className={navClassNames}>             <Link className="mdl-navigation__link" to="login">                 <i className={buttonIconClassNames}                    role="presentation">                     lock                 </i>                 Login             </Link>             <Link className="mdl-navigation__link" to="page-admin">                 <i className={buttonIconClassNames}                    role="presentation">                     pages                 </i>                 Pages             </Link>         </nav>     </div>; }; And since we no longer need a separate redirect method, we can convert the class back into a statement component (function). Notice we've swapped anchor components for a new Link component. This interacts with the router to show the correct section when we click on the navigation links. We can also change the route paths without needing to update this component (unless we also change the route names). Creating public pages Now that we can easily switch between CMS sections, we can use the same trick to show the public pages of our website. Let's create a new HTML page just for these: <!DOCTYPE html> <html>     <head>         <script src="/node_modules/babel-core/browser.js"></script>         <script src="/node_modules/systemjs/dist/system.js"></script>     </head>     <body>         <div class="react"></div>         <script>             System.config({                 "transpiler": "babel",                 "map": {                     "react": "/examples/react/react",                     "react-dom": "/examples/react/react-dom",                     "router": "/node_modules/react-router/umd/ReactRouter"                 },                 "baseURL": "../",                 "defaultJSExtensions": true             });               System.import("examples/index");         </script>     </body> </html> This is a reduced form of admin.html without the material design resources. I think we can ignore the appearance of these pages for the moment, while we focus on the navigation. The public pages are almost 100%, so we can use stateless components for them. Let's begin with the layout component: var App = function(props) {     return (         <div className="layout">             <Nav pages={props.route.backend.all()} />             {props.children}         </div>     ); }; This is similar to the App admin component, but it also has a reference to a Backend. 
We define that when we render the components: var backend = new Backend(); ReactDOM.render(     <Router history={browserHistory}>         <Route path="/" component={App} backend={backend}>             <IndexRoute component={StaticPage} backend={backend} />             <Route path="pages/:page" component={StaticPage} backend={backend} />         </Route>     </Router>,     document.querySelector(".react") ); For this to work, we also need to define a StaticPage: var StaticPage = function(props) {     var id = props.params.page || 1;     var backend = props.route.backend;       var pages = backend.all().filter(         (page) => {             return page.id == id;         }     );       if (pages.length < 1) {         return <div>not found</div>;     }       return (         <div className="page">             <h1>{pages[0].title}</h1>             {pages[0].content}         </div>     ); }; This component is more interesting. We access the params property, which is a map of all the URL path parameters defined for this route. We have :page in the path (pages/:page), so when we go to pages/1, the params object is {"page":1}. We also pass a Backend to Page, so we can fetch all pages and filter them by page.id. If no page.id is provided, we default to 1. After filtering, we check to see if there are any pages. If not, we return a simple not found message. Otherwise, we render the content of the first page in the array (since we expect the array to have a length of at least 1). We now have a page for the public pages of the website: Summary In this article, we learned about how the browser stores URL history and how we can manipulate it to load different sections without full page reloads. Resources for Article:   Further resources on this subject: Introduction to Akka [article] An Introduction to ReactJs [article] ECMAScript 6 Standard [article]


Structuring Your Projects

Packt
24 Nov 2016
20 min read
In this article written by Serghei Iakovlev and David Schissler, authors of the book Phalcon Cookbook, we will cover:

Choosing the best place for an implementation
Automation of routine tasks
Creating the application structure by using code generation tools

Introduction

In this article you will learn that, when creating new projects, developers often face questions such as which components to create, where to place them in the application structure, what each component should implement, and what naming convention to follow. Actually, creating custom components isn't a difficult matter; we will sort it out in this article. We will create our own component, which will display different menus on your site depending on where we are in the application.

From one project to another, the developer's work is usually repeated. This holds true for tasks such as creating the project structure, configuration, creating data models, controllers, views, and so on. For those tasks, we will discover the power of Phalcon Developer Tools and how to use them. You will learn how to create an application skeleton by using one command, and even how to create a fully functional application prototype in less than 10 minutes without writing a single line of code.

Developers often come up against a situation where they need to create a lot of predefined code templates. Until you are really familiar with the framework, it can be useful to do everything manually, but all of us would like to reduce repetitive tasks. Phalcon tries to help you by providing an easy and at the same time flexible code generation tool named Phalcon Developer Tools. These tools help simplify the creation of CRUD components for a regular application, so you can create working code in a matter of seconds without writing it yourself.

Often, when creating an application using a framework, we need to extend or add functionality to the framework components. We don't have to reinvent the wheel by rewriting those components. We can use class inheritance and extensibility, but often this approach does not work. In such cases, it is better to use additional layers between the main application and the framework by creating a middleware layer. The term middleware has a wide range of meanings, but in the context of PHP web applications it means code that is called in turn on each request. We will look into the main principles of creating and using middleware in your application. We will not get into each solution in depth, but instead we will work with tasks that are common to most projects, and implementations extending Phalcon.

Choosing the best place for an implementation

Let's pretend you want to add a custom component. As the case may be, this component allows you to change your site navigation menu. For example, when you have a Sign In link on your navigation menu and you are logged in, that link needs to change to Sign Out. Then you ask yourself where the best place in the project is to put the code, where to place the files, how to name the classes, and how to make the autoloader load them.

Getting ready…

For successful implementation of this recipe you must have your application deployed.
By this we mean that you need to have a web server installed and configured for handling requests to your application, an application must be able to receive requests, and have implemented the necessary components such as Controllers, Views, and a bootstrap file. For this recipe, we assume that our application is located in the apps directory. If this is not the case, you should change this part of the path in the examples shown in this article. How to do it… Follow these steps to complete this recipe: Create the /library/ directory app, if you haven't got one, where user components will be stored. Next, create the Elements (app/library/Elements.php) component. This class extends PhalconMvcUserComponent. Generally, it is not necessary, but it helps get access to application services quickly. The contents of Elements should be: <?php namespace Library; use PhalconMvcUserComponent; use PhalconMvcViewSimple as View; class Elements extends Component { public function __construct() { // ... } public function getMenu() { // ... } } Now we register this class in the Dependency Injection Container. We use a shared instance in order to prevent creating new instances by each service resolving: $di->setShared('elements', function () { return new LibraryElements(); }); If your Session service is not initialized yet, it's time to do it in your bootstrap file. We use a shared instance for the following reasons: $di->setShared('session', function () { $session = new PhalconSessionAdapterFiles(); $session->start(); return $session; }); Create the templates directory within the directory with your views/templates. Then you need to tell the class autoloader about a new namespace, which we have just entered. Let's do it in the following way: $loader->registerNamespaces([ // The APP_PATH constant should point // to the project's root 'Library' => APP_PATH . '/apps/library/', // ... ]); Add the following code right after the tag body in the main layout of your application: <div class="container"> <div class="navbar navbar-inverse"> <div class="container-fluid"> <div class="navbar-header"> <button type="button" class="navbar-toggle collapsed" data-toggle="collapse" data-target="#blog-top- menu" aria-expanded="false"> <span class="sr-only">Toggle navigation</span> <span class="icon-bar"></span> <span class="icon-bar"></span> <span class="icon-bar"></span> </button> <a class="navbar-brand" href="#">Blog 24</a> </div> <?php echo $this->elements->getMenu(); ?> </div> </div> </div> Next, we need to create a template for displaying your top menu. Let's create it in views/templates/topMenu.phtml: <div class="collapse navbar-collapse" id="blog-top-menu"> <ul class="nav navbar-nav"> <li class="active"> <a href="#">Home</a> </li> </ul> <ul class="nav navbar-nav navbar-right"> <li> <?php if ($this->session->get('identity')): ?> <a href="#">Sign Out</a> <?php else: ?> <a href="#">Sign In</a> <?php endif; ?> </li> </ul> </div> Now, let's put the component to work. First, create the protected field $simpleView and initialize it in the controller: public function __construct() { $this->simpleView = new View(); $this->simpleView->setDI($this->getDI()); } And finally, implement the getMenu method as follows: public function getMenu() { $this->simpleView->setViewsDir($this->view- >getViewsDir()); return $this->simpleView->render('templates/topMenu'); } Open the main page of your site to ensure that your top menu is rendered. 
How it works… The main idea of our component is to generate a top menu, and to display the correct menu option depending on the situation, meaning whether the user is authorized or not. We create the user component, Elements, putting it in a place specially designed for the purpose. Of course, when creating the directory library and placing a new class there, we should tell the autoloader about a new namespace. This is exactly what we have done. However, we should take note of one important peculiarity. We should note that if you want to get access to your components quickly even in HTML templates like $this->elements, then you should put the components in the DI container. Therefore, we put our component, LibraryElements, in the container named elements. Since our component inherits PhalconMvcUserComponent, we are able to access all registered application services just by their names. For example, the following instruction, $this->view can be written in a long form, $this->getDi()->getShared('view'), but the first one is obviously more concise. Although not necessary, for application structure purposes, it is better to use a separate directory for different views not connected straight to specific controllers and actions. In our case, the directory views/templates serves for this purpose. We create a HTML template for menu rendering and place it in views/templates/topMenu.phtml. When using the method getMenu, our component will render the view topMenu.phtml and return HTML. In the method getMenu, we get the current path for all our views and set it for the PhalconMvcViewSimple component, created earlier in the constructor. In the view topMenu we access to the session component, which earlier we placed in the DI container. By generating the menu, we check whether the user is authorized or not. In the former case, we use the Sign out menu item, in the latter case we display the menu item with an invitation to Sign in. Automation of routine tasks The Phalcon project provides you with a great tool named Developer Tools. It helps automating repeating tasks, by means of code generation of components as well as a project skeleton. Most of the components of your application can be created only with one command. In this recipe, we will consider in depth the Developer Tools installation and configuration. Getting Ready… Before you begin to work on this recipe, you should have a DBMS configured, a web server installed and configured for handling requests from your application. You may need to configure a virtual host (this is optional) for your application which will receive and handle requests. You should be able to open your newly-created project in a browser at http://{your-host-here}/appname or http://{your-host-here}/, where your-host-here is the name of your project. You should have Git installed, too. In this recipe, we assume that your operating system is Linux. Developer Tools installation instructions for Mac OS X and Windows will be similar. You can find the link to the complete documentation for Mac OS X and Windows at the end of this recipe. We used the Terminal to create the database tables, and chose MySQL as our RDBMS. Your setup might vary. The choice of a tool for creating a table in your database, as well as a particular DBMS, is yours. Note that syntax for creating a table by using other DBMSs than MySQL may vary. 
How to do it… Follow these steps to complete this recipe: Clone Developer Tools in your home directory: git clone git@github.com:phalcon/phalcon-devtools.git devtools Go to the newly created directory devtools, run the./phalcon.sh command, and wait for a message about successful installation completion: $ ./phalcon.sh Phalcon Developer Tools Installer Make sure phalcon.sh is in the same dir as phalcon.php and that you are running this with sudo or as root. Installing Devtools... Working dir is: /home/user/devtools Generating symlink... Done. Devtools installed! Run the phalcon command without arguments to see the available command list and your current Phalcon version: $ phalcon Phalcon DevTools (3.0.0) Available commands: commands (alias of: list, enumerate) controller (alias of: create-controller) model (alias of: create-model) module (alias of: create-module) all-models (alias of: create-all-models) project (alias of: create-project) scaffold (alias of: create-scaffold) migration (alias of: create-migration) webtools (alias of: create-webtools) Now, let's create our project. Go to the folder where you plan to create the project and run the following command: $ phalcon project myapp simple Open the website which you have just created with the previous command in your browser. You should see a message about the successful installation. Create a database for your project: mysql -e 'CREATE DATABASE myapp' -u root -p You will need to configure our application to connect to the database. Open the file app/config/config.php and correct the database connection configuration. Draw attention to the baseUri: parameter if you have not configured your virtual host according to your project. The value of this parameter must be / or /myapp/. As the result, your configuration file must look like this: <?php use PhalconConfig; defined('APP_PATH') || define('APP_PATH', realpath('.')); return new Config([ 'database' => [ 'adapter' => 'Mysql', 'host' => 'localhost', 'username' => 'root', 'password' => '', 'dbname' => 'myapp', 'charset' => 'utf8', ], 'application' => [ 'controllersDir' => APP_PATH . '/app/controllers/', 'modelsDir' => APP_PATH . '/app/models/', 'migrationsDir' => APP_PATH . '/app/migrations/', 'viewsDir' => APP_PATH . '/app/views/', 'pluginsDir' => APP_PATH . '/app/plugins/', 'libraryDir' => APP_PATH . '/app/library/', 'cacheDir' => APP_PATH . '/app/cache/', 'baseUri' => '/myapp/', ] ]); Now, after you have configured the database access, let's create a users table in your database. Create the users table and fill it with the primary data: CREATE TABLE `users` ( `id` INT(11) unsigned NOT NULL AUTO_INCREMENT, `email` VARCHAR(128) NOT NULL, `first_name` VARCHAR(64) DEFAULT NULL, `last_name` VARCHAR(64) DEFAULT NULL, `created_at` TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP, PRIMARY KEY (`id`), UNIQUE KEY `users_email` (`email`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8; INSERT INTO `users` (`email`, `first_name`, `last_name`) VALUES ('john@doe.com', 'John', 'Doe'), ('janie@doe.com', 'Janie', 'Doe'); After that we need to create a new controller, UsersController. This controller must provide us with the main CRUD actions on the Users model and, if necessary, display data with the appropriate views. 
Lets do it with just one command: $ phalcon scaffold users In your web browser, open the URL associated with your newly created resource User and try to find one of the users of our database table at http://{your-host-here}/appname/users (or http://{your-host-here}/users, depending on how you have configured your server for application request handling. Finally, open your project in your file manager to see all the project structure created with Developer Tools: +-- app ¦ +-- cache ¦ +-- config ¦ +-- controllers ¦ +-- library ¦ +-- migrations ¦ +-- models ¦ +-- plugins ¦ +-- schemas ¦ +-- views ¦ +-- index ¦ +-- layouts ¦ +-- users +-- public +-- css +-- files +-- img +-- js +-- temp How it works… We installed Developer Tools with only two commands, git clone and ./phalcon. This is all we need to start using this powerful code generation tool. Next, using only one command, we created a fully functional application environment. At this stage, the application doesn't represent something outstanding in terms of features, but we have saved time from manually creating the application structure. Developer Tools did that for you! If after this command completion you examine your newly created project, you may notice that the primary application configuration has been generated also, including the bootstrap file. Actually, the phalcon project command has additional options that we have not demonstrated in this recipe. We are focusing on the main commands. Enter the command help to see all available project creating options: $ phalcon project help In the modern world, you can hardly find a web application which works without access to a database. Our application isn't an exception. We created a database for our application, and then we created a users table and filled it with primary data. Of course, we need to supply our application with what we have done in the app/config/config.php file with the database access parameters as well as the database name. After the successful database and table creation, we used the scaffold command for the pre-defined code template generation, particularly the Users controller with all main CRUD actions, all the necessary views, and the Users model. As before, we have used only one command to generate all those files. Phalcon Developer Tools is equipped with a good amount of different useful tools. To see all the available options, you can use the command help. We have taken only a few minutes to create the first version of our application. Instead of spending time with repetitive tasks (such as the creation of the application skeleton), we can now use that time to do more exciting tasks. Phalcon Developer Tools helps us save time where possible. But wait, there is more! The project is evolving, and it becomes more featureful day by day. If you have any problems you can always visit the project on GitHub https://github.com/phalcon/phalcon-devtools and search for a solution. There's more… You can find more information on Phalcon Developer Tools installation for Windows and OS X at: https://docs.phalconphp.com/en/latest/reference/tools.html. More detailed information on web server configuration can be found at: https://docs.phalconphp.com/en/latest/reference/install.html Creating the application structure by using code generation tools In the following recipe, we will discuss available code generation tools that can be used for creating a multi-module application. We don't need to create the application structure and main components manually. 
Getting Ready… Before you begin, you need to have Git installed, as well as any DBMS (for example, MySQL, PostgreSQL, SQLite, and the like), the Phalcon PHP extension (usually it is named php5-phalcon) and a PHP extension, which offers database connectivity support using PDO (for example, php5-mysql, php5-pgsql or php5-sqlite, and the like). You also need to be able to create tables in your database. To accomplish the following recipe, you will require Phalcon Developer Tools. If you already have it installed, you may skip the first three steps related to the installation and go to the fourth step. In this recipe, we assume that your operating system is Linux. Developer Tools installation instructions for Mac OS X and Windows will be similar. You can find the link to the complete documentation for Mac OS X and Windows at the end of this recipe. We used the Terminal to create the database tables, and chose MySQL as our RDBMS. Your setup might vary. The choice of a tool for creating a table in your database, as well as particular DBMS, is yours. Note that syntax for creating a table by using other DBMSs than MySQL may vary. How to do it… Follow these steps to complete this recipe: First you need to decide where you will install Developer Tools. Put the case that you are going to place Developer Tools in your home directory. Then, go to your home directory and run the following command: git clone git@github.com:phalcon/phalcon-devtools.git Now browse to the newly created phalcon-devtools directory and run the following command to ensure that there are no problems: ./phalcon.sh Now, as far as you have Developer Tools installed, browse to the directory, where you intend to create your project, and run the following command: phalcon project blog modules If there were no errors during the previous step, then create a Help Controller by running the following command: phalcon controller Help --base-class=ControllerBase — namespace=Blog\Frontend\Controllers Open the newly generated HelpController in the apps/frontend/controllers/HelpController.php file to ensure that you have the needed controller, as well as the initial indexAction. Open the database configuration in the Frontend module, blog/apps/frontend/config/config.php, and edit the database configuration according to your current environment. Enter the name of an existing database user and a password that has access to that database, and the application database name. You can also change the database adapter that your application needs. If you do not have a database ready, you can create one now. Now, after you have configured the database access, let's create a users table in your database. 
Create the users table and fill it with the primary data: CREATE TABLE `users` ( `id` INT(11) unsigned NOT NULL AUTO_INCREMENT, `email` VARCHAR(128) NOT NULL, `first_name` VARCHAR(64) DEFAULT NULL, `last_name` VARCHAR(64) DEFAULT NULL, `created_at` TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP, PRIMARY KEY (`id`), UNIQUE KEY `users_email` (`email`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8; INSERT INTO `users` (`email`, `first_name`, `last_name`) VALUES ('john@doe.com', 'John', 'Doe'), ('janie@doe.com', 'Janie', 'Doe'); Next, let's create Controller, Views, Layout, and Model by using the scaffold command: phalcon scaffold users --ns- controllers=Blog\Frontend\Controllers —ns- models=Blog\Frontend\Models Open the newly generated UsersController located in the apps/frontend/controllers/UsersController.php file to ensure you have generated all actions needed for user search, editing, creating, displaying, and deleting. To check if all actions work as designed, if you have a web server installed and configured for this recipe, you can go to http://{your-server}/users/index. In so doing, you can make sure that the required Users model is created in the apps/frontend/models/Users.php file, all the required views are created in the apps/frontend/views/users folder, and the user layout is created in the apps/frontend/views/layouts folder. If you have a web server installed and configured for displaying the newly created site, go to http://{your-server}/users/search to ensure that the users from our table are shown. How it works… In the world of programming, code generation is designed to lessen the burden of manually creating repeated code by using predefined code templates. The Phalcon framework provides perfect code generation tools which come with Phalcon Developer Tools. We start with the installation of Phalcon Developer Tools. Note, that if you already have Developer Tools installed, you should skip the steps involving these installation. Next, we generate a fully functional MVC application, which implements the multi-module principle. One command is enough to get a working application at once. We save ourselves the trouble of creating the application directory structure, creating the bootstrap file, creating all the required files, and setting up the initial application structure. For that end, we use only one command. It's really great, isn't it? Our next step is creating a controller. In our example, we use HelpController, which displays just such an approach to creating controllers. Next, we create the table users in our database and fill it with data. With that done, we use a powerful tool for generating predefined code templates, which is called Scaffold. Using only one command in the Terminal, we generate the controller UsersController with all the necessary actions and appropriate views. Besides this, we get the Users model and required layout. If you have a web server configured you can check out the work of Developer Tools at http://{your-server}/users/index. When we use the Scaffold command, the generator determines the presence and names of our table fields. Based on these data, the tool generates a model, as well as views with the required fields. The generator provides you with ready-to-use code in the controller, and you can change this code according to your needs. However, even if you don't change anything, you can use your controller safely. You can search for users, edit and delete them, create new users, and view them. 
And all of this was made possible with one command. We have discussed only some of the features of code generation; Phalcon Developer Tools has many more. For help on the available commands, run the phalcon command without arguments.

There's more…

For more detailed information on the installation and configuration of PDO in PHP, visit http://php.net/manual/en/pdo.installation.php. You can find detailed Phalcon Developer Tools installation instructions at https://docs.phalconphp.com/en/latest/reference/tools.html. For more information on Scaffold, refer to https://en.wikipedia.org/wiki/Scaffold_(programming).

Summary

In this article, you learned about the automation of routine tasks and creating the application structure.

Resources for Article:

Further resources on this subject: Using Phalcon Models, Views, and Controllers [Article], Phalcon's ORM [Article], Planning and Structuring Your Test-Driven iOS App [Article]
Upgrading from Magento 1

Packt
03 Nov 2015
4 min read
In Magento 2 Development Cookbook by Bart Delvaux, the overarching goal is to provide you with a wide range of techniques to modify and extend the functionality of your online store. It contains easy-to-understand recipes starting with the basics and moving on to cover advanced topics. Many recipes work with code examples that can be downloaded from the book's website. (For more resources related to this topic, see here.)

Why Magento 2

Solve common problems encountered while extending your Magento 2 store to fit your business needs. Explore exciting and enhanced features of Magento 2 such as customizable security permissions, intelligent filtered search options, and easy third-party integration, among others. Learn to build and maintain a Magento 2 shop via a visual-based page editor and customize the look and feel using Magento 2 offerings on the go.

What this article covers

This article covers preparing an upgrade from Magento 1.

Preparing an upgrade from Magento 1

The differences between Magento 1 and Magento 2 are big. The code has a whole new structure with a lot of improvements, but there is one big disadvantage: what do you do if you want to upgrade your Magento 1 shop to a Magento 2 shop? Magento created an upgrade tool that migrates the data of a Magento 1 database to the right structure for a Magento 2 database. The custom modules in your Magento 1 shop will not work in Magento 2. It is possible that some of your modules will have a Magento 2 version and, depending on the module, the module author will provide a migration tool to migrate the data that belongs to the module.

Getting ready

Before we get started, make sure you have an empty (without sample data) Magento 2 installation with the same version as the migration tool that is available at https://github.com/magento/data-migration-tool-ce.

How to do it

In your Magento 2 installation (with the same version as the migration tool), run the following commands:

  composer config repositories.data-migration-tool git https://github.com/magento/data-migration-tool-ce
  composer require magento/data-migration-tool:dev-master

Install Magento 2 with an empty database by running the installer. Make sure you configure it with the right time zone and currencies. When these steps are done, you can test the tool by running the following command:

  php vendor/magento/data-migration-tool/bin/migrate

This command will print the usage of the command. The next thing is creating the configuration files. Examples of the configuration files are in the vendor/magento/data-migration-tool/etc/<version> folder. We can create a copy of this folder where we can set our custom configuration values. For a Magento 1.9 installation, we have to run the following cp command:

  cp -R vendor/magento/data-migration-tool/etc/ce-to-ce/1.9.1.0/ vendor/magento/data-migration-tool/etc/ce-to-ce/packt-migration

Open the vendor/magento/data-migration-tool/etc/ce-to-ce/packt-migration/config.xml.dist file and search for the source/database and destination/database tags.
Change the values of these database settings to your own database settings, as in the following code:

  <source>
    <database host="localhost" name="magento1" user="root"/>
  </source>
  <destination>
    <database host="localhost" name="magento2_migration" user="root"/>
  </destination>

Rename that file to config.xml with the following command:

  mv vendor/magento/data-migration-tool/etc/ce-to-ce/packt-migration/config.xml.dist vendor/magento/data-migration-tool/etc/ce-to-ce/packt-migration/config.xml

How it works

By adding a Composer dependency, we installed the data migration tool for Magento 2 in the codebase. This migration tool is a PHP command-line script that handles the migration steps from a Magento 1 shop. In the etc folder of the migration module, there is an example configuration for an empty Magento 1.9 shop. If you want to migrate an existing Magento 1 shop, you have to customize these configuration files so that they match your preferred state. In the next recipe, we will learn how to use the script to start the migration.

Who this book is written for

This book is packed with a wide range of techniques to modify and extend the functionality of your online store. It contains easy-to-understand recipes starting with the basics and moving on to cover advanced topics. Many recipes work with code examples that can be downloaded from the book's website.

Summary

In this article, we learned how to prepare an upgrade from Magento 1. Read Magento 2 Development Cookbook to gain detailed knowledge of Magento 2 workflows, explore use cases for advanced features, craft well thought out orchestrations, troubleshoot unexpected behavior, and extend Magento 2 through customizations.

Other related titles are: Magento: Beginner's Guide - Second Edition, Mastering Magento, Magento: Beginner's Guide, and Mastering Magento Theme Design.

Resources for Article:

Further resources on this subject: Creating a Responsive Magento Theme with Bootstrap 3 [article], Social Media and Magento [article], Optimizing Magento Performance — Using HHVM [article]
The Dialog Widget

Packt
30 Oct 2013
14 min read
(For more resources related to this topic, see here.)

Wijmo additions to the dialog widget at a glance

By default, the dialog window includes the pin, toggle, minimize, maximize, and close buttons. Pinning the dialog to a location on the screen disables the dragging feature on the title bar; the dialog can still be resized. Maximizing the dialog makes it take up the area inside the browser window. Toggling it expands or collapses it so that the dialog contents are shown or hidden with the title bar remaining visible. If these buttons cramp your style, they can be turned off with the captionButtons option. You can see how the dialog is presented in the browser from the following screenshot:

Wijmo features an additional API compared to jQuery UI for changing the behavior of the dialog. The new API is mostly for the buttons in the title bar and for managing window stacking. Window stacking determines which windows are drawn on top of other ones. Clicking on a dialog raises it above other dialogs and changes their window stacking settings. The following lists show the options, events, and methods added in Wijmo:

Options: captionButtons, contentUrl, disabled, expandingAnimation, stack, zIndex
Events: blur, buttonCreating, stateChanged
Methods: disable, enable, getState, maximize, minimize, pin, refresh, reset, restore, toggle, widget

The contentUrl option allows you to specify a URL to load within the window. The expandingAnimation option is applied when the dialog is toggled from a collapsed state to an expanded state. The stack and zIndex options determine whether the dialog sits on top of other dialogs. Similar to the blur event on input elements, the blur event for the dialog is fired when the dialog loses focus. The buttonCreating method is called when buttons are created and can modify the buttons on the title bar. The disable method disables the event handlers for the dialog: it prevents the default button actions and disables dragging and resizing. The widget method returns the dialog HTML element.

The methods maximize, minimize, pin, refresh, reset, restore, and toggle are available as buttons on the title bar. The best way to see what they do is to play around with them. In addition, the getState method is used to find the dialog state and returns either maximized, minimized, or normal. Similarly, the stateChanged event is fired when the state of the dialog changes.

The methods are called as a parameter to the wijdialog method. To disable button interactions, pass the string disable:

  $("#dialog").wijdialog("disable");

Many of the methods come as pairs, and enable and disable are one of them. Calling enable enables the buttons again. Another pair is restore/minimize: minimize hides the dialog in a tray at the bottom left of the screen, and restore sets the dialog back to its normal size and displays it again.

The most important option for usability is the captionButtons option. Although users are likely familiar with the minimize, resize, and close buttons, the pin and toggle buttons are not featured in common desktop environments. Therefore, you will want to choose the buttons that are visible depending on your use of the dialog box in your project. To turn off a button on the title bar, set the visible option to false.
A default jQuery UI dialog window with only the close button can be created with: $("#dialog").wijdialog({captionButtons: { pin: { visible: false }, refresh: { visible: false }, toggle: { visible: false }, minimize: { visible: false }, maximize: { visible: false } } }); The other options for each button are click, iconClassOff, and iconClassOn. The click option specifies an event handler for the button. Nevertheless, the buttons come with default actions and you will want to use different icons for custom actions. That's where iconClass comes in. iconClassOn defines the CSS class for the button when it is loaded. iconClassOff is the class for the button icon after clicking. For a list of available jQuery UI icons and their classes, see http://jquery-ui.googlecode.com/svn/tags/1.6rc5/tests/static/icons.html. Our next example uses ui-icon-zoomin, ui-icon-zoomout, and ui-icon-lightbulb. They can be found by toggling the text for the icons on the web page as shown in the preceding screenshot. Adding custom buttons jQuery UI's dialog API lacks an option for configuring the buttons shown on the title bar. Wijmo not only comes with useful default buttons, but also lets you override them easily. <!DOCTYPE HTML> <html> <head> ... <style> .plus { font-size: 150%; } </style> <script id="scriptInit" type="text/javascript"> $(document).ready(function () { $('#dialog').wijdialog({ autoOpen: true, captionButtons: { pin: { visible: false }, refresh: { visible: false }, toggle: {visible: true, click: function () { $('#dialog').toggleClass('plus') }, iconClassOn: 'ui-icon-zoomin', iconClassOff: 'ui-icon-zoomout'} , minimize: { visible: false }, maximize: {visible: true, click: function () { alert('To enloarge text, click the zoom icon.') }, iconClassOn: 'ui-icon-lightbulb' }, close: {visible: true, click: self.close, iconClassOn:'ui-icon-close'} } }); }); </script> </head> <body> <div id="dialog" title="Basic dialog"> <p>Loremipsum dolor sitamet, consectetueradipiscingelit. Aeneancommodo ligula eget dolor.Aeneanmassa. Cum sociisnatoquepenatibusetmagnis dis parturient montes, nasceturridiculus mus. Donec quam felis, ultriciesnec, pellentesqueeu, pretiumquis, sem. Nullaconsequatmassaquisenim. Donecpedejusto, fringillavel, aliquetnec, vulputate</p> </div> </body> </html> We create a dialog window passing in the captionButtons option. The pin, refresh, and minimize buttons have visible set to false so that the title bar is initialized without them. The final output looks as shown in the following screenshot: In addition, the toggle and maximize buttons are modified and given custom behaviors. The toggle button toggles the font size of the text by applying or removing a CSS class. Its default icon, set with iconClassOn, indicates that clicking on it will zoom in on the text. Once clicked, the icon changes to a zoom out icon. Likewise, the behavior and appearance of the maximize button have been changed. In the position where the maximize icon was displayed in the title bar previously, there is now a lightbulb icon with a tip. Although this method of adding new buttons to the title bar seems clumsy, it is the only option that Wijmo currently offers. Adding buttons in the content area is much simpler. The buttons option specifies the buttons to be displayed in the dialog window content area below the title bar. 
For example, to display a simple confirmation button: $('#dialog').wijdialog({buttons: {ok: function () { $(this).wijdialog('close') }}}); The text displayed on the button is ok and clicking on the button hides the dialog. Calling $('#dialog').wijdialog('open') will show the dialog again. Configuring the dialog widget's appearance Wijmo offers several options that change the dialog's appearance including title, height, width, and position. The title of the dialog can be changed either by setting the title attribute of the div element of the dialog, or by using the title option. To change the dialog's theme, you can use CSS styling on the wijmo-wijdialog and wijmo-wijdialog-captionbutton classes: <!DOCTYPE HTML> <html> <head> ... <style> .wijmo-wijdialog { /*rounded corners*/ -webkit-border-radius: 12px; border-radius: 12px; background-clip: padding-box; /*shadow behind dialog window*/ -moz-box-shadow: 3px 3px 5px 6px #ccc; -webkit-box-shadow: 3px 3px 5px 6px #ccc; box-shadow: 3px 3px 5px 6px #ccc; /*fade contents from dark gray to gray*/ background-image: -webkit-gradient(linear, left top, left bottom, from(#444444), to(#999999)); background-image: -webkit-linear-gradient(top, #444444, #999999); background-image: -moz-linear-gradient(top, #444444, #999999); background-image: -o-linear-gradient(top, #444444, #999999); background-image: linear-gradient(to bottom, #444444, #999999); background-color: transparent; text-shadow: 1px 1px 3px #888; } </style> <script id="scriptInit" type="text/javascript"> $(document).ready(function () { $('#dialog').wijdialog({width: 350}); }); </script> </head> <body> <div id="dialog" title="Subtle gradients"> <p>Loremipsum dolor sitamet, consectetueradipiscingelit. Aeneancommodo ligula eget dolor.Aeneanmassa. Cum sociisnatoquepenatibusetmagnis dis parturient montes, nasceturridiculus mus. Donec quam felis, ultriciesnec, pellentesqueeu, pretiumquis, sem. Nullaconsequatmassaquisenim. Donecpedejusto, fringillavel, aliquetnec, vulputate </p> </div> </body> </html> We now add rounded boxes, a box shadow, and a text shadow to the dialog box. This is done with the .wijmo-wijdialog class. Since many of the CSS3 properties have different names on different browsers, the browser specific properties are used. For example, -webkit-box-shadow is necessary on Webkit-based browsers. The dialog width is set to 350 px when initialized so that the title text and buttons all fit on one line. Loading external content Wijmo makes it easy to load content in an iFrame. Simply pass a URL with the contentUrl option: $(document).ready(function () { $("#dialog").wijdialog({captionButtons: { pin: { visible: false }, refresh: { visible: true }, toggle: { visible: false }, minimize: { visible: false }, maximize: { visible: true }, close: { visible: false } }, contentUrl: "http://wijmo.com/demo/themes/" }); }); This will load the Wijmo theme explorer in a dialog window with refresh and maximize/restore buttons. This output can be seen in the following screenshot: The refresh button reloads the content in the iFrame, which is useful for dynamic content. The maximize button resizes the dialog window. Form Components Wijmo form decorator widgets for radio button, checkbox, dropdown, and textbox elements give forms a consistent visual style across all platforms. There are separate libraries for decorating the dropdown and other form elements, but Wijmo gives them a consistent theme. jQuery UI lacks form decorators, leaving the styling of form components to the designer. 
Using Wijmo form components saves time during development and presents a consistent interface across all browsers. Checkbox The checkbox widget is an excellent example of the style enhancements that Wijmo provides over default form controls. The checkbox is used if multiple choices are allowed. The following screenshot shows the different checkbox states: Wijmo adds rounded corners, gradients, and hover highlighting to the checkbox. Also, the increased size makes it more usable. Wijmo checkboxes can be initialized to be checked. The code for this purpose is as follows: <!DOCTYPE HTML> <html> <head> ... <script id="scriptInit" type="text/javascript"> $(document).ready(function () { $("#checkbox3").wijcheckbox({checked: true}); $(":input[type='checkbox']:not(:checked)").wijcheckbox(); }); </script> <style> div { display: block; margin-top: 2em; } </style> </head> <body> <div><input type='checkbox' id='checkbox1' /><label for='checkbox1'>Unchecked</label></div> <div><input type='checkbox' id='checkbox2' /><label for='checkbox2'>Hover</label></div> <div><input type='checkbox' id='checkbox3' /><label for='checkbox3'>Checked</label></div> </body> </html>. In this instance, checkbox3 is set to Checked as it is initialized. You will not get the same result if one of the checkboxes is initialized twice. Here, we avoid that by selecting the checkboxes that are not checked after checkbox3 is set to be Checked. Radio buttons Radio buttons, in contrast with checkboxes, allow only one of the several options to be selected. In addition, they are customized through the HTML markup rather than a JavaScript API. To illustrate, the checked option is set by the checked attribute: <input type="radio" checked /> jQuery UI offers a button widget for radio buttons, as shown in the following screenshot, which in my experience causes confusion as users think that they can select multiple options: The Wijmo radio buttons are closer in appearance to regular radio buttons so that users would expect the same behavior, as shown in the following screenshot: Wijmo radio buttons are initialized by calling the wijradiomethod method on radio button elements: <!DOCTYPE html> <html> <head> ... <script id="scriptInit" type="text/javascript">$(document).ready(function () { $(":input[type='radio']").wijradio({ changed: function (e, data) { if (data.checked) { alert($(this).attr('id') + ' is checked') } } }); }); </script> </head> <body> <div id="radio"> <input type="radio" id="radio1" name="radio"/><label for="radio1">Choice 1</label> <input type="radio" id="radio2" name="radio" checked="checked"/><label for="radio2">Choice 2</label> <input type="radio" id="radio3" name="radio"/><label for="radio3">Choice 3</label> </div> </body> </html> In this example, the changed option, which is also available for checkboxes, is set to a handler. The handler is passed a jQuery.Event object as the first argument. It is just a JavaScript event object normalized for consistency across browsers. The second argument exposes the state of the widget. For both checkboxes and radio buttons, it is an object with only the checked property. Dropdown Styling a dropdown to be consistent across all browsers is notoriously difficult. Wijmo offers two options for styling the HTML select and option elements. When there are no option groups, the ComboBox is the better widget to use. For a dropdown with nested options under option groups, only the wijdropdown widget will work. 
As an example, consider a country selector categorized by continent: <!DOCTYPE HTML> <html> <head> ... <script id="scriptInit" type="text/javascript"> $(document).ready(function () { $('select[name=country]').wijdropdown(); $('#reset').button().click(function(){ $('select[name=country]').wijdropdown('destroy') }); $('#refresh').button().click(function(){ $('select[name=country]').wijdropdown('refresh') }) }); </script> </head> <body> <button id="reset"> Reset </button> <button id="refresh"> Refresh </button> <select name="country" style="width:170px"> <optgroup label="Africa"> <option value="gam">Gambia</option> <option value="mad">Madagascar</option> <option value="nam">Namibia</option> </optgroup> <optgroup label="Europe"> <option value="fra">France</option> <option value="rus">Russia</option> </optgroup> <optgroup label="North America"> <option value="can">Canada</option> <option value="mex">Mexico</option> <option selected="selected" value="usa">United States</option> </optgroup> </select> </body> </html> The select element's width is set to 170 pixels so that when the dropdown is initialized, both the dropdown menu and items have a width of 170 pixels. This allows the North America option category to be displayed on a single line, as shown in the following screenshot. Although the dropdown widget lacks a width option, it takes the select element's width when it is initialized. To initialize the dropdown, call the wijdropdown method on the select element: $('select[name=country]').wijdropdown(); The dropdown element uses the blind animation to show the items when the menu is toggled. Also, it applies the same click animation as on buttons to the slider and menu: To reset the dropdown to a select box, I've added a reset button that calls the destroy method. If you have JavaScript code that dynamically changes the styling of the dropdown, the refresh method applies the Wijmo styles again. Summary The Wijmo dialog widget is an extension of the jQuery UI dialog. In this article, the features unique to Wijmo's dialog widget are explored and given emphasis. I showed you how to add custom buttons, how to change the dialog appearance, and how to load content from other URLs in the dialog. We also learned about Wijmo's form components. A checkbox is used when multiple items can be selected. Wijmo's checkbox widget has style enhancements over the default checkboxes. Radio buttons are used when only one item is to be selected. While jQuery UI only supports button sets on radio buttons, Wijmo's radio buttons are much more intuitive. Wijmo's dropdown widget should only be used when there are nested or categorized <select> options. The ComboBox comes with more features when the structure of the options is flat. Resources for Article: Further resources on this subject: Wijmo Widgets [Article] jQuery Animation: Tips and Tricks [Article] Building a Custom Version of jQuery [Article]
Roles and Responsibilities for Records Management Implementation in Alfresco 3

Packt
17 Jan 2011
10 min read
  Alfresco 3 Records Management Comply with regulations and secure your organization’s records with Alfresco Records Management. Successfully implement your records program using Alfresco Records Management, fully certified for DoD-5015.2 compliance The first and only book to focus exclusively on Alfresco Records Management Step-by-step instructions describe how to identify records, organize records, and manage records to comply with regulatory requirements Learn in detail about the software internals to get a jump-start on performing customizations Appendix         Read more about this book       (For more resources on this subject, see here.) The steering committee To succeed, our Records Management program needs continued commitment from all levels of the organization. A good way to cultivate that commitment is by establishing a steering committee for the records program. From a high level, the steering committee will direct the program, set priorities for it, and assist in making decisions. The steering committee will provide the leadership to ensure that the program is adequately funded, staffed, properly prioritized with business objectives, and successfully implemented. Committee members should know the organization well and be in a position to be both able and willing to make decisions. Once the program is implemented, the steering committee should not be dissolved; it still will play an important function. It will continue to meet and oversee the Records Management program to make sure that it is properly maintained and updated. The Records Management system is not something that can simply be turned on and forgotten. The steering committee should meet regularly, track the progress of the implementation, keep abreast of changes in regulatory controls, and be proactive in addressing the needs of the Records Management program. Key stakeholders The Records Management steering committee should include executives and senior management from core business units such as Compliance, Legal, Finance, IT, Risk Management, Human Resources, and any other groups that will be affected by Records Management. Each of these groups will represent the needs and responsibilities of their respective groups. They will provide input relative to policies and procedures. The groups will work together to develop a priority-sequenced implementation plan that all can agree upon. Creating a committee that is heavily weighted with company executives will visibly demonstrate that our company is strongly committed to the program and it ensures that we will have the right people on board when it is time to make decisions, and that will keep the program on track. The steering committee should also include representatives from Records Management, IT, and users. Alternatively, representatives from these groups can be appointed and, if not members of the steering committee, they should report directly to the steering committee on a regular basis: The Program Contact The Program Contact is the chair of the steering committee. This role is typically held by someone in senior management and is often someone from the technology side of the business, such as the Director of IT. The Program Contact signs off with the final approval on technology deliverables and budget items. The Program Sponsor A key member of the records steering committee is the Program Sponsor or Project Champion. 
This role is typically held by a senior executive who will be able to represent the records initiative within the organization's executive team. The Sponsor will be able to establish the priority of the records program, relative to other organizational initiatives and be able to persuade the executive team and others in the company of the importance of the records management initiative. Corporate Records Manager Another key role of the steering committee is the Corporate Records Manager. This role acts as the senior champion for the records program and is responsible for defining the procedures and policies around Records Management. The person in this role will promote the rollout of and the use of the records program. They will work with each of the participating departments or groups, cultivating local champions for Records Management within each of those groups. The Corporate Records Manager must effectively communicate with business units to explain the program to all staff members and work with the various business units to collect user feedback so that those ideas can be incorporated into the planning process. The Corporate Records Manager will try to minimize any adverse user impact or disruption. Project Manager The Project Manager typically is on the steering committee or reports directly to it. The Project Manager plans and tracks the implementation of work on the program and ensures that program milestones are met. The person in this role manages both, the details of the system setup and implementation. This Project Manager also manages the staff time spent working on the program tasks. Business Analyst The Business Analyst analyzes business processes and records, and from these, creates a design and plan for the records program implementation. The Business Analyst works closely with the Corporate Records Manager to develop records procedures and provides support for the system during rollout. Systems Administrator The Systems Administrator leads the technical team for supporting the records application. The Systems Administrator specifies and puts into place the hardware required for the records program, the storage space, memory, and CPU capabilities. The person in this role monitors the system performance and backs up the system regularly. The Systems Administrator leads the team to apply software upgrades and to perform system troubleshooting. The Network Administrator The Network Administrator ensures that the network infrastructure is in place for the records program to support the appropriate bandwidth for the server and client workstations that will access the application. The Network Administrator works closely with the Systems Administrator. The Technical Analyst The Technical Analyst is responsible for analyzing the configuration of the records program. The Technical Analyst needs to work closely with the Business Analyst and Corporate Records Manager. The person in this role will specify the classification and structure used for the records program File Plan. They will also specify the classes of documents stored as records in the records application and the associated metadata for those documents. The Records Assistant The Records Assistant assists in the configuration of the records application. Tasks that the Records Assistant will perform include data entry and creating the folder structure hierarchy of the File Plan within the records application based on the specification created by the Technical Analyst. 
The Records Developer The Records Developer is a software engineer that is assigned to support the implementation of the records program, based on requirements derived by the Business Analyst. The Records Developer may need to edit and update configuration files, often using technologies like XML. The Records Developer may also need to make customizations to the user interface of the application. The Trainer The Trainer will work with end users to ensure that they understand the system and their responsibilities in interacting with it. The trainer typically creates training materials and provides training seminars to users. The Technical Support Specialist The Technical Support Specialist provides support to users on the functioning of the Records Management application. This person is typically an advanced user and is trained to be able to provide guidance in interacting with the application. But more than just the Records Management application, the support specialist should also be well versed in and be able to assist users and answer their questions about records processes and procedures, as well as concepts like retention and disposition of documents. The Technical Support Specialist will, very often, be faced with requests or questions that are really enhancement requests. The support specialist needs to have a good understanding of the scope of the records implementation and be able to distinguish an enhancement request from a defect or bug report. Enhancements should be collected and routed back through the Project Manager and, depending on the nature of the request or problem, possibly even to the Corporate Records Manager or the Steering Committee. Similarly, application defects or bugs that are found should be reported back through to the Project Manager. Bug reports will be prioritized by the Project Manager, as appropriate, assigned to the Technical Developers, or reported to the Systems Integrator or to Alfresco. The Users The Users are the staff members who will use the Records Management application as part of their job. Users are often the key to the success or failure of a records program. Unfortunately, users are one aspect of the project that is often overlooked. Obviously, it is important that the records application be well designed and meet the objectives and requirements set out for it. But if users complain and can't accept it, then the program will be doomed to failure. Users will often be asked to make changes to processes that they have become very comfortable with. Frequent and early communication with users is a must in order to ultimately gain their acceptance and participation. Prior to and during the implementation of the records system, users should receive status updates and explanations from the Corporate Records Manager and also from the Records Manager lead in their business unit. It is important that frequent communications be made with users to ensure their opinions and ideas are heard, and also so that they will learn to be able to most effectively use the records system. Once the application is ready, or better yet, well before the application goes live, users should attend training sessions on proper records-handling behavior; they should experience hands-on training with the application; and they should also be instructed in how best to communicate with the Technical Support Specialist, should they ever have questions or encounter any problems. 
Alfresco, Consultants, and Systems Integrators Alfresco is the software vendor for Alfresco Records Management, but Alfresco typically does not work directly with customers. We could go at it alone, but more likely, we'll probably choose to work directly with one of Alfresco's System Integration partners or consultants in planning for and setting up our system. Depending on the size of our organization and the available skill set within it, the Systems Integrator can take on as much or as little of the burden for helping us to get up and running with our Records Management program. Almost any of the Technical Team roles discussed in this section, like those of the Analyst and Developer, and even the role of the Project Manager, are ones that can be performed by a Systems Integrator. A list of certified Alfresco Integrators can be found on the Alfresco site: http://www.alfresco.com/partners/search.jsp?t=si A Systems Integrator can bring to our project an important breadth of experience that can help save time and ensure that our project will go smoothly. Alfresco Systems Integration partners know their stuff. They are required to be certified in Alfresco technology and they have worked with Alfresco extensively. They are familiar with best practices and have picked up numerous implementation tips and tricks having worked on similar projects with other clients.
Building multipage forms (Intermediate)

Packt
11 Jun 2013
5 min read
(For more resources related to this topic, see here.) Getting ready We'll separate our existing user registration form already created to multipage forms. The sections will be for personal information, address details, and contact information. How to do it... All code related to the form is written in a file named _form under protected/views/user. We are dividing the input fields into three sections, so create three separate files in the same folder with the names _page1, _page2, and _page3. Separate the code's respective files. Some sample lines are as follows: <?php $form=$this->beginWidget('CActiveForm', array( 'id'=>'user-form', 'enableAjaxValidation'=>false, 'stateful'=>true, )); ?> <div class="row"> <?php echo $form->labelEx($model,'first_name'); ?> <?php echo $form->textField($model,'first_name', array( 'size'=>50, 'maxlength'=>50 )); ?> <?php echo $form->error($model,'first_name'); ?> </div> ..... ..... <div class="row buttons"> <?php echo CHtml::submitButton('Next', array( 'name'=>'page2' )); ?> </div> <?php $this->endWidget(); ?> .... <div class="row buttons"> <?php echo CHtml::submitButton('back', array( 'name'=>'page1' )); ?> <?php echo CHtml::submitButton('Next', array( 'name'=>'page3' )); ?> </div> <div class="row buttons"> <?php echo CHtml::submitButton('Back', array( 'name'=>'page2' )); ?> <?php echo CHtml::submitButton('submit', array( 'name'=>'submit' )); ?> </div> Now, in the User controller, change the code for actionCreate as follows: public function actionCreate() { if(isset($_POST['page1'])) { $model = new User('page1'); $this->checkPageState($model, $_POST['User']); $view = '_page1'; } elseif(isset($_POST['page2'])) { $model = new User('page1'); $this->checkPageState($model, $_POST['User']); if($model->validate()) { $view = '_page2'; $model->scenario = 'page2'; } else { $view = '_page1'; } } .... $this->render($view, array('model'=>$model)); } And add a function, checkPageState(), as follows: private function checkPageState(&$model, $data) { $model->attributes = $this->getPageState('page', array()); $model->attributes = $data; $this->setPageState('page', $model->attributes); } Lastly, create scenarios in the model User to validate each page of the form separately. Add three arrays specifying all the required fields per page, as follows: return array( array('first_name, last_name, gender, dob', 'required', 'on'=>'page1' ), array('address_1, city, state, country', 'required', 'on'=>'page2' ), array('phone_number_1, email_1', 'required', 'on'=>'page3' ), How it works... We have separated all our input fields into three forms. Each page contains an entire standalone form that accepts the input from the user, validates it from the server, and stores the data till we finally submit this form. The parameter stateful passed to the CactiveForm widget specifies the form needed to maintain the state across the pages. To do this, Yii creates a hidden field in each form with the name YII_PAGE_STATE, as shown in the following screenshot: All the data submitted on the first page is stored in this hidden field and passed to the server with the second page. To read the data from this field we have used the method getPageState(), and to write we have used setPageState(). We have added a private method checkPageState() to the User controller, which reads the page state, if any, and assigns it to $model->attributes, then assigns data from the current form using $model->attributes = $_POST['User'], and finally overwrites the page state with freshly combined data. 
When we click on Next on _page1, we set the POST variable page2, which in turn executes the second block in the if-else ladder in actionCreate. Here, we create an instance of the model User with the scenario set to page1 (as we need to validate the data received from _page1). With a call to checkPageState(), we check the current page state and add any new data from _page1 to it. Then we check whether the submitted data is valid using $model->validate(). If the model passes validation, we set the view to _page2 and set $model->scenario to page2, to mark the required fields on _page2. If validation fails, we set the view to _page1 with the validation errors set in the model. At the end of the action, we render the selected view with the current state of the model. If any validation errors are set, they are listed on the same page; otherwise, the next page is rendered. The same steps are repeated for _page2 as well.

When the submit button is clicked on _page3, we retrieve the previously stored data from the page state using getPageState(). Here we do not use checkPageState(), as we no longer need to store any data in the page state. We simply assign the data from _page3 to the model, and if the model validates, we save all the data to the database with $model->save(). After saving, we are redirected to actionView(), where the data from all three forms is listed as shown in the following screenshot:
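The actionCreate() listing shown earlier elides its remaining branches with "....". Based on the walkthrough above, a rough, hypothetical sketch of those branches could look like the following; treat it as an illustration of the described flow that slots into the if-else ladder, not the book's exact code.

  // Hypothetical continuation of actionCreate() -- the page1/page2 branches are as shown earlier.
  if(isset($_POST['page3']))
  {
      // moving from page 2 to page 3: validate the page2 data first
      $model = new User('page2');
      $this->checkPageState($model, $_POST['User']);
      if($model->validate())
      {
          $view = '_page3';
          $model->scenario = 'page3';
      }
      else
      {
          $view = '_page2';
      }
  }
  elseif(isset($_POST['submit']))
  {
      $model = new User('page3');
      // merge the data stored from pages 1 and 2 with the fields submitted on page 3
      $model->attributes = $this->getPageState('page', array());
      $model->attributes = $_POST['User'];
      if($model->validate())
      {
          $this->setPageState('page', null);   // clear the stored form state
          if($model->save())
              $this->redirect(array('view', 'id' => $model->id));
      }
      $view = '_page3';
  }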
Summary

In this article, we saw the importance of dividing big single forms into multipage forms. The article provided an insight into the making of multipage forms.

Resources for Article:

Further resources on this subject: Play! Framework 2 – Dealing with Content [Article], Play Framework: Introduction to Writing Modules [Article], Generating Content in WordPress Top Plugins—A Sequel [Article]
Adding and Editing Content in Your Web Pages

Packt
06 Apr 2015
12 min read
This article by Miko Coffey, the author of the book, Building Business Websites with Squarespace 7, delves into the processes of adjusting images, adding content to sidebars or footers, and adding links. (For more resources related to this topic, see here.) Adjusting images in Squarespace We've learned how to adjust the size of images in relation to other elements on the page, but so far, the images themselves have remained intact, showing the full image. However, you can actually crop or zoom images so they only show a portion of the photo on the screen without having to leave the Squarespace interface. You can also apply effects to images using the built-in Aviary Image Editor, such as rotating, enhancing color, boosting contrast, whitening teeth, removing blemishes, or hundreds of other adjustments, which means you don't need fancy image editing software to perform even fairly advanced image adjustments. Cropping and zooming images with LayoutEngine If you only want to crop your image, you don't need to use the Aviary Image Editor: you can crop images using LayoutEngine in the Squarespace Content Editor. To crop an image, you perform the same steps as those to adjust the height of a Spacer Block: just click and drag the dot to change the part of the image that is shown. As you drag the dot up or down, you will notice that: Dragging the dot up will chop off the top and bottom of your image Dragging the dot down will zoom in your image, cutting off the sides and making the image appear larger When dragging the dot very near the original dimensions of your image, you will feel and see the cursor pull/snap to the original size Cropping an image in an Image Block in this manner does not remove parts from the original image; it merely adjusts the part of the image that will be shown in the Image Block on the page. You can always change your mind later. Adjusting the Focal Point of images You'll notice that all of the cropping and zooming of images is based on the center of the image. What if your image has elements near the edges that you want to show instead of weighting things towards the center? With Squarespace, you can influence which part of the image displays by adjusting the Focal Point of the image. The Focal Point identifies the most important part of the image to instruct the system to try to use this point as the basis for cropping or zooming. However, if your Image Block is an extreme shape, such as a long skinny rectangle, it may not be possible to fit all of your desired area into the cropped or zoomed image space. Adjusting the Focal Point can also be useful for Gallery images, as certain templates display images in a square format or other formats that may not match the dimensions of your images. You can also adjust the Focal Point of any Thumbnail Images that you have added in Page Settings to select which part to show as the thumbnail or header banner. To adjust an image's Focal Point, follow these steps: Double-click on the image to open the Edit Image overlay window. Hover your mouse over the image thumbnail, and you will see a translucent circle appear at the center of the thumbnail. This is the Focal Point. Click and drag the circle until it sits on top of the part of the image you want to include, as shown in the following screenshot: Using the Aviary Image Editor You can also use the Aviary Image Editor to crop or zoom into your images as well as many more adjustments that are too numerous to list here. 
It's important to remember that all adjustments carried out in the Aviary Image Editor are permanent: there is no way to go back to a previous version of your image. Therefore, it's better to use LayoutEngine for cropping and zooming and reserve Aviary for other adjustments that you know you want to make permanently, such as rotating a portrait image that was taken sideways to display vertically. Because edits performed in the Aviary Image Editor are permanent, use it with caution and always keep a backup original version of the image on your computer just in case. Here's how to edit an image with the Aviary Image Editor: Double-click on the image to open the Edit Image overlay window. Click on the Edit button below the image thumbnail. This will open the Aviary window, as shown in the following screenshot: Select the type of adjustment you want to perform from the menu at the top. Use the controls to perform the adjustment and click on Apply. The window will show you the effect of the adjustment on the image. Perform any other adjustments in the same manner. You can go back to previous steps using the arrows in the bottom-left section of the editor window. Once you have performed all desired adjustments, click on Save to commit the adjustments to the image permanently. The Aviary window will now close. In the Edit Image window, click on Save to store the Aviary adjustments. The Edit Image window will now close. In the Content Editor window, click on the Save button to refresh the page with the newly edited version of the image. Adding content to sidebars or footers Until this point, all of our content additions and edits have been performed on a single page. However, it's likely that you will want to have certain blocks of content appear on multiple or all pages, such as a copyright notice in your page footer or an About the Author text snippet in the sidebar of all blog posts. You add content to footers or sidebars using the Content Editor in much the same way as adding the page content. However, there are a few restrictions. Certain templates only allow certain types of blocks in footers or sidebars, and some templates have restrictions on positioning elements as well—for example, it's unlikely that you will be able to wrap text around an image in a sidebar due to space limitations. If you are unable to get the system to accept an addition or repositioning move that you are trying to perform to a block in a sidebar or footer, it usually indicates that you are trying to perform something that is prohibited. Adding or editing content in a footer Follow these steps to add or edit content in a footer: In the preview screen, scroll to the bottom of your page and hover over the footer area to activate the Annotations there, and click on the Edit button next to the Footer Content label, as shown in the following screenshot: This will open the Content Editor window. You will notice that Insert Points appear just like before, and you can click on an Insert Point to add a block, or click within an existing Text Block to edit it. You can move blocks in a footer in the same way as those in a page body. Most templates have a footer, but not all of them do. Some templates hide the footer on certain page types, so if you can't see your footer, try looking at a standard page, or double-check whether your template offers one. 
Adding or editing content in a sidebar Not all templates have a sidebar, but if yours does, here's how you can add or edit content in your sidebar: While in the Pages menu, navigate to a page that you know has a sidebar in your template, such as a Blog page. You should see the template's demo content preloaded into your sidebar. Hover your mouse over the sidebar area to activate the Annotations, and click on the Edit button that appears at the top of the sidebar area. Make sure you click on the correct Annotation. Other Annotations may be activated on the page, so don't get confused and click on anything other than the sidebar Annotations, as shown in the following screenshot: Once the Content Editor window opens, you can click on an Insert Point to add a block or click within an existing Text Block to edit it. You can move blocks in a sidebar in the same way as those in a page body. Enabling a sidebar If you do not see the sidebar, but you know that your template allows one, and you know you are on the correct page type (for example, a Blog post), then it's possible that your sidebar is not enabled. Depending on the template, you enable your sidebar in one of two ways. The first method is in the Style Editor, as follows: First, ensure you are looking at a Blog page (or any page that can have a sidebar in your template). From the Home menu in the side panel, navigate to Design | Style Editor. In the Style Editor menu, scroll down until you see a set of controls related to Blog Styles. Find the control for the sidebar, and select the position you want. The following screenshot shows you an example of this: Click on Save to commit your changes. If you don't see the sidebar control in the Style Editor, you may find it in the Blog or Page Settings instead, as described here: First, ensure you are looking at the Blog page (or any page that can have a sidebar in your template), and then click on the Settings button in the Annotations or the cog icon in the Pages menu to open the Settings window. Look for a menu item called Page Layout and select the sidebar position, as shown in the following screenshot: On smaller screens, many templates use fluid layout to stack the sidebar below the main content area instead of showing it on the left- or right-hand side. If you can't see your sidebar and are viewing the website on a tablet or another small/low-resolution screen, scroll down to the bottom of the page and you will most likely see your sidebar content there, just above the footer. Adding links that point to web pages or files The final type of basic content that we'll cover in this article is adding hyperlinks to your pages. You can use these links to point to external websites, other pages on your own website, or files that visitors can either view within the browser or download to their computers. You can assign a link to any word or phrase in a Text Block, or you can assign a link to an image. You can also use a special type of block called a Button to make links really stand out and encourage users to click. When creating links in any of these scenarios, you will be presented with three main options: External: You can paste or type the full web address of the external website you want the link to point to. You can also choose to have this website open in a new window to allow users to keep your site open instead of navigating away entirely. 
The following screenshot shows the External option: Files: You can either upload a file directly, or link to a file that you have already uploaded earlier. This screenshot shows the Files option: Content: You can link to any page, category, or tag that you have created on your site. Linking to a category or tag will display a list of all items that have been labeled with that tag or category. Here's an example of the Content option: Assigning a link to word(s) in a Text Block Follow these steps to assign a link to a word or phrase in a Text Block: Highlight the word(s), and then click on the link icon (two interlocked ovals) in the text editor menu. Select the type of link you want to add, input the necessary settings, and then click anywhere outside the Edit Link window. This will close the Edit Link window, and you will see that the word is now a different color to indicate that the link has been applied. Click on the Save button at the top of the page. You can change or remove the link at any time by clicking on the word. A floating window will appear to show you what the link currently points to, along with the options to edit or remove the link. This is shown in the following screenshot: Assigning a link to an image Here's how you can assign a link to an image: Double-click on the image to open the Edit Image window. Under Clickthrough URL, select the type of link that you want to add and input the necessary settings. Click on Save. Creating a Button on your page You can create a Button on your page by following these steps: In the Content Editor window, find the point where you want to insert the button on the page, and click on the Insert Point to open the Add Block menu. Under the Filters & Lists category, choose Button. Type the text that you want to show on the button, select the type of link, and select the button size and alignment you want. The following screenshot shows the Edit Button window: This is how the button appears on the page: Summary In this article, you have acquired skills you need to add and edit basic web content, and you have learned how to move things around to create finished web pages with your desired page layout. Visit www.square-help.com/inspiration if you'd like to see some examples of different page layouts to give you ideas for making your own pages. Here, you'll see how you can use LayoutEngine to create sophisticated layouts for a range of page types. Resources for Article: Further resources on this subject: Welcoming your Visitors: Creating Attractive Home Pages and Overview Pages [article] Selecting Elements [article] Creating Blog Content in WordPress [article]
Customer Management in Joomla! and VirtueMart

Packt
22 Oct 2009
7 min read
Note that all VirtueMart customers must be registered with Joomla!. However, not all Joomla! users need to be the VirtueMart customers. Within the first few sections of this article, you will have a clear concept about user management in Joomla! and VirtueMart. Customer management Customer management in VirtueMart includes registering customers to the VirtueMart shop, assigning them to user groups for appropriate permission levels, managing fields in the registration form, viewing and editing customer information, and managing the user groups. Let's dive in to these activities in the following sections. Registration/Authentication of customers Joomla! has a very strong user registration and authentication system. One core component in Joomla! is com_users, which manages user registration and authentication in Joomla!. However, VirtueMart needs some extra information for customers. VirtueMart collects this information through its own customer registration process, and stores the information in separate tables in the database. The extra information required by VirtueMart is stored in a table named jos_vm_user_info, which is related to the jos_users table by the user id field. Usually, when a user registers to the Joomla! site, they also register with VirtueMart. This depends on some global settings. In the following sections, we are going to learn how to enable the user registration and authentication for VirtueMart. Revisiting registration settings We configure the registration settings from VirtueMart's administration panel Admin | Configuration | Global screen. There is a section titled User Registration Settings, which defines how the user registration will be handled: Ensure that your VirtueMart shop has been configured as shown in the screenshot above. The first field to configure is the User Registration Type. Selecting Normal Account Creation in this field creates both a Joomla! and VirtueMart account during user registration. For our example shop, we will be using this setting. Joomla!'s new user activation should be disabled when we are using VirtueMart. That means the Joomla! New account activation necessary? field should read No. Enabling VirtueMart login module There is a default module in Joomla! which is used for user registrations and login. When we are using this default Login Form (mod_login module), it does not collect information required by VirtueMart, and does not create customers in VirtueMart. By default, when published, the mod_login module looks like the following screenshot: As you see, registered users can log in to Joomla! through this form, recover their forgotten password by clicking on the Forgot your password? link, and create a new user account by clicking on the Create an account link. When a user clicks on the Create an account link, they get the form as shown in the following screenshot: We see that normal registration in Joomla! only requires four pieces of information: Name, Username, Email, and Password. It does not collect information needed in VirtueMart, such as billing and shipping address, to be a customer. Therefore, we need to disable the mod_login module and enable the mod_virtuemart_login module. We have already learned how to enable and disable a module in Joomla!. We have also learned how to install modules. By default, the mod_virtuemart_login module's title is VirtueMart Login. You may prefer to show this title as Login only. In that case, click on the VirtueMart Login link in the Module Name column. 
This brings the Module:[Edit] screen: In the Title field, type Login (or any other text you want to show as the title of this module). Make sure the module is enabled and position is set to left or right. Click  on the Save icon to save your settings. Now, browse to your site's front-page  (for example, http://localhost/bdosn/), and you will see the login form as shown in the following screenshot: As you can see, this module has the same functionalities as we saw in the mod_login module of Joomla!. Let us test the account creation in this module. Click on the Register link. It brings the following screen: The registration form has three main sections: Customer Information, Bill To Information, and Send Registration. At the end, there is the Send Registration button for submitting the form data. In the Customer Information section, type your email address, the desired username, and password. Confirm the password by typing it again in the Confirm password field. In the Bill To Information section, type the address details where bills are to be sent. In the entire form, required fields are marked with an asterisk (*). You must provide information for these required fields. In the Send Registration section, you need to agree to the Terms of Service. Click on the Terms of Service link to read it. Then, check the I agree to the Terms of Service checkbox and click on the Send Registration button to submit the form data: If you have provided all of the required information and submitted a unique  email address, the registration will be successful. On successful completion of registration, you get the following screen notification, and will be logged in to  the shop automatically: If you scroll down to the Login module, you will see that you are logged in and greeted by the store. You also see the User Menu in this screen: Both the User Menu and the Login modules contain a Logout button. Click on either of these buttons to log out from the Joomla! site. In fact, links in the User Menu module are for Joomla! only. Let us try the link Your Details. Click on the Your Details link, and you will see the information shown in the following screenshot: As you see in the screenshot above, you can change your full name, email, password, frontend language, and time zone. You cannot view any information regarding billing address, or other information of the customer. In fact, this information is for regular Joomla! users. We can only get full customer information by clicking on the Account Maintenance link in the Login module. Let us try it. Click on the Account Maintenance link, and it shows the following screenshot: The Account Maintenance screen has three sections: Account Information, Shipping Information, and Order Information. Click on the Account Information link to see what happens. It shows the following screen: This shows Customer Information and Bill To Information, which have been entered during user registration. The last section on this screen is the Bank Information, from where the customer can add bank account information. This section looks like the following screenshot: As you can see, from the Bank Account Info section, the customers can enter their bank account information including the account holder's name, account number, bank's sorting code number, bank's name, account type, and IBAN (International Bank Account Number). Entering this information is important when you are using  a Bank Account Debit payment method. 
Now, let us go back to the Account Maintenance screen and see the other sections. Click on the Shipping Information link, and you get the following screen: There is one default shipping address, which is the same as the billing address. The customers can create additional shipping addresses. For creating a new shipping address, click on the Add Address link. It shows the following screen:

Icinga Object Configuration

Packt
18 Nov 2013
9 min read
(For more resources related to this topic, see here.) A localhost monitoring setup Let us take a close look at our current setup, which we created, for monitoring a localhost. Icinga by default comes with object configuration for a localhost. The object configuration files are inside /etc/icinga/objects for default installations. $ ls /etc/icinga/objects commands.cfg notifications.cfg templates.cfg contacts.cfg printer.cfg timeperiods.cfg localhost.cfg switch.cfg windows.cfg There are several configuration files with object definitions. Together, these object definitions define the monitoring setup for monitoring some services on a localhost. Let's first look at localhost.cfg, which has most of the relevant configuration. We have a host definition: define host{ use linux-server host_name localhost alias localhost address 127.0.0.1 } The preceding object block defines one object, that is, the host that we want to monitor, with details such as the hostname, alias for the host, and the address of the server—which is optional, but is useful when you don't have DNS record for the hostname. We have a localhost host object defined in Icinga with the preceding object configuration. The localhost.cfg file also has a hostgroup defined which is as follows: define hostgroup { hostgroup_name linux-servers alias Linux Servers members localhost // host_name of the host object } The preceding object defines a hostgroup with only one member, localhost, which we will extend later to include more hosts. The members directive specifies the host members of the hostgroup. The value of this directive refers to the value of the host_name directive in the host definitions. It can be a comma-separated list of several hostnames. There is also a directive called hostgroups in the host object, where you can give a comma-separated list of names of the hostgroups that we want the host to be part of. For example, in this case, we could have omitted the members directive in the hostgroup definition and specified a hostgroups directive, which has the value linux-servers, in the localhost host definition. At this point, we have a localhost host and a linux-servers hostgroup, and localhost is a member of linux-servers. This is illustrated in the following figure: Going further into localhost.cfg, we have a bunch of service object definitions that follow. Each of these definitions indicate the service on a localhost that we want to monitor with the host_name directive. define service { use local-service host_name localhost service_description PING check_command check_ping!100.0,20%!500.0,60% } This is one of the service definitions. The object defines a PING service check that monitors the reachability. The host_name directive specifies the host that this service check should be associated with, which in this case is localhost. Again, the value of the host_name directive here should reflect the value of the host_name directive defined in the host object definition. So, we have a PING service check defined for a localhost, which is illustrated by following figure: There are several such service definitions that are placed on a localhost. Each service has a check_command directive that specifies the command for monitoring that service. Note that the exclamation marks in the check_command values are the command argument separators. So, cmd!foo!bar indicates that the command is cmd with foo as its first argument and bar as the second. 
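As an aside to the hostgroup definitions above, here is a rough sketch of the alternative approach, using the hostgroups directive on the host instead of the members directive on the hostgroup (this is not part of the shipped localhost.cfg):

define host{
    use          linux-server
    host_name    localhost
    alias        localhost
    address      127.0.0.1
    hostgroups   linux-servers   ; replaces members in the hostgroup definition
}

define hostgroup{
    hostgroup_name  linux-servers
    alias           Linux Servers
}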
It is important to remember that the check_ping part in check_command in the preceding example does not mean the check_ping executable that is in /usr/lib64/nagios/plugins/check_ping for most installations; it refers to the Icinga object of type command. In our setup, all command object definitions are inside commands.cfg. The commands.cfg file has the command object definition for check_ping. define command { command_name check_ping command_line $USER1$/check_ping -H $HOSTADDRESS$ -w $ARG1$ -c $ARG2$ -p 5 } The check_command value in the PING service definition refers to the preceding command object, which indicates the exact command to be executed for performing the service check. $USER1$ is a user-defined Icinga macro. Macros in Icinga are like variables that can be used in various object definitions to wrap data inside these variables. Some macros are predefined, while some are user defined. These user macros are usually defined in /etc/icinga/resources.cfg: $USER1$=/usr/lib64/nagios/plugins So replace the $USER1$ macro with its value, and execute: $ value/of/USER1/check_ping --help This command will print the usual usage string with all the command-line options available. $ARG1$ and $ARG2$ in the command definition are macros referring to the arguments passed in the check_command value in the service definition, which are 100.0,20% and 500.0,60% respectively for the PING service definition. We will come to this later. As noted earlier, the status of the service is determined by the exit code of the command that is specified in the command_line directive in command definition. We have many such service definitions for a localhost in localhost.cfg, such as Root Partition (monitors disk space), Total Processes, Current Load, HTTP, along with command definitions in commands.cfg for check_commands of each of these service definitions. So, we have a host definition for localhost, a hostgroup definition linux-servers having localhost as its member, several service check definitions for localhost with check commands, and the command definitions specifying the exact command with arguments to execute for the checks. This is illustrated with the example Ping check in the following figure: This completes the basic understanding of how our localhost monitoring is built up from plain-text configuration. Notifications We would, as is the point of having monitoring systems, like to get alerted when something actually goes down. We don't want to keep monitoring the Icinga web interface screen, waiting for something to go down. Icinga provides a very generic and flexible way of sending out alerts. We can have any alerting script triggered when something goes wrong, which in turn may run commands for sending e-mails, SMS, Jabber messages, Twitter tweets, or practically anything that can be done from within a script. The default localhost monitoring setup has an e-mail alerting configuration. The way these notifications work is that we define contact objects where we give the contact name, e-mail addresses, pager numbers, and other necessary details. These contact names are specified in the host/service templates or the objects themselves. So, when Icinga detects that a host/service has gone down, it will use this contact object to send contact details to the alerting script. The contact object definition also has the host_notification_commands and service_notification_commands directives. 
These directives specify the command objects that should be used to send out the notifications for that particular contact. The former directive is used when the host goes down, and the latter is used when a service goes down. The respective command objects are then looked up and the value of their command_line directive is executed. This command object is the same as the one we looked at previously for executing checks. The same command object type is used to also define notification commands. We can also define contact groups and specify them in the host/service object definitions to alert a bunch of contacts at the same time. We can also give a comma-separated list of contact names instead of a contact group. Let's have a look at our current setup for notification configuration. The host/service template objects have the admin contact group specified, whose definition is in contacts.cfg: define contactgroup { contactgroup_name admins alias Icinga Administrators members icingaadmin } The group has the icingaadmin member contact, which is again defined in the same file: define contact { contact_name icingaadmin use generic-contact alias Icinga Admin email your@email.com } The contacts.cfg file has your e-mail address. The contact object inherits the generic-contact template contact object. define contact{ name generic-contact service_notification_period 24x7 host_notification_period 24x7 service_notification_options w,u,c,r,f,s host_notification_options d,u,r,f,s service_notification_commands notify-service-by-email host_notification_commands notify-host-by-email register 0 } This template object has the host_notification_commands and service_notification_commands directives defined as notify-host-by-email and notify-service-by-email respectively. These are commands similar to what we use in service definitions. These commands are defined in commands.cfg: define command { command_name notify-host-by-email command_line /usr/bin/printf "%b" "***** Icinga *****nnNotification Type: $NOTIFICATIONTYPE$nHost: $HOSTNAME$nState: $HOSTSTATE$nAddress: $HOSTADDRESS$nInfo: $HOSTOUTPUT$nnDate/Time: $LONGDATETIME$n" | /bin/mail -s "** $NOTIFICATIONTYPE$ Host Alert: $HOSTNAME$ is $HOSTSTATE$ **" $CONTACTEMAIL$ } define command { command_name notify-service-by-email command_line /usr/bin/printf "%b" "***** Icinga *****nnNotification Type: $NOTIFICATIONTYPE$nnService: $SERVICEDESC$nHost: $HOSTALIAS$nAddress: $HOSTADDRESS$nState: $SERVICESTATE$nnDate/Time: $LONGDATETIME$nnAdditional Info:nn$SERVICEOUTPUT$n" | /bin/mail -s "** $NOTIFICATIONTYPE$ Service Alert: $HOSTALIAS$/$SERVICEDESC$ is $SERVICESTATE$ **" $CONTACTEMAIL$ } These commands are eventually executed to send out e-mail notifications to the supplied e-mail addresses. Notice that command_lines uses the /bin/mail command to send e-mails, which is why we need a working setup of a SMTP server. Similarly, we could use any command/script path to send out custom alerts, such as SMS and Jabber. We could also change the above e-mail command to change the content format to suit our requirements. The following figure illustrates the contact and notification configuration: The correlation between hosts/services and contacts/notification commands is shown below: Summary In this article, we analyzed our current configuration for the Icinga setup which monitors a localhost. We can replicate this to monitor a number of other servers using the desired service checks. 
We also looked at how the alerting configuration works to send out notifications when something goes down. Resources for Article: Further resources on this subject: Troubleshooting Nagios 3.0 [Article] Notifications and Events in Nagios 3.0-part1 [Article] BackTrack 4: Target Scoping [Article]

Running Simpletest and PHPUnit

Packt
09 Feb 2016
4 min read
In this article by Matt Glaman, the author of the book Drupal 8 Development Cookbook, we will kick off with an introduction to getting a Drupal 8 site installed and running. (For more resources related to this topic, see here.)

Running Simpletest and PHPUnit

Drupal 8 ships with two testing suites. Previously Drupal only supported Simpletest; now there are PHPUnit tests as well. In the official change record, PHPUnit was added to provide testing without requiring a full Drupal bootstrap, which occurs with each Simpletest test. Read the change record here: https://www.drupal.org/node/2012184.

Getting ready

Currently, core comes with the Composer dependencies prepackaged, and no extra steps need to be taken to run PHPUnit. This article will demonstrate how to run tests the same way that the QA testbot on Drupal.org does. The process of managing Composer dependencies may change, but is currently postponed due to Drupal.org's testing and packaging infrastructure. Read more here: https://www.drupal.org/node/1475510.

How to do it…

First, enable the Simpletest module. Even though you might only want to run PHPUnit, it is a soft dependency for running the test runner script.

Open a command line terminal, navigate to your Drupal installation directory, and run the following to execute all available PHPUnit tests:

php core/scripts/run-tests.sh PHPUnit

Running Simpletest tests requires executing the same script; however, instead of passing PHPUnit as the argument, you must define the URL option and the tests option:

php core/scripts/run-tests.sh --url http://localhost --all

Review the test output!

How it works…

The run-tests.sh script has been shipped with Drupal since 2008, then named run-functional-tests.php. The command interacts with the test suites in Drupal to run all or specific tests and sets up other configuration items. We will highlight some of the useful options below:

--help: This displays the items covered in the following bullets
--list: This displays the available test groups that can be run
--url: This is required unless the Drupal site is accessible through http://localhost:80
--sqlite: This allows you to run Simpletest without having to have Drupal installed
--concurrency: This allows you to define how many tests run in parallel

There's more…

Is run-tests a shell script? The run-tests.sh script isn't actually a shell script. It is a PHP script, which is why you must execute it with PHP. In fact, within core/scripts each file is a PHP script meant to be executed from the command line. These scripts are not intended to be run through a web server, which is one of the reasons for the .sh extension. There are issues with how PHP is discovered across platforms that prevent providing a shebang line to allow executing the file as a normal bash or bat script. For more info, view this Drupal.org issue: https://www.drupal.org/node/655178.

Running Simpletest without Drupal installed

With Drupal 8, Simpletest can be run off SQLite and no longer requires an installed database. This can be accomplished by passing the sqlite and dburl options to the run-tests.sh script. This requires the PHP SQLite extension to be installed. Here is an example adapted from the DrupalCI test runner for Drupal core:

php core/scripts/run-tests.sh --sqlite /tmp/.ht.sqlite --die-on-fail --dburl sqlite://tmp/.ht.sqlite --all

Combined with the built-in PHP web server for debugging, you can run Simpletest without a full-fledged environment.
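For example, a quick debugging session along those lines might look roughly like this (a sketch only; the port number is arbitrary and the server must be started from the Drupal docroot):

# Serve the Drupal docroot with PHP's built-in web server
php -S localhost:8888 &

# Run Simpletest against it, backed by SQLite instead of an installed site
php core/scripts/run-tests.sh --sqlite /tmp/.ht.sqlite --dburl sqlite://tmp/.ht.sqlite --url http://localhost:8888 --all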
Running specific tests Each example thus far has used the all option to run every Simpletest available. There are various ways to run specific tests: --module: This allows you to run all the tests of a specific module --class: This runs a specific path, identified by full namespace path --file: This runs tests from a specified file --directory: This run tests within a specified directory Previously in Drupal tests were grouped inside of module.test files, which is where the file option derives from. Drupal 8 utilizes the PSR-4 autoloading method and requires one class per file. DrupalCI With Drupal 8 came a new initiative to upgrade the testing infrastructure on Drupal.org. The outcome was DrupalCI. DrupalCI is open source and can be downloaded and run locally. The project page for DrupalCI is: https://www.drupal.org/project/drupalci. The test bot utilizes Docker and can be downloaded locally to run tests. The project ships with a Vagrant file to allow it to be run within a virtual machine or locally. Learn more on the testbot's project page: https://www.drupal.org/project/drupalci_testbot. See also PHPUnit manual: https://phpunit.de/manual/4.8/en/writing-tests-for-phpunit.html Drupal PHPUnit handbook: https://drupal.org/phpunit Simpletest from the command line: https://www.drupal.org/node/645286 Resources for Article: Further resources on this subject: Installing Drupal 8 [article] What is Drupal? [article] Drupal 7 Social Networking: Managing Users and Profiles [article]

Linking Dynamic Content from External Websites

Packt
22 Jul 2014
5 min read
(For more resources related to this topic, see here.) Introduction to the YouTube API YouTube provides three different APIs for a client application to access. The following figure shows the three different APIs provided by YouTube: Configuring a YouTube API In the Google Developers Console, we need to create a client project. We will be creating a new project, called PacktYoutubeapi. The URL for the Google Developers Console is https://console.developers.google.com. The following screenshot shows the pop-up window that appears when you want to create a new client project in the Developers Console: After the successful creation of the new client project, it will be available in the Console's project list. The following screenshot shows our new client project listed in the Developers Console: There is an option available to enable access to the YouTube API for our application. The following screenshot shows the YouTube API listed in the Developers Console. By default, the status of this API is OFF for the application. To enable this API for our application, we need to toggle the STATUS button to ON. The following screenshot shows the status of the YouTube API, which is ON for our application: To access YouTube API methods, we need to create an API key for our client application. You can find the option to create a public API key in the APIs & auth section. The following screenshot shows the Credentials subsection where you can create an API key: In the preceding screenshot, you can see a button to create a new API key. After clicking on this button, it provides some choices to create an API key, and after the successful creation of an API key, the key will be listed in the Credentials section. The following screenshot shows the API key generated for our application: Searching for a YouTube video In this section, we will learn about integrating a YouTube-related search video. YouTube Data API Version 3.0 is the new API to access YouTube data. It requires the API key that has been created in the previous section. The main steps that we have to follow to do a YouTube search are: After adding the YouTube Search button, click on it to trigger the search process. The script reads the data-booktitle attribute to get the title. This will serve as a keyword for the search. Check the following screenshot for the HTML markup showing the data-booktitle attribute: Then, it creates an AJAX request to make an asynchronous call to the YouTube API, and returns a promise object. After the successful completion of the AJAX call, the promise object is resolved successfully. Once the data is available, we fetch the jQuery template for the search results and compile it with a script function. We then link it to the search data returned by the AJAX call and generate the HTML markup for rendering. The base URL for the YouTube search is through a secure HTTP protocol, https://www.googleapis.com/youtube/v3/search. It takes different parameters as input for the search and filter criteria. Some of the important parameters are field and part. The part parameter The part parameter is about accessing a resource from a YouTube API. It really helps the application to choose resource components that your application actually uses. The following figure shows some of the resource components: The fields parameter The fields parameter is used to filter out the exact fields that are needed by the client application. This is really helpful to reduce the size of the response. 
For example, fields = items(id, snippet(title)) will result in a small footprint of a response containing an ID and a title.

The YouTube button markup

We have added a button in our jQuery product template to display the search option in the product. The following code shows the updated template:

<script id="aProductTemplate" type="text/x-jquery-tmpl">
  <div class="ts-product panel panel-default">
    <div class="panel-head">
      <div class="fb-like" data-href="${url}" data-layout="button_count"
           data-action="like" data-show-faces="true" data-share="true">
      </div>
    </div>
    <div class="panel-body">
      <span class="glyphicon glyphicon-certificate ts-costicon">
        <label>${cost}$</label>
      </span>
      <img class="img-responsive" src="${url}">
      <h5>${title}</h5>
    </div>
    <div class="panel-footer">
      <button type="button" class="btn btn-danger btn-block packt-youtube-button" data-bookTitle="${title}">YouTube Search</button>
      <button type="button" class="btn btn-info btn-block">Buy</button>
      <button type="button" class="btn btn-info btn-block twitme" data-bookTitle="${title}" data-imgURI="${url}">Tweet</button>
      <div class="g-plus-button">
        <div class="g-plusone" data-width="180" data-href="${url}"></div>
      </div>
    </div>
  </div>
</script>

The following screenshot shows the updated product markup with a YouTube button added to the product template:
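The search request itself is an ordinary jQuery AJAX call to the base URL mentioned earlier. The following is a minimal sketch of such a click handler, not the full implementation; it assumes YOUR_API_KEY is the key created in the Developers Console, and the callback simply logs the results instead of compiling the jQuery template:

$('.packt-youtube-button').on('click', function () {
    var title = $(this).attr('data-booktitle');
    $.getJSON('https://www.googleapis.com/youtube/v3/search', {
        part: 'snippet',
        q: title,
        maxResults: 5,
        fields: 'items(id,snippet(title))',
        key: 'YOUR_API_KEY'
    }).done(function (data) {
        // data.items holds the matching videos; compile the jQuery
        // template with this data and render the results here
        console.log(data.items);
    });
});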

Quick User Authentication Setup with Django

Packt
10 May 2016
21 min read
In this article by Asad Jibran Ahmed, the author of the book Django Project Blueprints, we are going to start with a simple blogging platform in Django. In recent years, Django has emerged as one of the clear leaders in web frameworks. When most people decide to start using a web framework, their searches lead them to either Ruby on Rails or Django. Both are mature, stable, and extensively used. It appears that the decision to use one or the other depends mostly on which programming language you're familiar with: Rubyists go with RoR, and Pythonistas go with Django. In terms of features, both can be used to achieve the same results, although they have different approaches to how things are done.

One of the most popular platforms these days is Medium, widely used by a number of high-profile bloggers. Its popularity stems from its elegant theme and simple-to-use interface. I'll walk you through creating a similar application in Django with a few surprise features that most blogging platforms don't have. This will give you a taste of things to come and show you just how versatile Django can be.

Before starting any software development project, it's a good idea to have a rough roadmap of what we would like to achieve. Here's a list of features that our blogging platform will have:

Users should be able to register an account and create their blogs
Users should be able to tweak the settings of their blogs
There should be a simple interface for users to create and edit blog posts
Users should be able to share their blog posts on other blogs on the platform

I know this seems like a lot of work, but Django comes with a couple of contrib packages that speed up our work considerably.

(For more resources related to this topic, see here.)

The contrib Packages

The contrib packages are a part of Django that contain some very useful applications that the Django developers decided should be shipped with Django. The included applications provide an impressive set of features, including some that we'll be using in this application:

Admin: This is a full-featured CMS that can be used to manage the content of a Django site. The Admin application is an important reason for the popularity of Django. We'll use this to provide an interface for site administrators to moderate and manage the data in our application.
Auth: This provides user registration and authentication without requiring us to do any work. We'll be using this module to allow users to sign up, sign in, and manage their profiles in our application.
Sites: This framework provides utilities that help us run multiple Django sites using the same code base. We'll use this feature to allow each user to have his own blog with content that can be shared between multiple blogs.

There are a lot more goodies in the contrib module. I suggest you take a look at the complete list at https://docs.djangoproject.com/en/stable/ref/contrib/#contrib-packages. I usually end up using at least three of the contrib packages in all my Django projects. They provide often-required features such as user registration and management, and free you to work on the core parts of your project, providing a solid foundation to build upon.

Setting up our development environment

Let's start by creating the directory structure for our project, setting up the virtual environment, and configuring some basic Django settings that need to be set up in every project. Let's call our blogging platform BlueBlog. To start a new project, you need to first open up your terminal program.
In Mac OS X, it is the built-in terminal. In Linux, the terminal is named separately for each distribution, but you should not have trouble finding it; try searching your program list for the word terminal and something relevant should show up. In Windows, the terminal program is called the Command Line. You’ll need to start the relevant program depending on your operating system. If you are using the Windows operating system, some things will need to be done differently from what the book shows. If you are using Mac OS X or Linux, the commands shown here should work without any problems. Open the relevant terminal program for your operating system and start by creating the directory structure for our project and cd into the root project directory using the following commands: > mkdir –p blueblog > cd blueblog Next, let’s create the virtual environment, install Django, and start our project: > pyvenv blueblogEnv > source blueblogEnv/bin/activate > pip install django > django-admin.py startproject blueblog src With this out of the way, we’re ready to start developing our blogging platform. Database settings Open up the settings found at $PROJECT_DIR/src/blueblog/settings.py in your favorite editor and make sure that the DATABASES settings variable matches this: DATABASES = { 'default': { 'ENGINE': 'django.db.backends.sqlite3', 'NAME': os.path.join(BASE_DIR, 'db.sqlite3'), } } In order to initialize the database file, run the following commands: > cd src > python manage.py migrate Staticfiles settings The last step in setting up our development environment is configuring the staticfiles contrib application. The staticfiles application provides a number of features that make it easy to manage the static files (css, images, and javascript) of your projects. While our usage will be minimal, you should look at the Django documentation for staticfiles in further detail as it is used quite heavily in most real-world Django projects. You can find the documentation at https://docs.djangoproject.com/en/stable/howto/static-files/. In order to set up the staticfiles application, we have to configure a few settings in the settings.py file. First, make sure that django.contrib.staticfiles is added to INSTALLED_APPS. Django should have done this by default. Next, set STATIC_URL to whatever URL you want your static files to be served from. I usually leave this to the default value, ‘/static/’. This is the URL that Django will put in your templates when you use the static template tag to get the path to a static file. Base template Next, let’s set up a base template that all the other templates in our application will inherit from. I prefer to have templates that are used by more than one application of a project in a directory named templates in the project source folder. To set this up, add os.path.join(BASE_DIR, 'templates') to the DIRS array of the TEMPLATES configuration dictionary in the settings file, and then create a directory named templates in $PROJECT_ROOT/src. Next, using your favorite text editor, create a file named base.html in the new folder with the following content: <html> <head> <title>BlueBlog</title> </head> <body> {% block content %} {% endblock %} </body> </html> Just as Python classes can inherit from other classes, Django templates can also inherit from other templates. Also, just as Python classes can have functions overridden by their subclasses, Django templates can also define blocks that children templates can override. 
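For instance, a child template could override the content block defined in base.html above along these lines (a generic sketch; the actual templates for our views are created later in the article):

{% extends "base.html" %}

{% block content %}
    <h1>Hello from a child template</h1>
{% endblock %}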
Our base.html template provides one block to inherit templates to override called content. The reason for using template inheritance is code reuse. We should put HTML that we want to be visible on every page of our site, such as headers, footers, copyright notices, meta tags, and so on, in the base template. Then, any template inheriting from it will automatically get all this common HTML included automatically, and we will only need to override the HTML code for the block that we want to customize. You’ll see this principal of creating and overriding blocks in base templates used throughout the projects in this book. User accounts With the database setup out of the way, let’s start creating our application. If you remember, the first thing on our list of features is to allow users to register accounts on our site. As I’ve mentioned before, we’ll be using the auth package from the Django contrib packages to provide user account features. In order to use the auth package, we’ll need to add it our INSTALLED_APPS list in the settings file (found at $PROJECT_ROOT/src/blueblog/settings.py). In the settings file, find the line defining INSTALLED_APPS and make sure that the ‘django.contrib.auth’ string is part of the list. It should be by default but if, for some reason, it’s not there, add it manually. You’ll see that Django has included the auth package and a couple of other contrib applications to the list by default. A new Django project includes these applications by default because almost all Django projects end up using these. If you need to add the auth application to the list, remember to use quotes to surround the application name. We also need to make sure that the MIDDLEWARE_CLASSES list contains django.contrib.sessions.middleware.SessionMiddleware, django.contrib.auth.middleware.AuthenticationMiddleware, and django.contrib.auth.middleware.SessionAuthenticationMiddleware. These middleware classes give us access to the logged in user in our views and also make sure that if I change the password for my account, I’m logged out from all other devices that I previously logged on to. As you learn more about the various contrib applications and their purpose, you can start removing any that you know you won’t need in your project. Now, let’s add the URLs, views, and templates that allow the users to register with our application. The user accounts app In order to create the various views, URLs, and templates related to user accounts, we’ll start a new application. To do so, type the following in your command line: > python manage.py startapp accounts This should create a new accounts folder in the src folder. We’ll add code that deals with user accounts in files found in this folder. To let Django know that we want to use this application in our project, add the application name (accounts) to the INSTALLED_APPS setting variable; making sure to surround it with quotes. Account registration The first feature that we will work on is user registration. Let’s start by writing the code for the registration view in accounts/views.py. Make the contents of views.py match what is shown here: from django.contrib.auth.forms import UserCreationForm from django.core.urlresolvers import reverse from django.views.generic import CreateView class UserRegistrationView(CreateView): form_class = UserCreationForm template_name = 'user_registration.html' def get_success_url(self): return reverse('home') I’ll explain what each line of this code is doing in a bit. 
First, I’d like you to get to a state where you can register a new user and see for yourself how the flow works. Next, we’ll create the template for this view. In order to create the template, you first need to create a new folder called templates in the accounts folder. The name of the folder is important as Django automatically searches for templates in folders of that name. To create this folder, just type the following: > mkdir accounts/templates Next, create a new file called user_registration.html in the templates folder and type in the following code: {% extends "base.html" %} {% block content %} <h1>Create New User</h1> <form action="" method="post">{% csrf_token %} {{ form.as_p }} <input type="submit" value="Create Account" /> </form> {% endblock %} Finally, remove the existing code in blueblog/urls.py and replace it with this: from django.conf.urls import include from django.conf.urls import url from django.contrib import admin from django.views.generic import TemplateView from accounts.views import UserRegistrationView urlpatterns = [ url(r'^admin/', include(admin.site.urls)), url(r'^$', TemplateView.as_view(template_name='base.html'), name='home'), url(r'^new-user/$', UserRegistrationView.as_view(), name='user_registration'), ] That’s all the code that we need to get user registration in our project! Let’s do a quick demonstration. Run the development server by typing as follows: > python manage.py runserver In your browser, visit http://127.0.0.1:8000/new-user/ and you’ll see a user registration form. Fill this in and click on submit. You’ll be taken to a blank page on successful registration. If there are some errors, the form will be shown again with the appropriate error messages. Let’s verify that our new account was indeed created in our database. For the next step, we will need to have an administrator account. The Django auth contrib application can assign permissions to user accounts. The user with the highest level of permission is called the super user. The super user account has free reign over the application and can perform any administrator actions. To create a super user account, run this command: > python manage.py createsuperuser As you already have the runserver command running in your terminal, you will need to quit it first by pressing Ctrl + C in the terminal. You can then run the createsuperuser command in the same terminal. After running the createsuperuser command, you’ll need to start the runserver command again to browse the site. If you want to keep the runserver command running and run the createsuperuser command in a new terminal window, you will need to make sure that you activate the virtual environment for this application by running the same source blueblogEnv/bin/activate command that we ran earlier when we created our new project. After you have created the account, visit http://127.0.0.1:8000/admin/ and log in with the admin account. You will see a link titled Users. Click on this, and you should see a list of users registered in our app. It will include the user that you just created. Congrats! In most other frameworks, getting to this point with a working user registration feature would take a lot more effort. Django, with it’s batteries included approach, allows us to do the same with a minimum of effort. Next, I’ll explain what each line of code that you wrote does. 
Generic views Here’s the code for the user registration view again: class UserRegistrationView(CreateView): form_class = UserCreationForm template_name = 'user_registration.html' def get_success_url(self): return reverse('home') Our view is pretty short for something that does such a lot of work. That’s because instead of writing code from scratch to handle all the work, we use one of the most useful features of Django, Generic Views. Generic views are base classes included with Django that provide functionality commonly required by a lot of web apps. The power of generic views comes from the ability to customize them to a great degree with ease. You can read more about Django generic views in the documentation available at https://docs.djangoproject.com/en/1.9/topics/class-based-views/. Here, we’re using the CreateView generic view. This generic view can display ModelForm using a template and, on submission, can either redisplay the page with errors if the form data was invalid or call the save method on the form and redirect the user to a configurable URL. CreateView can be configured in a number of ways. If you want ModelForm to be created automatically from some Django model, just set the model attribute to the model class, and the form will be generated automatically from the fields of the model. If you want to have the form only show certain fields from the model, use the fields attribute to list the fields that you want, exactly like you’d do while creating ModelForm. In our case, instead of having ModelForm generated automatically, we’re providing one of our own, UserCreationForm. We do this by setting the form_class attribute on the view. This form, which is part of the auth contrib app, provides the fields and a save method that can be used to create a new user. You’ll see that this theme of composing solutions from small reusable parts provided by Django is a common practice in Django web app development and, in my opinion, one of the best features of the framework. Finally, we define a get_success_url function that does a simple reverse URL and returns the generated URL. CreateView calls this function to get the URL to redirect the user to when a valid form is submitted and saved successfully. To get something up and running quickly, we left out a real success page and just redirected the user to a blank page. We’ll fix this later. Templates and URLs The template, which extends the base template that we created earlier, simply displays the form passed to it by CreateView using the form.as_p method, which you might have seen in the simple Django projects you may have worked on before. The urls.py file is a bit more interesting. You should be familiar with most of it—the parts where we include the admin site URLs and the one where we assign our view a URL. It’s the usage of TemplateView that I want to explain here. Like CreateView, TemplateView is another generic view provided to us by Django. As the name suggests, this view can render and display a template to the user. It has a number of customization options. The most important one is template_name, which tells it which template to render and display to the user. We could have created another view class that subclassed TemplateView and customized it by setting attributes and overriding functions like we did for our registration view. However, I wanted to show you another method of using a generic view in Django. 
If you only need to customize some basic parameters of a generic view—in this case, we only wanted to set the template_name parameter of the view—you can just pass the values as key=value pairs as function keyword arguments to the as_view method of the class when including it in the urls.py file. Here, we pass the template name that the view renders when the user accesses its URL. As we just needed a placeholder URL to redirect the user to, we simply use the blank base.html template. This technique of customizing generic views by passing key/value pairs only makes sense when you’re interested in customizing very basic attributes, like we do here. In case you want more complicated customizations, I advice you to subclass the view; otherwise, you will quickly get messy code that is difficult to maintain. Login and logout With registration out of the way, let’s write code to provide users with the ability to log in and log out. To start out, the user needs some way to go to the login and registration pages from any page on the site. To do this, we’ll need to add header links to our template. This is the perfect opportunity to demonstrate how template inheritance can lead to much cleaner and less code in our templates. Add the following lines right after the body tag to our base.html file: {% block header %} <ul> <li><a href="">Login</a></li> <li><a href="">Logout</a></li> <li><a href="{% url "user_registration"%}">Register Account</a></li> </ul> {% endblock %} If you open the home page for our site now (at http://127.0.0.1:8000/), you should see that we now have three links on what was previously a blank page. It should look similar to the following screenshot: Click on the Register Account link. You’ll see the registration form we had before and the same three links again. Note how we only added these links to the base.html template. However, as the user registration template extends the base template, it got those links without any effort on our part. This is where template inheritance really shines. You might have noticed that href for the login/logout links is empty. Let’s start with the login part. Login view Let’s define the URL first. In blueblog/urls.py, import the login view from the auth app: from django.contrib.auth.views import login Next, add this to the urlpatterns list: url(r'^login/$', login, {'template_name': 'login.html'}, name='login'), Then, create a new file in accounts/templates called login.html. Put in the following content: {% extends "base.html" %} {% block content %} <h1>Login</h1> <form action="{% url "login" %}" method="post">{% csrf_token %} {{ form.as_p }} <input type="hidden" name="next" value="{{ next }}" /> <input type="submit" value="Submit" /> </form> {% endblock %} Finally, open up blueblog/settings.py file and add the following line to the end of the file: LOGIN_REDIRECT_URL = '/' Let’s go over what we’ve done here. First, notice that instead of creating our own code to handle the login feature, we used the view provided by the auth app. We import it using from django.contrib.auth.views import login. Next, we associate it with the login/ URL. If you remember the user registration part, we passed the template name to the home page view as a keyword parameter in the as_view() function. This approach is used for class-based views. For old-style view functions, we can pass a dictionary to the url function that is passed as keyword arguments to the view. Here, we use the template that we created in login.html. 
If you look at the documentation for the login view (https://docs.djangoproject.com/en/stable/topics/auth/default/#django.contrib.auth.views.login), you’ll see that on successfully logging in, it redirects the user to settings.LOGIN_REDIRECT_URL. By default, this setting has a value of /accounts/profile/. As we don’t have such a URL defined, we change the setting to point to our home page URL instead. Next, let’s define the logout view. Logout view In blueblog/urls.py, import the logout view: from django.contrib.auth.views import logout Add the following to the urlpatterns list: url(r'^logout/$', logout, {'next_page': '/login/'}, name='logout'), That’s it. The logout view doesn’t need a template; it just needs to be configured with a URL to redirect the user to after logging them out. We just redirect the user back to the login page. Navigation links Having added the login/logout view, we need to make the links we added in our navigation menu earlier take the user to those views. Change the list of links that we had in templates/base.html to the following: <ul> {% if request.user.is_authenticated %} <li><a href="{% url "logout" %}">Logout</a></li> {% else %} <li><a href="{% url "login" %}">Login</a></li> <li><a href="{% url "user_registration"%}">Register Account</a></li> {% endif %} </ul> This will show the Login and Register Account links to the user if they aren’t already logged in. If they are logged in, which we check using the request.user.is_authenticated function, they are only shown the Logout link. You can test all of these links yourself and see how little code was needed to make such a major feature of our site work. This is all possible because of the contrib applications that Django provides. Summary In this article we started with a simple blogging platform in Django. We also had a look at setting up the Database, Staticfiles and Base templates. We have also created a user account app with registration and navigation links in it. Resources for Article: Further resources on this subject: Setting up a Complete Django E-commerce store in 30 minutes [article] "D-J-A-N-G-O... The D is silent." - Authentication in Django [article] Test-driven API Development with Django REST Framework [article]

Enhancing the User Interface with Ajax

Packt
22 Oct 2009
32 min read
Since our project is a Web 2.0 application, it should be heavily focused on the user experience. The success of our application depends on getting users to post and share content on it. Therefore, the user interface of our application is one of our major concerns. This article will improve the interface of our application by introducing Ajax features, making it more user-friendly and interactive. Ajax and Its Advantages Ajax, which stands for Asynchronous JavaScript and XML, consists of the following technologies: HTML and CSS for structuring and styling information. JavaScript for accessing and manipulating information dynamically. XMLHttpRequest, which is an object provided by modern browsers for exchanging data with the server without reloading the current web page. A format for transferring data between the client and server. XML is sometimes used, but it could be HTML, plain text, or a JavaScript-based format called JSON. Ajax technologies let code on the client-side exchange data with the server behind the scenes, without having to reload the entire page each time the user makes a request. By using Ajax, web developers are able to increase the interactivity and usability of web pages. Ajax offers the following advantages when implemented in the right places: Better user experience. With Ajax, the user can do a lot without refreshing the page, which brings web applications closer to regular desktop applications. Better performance. By exchanging only the required data with the server, Ajax saves bandwidth and increases the application's speed. There are numerous examples of web applications that use Ajax. Google Maps and Gmail are perhaps two of the most prominent examples. In fact, these two applications played an important role in spreading the adoption of Ajax, because of the success that they enjoyed. What sets Gmail from other web mail services is its user interface, which enables users to manage their emails interactively without waiting for a page reload after every action. This creates a better user experience and makes Gmail feel like a responsive and feature-rich application rather than a simple web site. This article explains how to use Ajax with Django so as to make our application more responsive and user friendly. We are going to implement three of the most common Ajax features found in web applications today. But before that, we will learn about the benefits of using an Ajax framework as opposed to working with raw JavaScript functions. Using an Ajax Framework in Django In this section we will choose and install an Ajax framework in our application. This step isn't entirely necessary when using Ajax in Django, but it can greatly simplify working with Ajax. There are many advantages to using an Ajax framework: JavaScript implementations vary from browser to browser. Some browsers provide more complete and feature-rich implementations, whereas others contain implementations that are incomplete or don't adhere to standards. Without an Ajax framework, the developer must keep track of browser support for the JavaScript features that they are using, and work around the limitations that are present in some browser implementations of JavaScript. On the other hand, when using an Ajax framework, the framework takes care of this for us; it abstracts access to the JavaScript implementation and deals with the differences and quirks of JavaScript across browsers. This way, we concentrate on developing features instead of worrying about browser differences and limitations. 
The standard set of JavaScript functions and classes is a bit lacking for fully fledged web application development. Various common tasks require many lines of code even though they could have been wrapped in simple functions. Therefore, even if you decide not to use an Ajax framework, you will find yourself having to write a library of functions that encapsulates JavaScript facilities and makes them more usable. But why reinvent the wheel when there are many excellent Open Source libraries already available? Ajax frameworks available on the market today range from comprehensive solutions that provide server-side and client-side components to light-weight client-side libraries that simplify working with JavaScript. Given that we are already using Django on the server-side, we only want a client-side framework. In addition, the framework should be easy to integrate with Django without requiring additional dependencies. And finally, it is preferable to pick a light and fast framework. There are many excellent frameworks that fulfil our requirements, such as Prototype, the Yahoo! UI Library and jQuery. I have worked with them all and they are all great. But for our application, I'm going to pick jQuery, because it's the lightest of the three. It also enjoys a very active development community and a wide range of plugins. If you already have experience with another framework, you can continue using it during this article. It is true that you will have to adapt the JavaScript code in this article to your framework, but Django code on the server-side will remain the same no matter which framework you choose. Now that you know the benefits of using an Ajax framework, we will move to installing jQuery into our project. Downloading and Installing jQuery One of the advantages of jQuery is that it consists of a single light-weight file. To download it, head to http://jquery.com/ and choose the latest version (1.2.3 at the time of writing). You will find two choices: Uncompressed version: This is the standard version that I recommend you to use during development. You will get a .js file with the library's code in it. Compressed version: You will also get a .js file if you download this version. However, the code will look obfuscated. jQuery developers produce this version by applying many operations on the uncompressed .js file to reduce its size, such as removing white spaces and renaming variables, as well as many other techniques. This version is useful when you deploy your application, because it offers exactly the same features as the uncompressed one, but with a smaller file size. I recommend the uncompressed version during development because you may want to look into jQuery's code and see how a particular method works. However, the two versions offer exactly the same set of features, and switching from one to another is just a matter of replacing one file. Once you have the jquery-xxx.js file (where xxx is the version number), rename it to jquery.js and copy it to the site_media directory of our project (Remember that this directory holds static files which are not Python code). Next, you will have to include this file in the base template of our site. This will make jQuery available to all of our project pages. 
To do so, open templates/base.html and add the highlighted code to the head section in it:

<head>
  <title>Django Bookmarks | {% block title %}{% endblock %}</title>
  <link rel="stylesheet" href="/site_media/style.css" type="text/css" />
  <script type="text/javascript" src="/site_media/jquery.js"></script>
</head>

To add your own JavaScript code to an HTML page, you can either put the code in a separate .js file and link it to the HTML page by using the script tag as above, or write the code directly in the body of a script tag:

<script type="text/javascript">
  // JavaScript code goes here.
</script>

The first method, however, is recommended over the second one, because it helps keep the source tree organized by putting HTML and JavaScript code in different files. Since we are going to write our own .js files during this article, we need a way to link .js files to templates without having to edit base.html every time. We will do this by creating a template block in the head section of the base.html template. When a particular page wants to include its own JavaScript code, this block may be overridden to add the relevant script tag to the page. We will call this block external, because it is used to link external files to pages. Open templates/base.html and modify its head section as follows:

<head>
  <title>Django Bookmarks | {% block title %}{% endblock %}</title>
  <link rel="stylesheet" href="/site_media/style.css" type="text/css" />
  <script type="text/javascript" src="/site_media/jquery.js"></script>
  {% block external %}{% endblock %}
</head>

And we have finished. From now on, when a view wants to use some JavaScript code, it can link a JavaScript file to its template by overriding the external template block. Before we start to implement Ajax enhancements in our project, let's go through a quick introduction to the jQuery framework.

The jQuery JavaScript Framework

jQuery is a library of JavaScript functions that facilitates interacting with HTML documents and manipulating them. The library is designed to reduce the time and effort spent on writing code and achieving cross-browser compatibility, while at the same time taking full advantage of what JavaScript offers to build interactive and responsive web applications. The general workflow of using jQuery consists of two steps:

Select an HTML element or a group of elements to work on.
Apply a jQuery method to the selected group.

Element Selectors

jQuery provides a simple approach to select elements; it works by passing a CSS selector string to a function called $. Here are some examples to illustrate the usage of this function:

If you want to select all anchor (<a>) elements on a page, you can use the following function call: $("a")
If you want to select anchor elements which have the .title CSS class, use $("a.title")
To select an element whose ID is #nav, you can use $("#nav")
To select all list item (<li>) elements inside #nav, use $("#nav li")

And so on. The $() function constructs and returns a jQuery object. After that, you can call methods on this object to interact with the selected HTML elements.

jQuery Methods

jQuery offers a variety of methods to manipulate HTML documents. You can hide or show elements, attach event handlers to events, modify CSS properties, manipulate the page structure and, most importantly, perform Ajax requests. Before we go through some of the most important methods, I highly recommend using the Firefox web browser and an extension called Firebug to experiment with jQuery.
This extension provides a JavaScript console that is very similar to the interactive Python console. With it, you can enter JavaScript statements and see their output directly without having to create and edit files. To obtain Firebug, go to http://www.getfirebug.com/, and click on the install link. Depending on the security settings of Firefox, you may need to approve the website as a safe source of extensions. If you do not want to use Firefox for any reason, Firebug's website offers a "lite" version of the extension for other browsers in the form of a JavaScript file. Download the file to the site_media directory, and then include it in the templates/base.html template as we did with jquery.js:

<head>
  <title>Django Bookmarks | {% block title %}{% endblock %}</title>
  <link rel="stylesheet" href="/site_media/style.css" type="text/css" />
  <script type="text/javascript" src="/site_media/firebug.js"></script>
  <script type="text/javascript" src="/site_media/jquery.js"></script>
  {% block external %}{% endblock %}
</head>

To experiment with the methods outlined in this section, launch the development server and navigate to the application's main page. Open the Firebug console by pressing F12, and try selecting elements and manipulating them.

Hiding and Showing Elements

Let's start with something simple. To hide an element on the page, call the hide() method on it. To show it again, call the show() method. For example, try this on the navigation menu of your application:

>>> $("#nav").hide()
>>> $("#nav").show()

You can also animate the element while hiding and showing it. Try the fadeOut(), fadeIn(), slideUp() or slideDown() methods to see two of these animated effects. Of course, these methods (like all other jQuery methods) also work if you select more than one element at once. For example, if you open your user page and enter the following method call into the Firebug console, all of the tags will disappear:

>>> $('.tags').slideUp()

Accessing CSS Properties and HTML Attributes

Next, we will learn how to change CSS properties of elements. jQuery offers a method called css() for performing CSS operations. If you call this method with a CSS property name passed as a string, it returns the value of this property:

>>> $("#nav").css("display")
Result: "block"

If you pass a second argument to this method, it sets the specified CSS property of the selected element to the additional argument:

>>> $("#nav").css("font-size", "0.8em")
Result: <div id="nav" style="font-size: 0.8em;">

In fact, you can manipulate any HTML attribute and not just CSS properties. To do so, use the attr() method, which works in a similar way to css(). Calling it with an attribute name returns the attribute value, whereas calling it with an attribute name/value pair sets the attribute to the passed value. To test this, go to the bookmark submission form and enter the following into the console:

>>> $("input").attr("size", "48")

Results:
<input id="id_url" type="text" size="48" name="url">
<input id="id_title" type="text" size="48" name="title">
<input id="id_tags" type="text" size="48" name="tags">

(Output may differ slightly depending on the versions of Firefox and Firebug.)

This will change the size of all input elements on the page to 48 at once. In addition, there are shortcut methods to get and set commonly used attributes, such as val(), which returns the value of an input field when called without arguments, and sets this value to an argument if you pass one.
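For instance, here is a small, hypothetical console session (it assumes the bookmark submission form from the previous example is still open, so that the #id_url field exists; the example URL is only an illustration):

>>> $("#id_url").val()
>>> $("#id_url").val("http://www.example.com/")

The first call returns whatever is currently typed into the URL field, and the second replaces it with the given string.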
There is also html(), which controls the HTML code inside an element. Finally, there are two methods that can be used to attach or detach a CSS class to an element; they are called addClass() and removeClass(). A third method is provided to toggle a CSS class, and it is called toggleClass(). All of these class methods take the name of the class to be changed as a parameter.

Manipulating HTML Documents

Now that you are comfortable with manipulating HTML elements, let's see how to add new elements or remove existing elements. To insert HTML code before an element, use the before() method, and to insert code after an element, use the after() method. Notice how jQuery methods are well-named and very easy to remember! Let's test these methods by inserting parentheses around tag lists on the user page. Open your user page and enter the following in the Firebug console:

>>> $(".tags").before("<strong>(</strong>")
>>> $(".tags").after("<strong>)</strong>")

You can pass any string you want to before() or after(); the string may contain plain text, one HTML element or more. These methods offer a very flexible way to dynamically add HTML elements to an HTML document.

If you want to remove an element, use the remove() method. For example:

>>> $("#nav").remove()

Not only does this method hide the element, it also removes it completely from the document tree. If you try to select the element again after using the remove() method, you will get an empty set:

>>> $("#nav")
Result: []

Of course, this only removes the elements from the current instance of the page. If you reload the page, the elements will appear again.

Traversing the Document Tree

Although CSS selectors offer a very powerful way to select elements, there are times when you want to traverse the document tree starting from a particular element. For this, jQuery provides several methods. The parent() method returns the parent of the currently selected element. The children() method returns all the immediate children of the selected element. Finally, the find() method returns all the descendants of the currently selected element. All of these methods take an optional CSS selector string to limit the result to elements that match the selector. For example, $("#nav").find("li") returns all the <li> descendants of #nav. If you want to access an individual element of a group, use the get() method, which takes the index of the element as a parameter. $("li").get(0), for example, returns the first <li> element out of the selected group.

Handling Events

Next, we will learn about event handlers. An event handler is a JavaScript function that is invoked when a particular event happens, for example, when a button is clicked or a form is submitted. jQuery provides a large set of methods to attach handlers to events; the events of particular interest in our application are mouse clicks and form submissions. To handle the event of clicking on an element, we select this element and call the click() method on it. This method takes an event handler function as a parameter. Let's try this using the Firebug console. Open the main page of the application, and insert a button after the welcome message:

>>> $("p").after("<button id=\"test-button\">Click me!</button>")

(Notice that we had to escape the quotation marks in the string passed to the after() method.)
If you try to click this button, nothing will happen, so let's attach an event handler to it:

>>> $("#test-button").click(function () { alert("You clicked me!"); })

Now, when you click the button, a message box will appear. How did this work? The argument that we passed to click() may look a bit complicated, so let's examine it again:

function () { alert("You clicked me!"); }

This appears to be a function declaration, but without a function name. Indeed, this construct creates what is called an anonymous function in JavaScript terminology, and it is used when you need to create a function on the fly and pass it as an argument to another function. We could have avoided using an anonymous function and declared the event handler as a regular function:

>>> function handler() { alert("You clicked me!"); }
>>> $("#test-button").click(handler)

The above code achieves the same effect, but the first version is more concise and compact. I highly recommend getting used to anonymous functions in JavaScript (if you are not already), as I'm sure you will appreciate this construct and find it more readable after using it for a while.

Handling form submissions is very similar to handling mouse clicks. First, you select the form, and then you call the submit() method on it and pass the handler as an argument. We will use this method many times while adding Ajax features to our project in later sections.

Sending Ajax Requests

Before we finish this section, let's talk about Ajax requests. jQuery provides many ways to send Ajax requests to the server. There is, for example, the load() method, which takes a URL and loads the page at this URL into the selected element. There are also methods for sending GET or POST requests and receiving the results. We will examine these methods in more depth while implementing Ajax features in our project.

What Next?

This wraps up our quick introduction to jQuery. The information provided in this section will be enough to continue with this article, and once you finish the article, you will be able to implement many interesting Ajax features on your own. But please keep in mind that this jQuery introduction is only the tip of the iceberg. If you want a comprehensive treatment of the jQuery framework, I highly recommend the book "Learning jQuery" from Packt Publishing, as it covers jQuery in much more detail. You can find out more about the book at http://www.packtpub.com/jQuery.

Implementing Live Searching of Bookmarks

We will start introducing Ajax into our application by implementing live searching. The idea behind this feature is simple: when the user types a few keywords into a text field and clicks search, a script works behind the scenes to fetch search results and present them on the same page. The search page does not reload, which saves bandwidth and provides a better, more responsive user experience. Before we start implementing this, we need to keep in mind an important rule while working with Ajax: write your application so that it works without Ajax, and then introduce Ajax to it. If you do so, you ensure that everyone will be able to use your application, including users who don't have JavaScript enabled and those who use browsers without Ajax support.

Implementing Searching

So before we work with Ajax, let's write a simple view that searches bookmarks by title.
First of all, we need to create a search form, so open bookmarks/forms.py and add the following class to it:

class SearchForm(forms.Form):
    query = forms.CharField(
        label='Enter a keyword to search for',
        widget=forms.TextInput(attrs={'size': 32})
    )

As you can see, it's a pretty straightforward form class with only one text field. This field will be used by the user to enter search keywords. Next, let's create a view for searching. Open bookmarks/views.py and enter the following code into it:

def search_page(request):
    form = SearchForm()
    bookmarks = []
    show_results = False
    if request.GET.has_key('query'):
        show_results = True
        query = request.GET['query'].strip()
        if query:
            form = SearchForm({'query': query})
            bookmarks = Bookmark.objects.filter(title__icontains=query)[:10]
    variables = RequestContext(request, {
        'form': form,
        'bookmarks': bookmarks,
        'show_results': show_results,
        'show_tags': True,
        'show_user': True
    })
    return render_to_response('search.html', variables)

Apart from a couple of method calls, the view should be very easy to understand. We first initialize three variables: form, which holds the search form; bookmarks, which holds the bookmarks that we will display in the search results; and show_results, which is a Boolean flag. We use this flag to distinguish between two cases:

The search page was requested without a search query. In this case, we shouldn't display any search results, not even a "No bookmarks found" message.
The search page was requested with a search query. In this case, we display the search results, or a "No bookmarks found" message if the query does not match any bookmarks.

We need the show_results flag because the bookmarks variable alone is not enough to distinguish between the above two cases. bookmarks will be empty when the search page is requested without a query, and it will also be empty when the query does not match any bookmarks. Next, we check whether a query was sent by calling the has_key method on the request.GET dictionary:

if request.GET.has_key('query'):
    show_results = True
    query = request.GET['query'].strip()
    if query:
        form = SearchForm({'query': query})
        bookmarks = Bookmark.objects.filter(title__icontains=query)[:10]

We use GET instead of POST here because the search form does not create or change data; it merely queries the database. The general rule is to use GET with forms that query the database, and POST with forms that create, change or delete records from the database. If a query was submitted by the user, we set show_results to True and call strip() on the query string to ensure that it contains non-whitespace characters before we proceed with searching. If this is indeed the case, we bind the form to the query and retrieve a list of bookmarks that contain the query in their title.

Searching is done by using a method called filter in Bookmark.objects. This is the first time that we have used this method; you can think of it as the equivalent of a SELECT statement in Django models. It receives the search criteria in its arguments and returns the search results. The name of each argument must adhere to the following naming convention:

field__operator

Note that field and operator are separated by two underscores: field is the name of the field that we want to search by, and operator is the lookup method that we want to use. Here is a list of the commonly used operators:

exact: The value of the argument is an exact match of the field.
contains: The field contains the value of the argument.
startswith: The field starts with the value of the argument.
lt: The field is less than the value of the argument.
gt: The field is greater than the value of the argument.

Also, there are case-insensitive versions of the first three operators: iexact, icontains and istartswith.

After this explanation of the filter method, let's get back to our search view. We use the icontains operator to get a list of bookmarks that match the query, and retrieve the first ten items using Python's list slicing syntax. Finally, we pass all the variables to a template called search.html to render the search page. Now create the search.html template in the templates directory with the following content:

{% extends "base.html" %}
{% block title %}Search Bookmarks{% endblock %}
{% block head %}Search Bookmarks{% endblock %}
{% block content %}
<form id="search-form" method="get" action=".">
  {{ form.as_p }}
  <input type="submit" value="search" />
</form>
<div id="search-results">
  {% if show_results %}
    {% include 'bookmark_list.html' %}
  {% endif %}
</div>
{% endblock %}

The template consists of familiar aspects that we have used before. We build the results list by including the bookmark_list.html template, as we did when building the user and tag pages. We gave the search form an ID, and rendered the search results in a div identified by another ID, so that we can interact with them using JavaScript later. Notice how many times the include template tag saved us from writing additional code? It also lets us modify the look of the bookmarks list by editing a single file. This Django template feature is indeed very helpful in organizing and managing templates. Before you test the new view, add an entry for it in urls.py:

urlpatterns = patterns('',
    # Browsing
    (r'^$', main_page),
    (r'^user/(\w+)/$', user_page),
    (r'^tag/([^\s]+)/$', tag_page),
    (r'^tag/$', tag_cloud_page),
    (r'^search/$', search_page),
)

Now test the search view by navigating to http://127.0.0.1:8000/search/ and experiment with it. You can also add a link to it in the navigation menu if you want; edit templates/base.html and add the highlighted code:

<div id="nav">
  <a href="/">home</a> |
  {% if user.is_authenticated %}
    <a href="/save/">submit</a> |
    <a href="/search/">search</a> |
    <a href="/user/{{ user.username }}/">{{ user.username }}</a> |
    <a href="/logout/">logout</a>
  {% else %}
    <a href="/login/">login</a> |
    <a href="/register/">register</a>
  {% endif %}
</div>

We now have a functional (albeit very basic) search page. Next, we will enhance it with live searching; thanks to our modular code, the task will turn out to be much simpler than it may seem.

Implementing Live Searching

To implement live searching, we need to do two things:

Intercept and handle the event of submitting the search form. This can be done using the submit() method of jQuery.
Use Ajax to load the search results behind the scenes, and insert them into the page. This can be done using the load() method of jQuery, as we will see next.

jQuery offers a method called load() that retrieves a page from the server and inserts its contents into the selected element. In its simplest form, the function takes the URL of the remote page to be loaded as a parameter. First of all, let's modify our search view a little so that it only returns search results, without the rest of the search page, when it receives an additional GET variable called ajax. We do so to enable the JavaScript code on the client side to easily retrieve the search results without the rest of the search page's HTML.
This can be done by simply using the bookmark_list.html template instead of search.html when request.GET contains the key ajax. Open bookmarks/views.py and modify search_page (towards the end) so that it becomes as follows:

def search_page(request):
    [...]
    variables = RequestContext(request, {
        'form': form,
        'bookmarks': bookmarks,
        'show_results': show_results,
        'show_tags': True,
        'show_user': True
    })
    if request.GET.has_key('ajax'):
        return render_to_response('bookmark_list.html', variables)
    else:
        return render_to_response('search.html', variables)

Next, create a file called search.js in the site_media directory and link it to templates/search.html like this:

{% extends "base.html" %}
{% block external %}
  <script type="text/javascript" src="/site_media/search.js"></script>
{% endblock %}
{% block title %}Search Bookmarks{% endblock %}
{% block head %}Search Bookmarks{% endblock %}
[...]

Now for the fun part! Let's create a function that loads search results and inserts them into the corresponding div. Write the following code into site_media/search.js:

function search_submit() {
    var query = $("#id_query").val();
    $("#search-results").load(
        "/search/?ajax&query=" + encodeURIComponent(query)
    );
    return false;
}

Let's go through this function line by line:

The function first gets the query string from the text field using the val() method.
We use the load() method to get the search results from the search_page view, and insert them into the #search-results div.
The request URL is constructed by first calling encodeURIComponent on query, which works much like the urlencode filter we used in Django templates. Calling this function is important to ensure that the constructed URL remains valid even if the user enters special characters into the text field, such as &. After escaping query, we concatenate it with /search/?ajax&query=. This URL invokes the search_page view and passes the GET variables ajax and query to it. The view returns the search results, and the load() method in turn loads them into the #search-results div.
We return false from the function to tell the browser not to submit the form after calling our handler. If we don't return false, the browser will continue to submit the form as usual, and we don't want that.

One little detail remains: where and when do we attach search_submit to the submit event of the search form? A rule of thumb when writing JavaScript is that we cannot manipulate elements in the document tree before the document finishes loading. Therefore, the code that attaches our handler must run as soon as the search page has loaded. Fortunately for us, jQuery provides a method to execute a function when the HTML document is loaded. Let's utilize it by appending the following code to site_media/search.js:

$(document).ready(function () {
    $("#search-form").submit(search_submit);
});

$(document) selects the document element of the current page. Notice that there are no quotation marks around document; it's a variable provided by the browser, not a string. ready() is a method that takes a function and executes it as soon as the selected element finishes loading. So, in effect, we are telling jQuery to execute the passed function as soon as the HTML document is loaded. We pass an anonymous function to the ready() method; this function simply binds search_submit to the submit event of the form #search-form. That's it. We've implemented live searching in fewer than fifteen lines of code.
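As an aside, the same behavior could be implemented with jQuery's lower-level $.get() method instead of load(). The following is only a sketch of that alternative (it assumes the same view, URL and element IDs as above), not something our application needs:

function search_submit() {
    var query = $("#id_query").val();
    // Ask the view for the results fragment only (it checks for the "ajax" key)
    // and insert the returned HTML into the results div ourselves.
    $.get("/search/", {ajax: "", query: query}, function (data) {
        $("#search-results").html(data);
    });
    // Prevent the browser from submitting the form normally.
    return false;
}

load() remains the more concise choice here, but $.get() hands you the response as data, which is useful when you want to process it before (or instead of) inserting it into the page.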
To test the new functionality, navigate to http://127.0.0.1:8000/search/, submit queries, and notice how the results are displayed without reloading the page.

The information covered in this section can be applied to any form that needs to be processed behind the scenes without reloading the page. You can, for example, create a comment form with a preview button that loads the preview into the same page without reloading it. In the next section, we will enhance the user page to let users edit their bookmarks in place, without navigating away from the user page.

Editing Bookmarks in Place

Editing of posted content is a very common task on websites. It's usually implemented by offering an edit link next to the content. When clicked, this link takes the user to a form located on another page where the content can be edited. When the user submits the form, they are redirected back to the content page. Imagine, on the other hand, that you could edit content without navigating away from the content page. When you click edit, the content is replaced with a form. When you submit the form, it disappears and the updated content appears in its place. Everything happens on the same page; edit form rendering and submission are done using JavaScript and Ajax. Wouldn't such a workflow be more intuitive and responsive?

The technique described above is called in-place editing. It is now finding its way into web applications and becoming more common. We will implement this feature in our application by letting the user edit their bookmarks in place on the user page. Since our application doesn't support the editing of bookmarks yet, we will implement this first, and then modify the editing procedure to work in place.

Implementing Bookmark Editing

We already have most of the parts that are needed to implement bookmark editing, thanks to the get_or_create method provided by data models. This little detail greatly simplifies the implementation of bookmark editing. Here is what we need to do:

We pass the URL of the bookmark that we want to edit as a GET variable named url to the bookmark_save_page view.
We modify bookmark_save_page so that it populates the fields of the bookmark form if it receives this GET variable. The form is populated with the data of the bookmark that corresponds to the passed URL.
When the populated form is submitted, the bookmark will be updated, as we explained earlier, because it will look like the user submitted the same URL another time.

Before we implement the technique described above, let's reduce the size of bookmark_save_page by moving the part that saves a bookmark to a separate function. We will call this function _bookmark_save. The underscore at the beginning of the name tells Python not to import this function when the views module is imported. The function expects a request and a valid form object as parameters; it saves a bookmark out of the form data, and returns this bookmark. Open bookmarks/views.py and create the following function; you can cut and paste the code from bookmark_save_page if you like, as we are not making any changes to it except for the return statement at the end.

def _bookmark_save(request, form):
    # Create or get link.
    link, dummy = Link.objects.get_or_create(url=form.clean_data['url'])
    # Create or get bookmark.
    bookmark, created = Bookmark.objects.get_or_create(
        user=request.user,
        link=link
    )
    # Update bookmark title.
    bookmark.title = form.clean_data['title']
    # If the bookmark is being updated, clear old tag list.
    if not created:
        bookmark.tag_set.clear()
    # Create new tag list.
    tag_names = form.clean_data['tags'].split()
    for tag_name in tag_names:
        tag, dummy = Tag.objects.get_or_create(name=tag_name)
        bookmark.tag_set.add(tag)
    # Save bookmark to database and return it.
    bookmark.save()
    return bookmark

Now, in the same file, replace the code that you removed from bookmark_save_page with a call to _bookmark_save:

@login_required
def bookmark_save_page(request):
    if request.method == 'POST':
        form = BookmarkSaveForm(request.POST)
        if form.is_valid():
            bookmark = _bookmark_save(request, form)
            return HttpResponseRedirect(
                '/user/%s/' % request.user.username
            )
    else:
        form = BookmarkSaveForm()
    variables = RequestContext(request, {
        'form': form
    })
    return render_to_response('bookmark_save.html', variables)

The current logic in bookmark_save_page works like this:

if there is POST data:
    Validate and save bookmark.
    Redirect to user page.
else:
    Create an empty form.
Render page.

To implement bookmark editing, we need to slightly modify the logic as follows:

if there is POST data:
    Validate and save bookmark.
    Redirect to user page.
else if there is a URL in GET data:
    Create a form and populate it with the URL's bookmark.
else:
    Create an empty form.
Render page.

Let's translate the above pseudo-code into Python. Modify bookmark_save_page in bookmarks/views.py so that it looks like the following (new code is highlighted):

from django.core.exceptions import ObjectDoesNotExist

@login_required
def bookmark_save_page(request):
    if request.method == 'POST':
        form = BookmarkSaveForm(request.POST)
        if form.is_valid():
            bookmark = _bookmark_save(request, form)
            return HttpResponseRedirect(
                '/user/%s/' % request.user.username
            )
    elif request.GET.has_key('url'):
        url = request.GET['url']
        title = ''
        tags = ''
        try:
            link = Link.objects.get(url=url)
            bookmark = Bookmark.objects.get(
                link=link,
                user=request.user
            )
            title = bookmark.title
            tags = ' '.join(
                tag.name for tag in bookmark.tag_set.all()
            )
        except ObjectDoesNotExist:
            pass
        form = BookmarkSaveForm({
            'url': url,
            'title': title,
            'tags': tags
        })
    else:
        form = BookmarkSaveForm()
    variables = RequestContext(request, {
        'form': form
    })
    return render_to_response('bookmark_save.html', variables)

This new section of the code first checks whether a GET variable called url exists. If this is the case, it loads the corresponding Link and Bookmark objects for this URL, and binds all the data to a bookmark saving form. You may wonder why we load the Link and Bookmark objects in a try-except construct that silently ignores exceptions. Indeed, it would be perfectly valid to raise an Http404 exception if no bookmark was found for the requested URL, but our code chooses to only populate the URL field in this situation, leaving the title and tags fields empty.
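To give a feel for where this is heading, here is a rough, hypothetical sketch of how the client-side part of in-place editing could be wired up with the load() method. None of this exists in our project yet: the a.edit selector stands for an edit link that would be rendered next to each bookmark and point at /save/ with the bookmark's URL in its query string, and the sketch assumes the view would be adjusted to return only the form for Ajax requests, just as we did for the search view:

$(document).ready(function () {
    // "a.edit" is a hypothetical edit link next to each bookmark on the user page.
    $("a.edit").click(function () {
        var href = $(this).attr("href");   // e.g. /save/?url=...
        // Replace the bookmark's markup with the populated edit form,
        // assuming the view returns just the form for Ajax requests.
        $(this).parent().load(href + "&ajax");
        return false;  // don't follow the link
    });
});

The actual implementation will differ in its details, but the pattern is the same one we used for live searching: intercept a user action, fetch a fragment from the server with Ajax, and insert it into the page.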