
How-To Tutorials - Web Development

1802 Articles

Create Your First React Element

Packt
17 Feb 2016
22 min read
As many of you know, creating a simple web application today involves writing HTML, CSS, and JavaScript code. We use three different technologies because we want to separate three different concerns:

- Content (HTML)
- Styling (CSS)
- Logic (JavaScript)

This separation works great for creating a web page because, traditionally, we had different people working on different parts of our web page: one person structured the content using HTML and styled it using CSS, and then another person implemented the dynamic behavior of various elements on that web page using JavaScript. It was a content-centric approach.

Today, we mostly don't think of a website as a collection of web pages anymore. Instead, we build web applications that might have only one web page, and that web page does not represent the layout for our content; it represents a container for our web application. Such a web application with a single web page is called (unsurprisingly) a Single Page Application (SPA). You might be wondering how we represent the rest of the content in a SPA. Surely we need to create an additional layout using HTML tags? Otherwise, how does a web browser know what to render? These are all valid questions. Let's take a look at how it works in this article.

Once you load your web page in a web browser, it creates a Document Object Model (DOM) of that page. The DOM represents your web page in a tree structure, and at this point it reflects the structure of the layout that you created with HTML tags alone. This happens regardless of whether you're building a traditional web page or a SPA. The difference between the two is what happens next. If you are building a traditional web page, you are now done creating your page's layout. If you are building a SPA, you need to start creating additional elements by manipulating the DOM with JavaScript. A web browser provides the JavaScript DOM API for exactly this purpose; you can learn more about it at https://developer.mozilla.org/en-US/docs/Web/API/Document_Object_Model.

However, manipulating (or mutating) the DOM with JavaScript has two issues:

- Your programming style will be imperative if you use the JavaScript DOM API directly, and this style leads to a code base that is harder to maintain.
- DOM mutations are slow, because they cannot be optimized for speed the way other JavaScript code can.

Luckily, React solves both of these problems for us.

Understanding the virtual DOM

Why do we need to manipulate the DOM in the first place? Because our web applications are not static. They have a state represented by the user interface (UI) that a web browser renders, and that state can change when an event occurs. There are two types of events that we're interested in:

- User events: when a user types, clicks, scrolls, resizes, and so on
- Server events: when an application receives data or an error from a server, among others

What happens while handling these events? Usually, we update the data that our application depends on, and that data represents the state of our data model. In turn, when the state of our data model changes, we might want to reflect this change by updating the state of our UI.
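To make the imperative pain point named above concrete, here is a minimal sketch (my own illustration, not from this article) of keeping the UI in sync with the data model by hand, using the plain DOM API; the element ID and the custom event name are hypothetical:

    var dataModel = { unreadCount: 0 };

    function onMessageReceived() {
      // 1. Update the state of the data model.
      dataModel.unreadCount += 1;

      // 2. Manually mutate the DOM so the UI state matches.
      var badge = document.getElementById('unread-badge');
      badge.textContent = String(dataModel.unreadCount);
    }

    // 'message-received' is a hypothetical custom event.
    document.addEventListener('message-received', onMessageReceived);

Every new piece of state needs another handler like this, and every handler has to know exactly which DOM nodes to touch.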
It looks like what we want is a way of syncing two different states: the UI state and the data model state. We want one to react to changes in the other, and vice versa. How can we achieve this?

One way to sync your application's UI state with an underlying data model's state is two-way data binding. There are different types of two-way data binding. One of them is key-value observing (KVO), which is used in Ember.js, Knockout, Backbone, and iOS, among others. Another one is dirty checking, which is used in Angular.

Instead of two-way data binding, React offers a different solution called the virtual DOM. The virtual DOM is a fast, in-memory representation of the real DOM, and it's an abstraction that allows us to treat JavaScript and the DOM as if they were reactive. Let's take a look at how it works:

1. Whenever the state of your data model changes, React rerenders your UI to a virtual DOM representation.
2. React then calculates the difference between the two virtual DOM representations: the previous representation, computed before the data changed, and the current one, computed after the data changed. This difference is what actually needs to change in the real DOM.
3. React updates only what needs to be updated in the real DOM.

The process of finding the difference between the two representations of the virtual DOM and rerendering only the updated patches in the real DOM is fast. Best of all, as a React developer you don't need to worry about what actually needs to be rerendered: React allows you to write your code as if you were rerendering the entire DOM every time your application's state changes.

If you would like to learn more about the virtual DOM, the rationale behind it, and how it compares to data binding, I strongly recommend this very informative talk by Pete Hunt from Facebook: https://www.youtube.com/watch?v=-DX3vJiqxm4.

Now that we've learned about the virtual DOM, let's mutate a real DOM by installing React and creating our first React element.

Installing React

To start using the React library, we first need to install it. I am going to show you two ways of doing this: the simplest one, and the one using the npm install command.

The simplest way is to add a <script> tag to our ~/snapterest/build/index.html file.

For the development version of React, add:

    <script src="https://cdnjs.cloudflare.com/ajax/libs/react/0.14.0-beta3/react.js"></script>

For the production version of React, add:

    <script src="https://cdnjs.cloudflare.com/ajax/libs/react/0.14.0-beta3/react.min.js"></script>

For our project, we'll be using the development version of React. At the time of writing, the latest version of the React library is 0.14.0-beta3. React gets updated over time, so make sure you use the latest version available to you, unless it introduces breaking changes that are incompatible with the code samples provided in this article. Visit https://github.com/fedosejev/react-essentials to learn about any compatibility issues between the code samples and the latest version of React.
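For reference, here is a sketch of what that script-tag route might look like assembled into a complete page. The page title and surrounding markup are my assumptions; the react-application container ID matches the one used by the code later in this article:

    <!DOCTYPE html>
    <html>
      <head>
        <title>Snapterest</title>
      </head>
      <body>
        <!-- Container element that React will render into -->
        <div id="react-application"></div>
        <script src="https://cdnjs.cloudflare.com/ajax/libs/react/0.14.0-beta3/react.js"></script>
      </body>
    </html>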
We all know that Browserify allows us to import all the dependency modules for our application using the require() function. We'll be using require() to import the React library as well, which means that, instead of adding a <script> tag to our index.html, we'll use the npm install command to install React. Navigate to the ~/snapterest/ directory and run this command:

    npm install --save react@0.14.0-beta3 react-dom@0.14.0-beta3

Then, open the ~/snapterest/source/app.js file in your text editor and import the React and ReactDOM libraries into the React and ReactDOM variables, respectively:

    var React = require('react');
    var ReactDOM = require('react-dom');

The react package contains methods concerned with the key idea behind React, that is, describing what you want to render in a declarative way. The react-dom package, on the other hand, offers methods responsible for rendering to the DOM. You can read more about why the developers at Facebook think it's a good idea to separate the React library into two packages at https://facebook.github.io/react/blog/2015/07/03/react-v0.14-beta-1.html#two-packages. Now we're ready to start using the React library in our project.

Next, let's create our first React element!

Creating React Elements with JavaScript

We'll start by familiarizing ourselves with fundamental React terminology. It will help us build a clear picture of what the React library is made of. This terminology will most likely be updated over time, so keep an eye on the official documentation at http://facebook.github.io/react/docs/glossary.html.

Just like the DOM is a tree of nodes, React's virtual DOM is a tree of React nodes. One of the core types in React is called ReactNode. It's a building block for the virtual DOM, and it can be any one of these core types:

- ReactElement: the primary type in React. It's a light, stateless, immutable, virtual representation of a DOM element.
- ReactText: a string or a number. It represents textual content, and it's a virtual representation of a text node in the DOM.

ReactElements and ReactTexts are ReactNodes, and an array of ReactNodes is called a ReactFragment. You will see examples of all of these in this article.

Let's start with an example of a ReactElement. Add the following code to your ~/snapterest/source/app.js file:

    var reactElement = React.createElement('h1');
    ReactDOM.render(reactElement, document.getElementById('react-application'));

Now your app.js file should look exactly like this:

    var React = require('react');
    var ReactDOM = require('react-dom');

    var reactElement = React.createElement('h1');
    ReactDOM.render(reactElement, document.getElementById('react-application'));

Navigate to the ~/snapterest/ directory and run Gulp's default task:

    gulp

You will see the following output:

    Starting 'default'...
    Finished 'default' after 1.73 s

Navigate to the ~/snapterest/build/ directory and open index.html in a web browser. You will see a blank web page. Open Developer Tools in your web browser and inspect the HTML markup of that blank page. You should see this line, among others:

    <h1 data-reactid=".0"></h1>

Well done! We've just created your first React element. Let's see exactly how we did it.

The entry point to the React library is the React object. This object has a method called createElement() that takes three parameters: type, props, and children:

    React.createElement(type, props, children);

Let's take a look at each parameter in more detail.

The type parameter

The type parameter can be either a string or a ReactClass:
- A string could be an HTML tag name such as 'div', 'p', or 'h1'. React supports all the common HTML tags and attributes; for a complete list of the HTML tags and attributes supported by React, refer to http://facebook.github.io/react/docs/tags-and-attributes.html.
- A ReactClass is created via the React.createClass() method.

The type parameter describes how an HTML tag or a ReactClass is going to be rendered. In our example, we're rendering the h1 HTML tag.

The props parameter

The props parameter is a JavaScript object passed from a parent element to a child element (and not the other way around) with properties that are considered immutable, that is, properties that should not be changed. While creating DOM elements with React, we can pass the props object with properties that represent HTML attributes such as class, style, and so on. For example, consider the following code:

    var React = require('react');
    var ReactDOM = require('react-dom');

    var reactElement = React.createElement('h1', { className: 'header' });
    ReactDOM.render(reactElement, document.getElementById('react-application'));

The preceding code will create an h1 HTML element with a class attribute set to header:

    <h1 class="header" data-reactid=".0"></h1>

Notice that we name our property className rather than class. The reason is that the class keyword is reserved in JavaScript. If you use class as a property name, React will ignore it and print a helpful warning message to the web browser's console:

    Warning: Unknown DOM property class. Did you mean className?

You might be wondering what the data-reactid=".0" attribute is doing in our h1 tag. We didn't pass it to our props object, so where did it come from? It is added and used by React to track the DOM nodes; it might be removed in a future version of React.

The children parameter

The children parameter describes what child elements this element should have, if any. A child element can be any type of ReactNode: a virtual DOM element represented by a ReactElement, a string or a number represented by a ReactText, or an array of other ReactNodes, which is also called a ReactFragment. Let's take a look at this example:

    var React = require('react');
    var ReactDOM = require('react-dom');

    var reactElement = React.createElement('h1', { className: 'header' }, 'This is React');
    ReactDOM.render(reactElement, document.getElementById('react-application'));

The preceding code will create an h1 HTML element with a class attribute and a text node, This is React:

    <h1 class="header" data-reactid=".0">This is React</h1>

The h1 tag is represented by a ReactElement, while the This is React string is represented by a ReactText.

Next, let's create a React element with a number of other React elements as its children:

    var React = require('react');
    var ReactDOM = require('react-dom');

    var h1 = React.createElement('h1', { className: 'header', key: 'header' }, 'This is React');
    var p = React.createElement('p', { className: 'content', key: 'content' }, "And that's how it works.");
    var reactFragment = [ h1, p ];
    var section = React.createElement('section', { className: 'container' }, reactFragment);

    ReactDOM.render(section, document.getElementById('react-application'));

We've created three React elements: h1, p, and section. h1 and p both have child text nodes, "This is React" and "And that's how it works.", respectively. The section element has a child that is an array of two ReactElements, h1 and p, called reactFragment. This is also an array of ReactNodes.
Each ReactElement in the reactFragment array must have a key property that helps React identify that ReactElement. As a result, we get the following HTML markup:

    <section class="container" data-reactid=".0">
      <h1 class="header" data-reactid=".0.$header">This is React</h1>
      <p class="content" data-reactid=".0.$content">And that's how it works.</p>
    </section>

Now we understand how to create React elements. What if we want to create a number of React elements of the same type? Does that mean we need to call React.createElement('type') over and over again for each element of the same type? We can, but we don't need to, because React provides us with a factory function called React.createFactory(). A factory function is a function that creates other functions, and this is exactly what React.createFactory(type) does: it creates a function that produces a ReactElement of a given type.

Consider the following example:

    var React = require('react');
    var ReactDOM = require('react-dom');

    var listItemElement1 = React.createElement('li', { className: 'item-1', key: 'item-1' }, 'Item 1');
    var listItemElement2 = React.createElement('li', { className: 'item-2', key: 'item-2' }, 'Item 2');
    var listItemElement3 = React.createElement('li', { className: 'item-3', key: 'item-3' }, 'Item 3');

    var reactFragment = [ listItemElement1, listItemElement2, listItemElement3 ];
    var listOfItems = React.createElement('ul', { className: 'list-of-items' }, reactFragment);

    ReactDOM.render(listOfItems, document.getElementById('react-application'));

The preceding example produces this HTML:

    <ul class="list-of-items" data-reactid=".0">
      <li class="item-1" data-reactid=".0.$item-1">Item 1</li>
      <li class="item-2" data-reactid=".0.$item-2">Item 2</li>
      <li class="item-3" data-reactid=".0.$item-3">Item 3</li>
    </ul>

We can simplify it by first creating a factory function:

    var React = require('react');
    var ReactDOM = require('react-dom');

    var createListItemElement = React.createFactory('li');

    var listItemElement1 = createListItemElement({ className: 'item-1', key: 'item-1' }, 'Item 1');
    var listItemElement2 = createListItemElement({ className: 'item-2', key: 'item-2' }, 'Item 2');
    var listItemElement3 = createListItemElement({ className: 'item-3', key: 'item-3' }, 'Item 3');

    var reactFragment = [ listItemElement1, listItemElement2, listItemElement3 ];
    var listOfItems = React.createElement('ul', { className: 'list-of-items' }, reactFragment);

    ReactDOM.render(listOfItems, document.getElementById('react-application'));

In the preceding example, we first call the React.createFactory() function and pass the li HTML tag name as its type parameter. React.createFactory() then returns a new function that we can use as a convenient shorthand to create elements of type li. We store a reference to this function in a variable called createListItemElement. Then we call this function three times, each time passing only the props and children parameters, which are unique for each element. Notice that React.createElement() and React.createFactory() both expect an HTML tag name string (such as li) or a ReactClass object as the type parameter.

React also provides us with a number of built-in factory functions for the common HTML tags. You can call them from the React.DOM object; for example, React.DOM.ul(), React.DOM.li(), React.DOM.div(), and so on.
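These factories pair naturally with dynamic data. As a hedged aside that goes beyond the article's own examples (the items array is hypothetical), the same list can be generated from an array of strings with Array.prototype.map:

    var React = require('react');
    var ReactDOM = require('react-dom');

    // Hypothetical data; in a real app this might arrive from a server.
    var items = ['Item 1', 'Item 2', 'Item 3'];

    // Map each string to an li element, deriving className and key
    // from the item's position in the array.
    var reactFragment = items.map(function (text, index) {
      var id = 'item-' + (index + 1);
      return React.DOM.li({ className: id, key: id }, text);
    });

    var listOfItems = React.DOM.ul({ className: 'list-of-items' }, reactFragment);

    ReactDOM.render(listOfItems, document.getElementById('react-application'));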
Returning to our earlier example, we can use these built-in factories to simplify it even further:

    var React = require('react');
    var ReactDOM = require('react-dom');

    var listItemElement1 = React.DOM.li({ className: 'item-1', key: 'item-1' }, 'Item 1');
    var listItemElement2 = React.DOM.li({ className: 'item-2', key: 'item-2' }, 'Item 2');
    var listItemElement3 = React.DOM.li({ className: 'item-3', key: 'item-3' }, 'Item 3');

    var reactFragment = [ listItemElement1, listItemElement2, listItemElement3 ];
    var listOfItems = React.DOM.ul({ className: 'list-of-items' }, reactFragment);

    ReactDOM.render(listOfItems, document.getElementById('react-application'));

Now we know how to create a tree of ReactNodes. However, there is one important line of code that we need to discuss before we can progress further:

    ReactDOM.render(listOfItems, document.getElementById('react-application'));

As you might have already guessed, it renders our ReactNode tree to the DOM. Let's take a closer look at how it works.

Rendering React Elements

The ReactDOM.render() method takes three parameters: a ReactElement, a regular DOMElement, and a callback function:

    ReactDOM.render(ReactElement, DOMElement, callback);

The ReactElement is the root element in the tree of ReactNodes that you've created. The regular DOMElement is a container DOM node for that tree. The callback is a function executed after the tree is rendered or updated. It's important to note that if this ReactElement was previously rendered to a parent DOM element, then ReactDOM.render() will perform an update on the already rendered DOM tree and only mutate the DOM as necessary to reflect the latest version of the ReactElement. This is why a virtual DOM requires fewer DOM mutations.

So far, we've assumed that we always create our virtual DOM in a web browser. This is understandable because, after all, React is a user interface library, and all user interfaces are rendered in a web browser. Can you think of a case when rendering a user interface on a client would be slow? Some of you might have already guessed that I am talking about the initial page load. The problem with the initial page load is the one I mentioned at the beginning of this article: we're not creating static web pages anymore. Instead, when a web browser loads our web application, it receives only the bare minimum HTML markup that is usually used as a container, or parent element, for our web application. Then our JavaScript code creates the rest of the DOM, but in order to do so it often needs to request extra data from the server. Getting this data takes time, and once it is received, our JavaScript code starts to mutate the DOM. We know that DOM mutations are slow. How can we solve this problem?

The solution is somewhat unexpected. Instead of mutating the DOM in a web browser, we mutate it on a server, just like we would with our static web pages. The web browser then receives HTML that fully represents the user interface of our web application at the time of the initial page load. Sounds simple, but we can't mutate the DOM on a server because it doesn't exist outside a web browser. Or can we? We have a virtual DOM that is just JavaScript, and as you know, using Node.js we can run JavaScript on a server. So technically, we can use the React library on a server, and we can create our ReactNode tree on a server. The question is, how can we render it to a string that we can send to a client?
React has a method called ReactDOMServer.renderToString() to do just this:

    var ReactDOMServer = require('react-dom/server');
    ReactDOMServer.renderToString(ReactElement);

It takes a ReactElement as a parameter and renders it to its initial HTML. Not only is this faster than mutating a DOM on a client, but it also improves the Search Engine Optimization (SEO) of your web application.

Speaking of generating static web pages, we can do that too with React:

    var ReactDOMServer = require('react-dom/server');
    ReactDOMServer.renderToStaticMarkup(ReactElement);

Similar to ReactDOMServer.renderToString(), this method also takes a ReactElement as a parameter and outputs an HTML string. However, it doesn't create the extra DOM attributes that React uses internally, so it produces shorter HTML strings that we can send over the wire quickly.

Now you know not only how to create a virtual DOM tree using React elements, but also how to render it on a client and on a server. Our next question is whether we can do it in a quicker and more visual manner.

Creating React Elements with JSX

When we build our virtual DOM by constantly calling the React.createElement() method, it becomes quite hard to visually translate these multiple function calls into a hierarchy of HTML tags. Don't forget that, even though we're working with a virtual DOM, we're still creating a structural layout for our content and user interface. Wouldn't it be great to be able to visualize that layout easily by simply looking at our React code?

JSX is an optional HTML-like syntax that allows us to create a virtual DOM tree without using the React.createElement() method. Let's take a look at the previous example that we created without JSX:

    var React = require('react');
    var ReactDOM = require('react-dom');

    var listItemElement1 = React.DOM.li({ className: 'item-1', key: 'item-1' }, 'Item 1');
    var listItemElement2 = React.DOM.li({ className: 'item-2', key: 'item-2' }, 'Item 2');
    var listItemElement3 = React.DOM.li({ className: 'item-3', key: 'item-3' }, 'Item 3');

    var reactFragment = [ listItemElement1, listItemElement2, listItemElement3 ];
    var listOfItems = React.DOM.ul({ className: 'list-of-items' }, reactFragment);

    ReactDOM.render(listOfItems, document.getElementById('react-application'));

And here is the same example translated to JSX:

    var React = require('react');
    var ReactDOM = require('react-dom');

    var listOfItems = <ul className="list-of-items">
                        <li className="item-1">Item 1</li>
                        <li className="item-2">Item 2</li>
                        <li className="item-3">Item 3</li>
                      </ul>;

    ReactDOM.render(listOfItems, document.getElementById('react-application'));

As you can see, JSX allows us to write HTML-like syntax in our JavaScript code. More importantly, we can now clearly see what our HTML layout will look like once it's rendered. JSX is a convenience tool, however, and it comes with a price in the form of an additional transformation step: the JSX syntax must be transformed into valid JavaScript before our "invalid" JavaScript code can be interpreted. We know that the babelify module transforms our JSX syntax into JavaScript.
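To see what that transformation produces, here is roughly what the JSX list above compiles down to. This is a hedged sketch of the output, not the transformer's exact formatting; note that React.createElement() accepts any number of children after the props argument:

    // The JSX <ul> with three <li> children becomes nested
    // React.createElement() calls.
    var listOfItems = React.createElement('ul', { className: 'list-of-items' },
      React.createElement('li', { className: 'item-1' }, 'Item 1'),
      React.createElement('li', { className: 'item-2' }, 'Item 2'),
      React.createElement('li', { className: 'item-3' }, 'Item 3')
    );

Writing the HTML-like form and letting the tooling produce calls like these is the whole appeal of JSX.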
This transformation happens every time we run our default task from gulpfile.js:

    gulp.task('default', function () {
      return browserify('./source/app.js')
            .transform(babelify)
            .bundle()
            .pipe(source('snapterest.js'))
            .pipe(gulp.dest('./build/'));
    });

As you can see, the .transform(babelify) function call transforms JSX into JavaScript before bundling it with the other JavaScript code. To test our transformation, run this command:

    gulp

Then navigate to the ~/snapterest/build/ directory and open index.html in a web browser. You will see a list of three items.

The React team has built an online JSX Compiler that you can use to test your understanding of how JSX works: http://facebook.github.io/react/jsx-compiler.html.

JSX might feel very unusual at first, but it can become a very intuitive and convenient tool to use. The best part is that you can choose whether to use it or not. I found that JSX saves me development time, so I chose to use it in the project that we're building. If you choose not to use it, then I believe you have learned enough in this article to be able to translate the JSX syntax into JavaScript with React.createElement() function calls.

If you have a question about what we have discussed in this article, you can refer to https://github.com/fedosejev/react-essentials and create a new issue.

Summary

We started this article by discussing the issues with building single-page applications and how they can be addressed. Then we learned what a virtual DOM is and how React allows us to build it. We also installed React and created our first React element using only JavaScript. Then we learned how to render React elements in a web browser and on a server. Finally, we looked at a simpler way of creating React elements with JSX.

Front Page Customization in Moodle

Packt
23 Oct 2009
11 min read
Look and Feel: An Overview

Moodle can be fully customized in terms of layout and branding. It has to be stressed that certain aspects of changing the look and feel require some design skills. While you as an administrator will be able to make most of the relevant adjustments, it might be necessary to get a professional designer involved, especially when it comes to styling.

The two relevant components for customization are the Moodle front page and Moodle themes, though this article will focus only on the Moodle front page. Before going into further detail, let's try to understand which part is responsible for which element of the look and feel of your site. Consider the front page of a Moodle site after you have logged in as an administrator: it is not obvious which parts are driven by the Moodle theme and which by the front page settings. The following table sheds some light on this:

| Element | Settings | Theme | Other |
| --- | --- | --- | --- |
| Logos | - | x | - |
| Logged-in information (location and font) | - | x | - |
| Language drop-down | - | - | x |
| Site Administration block (position) | x | - | - |
| Available Courses block (position) | x | - | - |
| Available Courses block (content) | - | - | x |
| Course categories and Calendar blocks (position) | x | - | - |
| Course categories and Calendar blocks (icons, fonts, colors) | - | x | - |
| Footer text | - | x | - |
| Footer logo | - | x | - |
| Copyright statement | - | x | - |

While this list is by no means complete, it hopefully gives you an idea that the look and feel of your Moodle site is driven by a number of different elements. In short, the settings (mostly front page settings as well as a few related parameters) dictate what content users will see before and after they log on. The theme is responsible for the design scheme or branding, that is, the header and footer as well as the colors, fonts, and icons used throughout the site. Now let's move on to the core part of this article.

Customizing Your Front Page

The appearance of Moodle's front page changes after a user has logged in. The content and layout of the page before and after login can be customized to represent the identity of your organization. Before login, the same site can look quite different: for example, a Login block might be shown on the left and the course categories displayed in the center, as opposed to the list of available courses.

Front Page Settings

To customize the front page, you either have to be logged in as a Moodle administrator or have front-page-related permissions in the front page context. From the Site Administration block, select Front Page | Front Page Settings. A screen will load showing all the available parameters and your current, changeable settings:

| Setting | Description |
| --- | --- |
| Full site name | The name that appears in the browser's title bar. It is usually the full name of your organization, or the name of the dedicated course or qualification the site is used for. |
| Short name for site | The name that appears as the first item in the breadcrumb trail. |
| Front Page Description | This description of the site will be displayed on the front page via the Site Description block. It can therefore only be displayed in the left or right column, never in the center of the front page. The description text is also picked up by the Google search engine spider, if allowed. |
| Front Page | Moodle can display up to four elements in the center column of the front page when the user is not logged in: a list of courses, a list of categories, news items, and a combo list (categories and courses). The order of the elements is the same as the one chosen in the pull-down menus. |
| Front page items when logged in | Same as Front Page, but used when the user is logged in. |
| Include a topic section | If ticked, an additional topic section (just like the topic blocks in the center column of a topics-format course) appears on top of the front page's center column. It can contain any mix of resources or activities available in Moodle, and is very often used to provide information about the site. |
| News items to show | The number of news items that are displayed. |
| Courses per page | A threshold setting that is used when displaying courses within categories. If there are more courses in a category than specified, page navigation will be displayed at the top of the page. Also, when a combo list is used, course names are only displayed if their number is less than the specified threshold; for all other categories, only the number of courses is shown after the category name. |
| Allow visible courses within hidden categories | By default, courses in hidden categories are not shown unless this setting is applied. |
| Default frontpage role | If logged-in users should be allowed to participate in front page activities, a default front page role should be set. The default is None. |

Arranging Front Page Blocks

To configure the left and right column areas with blocks, you have to turn on editing (using the Blocks editing on button). The menu includes blocks that are not available in courses, such as Course/Site description and Main menu. Blocks are added to the front page in exactly the same way as in courses. To change their position, use the standard arrows.

The Main Menu block allows you to add any installed Moodle resource or activity inside the block. For example, using labels and links to (internal or external) websites, you are able to create a menu-like structure on your front page.

If the Include a topic section parameter has been selected in the front page settings, you have to edit that section and add any installed Moodle activity or resource. This topic section is usually used by organizations to add a welcome message for visitors, often accompanied by a picture or other multimedia content.

Login From a Different Website

The purpose of the Login block is for users to authenticate themselves by entering their username and password. It is also possible to log into Moodle from a different website, perhaps your organization's homepage, effectively bypassing the Login block. To implement this, you have to add some HTML code to that page, as shown:

    <form class="loginform" name="login" method="post" action="http://www.mysite.com/login/index.php">
      <p>Username:
        <input size="10" name="username" />
      </p>
      <p>Password:
        <input size="10" name="password" type="password" />
      </p>
      <p>
        <input name="Submit" value="Login" type="submit" />
      </p>
    </form>

The form will pass the username and password to your Moodle system. You will have to replace www.mysite.com with your own URL. This address also has to be entered in the Alternate Login URL field at Users | Authentication | Manage authentication in the Site Administration block.

Other Front Page Items

The Moodle front page is treated as a standalone component in Moodle, and therefore has a top-level menu with a number of features that can all be accessed via the Front Page item in the Site Administration menu.
Having now looked in detail at the front page settings, let's turn to examining the other available options.

Front Page Roles

The front page has its own context in which roles can be assigned to users. This allows a separate user to develop and maintain the front page without having access to any other elements in Moodle. Since the front page is treated as a course, a Teacher role is usually sufficient for this.

Front Page Backup and Restore

The front page has its own backup and restore facilities to back up and restore all elements of the front page, including any content. The mechanism is the same as for course backups. Front page backups are stored in the backupdata folder in the Site Files area, and can be accessed by anybody who is aware of the URL. It is therefore best to move the created ZIP files to a more secure location.

Front Page Questions

Since the Moodle front page is treated in the same way as a course, it also has its own question bank, which is used to store any questions used in front-page quizzes. For more information on quizzes and the question bank, go to the MoodleDocs at http://docs.moodle.org/en/Quiz.

Site Files

The files areas of all courses are separate from each other; that is, files in Moodle belong to a course and can only be accessed by users who have been granted appropriate rights. The difference between Site files and the files area of any other course is that files in Site files can be accessed without logging in. Files placed in this location are meant for the front page, but can be accessed from anywhere in the system. In fact, if the location is known, files can even be accessed from outside Moodle. Make sure that in the Site files area you only place files that are acceptable to be seen by users who are not authenticated on your Moodle system.

Typical files to be placed in this area are any images you want to show on the front page (such as the logo of your organization) or any document that you want to be accessible (for example, the curriculum). However, it is also used for other files that are required to be accessible without access to a course, such as the Site Policy Agreement, which has to be accepted before starting Moodle. To access these publicly available Site files elsewhere (for example, as a resource within other courses), you have to copy the link location, which has the format http://mysite.com/file.php/1/file.doc.

Allow Personalization via My Moodle

By default, the same front page is displayed for all users on your Moodle system. To relax this restriction and allow users to personalize their own front page, you have to activate the My Moodle feature via the Force users to use My Moodle setting in Appearance | My Moodle in the Site Administration block. Once enabled, Moodle creates a /my directory for each user (except administrators) at their first login, which is displayed instead of the main Moodle front page. It is a very flexible feature, similar to a customizable dashboard, but it requires some more disk space on your server.

Once logged in, users will have the ability to edit their page by adding blocks to their My Moodle area. The center of the page will be populated by the main front page, for instance displaying a list of courses, which users cannot modify.

Making Blocks Sticky

There might be some blocks that you wish to "stick", that is, display on each My Moodle page, making them effectively compulsory blocks.
For example, you might want to pin the Calendar block in the top right corner of each user's My Moodle page. To do this, go to Modules | Blocks | Sticky blocks in the Site Administration block and select My Moodle from the pull-down menu. You can now add any item from the pull-down menu in the Blocks block. If a block is single-instance (that is, only one occurrence is allowed per page), it will not be available for the user to choose. If the user has already selected a particular block, a duplicate will appear on their site, which can be edited and deleted.

To prevent users from editing their My Moodle pages, change the moodle/my:manageblocks capability in the Authenticated user role from Allow to Not set.

The sticky block feature is also available for course pages. A course creator has the ability to add and position blocks inside a course unless they have been made sticky. Select the Course page item from the same menu to configure the sticky blocks for courses.

Summary

After providing a general overview of the look and feel elements in Moodle, this article covered front page customization. As mentioned earlier, the front page in Moodle is a course. This has advantages (you can do everything you can do in a course, and a little bit more), but it also has certain limitations (you can only do what you can do in a course, and might feel limited by this). However, some organizations are now using the Moodle front page as their homepage.

Creating and optimizing your first Retina image

Packt
03 Apr 2013
6 min read
Creating your first Retina image (Must know)

Apple's Retina Display is a brand name for their high pixel density screens. These screens have so many pixels within a small space that the human eye cannot see pixelation, making images and text appear smoother. To compete with Apple's display, other manufacturers are also releasing devices with high-density displays, and these types of displays are becoming standard in high-quality devices.

When you first start browsing the Web using a Retina Display, you'll notice that many images on your favorite sites are blurry. This is a result of low-resolution images being stretched to fill the screen, and the effect can make an otherwise beautiful website look unattractive. The key to making your website look exceptional on Retina Displays is the quality of the images that you are using. In this recipe, we will cover the basics of creating high-resolution images and suggestions on how to name your files. Then we'll use some simple HTML to display the image on a web page.

Getting ready

Creating a Retina-ready site doesn't require any special software beyond what you're already using to build web pages. You'll need a graphics editor (such as Photoshop or GIMP) and your preferred code/text editor. To test the code on a Retina Display, you'll also need a web server that you can reach from a browser, if you aren't coding directly on the Retina device.

The primary consideration in getting started is the quality of your images. A Retina image needs to be at least two times as large as it will be displayed on screen. If you have a photo you'd like to add to your page at 500 pixels wide, you'll want to start out with an image that is at least 1000 pixels wide. Trying to increase the size of a small image won't work, because the extra pixels are what make your image sharp. When designing your own graphics, such as icons and buttons, it's best to create them in a vector graphics program so they will be easy to resize without affecting the quality. Once you have your high-resolution artwork gathered, we're ready to start creating Retina images.

How to do it...

1. To get started, create a folder on your computer called retina. Inside that folder, create another folder called images. We'll use this as the directory for building our test website.
2. To create your first Retina image, open a high-resolution image in your graphics editor. You'll want to set the image size to double the size at which it will be displayed on the page. For example, if you want to display a 700 x 400 pixel image, you would start with an image that is 1400 x 800 pixels. Make sure you aren't increasing the size of the original image, or it won't work correctly.
3. Save this image as a .jpg file with the filename myImage@2x.jpg inside the /images/ folder within the /retina/ folder that we created. Then resize the image to 50 percent and save it as myImage.jpg in the same location.
4. Now we're ready to add our new images to a web page. Create an HTML document called retinaTest.html inside the /retina/ folder. Inside the basic HTML structure, add the two images we created and set the dimensions of both images to the size of the smaller image:
    <body>
      <img src="images/myImage@2x.jpg" width="700" height="400" />
      <img src="images/myImage.jpg" width="700" height="400" />
    </body>

If you are working on a Retina device, you should be able to open this page locally; if not, upload the folder to your web server and open the page on your device. You will notice how much sharper the first image is than the second. On a device without a Retina Display, both images will look the same. Congratulations! You've just built your first Retina-optimized web page.

How it works...

Retina Displays have a higher number of pixels per inch (PPI) than a normal display. Apple's Retina devices have double the PPI of older devices, which is why we created an image two times as large as the final image we wanted to display. When that large image is added to the code and then displayed at 50 percent of its size, it contains more pixel data than a normal display shows. A Retina device will see that extra pixel data and use it to fill the extra pixels that its screen contains. Without the added pixel data, the device stretches the data that is available to fill the screen, creating a blurry image. You'll notice that this effect is most obvious on large photos and computer graphics such as icons. Keep in mind that this technique will work with any image format, such as .jpg, .png, or .gif.

There's more...

As an alternative to setting the image width and height attributes in HTML as in the previous code, you can give the image a CSS class with width and height properties. This is only recommended if you will be using many images of the same size and you want to be able to change them easily.

    <style>
      .imgHeader {
        width: 700px;
        height: 400px;
      }
    </style>

    <img src="images/myImage@2x.jpg" class="imgHeader" />

Tips for creating images

We created both a Retina and a normal image. It's always a good idea to create both, because the Retina image will be quite a bit larger than the normal one. You'll then have the option of choosing which image to display, so users without a Retina device don't have to download the larger file. You'll also notice that we added @2x to the filename of the larger image. It's good practice to use consistent filenames to differentiate the high-resolution images; it will make our coding work much easier going forward.

Pixels per inch and dots per inch

When designers with a print background first look into creating graphics for Retina Displays, there can be some confusion regarding dots per inch (DPI). Keep in mind that computer displays are only concerned with the number of pixels in an image: an 800 x 600 pixel image at 300 DPI will display the same as an 800 x 600 pixel image at 72 DPI.
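As a hedged aside that goes slightly beyond the code in this recipe, a common companion technique is a resolution media query, which serves each file only to the displays that need it. This sketch uses a CSS background image instead of an <img> tag; the filenames follow the naming convention above:

    /* Serve the standard image by default. */
    .imgHeader {
      background-image: url('images/myImage.jpg');
      background-size: 700px 400px;
      width: 700px;
      height: 400px;
    }

    /* On high-density displays, swap in the @2x version. */
    @media (-webkit-min-device-pixel-ratio: 2), (min-resolution: 192dpi) {
      .imgHeader {
        background-image: url('images/myImage@2x.jpg');
      }
    }

Most browsers will download only the background image that actually applies, so each visitor fetches a single file.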

Connecting React to Redux & Firebase - Part 1

AJ Webb
09 Nov 2016
7 min read
Have you tried using React and now you want to increase your abilities? Are you ready to scale up your small React app? Have you wondered how to offload all of your state into a single place and keep your components more modular? Using Redux with React allows you to have a single source of truth for all of your app's state. Together, they mean you never have to set state on a component, and your components can be completely reusable. For some added sugar, you'll also learn how to leverage Firebase and use Redux actions to subscribe to data that is updated in real time. In this two-part post, you'll walk through creating a new chat app called Yak using React's new CLI, integrating Redux into your app, updating your state, and connecting it all to Firebase. Let's get started.

Setting up

This post is written with the assumption that you have Node.js and NPM already installed, and it assumes some knowledge of JavaScript and React.js. If you don't already have Node.js and NPM installed, head over to the Node.js install instructions. At the time of writing this post, the Node.js version is 6.6.0 and the NPM version is 3.10.8.

Once you have Node.js installed, open up your favorite terminal app and install the NPM package Create React App; the current version at the time of writing this post is 0.6.0, so make sure to specify that version:

    [~]$ npm install -g create-react-app@0.6.0

Now we'll want to set up our app and install our dependencies. First, navigate to where you want your project to live. I like to keep my projects at ~/code, so I'll navigate there. You may need to create the directory using mkdir if you don't have it, or you might want to store it elsewhere; it doesn't matter which you choose, just head to where you want to store your project.

    [~]$ cd ~/code

Once there, use Create React App to create the app:

    [~/code]$ create-react-app yak

This command is going to create a directory called yak and create all the necessary files you need in order to start a baseline React.js app. Once the command has completed, you should see some commands that you can run in your new app. Create React App has created the boilerplate files for you; take a moment to familiarize yourself with them:

- .gitignore: All the files and directories you want git to ignore.
- README.md: Documentation on what has been created. This is a good resource to lean on as you're learning React.js and using your app.
- node_modules: All the packages that are required to run and build the application up to this point.
- package.json: Instructs NPM how scripts run on your app, which packages your app depends on, and other meta information such as version and app name.
- public: All the static files that aren't used within the app, mainly index.html and favicon.ico.
- src: All the app files. The app is run by Webpack, which is set up to watch all the files inside this directory. This is where you will spend the majority of your time.

There are two files that cannot be moved while working on the app: public/index.html and src/index.js. The app relies on these two files in order to run. You can change them, but don't move them.

Now, to get started, navigate into the app folder and start the app:

    [~/code]$ cd yak
    [~/code/yak]$ npm start

The app should start and automatically open http://localhost:3000/ in your default browser. You should see a black banner with the React.js logo spinning and some instructions on how to get started. To stop the app, press ctrl-c in the terminal window that is running the app.
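As a point of reference, the generated package.json wires its scripts to the react-scripts package, roughly like the following sketch. Treat this as an approximation: the exact scripts and version numbers vary by Create React App release, and the dependency versions shown here are illustrative.

    {
      "dependencies": {
        "react": "^15.3.2",
        "react-dom": "^15.3.2"
      },
      "devDependencies": {
        "react-scripts": "0.6.0"
      },
      "scripts": {
        "start": "react-scripts start",
        "build": "react-scripts build",
        "eject": "react-scripts eject"
      }
    }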
Getting started with Redux

Next, install Redux and React-Redux:

    [~/code/yak]$ npm install --save redux react-redux

Redux will allow the app to have a single source of truth for state throughout the app. The idea is to keep all the React components ignorant of the state, and to pass that state to them via props. Containers will be used to select data from the state and pass the data to the ignorant components via props. React-Redux is a utility that assists in integrating React with Redux.

Redux's state is read-only, and you can only change the state by emitting an action that a reducer function uses to take the previous state and return a new state. Make sure, as you are writing your reducers, to never mutate the state (more on that later).

Now you will add Redux to your app, in src/index.js. Just below the import of ReactDOM, add:

    import { createStore, compose } from 'redux';
    import { Provider } from 'react-redux';

You now have the necessary functions and components to set up your Redux store and pass it to your React app. Go ahead and get your store initialized. After the last import statement and before ReactDOM.render() is where you will create your store:

    const store = createStore();

Yikes! If you run the app and open your inspector, you should see the following console error:

    Uncaught Error: Expected the reducer to be a function.

That error is thrown because the createStore function requires a reducer as its first parameter. The second parameter is an optional initial state, and the last parameter is for middleware that you may want for your store. Go ahead and create a reducer for your store, and ignore the other two parameters for now:

    [~/code/yak]$ touch src/reducer.js

Now open reducer.js and add the following code:

    const initialState = {
      messages: []
    };

    export function yakApp(state = initialState, action) {
      return state;
    }

Here you have created an initial state for the current reducer, and a function that either accepts the current state or uses ES6 default arguments to set an undefined state to the initial state. The function simply returns the state without making any changes for now. This is a perfectly valid reducer, and it will solve the console error and get the app running again. Now it's time to add it to the store. Back in src/index.js, import the reducer and pass the yakApp function to your store:

    import { yakApp } from './reducer';

    const store = createStore(yakApp);

Restart the app and you'll see that it is now working again!

One last thing to get things set up in your bootstrapping file src/index.js: you have your store and have imported Provider; now it's time to connect the two and allow the app to have access to the store. Update the ReactDOM.render method to look like the following:

    ReactDOM.render(
      <Provider store={store}>
        <App />
      </Provider>,
      document.getElementById('root')
    );

Now you can jump into App.js and connect your store. In App.js, add the following import statement:

    import { connect } from 'react-redux';

At the bottom of the file, just before the export statement, add:

    function mapStateToProps(state) {
      return {
        messages: state.messages
      };
    }

And change the export statement to be:

    export default connect(mapStateToProps)(App);

That's it! Your App component is now connected to the Redux store, and the messages array is being mapped to this.props. Go ahead and try it; add a console log to the render() method just before the return statement:

    console.log(this.props.messages);

The console should log an empty array. This is great!
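Putting the pieces together, src/index.js might now look like the following sketch. It is assembled from the snippets above; the unused compose import and Create React App's default stylesheet import are omitted for brevity:

    import React from 'react';
    import ReactDOM from 'react-dom';
    import { createStore } from 'redux';
    import { Provider } from 'react-redux';

    import App from './App';
    import { yakApp } from './reducer';

    // Create the single Redux store from our reducer.
    const store = createStore(yakApp);

    // Provider makes the store available to every connected component.
    ReactDOM.render(
      <Provider store={store}>
        <App />
      </Provider>,
      document.getElementById('root')
    );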
Conclusion

In this post, you've learned to create a new React app without having to worry about tooling. You've integrated Redux into the app and created a simple reducer. You've also connected the reducer to your React component. But how do you add data to the array as messages are sent? How do you persist the array of messages after you leave the app? How do you connect all of this to your UI? How do you allow your users to create glorious data for you? In the next post, you'll learn to do all those things. Stay tuned!

About the author

AJ Webb is a team lead and frontend engineer for @tannerlabs, and the co-creator of Payba.cc.

Connecting React to Redux and Firebase – Part 2

AJ Webb
10 Nov 2016
11 min read
This is the second part in a series on using Redux and Firebase with React. If you haven't read through the first part, you should go back and do so, since this post builds on it. If you have, then you're in the right place. In this final part of the two-part series, you will create actions in your app to update the store. Then you will create a Firebase app and set up async actions that subscribe to Firebase. Whenever the data on Firebase changes, the store will automatically update and your app will receive the latest state. After all of that is wired up, you'll create a quick user interface. Let's get started.

Creating Actions in Redux

For your actions, you are going to create a new file inside of src/:

    [~/code/yak]$ touch src/actions.js

Inside of actions.js, you will create your first action. An action in Redux is simply an object that contains a type and a payload. The type allows you to catch the action inside of your reducer and then use the payload to update state. The type can be a string literal or a constant; I recommend using constants, for the same reasons given by the Redux team. In this app you will set up some constants to follow this practice:

    [~/code/yak]$ touch src/constants.js

Create your first constant in src/constants.js:

    export const SEND_MESSAGE = 'CHATS:SEND_MESSAGE';

It is good practice to namespace your constants, as with the CHATS: prefix in the constant above; the namespace ties the constant to a specific portion of the application and helps document your constants. Now go ahead and use that constant to create an action. In src/actions.js, add:

    import { SEND_MESSAGE } from './constants';

    export function sendMessage(message) {
      return {
        type: SEND_MESSAGE,
        payload: message
      };
    }

You now have your first action! You just need to add a way to handle that action in your reducer. So open up src/reducer.js and do just that. First, import the constant:

    import { SEND_MESSAGE } from './constants';

Then, inside of the yakApp function, handle the different actions:

    export function yakApp(state = initialState, action) {
      switch (action.type) {
        case SEND_MESSAGE:
          return Object.assign({}, state, {
            messages: state.messages.concat([action.payload])
          });
        default:
          return state;
      }
    }

A couple of things are happening here. You'll notice that there is a default case that returns state; your reducer must return some sort of state. If you don't return state, your app will stop working. Redux does a good job of letting you know what's happening, but it is still good to know. Another thing to notice is that the SEND_MESSAGE case does not mutate the state. Instead, it creates a new state and returns it. Mutating the state can result in bad side effects that are hard to debug; do your very best to never mutate the state. You should also avoid mutating the arguments passed to a reducer, performing side effects, and calling any non-pure functions within your reducer. For the most part, your reducer is set up to take the new state and return it in conjunction with the old state.

Now that you have an action and a reducer that handles it, you are ready for your component to interact with them. In src/App.js, add an input and a button:

    <p className="App-intro">
      <input type="text" />{' '}
      <button>Send Message</button>
    </p>

Now that you have the input and button in, you're going to add two things. The first is a function that runs when the button is clicked. The second is a ref on the input, so that the function can easily find the value of your input:
    <p className="App-intro">
      <input type="text" ref={(i) => this.message = i} />{' '}
      <button onClick={this.handleSendMessage.bind(this)}>Send Message</button>
    </p>

Now that those are set, you need to add the handleSendMessage function, but you also need to be able to dispatch an action. So you'll want to map your actions to props, similar to how you mapped state to props in the previous guide. At the end of the file, under the mapStateToProps function, add the following function:

    function mapDispatchToProps(dispatch) {
      return {
        sendMessage: (msg) => dispatch(sendMessage(msg))
      };
    }

Then you'll need to add the mapDispatchToProps function to the export statement:

    export default connect(mapStateToProps, mapDispatchToProps)(App);

And you'll need to import the sendMessage action into src/App.js:

    import { sendMessage } from './actions';

Finally, create the handleSendMessage method inside of your App class, just above the render method:

    handleSendMessage() {
      const { sendMessage } = this.props;
      const { value } = this.message;
      if (!value) {
        return false;
      }
      sendMessage(value);
      this.message.value = '';
    }

If you still have a console.log statement inside of your render method from the last guide, you should see your message being added to the array each time you click the button. The final piece of UI to create is a list of all the messages you've received. Add the following to the render method in src/App.js, just below the <p/> that contains your input:

    {messages.map((message, i) => (
      <p key={i} style={{textAlign: 'left', padding: '0 10px'}}>{message}</p>
    ))}

Now you have a crude chat interface; you can improve it later.

Set up Firebase

If you've never used Firebase before, you'll first need a Google account. If you don't have a Google account, sign up for one; then you'll be able to create a new Firebase project. Head over to Firebase and create a new project. If you have trouble naming it, try YakApp_[YourName].

After you have created your Firebase project, you'll be taken to the project. There is a button that says "Add Firebase to your web app"; click on it and you'll get the configuration information that your app needs in order to work with Firebase. A dialog will open showing a snippet with your project's config values, including an apiKey and a databaseURL. You will only be using the database for this project, so you'll only need a portion of the config. Keep the config handy while you prepare the app to use Firebase.

First, add the Firebase package to your app:

    [~/code/yak]$ npm install --save firebase

Open src/actions.js and import Firebase:

    import firebase from 'firebase';

You will only be using Firebase in your actions; that is the reason you are importing it here. Once imported, you'll need to initialize Firebase. After the import section, add your Firebase config (the values from the dialog above), initialize Firebase, and create a reference to messages:

    const firebaseConfig = {
      apiKey: '[YOUR API KEY]',
      databaseURL: '[YOUR DATABASE URL]'
    };

    firebase.initializeApp(firebaseConfig);

    const messages = firebase.database().ref('messages');

You'll notice that you only need the apiKey and databaseURL for now. Eventually you might want to add auth, storage, and other facets of Firebase, but for now you only need the database. Now that Firebase is set up and initialized, you can use it to store the chat history.

One last thing before you leave the Firebase console: Firebase automatically sets the database rules to require users to be logged in.
That is a great idea, but authentication is outside of the scope of this post, so you'll need to turn that off in the rules. In the Firebase console, click on the "Database" navigation item on the left side of the page; there should now be a tab bar with a "Rules" option. Click on "Rules" and replace what is in the textbox with: { "rules": { ".read": true, ".write": true } } Create subscriptions In order to subscribe to Firebase, you will need a way to send async actions, so you'll need another package to help with that. Go ahead and install redux-thunk. [~/code/yak]$ npm install --save redux-thunk After it's finished installing, you'll need to add it as middleware to your store. That means it's time to head back to src/index.js and add the extra parameters to the createStore function. First import redux-thunk into src/index.js and import another method from redux itself: import { createStore, applyMiddleware } from 'redux'; import thunk from 'redux-thunk'; Now update the createStore call to be: const store = createStore(yakApp, undefined, applyMiddleware(thunk)); You are now ready to create async actions. In order to create more actions, you're going to need more constants. Head over to src/constants.js and add the following constant. export const RECEIVE_MESSAGE = 'CHATS:RECEIVE_MESSAGE'; This is the only constant you will need for now; after this guide, you should create edit and delete actions, and at that point you'll need more constants for them. For now, only concern yourself with adding and subscribing. You'll need to refactor your actions and reducer to handle these new features. In src/actions.js, refactor the sendMessage action to be an async action. The way redux-thunk works is that it intercepts every action that is dispatched and inspects it. If the action creator returns a function instead of an object, redux-thunk stops the action from reaching the reducer and runs that function. If the action creator returns a plain object, redux-thunk ignores it and allows it to continue on to the reducer (you'll find a simplified sketch of this idea just before the subscription action below). Change sendMessage to look like the following: export function sendMessage(message) { return function() { messages.push(message); }; } Now when you type a message in and hit the "Send Message" button, the message will get stored in Firebase. Try it out! UH OH! There's a problem! You are adding messages to Firebase but they aren't showing up in your app anymore! That's OK. You can fix that! You'll need to create a few more actions though, and refactor your reducer. First off, you can delete one of your constants. You no longer need the constant from earlier: export const SEND_MESSAGE = 'CHATS:SEND_MESSAGE'; Go ahead and remove it; that leaves you with only one constant. Change your constant import in both src/actions.js and src/reducers.js: import { RECEIVE_MESSAGE } from './constants'; Now in actions, add the following action: function receiveMessage(message) { return { type: RECEIVE_MESSAGE, payload: message }; } That should look familiar; it's almost identical to the original sendMessage action. You'll also need to rename the action type that your reducer is looking for. So now your reducer function should look like this: export function yakApp(state = initialState, action) { switch(action.type) { case RECEIVE_MESSAGE: return Object.assign({}, state, { messages: state.messages.concat([action.payload]) }); default: return state; } } Before the list starts to show up again, you'll need to create a subscription to Firebase. Sounds more complicated than it really is. 
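Before wiring up the subscription, here is the promised sketch of the thunk idea. This is a simplified illustration, not the actual redux-thunk source, but it shows why returning a function from an action creator lets you do async work and dispatch real actions later:

// A simplified sketch of what redux-thunk-style middleware does.
// If the dispatched action is a function, run it and hand it dispatch
// (and getState) so it can dispatch plain actions later; otherwise,
// pass the plain object action along to the reducer as usual.
const thunkSketch = store => next => action => {
  if (typeof action === 'function') {
    return action(store.dispatch, store.getState);
  }
  return next(action);
};

This is also why the refactored sendMessage above never reaches the reducer: it returns a function, so redux-thunk runs it (pushing the message to Firebase), and the reducer only gets involved again when a plain object action, such as receiveMessage, is dispatched.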
Add the following action to src/actions.js: export function subscribeToMessages() { return function(dispatch) { messages.on('child_added', data => dispatch(receiveMessage(data.val()))); } } Now, in src/App.js you'll need to dispatch that action and set up the subscription. First change your import statement from ./actions to include subscribeToMessages: import { sendMessage, subscribeToMessages } from './actions'; Now, in mapDispatchToProps you need to map subscribeToMessages: function mapDispatchToProps(dispatch) { return { sendMessage: (msg) => dispatch(sendMessage(msg)), subscribeToMessages: () => dispatch(subscribeToMessages()), }; } Finally, inside of the App class, add the componentWillMount life cycle method just above the handleSendMessage method, and call subscribeToMessages: componentWillMount() { this.props.subscribeToMessages(); } Once you save the file, you should see that your app is subscribed to the Firebase database, which is automatically updating your Redux state, and your UI is displaying all the messages stored in Firebase. Have some fun and open another browser window! Conclusion You have now created a React app from scratch, added Redux to it, refactored it to connect to Firebase and, via subscriptions, updated your entire app state! What will you do now? Of course the app could use a better UI; you could redesign it! Maybe make it a little more usable. Try learning how to create users and show who yakked about what! The sky is the limit, now that you know how to put all the pieces together! Feel free to share what you've done with the app in the comments! About the author AJ Webb is team lead and frontend engineer for @tannerlabs and a co-creator of Payba.cc.
Creating Blog Content in WordPress

Packt
25 Nov 2013
18 min read
(For more resources related to this topic, see here.) Posting on your blog The central activity you'll be doing with your blog is adding posts. A post is like an article in a magazine; it's got a title, content, and an author (in this case, you, though WordPress allows multiple authors to contribute to a blog). If a blog is like an online diary, every post is an entry in that diary. A blog post also has a lot of other information attached to it, such as a date, excerpt, tags, and categories. In this section, you will learn how to create a new post and what kind of information to attach to it. Adding a simple post Let's review the process of adding a simple post to your blog. Whenever you want to add content or carry out a maintenance process on your WordPress website, you have to start by logging in to the WP Admin (WordPress Administration panel) of your site. To get to the admin panel, just point your web browser to http://yoursite.com/wp-admin. Remember that if you have installed WordPress in a subfolder (for example, blog), your URL has to include the subfolder (that is, http://yoursite.com/blog/wp-admin). When you first log in to the WP Admin, you'll be at the Dashboard. The Dashboard has a lot of information on it, so don't worry about that right now. The quickest way to get to the Add New Post page at any time is to click on + New and then the Post link in the top bar at the top of the page. Once you're on the Add New Post page, all you have to do to add a new post to your site quickly is the following: Type a title into the text field under Add New Post (for example, Making Lasagne). Type the text of your post in the content box. Note that the default view is Visual, but you actually have a choice of the Text view as well. Click on the Publish button, which is at the far right. Note that you can choose to save a draft or preview your post as well. Once you click on the Publish button, you have to wait while WordPress performs its magic. You'll see yourself still on the Edit Post screen, but now a message will have appeared telling you that your post was published, along with a View post link. If you view the front page of your site, you'll see that your new post has been added at the top (newest posts are always at the top). Common post options Now that we've reviewed the basics of adding a post, let's investigate some of the other options on the Add New Post and Edit Post pages. In this section we'll look at the most commonly used options, and in the next section we'll look at the more advanced options. Categories and tags Categories and tags are two types of information that you can add to a blog post. We use them to organize the information in your blog by topic and content (rather than just by, say, date), and to help visitors find what they are looking for on your blog. Categories are primarily used for structural organizing. They can be hierarchical, meaning a category can be a parent of another category. A relatively busy blog will probably have at least 10 categories, but probably not more than 15 or 20. Each post in such a blog is likely to have from one up to maybe four categories assigned to it. For example, a blog about food and cooking might have these categories: Cooking Adventures, In The Media, Ingredients, Opinion, Recipes Found, Recipes Invented, and Restaurants. Of course, the numbers mentioned are just suggestions; you can create and assign as many categories as you like. The way you structure your categories is entirely up to you as well. 
There are no true rules regarding this in the WordPress world, just guidelines like these. Tags are primarily used as shorthand for describing the topics covered in a particular blog post. A relatively busy blog will have anywhere from 15 to even 100 tags in use. Each post in this blog is likely to have 3 to 10 tags assigned to it. For example, a post on the food blog about a recipe for butternut squash soup may have these tags: soup, vegetarian, autumn, hot, and easy. Again, you can create and assign as many tags as you like. Let's add a new post to the blog. After you give it a title and content, let's add tags and categories. While adding tags, just type your list of tags into the Tags box on the right, separated by commas, and then click on the Add button. The tags you just typed in will appear below the text field with little x buttons next to them. You can click on an x button to delete a tag. Once you've used some tags in your blog, you'll be able to click on the Choose from the most used tags link in this box so that you can easily re-use tags. Categories work a bit differently than tags. Once you get your blog going, you'll usually just check the boxes next to existing categories in the Categories box. In this case, as we don't have any existing categories, we'll have to add one or two. In the Categories box on the right, click on the + Add New Category link. Type your category into the text field, and click on the Add New Category button. Your new category will show up in the list, already checked. If in the future you want to add a category that needs a parent category, select — Parent Category — from the pull-down menu before clicking on the Add New Category button. If you want to manage more details about your categories (moving them around, renaming them, assigning parent categories, or adding descriptive text), you can do so on the Categories page. Click on the Publish button, and you're done (you can instead choose to schedule a post; we'll explore that in detail in a few pages). When you look at the front page of your site, you'll see your new post on the top, your new category in the sidebar, and the tags and category (that you chose for your post) listed under the post itself. Images in your posts Almost every good blog post needs an image! An image will give the reader an instant idea of what the post is about, and the image will draw people's attention as well. WordPress makes it easy to add an image to your post, control default image sizes, make minor edits to that image, and designate a featured image for your post. Adding an image to a post Luckily, WordPress makes adding images to your content very easy. Let's add an image to the post we just created. You can click on Edit underneath your post on the front page of your site to get there quickly. Alternatively, go back to the WP Admin, open Posts in the main menu, and then click on the post's title. To add an image to a post, first you'll need to have that image on your computer, or know the exact URL pointing to the image if it's already online. Before you get ready to upload an image, make sure that your image is optimized for the Web. Huge files will upload slowly and will also slow down visitors viewing your site. Just to give you a good example here, I'm using a photo of my own so I don't have to worry about any copyright issues (always make sure to use only images that you have the right to use; copyright infringement online is a serious problem, to say the least). 
I know it's on the desktop of my computer. Once you have a picture on your computer and know where it is, carry out the following steps to add the photo to your blog post: Click on the Add Media button, which is right above the content box and below the title box. The box that appears allows you to do a number of different things regarding the media you want to include in your post. The most user-friendly feature here, however, is the drag-and-drop support. Just drag the image from your desktop and drop it into the center area of the page labeled as Drop files anywhere to upload. Immediately after dropping the image, the uploader bar will show the progress of the operation, and when it's done, you'll be able to do some final tuning up. The fields that are important right now are Title, Alt Text, Alignment, Link To, and Size. Title is a description for the image. Alt Text is a phrase that's going to appear instead of the image in case the file goes missing or any other problems present themselves. Alignment tells the image whether to have text wrap around it and whether it should be right, left, or center aligned. Link To instructs WordPress whether or not to link the image to anything (a common solution is to select None). Size is the size of the image. Once you have all of the above filled out, click on Insert into post. This box will disappear, and your image will show up in the post—right where your cursor was prior to clicking on the Add Media button—on the edit page itself (in the visual editor, that is; if you're using the text editor, the HTML code of the image will be displayed instead). Now, click on the Update button, and go and look at the front page of your site again. There's your image! Controlling default image sizes You may be wondering about those image sizes. What if you want bigger or smaller thumbnails? Whenever you upload an image, WordPress creates three versions of that image for you. You can set the pixel dimensions of those three versions by opening Settings in the main menu, and then clicking on Media. This takes you to the Media Settings page, where you can specify the dimensions of the uploaded images for the thumbnail, medium, and large sizes. If you change the dimensions on this page and click on the Save Changes button, only images you upload in the future will be affected. Images you've already uploaded to the site will have had their thumbnail, medium, and large versions created already using the old settings. It's a good idea to decide what you want your three media sizes to be early on in your site, so you can set them and have them applied to all images right from the start. Another thing about uploading images is the whole craze with HiDPI displays, also called Retina displays. Currently, WordPress is in a kind of transitional phase with images and being in tune with modern display technology; the Retina Ready functionality was introduced quite recently, in WordPress 3.5. In short, if you want to make your images Retina-compatible (meaning that they look good on iPads and other devices with HiDPI screens), you should upload the images at twice the dimensions you plan to display them in. For example, if you want your image to be presented as 800 pixels wide and 600 pixels high, upload it as 1,600 pixels wide and 1,200 pixels high. WordPress will manage to display it properly anyway, and whoever visits your site from a modern device will see a high-definition version of the image. 
In future versions, WordPress will surely provide a more managed way of handling Retina-compatible images. Editing an uploaded image As of WordPress 2.9, you can make minor edits to images you've uploaded. In fact, every image that has been previously uploaded to WordPress can be edited. In order to do this, go to the Media Library by clicking on the Media button in the main sidebar. What you'll see is a standard WordPress listing (similar to the one we saw while working with posts) presenting all media files and allowing you to edit each one. When you click on the Edit link and then the Edit Image button on the subsequent screen, you'll enter the Edit Media section. Here, you can perform a number of operations to make your image just perfect. As it turns out, WordPress does a good enough job with simple image tuning that you don't really need expensive software such as Photoshop for this. Among the possibilities you'll find cropping, rotating, and flipping vertically and horizontally. For example, you can use your mouse to draw a selection box over the part of the image you want to keep. On the right, in the box marked Image Crop, you'll see the pixel dimensions of your selection. Click on the Crop icon (top left), then the Thumbnail radio button (on the right), and then Save (just below your photo). You now have a new thumbnail! Of course, you can adjust any other version of your image just by making a different selection prior to hitting the Save button. Play around a little and you can become familiar with the details. Designating a featured image As of WordPress 2.9, you can designate a single image that represents your post. This is referred to as the featured image. Some themes will make use of this, and some will not. The default theme, the one we've been using, is named Twenty Thirteen, and it uses the featured image right above the post on the front page. Depending on the theme you're using, its behavior with featured images can vary, but in general, every modern theme supports them in one way or the other. In order to set a featured image, go to the Edit Post screen. In the sidebar you'll see a box labeled Featured Image. Just click on the Set featured image link. After doing so, you'll see a pop-up window, very similar to the one we used while uploading images. Here, you can either upload a completely new image or select an existing image by clicking on it. All you have to do now is click on the Set featured image button in the bottom right corner. After completing the operation, you can finally see what your new image looks like on the front page. Also, keep in mind that WordPress uses featured images in multiple places, not only on the front page. As mentioned above, much of this behavior depends on your current theme. Using the visual editor versus the text editor WordPress comes with a visual editor, otherwise known as a WYSIWYG editor (pronounced wissy-wig, which stands for What You See Is What You Get). This is the default editor for typing and editing your posts. If you're comfortable with HTML, you may prefer to write and edit your posts using the text editor—particularly useful if you want to add special content or styling. To switch from the rich text editor to the text editor, click on the Text tab next to the Visual tab at the top of the content box. You'll see your post in all its raw HTML glory, and you'll get a new set of buttons that lets you quickly bold and italicize text, as well as add link code, image code, and so on. 
You can make changes and swap back and forth between the tabs to see the result. Even though the text editor allows you to use some HTML elements, it does not offer fully fledged HTML support. For instance, using the <p> tags is not necessary in the text editor, as they will be stripped by default. In order to create a new paragraph in the text editor, all you have to do is press the Enter key twice. That being said, the text editor is currently the only way to use HTML tables in WordPress (within posts and pages). You can easily place your table content inside the <table><tr><td> tags and WordPress won't alter it in any way, effectively allowing you to create the exact table you want. Another thing the text editor is most commonly used for is introducing custom HTML parameters in the <img /> and <a> tags, and also custom CSS classes in other popular tags. Some content creators actually prefer working with the text editor rather than the visual editor because it gives them much more control and more certainty regarding the way their content is going to be presented on the frontend. Lead and body One of the many interesting publishing features WordPress has to offer is the concept of the lead and the body of the post. This may sound like a strange thing, but it's actually quite simple. When you're publishing a new post, you don't necessarily want to display its whole contents right away on the front page. A much more user-friendly approach is to display only the lead, and then display the complete post under its individual URL. Achieving this in WordPress is very simple. All you have to do is use the Insert More Tag button available in the visual editor (or the more button in the text editor). Simply place your cursor exactly where you want to break your post (the text before the cursor will become the lead) and then click on the Insert More Tag button. An alternative way of using this tag is to switch to the text editor and input the tag manually, which is <!--more-->. Both approaches produce the same result. Clicking on the main Update button will save the changes. On the front page, most WordPress themes display such posts by presenting the lead along with a Continue reading link, and then the whole post (both the lead and the rest of the post) is displayed under the post's individual URL. Drafts, pending articles, timestamps, and managing posts There are four additional, simple but common, items I'd like to cover in this section: drafts, pending articles, timestamps, and managing posts. Drafts WordPress gives you the option to save a draft of your post so that you don't have to publish it right away but can still save your work. If you've started writing a post and want to save a draft, just click on the Save Draft button at the right (in the Publish box), instead of the Publish button. Even if you don't click on the Save Draft button, WordPress will attempt to save a draft of your post for you, about once a minute. You'll see this in the area just below the content box; the text will say Saving Draft... and then show the time of the last draft saved. At this point, after a manual save or an autosave, you can leave the Edit Post page and do other things. You'll be able to access all of your draft posts from the Dashboard or from the Edit Posts page. In essence, drafts are meant to hold your "work in progress": all the articles that haven't been finished yet, or haven't even been started yet, and everything in between. 
Pending articles Pending articles is functionality that's going to be a lot more helpful to people working with multi-author blogs, rather than single-author blogs. The thing is that in a bigger publishing structure, there are individuals responsible for different areas of the publishing process. WordPress, being a quality tool, supports such a structure by providing a way to save articles as Pending Review. In an editor-author relationship, if an editor sees a post marked as Pending Review, they know that they should have a look at it and prepare it for the final publication. That's it for the theory; now let's see how to do it. While creating a new post, click on the Edit link right next to the Status: Draft label. Right after doing so, you'll be presented with a new drop-down menu from which you can select Pending Review and then click on the OK button. Now just click on the Save as Pending button that will appear in place of the old Save Draft button, and you have a shiny new article that's pending review. Timestamps WordPress will also let you alter the timestamp of your post. This is useful if you are writing a post today that you wish you'd published yesterday, or if you're writing a post in advance and don't want it to show up until the right day. By default, the timestamp will be set to the moment you publish your post. To change it, just find the Publish box and click on the Edit link (next to the calendar icon and Publish immediately), and fields will show up with the current date and time for you to change. Change the details, click on the OK button, and then click on Publish to publish your post (or save a draft). Managing posts If you want to see a list of your posts so that you can easily skim and manage them, just go to the WP Admin and navigate to Posts in the main menu. As with every management page in the WP Admin, there are many things you can do on this page.
Tuning Solr JVM and Container

Packt
22 Jul 2014
6 min read
(For more resources related to this topic, see here.) Several JVM implementations are available, and some of them are commercially optimized for production usage; you may find comparison studies at http://dior.ics.muni.cz/~makub/java/speed.html. Some of the JVM implementations provide server versions, which would be more appropriate than the normal ones. Since Solr runs in a JVM, all the standard optimizations for applications are applicable to it. It starts with choosing the right heap size for your JVM. The heap size depends upon the following aspects: Use of facets and sorting options Size of the Solr index Update frequencies on Solr Solr cache The heap size for the JVM can be controlled by the following parameters: -Xms: This is the initial (minimum) heap size allocated when the JVM, and hence the container, starts. -Xmx: This is the maximum heap size up to which the JVM or J2EE container can consume. Deciding heap size The heap in the JVM contributes as a major factor while optimizing the performance of any system. The JVM uses the heap to store its objects, as well as its own content. Poor allocation of the JVM heap results in a Java heap space OutOfMemoryError being thrown at runtime, crashing the application. When the heap is allocated with less memory, the application takes a longer time to initialize, and the execution speed of the Java process slows down at runtime. Similarly, a higher heap size may underutilize expensive memory, which otherwise could have been used by other applications. The JVM starts with the initial heap size, and as the demand grows, it tries to resize the heap to accommodate new space requirements. If the demand for memory crosses the maximum limit, the JVM throws an Out of Memory exception. Objects that expire or are unused consume memory in the JVM unnecessarily; this memory can be taken back by releasing these objects through a process called garbage collection (GC). Although it's tricky to find out whether you should increase or reduce the heap size, there are simple ways that can help you out. In a memory graph, typically, when you start the Solr server and run your first query, the memory usage increases, and based on subsequent queries and memory size, the memory graph may increase or remain constant. When garbage collection is run automatically by the JVM container, it sharply brings the usage down. If it's difficult to trace GC execution from the memory graph, you can run Solr with the following additional parameters: -Xloggc:<some file> -verbose:gc -XX:+PrintGCTimeStamps -XX:+PrintGCDetails If you are monitoring the heap usage continuously, you will find a graph that increases and decreases (a sawtooth); the increases are due to queries consistently demanding more memory for your Solr cache, and the decreases are due to GC execution. In a healthy running environment, the average heap size should not grow over time, and the number of GC runs should be less than the number of queries executed on Solr. If that's not the case, you will need more memory. Features such as Solr faceting and sorting require more memory on top of traditional search. If memory is unavailable, the operating system needs to swap pages out to the storage media, thereby increasing the response time; thus, users experience huge latency while searching on large indexes. Many operating systems allow users to control the swapping of programs. How can we optimize the JVM? Whenever a facet query is run in Solr, memory is used to store each unique element in the index for each field. 
So, for example, a search over a small set of facet values (a year from 1980 to 2014) will consume less memory than a search over a larger set of facet values, such as people's names (which can vary widely). To reduce the memory usage, you may set the term index divisor to 2 (the default is 4) by setting the following in solrconfig.xml: <indexReaderFactory name="IndexReaderFactory" class="solr.StandardIndexReaderFactory"> <int name="setTermIndexDivisor">2</int> </indexReaderFactory> From Solr 4.x onwards, the ability to set the term index divisor and the min/max block sizes is no longer available. Setting the divisor to 2 will reduce the memory used for storing all the terms to half; however, it will double the seek time for terms and will have a small impact on your search runtime. One of the causes of a large heap is the size of the index, so one solution is to introduce SolrCloud and distribute the large index across multiple shards. This will not reduce your memory requirement, but will spread it across the cluster. You can look at some of the optimized GC parameters described at the http://wiki.apache.org/solr/ShawnHeisey#GC_Tuning page. Similarly, Oracle provides a GC tuning guide for advanced development stages, and it can be seen at http://www.oracle.com/technetwork/java/javase/gc-tuning-6-140523.html. Additionally, you can look at the Solr performance problems at http://wiki.apache.org/solr/SolrPerformanceProblems. Optimizing the JVM container JVM containers allow users to have their requests served in threads. This in turn enables the JVM to support concurrent sessions created for the different users connecting at the same time. The concurrency can, however, be controlled to reduce the load on the search server. If you are using Apache Tomcat, you can change the number of concurrent connections by modifying the Connector entries (for example, the maxThreads attribute) in server.xml. Similarly, in Jetty, you can control the number of connections held by modifying the thread pool configuration in jetty.xml. For other containers, the equivalent configuration files can be changed appropriately. Many containers provide a cache on top of the application to avoid server hits. This cache can be utilized for static pages such as the search page. Containers such as WebLogic provide a development versus production mode. Typically, a development mode runs with 15 threads and a limited JDBC pool size by default, whereas, for a production mode, this can be increased. For tuning containers, besides standard optimization, the following container-specific performance-tuning guides should be followed: Jetty: http://wiki.eclipse.org/Jetty/Howto/High_Load Tomcat: http://www.mulesoft.com/tcat/tomcat-performance and http://javamaster.wordpress.com/2013/03/13/apache-tomcat-tuning-guide/ JBoss: https://access.redhat.com/site/documentation/en-US/JBoss_Enterprise_Application_Platform/5/pdf/Performance_Tuning_Guide/JBoss_Enterprise_Application_Platform-5-Performance_Tuning_Guide-en-US.pdf WebLogic: http://docs.oracle.com/cd/E13222_01/wls/docs92/perform/WLSTuning.html WebSphere: http://www.ibm.com/developerworks/websphere/techjournal/0909_blythe/0909_blythe.html Apache Solr works better with the default container it ships with, Jetty, since it offers a small footprint compared to other containers such as JBoss and Tomcat, for which the memory required is a little higher. Summary In this article, we have learned how Apache Solr runs on the JVM inside a J2EE container, how to tune the JVM heap and garbage collection, and how to tune the containers themselves. 
Resources for Article: Further resources on this subject: Apache Solr: Spellchecker, Statistics, and Grouping Mechanism [Article] Getting Started with Apache Solr [Article] Apache Solr PHP Integration [Article]
Data bindings with Knockout.js

Vijin Boricha
23 Apr 2018
7 min read
Today, we will learn about three data binding abilities of Knockout.js. Data bindings are attributes added by the framework for the purpose of data access between elements and the view scope. While observable arrays are efficient for accessing a list of objects, and support a number of operations on top of displaying the list using the foreach function, Knockout.js provides three additional data binding abilities: Control-flow bindings Appearance bindings Interactive bindings Let us review these data bindings in detail in the following sections. Control-flow bindings As the name suggests, control-flow bindings help us access the data elements based on a certain condition. The if, ifnot, and with bindings are the control-flow bindings available in Knockout.js. In the following example, we will be using the if and with control-flow bindings. We have added a new attribute to the Employee object called age; we display the age value in green only if it is greater than 20. Similarly, we have added another object, markedEmployee. With the with control-flow binding, we can limit the scope of access to that specific employee object in the following paragraph. Add the following code snippet to index.html and run the program to see the if and with control-flow bindings working: <!DOCTYPE html> <html> <head> <title>Knockout JS</title> </head> <body> <h1>Welcome to Knockout JS programming</h1> <table border="1" > <tr > <th colspan="2" style="padding:10px;"> <b>Employee Data - Organization : <span style="color:red" data-bind='text: organizationName'> </span> </b> </th> </tr> <tr> <td style="padding:10px;">Employee First Name:</td> <td style="padding:10px;"> <span data-bind='text: empFirstName'></span> </td> </tr> <tr> <td style="padding:10px;">Employee Last Name:</td> <td style="padding:10px;"> <span data-bind='text: empLastName'></span> </td> </tr> </table> <p>Organization Full Name : <span style="color:red" data-bind='text: orgFullName'></span> </p> <!-- Observable Arrays--> <h2>Observable Array Example : </h2> <table border="1"> <thead><tr> <th style="padding:10px;">First Name</th> <th style="padding:10px;">Last Name</th> <th style="padding:10px;">Age</th> </tr></thead> <tbody data-bind='foreach: organization'> <tr> <td style="padding:10px;" data-bind='text: firstName'></td> <td style="padding:10px;" data-bind='text: lastName'></td> <td data-bind="if: age() > 20" style="color: green;padding:10px;"> <span data-bind='text: age'></span> </td> </tr> </tbody> </table> <!-- with control flow bindings --> <p data-bind='with: markedEmployee'> Employee <strong data-bind="text: firstName() + ', ' + lastName()"> </strong> is marked with the age <strong data-bind='text: age'> </strong> </p> <h2>Add New Employee to Observable Array</h2> First Name : <input data-bind="value: newFirstName" /> Last Name : <input data-bind="value: newLastName" /> Age : <input data-bind="value: newEmpAge" /> <button data-bind='click: addEmployee'>Add Employee</button> <!-- JavaScript resources --> <script type='text/javascript' src='js/knockout-3.4.2.js'></script> <script type='text/javascript'> function Employee(firstName, lastName, age) { this.firstName = ko.observable(firstName); this.lastName = ko.observable(lastName); this.age = ko.observable(age); } var employeeViewModel = { empFirstName: "Tony", empLastName: "Henry", //Observable organizationName: ko.observable("Sun"), 
newFirstName: ko.observable(""), newLastName: ko.observable(""), newEmpAge: ko.observable(""), //With control flow object markedEmployee: ko.observable(new Employee("Garry", "Parks", "65")), //Observable Arrays organization : ko.observableArray([ new Employee("John", "Kennedy", "24"), new Employee("Peter", "Hennes", "18"), new Employee("Richmond", "Smith", "54") ]) }; //Attach the method after the view model is defined, so the click binding can find it employeeViewModel.addEmployee = function() { this.organization.push(new Employee(employeeViewModel.newFirstName(), employeeViewModel.newLastName(), employeeViewModel.newEmpAge())); }; //Computed Observable employeeViewModel.orgFullName = ko.computed(function() { return employeeViewModel.organizationName() + " Limited"; }); ko.applyBindings(employeeViewModel); employeeViewModel.organizationName("Oracle"); </script> </body> </html> Run the preceding program to see the if control-flow acting on the Age field, and the with control-flow showing a marked employee record with age 65. Appearance bindings Appearance bindings deal with displaying the data from binding elements on view components in formats such as text and HTML, and applying styles with the help of a set of six bindings, as follows: Text: <value>—Sets the text value of an element. Example: <td data-bind='text: name'></td> HTML: <value>—Sets the HTML value of an element. Example: //JavaScript: function Employee(firstname, lastname, age) { ... this.formattedName = ko.computed(function() { return "<strong>" + this.firstname() + "</strong>"; }, this); } //Html: <span data-bind='html: markedEmployee().formattedName'></span> Visible: <condition>—An element can be shown or hidden based on the condition. Example: <td data-bind='visible: age() > 20' style='color: green'> <span data-bind='text: age'></span> </td> CSS: <object>—Associates an element with a CSS class. Example: //CSS: .strongEmployee { font-weight: bold; } //HTML: <span data-bind='text: formattedName, css: {strongEmployee: age() > 20}'> </span> Style: <object>—Associates an inline style with the element. Example: <span data-bind='text: age, style: {color: age() > 20 ? "green" : "red"}'> </span> Attr: <object>—Defines an attribute for the element. Example: <p><a data-bind='attr: {href: featuredEmployee().populatelink}'> View Employee</a></p> Interactive bindings Interactive bindings help the user interact with the form elements, associating them with corresponding ViewModel methods or events to be triggered in the pages. Knockout.js supports the following interactive bindings: Click: <method>—An element click invokes a ViewModel method. Example: <button data-bind='click: addEmployee'>Submit</button> Value: <property>—Associates the form element value with the ViewModel attribute. Example: <td>Age: <input data-bind='value: age' /></td> Event: <object>—Invokes a method on a user-initiated event. Example: <p data-bind='event: {mouseover: showEmployee, mouseout: hideEmployee}'> Age: <input data-bind='value: Age' /> </p> Submit: <method>—Invokes a method on a form submit event. Example: <form data-bind="submit: addEmployee"> <!-- Employee form fields --> <button type="submit">Submit</button> </form> Enable: <property>—Conditionally enables form elements. Example: the last name field is enabled only after the first name field has been filled in. Disable: <property>—Conditionally disables form elements. Example: the last name field is disabled once a first name has been added: <p>Last Name: <input data-bind='value: lastName, disable: firstName' /> </p> Checked: <property>—Associates a checkbox or radio element with the ViewModel attribute. Example: <p>Gender: <input data-bind='checked: gender' type='checkbox' /></p> Options: <array>—Defines a ViewModel array for the <select> element. 
Example: //JavaScript: this.designations = ko.observableArray(['manager', 'administrator']); //Html: Designation: <select data-bind='options: designations'></select> selectedOptions: <array>—Defines the active/selected elements of the <select> element. Example: Designation: <select data-bind='options: designations, optionsCaption: "Select", selectedOptions: defaultDesignation'> </select> hasfocus: <property>—Associates the element's focus state with a ViewModel property. Example: First Name: <input data-bind='value: firstName, hasfocus: firstNameHasFocus' /> We have learned about the data binding abilities of Knockout.js. You can learn more about external data access and hybrid mobile application development from the book Oracle JET for Developers. Read More: Text and appearance bindings and form field bindings Getting to know KnockoutJS Templates
Blocking versus Non blocking scripts

Packt
08 Feb 2013
7 min read
(For more resources related to this topic, see here.) Blocking versus non-blocking The reason we've put this library into the head of the HTML page, and not the footer, is because we actually want Modernizr to be a blocking script; this way it will test for, and if applicable create or shim, any elements before the DOM is rendered. We also want to be able to tell what features are available to us before the page is rendered. The script will load, Modernizr will test the availability of the new semantic elements and, if necessary, shim in the ones that fail the tests, and the rest of the page load will be on its merry way. Now, when I say that we want the script to be "blocking", what I mean is that the browser will wait to render or download any more of the page content until the script has finished loading. In essence, everything will move in a serial process, and future processes will be "blocked" from occurring until after this takes place. This is also referred to as single threaded. More commonly, as you may already be aware, scripts are called upon in the footer of the page, typically before the closing body tag, or by way of self-construction through the use of an anonymous or immediate function, which only builds itself once the DOM is already parsed. The async attribute Even more recently, included page scripts can have the async attribute added to their tag elements, which will tell the browser to download other scripts in parallel. I like to think of serial versus parallel script downloading in terms of a phone conversation, each script being a single phone call. For each call made, a conversation is held, and once complete, the next phone call is made until there aren't any numbers left to be dialled. Parallel or asynchronous would be like having all of the callers on a conference call at one time. The browser, as the person making all these calls at once, has the superpower to hold all these conversations at the same time. I like to think of blocking scripts as phone calls whose conversations contain pieces of information that the person, or browser, would need to know before communicating with the other scripts on this metaphoric conference call. Blocking to allow shimming For our needs, however, we want Modernizr to block, so that all feature tests and shimming can be done before the DOM renders. The piece of information the browser needs before calling up the other scripts and parts of the page is what features exist, and whether or not semantic HTML5 elements need to be simulated. Doing otherwise could mean tragedy: styles could target elements that don't exist, because our shim wasn't there to serve its purpose. It would be similar to a roofer trying to attach shingles to a roof without any nails. Think of shimming as the nails that let the CSS attach certain selectors to their respective DOM nodes. Browsers such as IE typically ignore elements they don't recognize by default, so the shims make the styles hold to the replicated semantic elements, and blocking the page ensures that happens in a timely manner. Shimming, which is also referred to as a "shiv", is when JavaScript recreates an HTML5 element that doesn't exist natively in the browser. The elements are thus "shimmed" in for use in styling. The browser will often ignore elements that don't exist natively otherwise. Say, for example, that the browser used to render the page did not support the new HTML5 section element tag. 
If the page wasn't shimmed to accommodate this before the render tree was constructed, you would run the risk of the CSS not working on those section elements. Looking at the reference chart on http://caniuse.com, this is somewhat likely for anyone using IE 8 or earlier. Now that we've adequately covered how to load Modernizr in the page header, we can move back on to the HTML. Adding the navigation Now that we have verified all of the JavaScript that is connected, we can start adding in more visual HTML elements. I'm going to add in five sections to the page and a fixed navigation header to scroll to each of them. Once that is all in place and working, we'll disable the default HTML actions in the navigation and control everything with JavaScript. By doing this, there will be a nice graceful fallback for the two people on the planet that have JavaScript disabled. Just kidding, maybe it's only one person. All joking aside, a no-JavaScript fallback will be in place in the event that it is disabled on the page. If everything checks out as it should, you'll see a confirmation message printed in the JavaScript console in the developer tools. While we're at it, let's remove the h1 tag as well. Since we now know for a fact that Modernizr is great, we don't need to "hello world" it. Once the h1 tag is removed, it's time for a bit of navigation. The HTML used is as follows: <!-- Placing everything in the <header> html5 tag. --> <header> <div id="navbar"> <div id="nav"> <!-- Wrap the navigation in the new html5 nav element --> <nav> <a href="#frame-1">Section One</a> <a href="#frame-2">Section Two</a> <a href="#frame-3">Section Three</a> <a href="#frame-4">Section Four</a> <a href="#frame-5">Section Five</a> </nav> </div> </div> </header> This is a fairly straightforward navigation at the moment. The entire fragment is placed inside the HTML5 header element of the page. A div tag with the id field of navbar will be used for targeting. I prefer to use HTML5 purely for semantic markup of the page as much as possible, and to use div tags to target with styles. You could just as easily add CSS selectors to the new elements and they would be picked up as if they were any other inline or block element. The section frames After the nav element we'll add the page section frames. Each frame will be a div element, and each div element will have an id field matching the href attribute of the corresponding element from the navigation. For example, the first frame will have the id field of frame-1, which matches the href attribute of the first anchor tag in the navigation. Everything will also be wrapped in a div tag with the id field of main. Each panel or section will have the class name of frame, which allows us to apply common styles across sections, as shown in the following code snippet: <div id="main"> <div id="frame-1" ></div> <div id="frame-2" ></div> <div id="frame-3" ></div> <div id="frame-4" ></div> <div id="frame-5" ></div> </div> Summary In this article we saw the basics of blocking versus non-blocking scripts, and how the async attribute allows the browser to download scripts in parallel. We made Modernizr a blocking script so that shimming could happen before the DOM rendered, added the navigation, and created the section frames. Resources for Article : Further resources on this subject: HTML5: Generic Containers [Article] HTML5 Games Development: Using Local Storage to Store Game Data [Article] Building HTML5 Pages from Scratch [Article]
Accessing Data with Spring

Packt
25 Jan 2016
8 min read
In this article written by Shameer Kunjumohamed and Hamidreza Sattari, authors of the book Spring Essentials, we will learn how to access data with Spring. (For more resources related to this topic, see here.) Data access or persistence is a major technical feature of data-driven applications. It is a critical area where careful design and expertise are required. Modern enterprise systems use a wide variety of data storage mechanisms, ranging from traditional relational databases such as Oracle, SQL Server, and Sybase to more flexible, schema-less NoSQL databases such as MongoDB, Cassandra, and Couchbase. Spring Framework provides comprehensive support for data persistence in multiple flavors of mechanisms, ranging from convenient template components to smart abstractions over popular Object Relational Mapping (ORM) tools and libraries, making them much easier to use. Spring's data access support is another great reason for choosing it to develop Java applications. Spring Framework offers developers the following primary approaches for data persistence mechanisms to choose from: Spring JDBC ORM data access Spring Data Furthermore, Spring standardizes the preceding approaches under a unified Data Access Object (DAO) notation called @Repository. Another compelling reason for using Spring is its first-class transaction support. Spring provides consistent transaction management, abstracting different transaction APIs such as JTA, JDBC, JPA, Hibernate, JDO, and other container-specific transaction implementations. In order to make development and prototyping easier, Spring provides embedded database support, smart data source abstractions, and excellent test integration. This article explores the various data access mechanisms provided by Spring Framework and its comprehensive support for transaction management in both standalone and web environments, with relevant examples. Why use Spring Data Access when we have JDBC? JDBC (short for Java Database Connectivity), the Java Standard Edition API for data connectivity from Java to relational databases, is a very low-level framework. Data access via JDBC is often cumbersome; the boilerplate code that the developer needs to write makes it error-prone. Moreover, JDBC exception handling is not sufficient for most use cases; there exists a real need for simplified but extensive and configurable exception handling for data access. Spring JDBC encapsulates the often-repeating code, simplifying the developer's code tremendously and letting them focus entirely on the business logic. Spring Data Access components abstract the technical details, including the lookup and management of persistence resources such as connections, statements, and result sets, and accept the specific SQL statements and relevant parameters to perform each operation. They use the same JDBC API under the hood while exposing simplified, straightforward interfaces for the client's use. This approach helps make a much cleaner and hence more maintainable data access layer for Spring applications. DataSource The first step of connecting to a database from any Java application is obtaining a connection object specified by JDBC. DataSource, part of Java SE, is a generalized factory of java.sql.Connection objects that represents the physical connection to the database, and is the preferred means of producing a connection. DataSource handles transaction management, connection lookup, and pooling functionalities, relieving the developer from these infrastructural issues. 
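To make that division of labor concrete, here is a minimal, hypothetical sketch of client code that consumes an injected DataSource; the TaskDao class and the task table are invented purely for illustration:

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import javax.sql.DataSource;

public class TaskDao {
    private final DataSource dataSource;

    public TaskDao(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    public int countTasks() throws SQLException {
        // The DataSource decides how the Connection is produced (fresh,
        // pooled, or JNDI-managed); with a pooled implementation, close()
        // simply returns the connection to the pool.
        try (Connection con = dataSource.getConnection();
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery("SELECT COUNT(*) FROM task")) {
            rs.next();
            return rs.getInt(1);
        }
    }
}

Note that this DAO stays identical whether the DataSource behind it is a pooled DBCP instance, a JNDI-provided one, or a simple DriverManagerDataSource; coding against the interface is exactly the point.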
DataSource objects are often implemented by database driver vendors and typically looked up via JNDI. Application servers and servlet engines provide their own implementations of DataSource, a connector to the one provided by the database vendor, or both. Typically configured inside XML-based server descriptor files, server-supplied DataSource objects generally provide built-in connection pooling and transaction support. As a developer, you just configure your DataSource objects inside the server (configuration files) declaratively in XML and look them up from your application via JNDI. In a Spring application, you configure your DataSource reference as a Spring bean and inject it as a dependency into your DAOs or other persistence resources. The Spring <jee:jndi-lookup/> tag (of the http://www.springframework.org/schema/jee namespace) shown here allows you to easily look up and construct JNDI resources, including a DataSource object defined from inside an application server. For applications deployed on a J2EE application server, a JNDI DataSource object provided by the container is recommended. <jee:jndi-lookup id="taskifyDS" jndi-name="java:jboss/datasources/taskify"/> For standalone applications, you need to create your own DataSource implementation or use third-party implementations, such as Apache Commons DBCP, C3P0, and BoneCP. The following is a sample DataSource configuration using Apache Commons DBCP2: <bean id="taskifyDS" class="org.apache.commons.dbcp2.BasicDataSource" destroy-method="close"> <property name="driverClassName" value="${driverClassName}" /> <property name="url" value="${url}" /> <property name="username" value="${username}" /> <property name="password" value="${password}" /> . . . </bean> Make sure you add the corresponding dependency (of your DataSource implementation) to your build file. The following is the one for DBCP2: <dependency> <groupId>org.apache.commons</groupId> <artifactId>commons-dbcp2</artifactId> <version>2.1.1</version> </dependency> Spring provides a simple implementation of DataSource called DriverManagerDataSource, which is only for testing and development purposes, not for production use. Note that it does not provide connection pooling. Here is how you configure it inside your application: <bean id="taskifyDS" class="org.springframework.jdbc.datasource.DriverManagerDataSource"> <property name="driverClassName" value="${driverClassName}" /> <property name="url" value="${url}" /> <property name="username" value="${username}" /> <property name="password" value="${password}" /> </bean> It can also be configured in a pure JavaConfig model, as shown in the following code: @Bean DataSource getDatasource() { DriverManagerDataSource dataSource = new DriverManagerDataSource(pgDsProps.getProperty("url")); dataSource.setDriverClassName( pgDsProps.getProperty("driverClassName")); dataSource.setUsername(pgDsProps.getProperty("username")); dataSource.setPassword(pgDsProps.getProperty("password")); return dataSource; } Never use DriverManagerDataSource in production environments. Use third-party DataSources such as DBCP, C3P0, and BoneCP for standalone applications, and a JNDI DataSource provided by the container for J2EE containers, instead. Using embedded databases For prototyping and test environments, it would be a good idea to use Java-based embedded databases for quickly ramping up the project. Spring natively supports the HSQL, H2, and Derby database engines for this purpose. 
Here is a sample DataSource configuration for an embedded HSQL database: @Bean DataSource getHsqlDatasource() { return new EmbeddedDatabaseBuilder().setType(EmbeddedDatabaseType.HSQL) .addScript("db-scripts/hsql/db-schema.sql") .addScript("db-scripts/hsql/data.sql") .addScript("db-scripts/hsql/storedprocs.sql") .addScript("db-scripts/hsql/functions.sql") .setSeparator("/").build(); } The XML version of the same would look as shown in the following code: <jdbc:embedded-database id="dataSource" type="HSQL"> <jdbc:script location="classpath:db-scripts/hsql/db-schema.sql" /> . . . </jdbc:embedded-database> Handling exceptions in the Spring data layer With traditional JDBC-based applications, exception handling is based on java.sql.SQLException, which is a checked exception. It forces the developer to write catch and finally blocks carefully for proper handling and to avoid resource leakages. Spring, with its smart exception hierarchy based on runtime exceptions, saves the developer from this nightmare. By having DataAccessException as the root, Spring bundles a big set of meaningful exceptions that translate the traditional JDBC exceptions. Besides JDBC, Spring covers the Hibernate, JPA, and JDO exceptions in a consistent manner. Spring uses SQLErrorCodeSQLExceptionTranslator, which implements SQLExceptionTranslator, in order to translate SQLExceptions to DataAccessExceptions. We can extend this class to customize the default translations. We can replace the default translator with our custom implementation by injecting it into the persistence resources (such as JdbcTemplate, which we will cover soon). DAO support and the @Repository annotation The standard way of accessing data is via specialized DAOs that perform persistence functions under the data access layer. Spring follows the same pattern by providing DAO components and allowing developers to mark their data access components as DAOs using an annotation called @Repository. This approach ensures consistency over various data access technologies, such as JDBC, Hibernate, JPA, and JDO, as well as project-specific repositories. Spring applies SQLExceptionTranslator across all these methods consistently. Spring recommends that your data access components be annotated with the stereotype @Repository. The term "repository" was originally defined in Domain-Driven Design, Eric Evans, Addison-Wesley, as "a mechanism for encapsulating storage, retrieval, and search behavior which emulates a collection of objects." This annotation makes the class eligible for DataAccessException translation by Spring Framework. Spring Data, another standard data access mechanism provided by Spring, revolves around @Repository components. Summary We have so far explored Spring Framework's comprehensive coverage of all the technical aspects around data access and transactions. Spring provides multiple convenient data access methods, which take away much of the hard work involved in building the data layer from the developer, and also standardize business components. Correct usage of Spring data access components will ensure that our data layer is clean and highly maintainable. Resources for Article: Further resources on this subject: So, what is Spring for Android?[article] Getting Started with Spring Security[article] Creating a Spring Application[article]
Building A Movie API with Express
Packt
18 Feb 2016
22 min read
We will build a movie API that allows you to add actor and movie information to a database and connect actors with movies, and vice versa. This will give you a hands-on feel for what Express.js offers. We will cover the following topics in this article: Folder structure and organization Responding to CRUD operations Object modeling with Mongoose Generating unique IDs Testing (For more resources related to this topic, see here.) Folder structure and organization Folder structure is a very controversial topic. Though there are many clean ways to structure your project, we will use the following structure for the remainder of our article:

article
+-- app.js
+-- package.json
+-- node_modules
|   +-- npm package folders
+-- src
|   +-- lib
|   +-- models
|   +-- routes
+-- test

Let's take a look at this in detail: app.js: It is conventional to have the main app.js file in the root directory. The app.js is the entry point of our application and will be used to launch the server. package.json: As with any Node.js app, we have package.json in the root folder, specifying our application name and version as well as all of our npm dependencies. node_modules: The node_modules folder and its content are generated via npm installation and should usually be ignored in your version control of choice because it depends on the platform the app runs on. Having said that, according to the npm FAQ, it is probably better to commit the node_modules folder as well. Check node_modules into git for things you deploy, such as websites and apps. Do not check node_modules into git for libraries and modules intended to be reused. Refer to the following article to read more about the rationale behind this: http://www.futurealoof.com/posts/nodemodules-in-git.html src: The src folder contains all the logic of the application. lib: Within the src folder, we have the lib folder, which contains the core of the application. This includes the middleware, routes, and creating the database connection. models: The models folder contains our mongoose models, which define the structure and logic of the models we want to manipulate and save. routes: The routes folder contains the code for all the endpoints the API is able to serve. test: The test folder will contain our functional tests using Mocha as well as two other node modules, should and supertest, to make it easier to aim for 100 percent coverage. Responding to CRUD operations The term CRUD refers to the four basic operations one can perform on data: create, read, update, and delete. Express gives us an easy way to handle those operations by supporting the basic methods GET, POST, PUT, and DELETE: GET: This method is used to retrieve existing data from the database. This can be used to read single or multiple rows (for SQL) or documents (for MongoDB) from the database. POST: This method is used to write new data into the database, and it is common to include a JSON payload that fits the data model. PUT: This method is used to update existing data in the database, and a JSON payload that fits the data model is often included for this method as well. DELETE: This method is used to remove an existing row or document from the database. Express 4 has changed dramatically from version 3. A lot of the core modules have been removed in order to make it even more lightweight and less dependent. Therefore, we have to explicitly require modules when needed. One helpful module is body-parser. It allows us to get a nicely formatted body when a POST or PUT HTTP request is received. 
We have to add this middleware before our business logic in order to use its result later. We write the following in src/lib/parser.js: var bodyParser = require('body-parser'); module.exports = function(app) { app.use(bodyParser.json()); app.use(bodyParser.urlencoded({ extended: false })); }; The preceding code is then used in src/lib/app.js as follows: var express = require('express'); var app = express(); require('./parser')(app); module.exports = app; The following example allows you to respond to a GET request on http://host/path. Once a request hits our API, Express will run it through the necessary middleware as well as the following function: app.get('/path/:id', function(req, res, next) { res.status(200).json({ hello: 'world'}); }); The first parameter is the path we want to handle with a GET function. The path can contain parameters prefixed with :. Those path parameters will then be parsed in the request object. The second parameter is the callback that will be executed when the server receives the request. This function gets populated with three parameters: req, res, and next. The req parameter represents the HTTP request object that has been customized by Express and the middlewares we added in our application. Using the path http://host/path/:id, suppose a GET request is sent to http://host/path/1?a=1&b=2. The req object would be the following: { params: { id: 1 }, query: { a: 1, b: 2 } } The params object is a representation of the path parameters. The query object holds the query string values, which are stated after the ? in the URL. In a POST request, there will often be a body in our request object as well, which includes the data we wish to place in our database. The res parameter represents the response object for that request. Some methods, such as status() or json(), are provided in order to tell Express how to respond to the client. Finally, the next() function will execute the next middleware defined in our application. Retrieving an actor with GET Retrieving a movie or actor from the database consists of submitting a GET request to the route /movies/:id or /actors/:id. We will need a unique ID that refers to a unique movie or actor: app.get('/actors/:id', function(req, res, next) { //Find the actor object with this :id //Respond to the client }); Here, the URL parameter :id will be placed in our request object. Since we call the first variable in our callback function req as before, we can access the URL parameter by calling req.params.id. Since an actor may be in many movies and a movie may have many actors, we need a nested endpoint to reflect this as well: app.get('/actors/:id/movies', function(req, res, next) { //Find all movies the actor with this :id is in //Respond to the client }); If a bad GET request is submitted or no actor with the specified ID is found, then the appropriate status code, bad request 400 or not found 404, will be returned. If the actor is found, then success status 200 will be sent back along with the actor information. On a success, the response JSON will look like this: { "_id": "551322589911fefa1f656cc5", "id": 1, "name": "AxiomZen", "birth_year": 2012, "__v": 0, "movies": [] } Creating a new actor with POST In our API, creating a new movie in the database involves submitting a POST request to /movies, or to /actors for a new actor: app.post('/actors', function(req, res, next) { //Save new actor //Respond to the client }); In this example, the user accessing our API sends a POST request with data that would be placed into request.body. 
As before, the first variable in our callback function is req; thus, to access the body of the request, we call req.body. The request body is sent as a JSON string; if an error occurs, a 400 (bad request) status would be sent back. Otherwise, a 201 (created) status is sent to the response object. On a successful request, the response will look like the following: { "__v": 0, "id": 1, "name": "AxiomZen", "birth_year": 2012, "_id": "551322589911fefa1f656cc5", "movies": [] } Updating an actor with PUT To update a movie or actor entry, we first create a new route and submit a PUT request to /movies/:id or /actors/:id, where the id parameter is unique to an existing movie/actor. There are two steps to an update. We first find the movie or actor by using the unique id, and then we update that entry with the body of the request object, as shown in the following code: app.put('/actors/:id', function(req, res) { //Find and update the actor with this :id //Respond to the client }); In the request, we would need request.body to be a JSON object that reflects the actor fields to be updated. The request.params.id would still be a unique identifier that refers to an existing actor in the database as before. On a successful update, the response JSON looks like this: { "_id": "551322589911fefa1f656cc5", "id": 1, "name": "Axiomzen", "birth_year": 99, "__v": 0, "movies": [] } Here, the response will reflect the changes we made to the data. Removing an actor with DELETE Deleting a movie is as simple as submitting a DELETE request to the same routes that were used earlier (specifying the ID). The actor with the appropriate id is found and then deleted: app.delete('/actors/:id', function(req, res) { //Remove the actor with this :id //Respond to the client }); If the actor with the unique id is found, it is then deleted and a response code of 204 is returned. If the actor cannot be found, a response code of 400 is returned. There is no response body for a DELETE() method; it will simply return the status code of 204 on a successful deletion. Our final endpoints for this simple app will be as follows: //Actor endpoints app.get('/actors', actors.getAll); app.post('/actors', actors.createOne); app.get('/actors/:id', actors.getOne); app.put('/actors/:id', actors.updateOne); app.delete('/actors/:id', actors.deleteOne); app.post('/actors/:id/movies', actors.addMovie); app.delete('/actors/:id/movies/:mid', actors.deleteMovie); //Movie endpoints app.get('/movies', movies.getAll); app.post('/movies', movies.createOne); app.get('/movies/:id', movies.getOne); app.put('/movies/:id', movies.updateOne); app.delete('/movies/:id', movies.deleteOne); app.post('/movies/:id/actors', movies.addActor); app.delete('/movies/:id/actors/:aid', movies.deleteActor); In Express 4, there is an alternative way to describe your routes. Routes that share a common URL, but use a different HTTP verb, can be grouped together as follows: app.route('/actors') .get(actors.getAll) .post(actors.createOne); app.route('/actors/:id') .get(actors.getOne) .put(actors.updateOne) .delete(actors.deleteOne); app.post('/actors/:id/movies', actors.addMovie); app.delete('/actors/:id/movies/:mid', actors.deleteMovie); app.route('/movies') .get(movies.getAll) .post(movies.createOne); app.route('/movies/:id') .get(movies.getOne) .put(movies.updateOne) .delete(movies.deleteOne); app.post('/movies/:id/actors', movies.addActor); app.delete('/movies/:id/actors/:aid', movies.deleteActor); Whether you prefer it this way or not is up to you. At least now you have a choice! 
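Express 4 also ships with express.Router, which lets you keep this wiring out of the main app module entirely. The following sketch is our own illustration, not code from the original project; the routes.js file name and the final listen() call are assumptions made for the example:

    // src/lib/routes.js (hypothetical wiring module)
    var express = require('express');
    var actors = require('../routes/actors');

    module.exports = function(app) {
      var router = express.Router();
      router.route('/actors')
        .get(actors.getAll)
        .post(actors.createOne);
      router.route('/actors/:id')
        .get(actors.getOne)
        .put(actors.updateOne)
        .delete(actors.deleteOne);
      // mount the router on the app
      app.use('/', router);
    };

    // app.js in the project root could then boot the server:
    var app = require('./src/lib/app');
    require('./src/lib/routes')(app);
    app.listen(3000, function() {
      console.log('Movie API listening on port 3000');
    });

A Router behaves like a mini application: it can carry its own middleware, which becomes handy once the API grows beyond a handful of endpoints.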
We have not discussed the logic of the function being run for each endpoint. We will get to that shortly. Express allows us to easily CRUD our database objects, but how do we model our objects? Object modeling with Mongoose Mongoose is an object data modeling (ODM) library that allows you to define schemas for your data collections. You can find out more about Mongoose on the project website: http://mongoosejs.com/. To connect to a MongoDB instance using the mongoose variable, we first need to install Mongoose via npm with the save flag (npm install --save mongoose). The save flag automatically adds the module to your package.json with the latest version; thus, it is always recommended to install your modules with the save flag. For modules that you only need locally (for example, Mocha), you can use the save-dev flag. For this project, we create a new file db.js under /src/lib/db.js, which requires Mongoose. The local connection to the mongodb database is made in mongoose.connect as follows: var mongoose = require('mongoose'); module.exports = function(app) { mongoose.connect('mongodb://localhost/movies', { mongoose: { safe: true } }, function(err) { if (err) { return console.log('Mongoose - connection error:', err); } }); return mongoose; }; In our movies database, we need separate schemas for actors and movies. As an example, we will go through object modeling in our actor database /src/models/actor.js by creating an actor schema as follows: // /src/models/actor.js var mongoose = require('mongoose'); var generateId = require('./plugins/generateId'); var actorSchema = new mongoose.Schema({ id: { type: Number, required: true, index: { unique: true } }, name: { type: String, required: true }, birth_year: { type: Number, required: true }, movies: [{ type : mongoose.Schema.ObjectId, ref : 'Movie' }] }); actorSchema.plugin(generateId()); module.exports = mongoose.model('Actor', actorSchema); Each actor has a unique id, a name, and a birth year. Each field also carries validators, such as its type and whether it is required. The model is exported upon definition (module.exports) so that we can reuse it directly in the app. Alternatively, you could fetch each model through Mongoose using mongoose.model('Actor', actorSchema), but this would feel less explicitly coupled compared to our approach of directly requiring it. Similarly, we need a movie schema as well. We define the movie schema as follows: // /src/models/movies.js var movieSchema = new mongoose.Schema({ id: { type: Number, required: true, index: { unique: true } }, title: { type: String, required: true }, year: { type: Number, required: true }, actors: [{ type : mongoose.Schema.ObjectId, ref : 'Actor' }] }); movieSchema.plugin(generateId()); module.exports = mongoose.model('Movie', movieSchema); Generating unique IDs In both our movie and actor schemas, we used a plugin called generateId(). While MongoDB automatically generates an ObjectID for each document using the _id field, we want to generate our own IDs that are more human-readable and hence friendlier. We also would like to give the user the opportunity to select their own id of choice. However, being able to choose an id can cause conflicts. If you were to choose an id that already exists, your POST request would be rejected. We should autogenerate an ID if the user does not pass one explicitly. Without this plugin, if either an actor or a movie is created without an explicit ID passed along by the user, the server would complain, since the ID is required. 
We can create middleware for Mongoose that assigns an id before we persist the object as follows: // /src/models/plugins/generateId.js module.exports = function() { return function generateId(schema) { schema.pre('validate', function(next, done) { var instance = this; var model = instance.model(instance.constructor.modelName); if (instance.id == null) { model.findOne().sort("-id").exec(function(err, maxInstance) { if (err) { return done(err); } else { // default to 0 when the collection is still empty var maxId = (maxInstance && maxInstance.id) || 0; instance.id = maxId + 1; done(); } }) } else { done(); } }) } }; There are a few important notes about this code. See what we did to get the var model? This makes the plugin generic so that it can be applied to multiple Mongoose schemas. Notice that there are two callbacks available: next and done. The next variable passes the code to the next pre-validation middleware. That's something you would usually put at the bottom of the function, right after you make your asynchronous call. This is generally a good thing, since one of the advantages of asynchronous calls is that you can have many things running at the same time. However, in this case, we cannot call next because it would conflict with our model definition of id being required. Thus, we just stick to using the done variable when the logic is complete. Another concern arises due to the fact that MongoDB doesn't support transactions, which means you may have to account for this function failing in some edge cases. For example, if two calls to POST /actors happen at the same time, they will both have their IDs auto-incremented to the same value. Now that we have the code for our generateId() plugin, we require it in our actor and movie schemas as follows: var generateId = require('./plugins/generateId'); actorSchema.plugin(generateId()); Validating your database Each key in the Mongoose schema defines a property that is associated with a SchemaType. For example, in our actors.js schema, the actor's name key is associated with a string SchemaType. String, number, date, buffer, boolean, mixed, objectId, and array are all valid schema types. In addition to schema types, numbers have min and max validators and strings have enum and match validators. Validation occurs when a document is being saved (.save()) and will return an error object, containing type, path, and value properties, if the validation has failed. Extracting functions to reusable middleware We can use our anonymous or named functions as middleware. To do so, we would export our functions by calling module.exports in routes/actors.js and routes/movies.js. Let's take a look at our routes/actors.js file. At the top of this file, we require the Mongoose schemas we defined before: var Actor = require('../models/actor'); This allows our variable Actor to access our MongoDB using mongo functions such as find(), create(), and update(). It will follow the schema defined in the file /models/actor. Since actors are in movies, we will also need to require the Movie schema to show this relationship, as follows: var Movie = require('../models/movie'); Now that we have our schemas, we can begin defining the logic for the functions we described in the endpoints. For example, the endpoint GET /actors/:id will retrieve the actor with the corresponding ID from our database. Let's call this function getOne(). 
It is defined as follows: getOne: function(req, res, next) { Actor.findOne({ id: req.params.id }) .populate('movies') .exec(function(err, actor) { if (err) return res.status(400).json(err); if (!actor) return res.status(404).json(); res.status(200).json(actor); }); }, Here, we use the mongo findOne() method to retrieve the actor with id: req.params.id. There are no joins in MongoDB, so we use the .populate() method to retrieve the movies the actor is in. The .populate() method will retrieve documents from a separate collection based on its ObjectId. This function will return a status 400 if something went wrong with our Mongoose driver, a status 404 if the actor with :id is not found, and finally, it will return a status 200 along with the JSON of the actor object if an actor is found. We define all the functions required for the actor endpoints in this file. The result is as follows: // /src/routes/actors.js var Actor = require('../models/actor'); var Movie = require('../models/movie'); module.exports = { getAll: function(req, res, next) { Actor.find(function(err, actors) { if (err) return res.status(400).json(err); res.status(200).json(actors); }); }, createOne: function(req, res, next) { Actor.create(req.body, function(err, actor) { if (err) return res.status(400).json(err); res.status(201).json(actor); }); }, getOne: function(req, res, next) { Actor.findOne({ id: req.params.id }) .populate('movies') .exec(function(err, actor) { if (err) return res.status(400).json(err); if (!actor) return res.status(404).json(); res.status(200).json(actor); }); }, updateOne: function(req, res, next) { Actor.findOneAndUpdate({ id: req.params.id }, req.body, function(err, actor) { if (err) return res.status(400).json(err); if (!actor) return res.status(404).json(); res.status(200).json(actor); }); }, deleteOne: function(req, res, next) { Actor.findOneAndRemove({ id: req.params.id }, function(err) { if (err) return res.status(400).json(err); res.status(204).json(); }); }, addMovie: function(req, res, next) { Actor.findOne({ id: req.params.id }, function(err, actor) { if (err) return res.status(400).json(err); if (!actor) return res.status(404).json(); Movie.findOne({ id: req.body.id }, function(err, movie) { if (err) return res.status(400).json(err); if (!movie) return res.status(404).json(); actor.movies.push(movie); actor.save(function(err) { if (err) return res.status(500).json(err); res.status(201).json(actor); }); }) }); }, deleteMovie: function(req, res, next) { Actor.findOne({ id: req.params.id }, function(err, actor) { if (err) return res.status(400).json(err); if (!actor) return res.status(404).json(); actor.movies = []; actor.save(function(err) { if (err) return res.status(400).json(err); res.status(204).json(actor); }) }); } }; For all of our movie endpoints, we need the same functions but applied to the movie collection. After exporting these two files, we require them in app.js (/src/lib/app.js) by simply adding: require('../routes/movies'); require('../routes/actors'); By exporting our functions as reusable middleware, we keep our code clean and can refer to functions in our CRUD calls in the /routes folder.
Testing
Mocha is used as the test framework, along with should.js and supertest. Supertest lets you test your HTTP assertions and API endpoints. The tests are placed in the root folder /test. 
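To run the whole suite with one command, you can point npm at Mocha in package.json. The snippet below is an assumption for illustration (the original project files are not shown here), and the devDependencies version ranges are placeholders rather than pinned recommendations:

    {
      "scripts": {
        "test": "mocha test"
      },
      "devDependencies": {
        "mocha": "*",
        "should": "*",
        "supertest": "*"
      }
    }

With this in place, npm test discovers and executes every spec file under /test.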
Tests are completely separate from any of the source code and are written to be readable in plain English; that is, you should be able to follow along with what is being tested just by reading through them. Well-written tests with good coverage can serve as a readme for the API, since they clearly describe the behavior of the entire app. The initial setup to test our movies API is the same for both /test/actors.js and /test/movies.js: var should = require('should'); var assert = require('assert'); var request = require('supertest'); var app = require('../src/lib/app'); In test/actors.js, we test the basic CRUD operations: creating a new actor object, retrieving, editing, and deleting the actor object. An example test for the creation of a new actor is shown as follows: describe('Actors', function() { describe('POST actor', function() { it('should create an actor', function(done) { var actor = { 'id': '1', 'name': 'AxiomZen', 'birth_year': '2012', }; request(app) .post('/actors') .send(actor) .expect(201, done); }); }); }); We can see that the tests are readable in plain English. We create a new POST request for a new actor to the database with the id of 1, name of AxiomZen, and birth_year of 2012. Then, we send the request with the .send() function. Similar tests are present for GET and DELETE requests, as given in the following code: describe('GET actor', function() { it('should retrieve actor from db', function(done) { request(app) .get('/actors/1') .expect(200, done); }); }); describe('DELETE actor', function() { it('should remove an actor', function(done) { request(app) .delete('/actors/1') .expect(204, done); }); }); To test our PUT request, we will edit the name and birth_year of our first actor as follows: describe('PUT actor', function() { it('should edit an actor', function(done) { var actor = { 'name': 'ZenAxiom', 'birth_year': '2011' }; request(app) .put('/actors/1') .send(actor) .expect(200, done); }); it('should have been edited', function(done) { request(app) .get('/actors/1') .expect(200) .end(function(err, res) { res.body.name.should.eql('ZenAxiom'); res.body.birth_year.should.eql(2011); done(); }); }); }); The first part of the test modifies the actor name and birth_year keys, sends a PUT request for /actors/1 (1 is the actor's id), and then saves the new information to the database. The second part of the test checks whether the database entry for the actor with id 1 has been changed. The name and birth_year values are checked against their expected values using .should.eql(). In addition to performing CRUD actions on the actor object, we can also perform these actions on the movies we add to each actor (associated by the actor's ID). The following snippet shows a test to add a new movie to our first actor (with the id of 1): describe('POST /actors/:id/movies', function() { it('should successfully add a movie to the actor', function(done) { var movie = { 'id': '1', 'title': 'Hello World', 'year': '2013' }; request(app) .post('/actors/1/movies') .send(movie) .expect(201, done); }); it('actor should have array of movies now', function(done) { request(app) .get('/actors/1') .expect(200) .end(function(err, res) { res.body.movies.should.eql(['1']); done(); }); }); }); The first part of the test creates a new movie object with id, title, and year keys, and sends a POST request to add the movie to the actor with the id of 1. The second part of the test sends a GET request to retrieve the actor with the id of 1, which should now include an array containing the new movie. 
We can similarly delete the movie entries, as illustrated in the actors.js test file: describe('DELETE /actors/:id/movies/:movie_id', function() { it('should successfully remove a movie from actor', function(done) { request(app) .delete('/actors/1/movies/1') .expect(204, done); }); it('actor should no longer have that movie id', function(done) { request(app) .get('/actors/1') .expect(200) .end(function(err, res) { res.body.movies.should.eql([]); done(); }); }); }); Again, this code snippet should look familiar to you. The first part tests that sending a DELETE request specifying the actor ID and movie ID will delete that movie entry (our deleteMovie handler responds with a 204). In the second part, we make sure that the entry no longer exists by submitting a GET request to view the actor's details, where no movies should be listed. In addition to ensuring that the basic CRUD operations work, we also test our schema validations. The following code tests to make sure two actors with the same ID do not exist (IDs are specified as unique): it('should not allow you to create duplicate actors', function(done) { var actor = { 'id': '1', 'name': 'AxiomZen', 'birth_year': '2012', }; request(app) .post('/actors') .send(actor) .expect(400, done); }); We should expect code 400 (bad request) if we try to create an actor who already exists in the database. A similar set of tests is present for test/movies.js. The function and outcome of each test should be evident now. Summary In this article, we created a basic API that connects to MongoDB and supports CRUD methods. You should now be able to set up an API complete with tests, for any data, not just movies and actors! We hope you found that this article has laid a good foundation for the Express and API setup. To learn more about Express.js, the following books published by Packt Publishing (https://www.packtpub.com/) are recommended: Mastering Web Application Development with Express (https://www.packtpub.com/web-development/mastering-web-application-development-express) Advanced Express Web Application Development (https://www.packtpub.com/web-development/advanced-express-web-application-development) Resources for Article: Further resources on this subject: Metal API: Get closer to the bare metal with Metal API [Article] Building a Basic Express Site [Article] Introducing Sails.js [Article]

Testing and Quality Control
Packt
04 Jan 2017
19 min read
In this article by Pablo Solar Vilariño and Carlos Pérez Sánchez, the authors of the book PHP Microservices, we will see the following topics: (For more resources related to this topic, see here.) Test-driven development Behavior-driven development Acceptance test-driven development Tools Test-driven development Test-Driven Development (TDD) is part of the Agile philosophy, and it appears to solve a common developer problem: as an application evolves and grows, the code gets sick, and developers fix problems just to make it run, yet every single line we add can be a new bug or can even break other functions. Test-driven development is a learning technique that helps the developer to learn about the domain problem of the application they are going to build, doing it in an iterative, incremental, and constructivist way: Iterative because the technique always repeats the same process to get the value Incremental because for each iteration, we have more unit tests to be used Constructivist because it is possible to test all we are developing during the process straight away, so we can get immediate feedback Also, when we finish developing each unit test or iteration, we can forget it because it will be kept from now on throughout the entire development process, helping us to remember the domain problem through the unit test; this is a good approach for forgetful developers. It is very important to understand that TDD includes four things: analysis, design, development, and testing; in other words, doing TDD is understanding the domain problem and correctly analyzing the problem, designing the application well, developing well, and testing it. It needs to be clear: TDD is not just about implementing unit tests; it is the whole process of software development. TDD perfectly matches projects based on microservices, because using microservices in a large project means dividing it into little microservices or functionalities, which is like a group of little projects connected by a communication channel. The project size is independent of using TDD because, in this technique, you divide each functionality into little examples, and for this it does not matter if the project is big or small, and even less when our project is divided into microservices. Also, microservices are still better than a monolithic project here, because the functionalities for the unit tests are organized by microservice, and that helps the developers to know where they can begin using TDD. How to do TDD? Doing TDD is not difficult; we just need to follow some steps and repeat them, improving our code and checking that we did not break anything. TDD involves the following steps: Write the unit test: It needs to be the simplest and clearest test possible, and once it is done, it has to fail; this is mandatory. If it does not fail, there is something that we are not doing properly. Run the tests: If the test has errors (it fails), this is the moment to develop the minimum code to pass the test, just what is necessary; do not code additional things. Once you develop the minimum code to pass the test, run the test again (step two); if it passes, go to the next step; if not, fix it and run the test again. Improve the test: If you think it is possible to improve the code you wrote, do it and run the tests again (step two). If you think it is perfect, then write a new unit test (step one). 
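To make the cycle concrete, here is what the first red-green pair might look like with PHPUnit. The Calculator class and its add() method are invented for this illustration; they are not part of the book's project:

    <?php
    // A first failing test (red): Calculator does not exist yet,
    // so running PHPUnit at this point must fail
    class CalculatorTest extends PHPUnit_Framework_TestCase
    {
        public function testAddReturnsSumOfTwoIntegers()
        {
            $calculator = new Calculator();
            $this->assertEquals(5, $calculator->add(2, 3));
        }
    }

    // The minimum code to turn the test green; anything more
    // (input validation, other operations) belongs to later iterations
    class Calculator
    {
        public function add($a, $b)
        {
            return $a + $b;
        }
    }

Once this passes, the refactor step would clean up whatever duplication the green step introduced, running the test again after each small change.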
To do TDD, it is necessary to write the tests before implementing the function; if the tests are written after the implementation has started, it is not TDD; it is just testing. If we start implementing the application without testing and finish it, or if we start creating unit tests during the process, we are doing classic testing and we are not approaching the TDD benefits. Developing the functions without prior testing, the abstract idea of the domain problem in your mind can be wrong, or it may even be clear at the start, but during the development process it can change or the concepts can get mixed. Writing the tests after that, we are only checking whether the ideas in our mind were correct once we finished the implementation, so we will probably have to change some methods or even whole functionalities after spending time coding. Obviously, testing is always better than not testing, but doing TDD is still better than just classic testing. Why should I use TDD? TDD is the answer to questions such as: Where shall I begin? How can I do it? How can I write code that can be modified without breaking anything? How can I know what I have to implement? The goal is not to write many unit tests without sense, but to design the application properly, following the requirements. In TDD, we do not think about implementing functions; we think about good examples of functions related to the domain problem in order to remove the ambiguity created by the domain problem. In other words, by doing TDD, we should reproduce a specific function or use case in X examples until we get the necessary examples to describe the function or task without ambiguity or misinterpretations. TDD can be the best way to document your application. Using other methodologies of software development, we start thinking about how the architecture is going to be, what pattern is going to be used, how the communication between microservices is going to be, and so on, but what happens if, once we have all this planned, we realize that this is not necessary? How much time is going to pass until we realize that? How much effort and money are we going to spend? TDD defines the architecture of our application by creating little examples in many iterations until we realize what the architecture is; the examples will slowly show us the steps to follow in order to define the best structures, patterns, or tools to use, avoiding expenditure of resources during the first stages of our application. This does not mean that we are working without an architecture; obviously, we have to know if our application is going to be a website or a mobile app and use a proper framework. What is the interoperability in the application going to be? In our case, it will be an application based on microservices, so that will give us support to start creating the first unit tests. The architectures that we remove are the architectures on top of the architecture; in other words, the guidelines to develop an application as always. TDD will produce an architecture without ambiguity from unit testing. TDD is not a cure-all: in other words, it does not give the same results to a senior developer as to a junior developer, but it is useful for the entire team. 
Let's look at some advantages of using TDD: Code reuse: Creates every functionality with only the necessary code to pass the tests in the second stage (Green) and allows you to see if there are more functions using the same code structure or parts of a specific function, so it helps you to reuse the previous code you wrote. Teamwork is easier: It allows you to be confident with your team colleagues. Some architects or senior developers do not trust developers with little experience, and they need to check their code before committing the changes, creating a bottleneck at that point, so TDD helps to build trust in less experienced developers. Increases communication between team colleagues: The communication is more fluent, so the team shares their knowledge about the project as reflected in the unit tests. Avoids overdesigning the application in the first stages: As we said before, doing TDD gives you an overview of the application little by little, avoiding the creation of useless structures or patterns in your project, which, maybe, you will trash in future stages. Unit tests are the best documentation: The best way to get a good point of view of a specific functionality is to read its unit tests; they will help you understand how it works better than human words. Allows discovering more use cases in the design stage: In every test you have to create, you will understand better how the functionality should work and all the possible states it can have. Increases the feeling of a job well done: In every commit of your code, you will have the feeling that it was done properly, because the rest of the unit tests pass without errors, so you will not be worried about breaking other functionalities. Increases the software quality: During the refactoring step, we spend our efforts on making the code more efficient and maintainable, checking that the whole project still works properly after the changes. TDD algorithm The technical concepts and steps to follow in the TDD algorithm are easy and clear, and the proper way to apply it improves with practice. There are only three steps, called red, green, and refactor: Red – Writing the unit tests It is possible to write a test even when the code is not written yet; you just need to think about whether it is possible to write a specification before implementing it. So, in this first step you should consider that the unit test you start writing is not like a unit test, but rather an example or specification of the functionality. In TDD, this first example or specification is not immovable; in other words, the unit test can be modified in the future. Before starting to write the first unit test, it is necessary to think about how the Software Under Test (SUT) is going to be. We need to think about how the SUT code is going to be and how we would check that it works the way we want it to. The way that TDD works drives us to first design what is most comfortable and clear, as long as it fits the requirements. Green – Make the code work Once the example is written, we have to code the minimum to make it pass the test; in other words, set the unit test to green. It does not matter if the code is ugly and not optimized; improving it will be our task in the next step and in later iterations. In this step, the important thing is to write only the necessary code for the requirements, without unnecessary things. It does not mean writing without thinking about the functionality, but thinking about it to be efficient. 
It looks easy, but you will realize that you will write extra code the first time. If you concentrate on this step, new questions will appear about the SUT behavior with different inputs, but you should be strong and avoid writing extra code about other functionalities related to the current one. Instead of coding them, take notes to convert them into functionalities in the next iterations. Refactor – Eliminate redundancy Refactoring is not the same as rewriting code. You should be able to change the design without changing the behavior. In this step, you should remove the duplication in your code and check whether the code matches the principles of good practices, thinking about the efficiency, clarity, and future maintainability of the code. This part depends on the experience of each developer. The key to good refactoring is making it in small steps. To refactor a functionality, the best way is to change a little part and then execute all the available tests; if they pass, continue with another little part, until you are happy with the obtained result. Behavior-driven development Behavior-Driven Development (BDD) is a process that broadens the TDD technique and mixes it with other design ideas and business analyses provided to the developers, in order to improve software development. In BDD, we test the scenarios and the classes' behavior in order to meet the scenarios, which can be composed of many classes. It is very useful to use a DSL in order to have a common language to be used by the customer, project owner, business analyst, and developers. The goal is to have a ubiquitous language. What is BDD? As we said before, BDD is an agile technique based on TDD and ATDD, promoting collaboration across the entire team of a project. The goal of BDD is that the entire team understands what the customer wants, and the customer knows what the rest of the team understood from their specifications. Most of the time, when a project starts, the developers don't have the same point of view as the customer, and during the development process the customer realizes that, maybe, they did not explain it well or the developers did not understand it properly, which adds more time for changing the code to meet the customer's needs. So, BDD is writing test cases in human language, using rules, or in a ubiquitous language, so the customer and developers can understand it. It also defines a DSL for the tests. How does it work? It is necessary to define the features as user stories (we will explain what this is in the ATDD section of this article) and their acceptance criteria. Once the user story is defined, we have to focus on the possible scenarios, which describe the project behavior for a concrete user or situation using the DSL. The steps are: Given [context], When [event occurs], Then [Outcome]. To sum up, the scenario defined for a user story gives the acceptance criteria to check whether the feature is done.
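As a small illustration of that Given/When/Then shape, a scenario might read as follows; the feature and wording are hypothetical, not taken from the book:

    Feature: Task list
      Scenario: Adding a task to an empty list
        Given a logged-in user with an empty task list
        When the user creates a task named "Write report"
        Then the task list contains exactly one task named "Write report"

Tools such as Behat can bind each of these plain-language steps to PHP methods, turning the scenario itself into an executable test.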
Acceptance Test-Driven Development Perhaps the most important methodology in a project is Acceptance Test-Driven Development (ATDD), or Story Test-Driven Development (STDD); it is TDD but on a different level. The acceptance (or customer) tests are the written criteria for a project meeting the business requirements that the customer demands. They are examples (like the examples in TDD) written by the project owner. It is the start of development for each iteration, the bridge between Scrum and agile development. In ATDD, we start the implementation of our project in a way different from the traditional methodologies. The business requirements written in human language are replaced by executables agreed upon by some team members and also the customer. It is not about replacing the whole documentation, but only a part of the requirements. The advantages of using ATDD are the following: Real examples and a common language for the entire team to understand the domain It allows identifying the domain rules properly It is possible to know if a user story is finished in each iteration The workflow works from the first steps The development does not start until the tests are defined and accepted by the team ATDD algorithm The algorithm of ATDD is like that of TDD but reaches more people than only the developers; in other words, doing ATDD, the tests of each story are written in a meeting that includes the project owners, developers, and QA technicians, because the entire team must understand what needs to be done and why, so they can see if it is what the code should do. The ATDD cycle consists of four stages: discuss, distill, develop, and demo. Discuss The starting point of the ATDD algorithm is the discussion. In this first step, the business holds a meeting with the customer to clarify how the application should work, and the analyst should create the user stories from that conversation. Also, they should be able to explain the conditions of satisfaction of every user story so that they can be translated into examples. By the end of the meeting, the examples should be clear and concise, so we can get a list of examples of user stories covering all the needs of the customer, reviewed and understood by the customer. Also, the entire team will have a project overview in order to understand the business value of the user story, and in case the user story is too big, it can be divided into smaller user stories, taking the first one for the first iteration of this process. Distill High-level acceptance tests are written by the customer and the development team. In this step, the writing of the test cases that we got from the examples in the discussion step begins, and the entire team can take part in the discussion and help clarify the information or specify the real needs behind it. The tests should cover all the examples that were discovered in the discussion step, and extra tests can be added during this process, bit by bit, until we understand the functionality better. At the end of this step, we will obtain the necessary tests, written in human language, so the entire team (including the customer) can understand what they are going to do in the next step. These tests can be used as documentation. Develop In this step, the development of the acceptance test cases is begun by the development team and the project owner. The methodology to follow in this step is the same as in TDD: the developers should create a test and watch it fail (Red), and then develop the minimum amount of lines to pass it (Green). Once the acceptance tests are green, the code should be verified and tested to be ready for delivery. During this process, the developers may find new scenarios that need to be added to the tests; if something needs a large amount of work, it can even be pushed to another user story. At the end of this step, we will have software that passes the acceptance tests and maybe more comprehensive tests. 
Demo The created functionality is shown by running the acceptance test cases and manually exploring the features of the new functionality. After the demonstration, the team discusses whether the user story was done properly and meets the product owner's needs, and decides whether it can continue with the next story. Tools After knowing more about TDD and BDD, it is time to explain a few tools you can use in your development workflow. There are a lot of tools available, but we will only explain the most used ones. Composer Composer is a PHP tool used to manage software dependencies. You only need to declare the libraries needed by your project and Composer will manage them, installing and updating them when necessary. This tool has only a few requirements: if you have PHP 5.3.2+, you are ready to go. In the case of a missing requirement, Composer will warn you. You could install this dependency manager on your development machine, but since we are using Docker, we are going to install it directly on our PHP-FPM containers. The installation of Composer in Docker is very easy; you only need to add the following rule to the Dockerfile: RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/bin --filename=composer PHPUnit Another tool we need for our project is PHPUnit, a unit test framework. As before, we will be adding this tool to our PHP-FPM containers to keep our development machine clean. If you are wondering why we are not installing anything on our development machine except for Docker, the response is clear. Having everything in the containers will help you avoid any conflict with other projects and gives you the flexibility of changing versions without being too worried. Add the following RUN command to your PHP-FPM Dockerfile, and you will have the latest PHPUnit version installed and ready to use: RUN curl -sSL https://phar.phpunit.de/phpunit.phar -o /usr/bin/phpunit && chmod +x /usr/bin/phpunit Now that we have all our requirements, it is time to install our PHP framework and start doing some TDD stuff. Later, we will continue updating our Docker environment with new tools. We chose Lumen for our example. Please feel free to adapt all the examples to your favorite framework. Our source code will be living inside our containers, but at this point of development, we do not want immutable containers. We want every change we make to our code to be available instantaneously in our containers, so we will be using a container as a storage volume. To create a container with our source and use it as a storage volume, we only need to edit our docker-compose.yml and create one source container for each microservice, as follows:

source_battle:
  image: nginx:stable
  volumes:
    - ../source/battle:/var/www/html
  command: "true"

The preceding piece of code creates a container named source_battle, and it stores our battle source (located at ../source/battle relative to the docker-compose.yml path). Once we have our source container available, we can edit each one of our services and assign a volume. For instance, we can add the following lines to our microservice_battle_fpm and microservice_battle_nginx container descriptions:

volumes_from:
  - source_battle

Our battle source will be available in our source container in the path /var/www/html, and the remaining step to install Lumen is to do a simple composer execution. 
First, you need to be sure that your infrastructure is up with a simple command, as follows: $ docker-compose up The preceding command spins up our containers and outputs the log to the standard IO. Now that we are sure that everything is up and running, we need to enter our PHP-FPM containers and install Lumen. If you need to know the names assigned to each one of your containers, you can do a $ docker ps and copy the container name. As an example, we are going to enter the battle PHP-FPM container with the following command: $ docker exec -it docker_microservice_battle_fpm_1 /bin/bash The preceding command opens an interactive shell in your container, so you can do anything you want; let's install Lumen with a single command: # cd /var/www/html && composer create-project --prefer-dist laravel/lumen . Repeat the preceding commands for each one of your microservices. Now, you have everything ready to start writing unit tests and coding your application. Summary In this article, you learned about test-driven development, behavior-driven development, acceptance test-driven development, and PHPUnit. Resources for Article: Further resources on this subject: Running Simpletest and PHPUnit [Article] Understanding PHP basics [Article] The Multi-Table Query Generator using phpMyAdmin and MySQL [Article]

How to Create a Flappy Bird Clone with MelonJS
Ellison Leao
26 Sep 2014
18 min read
How to create a Flappy Bird clone using MelonJS Web game frameworks such as MelonJS are becoming more popular every day. In this post I will show you how easy it is to create a Flappy Bird clone game using the MelonJS bag of tricks. I will assume that you have some experience with JavaScript and that you have visited the melonJS official page. All of the code shown in this post is available on this GitHub repository. Step 1 - Organization A MelonJS game can be divided into three basic objects: Scene objects: Define all of the game scenes (Play, Menus, Game Over, High Score, and so on) Game entities: Add all of the stuff that interacts in the game (Players, enemies, collectables, and so on) HUD entities: All of the HUD objects to be inserted into the scenes (Life, Score, Pause buttons, and so on) For our Flappy Bird game, first create a directory, flappybirdgame, on your machine. Then create the following structure:

flappybirdgame
|-- js
|   |-- entities
|   |-- screens
|   |-- game.js
|   |-- resources.js
|-- data
|   |-- img
|   |-- bgm
|   |-- sfx
|-- lib
|-- index.html

Just a quick explanation about the folders: The js folder contains all of the game source code. The entities folder will handle the HUD and the game entities. In the screens folder, we will create all of the scene files. The game.js is the main game file. It will initialize all of the game resources (which are created in the resources.js file), the input, and the loading of the first scene. The data folder is where all of the assets, sounds, and game themes are inserted. I divided the folders into img for images (backgrounds, player atlas, menus, and so on), bgm for background music files (we need to provide a .ogg and .mp3 file for each sound if we want full compatibility with all browsers), and sfx for sound effects. In the lib folder we will add the current 1.0.2 version of MelonJS. Lastly, an index.html file is used to build the canvas. Step 2 - Implementation First we will build the game.js file: var game = { data: { score : 0, steps: 0, start: false, newHiScore: false, muted: false }, "onload": function() { if (!me.video.init("screen", 900, 600, true, 'auto')) { alert("Your browser does not support HTML5 canvas."); return; } me.audio.init("mp3,ogg"); me.loader.onload = this.loaded.bind(this); me.loader.preload(game.resources); me.state.change(me.state.LOADING); }, "loaded": function() { me.state.set(me.state.MENU, new game.TitleScreen()); me.state.set(me.state.PLAY, new game.PlayScreen()); me.state.set(me.state.GAME_OVER, new game.GameOverScreen()); me.input.bindKey(me.input.KEY.SPACE, "fly", true); me.input.bindKey(me.input.KEY.M, "mute", true); me.input.bindPointer(me.input.KEY.SPACE); me.pool.register("clumsy", BirdEntity); me.pool.register("pipe", PipeEntity, true); me.pool.register("hit", HitEntity, true); // in melonJS 1.0.0, viewport size is set to Infinity by default me.game.viewport.setBounds(0, 0, 900, 600); me.state.change(me.state.MENU); } }; The game.js is divided into: data object: This global object will handle all of the global variables that will be used in the game. For our game we will use score to record the player score, and steps to record how far the bird goes. The other variables are flags that we are using to control some game states. onload method: This method preloads the resources and initializes the canvas screen, and then calls the loaded method when it's done. loaded method: This method first creates and puts into the state stack the screens that we will use in the game. 
We will write the implementation for these screens later on. It binds all of the input keys used to handle the game. For our game we will be using the space and left mouse keys to control the bird and the M key to mute sound. It also adds the game entities BirdEntity, PipeEntity, and HitEntity to the game pool. I will explain the entities later. Then you need to create the resources.js file: game.resources = [ {name: "bg", type:"image", src: "data/img/bg.png"}, {name: "clumsy", type:"image", src: "data/img/clumsy.png"}, {name: "pipe", type:"image", src: "data/img/pipe.png"}, {name: "logo", type:"image", src: "data/img/logo.png"}, {name: "ground", type:"image", src: "data/img/ground.png"}, {name: "gameover", type:"image", src: "data/img/gameover.png"}, {name: "gameoverbg", type:"image", src: "data/img/gameoverbg.png"}, {name: "hit", type:"image", src: "data/img/hit.png"}, {name: "getready", type:"image", src: "data/img/getready.png"}, {name: "new", type:"image", src: "data/img/new.png"}, {name: "share", type:"image", src: "data/img/share.png"}, {name: "tweet", type:"image", src: "data/img/tweet.png"}, {name: "leader", type:"image", src: "data/img/leader.png"}, {name: "theme", type: "audio", src: "data/bgm/"}, {name: "hit", type: "audio", src: "data/sfx/"}, {name: "lose", type: "audio", src: "data/sfx/"}, {name: "wing", type: "audio", src: "data/sfx/"}, ]; Now let's create the game entities. First, the HUD elements: create a HUD.js file in the entities folder. In this file you will create: A score entity A background layer entity The share button entities (Facebook, Twitter, and so on) game.HUD = game.HUD || {}; game.HUD.Container = me.ObjectContainer.extend({ init: function() { // call the constructor this.parent(); // persistent across level change this.isPersistent = true; // non collidable this.collidable = false; // make sure our object is always draw first this.z = Infinity; // give a name this.name = "HUD"; // add our child score object at the top left corner this.addChild(new game.HUD.ScoreItem(5, 5)); } }); game.HUD.ScoreItem = me.Renderable.extend({ init: function(x, y) { // call the parent constructor // (size does not matter here) this.parent(new me.Vector2d(x, y), 10, 10); // local copy of the global score this.stepsFont = new me.Font('gamefont', 80, '#000', 'center'); // make sure we use screen coordinates this.floating = true; }, update: function() { return true; }, draw: function (context) { if (game.data.start && me.state.isCurrent(me.state.PLAY)) this.stepsFont.draw(context, game.data.steps, me.video.getWidth()/2, 10); } }); var BackgroundLayer = me.ImageLayer.extend({ init: function(image, z, speed) { name = image; width = 900; height = 600; ratio = 1; // call parent constructor this.parent(name, width, height, image, z, ratio); }, update: function() { if (me.input.isKeyPressed('mute')) { game.data.muted = !game.data.muted; if (game.data.muted){ me.audio.disable(); }else{ me.audio.enable(); } } return true; } }); var Share = me.GUI_Object.extend({ init: function(x, y) { var settings = {}; settings.image = "share"; settings.spritewidth = 150; settings.spriteheight = 75; this.parent(x, y, settings); }, onClick: function(event) { var shareText = 'Just made ' + game.data.steps + ' steps on Clumsy Bird! Can you beat me? 
Try online here!'; var url = 'http://ellisonleao.github.io/clumsy-bird/'; FB.ui( { method: 'feed', name: 'My Clumsy Bird Score!', caption: "Share to your friends", description: ( shareText ), link: url, picture: 'http://ellisonleao.github.io/clumsy-bird/data/img/clumsy.png' } ); return false; } }); var Tweet = me.GUI_Object.extend({ init: function(x, y) { var settings = {}; settings.image = "tweet"; settings.spritewidth = 152; settings.spriteheight = 75; this.parent(x, y, settings); }, onClick: function(event) { var shareText = 'Just made ' + game.data.steps + ' steps on Clumsy Bird! Can you beat me? Try online here!'; var url = 'http://ellisonleao.github.io/clumsy-bird/'; var hashtags = 'clumsybird,melonjs'; window.open('https://twitter.com/intent/tweet?text=' + shareText + '&hashtags=' + hashtags + '&count=' + url + '&url=' + url, 'Tweet!', 'height=300,width=400'); return false; } }); You should notice that there are different me classes for different types of entities. The ScoreItem is a Renderable object that is created under an ObjectContainer, and it will render the game steps on the play screen that we will create later. The Share and Tweet buttons are created with the GUI_Object class. This class implements the onClick event that handles click events used to create the share actions. The BackgroundLayer is a particular object created using the ImageLayer class. This class controls some generic image layers that can be used in the game. In our particular case, we are just using a single fixed image, with fixed ratio and no scrolling. Now to the game entities. For this game we will need: BirdEntity: The bird and its behavior PipeEntity: The pipe object HitEntity: An invisible entity just to handle the step counting PipeGenerator: Will handle the PipeEntity creation Ground: An entity for the ground TheGround: The animated ground container Add an entities.js file into the entities folder: var BirdEntity = me.ObjectEntity.extend({ init: function(x, y) { var settings = {}; settings.image = me.loader.getImage('clumsy'); settings.width = 85; settings.height = 60; settings.spritewidth = 85; settings.spriteheight = 60; this.parent(x, y, settings); this.alwaysUpdate = true; this.gravity = 0.2; this.gravityForce = 0.01; this.maxAngleRotation = Number.prototype.degToRad(30); this.maxAngleRotationDown = Number.prototype.degToRad(90); this.renderable.addAnimation("flying", [0, 1, 2]); this.renderable.addAnimation("idle", [0]); this.renderable.setCurrentAnimation("flying"); this.animationController = 0; // manually add a rectangular collision shape this.addShape(new me.Rect(new me.Vector2d(5, 5), 70, 50)); // a tween object for the flying physic effect this.flyTween = new me.Tween(this.pos); this.flyTween.easing(me.Tween.Easing.Exponential.InOut); }, update: function(dt) { // mechanics if (game.data.start) { if (me.input.isKeyPressed('fly')) { me.audio.play('wing'); this.gravityForce = 0.01; var currentPos = this.pos.y; // stop the previous one this.flyTween.stop(); this.flyTween.to({y: currentPos - 72}, 100); this.flyTween.start(); this.renderable.angle = -this.maxAngleRotation; } else { this.gravityForce += 0.2; this.pos.y += me.timer.tick * this.gravityForce; this.renderable.angle += Number.prototype.degToRad(3) * me.timer.tick; if (this.renderable.angle > this.maxAngleRotationDown) this.renderable.angle = this.maxAngleRotationDown; } } var res = me.game.world.collide(this); if (res) { if (res.obj.type != 'hit') { me.device.vibrate(500); me.state.change(me.state.GAME_OVER); return false; } // remove the 
hit box me.game.world.removeChildNow(res.obj); // the give dt parameter to the update function // give the time in ms since last frame // use it instead ? game.data.steps++; me.audio.play('hit'); } else { var hitGround = me.game.viewport.height - (96 + 60); var hitSky = -80; // bird height + 20px if (this.pos.y >= hitGround || this.pos.y <= hitSky) { me.state.change(me.state.GAME_OVER); return false; } } return this.parent(dt); }, }); var PipeEntity = me.ObjectEntity.extend({ init: function(x, y) { var settings = {}; settings.image = me.loader.getImage('pipe'); settings.width = 148; settings.height= 1664; settings.spritewidth = 148; settings.spriteheight= 1664; this.parent(x, y, settings); this.alwaysUpdate = true; this.gravity = 5; this.updateTime = false; }, update: function(dt) { // mechanics this.pos.add(new me.Vector2d(-this.gravity * me.timer.tick, 0)); if (this.pos.x < -148) { me.game.world.removeChild(this); } return true; }, }); var PipeGenerator = me.Renderable.extend({ init: function() { this.parent(new me.Vector2d(), me.game.viewport.width, me.game.viewport.height); this.alwaysUpdate = true; this.generate = 0; this.pipeFrequency = 92; this.pipeHoleSize = 1240; this.posX = me.game.viewport.width; }, update: function(dt) { if (this.generate++ % this.pipeFrequency == 0) { var posY = Number.prototype.random( me.video.getHeight() - 100, 200 ); var posY2 = posY - me.video.getHeight() - this.pipeHoleSize; var pipe1 = new me.pool.pull("pipe", this.posX, posY); var pipe2 = new me.pool.pull("pipe", this.posX, posY2); var hitPos = posY - 100; var hit = new me.pool.pull("hit", this.posX, hitPos); pipe1.renderable.flipY(); me.game.world.addChild(pipe1, 10); me.game.world.addChild(pipe2, 10); me.game.world.addChild(hit, 11); } return true; }, }); var HitEntity = me.ObjectEntity.extend({ init: function(x, y) { var settings = {}; settings.image = me.loader.getImage('hit'); settings.width = 148; settings.height= 60; settings.spritewidth = 148; settings.spriteheight= 60; this.parent(x, y, settings); this.alwaysUpdate = true; this.gravity = 5; this.updateTime = false; this.type = 'hit'; this.renderable.alpha = 0; this.ac = new me.Vector2d(-this.gravity, 0); }, update: function() { // mechanics this.pos.add(this.ac); if (this.pos.x < -148) { me.game.world.removeChild(this); } return true; }, }); var Ground = me.ObjectEntity.extend({ init: function(x, y) { var settings = {}; settings.image = me.loader.getImage('ground'); settings.width = 900; settings.height= 96; this.parent(x, y, settings); this.alwaysUpdate = true; this.gravity = 0; this.updateTime = false; this.accel = new me.Vector2d(-4, 0); }, update: function() { // mechanics this.pos.add(this.accel); if (this.pos.x < -this.renderable.width) { this.pos.x = me.video.getWidth() - 10; } return true; }, }); var TheGround = Object.extend({ init: function() { this.ground1 = new Ground(0, me.video.getHeight() - 96); this.ground2 = new Ground(me.video.getWidth(), me.video.getHeight() - 96); me.game.world.addChild(this.ground1, 11); me.game.world.addChild(this.ground2, 11); }, update: function () { return true; } }) Note that every game entity inherits from the me.ObjectEntity class. We need to pass the settings of the entity on the init method, telling it which image we will use from the resources along with the image measure. We also implement the update method for each Entity, telling it how it will behave during game time. Now we need to create our scenes. 
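A quick aside on the PipeGenerator numbers, since they look arbitrary at first glance. Assuming the 600 px viewport height used by BackgroundLayer: posY is picked between 200 and getHeight() - 100 = 500, and posY2 = posY - 600 - 1240 = posY - 1840. The pipe sprite is 1664 px tall, so the top pipe ends at posY - 1840 + 1664 = posY - 176, while the bottom (flipped) pipe starts at posY. In other words, these constants always leave a 176 px gap for the bird, wherever posY lands.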
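The game over screen we will build shortly also adds a Leader button next to Share and Tweet, but its class is not included in this excerpt. Here is a minimal sketch of what it could look like, assuming it mirrors the Share and Tweet buttons; the image name comes from resources.js, while the sprite size and the call to the clay.io plugin (used later in gameover.js) are assumptions:

var Leader = me.GUI_Object.extend({
    init: function(x, y) {
        var settings = {};
        settings.image = "leader";     // the 'leader' image from resources.js
        settings.spritewidth = 150;    // assumed size, matching the other buttons
        settings.spriteheight = 75;
        this.parent(x, y, settings);
    },
    onClick: function(event) {
        // open the clay.io leaderboard, as gameover.js does on a new score
        me.plugin.clay.leaderboard('clumsy');
        return false;
    }
});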
Now we need to create our scenes. The game is divided into:

TitleScreen
PlayScreen
GameOverScreen

We will separate the scenes into js files. First, create a title.js file in the screens folder:

game.TitleScreen = me.ScreenObject.extend({
    init: function() {
        this.font = null;
    },
    onResetEvent: function() {
        me.audio.stop("theme");
        game.data.newHiScore = false;
        me.game.world.addChild(new BackgroundLayer('bg', 1));
        me.input.bindKey(me.input.KEY.ENTER, "enter", true);
        me.input.bindKey(me.input.KEY.SPACE, "enter", true);
        me.input.bindPointer(me.input.mouse.LEFT, me.input.KEY.ENTER);
        this.handler = me.event.subscribe(me.event.KEYDOWN,
            function (action, keyCode, edge) {
                if (action === "enter") {
                    me.state.change(me.state.PLAY);
                }
            });

        // logo
        var logoImg = me.loader.getImage('logo');
        var logo = new me.SpriteObject(
            me.game.viewport.width/2 - 170,
            -logoImg.height, // start above the screen; the tween brings it in
            logoImg
        );
        me.game.world.addChild(logo, 10);
        var logoTween = new me.Tween(logo.pos)
            .to({y: me.game.viewport.height/2 - 100}, 1000)
            .easing(me.Tween.Easing.Exponential.InOut)
            .start();

        this.ground = new TheGround();
        me.game.world.addChild(this.ground, 11);

        me.game.world.addChild(new (me.Renderable.extend({
            // constructor
            init: function() {
                // size does not matter; it's just to avoid having a
                // zero-sized renderable
                this.parent(new me.Vector2d(), 100, 100);
                this.text = me.device.touch ? 'Tap to start' :
                    'PRESS SPACE OR CLICK LEFT MOUSE BUTTON TO START' +
                    '\n\t\t\t\t\t\t\t\t\t\t\tPRESS "M" TO MUTE SOUND';
                this.font = new me.Font('gamefont', 20, '#000');
            },
            update: function () {
                return true;
            },
            draw: function (context) {
                var measure = this.font.measureText(context, this.text);
                this.font.draw(context, this.text,
                    me.game.viewport.width/2 - measure.width/2,
                    me.game.viewport.height/2 + 50);
            }
        })), 12);
    },
    onDestroyEvent: function() {
        // unregister the event
        me.event.unsubscribe(this.handler);
        me.input.unbindKey(me.input.KEY.ENTER);
        me.input.unbindKey(me.input.KEY.SPACE);
        me.input.unbindPointer(me.input.mouse.LEFT);
        me.game.world.removeChild(this.ground);
    }
});

Then, create a play.js file in the same folder:

game.PlayScreen = me.ScreenObject.extend({
    init: function() {
        me.audio.play("theme", true);
        // lower the audio volume on the Firefox browser
        var vol = me.device.ua.contains("Firefox") ? 0.3 : 0.5;
        me.audio.setVolume(vol);
        this.parent(this);
    },
    onResetEvent: function() {
        me.audio.stop("theme");
        if (!game.data.muted) {
            me.audio.play("theme", true);
        }
        me.input.bindKey(me.input.KEY.SPACE, "fly", true);
        game.data.score = 0;
        game.data.steps = 0;
        game.data.start = false;
        game.data.newHiScore = false;

        me.game.world.addChild(new BackgroundLayer('bg', 1));

        this.ground = new TheGround();
        me.game.world.addChild(this.ground, 11);

        this.HUD = new game.HUD.Container();
        me.game.world.addChild(this.HUD);

        this.bird = me.pool.pull("clumsy", 60, me.game.viewport.height/2 - 100);
        me.game.world.addChild(this.bird, 10);

        // inputs
        me.input.bindPointer(me.input.mouse.LEFT, me.input.KEY.SPACE);

        this.getReady = new me.SpriteObject(
            me.video.getWidth()/2 - 200,
            me.video.getHeight()/2 - 100,
            me.loader.getImage('getready')
        );
        me.game.world.addChild(this.getReady, 11);

        var fadeOut = new me.Tween(this.getReady).to({alpha: 0}, 2000)
            .easing(me.Tween.Easing.Linear.None)
            .onComplete(function() {
                game.data.start = true;
                me.game.world.addChild(new PipeGenerator(), 0);
            }).start();
    },
    onDestroyEvent: function() {
        me.audio.stopTrack('theme');
        // free the stored instances
        this.HUD = null;
        this.bird = null;
        me.input.unbindKey(me.input.KEY.SPACE);
        me.input.unbindPointer(me.input.mouse.LEFT);
    }
});

Finally, the gameover.js screen:

game.GameOverScreen = me.ScreenObject.extend({
    init: function() {
        this.savedData = null;
        this.handler = null;
    },
    onResetEvent: function() {
        me.audio.play("lose");

        // save section
        this.savedData = {
            score: game.data.score,
            steps: game.data.steps
        };
        me.save.add(this.savedData);

        // clay.io leaderboard
        if (game.data.score > 0) {
            me.plugin.clay.leaderboard('clumsy');
        }
        if (!me.save.topSteps) me.save.add({topSteps: game.data.steps});
        if (game.data.steps > me.save.topSteps) {
            me.save.topSteps = game.data.steps;
            game.data.newHiScore = true;
        }

        me.input.bindKey(me.input.KEY.ENTER, "enter", true);
        me.input.bindKey(me.input.KEY.SPACE, "enter", false);
        me.input.bindPointer(me.input.mouse.LEFT, me.input.KEY.ENTER);
        this.handler = me.event.subscribe(me.event.KEYDOWN,
            function (action, keyCode, edge) {
                if (action === "enter") {
                    me.state.change(me.state.MENU);
                }
            });

        var gImage = me.loader.getImage('gameover');
        me.game.world.addChild(new me.SpriteObject(
            me.video.getWidth()/2 - gImage.width/2,
            me.video.getHeight()/2 - gImage.height/2 - 100,
            gImage
        ), 12);

        var gImageBoard = me.loader.getImage('gameoverbg');
        me.game.world.addChild(new me.SpriteObject(
            me.video.getWidth()/2 - gImageBoard.width/2,
            me.video.getHeight()/2 - gImageBoard.height/2,
            gImageBoard
        ), 10);

        me.game.world.addChild(new BackgroundLayer('bg', 1));

        this.ground = new TheGround();
        me.game.world.addChild(this.ground, 11);

        // share button
        var buttonsHeight = me.video.getHeight() / 2 + 200;
        this.share = new Share(me.video.getWidth()/3 - 100, buttonsHeight);
        me.game.world.addChild(this.share, 12);

        // tweet button
        this.tweet = new Tweet(this.share.pos.x + 170, buttonsHeight);
        me.game.world.addChild(this.tweet, 12);

        // leaderboard button (sketched at the end of the entities section)
        this.leader = new Leader(this.tweet.pos.x + 170, buttonsHeight);
        me.game.world.addChild(this.leader, 12);

        // add the dialog with the game information
        if (game.data.newHiScore) {
            var newRect = new me.SpriteObject(
                235, 355,
                me.loader.getImage('new')
            );
            me.game.world.addChild(newRect, 12);
        }

        this.dialog = new (me.Renderable.extend({
            // constructor
            init: function() {
                // size does not matter; it's just to avoid having a
                // zero-sized renderable
                this.parent(new me.Vector2d(), 100, 100);
                this.font = new me.Font('gamefont', 40, 'black', 'left');
                this.steps = 'Steps: ' + game.data.steps.toString();
                this.topSteps = 'Higher Step: ' + me.save.topSteps.toString();
            },
            update: function () {
                return true;
            },
            draw: function (context) {
                var stepsText = this.font.measureText(context, this.steps);
                var topStepsText = this.font.measureText(context, this.topSteps);
                // steps
                this.font.draw(
                    context, this.steps,
                    me.game.viewport.width/2 - stepsText.width/2 - 60,
                    me.game.viewport.height/2
                );
                // top score
                this.font.draw(
                    context, this.topSteps,
                    me.game.viewport.width/2 - stepsText.width/2 - 60,
                    me.game.viewport.height/2 + 50
                );
            }
        }));
        me.game.world.addChild(this.dialog, 12);
    },
    onDestroyEvent: function() {
        // unregister the event
        me.event.unsubscribe(this.handler);
        me.input.unbindKey(me.input.KEY.ENTER);
        me.input.unbindKey(me.input.KEY.SPACE);
        me.input.unbindPointer(me.input.mouse.LEFT);
        me.game.world.removeChild(this.ground);
        this.font = null;
        me.audio.stop("theme");
    }
});

Here is how a ScreenObject works:

First, it calls the init constructor method for any variable initialization.
onResetEvent is called next, every time the scene is activated. In our case, onResetEvent adds the scene's objects to the game world stack.
onDestroyEvent acts like a garbage collector: it unregisters bound events and removes elements that no longer need to be drawn.

Now, let's put it all together in the index.html file:

<!DOCTYPE HTML>
<html lang="en">
<head>
    <title>Clumsy Bird</title>
</head>
<body>
    <!-- the Facebook init for the share button -->
    <div id="fb-root"></div>
    <script>
        window.fbAsyncInit = function() {
            FB.init({
                appId : '213642148840283',
                status : true,
                xfbml : true
            });
        };
        (function(d, s, id){
            var js, fjs = d.getElementsByTagName(s)[0];
            if (d.getElementById(id)) {return;}
            js = d.createElement(s); js.id = id;
            js.src = "//connect.facebook.net/pt_BR/all.js";
            fjs.parentNode.insertBefore(js, fjs);
        }(document, 'script', 'facebook-jssdk'));
    </script>

    <!-- Canvas placeholder -->
    <div id="screen"></div>

    <!-- melonJS Library -->
    <script type="text/javascript" src="lib/melonJS-1.0.2.js" ></script>
    <script type="text/javascript" src="js/entities/HUD.js" ></script>
    <script type="text/javascript" src="js/entities/entities.js" ></script>
    <script type="text/javascript" src="js/screens/title.js" ></script>
    <script type="text/javascript" src="js/screens/play.js" ></script>
    <script type="text/javascript" src="js/screens/gameover.js" ></script>
</body>
</html>

Note that this listing only loads the melonJS library, the entity files, and the screens; the main game script and the resources.js file created in the earlier steps must also be included for the game to load.

Step 3 - Flying!

To run our game we will need a web server of your choice. If you have Python installed, you can simply type the following in your shell:

$ python -m SimpleHTTPServer

On Python 3, the equivalent command is python3 -m http.server.

Then you can open your browser at http://localhost:8000. If all went well, you will see the title screen once the game loads, like in the following image:

I hope you enjoyed this post!

About this author

Ellison Leão (@ellisonleao) is a passionate software engineer with more than 6 years of experience in web projects and a contributor to the MelonJS framework and other open source projects. When he is not writing games, he loves to play drums.
Supporting hypervisors by OpenNebula
Packt
25 May 2012
7 min read
(For more resources on Open Source, see here.)

A host is a server that has the ability to run virtual machines, using a special software component called a hypervisor that is managed by the OpenNebula frontend. The hosts do not all need to have a homogeneous configuration: it is possible to mix different hypervisors on different GNU/Linux distributions in a single OpenNebula cluster.

Using different hypervisors in your infrastructure is not just a technical exercise; it assures you greater flexibility and reliability. A few examples where having multiple hypervisors would prove to be beneficial are as follows:

A bug in the current release of hypervisor A does not permit the installation of a virtual machine with a particular legacy OS (let's say, for example, Windows 2000 Service Pack 4), but you can execute it with hypervisor B without any problem.
You have a production infrastructure running a closed source, free-to-use hypervisor, and during the next year the software house developing that hypervisor will request a license payment or declare bankruptcy due to an economic crisis.

The current version of OpenNebula gives you great flexibility regarding hypervisor usage, since it natively supports KVM/Xen (which are open source) and VMware ESXi. In the future it will probably support both VirtualBox (Oracle) and Hyper-V (Microsoft).

Configuring hosts

The first thing to do before starting with the installation of a particular hypervisor on a host is to perform some general configuration steps. They are as follows:

Create a dedicated oneadmin UNIX account (which should have sudo privileges for executing particular tasks, for example, running iptables/ebtables and the network hooks that we have configured).
The frontend and the hosts' hostnames should be resolved by a local DNS or a shared /etc/hosts file.
The oneadmin user on the frontend should be able to connect remotely through SSH to the oneadmin user on the hosts without a password.
Configure the shared network bridge that will be used by the VMs to reach the physical network.

The oneadmin account and passwordless login

Every host should have a oneadmin UNIX account that will be used by the OpenNebula frontend to connect and execute commands. If you did not create it during the operating system installation, create a oneadmin user on the host with the following command:

youruser@host1 $ sudo adduser oneadmin

You can configure any password you like (even blank) because we are going to set up a passwordless login from the frontend:

oneadmin@front-end $ ssh-copy-id oneadmin@host1

Now if you connect from the oneadmin account on the frontend to the oneadmin account on the host, you should get the shell prompt without entering any password:

oneadmin@front-end $ ssh oneadmin@host1

Uniformity of the oneadmin UID number

Later, we will learn about the possible storage solutions available with OpenNebula. However, keep in mind that if we are going to set up a shared storage, we need to make sure that the UID number of the oneadmin user is the same on the frontend and on every host. In other words, check with the id command that the oneadmin UID matches between the frontend and the hosts.

Verifying the SSH host fingerprints

The first time you connect to a remote SSH server from a particular host, the SSH client will show you the fingerprint of the remote server and ask for your permission to continue, with the following message:

The authenticity of host host01 (192.168.254.2) can't be established.
RSA key fingerprint is 5a:65:0f:6f:21:bb:fd:6a:4a:68:cd:72:58:5c:fb:9f.
Are you sure you want to continue connecting (yes/no)?

Knowing the fingerprint of the remote SSH key and saving it to the local SSH client fingerprint cache (saved in ~/.ssh/known_hosts) should be good enough to prevent man-in-the-middle attacks. For this reason, you need to connect once from the oneadmin user on the frontend to every host, in order to save the fingerprints of the remote hosts in oneadmin's known_hosts file for the first time. Not doing this will prevent OpenNebula from connecting to the remote hosts.

In large environments, this requirement may be a slow-down when configuring new hosts. However, it is possible to bypass this check by instructing the SSH client used by OpenNebula not to verify the remote SSH keys, adding the following to ~/.ssh/config:

Host *
    StrictHostKeyChecking no

If you do not have a local DNS (or you cannot/do not want to set one up), you can manually manage the /etc/hosts file on every host, using the following IP addresses:

127.0.0.1 localhost
192.168.66.90 on-front
192.168.66.97 kvm01
192.168.66.98 xen01
192.168.66.99 esx01

Now you should be able to connect remotely from one node to another using its hostname, with the following command:

$ ssh oneadmin@kvm01

Configuring a simple DNS with dnsmasq

If you do not have a local DNS and manually managing a plain hosts file on every host does not excite you, you can try to install and configure dnsmasq. It is a lightweight, easy-to-configure DNS forwarder (optionally providing DHCP and TFTP as well) that serves a small-scale network well. The OpenNebula frontend may be a good place to install it. For an Ubuntu/Debian installation, use the following command:

$ sudo apt-get install dnsmasq

The default configuration should be fine. You just need to make sure that /etc/resolv.conf looks similar to the following:

# dnsmasq
nameserver 127.0.0.1
# another local DNS
nameserver 192.168.0.1
# ISP or public DNS
nameserver 208.67.220.220
nameserver 208.67.222.222

The /etc/hosts configuration will look similar to the following:

127.0.0.1 localhost
192.168.66.90 on-front
192.168.66.97 kvm01
192.168.66.98 xen01
192.168.66.99 esx01

Configure any other hostname here, in the hosts file on the frontend running dnsmasq. On the other hosts, configure /etc/resolv.conf as follows:

# ip where dnsmasq is installed
nameserver 192.168.0.2

Now you should be able to connect remotely from one node to another using its plain hostname, with the following command:

$ ssh oneadmin@kvm01

When you add new hosts, simply add them to /etc/hosts on the frontend and they will automatically resolve on every other host, thanks to dnsmasq.

Configuring sudo

To give administrative privileges to the oneadmin account on the hosts, add it to the sudo or admin group, depending on your /etc/sudoers configuration:

# /etc/sudoers
Defaults env_reset
root ALL=(ALL) ALL
%sudo ALL=NOPASSWD: ALL

With this simple sudo configuration, every user in the sudo group can execute any command with root privileges, without being asked for the user password before each command. Now add the oneadmin user to the sudo group with the following command:

$ sudo adduser oneadmin sudo

Giving full administrative privileges to the oneadmin account might be considered inappropriate by most security-focused people. However, I can assure you that if you are taking your first steps with OpenNebula, having full administrative privileges could save some headaches. This is a suggested configuration, but it is not required to run OpenNebula.

Configuring network bridges

Every host should have its bridges configured with the same names. Check the following /etc/network/interfaces code as an example:

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
iface eth0 inet manual

auto lan0
iface lan0 inet static
    bridge_ports eth0
    bridge_stp off
    bridge_fd 0
    address 192.168.66.97
    netmask 255.255.255.0
    gateway 192.168.66.1
    dns-nameservers 192.168.66.1

You can have as many bridges as you need, bound or not bound to a physical network. By eliminating the bridge_ports parameter you get a pure virtual network for your VMs, but remember that without a physical network, VMs on different hosts cannot communicate with each other.
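Before moving on to the hypervisor installation, it is worth verifying the whole host setup from the frontend in one pass. A quick sanity check, assuming the kvm01 host and the lan0 bridge used in the examples above (the brctl tool comes with the bridge-utils package):

oneadmin@front-end $ ssh oneadmin@kvm01 'id -u oneadmin'  # should match the frontend UID
oneadmin@front-end $ ssh oneadmin@kvm01 'sudo -n true' && echo "passwordless sudo OK"
oneadmin@front-end $ ssh oneadmin@kvm01 'brctl show lan0' # eth0 should appear as a bridge port

If all three commands succeed without prompting for a password, the host is ready for the hypervisor-specific steps.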
Setting Up the Environment for ASP.NET MVC 6
Packt
02 Nov 2016
9 min read
In this article by Mugilan TS Raghupathi, author of the book Learning ASP.NET Core MVC Programming, we explain the setup for getting started with programming in ASP.NET MVC 6. In any development project, it is vital to set up the right kind of development environment so that you can concentrate on developing the solution rather than solving environment or configuration problems. With respect to .NET, Visual Studio is the de facto standard IDE (Integrated Development Environment) for building web applications in .NET. In this article, you'll be learning about the following topics:

Purpose of an IDE
Different offerings of Visual Studio
Installation of Visual Studio Community 2015
Creating your first ASP.NET MVC 6 project and its project structure

(For more resources related to this topic, see here.)

Purpose of an IDE

First of all, let us see why we need an IDE when you could type the code in Notepad, compile it, and execute it. When you develop a web application, you might need the following things to be productive:

Code editor: This is the text editor where you type your code. Your code editor should be able to recognize the constructs of your programming language, such as if conditions and for loops. In Visual Studio, all keywords are highlighted in blue.
Intellisense: Intellisense is a context-aware code-completion feature available in most modern IDEs, including Visual Studio. One example: when you type a dot after an object, Intellisense lists all the methods available on that object. This helps developers write code faster and more easily.
Build/Publish: It is helpful if you can build or publish the application with a single click or a single command. Visual Studio provides several options out of the box to build a separate project or the complete solution in a single click. This makes building and deploying your application easier.
Templates: Depending on the type of application, you might have to create different folders and files along with boilerplate code. It is therefore very helpful if your IDE supports the creation of different kinds of templates. Visual Studio generates templates with starter code for ASP.NET Web Forms, MVC, and Web API to get you up and running.
Ease of adding items: Your IDE should allow you to add different kinds of items with ease. For example, you should be able to add an XML file without any issues. And if there is any problem with the structure of your XML file, the IDE should highlight the issue, provide information about it, and help you fix it.

Visual Studio offerings

There are different versions of Visual Studio 2015 available to satisfy the various needs of developers and organizations. Primarily, there are four versions of Visual Studio 2015:

Visual Studio Community
Visual Studio Professional
Visual Studio Enterprise
Visual Studio Test Professional

System requirements

Visual Studio can be installed on computers running Windows 7 Service Pack 1 and above. You can see the complete list of requirements at the following URL:

https://www.visualstudio.com/en-us/downloads/visual-studio-2015-system-requirements-vs.aspx

Visual Studio Community 2015

This is a fully featured IDE available for building desktop applications, web applications, and cloud services. It is available free of cost for individual users.
You can download Visual Studio Community from the following URL: https://www.visualstudio.com/en-us/products/visual-studio-community-vs.aspx

Throughout this book, we will be using the Visual Studio Community version for development, as it is available free of cost to individual developers.

Visual Studio Professional

As the name implies, Visual Studio Professional is targeted at professional developers and contains features such as CodeLens for improving your team's productivity. It also has features for greater collaboration within the team.

Visual Studio Enterprise

Visual Studio Enterprise is the full-blown version of Visual Studio, with a complete set of features for collaboration, including Team Foundation Server, modeling, and testing.

Visual Studio Test Professional

Visual Studio Test Professional is primarily aimed at testing teams, or anyone involved in testing, which might include developers. In any software development methodology, whether the waterfall model or agile, developers need to execute the test cases for the code they are developing.

Installation of Visual Studio Community

Follow the given steps to install Visual Studio Community 2015:

Visit the following link to download Visual Studio Community 2015: https://www.visualstudio.com/en-us/products/visual-studio-community-vs.aspx
Click on the Download Community 2015 button. Save the file in a folder where you can retrieve it easily later.
Run the downloaded executable file.
Click on Run and the following screen will appear. There are two types of installation: default and custom. The default installation installs the most commonly used features, which will cover most developer use cases. A custom installation lets you choose the components you want to install.
Click on the Install button after selecting the installation type. Depending on your memory and processor speed, it will take 1 to 2 hours to install.
Once all the components are installed, you will see the following Setup completed screen.

Installation of ASP.NET 5

When we install the Visual Studio Community 2015 edition, ASP.NET 5 is not installed by default. As an ASP.NET MVC 6 application runs on top of ASP.NET 5, we need to install ASP.NET 5. There are a couple of ways to install ASP.NET 5:

Get ASP.NET 5 from https://get.asp.net/
Alternatively, install it from the New Project template in Visual Studio

The second option is a bit easier, as you don't need to search for the installer. The following are the detailed steps:

Create a new project by selecting File | New Project or using the shortcut Ctrl + Shift + N.
Select ASP.NET Web Application, enter the project name, and click on OK.
The following window will appear to select the template. Select the Get ASP.NET 5 RC option as shown in the following screenshot.
When you click on OK in the preceding screen, a download dialog will appear.
When you click on the Run or Save button in that dialog, you will get a screen asking for the ASP.NET 5 setup. Select the I agree to the license terms and conditions checkbox and click on the Install button.
Installation of ASP.NET 5 might take a couple of hours, and once it is completed you'll get a confirmation screen. During the installation of ASP.NET 5 RC1 Update 1, you might be asked to close Visual Studio. If asked, please do so.
Project structure in an ASP.NET 5 application

Once ASP.NET 5 RC1 is successfully installed, open Visual Studio, create a new project, and select ASP.NET 5 Web Application as shown in the following screenshot. A new project will be created, with the structure described below.

File-based project

Whenever you add a file or folder in your file system (inside the ASP.NET 5 project folder), the changes will be automatically reflected in your project structure.

Support for full .NET and .NET Core

You will see a couple of references in the preceding project: DNX 4.5.1 and DNX Core 5.0. DNX 4.5.1 provides the functionality of the full-blown .NET Framework, whereas DNX Core 5.0 supports only the core functionality, which you would use when deploying the application across platforms such as Apple OS X and Linux. The development and deployment of an ASP.NET MVC 6 application on a Linux machine is explained in the book.

The Project.json package

Usually, in an ASP.NET web application, assemblies are referenced through a list of references in the C# project file. In an ASP.NET 5 application, however, we have a JSON file by the name of Project.json, which contains all the necessary configuration with all its .NET dependencies in the form of NuGet packages. This makes dependency management easier. NuGet is a package manager provided by Microsoft that makes package installation and uninstallation easier; prior to NuGet, all dependencies had to be installed manually.

The dependencies section identifies the list of dependent packages available to the application. The frameworks section states the frameworks supported by the application. The scripts section identifies the scripts to be executed during the build process of the application. Include and exclude properties can be used in any section to include or exclude items.

Controllers

This folder contains all of your controller files. Controllers are responsible for handling requests, communicating with the models, and generating the views.

Models

All of your classes representing the domain data will be present in this folder.

Views

Views are the files that contain your frontend components and are presented to the end users of the application. This folder contains all of your Razor view files.

Migrations

Any database-related migrations will be available in this folder. Database migrations are C# files that contain the history of any database changes made through Entity Framework (an ORM framework). This is explained in detail in the book.

The wwwroot folder

This folder acts as the root folder of the site, and it is the ideal container for all of your static files, such as CSS and JavaScript files. All files placed in the wwwroot folder can be accessed directly from their path, without going through a controller.

Other files

The appsettings.json file is the configuration file where you can configure application-level settings. Bower, npm (Node Package Manager), and gulpfile.js are client-side technologies supported by ASP.NET 5 applications.

Summary

In this article, you have learnt about the offerings in Visual Studio. Step-by-step instructions were provided for the installation of the Visual Studio Community version, which is freely available to individual developers. We also discussed the new project structure of an ASP.NET 5 application and the changes when compared to the previous versions.
In this book, we go on to discuss controllers and their roles and functionality. We'll also build a controller with its associated action methods and see how it works.

Resources for Article:

Further resources on this subject:
Designing your very own ASP.NET MVC Application [article]
Debugging Your .NET Application [article]
Using ASP.NET Controls in SharePoint [article]