
How-To Tutorials - Web Development

Deep Customization of Bootstrap

Packt
19 Dec 2014
8 min read
This article, written by Aravind Shenoy and Ulrich Sossou, the authors of the book Learning Bootstrap, introduces the concept of deep customization of Bootstrap. (For more resources related to this topic, see here.)

Adding your own style sheet works when you are trying to do something quick or when the modifications are minimal, but customizing Bootstrap beyond small changes involves using the uncompiled Bootstrap source code. The Bootstrap CSS source code is written in LESS, with variables and mixins that allow easy customization. LESS is an open source CSS preprocessor with features that speed up your development time. LESS encourages an efficient, modular style of working, making it easier to maintain the CSS styling in your projects.

The advantages of using variables in LESS are profound. You can reuse the same code many times, following the write once, use anywhere paradigm. Variables can be declared globally, which lets you specify certain values in a single place that needs to be updated only once if changes are required. LESS variables allow you to specify widely used values such as colors, font families, and sizes in a single file. By modifying a single variable, the change is reflected in all the Bootstrap components that use it. For example, to change the background color of the body element to green (#00FF00 is the hexadecimal code for green), all you need to do is change the value of the variable called @body-bg in Bootstrap, as shown in the following code:

```less
@body-bg: #00FF00;
```

Mixins are similar to variables, but for whole classes. Mixins enable you to embed the properties of one class into another. They let you group multiple lines of code together so that they can be reused numerous times across the style sheet. Mixins can also be used alongside variables and functions, resulting in multiple inheritance. For example, to add clearfix to an article, you can use the .clearfix mixin as shown in the first block below; it results in all the clearfix declarations being included in the compiled CSS shown in the second block:

```less
// Mixin
article {
  .clearfix;
}
```

```css
/* Compiled CSS */
article:before,
article:after {
  content: " "; // 1
  display: table; // 2
}
article:after {
  clear: both;
}
```

A clearfix mixin makes an element automatically clear after itself, so that you don't need to add additional markup. It is generally used in float layouts, where elements are floated to be stacked horizontally.
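As a quick illustration of how variables and mixins combine (a minimal sketch of my own, not from the book; the names @brand-color and .rounded are hypothetical):

```less
// A variable holds a widely used value in one place
@brand-color: #00ff00;

// A parameterized mixin groups declarations for reuse
.rounded(@radius: 4px) {
  border-radius: @radius;
}

.panel {
  background: @brand-color; // reuses the variable
  .rounded(8px);            // embeds the mixin's declarations here
}
```

Changing @brand-color once updates every rule that references it the next time the file is compiled.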
Let's look at a pragmatic example to understand how this kind of customization is used in a real-world scenario:

1. Download and unzip the Bootstrap files into a folder.
2. Create an HTML file called bootstrap_example, save it in the same folder where you saved the Bootstrap files, and add the following code to it:

```html
<!DOCTYPE html>
<html>
<head>
  <title>BootStrap with Packt</title>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <!-- Downloaded Bootstrap CSS -->
  <link href="css/bootstrap.css" rel="stylesheet">
  <!-- JavaScript plugins (requires jQuery) -->
  <script src="https://ajax.googleapis.com/ajax/libs/jquery/1.11.1/jquery.min.js"></script>
  <!-- Include all compiled plugins (below), or include individual files as needed -->
  <script src="js/bootstrap.min.js"></script>
</head>
<body>
  <h1>Welcome to Packt</h1>
  <button type="button" class="btn btn-default btn-lg" id="packt">PACKT LESSONS</button>
</body>
</html>
```

The Bootstrap folder now includes the following folders and file:

- css
- fonts
- js
- bootstrap_example.html

Since we are going to use the Bootstrap source code now, download the source ZIP file and unzip it at any location. Then, create a new folder called bootstrap inside the css folder, and copy the contents of the less folder from the source code into this newly created bootstrap folder.

In the bootstrap folder, look for the variables.less file and open it using Notepad or Notepad++. Currently, @body-bg is assigned the default value #fff. Change the background color of the body element to green by assigning the value #00ff00 to it. Save the file and then locate the bootstrap.less file in the same bootstrap folder.

In the next step, we are going to use WinLess. Open WinLess and add the contents of the bootstrap folder to it; the folder pane lists all the .less files it loaded. Uncheck all the files, select only the bootstrap.less file, and click on Compile. This compiles bootstrap.less to bootstrap.css. Copy the newly compiled bootstrap.css file from the bootstrap folder and paste it into the css folder, replacing the original bootstrap.css file.

Now that we have the updated bootstrap.css file, go back to bootstrap_example.html and execute it. The background color of the <body> element turns green, because we altered it globally in the variables.less file, which is imported by bootstrap.less and was compiled to bootstrap.css by WinLess.
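If you'd rather avoid the WinLess GUI, the same compile step can be done from the command line with the lessc compiler that ships with the less npm package (an alternative of my own; the article itself uses WinLess):

```
npm install -g less
lessc css/bootstrap/bootstrap.less css/bootstrap.css
```

Either route produces the same bootstrap.css; pick whichever fits your workflow.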
We can also use LESS variables and mixins to customize Bootstrap by importing the Bootstrap files and adding our customizations on top. Let's create our own LESS file called styles.less in the css folder, and include the Bootstrap files by adding the following line of code to it:

```less
@import "./bootstrap/bootstrap.less";
```

The path ./bootstrap/bootstrap.less matches the location of the bootstrap.less file; remember to use the appropriate path if you have placed it anywhere else.

Now, let's try a few customizations and add the following code to styles.less:

```less
@body-bg: #FFA500;
@padding-large-horizontal: 40px;
@font-size-base: 7px;
@line-height-base: 9px;
@border-radius-large: 75px;
```

The next step is to compile the styles.less file to styles.css. We will again use WinLess for this purpose; uncheck all the options and select only styles.less to be compiled. On compilation, the styles.css file will contain all the CSS declarations from Bootstrap. The next step is to add the styles.css stylesheet to the bootstrap_example.html file, so your HTML code will look like this:

```html
<!DOCTYPE html>
<html>
<head>
  <title>BootStrap with Packt</title>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <!-- Downloaded Bootstrap CSS -->
  <link href="css/bootstrap.css" rel="stylesheet">
  <!-- JavaScript plugins (requires jQuery) -->
  <script src="https://ajax.googleapis.com/ajax/libs/jquery/1.11.1/jquery.min.js"></script>
  <!-- Include all compiled plugins (below), or include individual files as needed -->
  <script src="js/bootstrap.min.js"></script>
  <link href="css/styles.css" rel="stylesheet">
</head>
<body>
  <h1>Welcome to Packt</h1>
  <button type="button" class="btn btn-default btn-lg" id="packt">PACKT LESSONS</button>
</body>
</html>
```

Since we changed the background color to orange (#ffa500), created a border radius, and redefined font-size-base and line-height-base, the page reflects those values on execution. Note that the LESS variables must be added to the styles.less file after the Bootstrap import so that they override the variables defined in the Bootstrap files; in short, all the custom code you write should come after the Bootstrap import.

Summary

We had a look at the procedure to implement deep customization in Bootstrap. However, we are still at the start of the journey: there is much more to learn, and in a pragmatic sense, the journey is the destination.

Resources for Article:

Further resources on this subject:

- Creating attention-grabbing pricing tables [article]
- Getting Started with Bootstrap [article]
- Bootstrap 3.0 is Mobile First [article]


Role of AngularJS

Packt
16 Dec 2014
7 min read
In this article by Sandeep Kumar Patel, author of Responsive Web Design with AngularJS, we will explore the role of AngularJS in responsive web development. Before going into AngularJS, you will learn about responsive web development in general. Responsive web development can be performed in two ways:

- Using the browser sniffing approach
- Using the CSS3 media queries approach

(For more resources related to this topic, see here.)

Using the browser sniffing approach

When we view web pages through our browser, the browser sends a user agent string to the server. This string provides information such as the browser and device details. By reading these details, the server can redirect the browser to the appropriate view. This method of reading client details is known as browser sniffing. The user agent string carries a lot of information about the source from which the request was generated. The details of the parameters present in the user agent string are as follows:

- Browser name: This represents the actual name of the browser from which the request originated, for example, Mozilla or Opera.
- Browser version: This represents the browser release version from the vendor; for example, Firefox's latest version is 31.
- Browser platform: This represents the underlying engine on which the browser is running, for example, Trident or WebKit.
- Device OS: This represents the operating system running on the device from which the request originated, for example, Linux or Windows.
- Device processor: This represents the processor type on which the operating system is running, for example, 32 or 64 bit.

A different user agent string is generated based on the combination of the device and the type of browser used while accessing a web page. The following table shows examples of user agent strings:

| Browser | Device | User agent string |
| --- | --- | --- |
| Firefox | Windows desktop | Mozilla/5.0 (Windows NT 5.1; rv:31.0) Gecko/20100101 Firefox/31.0 |
| Chrome | OS X 10 desktop | Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/29.0.1547.66 Safari/537.36 |
| Opera | Windows desktop | Opera/9.80 (Windows NT 6.0) Presto/2.12.388 Version/12.14 |
| Safari | OS X 10 desktop | Mozilla/5.0 (Macintosh; Intel Mac OS X 10_6_8) AppleWebKit/537.13+ (KHTML, like Gecko) Version/5.1.7 Safari/534.57.2 |
| Internet Explorer | Windows desktop | Mozilla/5.0 (compatible; MSIE 10.6; Windows NT 6.1; Trident/5.0; InfoPath.2; SLCC1; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729; .NET CLR 2.0.50727) 3gpp-gba UNTRUSTED/1.0 |

To discover more about user agent strings on various browser and device combinations, visit http://www.useragentstring.com/pages/Browserlist/.

AngularJS features such as providers and services are useful for this user agent sniffing and redirection approach. An AngularJS provider can be created and used in the configuration of the routing module. This provider can expose reusable properties and methods that identify the device and route the request to the appropriate template view.
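To make the idea concrete, here is a minimal sketch of such a provider (my own illustration, not from the book; the module name, the device provider, and the template paths are all hypothetical):

```javascript
// A provider exposes configuration-time methods (on the provider object itself)
// and a runtime service (via $get), so it can be used inside .config().
angular.module('app', ['ngRoute'])
  .provider('device', function () {
    function isMobile() {
      // A simple client-side user agent check
      return /Mobi|Android/i.test(window.navigator.userAgent);
    }
    this.isMobile = isMobile;   // available at config time
    this.$get = function () {   // available at run time as the 'device' service
      return { isMobile: isMobile };
    };
  })
  .config(['$routeProvider', 'deviceProvider',
    function ($routeProvider, deviceProvider) {
      $routeProvider.when('/', {
        // Route to a device-specific template based on the user agent
        templateUrl: deviceProvider.isMobile()
          ? 'views/home-mobile.html'
          : 'views/home-desktop.html'
      });
    }]);
```

Note that this sketch reads navigator.userAgent in the browser, whereas the server-side variant described above inspects the same string from the request headers.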
CSS3 media queries approach

CSS3 brings a new horizon to web application development; one of its key features is media queries for developing responsive web applications. Media queries use media types and media features as the deciding parameters for applying styles to the current web page.

Media type

CSS3 media queries provide rules for media types so that different styles can be applied to a web page. The media queries specification lists the media types that an implementing browser should support. These media types are as follows:

- all: This is used for all media type devices.
- aural: This is used for speech and sound synthesizers.
- braille: This is used for braille tactile feedback devices.
- embossed: This is used for paged braille printers.
- handheld: This is used for small or handheld devices, for example, mobile.
- print: This is used for printers, for example, an A4 size paper document.
- projection: This is used for projection-based devices, such as a projector screen with a slide.
- screen: This is used for computer screens, for example, desktop and laptop screens.
- tty: This is used for media using a fixed-pitch character grid, such as teletypes and terminals.
- tv: This is used for television-type devices, for example, webOS- or Android-based televisions.

A media rule can be declared using the @media keyword with the specific type for the targeted media. The following code shows an example of the media rule usage, where the body background is black and the text is white for the screen media type, and the body background is white and the text is black for the print media type:

```css
@media screen {
  body {
    background: black;
    color: white;
  }
}

@media print {
  body {
    background: white;
    color: black;
  }
}
```

An external style sheet can be downloaded and applied to the current page based on the media type using the HTML link tag. The following code uses the link tag's media attribute in conjunction with a media type:

```html
<link rel='stylesheet' media='screen' href='<fileName.css>' />
```

To learn more about the different media types, visit https://developer.mozilla.org/en-US/docs/Web/CSS/@media#Media_types.

Media feature

Conditional styles can be applied to a page based on different features of a device. The features supported by CSS3 media queries for applying styles are as follows:

- color: Styles can be applied based on the number of bits used for a color component by the device.
- color-index: Styles can be applied based on the color lookup table.
- aspect-ratio: Styles can be applied based on the aspect ratio of the display area.
- device-aspect-ratio: Styles can be applied based on the device aspect ratio.
- device-height: Styles can be applied based on the device height; this includes the entire screen.
- device-width: Styles can be applied based on the device width; this includes the entire screen.
- grid: Styles can be applied based on the device type, bitmap or grid.
- height: Styles can be applied based on the height of the device rendering area.
- monochrome: Styles can be applied based on the monochrome type; this represents the number of bits used by the device in grayscale.
- orientation: Styles can be applied based on the viewport mode, landscape or portrait.
- resolution: Styles can be applied based on the pixel density.
- scan: Styles can be applied based on the scanning type used by the device for rendering.
- width: Styles can be applied based on the device screen width.
The following code shows some examples of CSS3 media queries using different device features for conditional styles:

```css
/* For screen devices with a minimum aspect ratio of 0.5 */
@media screen and (min-aspect-ratio: 1/2) {
  img {
    height: 70px;
    width: 70px;
  }
}

/* For all devices in portrait viewport */
@media all and (orientation: portrait) {
  img {
    height: 100px;
    width: 200px;
  }
}

/* For print devices with a minimum resolution of 300dpi pixel density */
@media print and (min-resolution: 300dpi) {
  img {
    height: 600px;
    width: 400px;
  }
}
```

To learn more about the different media features, visit https://developer.mozilla.org/en-US/docs/Web/CSS/@media#Media_features.

Summary

In this chapter, you learned about responsive design and the SPA architecture. You now understand the role of the AngularJS library when developing a responsive application, and we went through the important features of AngularJS with coded syntax. In the next chapter, you will set up your AngularJS application and learn to create dynamic routing based on the device.

Resources for Article:

Further resources on this subject:

- Best Practices for Modern Web Applications [article]
- Important Aspect of AngularJS UI Development [article]
- A look into responsive design frameworks [article]


Building a Remote-controlled TV with Node-Webkit

Roberto González
04 Dec 2014
14 min read
Node-webkit is one of the most promising technologies to have come out in the last few years. It lets you ship a native desktop app for Windows, Mac, and Linux using just HTML, CSS, and some JavaScript — the exact same languages you use to build any web app. You basically get your very own frameless WebKit to build your app, supercharged with NodeJS, which gives you access to powerful libraries that are not available in a typical browser.

As a demo, we are going to build a remote-controlled YouTube app. This involves creating a native app that displays YouTube videos on your computer, as well as a mobile client that will let you search for and select the videos you want to watch straight from your couch. You can download the finished project from https://github.com/Aerolab/youtube-tv. You need to follow the first part of this guide (Getting started) to set up the environment and then run run.sh (on Mac) or run.bat (on Windows) to start the app.

Getting started

First of all, you need to install Node.JS (a JavaScript platform), which you can download from http://nodejs.org/download/. The installer comes bundled with NPM (Node.JS Package Manager), which lets you install everything you need for this project.

Since we are going to be building two apps (a desktop app and a mobile app), it's better if we get the boring HTML+CSS part out of the way first, so we can concentrate on the JavaScript part of the equation. Download the project files from https://github.com/Aerolab/youtube-tv/blob/master/assets/basics.zip and put them in a new folder. You can name the project's folder youtube-tv or whatever you want. The folder should look like this:

```
- index.html   // This is the starting point for our desktop app
- css          // Our desktop app styles
- js           // This is where the magic happens
- remote       // This is where the magic happens (Part 2)
- libraries    // FFMPEG libraries, which give you H.264 video support in Node-Webkit
- player       // Our youtube player
- Gruntfile.js // Build scripts
- run.bat      // run.bat runs the app on Windows
- run.sh       // sh run.sh runs the app on Mac
```

Now open the Terminal (on Mac or Linux) or a new command prompt (on Windows) right in that folder. We'll install a couple of dependencies we need for this project, so type these commands to install node-gyp and grunt-cli. Each one will take a few seconds to download and install.

On Mac or Linux:

```
sudo npm install node-gyp -g
sudo npm install grunt-cli -g
```

On Windows:

```
npm install node-gyp -g
npm install grunt-cli -g
```

Leave the Terminal open; we'll be using it again in a bit.

All Node.JS apps start with a package.json file (our manifest), which holds most of the settings for your project, including which dependencies you are using. Go ahead and create your own package.json file (right inside the project folder) with the following contents. Feel free to change anything you like, such as the project name, the icon, or anything else. Check out the documentation at https://github.com/rogerwang/node-webkit/wiki/Manifest-format:
```json
{
  "//": "The // keys in package.json are comments.",

  "//": "Your project's name. Go ahead and change it!",
  "name": "Remote",
  "//": "A simple description of what the app does.",
  "description": "An example of node-webkit",
  "//": "This is the first html the app will load. Just leave this this way",
  "main": "app://host/index.html",
  "//": "The version number. 0.0.1 is a good start :D",
  "version": "0.0.1",

  "//": "This is used by Node-Webkit to set up your app.",
  "window": {
    "//": "The Window Title for the app",
    "title": "Remote",
    "//": "The Icon for the app",
    "icon": "css/images/icon.png",
    "//": "Do you want the File/Edit/Whatever toolbar?",
    "toolbar": false,
    "//": "Do you want a standard window around your app (a title bar and some borders)?",
    "frame": true,
    "//": "Can you resize the window?",
    "resizable": true
  },
  "webkit": {
    "plugin": false,
    "user-agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_5) AppleWebKit/537.36 (KHTML, like Gecko) Safari/537.36"
  },

  "//": "These are the libraries we'll be using:",
  "//": "Express is a web server, which will handle the files for the remote",
  "//": "Socket.io lets you handle events in real time, which we'll use with the remote as well.",
  "dependencies": {
    "express": "^4.9.5",
    "socket.io": "^1.1.0"
  },

  "//": "And these are just task handlers to make things easier",
  "devDependencies": {
    "grunt": "^0.4.5",
    "grunt-contrib-copy": "^0.6.0",
    "grunt-node-webkit-builder": "^0.1.21"
  }
}
```

You'll also find Gruntfile.js, which takes care of downloading all of the node-webkit assets and building the app once we are ready to ship. Feel free to take a look at it, but it's mostly boilerplate code.

Once you've set everything up, go back to the Terminal and install everything you need by typing:

```
npm install
grunt nodewebkitbuild
```

You may run into some issues when doing this on Mac or Linux. In that case, try using sudo npm install and sudo grunt nodewebkitbuild.

npm install installs all of the dependencies you mentioned in package.json, both the regular dependencies and the development ones, like grunt and grunt-node-webkit-builder. grunt nodewebkitbuild downloads the Windows and Mac versions of node-webkit, sets them up so they can play videos, and builds the app. Wait a bit for everything to install properly and we're ready to get started. Note that if you are using Windows, you might get a scary error related to Visual C++ when running npm install. Just ignore it.

Building the desktop app

All web apps (or websites, for that matter) start with an index.html file. We are going to create just that to get our app to run:

```html
<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8"/>
  <title>Youtube TV</title>

  <link href='http://fonts.googleapis.com/css?family=Roboto:500,400' rel='stylesheet' type='text/css'/>
  <link href="css/normalize.css" rel="stylesheet" type="text/css"/>
  <link href="css/styles.css" rel="stylesheet" type="text/css"/>
</head>
<body>

  <div id="serverInfo"><h1>Youtube TV</h1></div>

  <div id="videoPlayer"></div>

  <script src="js/jquery-1.11.1.min.js"></script>
  <script src="js/youtube.js"></script>
  <script src="js/app.js"></script>

</body>
</html>
```

As you may have noticed, we are using three scripts for our app: jQuery (pretty well known at this point), a YouTube video player, and finally app.js, which contains our app's logic. Let's dive into that!

First of all, we need to create the basic elements for our remote control. The easiest way of doing this is to create a basic web server and serve a small web app that can search YouTube, select a video, and offer some play/pause controls, so we don't have any good reasons to get up from the couch. Open js/app.js and type the following:
```javascript
// Show the Developer Tools. And yes, Node-Webkit has developer tools built in!
// Uncomment it to open it automatically
//require('nw.gui').Window.get().showDevTools();

// Express is a web server, which will allow us to create a small web app with which to control the player
var express = require('express');
var app = express();
var server = require('http').Server(app);
var io = require('socket.io')(server);

// We'll be opening up our web server on Port 8080 (which doesn't require root privileges)
// You can access this server at http://127.0.0.1:8080
var serverPort = 8080;
server.listen(serverPort);

// All the static files (css, js, html) for the remote will be served using Express.
// These assets are in the /remote folder
app.use('/', express.static('remote'));
```

With those seven lines of code (not counting comments), we just got a neat web server working on port 8080. If you were paying attention to the code, you may have noticed that we required something called socket.io. This lets us use websockets with minimal effort, which means we can communicate with, from, and to our remote instantly. You can learn more about socket.io at http://socket.io/. Let's set that up next in app.js:

```javascript
// Socket.io handles the communication between the remote and our app in real time,
// so we can instantly send commands from a computer to our remote and back
io.on('connection', function (socket) {

  // When a remote connects to the app, let it know immediately the current status of the video (play/pause)
  socket.emit('statusChange', Youtube.status);

  // This is what happens when we receive the watchVideo command (picking a video from the list)
  socket.on('watchVideo', function (video) {
    // video contains a bit of info about our video (id, title, thumbnail)
    // Order our Youtube Player to watch that video
    Youtube.watchVideo(video);
  });

  // These are playback controls. They receive the "play" and "pause" events from the remote
  socket.on('play', function () {
    Youtube.playVideo();
  });
  socket.on('pause', function () {
    Youtube.pauseVideo();
  });

});

// Notify all the remotes when the playback status changes (play/pause)
// This is done with io.emit, which sends the same message to all the remotes
Youtube.onStatusChange = function (status) {
  io.emit('statusChange', status);
};
```

That's the desktop part done! In a few dozen lines of code, we got a web server running at http://127.0.0.1:8080 that can receive commands from a remote to watch a specific video, as well as handle some basic playback controls (play and pause). We are also notifying the remotes of the status of the player as soon as they connect, so they can update their UI with the correct buttons (if it's playing, show the pause button and vice versa).
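Before building the real remote, you can sanity-check the server from a second terminal with the socket.io-client package (a hypothetical quick test of my own, not part of the project):

```javascript
// npm install socket.io-client, then run this with node while the app is open.
var io = require('socket.io-client');
var socket = io.connect('http://127.0.0.1:8080');

socket.on('statusChange', function (status) {
  // The app emits the current status as soon as we connect
  console.log('player status:', status);
  socket.emit('play'); // and we can drive playback from here too
});
```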
Now we just need to build the remote.

Building the remote control

The server is just half of the equation. We also need to add the corresponding logic on the remote control, so it's able to communicate with our app. In remote/index.html, add the following HTML:

```html
<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8"/>
  <title>TV Remote</title>

  <meta name="viewport" content="width=device-width, initial-scale=1, maximum-scale=1"/>

  <link rel="stylesheet" href="/css/normalize.css"/>
  <link rel="stylesheet" href="/css/styles.css"/>
</head>
<body>

  <div class="controls">
    <div class="search">
      <input id="searchQuery" type="search" value="" placeholder="Search on Youtube..."/>
    </div>
    <div class="playback">
      <button class="play">&gt;</button>
      <button class="pause">||</button>
    </div>
  </div>

  <div id="results" class="video-list"></div>

  <div class="__templates" style="display:none;">
    <article class="video">
      <figure><img src="" alt=""/></figure>
      <div class="info">
        <h2></h2>
      </div>
    </article>
  </div>

  <script src="/socket.io/socket.io.js"></script>
  <script src="/js/jquery-1.11.1.min.js"></script>

  <script src="/js/search.js"></script>
  <script src="/js/remote.js"></script>

</body>
</html>
```

Again, we have a few libraries: Socket.io is served automatically by our desktop app at /socket.io/socket.io.js, and it manages the communication with the server. jQuery is somehow always there, search.js manages the integration with the YouTube API (you can take a look if you want), and remote.js handles the logic for the remote.

The remote itself is pretty simple. It can look for videos on YouTube, and when we click on a video, it connects with the app, telling it to play the video with socket.emit. Let's dive into remote/js/remote.js to make this thing work:

```javascript
// First of all, connect to the server (our desktop app)
var socket = io.connect();

// Search youtube when the user stops typing. This gives us an automatic search.
var searchTimeout = null;
$('#searchQuery').on('keyup', function (event) {
  clearTimeout(searchTimeout);
  searchTimeout = setTimeout(function () {
    searchYoutube($('#searchQuery').val());
  }, 500);
});

// When we click on a video, watch it on the App
$('#results').on('click', '.video', function (event) {
  // Send an event to notify the server we want to watch this video
  socket.emit('watchVideo', $(this).data());
});

// When the server tells us that the player changed status (play/pause), alter the playback controls
socket.on('statusChange', function (status) {
  if (status === 'play') {
    $('.playback .pause').show();
    $('.playback .play').hide();
  } else if (status === 'pause' || status === 'stop') {
    $('.playback .pause').hide();
    $('.playback .play').show();
  }
});

// Notify the app when we hit the play button
$('.playback .play').on('click', function (event) {
  socket.emit('play');
});

// Notify the app when we hit the pause button
$('.playback .pause').on('click', function (event) {
  socket.emit('pause');
});
```

This is very similar to our server, except that we are using socket.emit a lot more often to send commands back to our desktop app, telling it which videos to play and handling our basic play/pause controls.

The only thing left to do is make the app run. Ready? Go to the terminal again and type:

If you are on a Mac:

```
sh run.sh
```

If you are on Windows:

```
run.bat
```

If everything worked properly, you should see the app, and if you open a web browser to http://127.0.0.1:8080, the remote client will open up. Search for a video, pick anything you like, and it'll play in the app. This also works if you point any other device on the same network to your computer's IP, which brings me to the next (and last) point.

Finishing touches

There is one small improvement we can make: print out the computer's IP to make it easier to connect to the app from any other device on the same Wi-Fi network (like a smartphone).
On js/app.js, add the following code to find out the IP and update our UI so it's the first thing we see when we open the app:

```javascript
// Find the local IP
function getLocalIP(callback) {
  require('dns').lookup(require('os').hostname(), function (err, add, fam) {
    typeof callback == 'function' ? callback(add) : null;
  });
}

// To make things easier, find out the machine's ip and communicate it
getLocalIP(function (ip) {
  $('#serverInfo h1').html('Go to<br/><strong>http://' + ip + ':' + serverPort + '</strong><br/>to open the remote');
});
```

The next time you run the app, the first thing you'll see is the IP for your computer, so you just need to type that URL into your smartphone to open the remote and control the player from any computer, tablet, or smartphone (as long as they are on the same Wi-Fi network).

That's it! You can start expanding on this to improve the app. Why not open the app in fullscreen by default? Why not get rid of the horrible default frame and create your own? You can actually designate any div as a window handle with CSS (using -webkit-app-region: drag), so you can drag the window by that div and create your own custom title bar.

Summary

While the app has a lot of interlocking parts, it's a good first project to find out what you can achieve with node-webkit in just a few minutes. I hope you enjoyed this post!

About the author

Roberto González is the co-founder of Aerolab, "an awesome place where we really push the barriers to create amazing, well-coded designs for the best digital products". He can be reached at @robertcode.


Part 2: Migrating a WordPress Blog to Middleman and Deploying to Amazon S3

Mike Ball
28 Nov 2014
9 min read
Part 2: Migrating WordPress blog content and deploying to production

In part 1 of this series, we created middleman-demo, a basic Middleman-based blog. Part 1 addressed the benefits of a static site, setting up a Middleman development environment, Middleman's templating system, and how to configure a Middleman project to support basic blogging functionality. Now that middleman-demo is configured for blogging, let's export old content from an existing WordPress blog, compile the application for production, and deploy to a web server. In this part, we'll cover the following:

- Using the wp2middleman gem to migrate content from an existing WordPress blog
- Creating a Rake task to establish an Amazon Web Services S3 bucket
- Deploying a Middleman blog to Amazon S3
- Setting up a custom domain for an S3-hosted site

If you didn't follow part 1, or you no longer have your original middleman-demo code, you can clone mine and check out the part2 branch:

```
$ git clone http://github.com/mdb/middleman-demo && cd middleman-demo && git checkout part2
```

Export your content from WordPress

Now that middleman-demo is configured for blogging, let's export old content from an existing WordPress blog. WordPress provides a tool through which blog content can be exported as an XML file, also called a WordPress "eXtended RSS" or "WXR" file. A WXR file can be generated and downloaded via the WordPress admin's Tools > Export screen, as explained in WordPress's WXR documentation. In the absence of a real WordPress blog, download the middleman_demo.wordpress.xml file, a sample WXR file:

```
$ wget www.mikeball.info/downloads/middleman_demo.wordpress.xml
```

Migrating the WordPress posts to markdown

To migrate the posts contained in the WordPress WXR file, I created wp2middleman, a command-line tool to generate Middleman-style markdown files from the posts in a WXR. Install wp2middleman via Rubygems:

```
$ gem install wp2middleman
```

wp2middleman provides a wp2mm command. Pass the middleman_demo.wordpress.xml file to the wp2mm command:

```
$ wp2mm middleman_demo.wordpress.xml
```

If all goes well, the following output is printed to the terminal:

```
Successfully migrated middleman_demo.wordpress.xml
```

wp2middleman also produced an export directory. The export directory houses the blog posts from the middleman_demo.wordpress.xml WXR file, now represented as Middleman-style markdown files:

```
$ ls export/
2007-02-14-Fusce-mauris-ligula-rutrum-at-tristique-at-pellentesque-quis-nisl.html.markdown
2007-07-21-Suspendisse-feugiat-enim-vel-lorem.html.markdown
2008-02-20-Suspendisse-rutrum-Suspendisse-nisi-turpis-congue-ac.html.markdown
2008-03-17-Duis-euismod-purus-ac-quam-Mauris-tortor.html.markdown
2008-04-02-Donec-cursus-tincidunt-libero-Nam-blandit.html.markdown
2008-04-28-Etiam-nulla-nisl-cursus-vel-auctor-at-mollis-a-quam.html.markdown
2008-06-08-Praesent-faucibus-ligula-luctus-dolor.html.markdown
2008-07-08-Proin-lobortis-sapien-non-venenatis-luctus.html.markdown
2008-08-08-Etiam-eu-urna-eget-dolor-imperdiet-vehicula-Phasellus-dictum-ipsum-vel-neque-mauris-interdum-iaculis-risus.html.markdown
2008-09-08-Lorem-ipsum-dolor-sit-amet-consectetuer-adipiscing-elit.html.markdown
2013-12-30-Hello-world.html.markdown
```

Note that wp2mm supports additional options, though these are beyond the scope of this tutorial; read more on wp2middleman's GitHub page. Also note that the markdown posts in export are named *.html.markdown, and some contain HTML embedded in the original WordPress post. Middleman supports embedding multiple languages within a single post file: for example, Middleman evaluates a file named .html.erb.markdown first as Markdown and then as ERb, so the final result is HTML.
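For instance (a hypothetical post of my own, not from the sample export), a file named source/blog/2014-01-01-example.html.erb.markdown could mix both engines:

```markdown
---
title: Mixed-engine example
---

## Generated on <%= Time.now.strftime('%Y-%m-%d') %>

The Markdown above is rendered first; the ERb tags are then evaluated,
and the final output is plain HTML.
```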
Move the contents of export to source/blog and remove the export directory:

```
$ mv export/* source/blog && rm -rf export
```

Now, assuming the Middleman server is running, visiting http://localhost:4567 lists all the blog posts migrated from WordPress. Each post links to its permalink, and in the case of posts with tags, each tag links to a tag page.

Compiling for production

Thus far, we've been viewing middleman-demo in local development, where the Middleman server dynamically generates the HTML, CSS, and JavaScript with each request. However, Middleman's value lies in its ability to generate a static website -- simple HTML, CSS, JavaScript, and image files -- served directly by a web server such as Nginx or Apache, and thus requiring no application server or internal backend. Compile middleman-demo to a static build directory:

```
$ middleman build
```

The resulting build directory houses every HTML file that can be served by middleman-demo, as well as all necessary CSS, JavaScript, and images. Its directory layout maps to the URL patterns defined in config.rb. The build directory is typically ignored from source control.

Deploying the build to Amazon S3

Amazon Web Services is Amazon's cloud computing platform. Amazon S3, or Simple Storage Service, is a simple data storage service. Because S3 "buckets" can be accessible over HTTP, S3 offers a great cloud-based hosting solution for static websites such as middleman-demo. While S3 is not free, it is generally extremely affordable: Amazon charges on a per-usage basis according to how many requests your bucket serves, including PUT requests, that is, uploads. Read more about S3 pricing in AWS's pricing guide.

Let's deploy the middleman-demo build to Amazon S3. First, sign up for AWS. Through AWS's web-based admin, create an IAM user and locate the corresponding access key ID and secret access key:

1. Visit the AWS IAM console.
2. From the navigation menu, click Users.
3. Select your IAM user name.
4. Click User Actions; then click Manage Access Keys.
5. Click Create Access Key.
6. Click Download Credentials; store the keys in a secure location.

Store your access key ID in an environment variable named AWS_ACCESS_KEY_ID, and your secret access key in an environment variable named AWS_SECRET_ACCESS_KEY:

```
$ export AWS_ACCESS_KEY_ID=your_access_key_id
$ export AWS_SECRET_ACCESS_KEY=your_secret_access_key
```

Note that, to persist these environment variables beyond the current shell session, you may want to set them automatically in each shell session. Setting them in a file such as your ~/.bashrc ensures this:

```
export AWS_ACCESS_KEY_ID=your_access_key_id
export AWS_SECRET_ACCESS_KEY=your_secret_access_key
```

Creating an S3 bucket with Ruby

To deploy to S3, we'll need to create a "bucket," an S3 endpoint to which middleman-demo's build directory can be deployed. This can be done via AWS's management console, but we can also automate its creation with Ruby. We'll use the aws-sdk Ruby gem and a Rake task to create an S3 bucket for middleman-demo.
Add the aws-sdk gem to middleman-demo's Gemfile:

```ruby
gem 'aws-sdk'
```

Install the new gem:

```
$ bundle install
```

Create a Rakefile:

```
$ touch Rakefile
```

Add the following Ruby to the Rakefile; this code establishes a Rake task -- a quick command-line utility -- to automate the creation of an S3 bucket:

```ruby
require 'aws-sdk'

desc "Create an AWS S3 bucket"
task :s3_bucket, :bucket_name do |task, args|
  s3 = AWS::S3.new(region: 'us-east-1')
  bucket = s3.buckets.create(args[:bucket_name])
  bucket.configure_website do |config|
    config.index_document_suffix = 'index.html'
    config.error_document_key = 'error/index.html'
  end
end
```

From the command line, use the newly established :s3_bucket Rake task to create a unique S3 bucket for your middleman-demo. Note that, if you have an existing domain you'd like to use, your bucket should be named www.yourdomain.com:

```
$ rake s3_bucket[some_unique_bucket_name]
```

For example, I named my S3 bucket www.middlemandemo.com by entering the following:

```
$ rake s3_bucket[www.middlemandemo.com]
```

After running rake s3_bucket[YOUR_BUCKET], you should see YOUR_BUCKET among the buckets listed in your AWS web console.

Creating an error template

Our Rake task specifies a config.error_document_key whose value is error/index.html. This configures the S3 bucket to serve an error page for erroring responses, such as 404s. Create a source/error.html.erb template:

```
$ touch source/error.html.erb
```

And add the following:

```erb
---
title: Oops - something went wrong
---

<h2><%= current_page.data.title %></h2>
```

Deploying to your S3 bucket

With an S3 bucket established, the middleman-sync Ruby gem can be used to automate uploading middleman-demo builds to S3. Add the middleman-sync gem to the Gemfile:

```ruby
gem 'middleman-sync'
```

Install the middleman-sync gem:

```
$ bundle install
```

Add the necessary middleman-sync configuration to config.rb:

```ruby
activate :sync do |sync|
  sync.fog_provider = 'AWS'
  sync.fog_region = 'us-east-1'
  sync.fog_directory = '<YOUR_BUCKET>'
  sync.aws_access_key_id = ENV['AWS_ACCESS_KEY_ID']
  sync.aws_secret_access_key = ENV['AWS_SECRET_ACCESS_KEY']
end
```

Build and deploy middleman-demo:

```
$ middleman build && middleman sync
```

Note: if your deployment fails with a 'post_connection_check': hostname "YOUR_BUCKET" does not match the server certificate (OpenSSL::SSL::SSLError) (Excon::Errors::SocketError), it's likely due to an open issue with middleman-sync. To work around this issue, add the following to the top of config.rb:

```ruby
require 'fog'
Fog.credentials = { path_style: true }
```

Now, middleman-demo is browsable online at http://YOUR_BUCKET.s3-website-us-east-1.amazonaws.com/

Using a custom domain

With middleman-demo deployed to an S3 bucket whose name matches a domain name, a custom domain can be configured easily. To use a custom domain, log in to your domain management provider and add a CNAME mapping your domain to www.yourdomain.com.s3-website-us-east-1.amazonaws.com. While the exact process for managing a CNAME varies between domain name providers, the process is generally fairly simple. Note that your S3 bucket name must perfectly match your domain name.

Recap

We've examined the benefits of static site generators and covered some basics regarding Middleman blogging. We've learned how to use the wp2middleman gem to migrate content from a WordPress blog, and we've learned how to deploy Middleman to Amazon's cloud-based Simple Storage Service (S3).

About this author

Mike Ball is a Philadelphia-based software developer specializing in Ruby on Rails and JavaScript.
He works for Comcast Interactive Media, where he helps build web-based TV and video consumption applications.


Using front controllers to create a new page

Packt
28 Nov 2014
22 min read
In this article by Fabien Serny, author of PrestaShop Module Development, you will learn about controllers and object models. Controllers handle display on the front office and let us create new page types, while object models handle all the required database requests. We will also see that hooks are sometimes not enough and can't change the way PrestaShop works; in these cases, we will use overrides, which let us alter the default process of PrestaShop without making changes in the core code.

If you need to create a complex module, you will need to use front controllers. First of all, using front controllers lets you split the code into several classes (and files) instead of coding all of your module's actions in the same class. Also, unlike hooks (which handle some of the display in existing PrestaShop pages), front controllers allow you to create new pages. (For more resources related to this topic, see here.)

Creating the front controller

To make this section easier to understand, we will make an improvement to our current module. Instead of displaying all of the comments (there can be many), we will display only the last three comments and a link that redirects to a page containing all the comments on the product. First of all, we will add a limit to the Db request in the assignProductTabContent method of your module class that retrieves the comments on the product page:

```php
$comments = Db::getInstance()->executeS('
    SELECT * FROM `'._DB_PREFIX_.'mymod_comment`
    WHERE `id_product` = '.(int)$id_product.'
    ORDER BY `date_add` DESC
    LIMIT 3');
```

Now, if you go to a product, you should only see the last three comments. We will now create a controller that displays all comments concerning a specific product. Go to your module's root directory and create the following directory path:

```
/controllers/front/
```

Create the file that will contain the controller. You have to choose a simple and explicit name, since the filename will be used in the URL; let's name it comments.php. In this file, create a class named following the [ModuleName][ControllerFilename]ModuleFrontController convention, which extends the ModuleFrontController class. So, in our case, the file will be as follows:

```php
<?php
class MyModCommentsCommentsModuleFrontController extends ModuleFrontController
{
}
```

The naming convention has been defined by PrestaShop and must be respected. The class names are a bit long, but they enable us to avoid having two identical class names in different modules. Now you just have to set the template file you want to display, with the following lines:

```php
class MyModCommentsCommentsModuleFrontController extends ModuleFrontController
{
    public function initContent()
    {
        parent::initContent();
        $this->setTemplate('list.tpl');
    }
}
```

Next, create a template named list.tpl and place it in views/templates/front/ of your module's directory:

```smarty
<h1>{l s='Comments' mod='mymodcomments'}</h1>
```

Now, you can check the result by loading this link on your shop:

```
/index.php?fc=module&module=mymodcomments&controller=comments
```

You should see the Comments title displayed. The fc parameter defines the front controller type, the module parameter defines which module directory the front controller is in, and, finally, the controller parameter defines which controller file to load.

Maintaining compatibility with the Friendly URL option

In order to let the visitor access the controller page we created in the preceding section, we will just add a link between the last three comments displayed and the comment form in the displayProductTabContent.tpl template.
To maintain compatibility with the Friendly URL option of PrestaShop, we will use the getModuleLink method. This generates a URL according to the URL settings (defined in Preferences | SEO & URLs). If the Friendly URL option is enabled, it generates a friendly URL (for example, /en/5-tshirts-doctor-who); if not, it generates a classic URL (for example, /index.php?id_category=5&controller=category&id_lang=1). This function takes three parameters: the name of the module, the controller filename you want to call, and an array of parameters. The array of parameters must contain all of the data that will be used by the controller. In our case, we need at least the product identifier, id_product, to display only the comments related to the product. We can also add a module_action parameter, just in case our controller contains several possible actions.

Here is an example. As you will notice, I created the parameters array directly in the template using the assign Smarty method; from my point of view, it is easier to have the content of the parameters close to the link. However, if you prefer, you can create this array in your module class and assign it to your template in order to have cleaner code:

```smarty
<div class="rte">
  {assign var=params value=[
    'module_action' => 'list',
    'id_product'    => $smarty.get.id_product
  ]}
  <a href="{$link->getModuleLink('mymodcomments', 'comments', $params)}">
    {l s='See all comments' mod='mymodcomments'}
  </a>
</div>
```

Now, go to your product page and click on the link; the URL displayed should look something like this:

```
/index.php?module_action=list&id_product=1&fc=module&module=mymodcomments&controller=comments&id_lang=1
```

Creating a small action dispatcher

In our case, we won't need several possible actions in the comments controller. However, it would be great to create a small dispatcher in our front controller, just in case we want to add other actions later. To do so, in controllers/front/comments.php, we will create new methods corresponding to each action. I propose to use the init[Action] naming convention (but this is not mandatory). So, in our case, it will be a method named initList:

```php
protected function initList()
{
    $this->setTemplate('list.tpl');
}
```

Now, in the initContent method, we will create an $actions_list array containing all possible actions and their associated callbacks:

```php
$actions_list = array('list' => 'initList');
```

Next, we will retrieve the id_product and module_action parameters into variables. Once complete, we will check whether the id_product parameter is valid and whether the action exists in the $actions_list array. If the method exists, we will call it dynamically:

```php
if ($id_product > 0 && isset($actions_list[$module_action]))
    $this->$actions_list[$module_action]();
```

Here's what your code should look like:

```php
public function initContent()
{
    parent::initContent();
    $id_product = (int)Tools::getValue('id_product');
    $module_action = Tools::getValue('module_action');
    $actions_list = array('list' => 'initList');
    if ($id_product > 0 && isset($actions_list[$module_action]))
        $this->$actions_list[$module_action]();
}
```

If you did this correctly, nothing should have changed when you refresh the page in your browser, and the Comments title should still be displayed.

Displaying the product name and comments

We will now display the product name (to let the visitor know he or she is on the right page) and the associated comments.
First of all, create a public variable, $product, in your controller class, and initialize it in the initContent method with an instance of the selected product. This way, the product object will be available in every action method:

```php
$this->product = new Product((int)$id_product, false, $this->context->cookie->id_lang);
```

In the initList method, just before setTemplate, we will make a DB request to get all comments associated with the product, and then assign the product object and the comments list to Smarty:

```php
// Get comments
$comments = Db::getInstance()->executeS('
    SELECT * FROM `'._DB_PREFIX_.'mymod_comment`
    WHERE `id_product` = '.(int)$this->product->id.'
    ORDER BY `date_add` DESC');

// Assign comments and product object
$this->context->smarty->assign('comments', $comments);
$this->context->smarty->assign('product', $this->product);
```

Once complete, we will display the product name by changing the h1 title:

```smarty
<h1>{l s='Comments on product' mod='mymodcomments'} "{$product->name}"</h1>
```

If you refresh your page, you should now see the product name displayed. I won't explain this part, since it's exactly the same HTML code we used in the displayProductTabContent.tpl template. At this point, the comments should appear without the CSS style; do not panic, just go to the next section of this article.

Including CSS and JS media in the controller

As you can see, the comments are now displayed. However, you are probably asking yourself why the CSS style hasn't been applied properly. If you look back at your module class, you will see that it is the hookDisplayProductTab hook on the product page that includes the CSS and JS files. The problem is that we are not on a product page here, so we have to include them on this page. To do so, we will create a method named setMedia in our controller and add the CSS and JS files in it (as we did in the hookDisplayProductTab hook). It will override the default setMedia method contained in the FrontController class. Since this method includes general CSS and JS files used by PrestaShop, it is very important to call the setMedia parent method in our override:

```php
public function setMedia()
{
    // We call the parent method
    parent::setMedia();

    // Save the module path in a variable
    $this->path = __PS_BASE_URI__.'modules/mymodcomments/';

    // Include the module CSS and JS files needed
    $this->context->controller->addCSS($this->path.'views/css/starrating.css', 'all');
    $this->context->controller->addJS($this->path.'views/js/starrating.js');
    $this->context->controller->addCSS($this->path.'views/css/mymodcomments.css', 'all');
    $this->context->controller->addJS($this->path.'views/js/mymodcomments.js');
}
```

If you refresh your browser, the comments should now appear well formatted. To improve the display, we will add the date of the comment beside the author's name. Just replace <p>{$comment.firstname} {$comment.lastname|substr:0:1}.</p> in your list.tpl template with this line:

```smarty
<div>{$comment.firstname} {$comment.lastname|substr:0:1}. <small>{$comment.date_add|substr:0:10}</small></div>
```

You can also replace the same line in the displayProductTabContent.tpl template if you want. If you want more information on how Smarty modifiers such as substr (which I used for the date) work, you can check the official Smarty documentation.

Adding a pagination system

Your controller page is now fully working. However, if one of your products has thousands of comments, the display won't be quick. We will add a pagination system to handle this case.
First of all, in the initList method, we need to set a number of comments per page and find out how many comments are associated with the product:

```php
// Get number of comments
$nb_comments = Db::getInstance()->getValue('
    SELECT COUNT(`id_product`)
    FROM `'._DB_PREFIX_.'mymod_comment`
    WHERE `id_product` = '.(int)$this->product->id);

// Init
$nb_per_page = 10;
```

By default, I have set the number of comments per page to 10, but you can set any number you want. The value is stored in a variable so it is easy to change later if needed. Now we just have to calculate how many pages there will be:

```php
$nb_pages = ceil($nb_comments / $nb_per_page);
```

Also, set the page the visitor is on:

```php
$page = 1;
if (Tools::getValue('page') != '')
    $page = (int)$_GET['page'];
```

Now that we have this data, we can generate the SQL limit and use it in the comments DB request, so as to display the 10 comments corresponding to the page the visitor is on:

```php
$limit_start = ($page - 1) * $nb_per_page;
$limit_end = $nb_per_page;
$comments = Db::getInstance()->executeS('
    SELECT * FROM `'._DB_PREFIX_.'mymod_comment`
    WHERE `id_product` = '.(int)$this->product->id.'
    ORDER BY `date_add` DESC
    LIMIT '.(int)$limit_start.','.(int)$limit_end);
```

If you refresh your browser, you should only see the last 10 comments displayed. To conclude, we just need to add links to the different pages for navigation. First, assign the page the visitor is on and the total number of pages to Smarty:

```php
$this->context->smarty->assign('page', $page);
$this->context->smarty->assign('nb_pages', $nb_pages);
```

Then, in the list.tpl template, we will display numbers in a list from 1 to the total number of pages. On each number, we will add a link with the getModuleLink method we saw earlier, with an additional parameter, page:

```smarty
<ul class="pagination">
  {for $count=1 to $nb_pages}
    {assign var=params value=[
      'module_action' => 'list',
      'id_product'    => $smarty.get.id_product,
      'page'          => $count
    ]}
    <li>
      <a href="{$link->getModuleLink('mymodcomments', 'comments', $params)}">
        <span>{$count}</span>
      </a>
    </li>
  {/for}
</ul>
```

To make the pagination clearer for the visitor, we can use the native CSS class to indicate the page the visitor is on:

```smarty
{if $page ne $count}
  <li>
    <a href="{$link->getModuleLink('mymodcomments', 'comments', $params)}">
      <span>{$count}</span>
    </a>
  </li>
{else}
  <li class="active current">
    <span><span>{$count}</span></span>
  </li>
{/if}
```

Your pagination should now be fully working.

Creating routes for a module's controller

At the beginning of this article, we chose to use the getModuleLink method to keep compatibility with the Friendly URL option of PrestaShop. Let's enable this option in the SEO & URLs section under Preferences. Now go to your product page and look at the target of the See all comments link; it should have changed from /index.php?module_action=list&id_product=1&fc=module&module=mymodcomments&controller=comments&id_lang=1 to /en/module/mymodcomments/comments?module_action=list&id_product=1.

The result is nice, but it is not really a friendly URL yet. The ISO code at the beginning of the URL appears only if you have enabled several languages; so, if you have only one language enabled, the ISO code will not appear in your case. Since PrestaShop 1.5.3, you can create specific routes for your module's controllers. To do so, you have to attach your module to the ModuleRoutes hook.
In your module's install method in mymodcomments.php, add the registerHook method for ModuleRoutes:

// Register hooks
if (!$this->registerHook('displayProductTabContent') ||
    !$this->registerHook('displayBackOfficeHeader') ||
    !$this->registerHook('ModuleRoutes'))
    return false;

Don't forget: you will have to uninstall/install your module if you want it to be attached to this hook. If you don't want to uninstall your module (because you don't want to lose all the comments you filled in), you can go to the Positions section under the Modules section of your back office and hook it manually.

Now we have to create the corresponding hook method in the module's class. This method will return an array with all the routes we want to add. The array is a bit complex to explain, so let me write an example first:

public function hookModuleRoutes()
{
    return array(
        'module-mymodcomments-comments' => array(
            'controller' => 'comments',
            'rule' => 'product-comments{/:module_action}{/:id_product}/page{/:page}',
            'keywords' => array(
                'id_product' => array('regexp' => '[\d]+', 'param' => 'id_product'),
                'page' => array('regexp' => '[\d]+', 'param' => 'page'),
                'module_action' => array('regexp' => '[\w]+', 'param' => 'module_action'),
            ),
            'params' => array(
                'fc' => 'module',
                'module' => 'mymodcomments',
                'controller' => 'comments'
            )
        )
    );
}

The array can contain several routes. The naming convention for the array key of a route is module-[ModuleName]-[ModuleControllerName]. So in our case, the key will be module-mymodcomments-comments. In the array, you have to set the following:

The controller; in our case, it is comments.

The construction of the route (the rule parameter). You can use all the parameters you passed in the getModuleLink method by using the {/:YourParameter} syntax. PrestaShop will automatically add / before each dynamic parameter. In our case, I chose to construct the route this way (but you can change it if you want): product-comments{/:module_action}{/:id_product}/page{/:page}

The keywords array corresponding to the dynamic parameters. For each dynamic parameter, you have to set the regexp that permits retrieving it from the URL (basically, [\d]+ for integer values and [\w]+ for string values) and the parameter name.

The parameters associated with the route. In the case of a module's front controller, they will always be the same three parameters: the fc parameter set to the fixed value module, the module parameter set to the module name, and the controller parameter set to the filename of the module's controller.

Very important: PrestaShop now expects a page parameter to build the link. To avoid fatal errors, you will have to set the page parameter to 1 in your getModuleLink parameters in the displayProductTabContent.tpl template:

{assign var=params value=[
  'module_action' => 'list',
  'id_product' => $smarty.get.id_product,
  'page' => 1
]}

Once complete, if you go to a product page, the target of the See all comments link should now be: /en/product-comments/list/1/page/1. It's really better, but we can improve it a little more by setting the name of the product in the URL.
In the assignProductTabContent method of your module, we will load the product object and assign it to Smarty:

$product = new Product((int)$id_product, false, $this->context->cookie->id_lang);
$this->context->smarty->assign('product', $product);

This way, in the displayProductTabContent.tpl template, we will be able to add the product's rewritten link to the parameters of the getModuleLink method (do not forget to add it in the list.tpl template too!):

{assign var=params value=[
  'module_action' => 'list',
  'product_rewrite' => $product->link_rewrite,
  'id_product' => $smarty.get.id_product,
  'page' => 1
]}

We can now update the rule of the route with the product's link_rewrite variable:

'rule' => 'product-comments{/:module_action}{/:product_rewrite}{/:id_product}/page{/:page}',

Do not forget to add the product_rewrite string to the keywords array of the route:

'product_rewrite' => array('regexp' => '[\w-_]+', 'param' => 'product_rewrite'),

If you refresh your browser, the link should look like this now: /en/product-comments/list/tshirt-doctor-who/1/page/1. Nice, isn't it?

Installing overrides with modules

As we saw in the introduction of this article, sometimes hooks are not sufficient to meet the needs of developers; hooks can't alter the default processes of PrestaShop. We could add code to the core classes; however, this is not recommended, as all those core changes will be erased when PrestaShop is updated using the autoupgrade module (even a manual upgrade would be difficult). That's where overrides take the stage.

Creating the override class

Installing new object model and controller overrides in PrestaShop is very easy. To do so, you have to create an override directory in the root of your module's directory. Then, you just have to place your override files following the path of the original files that you want to override. When you install the module, PrestaShop will automatically move the overrides to the overrides directory of PrestaShop. In our case, we will override the find method of the /classes/Search.php class to display the grade and the number of comments on the product list. So we just have to create the Search.php file in /modules/mymodcomments/override/classes/Search.php, and fill it with:

<?php
class Search extends SearchCore
{
    public static function find($id_lang, $expr, $page_number = 1,
        $page_size = 1, $order_by = 'position', $order_way = 'desc',
        $ajax = false, $use_cookie = true, Context $context = null)
    {
    }
}

In this method, first of all, we will call the parent method to get the products list and return it:

// Call parent method
$find = parent::find($id_lang, $expr, $page_number, $page_size,
    $order_by, $order_way, $ajax, $use_cookie, $context);

// Return products
return $find;

We want to add the information (grade and number of comments) to the products list. So, between the find method call and the return statement, we will add some lines of code. First, we will check whether $find contains products. The find method can return an empty array when no products match the search. In this case, we don't have to change the way this method works.
We also have to check whether the mymodcomments module is installed (if the override is being used, the module is most likely installed, but as I said, it's just for safety):

if (isset($find['result']) && !empty($find['result']) &&
    Module::isInstalled('mymodcomments'))
{
}

If we enter this condition, we will list the product identifiers returned by the find parent method:

// List id product
$products = $find['result'];
$id_product_list = array();
foreach ($products as $p)
    $id_product_list[] = (int)$p['id_product'];

Next, we will retrieve the grade average and number of comments for the products in the list:

// Get grade average and nb comments for products in list
$grades_comments = Db::getInstance()->executeS('
    SELECT `id_product`, AVG(`grade`) as grade_avg,
        count(`id_mymod_comment`) as nb_comments
    FROM `'._DB_PREFIX_.'mymod_comment`
    WHERE `id_product` IN ('.implode(',', $id_product_list).')
    GROUP BY `id_product`');

Finally, fill in the $products array with the data (grades and comments) corresponding to each product:

// Associate grade and nb comments with product
foreach ($products as $kp => $p)
    foreach ($grades_comments as $gc)
        if ($gc['id_product'] == $p['id_product'])
        {
            $products[$kp]['mymodcomments']['grade_avg'] = round($gc['grade_avg']);
            $products[$kp]['mymodcomments']['nb_comments'] = $gc['nb_comments'];
        }
$find['result'] = $products;

Now, as we saw at the beginning of this section, a module's overrides are installed when you install the module. So you will have to uninstall/install your module. Once this is done, you can check the override contained in your module; the content of /modules/mymodcomments/override/classes/Search.php should be copied to /override/classes/Search.php. If an override of the class already exists, PrestaShop will try to merge it by adding the methods you want to override to the existing override class. Once the override is added by your module, PrestaShop should have regenerated the cache/class_index.php file (which contains the path of every core class and controller), and the path of the Search class should have changed. Open the cache/class_index.php file and search for 'Search'; the content of this array entry should now be:

'Search' => array(
    'path' => 'override/classes/Search.php',
    'type' => 'class',
),

If that's not the case, it probably means the permissions of this file are wrong and PrestaShop could not regenerate it. To fix this, just delete the file manually and refresh any page of your PrestaShop. The file will be regenerated and the new path will appear. Since you uninstalled/installed the module, all your comments should have been deleted. So take two minutes to fill in one or two comments on a product. Then search for this product. As you must have noticed, nothing has changed. Data is assigned to Smarty, but not used by the template yet. To avoid deletion of comments each time you uninstall the module, you should comment out the loadSQLFile call in the uninstall method of mymodcomments.php. We will uncomment it once we have finished working with the module.

Editing the template file to display grades on the products list

In a perfect world, you should avoid using overrides. In this case, we could have used the displayProductListReviews hook, but I just wanted to show you a simple example with an override. Moreover, this hook exists only since PrestaShop 1.6, so it would not work on PrestaShop 1.5.
Now we will have to edit the product-list.tpl template of the active theme (by default, it is /themes/default-bootstrap/), so the module won't be a turnkey module anymore. A merchant who installs this module will have to manually edit this template if he wants to have this feature. In the product-list.tpl template, just after the short description, check whether the $product.mymodcomments variable exists (to test if there are comments on the product), and then display the grade average and the number of comments:

{if isset($product.mymodcomments)}
<p>
  <b>{l s='Grade:'}</b> {$product.mymodcomments.grade_avg}/5<br/>
  <b>{l s='Number of comments:'}</b> {$product.mymodcomments.nb_comments}
</p>
{/if}

Here is what the products list should look like now:

Creating a new method in a native class

In our case, we have overridden an existing method of a PrestaShop class. But we could also have added a new method to an existing class. For example, we could have added a method named getComments to the Product class:

<?php
class Product extends ProductCore
{
    public function getComments($limit_start, $limit_end = false)
    {
        $limit = (int)$limit_start;
        if ($limit_end)
            $limit = (int)$limit_start.','.(int)$limit_end;
        $comments = Db::getInstance()->executeS('
            SELECT * FROM `'._DB_PREFIX_.'mymod_comment`
            WHERE `id_product` = '.(int)$this->id.'
            ORDER BY `date_add` DESC
            LIMIT '.$limit);
        return $comments;
    }
}

This way, you could easily access the product comments anywhere in the code with just an instance of the Product class.

Summary

This article taught us about the main design patterns of PrestaShop and explained how to use them to construct a well-organized application.

Resources for Article:

Further resources on this subject:
Django 1.2 E-commerce: Generating PDF Reports from Python using ReportLab [Article]
Customizing PrestaShop Theme Part 2 [Article]
Django 1.2 E-commerce: Data Integration [Article]
Using Classes

Packt
28 Nov 2014
26 min read
In this article by Meir Bar-Tal and Jonathon Lee Wright, authors of Advanced UFT 12 for Test Engineers Cookbook, we will cover the following recipes:

Implementing a class
Implementing a simple search class
Implementing a generic Login class
Implementing function pointers

(For more resources related to this topic, see here.)

Introduction

This article describes how to use classes in VBScript, along with some very useful and illustrative implementation examples. Classes are a fundamental feature of object-oriented programming languages such as C++, C#, and Java. Classes enable us to encapsulate data fields with the methods and properties that process them, in contrast to global variables and functions scattered in function libraries. UFT already uses classes, such as with reserved objects, and Test Objects are also instances of classes. Although elementary object-oriented features such as inheritance and polymorphism are not supported by VBScript, using classes can be an excellent choice to make your code more structured, better organized, and more efficient and reusable.

Implementing a class

In this recipe, you will learn the following:

The basic concepts and the syntax required by VBScript to implement a class
The different components of a class and their interoperation
How to implement a type of generic constructor function for VBScript classes
How to use a class during runtime

Getting ready

From the File menu, navigate to New | Function Library…, or use the Alt + Shift + N shortcut. Name the new function library cls.MyFirstClass.vbs and associate it with your test.

How to do it...

We will build our MyFirstClass class from the ground up. There are several steps one must follow to implement a class; they are as follows:

Define the class as follows:

Class MyFirstClass

Next, we define the class fields. Fields are like regular variables, but encapsulated within the namespace defined by the class. The fields can be private or public. A private field can be accessed only by class members. A public field can be accessed from any block of code. The code is as follows:

Class MyFirstClass
    Private m_sMyPrivateString
    Private m_oMyPrivateObject
    Public m_iMyPublicInteger
End Class

It is a matter of convention to use the prefix m_ for class member fields, and str for string, int for integer, obj for Object, flt for Float, bln for Boolean, chr for Character, lng for Long, and dbl for Double, to distinguish between fields of different data types. For examples of other prefixes to represent additional data types, please refer to sites such as https://en.wikipedia.org/wiki/Hungarian_notation. Hence, the private fields m_sMyPrivateString and m_oMyPrivateObject will be accessible only from within the class methods, properties, and subroutines. The public field m_iMyPublicInteger will be accessible from any part of the code that holds a reference to an instance of the MyFirstClass class; the class can also allow partial or full access to its private fields by implementing public properties. By default, within a script file, VBScript treats identifiers such as functions, subroutines, and any constant or variable defined with Const or Dim as public, even if not explicitly declared so. When associating function libraries to UFT, one can limit access to specific globally defined identifiers by preceding them with the keyword Private. The same applies to the members of a class: function, sub, and property. Remember that, by default, VBScript silently creates a new variable when you assign to an undeclared identifier, unless Option Explicit is used at the script level to force explicit declaration of all variables in that script.
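As a quick, hypothetical illustration of that pitfall (the variable names here are ours, not from the recipe):

Option Explicit
Dim sUserName
sUserName = "admin"
' The next line contains a typo in the variable name. Without Option
' Explicit, VBScript would silently create a new, empty variable named
' sUsrName; with Option Explicit, it raises a "Variable is undefined" error.
sUsrName = "guest"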
Class fields must be preceded either by Public or Private; a public scope is not assumed by VBScript, and failing to precede a field identifier with its access scope will result in a syntax error.

Next, we define the class properties. A property is a code structure used to selectively provide access to a class' private member fields. Hence, a property is often referred to as a getter (to allow for data retrieval) or setter (to allow for data change). A property is a special case in VBScript; it is the only code structure that allows for a duplicate identifier. That is, one can have a Property Get and a Property Let procedure (or Property Set, to be used when the member field is actually meant to store a reference to an instance of another class) with the same identifier. Note that Property Let and Property Set accept a mandatory argument. For example:

Class MyFirstClass
    Private m_sMyPrivateString
    Private m_oMyPrivateObject
    Public m_iMyPublicInteger

    Property Get MyPrivateString()
        MyPrivateString = m_sMyPrivateString
    End Property

    Property Let MyPrivateString(ByVal str)
        m_sMyPrivateString = str
    End Property

    Property Get MyPrivateObject()
        Set MyPrivateObject = m_oMyPrivateObject
    End Property

    Private Property Set MyPrivateObject(ByRef obj)
        Set m_oMyPrivateObject = obj
    End Property
End Class

The public field m_iMyPublicInteger can be accessed from any code block, so defining a getter and setter (as properties are often referred to) for such a field is optional. However, it is a good practice to define fields as private and explicitly provide access through public properties. For fields that are for the exclusive use of the class members, one can define the properties as private. In such a case, usually, the setter (Property Let or Property Set) would be defined as private, while the getter (Property Get) would be defined as public. This way, one can prevent other code components from making changes to the internal fields of the class, to ensure data integrity and validity.

Define the class methods and subroutines. A method is a function that is a member of a class. Like fields and properties, methods (as well as subroutines) can be Private or Public. For example:

Class MyFirstClass
    '… Continued

    Private Function MyPrivateFunction(ByVal str)
        MsgBox TypeName(me) & " – Private Func: " & str
        MyPrivateFunction = 0
    End Function

    Function MyPublicFunction(ByVal str)
        MsgBox TypeName(me) & " – Public Func: " & str
        MyPublicFunction = 0
    End Function

    Sub MyPublicSub(ByVal str)
        MsgBox TypeName(me) & " – Public Sub: " & str
    End Sub
End Class

Keep in mind that subroutines do not return a value; a function that by design does not need to return a value can be implemented as a subroutine instead. A better way, though, is to have every function return a value that tells the caller whether it executed properly or not (usually zero (0) for no errors and one (1) for any fault). Recall that a function whose return value is not explicitly assigned will return Empty, which may cause problems if the caller attempts to evaluate the returned value.

Now, we define how to initialize the class when a VBScript object is instantiated:

Set obj = New MyFirstClass

The Initialize event takes place at the time the object is created.
It is possible to add code that we wish to execute every time an object is created. So, now define the standard private subroutine Class_Initialize, sometimes referred to (albeit only by analogy) as the constructor of the class. If implemented, its code will automatically be executed during the Initialize event. For example, we can add the following code to our class:

Private Sub Class_Initialize
    MsgBox TypeName(me) & " started"
End Sub

Now, every time the Set obj = New MyFirstClass statement is executed, a message will be displayed.

Define how to finalize the class. We finalize a class when a VBScript object is disposed of (as follows), when the script exits the current scope (such as when a local object is disposed of as a function returns control to the caller), or when a global object is disposed of (when UFT ends its run session):

Set obj = Nothing

The Finalize event takes place at the time the object is removed from memory. It is possible to add code that we wish to execute every time an object is disposed of. If so, then define the standard private subroutine Class_Terminate, sometimes referred to (albeit only by analogy) as the destructor of the class. If implemented, its code will automatically be executed during the Finalize event. For example, we can add the following code to our class:

Private Sub Class_Terminate
    MsgBox TypeName(me) & " ended"
End Sub

Now, every time the Set obj = Nothing statement is executed, a message will be displayed.

Invoking (calling) a class method or property is done as follows:

'Declare variables
Dim obj, var

'Calling MyPublicFunction
obj.MyPublicFunction("Hello")

'Retrieving the value of m_sMyPrivateString
var = obj.MyPrivateString

'Setting the value of m_sMyPrivateString
obj.MyPrivateString = "My String"

Note that the usage of the public members is done using the syntax obj.<method or property name>, where obj is the variable holding the reference to the object of the class. The dot operator (.) after the variable identifier provides access to the public members of the class. Private members can be called only by other members of the class, and this is done like any other regular function call.

VBScript supports classes with a default behavior. To utilize this feature, we need to define a single default method or property that will be invoked every time an object of the class is referred to without specifying which method or property to call. For example, if we define the public method MyPublicFunction as default:

Public Default Function MyPublicFunction(ByVal str)
    MsgBox TypeName(me) & " – Public Func: " & str
    MyPublicFunction = 0
End Function

Now, the following statements would invoke the MyPublicFunction method implicitly:

Set obj = New MyFirstClass
obj("Hello")

This is exactly the same as if we called the MyPublicFunction method explicitly:

Set obj = New MyFirstClass
obj.MyPublicFunction("Hello")

Contrary to the usual standard for such functions, a default method or property must be explicitly defined as public.

Now, we will see how to add a constructor-like function. When using classes stored in function libraries, UFT (known as QTP in previous versions) cannot create an object using the New operator inside a test Action. In general, the reason is linked to the fact that UFT uses a wrapper on top of WSH, which actually executes the VBScript (VBS 5.6) code.
Therefore, in order to create instances of such a custom class, we need to use a kind of constructor function that will perform the New operation from the proper memory namespace. Add the following generic constructor to your function library:

Function Constructor(ByVal sClass)
    Dim obj
    On Error Resume Next
    'Get instance of sClass
    Execute "Set obj = New [" & sClass & "]"
    If Err.Number <> 0 Then
        Set obj = Nothing
        Reporter.ReportEvent micFail, "Constructor", "Failed to create an instance of class '" & sClass & "'."
    End If
    Set Constructor = obj
End Function

We will then instantiate the object from the UFT Action, as follows:

Set obj = Constructor("MyFirstClass")

Consequently, use the object reference in the same fashion as seen in the previous line of code:

obj.MyPublicFunction("Hello")

How it works...

As mentioned earlier, using the internal public fields, methods, subroutines, and properties is done using a variable followed by the dot operator and the relevant identifier (for example, the function name). As to the constructor, it accepts a string with the name of a class as an argument and attempts to create an instance of the given class. By using the Execute command (which performs any string containing valid VBScript syntax), it tries to set the variable obj with a new reference to an instance of sClass. Hence, we can handle any custom class with this function. If the class cannot be instantiated (for instance, because the string passed to the function is faulty, the function library is not associated with the test, or there is a syntax error in the function library), then an error arises, which is gracefully handled by the error-handling mechanism, leading to the function returning Nothing. Otherwise, the function will return a valid reference to the newly created object.

See also

The following articles at www.advancedqtp.com are part of a wider collection, which also discuss classes and code design in depth:

An article by Yaron Assa at http://www.advancedqtp.com/introduction-to-classes
An article by Yaron Assa at http://www.advancedqtp.com/introduction-to-code-design
An article by Yaron Assa at http://www.advancedqtp.com/introduction-to-design-patterns

Implementing a simple search class

In this recipe, we will see how to create a class that can be used to execute a search on Google.

Getting ready

From the File menu, navigate to New | Test, and name the new test SimpleSearch. Then create a new function library by navigating to New | Function Library, or use the Alt + Shift + N shortcut. Name the new function library cls.Google.vbs and associate it with your test.

How to do it...

Proceed with the following steps:

Define an environment variable named OPEN_URL.
Insert the following code in the new library:

Class GoogleSearch
    Public Function DoSearch(ByVal sQuery)
        With me.Page_
            .WebEdit("name:=q").Set sQuery
            .WebButton("html id:=gbqfba").Click
        End With
        me.Browser_.Sync
        If me.Results.WaitProperty("visible", 1, 10000) Then
            DoSearch = GetNumResults()
        Else
            DoSearch = 0
            Reporter.ReportEvent micFail, TypeName(Me), "Search did not retrieve results until timeout"
        End If
    End Function

    Public Function GetNumResults()
        Dim tmpStr
        tmpStr = me.Results.GetROProperty("innertext")
        tmpStr = Split(tmpStr, " ")
        GetNumResults = CLng(tmpStr(1)) 'Assumes the number is always in the second entry
    End Function

    Public Property Get Browser_()
        Set Browser_ = Browser(me.Title)
    End Property

    Public Property Get Page_()
        Set Page_ = me.Browser_.Page(me.Title)
    End Property

    Public Property Get Results()
        Set Results = me.Page_.WebElement(me.ResultsId)
    End Property

    Public Property Get ResultsId()
        ResultsId = "html id:=resultStats"
    End Property

    Public Property Get Title()
        Title = "title:=.*Google.*"
    End Property

    Private Sub Class_Initialize
        If Not me.Browser_.Exist(0) Then
            SystemUtil.Run "iexplore.exe", Environment("OPEN_URL")
            Reporter.Filter = rfEnableErrorsOnly
            While Not Browser_.Exist(0)
                Wait 0, 50
            Wend
            Reporter.Filter = rfEnableAll
            Reporter.ReportEvent micDone, TypeName(Me), "Opened browser"
        Else
            Reporter.ReportEvent micDone, TypeName(Me), "Browser was already open"
        End If
    End Sub

    Private Sub Class_Terminate
        If me.Browser_.Exist(0) Then
            me.Browser_.Close
            Reporter.Filter = rfEnableErrorsOnly
            While me.Browser_.Exist(0)
                Wait 0, 50
            Wend
            Reporter.Filter = rfEnableAll
            Reporter.ReportEvent micDone, TypeName(Me), "Closed browser"
        End If
    End Sub
End Class

In Action, write the following code:

Dim oGoogleSearch
Dim oListResults
Dim oDicSearches
Dim iNumResults
Dim sMaxResults
Dim iMaxResults

'--- Create these objects only in the first iteration
If Not LCase(TypeName(oListResults)) = "arraylist" Then
    Set oListResults = CreateObject("System.Collections.ArrayList")
End If
If Not LCase(TypeName(oDicSearches)) = "dictionary" Then
    Set oDicSearches = CreateObject("Scripting.Dictionary")
End If

'--- Get a fresh instance of GoogleSearch
Set oGoogleSearch = GetGoogleSearch()

'--- Get search term from the DataTable for each action iteration
sToSearch = DataTable("Query", dtLocalSheet)
iNumResults = oGoogleSearch.DoSearch(sToSearch)

'--- Store the results of the current iteration
'--- Store the number of results
oListResults.Add iNumResults
'--- Store the search term attached to the number of results as key (if not exists)
If Not oDicSearches.Exists(iNumResults) Then
    oDicSearches.Add iNumResults, sToSearch
End If

'Last iteration (assuming we always run on all rows), so perform the comparison between the different searches
If CInt(Environment("ActionIteration")) = DataTable.LocalSheet.GetRowCount Then
    'Sort the results ascending
    oListResults.Sort
    'Get the last item, which is the largest
    iMaxResults = oListResults.item(oListResults.Count - 1)
    'Print to the Output pane for debugging
    Print iMaxResults
    'Get the search text which got the most results
    sMaxResults = oDicSearches(iMaxResults)
    'Report result
    Reporter.ReportEvent micDone, "Max search", sMaxResults & " got " & iMaxResults
    'Dispose of the objects used
    Set oListResults = Nothing
    Set oDicSearches = Nothing
    Set oGoogleSearch = Nothing
End If
In the local datasheet, create a parameter named Query and enter several values to be used in the test as search terms. Next, from the UFT home page, navigate to View | Test Flow, right-click on the Action component in the graphic display, select Action Call Properties, and set the Action to run on all rows.

How it works...

The Action takes care to preserve the data collected through the iterations in the array list oListResults and the dictionary oDicSearches. After each search is done, it checks whether the last iteration has been reached. Upon reaching the last iteration, it analyses the data to decide which term yielded the most results. A more detailed description of how the code works follows.

First, we create an instance of the GoogleSearch class, and the Class_Initialize subroutine automatically checks whether the browser is already open. If not, Class_Initialize opens it with the SystemUtil.Run command and waits until it is open at the web address defined in Environment("OPEN_URL"). The Title property always returns the Descriptive Programming (DP) value required to identify the Google browser and page. The Browser_, Page_, and Results properties always return a reference to the Google browser, page, and WebElement respectively; the latter holds the text with the search results. After the browser is open, we retrieve the search term from the local DataTable parameter Query and call the GoogleSearch DoSearch method with the search term string as a parameter. The DoSearch method returns the number of results, which is given by the internal method GetNumResults. In the Action, we store the number itself and add to the dictionary an entry with this number as the key and the search term as the value. When the last iteration is reached, an analysis of the results is automatically done by invoking the Sort method of the oListResults ArrayList, getting the last item (the greatest), and then retrieving the search term associated with this number from the dictionary; it then reports the result. At last, we dispose of all the objects used, and the Class_Terminate subroutine automatically checks whether the browser is open. If open, the Class_Terminate subroutine closes the browser.

Implementing a generic Login class

In this recipe, we will see how to implement a generic Login class. The class captures both the GUI structure and the processes that are common to all applications with regard to their user access module. It is agnostic to the particular object classes, their technologies, and other identification properties. The class shown here implements the command wrapper design pattern, as it encapsulates a process (Login) with a main default method (Run).

Getting ready

You can use the same function library cls.Google.vbs as in the previous recipe, Implementing a simple search class, or create a new one (for instance, cls.Login.vbs) and associate it with your test.

How to do it...
In the function library, we will write the following code to define the class Login:

Class Login
    Private m_wndContainer 'Such as a Browser, Window, SwfWindow
    Private m_wndLoginForm 'Such as a Page, Dialog, SwfWindow
    Private m_txtUsername  'Such as a WebEdit, WinEdit, SwfEdit
    Private m_txtIdField   'Such as a WebEdit, WinEdit, SwfEdit
    Private m_txtPassword  'Such as a WebEdit, WinEdit, SwfEdit
    Private m_chkRemember  'Such as a WebCheckbox, WinCheckbox, SwfCheckbox
    Private m_btnLogin     'Such as a WebButton, WinButton, SwfButton
End Class

These fields define the test objects, which are required for any Login class, and the following fields are used to keep runtime data for the report:

Public Status 'As Integer
Public Info   'As String

The Run function is defined as a Default method that accepts a Dictionary as an argument. This way, we can pass a set of named arguments, some of which are optional, such as timeout.

Public Default Function Run(ByVal ArgsDic)
    'Check if the timeout parameter was passed; if not, assign it 10 seconds
    If Not ArgsDic.Exists("timeout") Then ArgsDic.Add "timeout", 10
    'Check if the client window exists
    If Not me.Container.Exist(ArgsDic("timeout")) Then
        me.Status = micFail
        me.Info = "Failed to detect login browser/dialog/window."
        Exit Function
    End If
    'Set the Username
    me.Username.Set ArgsDic("Username")
    'If the login form has an additional mandatory field
    If me.IdField.Exist(ArgsDic("timeout")) And ArgsDic.Exists("IdField") Then
        me.IdField.Set ArgsDic("IdField")
    End If
    'Set the password
    me.Password.SetSecure ArgsDic("Password")
    'It is a common practice that Login forms have a checkbox to keep the user logged in if set ON
    If me.Remember.Exist(ArgsDic("timeout")) And ArgsDic.Exists("Remember") Then
        me.Remember.Set ArgsDic("Remember")
    End If
    me.LoginButton.Click
End Function

The Run method actually performs the login procedure: setting the username and password, as well as checking or unchecking the Remember Me or Keep me Logged In checkbox according to the argument passed with the ArgsDic dictionary. The Initialize method accepts a Dictionary just like the Run method. However, in this case, we pass the actual test objects with which we wish to perform the login procedure. This way, we can utilize the class for any Login form, whatever the technology used to develop it. We can say that the class is technology agnostic.
The parent client dialog/browser/window of the objects is retrieved using the GetTOProperty("parent") statement:

Function Initialize(ByVal ArgsDic)
    Set m_txtUsername = ArgsDic("Username")
    Set m_txtIdField = ArgsDic("IdField")
    Set m_txtPassword = ArgsDic("Password")
    Set m_btnLogin = ArgsDic("LoginButton")
    Set m_chkRemember = ArgsDic("Remember")
    'Get Parents
    Set m_wndLoginForm = me.Username.GetTOProperty("parent")
    Set m_wndContainer = me.LoginForm.GetTOProperty("parent")
End Function

In addition, here you can see the following properties used in the class for better readability:

Property Get Container()
    Set Container = m_wndContainer
End Property

Property Get LoginForm()
    Set LoginForm = m_wndLoginForm
End Property

Property Get Username()
    Set Username = m_txtUsername
End Property

Property Get IdField()
    Set IdField = m_txtIdField
End Property

Property Get Password()
    Set Password = m_txtPassword
End Property

Property Get Remember()
    Set Remember = m_chkRemember
End Property

Property Get LoginButton()
    Set LoginButton = m_btnLogin
End Property

Private Sub Class_Initialize()
    'TODO: Additional initialization code here
End Sub

Private Sub Class_Terminate()
    'TODO: Additional finalization code here
End Sub

We will also add a custom function to override the Set method of the WinEdit and WinEditor test objects (implemented with their Type method):

Function WinEditSet(ByRef obj, ByVal str)
    obj.Type str
End Function

This way, no matter which technology the textbox belongs to, the Set method will work seamlessly. To actually test the Login class, write the following code in the Test Action (this time we assume that the Login form was already opened by another procedure):

Dim ArgsDic, oLogin

'Register the set method for the WinEdit and WinEditor
RegisterUserFunc "WinEdit", "WinEditSet", "Set"
RegisterUserFunc "WinEditor", "WinEditSet", "Set"

'Create a Dictionary object
Set ArgsDic = CreateObject("Scripting.Dictionary")

'Create a Login object
Set oLogin = New Login

'Add the test objects to the Dictionary
With ArgsDic
    .Add "Username", Browser("Gmail").Page("Gmail").WebEdit("txtUsername")
    .Add "Password", Browser("Gmail").Page("Gmail").WebEdit("txtPassword")
    .Add "Remember", Browser("Gmail").Page("Gmail").WebCheckbox("chkRemember")
    .Add "LoginButton", Browser("Gmail").Page("Gmail").WebButton("btnLogin")
End With

'Initialize the Login class
oLogin.Initialize(ArgsDic)

'Initialize the dictionary to pass the arguments to the login
ArgsDic.RemoveAll
With ArgsDic
    .Add "Username", "myuser"
    .Add "Password", "myencriptedpassword"
    .Add "Remember", "OFF"
End With

'Login
oLogin.Run(ArgsDic) 'or: oLogin(ArgsDic)

'Report result
Reporter.ReportEvent oLogin.Status, "Login", "Ended with " & GetStatusText(oLogin.Status) & "." & vbNewLine & oLogin.Info

'Dispose of the objects
Set oLogin = Nothing
Set ArgsDic = Nothing

How it works...

Here we will not delve into the parts of the code already explained in the Implementing a simple search class recipe. Let's see what we did in this recipe. We registered the custom function WinEditSet to the WinEdit and WinEditor TO classes using RegisterUserFunc. As discussed previously, this will make every call to the Set method be rerouted to our custom function, resulting in the correct method being applied to Standard Windows text fields. Next, we created the objects we need: a Dictionary object and a Login object. Then, we added the required test objects to the Dictionary and invoked the Initialize method, passing the Dictionary as the argument.
We cleared the Dictionary and then added to it the values needed for actually executing the login: the Username, the Password, and whether to remember the user (the keep-me-logged-in checkbox usually found on Login forms). We called the Run method of the Login class with the newly populated Dictionary. Later, we reported the result by taking the Status and Info public fields from the oLogin object. At the end of the script, we unregistered the custom function from all classes in the environment (StdWin in this case) using UnRegisterUserFunc (this call is not shown in the Action code above).

Implementing function pointers

What is a function pointer? A function pointer is a variable that stores the memory address of a block of code that is programmed to fulfill a specific function. Function pointers are useful to avoid complex switch case structures. Instead, they support direct access at runtime to previously loaded functions or class methods. This enables the construction of callback functions. A callback is, in essence, executable code that is passed as an argument to a function. This enables more generic coding, by having lower-level modules calling higher-level functions or subroutines. This recipe will describe how to implement function pointers in VBScript, a scripting language that does not natively support the usage of pointers.

Getting ready

Create a new function library (for instance, cls.FunctionPointers.vbs) and associate it with your test.

How to do it...

Write the following code in the function library:
This way of using the function pointer actually implements a kind of callback. The value returned by the Run method of WebEditSet will determine whether UFT will report a success or failure in regard to the Set operation. It will return through the call invoked by accessing the function pointer. See also The following articles are part of a wider collection at www.advancedqtp.com, which also discusses function pointers in depth: An article by Meir Bar-Tal at http://www.advancedqtp.com/ function-pointers-in-vb-script-revised An article by Meir Bar-Tal at http://www.advancedqtp.com/using-to-custom-property-as-function-pointer Summary In this article, we learned how to implement a general class; basic concepts and the syntax required by VBScript to implement a class. Then we saw how to implement a simple class that can be used to execute a search on Google and a generic Login class. We also saw how to implement function pointers in VBScript along with various links to the articles that discusses function pointers. Resources for Article: Further resources on this subject: DOM and QTP [Article] Getting Started with Selenium Grid [Article] Quick Start into Selenium Tests [Article]
Dealing with Upstream Proxies

Packt
27 Nov 2014
6 min read
This article is written by Akash Mahajan, the author of Burp Suite Essentials. We know that setting up Mozilla Firefox with the FoxyProxy Standard add-on to create a selective, pattern-based forwarding process allows us to ensure that only white-listed traffic from our browser reaches Burp. This is something that Burp allows us to set with its configuration options itself. Think of it like this: less traffic reaching Burp ensures that Burp is dealing with legitimate traffic, and its filters are working on ensuring that we remain within our scope.

(For more resources related to this topic, see here.)

As a security professional testing web applications, scope is a term you hear and read about everywhere. Many times, we are expected to test only parts of an application, and usually, the scope is limited by domain, subdomain, folder name, and even certain filenames. Burp gives a nice, simple-to-use interface to add, edit, and remove targets from the scope.

Dealing with upstream proxies and SOCKS proxies

Sometimes, the application that we need to test lies inside a corporate network. The client gives access to a specific IP address that is white-listed in the corporate firewall. At other times, we work inside the client location, but it requires us to use an internal proxy to get access to the staging site for testing. In all such cases and more, we need to be able to add an additional proxy that Burp can send data to before it reaches our target. In some cases, this proxy can be the one that the browser requires to reach the intranet or even the Internet. Since we would like to intercept all the browser traffic and Burp has become the proxy for the browser, we need to be able to chain the proxies and set the upstream one in Burp.

Types of proxies supported by Burp

We can configure additional proxies by navigating to Options | Connections. If you look carefully, the upstream proxy rule editor looks like the FoxyProxy add-on proxy window. That is not surprising, as both of them operate with URL patterns. We can carefully add the target as the destination that will require a proxy to be reached. Most standard proxies that support authentication are supported in Burp. Out of these, NTLM flavors are regularly found in networks with a Microsoft Active Directory infrastructure. The usage is straightforward: add the destination and the other details that should be provided to you by the network administrators.

Working with SOCKS proxies

SOCKS proxies are another common form of proxies in use. The most popular SOCKS-based proxy is Tor, which routes your entire browser traffic, including DNS lookups, through the proxy. Since the SOCKS proxy protocol works by taking all the traffic through it, the destination server can see the IP address of the SOCKS proxy. You can give this a whirl by running the Tor browser bundle, available at http://www.torproject.org/projects/torbrowser.html.en. Once the Tor browser bundle is running successfully, just add the following values in the SOCKS proxy settings of Burp. Make sure you check Use SOCKS proxy after adding the correct values. Have a look at the following screenshot:

Using SSH tunneling as a SOCKS proxy

Using SSH tunneling as a SOCKS proxy is quite useful when we want to give a white-listed IP address to a firewall administrator to access an application. So, the scenario here requires you to have access to a GNU/Linux server with a static IP address, which you can connect to using Secure Shell (SSH).
In Mac OS X and GNU/Linux shells, the following command will start a local SOCKS proxy:

ssh -D 12345 user@hostname.com

Once you are successfully logged in to your server, leave the session open so that Burp can keep using it. Now add localhost as the SOCKS proxy host and 12345 as the SOCKS proxy port, and you are good to go. In Windows, if we use a GNU command-line SSH client, the process remains the same. Otherwise, if you are a PuTTY fan, let's see how we can configure the same thing in it. In PuTTY, follow these steps to get the SSH tunnel working, which will be our SOCKS proxy:

Start PuTTY and click on SSH and then on Tunnels. Here, add a newly forwarded port. Give it the value of 12345. Under Destination, there is a bunch of radio buttons; choose Auto and Dynamic, and then click on the Add button.

Once this is set, connect to the server. Add the values localhost and 12345 in the Host and Port fields, respectively, in the Burp options for the SOCKS proxy. You can verify that your traffic is going through the SOCKS proxy by visiting any site that gives you your external IP address. I personally use my own web page for that, http://akashm.com/ip.php; you might want to try http://icanhazip.com or http://whatismyip.com.

Burp allows maximum connectivity with upstream and SOCKS proxies to make our job easier. By adding URL patterns, we can choose which upstream proxy is used for which destination. SOCKS proxies, due to their nature, take all the traffic and send it to another computer, so we can't choose which URLs to use them for. But this allows a simple-to-use workflow to test applications that are behind corporate firewalls and need our static IP white-listed before allowing access.

Setting up Burp to be a proxy server for other devices

So far, we have run Burp on our computer. This is good enough when we want to intercept the traffic of browsers running on our computer. But what if we would like to intercept traffic from our television, or from our iOS or Android devices? Currently, in the default configuration, Burp has started one listener on an internal interface on port number 8080. We can start multiple listeners on different ports and interfaces. We can do this in the Options subtab under the Proxy tab. Note that this is different from the main Options tab. We can add more than one proxy listener at the same time by following these steps:

Click on the Add button under Proxy Listeners.

Enter a port number. It can be the same 8080, but if it confuses you, you can give the number 8081.

Specify an interface and choose your LAN IP address.

Once you click on Ok, click on Running, and now you have started an external listener for Burp.

You can add the LAN IP address and the port number you added as the proxy server on your mobile device, and all HTTP traffic will get intercepted by Burp. Have a look at the following screenshot:

Summary

In this article, you learned how to use the SOCKS proxy server, especially in an SSH tunnel kind of scenario. You also learned how simple it is to create multiple listeners for Burp, which allows other devices in the network to send their HTTP traffic to the Burp interception proxy.

Resources for Article:

Further resources on this subject:
Quick start – Using Burp Proxy [article]
Nginx proxy module [article]
Using Nginx as a Reverse Proxy [article]
Creating CSS via the Stylus preprocessor

Packt
26 Nov 2014
5 min read
Instead of manually typing out each line of CSS you're going to require for your Ghost theme, we're going to get you set up to become highly efficient in your development through use of the CSS preprocessor named Stylus. Stylus can be described as a way of making CSS smart. It gives you the ability to define variables, create blocks of code that can be easily reused, perform mathematical calculations, and more. After Stylus code is written, it is compiled into a regular CSS file that is then linked into your design in the usual fashion. It is an extremely powerful tool with many capabilities, so we won't go into them all here; however, we will cover some of the essential features that will feature heavily in our theme development process. This article by Kezz Bracey, David Balderston, and Andy Boutte, authors of the book Getting Started with Ghost, covers how to create CSS via the Stylus preprocessor.

(For more resources related to this topic, see here.)

Variables

Stylus has the ability to create variables to hold any piece of information, from color codes to numerical values for use in your layout. For example, you could map out the color scheme of your design like this:

default_background_color = #F2F2F2
default_foreground_color = #333
default_highlight_color = #77b6f9

You could then use these variables all throughout your code instead of having to type them out multiple times:

body {
  background-color: default_background_color;
}
a {
  color: default_highlight_color;
}
hr {
  border-color: default_foreground_color;
}
.post {
  border-color: default_highlight_color;
  color: default_foreground_color;
}

After the preceding Stylus code was compiled into CSS, it would look like this:

body {
  background-color: #F2F2F2;
}
a {
  color: #77b6f9;
}
hr {
  border-color: #333;
}
.post {
  border-color: #77b6f9;
  color: #333;
}

So not only have you been saved the trouble of typing out these color code values repeatedly, which in a real style sheet means a lot of work, but you can also now easily update the color scheme of your site simply by changing the value of the variables you created. Variables come in very handy for many purposes, as you'll see when we get started on theme creation.

Stylus syntax

Stylus code uses a syntax that reads very much like CSS, but with the ability to take shortcuts in order to code faster and more smoothly. With Stylus, you don't need to include curly braces, colons, or semicolons. Instead, you use tab indentations, spaces, and new lines. For example, the code used in the last section could actually be written like this in Stylus:

body
  background-color default_background_color

a
  color default_highlight_color

hr
  border-color default_foreground_color

.post
  border-color default_highlight_color
  color default_foreground_color

You may think at first glance that this code is more difficult to read than regular CSS; however, shortly we'll be getting you running with a syntax highlighting package that will make your code look like this:

With the syntax highlighting package in place, you don't need punctuation to make your code readable, as the colors and emphasis allow you to easily differentiate between one thing and another. The chances are very high that you'll find coding in this manner much faster and easier than regular CSS syntax. However, if you're not comfortable, you can still choose to include the curly braces, colons, and semicolons you're used to, and your code will still compile just fine.
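The mathematical side mentioned at the start deserves a quick illustration of its own; the selector and values below are ours, not from the book. Since Stylus understands units, you can combine variables and arithmetic to derive layout values instead of hardcoding them:

content_width = 960px
gutter = 20px

.column
  width (content_width - 3 * gutter) / 4

This compiles to width: 225px;, and changing content_width or gutter later recalculates every derived value for you.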
The golden rules of writing in Stylus syntax are as follows:

After a class, ID, or element declaration, use a new line and then a tab indentation instead of curly braces
Ensure each line of a style is also subsequently tab indented
After a property, use a space instead of a colon
At the end of a line, after a value, use a new line instead of a semicolon

Mixins

Mixins are a very useful way of preventing yourself from having to repeat code, and they also allow you to keep your code well organized and compartmentalized. The best way to understand what a mixin is, is to see one in action. For example, you may want to apply the same font-family, font-weight, and color to each of your heading tags. So instead of writing the same thing out manually for each H tag level, you could create a mixin as follows:

header_settings()
  font-family Georgia
  font-weight 700
  color #454545

You could then call that mixin into the styles for your heading tags:

h1
  header_settings()
  font-size 3em

h2
  header_settings()
  font-size 2.25em

h3
  header_settings()
  font-size 1.5em

When compiled, you would get the following CSS:

h1 {
  font-family: Georgia;
  font-weight: 700;
  color: #454545;
  font-size: 3em;
}

h2 {
  font-family: Georgia;
  font-weight: 700;
  color: #454545;
  font-size: 2.25em;
}

h3 {
  font-family: Georgia;
  font-weight: 700;
  color: #454545;
  font-size: 1.5em;
}

As we move through the Ghost theme development process, you'll see just how useful and powerful Stylus is, and you'll never want to go back to hand-coding CSS again!

Summary

You now have everything in place and ready to begin your Ghost theme development process. You understand the essentials of Stylus, the means by which we'll be creating your theme's CSS.

Resources for Article:

Further resources on this subject:
Advanced SOQL Statements [Article]
Enabling your new theme in Magento [Article]
Introduction to a WordPress application's frontend [Article]
Audio Processing and Generation in Max/MSP

Packt
25 Nov 2014
19 min read
This article by Patrik Lechner, the author of Multimedia Programming Using Max/MSP and TouchDesigner, focuses on audio-specific examples. We will take a look at the following audio processing and generation techniques:

Additive synthesis
Subtractive synthesis
Sampling
Wave shaping

Nearly every example provided here might be understood very intuitively or taken apart in hours of math and calculation. It's up to you how deep you want to go, but in order to develop some intuition, we'll have to use some amount of Digital Signal Processing (DSP) theory. We will cover the DSP theory only briefly, so it is highly recommended that you study its fundamentals more deeply in case you are not already familiar with this scientific topic.

(For more resources related to this topic, see here.)

Basic audio principles

We already saw and stated that it's important to know, see, and hear what's happening along a signal path. If we work in the realm of audio, there are four especially important ways to measure a signal. They are conceptually quite different, and together they offer a very broad perspective on audio signals if we always keep all of them in the back of our heads. These are the following:

Numbers (actual sample values)
Levels (such as RMS, LUFS, and dB FS)
Transversal waves (waveform displays, that is, oscilloscopes)
Spectra (an analysis of frequency components)

There are many more ways to think about audio or signals in general, but these are the most common and important ones. Let's use them inside Max right away to observe their different behavior. We'll feed some very basic signals into them: DC offset, a sinusoid, and noise. The one that might surprise you the most and get you thinking is the constant signal or DC offset (if it's digital-to-analog converted). In the following screenshot, you can see how the different displays react:

In general, one might think we don't want any constant signals at all; we will, however, use audio signals a lot to control things later, say, an LFO or sequencers that should run with great timing accuracy. Also, sometimes we just add a DC offset to our audio streams by accident. You can see in the preceding screenshot that a very slowly moving or constant signal can be observed best by looking at its value directly, for example, using the [number~] object. In a level display, the [meter~] or [levelmeter~] objects will seem to imply that the incoming signal is very loud; in fact, it should be at -6 dB Full Scale (FS). Although it is very loud, we just can't hear anything, since the frequency is infinitely low. This is reflected by the spectrum display too; we see a very low frequency at -6 dB. In theory, we should just see an infinitely thin spike at 0 Hz, so everything else can be considered an (inevitable but reducible) measuring error.

Audio synthesis

Awareness of these possibilities of viewing a signal and their constraints, and knowing how they actually work, will greatly increase our productivity. So let's get to actually synthesizing some waveforms. A good example of different views of a signal operation is Amplitude Modulation (AM); we will also try to formulate some other general principles using the example of AM.

Amplitude modulation

Amplitude modulation means the multiplication of a signal with an oscillator. This provides a method of generating sidebands, that is, additional partials, in a very easy, intuitive, and CPU-efficient way.
Audio synthesis

Awareness of these ways of viewing a signal and their constraints, and knowing how they actually work, will greatly increase our productivity. So let's get to actually synthesizing some waveforms. A good example of the different views of a signal operation is Amplitude Modulation (AM); we will also use AM to formulate some general principles.

Amplitude modulation

Amplitude modulation means the multiplication of a signal with an oscillator. It provides a method of generating sidebands, and therefore partials, in a very easy, intuitive, and CPU-efficient way. Amplitude modulation might seem to be a term with a very broad meaning, applicable as soon as we change a signal's amplitude by means of another signal. While this may be true in general, in the context of audio synthesis it very specifically means the multiplication of two (most often sine) oscillators. Moreover, there is a distinction between AM and ring modulation. But before we get to this distinction, let's look at the following simple multiplication of two sine waves, viewing the result in an oscilloscope as a wave:

In the preceding screenshot, we can see the two sine waves and their product. If we imagine every pair of samples being multiplied, the operation seems pretty intuitive, as the result is what we would expect. But what does this resulting wave really mean besides looking like a product of two sine waves? What does it sound like? The wave seems to have stayed in there, certainly, right? Well, viewing the product as a wave and looking at the whole process in the time domain rather than the frequency domain is helpful but slightly misleading. So let's jump over to the frequency domain and look at what's happening with the spectrum:

We can observe here that if we multiply a sine wave a with a sine wave b, a having a frequency of 1000 Hz and b a frequency of 100 Hz, we end up with two sine waves, one at 900 Hz and another at 1100 Hz. The original sine waves have disappeared. In general, we can say that the result of multiplying a and b contains the sum and the difference of their frequencies. This is shown in the Equivalence to Sum and difference subpatcher (in the following screenshot, the two inlets to the spectrum display overlap completely, which might be hard to see):

So in the preceding screenshot, you see a basic AM patcher that produces sidebands we can predict quite easily. Multiplication is commutative, you will say; 1000 + 100 = 1100 and 1000 - 100 = 900, that's alright, but what about 100 - 1000 and 100 + 1000? Don't we also get -900 and 1100? It still works out, and the fact that it does has to do with negative frequencies, or rather with the symmetry of a real signal's spectrum around 0 Hz. So you can see that the two ways of looking at our signal and thinking about AM offer different opportunities and pitfalls. Here is yet another way to think about AM: it is the convolution of the two spectra. We haven't talked about convolution yet; we will at a later point. But keep it in mind or do a little research on your own; this aspect of AM is yet another interesting one.

Ring modulation versus amplitude modulation

The difference between ring modulation and what we call AM in this context is that the former uses a bipolar modulator and the latter a unipolar one. So actually, this is just a matter of scaling and offsetting one of the factors. The difference in the outcome is a big one, though: if we keep one oscillator unipolar, the other one will still be present in the output. If we do so, it starts making sense to call one oscillator the carrier and the other (unipolar) one the modulator. It also introduces a modulation depth, which controls the amplitude of the sidebands. In the following screenshot, you can see the resulting spectrum: we have the original signal, that is, the carrier, plus two sidebands, which are copies of the original spectrum shifted up and down:

You can see, therefore, that AM gives us a way to roughen up a spectrum: we can use it to let an original spectrum through and add sidebands to it.
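We can verify the sum-and-difference behavior numerically. The following is a small sketch in Python with NumPy rather than in Max; the sample rate and the one-second duration are choices made so that each FFT bin lands exactly on a whole hertz:

import numpy as np

sr = 44100
t = np.arange(sr) / sr  # one second of time

carrier = np.sin(2 * np.pi * 1000 * t)    # sine a at 1000 Hz
modulator = np.sin(2 * np.pi * 100 * t)   # sine b at 100 Hz
product = carrier * modulator

# Magnitude spectrum; with one second of signal, bin k corresponds to k Hz
spectrum = np.abs(np.fft.rfft(product))
print(sorted(np.argsort(spectrum)[-2:]))  # [900, 1100]: only the sidebands remain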
Tremolo

Tremolo (from the Latin tremare, to shake or tremble) is a musical term that means changing a sound's amplitude in regular, short intervals. Many people confuse it with vibrato, which is modulating the pitch at regular intervals. AM gives tremolo and FM gives vibrato; as a simple reminder, think that the V of vibrato is closer to the F of FM than to the A of AM.

So multiplying the two oscillators results in a different spectrum. But of course, we can also use multiplication simply to scale a signal, that is, to change its amplitude. If we wanted a sine wave with a tremolo, an oscillating variation in amplitude with, say, a frequency of 1 Hz, we would again multiply two sine waves, one at, for example, 1000 Hz and another with a frequency of 0.5 Hz. Why 0.5 Hz? Think about a sine wave: it has two peaks per cycle, a positive one and a negative one. We can visualize all of that very well if we think about it in the time domain, looking at the result in an oscilloscope. But what about our view of the frequency domain? Well, let's go through it: when we multiply a sine at 1000 Hz with one at 0.5 Hz, we actually get two sine waves, one at 999.5 Hz and one at 1000.5 Hz. Frequencies that close together create beating, since every once in a while their positive and negative peaks overlap and cancel each other out. In general, the frequency of the beating is given by the difference between the two frequencies, which is 1 Hz in this case. So we see that, looked at this way, we arrive at the same result again, but this time we actually think of two frequencies instead of one frequency being attenuated. Lastly, we could have looked up the trigonometric identities to anticipate what happens when we multiply two sine waves. We find the following product-to-sum identity:

sin(φ) · sin(θ) = 1/2 · [cos(φ - θ) - cos(φ + θ)]

Here, φ and θ are the two angular frequencies multiplied by the time in seconds, for example:

φ = 2π · 1000 · t

so that sin(φ) is the 1000 Hz sine wave.

Feedback

Feedback always brings the complexity of a system to the next level. It can be used to stabilize a system, but it can also easily make a given system unstable. In a strict sense, in the context of DSP, stability means that for finite input to a system, we get finite output. Obviously, feedback can give us infinite output for finite input. We can use attenuated feedback, for example, not only to make our AM patches recursive, adding more and more sidebands, but also to achieve some surprising results, as we will see in a minute. Before we look at this application, let's quickly talk about feedback in general.

In the digital domain, feedback always demands some amount of delay. This is because evaluating the chain of operations would otherwise require an infinite number of operations on a single sample. This is true both for the Max message domain (we get a stack overflow error if we use feedback without delaying or breaking the chain of events) and for the MSP domain; audio will simply stop working if we try it. So the minimum network for a feedback chain, drawn as a block diagram, looks something like this:

In the preceding diagram, X is the input signal and x[n] is the current input sample; Y is the output signal and y[n] is the current output sample. The block marked z^-m is a delay of m samples (m being a constant). Denoting a delay by z^-m comes from a mathematical construct named the Z-transform. The term a is also a constant, used to attenuate the feedback loop. If no feedback is involved, it's sometimes helpful to think about block diagrams as processing whole signals.
For example, if you think of a block diagram that consists only of a multiplication with a constant, it makes a lot of sense to think of its output signal as a scaled version of the input signal; we wouldn't think about the network's processing, or its output, sample by sample. However, as soon as feedback is involved, sample by sample is exactly how we should think about the network. Before we look at the Max version of things, let's look at the difference equation of the network to get a better feeling for the notation. Try to find it yourself before looking at it too closely! For the diagram above, it reads:

y[n] = x[n] + a · y[n - m]

In Max, or rather in MSP, we can introduce feedback as soon as we use a [tapin~] [tapout~] pair, which introduces a delay. The minimum delay possible is the signal vector size. Another way is to simply use a [send~] and [receive~] pair in our loop. The [send~] and [receive~] pair will automatically introduce this minimum amount of delay if needed, so the delay will be introduced only if there is a feedback loop. If we need shorter delays with feedback, we have to go into the wonderful world of gen~. Here, our shortest delay time is one sample, and it can be introduced via the [history] object. In the Fbdiagram.maxpat patcher, you can find a Max version, an MSP version, and a [gen~] version of our diagram. For the time being, let's just pretend that the gen~ domain is simply another subpatcher/abstraction system that allows shorter delays with feedback and has a more limited set of objects that more or less work like the MSP ones. In the following screenshot, you can see the difference between the output of the MSP and the [gen~] domains. Obviously, the length of the delay time has quite an impact on the output. Also, don't forget that the MSP version's output will vary greatly depending on our vector size settings.

Let's return to AM now. Feedback can, for example, be used to duplicate and shift our spectrum again and again. In the following screenshot, you can see a 1000 Hz sine wave that has been processed by recursive AM to be duplicated and shifted up and down with a 100 Hz spacing:

A maybe surprising result we can achieve with this technique is this: if the modulating oscillator and the carrier have the same frequency, we end up with something that almost sounds like a sawtooth wave.
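Before moving on to frequency modulation, the difference equation above is worth a quick numeric sanity check. Here is a minimal sketch of it in Python; the delay of m = 4 samples and the attenuation a = 0.5 are arbitrary choices for the demonstration:

def feedback(x, a=0.5, m=4):
    # y[n] = x[n] + a * y[n - m]
    y = [0.0] * len(x)
    for n in range(len(x)):
        y[n] = x[n] + (a * y[n - m] if n >= m else 0.0)
    return y

impulse = [1.0] + [0.0] * 15
print(feedback(impulse))
# 1.0 at sample 0, then 0.5 at sample 4, 0.25 at sample 8, and so on: every
# pass through the loop returns m samples later, attenuated by a. With
# |a| < 1 the output stays finite, so the system is stable.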
Frequency modulation

Frequency modulation, or FM, is a technique that allows us to create a lot of frequency components out of just two oscillators, which is why it was used a lot back in the days when oscillators were a rare, expensive good and CPU performance was low. Still, efficiency remains a crucial factor, especially when dealing with real-time synthesis, and the huge variety of sounds that can be achieved with just two oscillators and very few parameters can be very useful for live performance and so on. The idea of FM is, of course, to modulate an oscillator's frequency. The basic, admittedly useless form is depicted in the following screenshot:

Trying to visualize what happens with the output in the time domain, we can imagine it as shown in the following screenshot. Here, the signal controlling the frequency is a sine wave with a frequency of 50 Hz, scaled and offset to range from -1000 to 5000, so the center or carrier frequency is 2000 Hz, modulated by an amount of 3000 Hz.

You can see the output of the modulated oscillator in the following screenshot:

If we extend the upper patch slightly, we end up with this:

Although you can't see it in the screenshot, the sidebands appear with a 100 Hz spacing here, that is, with a spacing equal to the modulator's frequency. Pretty similar to AM, right? But depending on the modulation amount, we get more and more sidebands.

Controlling FM

If the ratio between F(c) and F(m) is an integer, we end up with a harmonic spectrum; therefore, it may be more useful to control F(m) indirectly via a ratio parameter, as is done inside the SimpleRatioAndIndex subpatcher. Also, an index parameter is typically introduced to make an FM patch even more controllable. The modulation index is defined as follows:

I = Am / fm

Here, I is the index, Am is the amplitude of the modulation (what we called the amount before), and fm is the modulator's frequency. So finally, after adding these two controls, we might arrive here:

FM offers a wide range of possibilities. For example, the fact that we have a simple control over how harmonic or inharmonic our spectrum is can be used to synthesize the mostly noisy attack phase of many instruments, if we drive the ratio and index with an envelope as is done in the SimpleEnvelopeDriven subpatcher. However, it's also very easy to synthesize very artificial, strange sounds. This basically has the following two reasons. Firstly, the partials that appear have amplitudes governed by Bessel functions, which may seem quite unpredictable; the partials sometimes seem to have random amplitudes. Secondly, negative frequencies and foldback. If we generate partials with frequencies below 0 Hz, the result is equivalent to the same positive frequency. Frequencies greater than half the sample rate (sample rate/2 is what's called the Nyquist rate) reflect back into the spectrum that can be described at our sampling rate (an effect also called aliasing). So at a sampling rate of 44,100 Hz, a partial with a frequency of -100 Hz will appear at 100 Hz, and a partial with a frequency of 43,100 Hz will appear at 1,000 Hz, as shown in the following screenshot:

So, for frequencies between the Nyquist frequency and the sampling frequency, what we hear is described by this:

f0 = fs - fi

Here, fs is the sampling rate, f0 is the frequency we hear, and fi is the frequency we are trying to synthesize. Since FM leads to many partials, this effect can easily come up, and it can either be used in an artistically interesting manner or appear as an unwanted error. In theory, an FM signal's partials extend to infinity, but their amplitudes become negligibly small. If we want to reduce this behavior, the [poly~] object can be used to oversample the process, generating a bit more headroom for high frequencies. The phenomenon of aliasing can be understood by thinking of a real (in contrast to imaginary) digital signal as having a symmetrical and periodical spectrum; let's not go into too much detail here and instead look at it in the time domain:

In the preceding screenshot, we again tried to synthesize a sine wave at 43,100 Hz (the dotted line) at a sampling rate of 44,100 Hz. What we actually get is the solid black line, a sine at 1,000 Hz. Each big black dot represents an actual sample, and there is only one band-limited signal connecting them: the 1,000 Hz wave, which is only partly visible here (about half its wavelength).
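Foldback is also easy to demonstrate numerically. In the following Python sketch (again with NumPy, and again using one second of signal so that bin k of the FFT sits at k Hz), we try to synthesize 43,100 Hz at a 44,100 Hz sample rate and then look where the energy actually ends up:

import numpy as np

sr = 44100
t = np.arange(sr) / sr
x = np.sin(2 * np.pi * 43100 * t)  # an attempt at 43,100 Hz

spectrum = np.abs(np.fft.rfft(x))
print(np.argmax(spectrum))  # 1000: the partial folds back to 1,000 Hz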
Feedback

It is very common to use feedback with FM. We can even frequency modulate one oscillator with itself, making the algorithm even cheaper, since we have only one table lookup. The idea of feedback FM quickly leads to the idea of building networks of oscillators that can modulate each other, including feedback paths, but let's keep it simple for now. One might think that modulating an oscillator with itself should produce chaos, and that, FM being a technique that is not the easiest to control anyway, one shouldn't bother playing around with single-operator feedback FM. But the opposite is the case. Single-operator feedback FM yields very predictable partials, as shown in the following screenshot, and in the Single OP FBFM subpatcher:

Again, we are using a gen~ patch, since we want to create a feedback loop and are heading for a short delay in the loop. Note that we are using the [param] object to pass a message into the gen~ object. What should catch your attention is that although the carrier frequency has been adjusted to 1000 Hz, the fundamental frequency in the spectrum is around 600 Hz. What can help us here is switching to phase modulation.

Phase modulation

If you look at the gen~ patch in the previous screenshot, you see that we are driving our sine oscillator with a phasor. The cycle object's phase inlet expects an input ranging from 0 to 1 instead of from 0 to 2π, as one might think. To drive a sine wave through one full cycle in math, we use a variable ranging from 0 to 2π, so in the following formula, you can imagine t being provided by a phasor, the running phase. The 2π multiplication isn't necessary in Max, since if we are using [cycle~], we are actually reading out a wavetable instead of really computing the sine or cosine of the input. The most common form of denoting a running sinusoid with frequency f0 and phase φ is:

y(t) = sin(2π · f0 · t + φ)

Try to come up with a formula that describes frequency modulation! Setting the phase to zero for simplicity, we can denote FM as follows:

y(t) = sin(2π · (f0 + A · sin(2π · fm · t)) · t)

This can be shown to be nearly identical to the following formula:

y(t) = sin(2π · f0 · t + A · sin(2π · fm · t))

Here, f0 is the frequency of the carrier, fm is the frequency of the modulator, and A is the modulation amount. Welcome to phase modulation. If you compare them, the second formula simply inserts a scaled sine wave where the phase φ used to be. So phase modulation is nearly identical to frequency modulation. Phase modulation has some advantages, though, such as providing us with an easy method of synchronizing multiple oscillators. But let's go back to the Max side of things and look at a feedback phase modulation patch right away (ignoring simple phase modulation, since it really is so similar to FM):

This gen~ patcher resides inside the One OP FBPM subpatcher and implements phase modulation using one oscillator and feedback. Interestingly, the spectrum is very similar to that of a sawtooth wave, with the feedback amount having an effect similar to a low-pass filter, controlling the number of partials. If you take a look at the subpatcher, you'll find the following three sound sources:

Our feedback FM gen~ patcher
A [saw~] object for comparison
A poly~ object

We have already mentioned the problem of aliasing, and the [poly~] object has already been proposed as a way to treat it. However, it also allows us to define the quality of parts of our patches in general, so let's talk about the object a bit before moving on, since we will make great use of it.
Before moving on, note that you can double-click on a [poly~] object to see what is loaded inside; if you do, you will see that the subpatcher we just discussed contains a [poly~] object that in turn contains yet another version of our gen~ patcher.

Summary

In this article, we finally got to talking about audio. We introduced some very common techniques and thought about refining them and getting things done properly and efficiently (think of poly~). By now, you should feel quite comfortable building synths that mix techniques such as FM, subtractive synthesis, and feedback modulation, as well as using matrices for routing both audio and modulation signals where you need them.

Further resources on this subject:

Moodle for Online Communities [Article]
Techniques for Creating a Multimedia Database [Article]
Moodle 2.0 Multimedia: Working with 2D and 3D Maps [Article]


Components

Packt
25 Nov 2014
14 min read
This article by Timothy Moran, author of Mastering KnockoutJS, teaches you how to use the new Knockout components feature. (For more resources related to this topic, see here.)

In version 3.2, Knockout added components, which combine a template (view) with a viewmodel to create reusable, behavior-driven DOM objects. Knockout components are inspired by web components, a new (and experimental, at the time of writing this) set of standards that allow developers to define custom HTML elements paired with JavaScript to create packaged controls. Like web components, Knockout allows the developer to use custom HTML tags to represent these components in the DOM. Knockout also allows components to be instantiated with a binding handler on standard HTML elements. Knockout binds components by injecting an HTML template, which is bound to its own viewmodel. This is probably the single largest feature Knockout has ever added to the core library. The reason we started with RequireJS is that components can optionally be loaded and defined with module loaders, including their HTML templates! This means that our entire application (even the HTML) can be defined in independent modules, instead of as a single hierarchy, and loaded asynchronously.

The basic component registration

Unlike extenders and binding handlers, which are created by just adding an object to Knockout, components are created by calling the ko.components.register function:

ko.components.register('contact-list', {
viewModel: function(params) { },
template: //template string or object
});

This will create a new component named contact-list, which uses the object returned by the viewModel function as a binding context, and the template as its view. It is recommended that you use lowercase, dash-separated names for components so that they can easily be used as custom elements in your HTML. To use this newly created component, you can use a custom element or the component binding. All the following three tags produce equivalent results:

<contact-list params="data: contacts"></contact-list>
<div data-bind="component: { name: 'contact-list', params: { data: contacts } }"></div>
<!-- ko component: { name: 'contact-list', params: { data: contacts } } --><!-- /ko -->

Obviously, the custom element syntax is much cleaner and easier to read. It is important to note that custom elements cannot be self-closing tags. This is a restriction of the HTML parser and cannot be controlled by Knockout. There is one advantage of using the component binding: the name of the component can be an observable. If the name of the component changes, the previous component will be disposed (just as it would be if a control flow binding removed it) and the new component will be initialized. The params attribute of custom elements works in a manner similar to the data-bind attribute. Comma-separated key/value pairs are parsed to create a property bag, which is given to the component. The values can contain JavaScript literals, observable properties, or expressions. It is also possible to register a component without a viewmodel, in which case the object created by params is directly used as the binding context.
To see this, we'll convert the list of contacts into a component:

<contact-list params="contacts: displayContacts, edit: editContact, delete: deleteContact">
</contact-list>

The HTML code for the list is replaced with a custom element with parameters for the list as well as callbacks for the two buttons, edit and delete:

ko.components.register('contact-list', {
template: '<ul class="list-unstyled" data-bind="foreach: contacts">'
    +'<li>'
      +'<h3>'
        +'<span data-bind="text: displayName"></span> <small data-bind="text: phoneNumber"></small> '
        +'<button class="btn btn-sm btn-default" data-bind="click: $parent.edit">Edit</button> '
        +'<button class="btn btn-sm btn-danger" data-bind="click: $parent.delete">Delete</button>'
      +'</h3>'
    +'</li>'
+'</ul>'
});

This component registration uses an inline template. Everything still looks and works the same, but the resulting HTML now includes our custom element.

Custom elements in IE 8 and higher

IE 9 and later versions, as well as all other major browsers, have no issue with seeing custom elements in the DOM before they have been registered. However, older versions of IE will remove the element if it hasn't been registered. The registration can be done either with Knockout, using ko.components.register('component-name'), or with the standard document.createElement('component-name') expression statement. One of these must come before the custom element, either by the script containing them being first in the DOM or by the custom element being added at runtime. When using RequireJS, being first in the DOM won't help, as the loading is asynchronous. If you need to support older IE versions, it is recommended that you include a separate script to register the custom element names at the top of the body tag or in the head tag:

<!DOCTYPE html>
<html>
<body>
   <script>
     document.createElement('my-custom-element');
   </script>
   <script src='require.js' data-main='app/startup'></script>
     <my-custom-element></my-custom-element>
</body>
</html>

Once this has been done, components will work in IE 6 and higher, even with custom elements.

Template registration

The template property of the configuration sent to register can take any of the following formats:

ko.components.register('component-name', { template: [OPTION] });

The element ID

Consider the following code statement:

template: { element: 'component-template' }

If you specify the ID of an element in the DOM, the contents of that element will be used as the template for the component. Although it isn't supported in IE yet, the template element is a good candidate, as browsers do not visually render the contents of template elements.

The element instance

Consider the following code statement:

template: { element: instance }

You can pass a real DOM element to the template to be used. This might be useful in a scenario where the template was constructed programmatically.
Like the element ID method, only the contents of the element will be used as the template:

var template = document.getElementById('contact-list-template');
ko.components.register('contact-list', {
template: { element: template }
});

An array of DOM nodes

Consider the following code statement:

template: [nodes]

If you pass an array of DOM nodes to the template configuration, the entire array will be used as the template, not just the descendants:

var template = document.getElementById('contact-list-template'),
    nodes = Array.prototype.slice.call(template.content.childNodes);
ko.components.register('contact-list', {
template: nodes
});

Document fragments

Consider the following code statement:

template: documentFragmentInstance

If you pass a document fragment, the entire fragment will be used as the template instead of just the descendants:

var template = document.getElementById('contact-list-template');
ko.components.register('contact-list', {
template: template.content
});

This example works because template elements wrap their contents in a document fragment in order to stop the normal rendering. Using the content is the same method Knockout uses internally when a template element is supplied.

HTML strings

We already saw an example of an HTML string in the previous section. While supplying the value inline is probably uncommon, supplying a string would be an easy thing to do if your build system provided it for you.

Registering templates using the AMD module

Consider the following code statement:

template: { require: 'module/path' }

If a require property is passed to the configuration object of a template, the default module loader will load the module and use it as the template. The module can return any of the preceding formats. This is especially useful for the RequireJS text plugin:

ko.components.register('contact-list', {
template: { require: 'text!contact-list.html' }
});

Using this method, we can extract the HTML template into its own file, drastically improving its organization. By itself, this is a huge benefit to development.

The viewmodel registration

Like template registration, viewmodels can be registered using several different formats. To demonstrate this, we'll use a simple viewmodel for our contact-list component:

function ListViewmodel(params) {
  this.contacts = params.contacts;
  this.edit = params.edit;
  this.delete = function(contact) {
    console.log('Mock Deleting Contact', ko.toJS(contact));
  };
}

To verify that things are getting wired up properly, you'll want something interactive; hence the fake delete function.

The constructor function

Consider the following code statement:

viewModel: Constructor

If you supply a function to the viewModel property, it will be treated as a constructor. When the component is instantiated, new will be called on the function, with the params object as its first parameter:

ko.components.register('contact-list', {
template: { require: 'text!contact-list.html' },
viewModel: ListViewmodel //Defined above
});

A singleton object

Consider the following code statement:

viewModel: { instance: singleton }

If you want all your component instances to be backed by a shared object (though this is not recommended), you can pass it as the instance property of a configuration object. Because the object is shared, parameters cannot be passed to the viewmodel using this method.
The factory function

Consider the following code statement:

viewModel: { createViewModel: function(params, componentInfo) {} }

This method is useful because it supplies the container element of the component via componentInfo.element, the second parameter. It also provides you with the opportunity to perform any other setup, such as modifying or extending the constructor parameters. The createViewModel function should return an instance of a component viewmodel:

ko.components.register('contact-list', {
template: { require: 'text!contact-list.html' },
viewModel: { createViewModel: function(params, componentInfo) {
   console.log('Initializing component for', componentInfo.element);
   return new ListViewmodel(params);
}}
});

Registering viewmodels using an AMD module

Consider the following code statement:

viewModel: { require: 'module-path' }

Just like templates, viewmodels can be registered with an AMD module that returns any of the preceding formats.

Registering with AMD

In addition to registering the template and the viewmodel as AMD modules individually, you can register the entire component with a require call:

ko.components.register('contact-list', { require: 'contact-list' });

The AMD module will return the entire component configuration:

define(['knockout', 'text!contact-list.html'],
function(ko, templateString) {
  function ListViewmodel(params) {
    this.contacts = params.contacts;
    this.edit = params.edit;
    this.delete = function(contact) {
      console.log('Mock Deleting Contact', ko.toJS(contact));
    };
  }
  return { template: templateString, viewModel: ListViewmodel };
});

As the Knockout documentation points out, this method has several benefits:

The registration call is just a require path, which is easy to manage.
The component is composed of two parts: a JavaScript module and an HTML module. This provides both simple organization and clean separation.
The RequireJS optimizer, r.js, can use the text dependency on the HTML module to bundle the HTML code with the bundled output. This means your entire application, including the HTML templates, can be a single file in production (or a collection of bundles if you want to take advantage of lazy loading).

Observing changes in component parameters

Component parameters are passed via the params object to the component's viewmodel in one of the following three ways:

No observable expression evaluation needs to occur, and the value is passed literally:

<component params="name: 'Timothy Moran'"></component>
<component params="name: nonObservableProperty"></component>
<component params="name: observableProperty"></component>
<component params="name: viewModel.observableSubProperty"></component>

In all of these cases, the value is passed directly to the component on the params object. This means that changes to these values will change the property on the instantiating viewmodel, except in the first case (literal values). Observable values can be subscribed to normally.

An observable expression needs to be evaluated, so it is wrapped in a computed observable:

<component params="name: name() + '!'"></component>

In this case, params.name is not the original property. Calling params.name() will evaluate the computed wrapper. Trying to modify the value will fail, as the computed value is not writable. The value can be subscribed to normally.

An observable expression evaluates to an observable instance, so it is wrapped in an observable that unwraps the result of the expression:

<component params="name: isFormal() ? firstName : lastName"></component>
In this example, firstName and lastName are both observable properties. If calling params.name() returned the observable, you would need to call params.name()() to get the actual value, which is rather ugly. Instead, Knockout automatically unwraps the expression, so that calling params.name() returns the actual value of either firstName or lastName. If you need to access the actual observable instances, for example, to write a value to them, trying to write to params.name will fail, as it is a computed observable. To get the unwrapped value, you can use the params.$raw object, which provides the unwrapped values. In this case, you can update the name by calling params.$raw.name('New'). In general, this situation should be avoided by removing the logic from the binding expression and placing it in a computed observable in the viewmodel.

The component's life cycle

When a component binding is applied, Knockout takes the following steps:

1. The component loader asynchronously creates the viewmodel factory and template. This result is cached so that the work is only performed once per component.
2. The template is cloned and injected into the container (either the custom element or the element with the component binding).
3. If the component has a viewmodel, it is instantiated. This is done synchronously.
4. The component is bound to either the viewmodel or the params object.
5. The component is left active until it is disposed.
6. The component is disposed. If the viewmodel has a dispose method, it is called, and then the template is removed from the DOM.

The component's disposal

If the component is removed from the DOM by Knockout, either because the name in the component binding changed or because a control flow binding (for example, if or foreach) removed it, the component will be disposed. If the component's viewmodel has a dispose function, it will be called. Normal Knockout bindings in the component's view will be automatically disposed, just as they would be in a normal control flow situation. However, anything set up by the viewmodel needs to be cleaned up manually. Some examples of viewmodel cleanup include the following:

setInterval callbacks can be removed with clearInterval.
Computed observables can be removed by calling their dispose method. Pure computed observables don't need to be disposed. Computed observables that are only used by bindings or other viewmodel properties also do not need to be disposed, as garbage collection will catch them.
Observable subscriptions can be disposed by calling their dispose method.
Event handlers can be created by components that are not part of a normal Knockout binding.

Combining components with data bindings

There is only one restriction on data-bind attributes used on custom elements with the component binding: the binding handlers cannot use controlsDescendantBindings. This isn't a new restriction; two bindings that control descendants cannot be on a single element, and since components control descendant bindings, they cannot be combined with a binding handler that also controls descendants. It is worth remembering, though, as you might be inclined to place an if or foreach binding on a component; doing this will cause an error. Instead, wrap the component with an element or a containerless binding:

<ul data-bind='foreach: allProducts'>
<product-details params='product: $data'></product-details>
</ul>

It's also worth noting that bindings such as text and html will replace the contents of the element they are on.
When used with components, this will potentially result in the component being lost, so it's not a good idea.

Summary

In this article, we learned that the Knockout components feature gives you a powerful tool that will help you create reusable, behavior-driven DOM elements.

Resources for Article:

Further resources on this subject:

Deploying a Vert.x application [Article]
The Dialog Widget [Article]
Top features of KnockoutJS [Article]

A look into responsive design frameworks

Packt
19 Nov 2014
11 min read
In this article by Thoriq Firdaus, author of Responsive Web Design by Example Beginner's Guide, Second Edition, we will look into responsive web design, one of the most discussed topics in the web design and development community. So I believe many of you have heard about it to a certain extent. (For more resources related to this topic, see here.)

Ethan Marcotte was the one who coined the term "responsive web design". He suggests in his article, Responsive Web Design, that the web should seamlessly adjust and adapt to the environment in which users view the website, rather than being addressed exclusively for a specific platform. In other words, the website should be responsive: it should be presentable at any screen size, regardless of the platform on which it is viewed.

Take the Time website as an example: the web page fits nicely in a desktop browser with a large screen size and also in a mobile browser with a limited viewable area. The layout shifts and adapts as the viewport size changes. As you can see from the following screenshot, on a small screen the header background color turns dark grey, the image is scaled down proportionally, and a tap bar appears behind which Time tucks away the Latest News, Magazine, and Videos sections:

Yet, building a responsive website can be very tedious work. There are many measurements to consider when building a responsive website, one of which is creating the responsive grid. A grid helps us build websites with proper alignment. If you have ever used the 960.gs framework, one of the popular CSS frameworks, you will have experienced how easy it is to organize a web page layout by adding preset classes such as grid_1 or push_1 to the elements. However, the 960.gs grid is set in a fixed unit, pixels (px), which is not applicable when it comes to building a responsive website. We need a framework with the grid set in a percentage (%) unit to build responsive websites; we need a responsive framework.

A responsive framework provides the building blocks to build responsive websites. Generally, it includes the classes to assemble a responsive grid, the basic styles for typography and form inputs, and a few styles to address various browser quirks. Some frameworks go even further with a series of styles for creating common design patterns and web user interfaces such as buttons, navigation bars, and image sliders. These predefined styles allow us to develop responsive websites faster and with less hassle. The following are a few other reasons why using a responsive framework is a favorable option for building responsive websites:

Browser compatibility: Assuring the consistency of a web page across different browsers is really painful, often more distressing than developing the website itself. With a framework, however, we can minimize the work needed to address browser compatibility issues. The framework developers have most likely tested the framework in various desktop and mobile browsers, under the most constrained environments, prior to releasing it publicly.

Documentation: A framework, in general, also comes with comprehensive documentation that records the bits and pieces of using it. The documentation is very helpful for new users beginning to study the framework. It is also a great advantage when working in a team: we can refer to the documentation to get everyone on the same page and follow standard code-writing conventions.
Community and extensions: Some popular frameworks, like Bootstrap and Foundation, have an active community that helps address bugs in the framework and extends its functionality. jQuery UI Bootstrap is perhaps a good example in this case: a collection of styles for jQuery UI widgets that matches the look and feel of Bootstrap's original theme. It's also now common to find free WordPress and Joomla themes based on these frameworks.

The Responsive.gs framework

Responsive.gs is a lightweight responsive framework, merely 1 KB in size when compressed. Responsive.gs is based on a width of 940px and comes in three variants of grids: 12, 16, and 24 columns. What's more, Responsive.gs ships with a box-sizing polyfill, which enables the CSS3 box-sizing property in Internet Explorer 6 to Internet Explorer 8 and makes it decently presentable in those browsers.

A polyfill is a piece of code that enables certain web features and capabilities that are not built into the browser natively; usually, it addresses older versions of Internet Explorer. For example, you can use HTML5 Shiv so that new HTML5 elements, such as <header>, <footer>, and <nav>, are recognized in Internet Explorer 6 to Internet Explorer 8.

The CSS box model

HTML elements that are categorized as block-level elements are essentially boxes drawn with the content width, height, margin, padding, and border through CSS. Prior to CSS3, we faced a constraint when specifying a box. For instance, when we specify a <div> with a width and height of 100px, as follows:

div {
width: 100px;
height: 100px;
}

the browser renders the div as a 100px square box. However, this is only true as long as no padding or border has been added. Since a box has four sides, a padding of 10px (padding: 10px;) will actually add 20px to both the width and the height, 10px on each side. While it takes up space on the page, the element's margin is space reserved outside the element rather than part of the element itself; thus, if we give an element a background color, the margin area will not take on that color.

CSS3 box sizing

CSS3 introduced a new property called box-sizing, which lets us specify how the browser should calculate the CSS box model. There are a couple of values that we can apply to the box-sizing property:

content-box: This is the default value of the box model. This value specifies the padding and the border thickness outside the specified width and height of the content, as we demonstrated in the preceding section.

border-box: This value does the opposite; it includes the padding and the border within the width and height of the box.

padding-box: At the time of writing this article, this value is experimental and has only been added recently. It includes the padding, but not the border, within the box's width and height.

Let's take our preceding example, but this time set the box-sizing model to border-box. As mentioned above, the border-box value will keep the box's width and height at 100px, regardless of the padding and border added. The following illustration shows a comparison between the outputs of the two values, content-box (the default) and border-box.
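Put as plain arithmetic, and adding a hypothetical 1px border to the 100px box with 10px padding from the example above, the two models work out as follows:

content-box: rendered width = 100px + (2 × 10px padding) + (2 × 1px border) = 122px
border-box: rendered width = 100px, leaving 100px - (2 × 10px) - (2 × 1px) = 78px for the content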
The Bootstrap framework

Bootstrap was originally built by Mark Otto and was initially intended only for internal use at Twitter. Long story short, Bootstrap was then launched for free, for public consumption. Bootstrap has long been associated with Twitter, but since the author departed from Twitter, Bootstrap has grown beyond his expectations into a project in its own right. Back in the initial development, the responsive feature was not yet added; it was added in version 2, along with the increasing demand for building responsive websites.

Bootstrap also comes with many more features compared to Responsive.gs. It is packed with preset user interface styles, comprising common user interfaces used on websites such as buttons, navigation bars, pagination, and forms, so you don't have to create them from scratch when starting a new project. On top of that, Bootstrap is also powered by a number of custom jQuery plugins, such as an image slider, carousel, popover, and modal box.

You can use and customize Bootstrap in many ways. You can customize the Bootstrap theme and components directly through the CSS style sheets, through the Bootstrap customization page, or through the Bootstrap LESS variables and mixins, which are used to generate the style sheets.

The Foundation framework

Foundation is a framework created by ZURB, a design agency based in California. Similar to Bootstrap, Foundation is more than just a responsive CSS framework; it ships with a preset grid, components, and a number of jQuery plugins for interactive features. Some high-profile brands, such as McAfee, one of the most respected brands in computer anti-virus software, have built their websites using Foundation. The Foundation style sheet is powered by Sass, a Ruby-based CSS preprocessor.

There are many complaints that the code in responsive frameworks is excessive; since a framework like Bootstrap is used widely, it has to cover every design scenario, and thus it comes with some extra styles that you might not need for your website. Fortunately, we can easily minimize this issue by using the right tools, such as CSS preprocessors, and by following a proper workflow. Truth be told, there isn't a perfect solution, and certainly using a framework isn't for everyone. It all comes down to your needs, your website's needs, and in particular your client's needs and budget. In reality, you will have to weigh these factors to decide whether or not to go with a responsive framework. Jem Kremer has an extensive discussion on this in her article Responsive Design Frameworks: Just Because You Can, Should You?

A brief introduction to CSS preprocessors

Both Bootstrap and Foundation use CSS preprocessors to generate their style sheets. Bootstrap uses LESS, though official support for Sass has also been released recently. Foundation, on the contrary, uses Sass as the only way to generate its style sheets. A CSS preprocessor is not an entirely new language. If you know CSS, you should feel at home with a CSS preprocessor immediately. A CSS preprocessor simply extends CSS by allowing the use of programming features such as variables, functions, and operations. The following is an example of how we write CSS with LESS syntax:

@color: #f3f3f3;

body {
background-color: @color;
}

p {
color: darken(@color, 50%);
}

When the preceding code is compiled, it takes the @color variable we defined and places its value in the output, as follows:

body {
background-color: #f3f3f3;
}

p {
color: #737373;
}

The variable is reusable throughout the style sheet, which enables us to retain style consistency and makes the style sheet more maintainable.
Delve into responsive web design

Our discussion of responsive web design here, though essential, is merely the tip of the iceberg. There is much more to responsive web design than what we have covered in the preceding sections. I would suggest that you take your time to get more insight into and comprehension of responsive web design, including the concept, the technicalities, and some constraints. The following are some of the best references to follow:

Responsive Web Design by Rachel Shillcock, also a good place to start.
Don't Forget the Viewport Meta Tag by Ian Yates.
How To Use CSS3 Media Queries To Create a Mobile Version of Your Website by Rachel Andrew.
Responsive Images Done Right: A Guide To <picture> And srcset by Eric Portis, on the future standard for responsive images using the HTML5 picture element.
A roundup of methods for making data tables responsive.

Responsive web design inspiration sources

Before we jump into the next chapters and start building responsive websites, it may be a good idea to spend some time looking for ideas and inspiration from responsive websites: to see how they are built and how the layout is organized in desktop browsers as well as in mobile browsers. It is common for websites to be redesigned from time to time to stay fresh. So, instead of compiling a pile of website screenshots, which may no longer be relevant in the next several months because of redesigns, we are better off going straight to the websites that curate other websites. The following are the places to go:

MediaQueries
Awwwards
CSS Awards
WebDesignServed
Bootstrap Expo
Zurb Responsive

Summary

Using a framework is the easier and faster way to get responsive websites up and running, rather than building everything from scratch on our own. Alas, as mentioned, using a framework also has some downsides. If it is not done properly, the end result could go all wrong. The website could be stuffed with unnecessary styles and JavaScript, which in the end makes the website slow to load and hard to maintain. We need to set up the right tools; not only will they facilitate the project, but they will also help us make the website more maintainable.

Resources for Article:

Further resources on this subject:

Linking Dynamic Content from External Websites [article]
Building Responsive Image Sliders [article]
Top Features You Need to Know About – Responsive Web Design [article]


Deployment and Post Deployment

Packt
17 Nov 2014
30 min read
In this article by Shalabh Aggarwal, the author of Flask Framework Cookbook, we will talk about various application-deployment techniques, followed by some monitoring tools that are used post-deployment. (For more resources related to this topic, see here.)

Deploying an application and managing it post-deployment is as important as developing it. There are various ways of deploying an application, and choosing the best one depends on the requirements. Deploying an application correctly is very important from the points of view of both security and performance. There are multiple ways of monitoring an application after deployment, some of which are paid and others free to use. Using them again depends on the requirements and on the features they offer. Each tool and technique has its own set of features; for example, adding too much monitoring to an application can prove to be an extra overhead for the application and for the developers as well. Similarly, missing out on monitoring can lead to undetected user errors and overall user dissatisfaction. Hence, we should choose the tools wisely, and they will ease our lives to the maximum. Among the post-deployment monitoring tools, we will discuss Pingdom and New Relic. Sentry is another tool that will prove to be the most beneficial of all from a developer's perspective.

Deploying with Apache

First, we will learn how to deploy a Flask application with Apache, which is, unarguably, the most popular HTTP server. For Python web applications, we will use mod_wsgi, which implements a simple Apache module that can host any Python application supporting the WSGI interface. Remember that mod_wsgi is not the same as Apache and needs to be installed separately.

Getting ready

We will start with our catalog application and make the appropriate changes to make it deployable using the Apache HTTP server. First, we should make our application installable so that the application and all its libraries are on the Python load path. This can be done using a setup.py script. There will be a few changes to the script as per this application; the major changes are mentioned here (a fuller sketch of the script appears at the end of this section):

packages=[
   'my_app',
   'my_app.catalog',
],
include_package_data=True,
zip_safe = False,

First, we mentioned all the packages that need to be installed as part of our application. Each of these needs to have an __init__.py file. The zip_safe flag tells the installer not to install this application as a ZIP file. The include_package_data statement reads from a MANIFEST.in file in the same folder and includes any package data mentioned there. Our MANIFEST.in file looks like:

recursive-include my_app/templates *
recursive-include my_app/static *
recursive-include my_app/translations *

Now, just install the application using the following command:

$ python setup.py install

Installing mod_wsgi is usually OS-specific. Installing it on a Debian-based distribution should be as easy as using the packaging tool, that is, apt or aptitude. For details, refer to https://code.google.com/p/modwsgi/wiki/InstallationInstructions and https://github.com/GrahamDumpleton/mod_wsgi.
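For reference, the following is a minimal sketch of how the complete setup.py might be assembled; the name, version, and install_requires entries are assumptions based on this recipe, not the book's exact script:

from setuptools import setup

setup(
    name='flask_catalog_deployment',  # assumed project name
    version='1.0',                    # assumed version
    packages=[
        'my_app',
        'my_app.catalog',
    ],
    include_package_data=True,  # pulls in the files listed in MANIFEST.in
    zip_safe=False,             # install unpacked, not as a ZIP file
    install_requires=[
        'Flask',
    ],
)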
How to do it…

We need to create some more files, the first one being app.wsgi, which loads our application as a WSGI application:

activate_this = '<Path to virtualenv>/bin/activate_this.py'
execfile(activate_this, dict(__file__=activate_this))

from my_app import app as application
import sys, logging
logging.basicConfig(stream = sys.stderr)

As we perform all our installations inside virtualenv, we need to activate the environment before our application is loaded. In the case of system-wide installations, the first two statements are not needed. Then, we need to import our app object as application, which is used as the application being served. The last two lines are optional; they just stream the output to the standard logger, which is disabled by mod_wsgi by default. The app object needs to be imported as application because mod_wsgi expects the application keyword.

Next comes a config file that will be used by the Apache HTTP server to serve our application correctly from specific locations. The file is named apache_wsgi.conf:

<VirtualHost *>
   WSGIScriptAlias / <Path to application>/flask_catalog_deployment/app.wsgi
   <Directory <Path to application>/flask_catalog_deployment>
       Order allow,deny
       Allow from all
   </Directory>
</VirtualHost>

The preceding code is the Apache configuration, which tells the HTTP server about the directories from which the application has to be loaded. The final step is to add the apache_wsgi.conf file to apache2/httpd.conf so that our application is loaded when the server runs:

Include <Path to application>/flask_catalog_deployment/apache_wsgi.conf

How it works…

Let's restart the Apache server service using the following command:

$ sudo apachectl restart

Open up http://127.0.0.1/ in the browser to see the application's home page. Any errors coming up can be seen at /var/log/apache2/error_log (this path can differ depending on the OS).

There's more…

After all this, it is possible that the product images uploaded as part of product creation do not work. For this, we should make a small modification to our application's configuration:

app.config['UPLOAD_FOLDER'] = '<Some static absolute path>/flask_test_uploads'

We opted for a static path because we do not want it to change every time the application is modified or installed. Now, we will include the path chosen in the preceding code in apache_wsgi.conf:

Alias /static/uploads/ "<Some static absolute path>/flask_test_uploads/"
<Directory "<Some static absolute path>/flask_test_uploads">
   Order allow,deny
   Options Indexes
   Allow from all
   IndexOptions FancyIndexing
</Directory>

After this, install the application and restart apachectl.

See also

http://httpd.apache.org/
https://code.google.com/p/modwsgi/
http://wsgi.readthedocs.org/en/latest/
https://pythonhosted.org/setuptools/setuptools.html#setting-the-zip-safe-flag

Deploying with uWSGI and Nginx

For those who are already aware of the usefulness of uWSGI and Nginx, there is not much that needs explaining. uWSGI is a protocol as well as an application server, and it provides a complete stack to build hosting services. Nginx is a reverse proxy and HTTP server that is very lightweight and capable of handling virtually unlimited requests. Nginx works seamlessly with uWSGI and provides many under-the-hood optimizations for better performance.

Getting ready

We will use our application from the last recipe, Deploying with Apache, and use the same app.wsgi, setup.py, and MANIFEST.in files. Also, the other changes made to the application's configuration in the last recipe will apply to this recipe as well.
Disable any other HTTP servers that might be running, such as Apache and so on.

How to do it…

First, we need to install uWSGI and Nginx. On Debian-based distributions such as Ubuntu, they can easily be installed using the following commands:

# sudo apt-get install nginx
# sudo apt-get install uwsgi

You can also install uWSGI inside a virtualenv using the pip install uWSGI command. Again, these steps are OS-specific, so refer to the respective documentation as per the OS used. Make sure that you have an apps-enabled folder for uWSGI, where we will keep our application-specific uWSGI configuration files, and a sites-enabled folder for Nginx, where we will keep our site-specific configuration files. Usually, these are already present in most installations in the /etc/ folder. If not, refer to the OS-specific documentation to figure out the same.

Next, we will create a file named uwsgi.ini in our application:

[uwsgi]
http-socket = :9090
plugin = python
wsgi-file = <Path to application>/flask_catalog_deployment/app.wsgi
processes = 3

To test whether uWSGI works as expected, run the following command:

$ uwsgi --ini uwsgi.ini

The preceding file and command are equivalent to running the following command:

$ uwsgi --http-socket :9090 --plugin python --wsgi-file app.wsgi

Now, point your browser to http://127.0.0.1:9090/; this should open up the home page of the application.

Create a soft link of this file to the apps-enabled folder mentioned earlier using the following command:

$ ln -s <path/to/uwsgi.ini> <path/to/apps-enabled>

Before moving ahead, edit the preceding file to replace http-socket with socket. This changes the protocol from HTTP to uWSGI (read more about it at http://uwsgi-docs.readthedocs.org/en/latest/Protocol.html). Now, create a new file called nginx-wsgi.conf. This contains the Nginx configuration needed to serve our application and the static content:

location / {
   include uwsgi_params;
   uwsgi_pass 127.0.0.1:9090;
}
location /static/uploads/ {
   alias <Some static absolute path>/flask_test_uploads/;
}

In the preceding code block, uwsgi_pass specifies the uWSGI server that needs to be mapped to the specified location. Create a soft link of this file to the sites-enabled folder mentioned earlier using the following command:

$ ln -s <path/to/nginx-wsgi.conf> <path/to/sites-enabled>

Edit the nginx.conf file (usually found at /etc/nginx/nginx.conf) to add the following line inside the first server block, before the last }:

include <path/to/sites-enabled>/*;

After all of this, reload the Nginx server using the following command:

$ sudo nginx -s reload

Point your browser to http://127.0.0.1/ to see the application, now served via Nginx and uWSGI.

The preceding instructions can vary depending on the OS being used, and different versions of the same OS can also affect the paths and commands used. Different versions of these packages can have some variations in usage as well. Refer to the documentation links provided in the next section.

See also

Refer to http://uwsgi-docs.readthedocs.org/en/latest/ for more information on uWSGI.
Refer to http://nginx.com/ for more information on Nginx.
There is a good article by DigitalOcean on this. I advise you to go through it to gain a better understanding of the topic. It is available at https://www.digitalocean.com/community/tutorials/how-to-deploy-python-wsgi-applications-using-uwsgi-web-server-with-nginx.
To get an insight into the difference between Apache and Nginx, I think the article by Anturis at https://anturis.com/blog/nginx-vs-apache/ is pretty good.

Deploying with Gunicorn and Supervisor
Gunicorn is a WSGI HTTP server for Unix. It is very simple to set up, ultra light, and fairly speedy. Its simplicity lies in its broad compatibility with various web frameworks. Supervisor is a monitoring tool that controls various child processes and handles the starting/restarting of these child processes when they exit abruptly for some reason. It can be extended to control the processes via the XML-RPC API over remote locations without logging in to the server (we won't discuss this here as it is out of the scope of this book). One thing to remember is that these tools can be used along with the other tools mentioned in the previous recipes, such as using Nginx as a proxy server. This is left to you to try on your own.

Getting ready
We will start with the installation of both packages, that is, gunicorn and supervisor. Both can be directly installed using pip:

$ pip install gunicorn
$ pip install supervisor

How to do it…
To check whether the gunicorn package works as expected, just run the following command from inside our application folder:

$ gunicorn -w 4 -b 127.0.0.1:8000 my_app:app

After this, point your browser to http://127.0.0.1:8000/ to see the application's home page. Now, we need to do the same using Supervisor so that this runs as a daemon and will be controlled by Supervisor itself rather than human intervention. First of all, we need a Supervisor configuration file. This can be achieved by running the following command from virtualenv. Supervisor, by default, looks for an etc folder that has a file named supervisord.conf. In system-wide installations, this folder is /etc/; in virtualenv, it will look for an etc folder in virtualenv and then fall back to /etc/:

$ echo_supervisord_conf > etc/supervisord.conf

The echo_supervisord_conf program is provided by Supervisor; it prints a sample config file to the location specified. This command will create a file named supervisord.conf in the etc folder. Add the following block to this file:

[program:flask_catalog]
command=<path/to/virtualenv>/bin/gunicorn -w 4 -b 127.0.0.1:8000 my_app:app
directory=<path/to/virtualenv>/flask_catalog_deployment
user=someuser # Relevant user
autostart=true
autorestart=true
stdout_logfile=/tmp/app.log
stderr_logfile=/tmp/error.log

Note that one should never run applications as the root user. This is a huge security flaw, as a compromised or crashing application can then harm the OS itself.

How it works…
Now, run the following commands:

$ supervisord
$ supervisorctl status
flask_catalog   RUNNING   pid 40466, uptime 0:00:03

The first command invokes the supervisord server, and the next one gives a status of all the child processes. The tools discussed in this recipe can be coupled with Nginx to serve as a reverse proxy server. I suggest that you try it by yourself. Every time you make a change to your application and then wish to restart Gunicorn in order for it to reflect the changes, run the following command:

$ supervisorctl restart all

You can also specify particular processes instead of restarting everything:

$ supervisorctl restart flask_catalog

See also
http://gunicorn-docs.readthedocs.org/en/latest/index.html
http://supervisord.org/index.html

Deploying with Tornado
Tornado is a complete web framework and a standalone web server in itself.
Here, we will use Flask to create our application, which is basically a combination of URL routing and templating, and leave the server part to Tornado. Tornado is built to hold thousands of simultaneous open connections and makes applications very scalable. Tornado has limitations while working with WSGI applications. So, choose wisely! Read more at http://www.tornadoweb.org/en/stable/wsgi.html#running-wsgi-apps-on-tornado-servers.

Getting ready
Installing Tornado can simply be done using pip:

$ pip install tornado

How to do it…
Next, create a file named tornado_server.py and put the following code in it:

from tornado.wsgi import WSGIContainer
from tornado.httpserver import HTTPServer
from tornado.ioloop import IOLoop
from my_app import app

http_server = HTTPServer(WSGIContainer(app))
http_server.listen(5000)
IOLoop.instance().start()

Here, we created a WSGI container for our application; this container is then used to create an HTTP server, and the application is hosted on port 5000.

How it works…
Run the Python file created in the previous section using the following command:

$ python tornado_server.py

Point your browser to http://127.0.0.1:5000/ to see the home page being served. We can couple Tornado with Nginx (as a reverse proxy to serve static content) and Supervisor (as a process manager) for the best results. It is left for you to try this on your own.

Using Fabric for deployment
Fabric is a command-line tool in Python; it streamlines the use of SSH for application deployment and system-administration tasks. As it allows the execution of shell commands on remote servers, the overall process of deployment is simplified: the whole process can now be condensed into a Python file, which can be run whenever needed. Therefore, it saves the pain of logging in to the server and manually running commands every time an update has to be made.

Getting ready
Installing Fabric can simply be done using pip:

$ pip install fabric

We will use the application from the Deploying with Gunicorn and Supervisor recipe. We will create a Fabric file to perform the same process on the remote server. For simplicity, let's assume that the remote server setup has already been done, all the required packages have been installed, and a virtualenv environment has been created.

How to do it…
First, we need to create a file called fabfile.py in our application, preferably at the application's root directory, that is, along with the setup.py and run.py files. Fabric, by default, expects this filename. If we use a different filename, then it will have to be explicitly specified while executing. A basic Fabric file will look like this:

from fabric.api import sudo, cd, prefix, run

def deploy_app():
    "Deploy to the server specified"
    root_path = '/usr/local/my_env'
    with cd(root_path):
        with prefix("source %s/bin/activate" % root_path):
            with cd('flask_catalog_deployment'):
                run('git pull')
                run('python setup.py install')
            sudo('bin/supervisorctl restart all')

Here, we first moved into our virtualenv, activated it, and then moved into our application. Then, the code is pulled from the Git repository, and the updated application code is installed using setup.py install. After this, we restarted the supervisor processes so that the updated application is now rendered by the server. Most of the commands used here are self-explanatory, except prefix, which wraps all the succeeding commands in its block with the command provided.
This means that the command to activate virtualenv will run first, and then all the commands in the with block will execute with virtualenv activated. The virtualenv will be deactivated as soon as control goes out of the with block.

How it works…
To run this file, we need to provide the remote server where the script will be executed. So, the command will look something like:

$ fab -H my.remote.server deploy_app

Here, we specified the address of the remote host where we wish to deploy and the name of the method to be called from the fab script.

There's more…
We can also specify the remote host inside our fab script; this can be a good idea if the deployment server remains the same most of the time. To do this, add the following code to the fab script:

from fabric.api import settings

def deploy_app_to_server():
    "Deploy to the server hardcoded"
    with settings(host_string='my.remote.server'):
        deploy_app()

Here, we have hardcoded the host and then called the method we created earlier to start the deployment process.

S3 storage for file uploads
Amazon describes S3 as storage for the Internet that is designed to make web-scale computing easier for developers. S3 provides a very simple interface via web services; this makes storage and retrieval of any amount of data very simple at any time from anywhere on the Internet. Until now, in our catalog application, we saw that there were issues in managing the product images uploaded as a part of the product creation process. The whole headache will go away if the images are stored somewhere globally and are easily accessible from anywhere. We will use S3 for this purpose.

Getting ready
Amazon offers boto, a complete Python library that interfaces with Amazon Web Services. Almost all of the AWS features can be controlled using boto. It can be installed using pip:

$ pip install boto

How to do it…
Now, we should make some changes to our existing catalog application to accommodate support for file uploads and retrieval from S3. First, we need to store the AWS-specific configuration to allow boto to make calls to S3. Add the following statements to the application's configuration file, that is, my_app/__init__.py:

app.config['AWS_ACCESS_KEY'] = 'Amazon Access Key'
app.config['AWS_SECRET_KEY'] = 'Amazon Secret Key'
app.config['AWS_BUCKET'] = 'flask-cookbook'

Next, we need to change our views.py file:

from boto.s3.connection import S3Connection

This is the import that we need from boto. Next, replace the following two lines in create_product():

filename = secure_filename(image.filename)
image.save(os.path.join(app.config['UPLOAD_FOLDER'], filename))

Replace these two lines with:

filename = image.filename
conn = S3Connection(
    app.config['AWS_ACCESS_KEY'], app.config['AWS_SECRET_KEY']
)
bucket = conn.create_bucket(app.config['AWS_BUCKET'])
key = bucket.new_key(filename)
key.set_contents_from_file(image)
key.make_public()
key.set_metadata(
    'Content-Type', 'image/' + filename.split('.')[-1].lower()
)

The last change will go to our product.html template, where we need to change the image src path. Replace the original img src statement with the following statement:

<img src="{{ 'https://s3.amazonaws.com/' + config['AWS_BUCKET'] + '/' + product.image_path }}"/>

How it works…
Now, run the application as usual and create a product. When the created product is rendered, the product image will take a bit of time to come up, as it is now being served from S3 (and not from a local machine).
If this happens, then the integration with S3 has been successfully done.

Deploying with Heroku
Heroku is a cloud application platform that provides an easy and quick way to build and deploy web applications. Heroku manages the servers, deployment, and related operations while developers spend their time on developing applications. Deploying with Heroku is pretty simple with the help of the Heroku toolbelt, a bundle of tools that make deployment with Heroku a cakewalk.

Getting ready
We will proceed with the application from the previous recipe that has S3 support for uploads. As mentioned earlier, the first step will be to download the Heroku toolbelt as per the OS from https://toolbelt.heroku.com/. Once the toolbelt is installed, a certain set of commands will be available in the terminal; we will see them later in this recipe. It is advised that you perform Heroku deployment from a fresh virtualenv where only the required packages for our application are installed and nothing else. This will make the deployment process faster and easier. Now, run the following command to log in to your Heroku account and sync your machine's SSH key with the server:

$ heroku login
Enter your Heroku credentials.
Email: shalabh7777@gmail.com
Password (typing will be hidden):
Authentication successful.

You will be prompted to create a new SSH key if one does not exist. Proceed accordingly. Remember! Before all this, you need to have a Heroku account, which you can create at https://www.heroku.com/.

How to do it…
Now, we already have an application that needs to be deployed to Heroku. First, Heroku needs to know the command that it needs to run while deploying the application. This is done in a file named Procfile:

web: gunicorn -w 4 my_app:app

Here, we tell Heroku to run this command to run our web application. There are a lot of different configurations and commands that can go into Procfile. For more details, read the Heroku documentation. Heroku also needs to know the dependencies that need to be installed in order to successfully install and run our application. This is done via the requirements.txt file:

Flask==0.10.1
Flask-Restless==0.14.0
Flask-SQLAlchemy==1.0
Flask-WTF==0.10.0
Jinja2==2.7.3
MarkupSafe==0.23
SQLAlchemy==0.9.7
WTForms==2.0.1
Werkzeug==0.9.6
boto==2.32.1
gunicorn==19.1.1
itsdangerous==0.24
mimerender==0.5.4
python-dateutil==2.2
python-geoip==1.2
python-geoip-geolite2==2014.0207
python-mimeparse==0.1.4
six==1.7.3
wsgiref==0.1.2

This file contains all the dependencies of our application, the dependencies of these dependencies, and so on. An easy way to generate this file is using the pip freeze command:

$ pip freeze > requirements.txt

This will create/update the requirements.txt file with all the packages installed in virtualenv. Now, we need to create a Git repo of our application. For this, we will run the following commands:

$ git init
$ git add .
$ git commit -m "First Commit"

Now, we have a Git repo with all our files added. Make sure that you have a .gitignore file in your repo or at a global level to prevent temporary files such as .pyc from being added to the repo. Now, we need to create a Heroku application and push our application to Heroku:

$ heroku create
Creating damp-tor-6795...
done, stack is cedar
http://damp-tor-6795.herokuapp.com/ | git@heroku.com:damp-tor-6795.git
Git remote heroku added

$ git push heroku master

After the last command, a whole lot of stuff will get printed on the terminal; this will indicate all the packages being installed and, finally, the application being launched.

How it works…
After the previous commands have successfully finished, just open up the URL provided by Heroku at the end of deployment in a browser, or run the following command:

$ heroku open

This will open up the application's home page. Try creating a new product with an image and see the image being served from Amazon S3. To see the logs of the application, run the following command:

$ heroku logs

There's more…
There is a glitch with the deployment we just did. Every time we update the deployment via the git push command, the SQLite database gets overwritten. The solution to this is to use the Postgres setup provided by Heroku itself. I urge you to try this by yourself.

Deploying with AWS Elastic Beanstalk
In the last recipe, we saw how deployment to servers becomes easy with Heroku. Similarly, Amazon has a service named Elastic Beanstalk, which allows developers to deploy their application to Amazon EC2 instances as easily as possible. With just a few configuration options, a Flask application can be deployed to AWS using Elastic Beanstalk in a couple of minutes.

Getting ready
We will start with our catalog application from the previous recipe, Deploying with Heroku. The only file that remains the same from that recipe is requirements.txt. The rest of the files that were added as a part of that recipe can be ignored or discarded for this recipe. Now, the first thing that we need to do is download the AWS Elastic Beanstalk command-line tool library from the Amazon website (http://aws.amazon.com/code/6752709412171743). This will download a ZIP file that needs to be unzipped and placed in a suitable place, preferably your workspace home. The path of this tool should be added to the PATH environment variable so that the commands are available throughout. This can be done via the export command as shown:

$ export PATH=$PATH:<path to unzipped EB CLI package>/eb/linux/python2.7/

This can also be added to the ~/.profile or ~/.bash_profile file using:

export PATH=$PATH:<path to unzipped EB CLI package>/eb/linux/python2.7/

How to do it…
There are a few conventions that need to be followed in order to deploy using Beanstalk. Beanstalk assumes that there will be a file called application.py, which contains the application object (in our case, the app object). Beanstalk treats this file as the WSGI file, and this is used for deployment. In the Deploying with Apache recipe, we had a file named app.wsgi where we referred to our app object as application because apache/mod_wsgi needed it to be so. The same thing happens here too, because Amazon, by default, deploys using Apache behind the scenes. The contents of this application.py file can be just a few lines, as shown here:

from my_app import app as application
import sys, logging
logging.basicConfig(stream = sys.stderr)

Now, create a Git repo in the application and commit with all the files added:

$ git init
$ git add .
$ git commit -m "First Commit"

Make sure that you have a .gitignore file in your repo or at a global level to prevent temporary files such as .pyc from being added to the repo. Now, we need to deploy to Elastic Beanstalk.
Run the following command to do this:

$ eb init

The preceding command initializes the process for the configuration of your Elastic Beanstalk instance. It will ask for the AWS credentials, followed by a lot of other configuration options needed for the creation of the EC2 instance, which can be selected as needed. For more help on these options, refer to http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_Python_flask.html. After this is done, run the following command to trigger the creation of servers, followed by the deployment of the application:

$ eb start

Behind the scenes, the preceding command creates the EC2 instance (and a volume), assigns an elastic IP, and then runs the following command to push our application to the newly created server for deployment:

$ git aws.push

This will take a few minutes to complete. When done, you can check the status of your application using the following command:

$ eb status --verbose

Whenever you need to update your application, just commit your changes using git and push them as follows:

$ git aws.push

How it works…
When the deployment process finishes, it gives out the application URL. Point your browser to it to see the application being served. However, you will find a small glitch with the application. The static content, that is, the CSS and JS code, is not being served. This is because the static path is not correctly comprehended by Beanstalk. This can simply be fixed by modifying the application's configuration on your application's monitoring/configuration page in the AWS management console. See the following screenshots to understand this better: Click on the Configuration menu item in the left-hand side menu. Notice the highlighted box in the preceding screenshot. This is what we need to change as per our application. Open Software Settings. Change the virtual path for /static/, as shown in the preceding screenshot. After this change is made, the environment created by Elastic Beanstalk will be updated automatically, although it will take a bit of time. When done, check the application again to see the static content also being served correctly.

Application monitoring with Pingdom
Pingdom is a website-monitoring tool that has the USP of notifying you as soon as your website goes down. The basic idea behind this tool is to constantly ping the website at a specific interval, say, 30 seconds. If a ping fails, it will notify you via an e-mail, SMS, tweet, or push notification to mobile apps, informing you that your site is down. It will keep on pinging at a faster rate until the site is back up again. There are other monitoring features too, but we will limit ourselves to uptime checks in this book.

Getting ready
As Pingdom is a SaaS service, the first step will be to sign up for an account. Pingdom offers a free trial of 1 month in case you just want to try it out. The website for the service is https://www.pingdom.com. We will use the application deployed to AWS in the Deploying with AWS Elastic Beanstalk recipe to check for uptime. Here, Pingdom will send an e-mail in case the application goes down and will send an e-mail again when it is back up.

How to do it…
After successful registration, create an uptime check. Have a look at the following screenshot: As you can see, I already added a check for the AWS instance. To create a new check, click on the ADD NEW button. Fill in the details asked by the form that comes up.
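Pingdom simply issues HTTP requests to whatever URL you configure, so any publicly reachable page of the application will do. If you'd rather have the check hit a lightweight endpoint instead of the full home page, you can add a minimal health-check route to the Flask application. The following is just a sketch; the /health route name is our own choice, not something Pingdom requires:

@app.route('/health')
def health_check():
    # Keep the response tiny so frequent uptime probes stay cheap
    return 'OK', 200

You would then point the Pingdom check at http://<your application URL>/health.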
How it works…
After the check is successfully created, try to break the application by consciously making a mistake somewhere in the code and then deploying to AWS. As soon as the faulty application is deployed, you will get an e-mail notifying you of this. Once the application is fixed and put back up again, a follow-up e-mail will confirm that the site is back up. You can also check how long the application has been up, and the downtime instances, from the Pingdom administration panel.

Application performance management and monitoring with New Relic
New Relic is an analytics tool that provides near real-time operational and business analytics related to your application. It provides deep analytics on the behavior of the application from various aspects. It does the job of a profiler while eliminating the need to maintain extra moving parts in the application. It actually works in a scenario where our application sends data to New Relic, rather than New Relic asking for statistics from our application.

Getting ready
We will use the application from the last recipe, which is deployed to AWS. The first step will be to sign up with New Relic for an account. Follow the simple signup process, and upon completion and e-mail verification, it will lead to your dashboard. Here, you will have your license key available, which we will use later to connect our application to this account. On the dashboard, click on the large button named Reveal your license key.

How to do it…
Once we have the license key, we need to install the newrelic Python library:

$ pip install newrelic

Now, we need to generate a file called newrelic.ini, which will contain details regarding the license key, the name of our application, and so on. This can be done using the following command:

$ newrelic-admin generate-config LICENSE-KEY newrelic.ini

In the preceding command, replace LICENSE-KEY with the actual license key of your account. Now, we have a new file called newrelic.ini. Open and edit the file for the application name and anything else as needed. To check whether the newrelic.ini file is working successfully, run the following command:

$ newrelic-admin validate-config newrelic.ini

This will tell us whether the validation was successful or not. If not, then check the license key and its validity. Now, add the following lines at the top of the application's configuration file, that is, my_app/__init__.py in our case. Make sure that you add these lines before anything else is imported:

import newrelic.agent
newrelic.agent.initialize('newrelic.ini')

Now, we need to update the requirements.txt file. So, run the following command:

$ pip freeze > requirements.txt

After this, commit the changes and deploy the application to AWS using the following command:

$ git aws.push

How it works…
Once the application is successfully updated on AWS, it will start sending statistics to New Relic, and the dashboard will have a new application added to it. Open the application-specific page, and a whole lot of statistics will show up. It will also show which calls have taken the most time and how the application is performing. You will also see multiple tabs that correspond to different types of monitoring, covering all the aspects.

Summary
In this article, we have seen the various techniques used to deploy and monitor Flask applications.
Resources for Article: Further resources on this subject: Understanding the Python regex engine [Article] Exploring Model View Controller [Article] Plotting Charts with Images and Maps [Article]
Web Application Testing

Packt
14 Nov 2014
15 min read
This article is written by Roberto Messora, the author of the Web App Testing Using Knockout.JS book. This article will give you an overview of various design patterns used in web application testing. It will also teach you web development using jQuery. (For more resources related to this topic, see here.)

Presentation design patterns in web application testing
The Web has changed a lot since HTML5 made its appearance. We are witnessing a gradual shift from classical, fully server-side web development to a new architectural approach that moves much of the application logic to the client side. The general objective is to deliver rich internet applications (commonly known as RIA) with a desktop-like user experience. Think about web applications such as Gmail or Facebook: if you maximize your browser, they look like complete desktop applications in terms of usability, UI effects, responsiveness, and richness. Once we have established that testing is a pillar of our solutions, we need to understand the best way to proceed in terms of software architecture and development. In this regard, it's very important to determine the basic design principles that allow a proper approach to unit testing. In fact, even though HTML5 is a recent achievement, HTML in general and JavaScript are technologies that have been in use for quite some time. The problem here is that many developers tend to approach modern web development in the same old way. This is a grave mistake because, in the past, client-side JavaScript development was heavily underrated and mostly confined to simple UI graphic management. Client-side development has historically been driven by libraries such as Prototype, jQuery, and Dojo, whose primary feature is DOM (HTML Document Object Model, in other words, HTML markup) management. They can work as-is in small web applications, but as soon as these grow in complexity, the code base starts to become unmanageable and unmaintainable. We can't really think that we can continue to develop JavaScript in the same way we did 10 years ago. In those days, we only had to dynamically apply some UI transformations. Today we have to deliver complete working applications. We need a better design, but most of all we need to reconsider client-side JavaScript development and apply advanced design patterns and principles.

jQuery web application development
JavaScript is the programming language of the web, but its native DOM API is rudimentary. We have to write a lot of code to manage and transform HTML markup to bring the UI to life with some dynamic user interaction. Also, the lack of full standardization means that the same code can work differently (or not work at all) in different browsers. Over the past years, developers decided to resolve this situation: JavaScript libraries such as Prototype, jQuery, and Dojo came to light. jQuery is one of the best-known open source JavaScript libraries, published for the first time in 2006. Its huge success is mainly due to:

A simple and detailed API that allows you to manage HTML DOM elements
Cross-browser support
Simple and effective extensibility

Since its appearance, it's been used by thousands of developers as the foundation library. A large amount of JavaScript code all around the world has been built with jQuery in mind. The jQuery ecosystem grew up very quickly, and nowadays there are plenty of jQuery plugins that implement virtually everything related to web development.
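A typical fragment of this style of development looks something like the following hypothetical snippet (the element IDs and the endpoint are invented for illustration), where DOM access, event handling, and application logic are all interleaved:

$(document).ready(function () {
    $('#save-button').click(function () {
        var name = $('#name-field').val();
        if (name === '') {
            // Validation logic lives inside the event handler callback
            $('#error-message').text('Name is required').show();
        } else {
            $('#error-message').hide();
            $.post('/api/save', { name: name });
        }
    });
});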
Despite its simplicity, a typical jQuery web application is virtually untestable. There are two main reasons:

User interface items are tightly coupled with the user interface logic
User interface logic spans event handler callback functions

The real problem is that everything passes through a jQuery reference, which is a jQuery("something") call. This means that we will always need a live reference to the HTML page, otherwise these calls will fail, and this is also true for a unit test case. We can't think about testing a piece of user interface logic by running an entire web application! Large jQuery applications tend to be monolithic because jQuery itself allows callback function nesting too easily and doesn't really promote any particular design strategy. The result is often spaghetti code. jQuery is a good option if you want to develop a specific custom plugin, and we will continue to use this library for pure user interface effects and animations, but we need something different to maintain the logic of a large web application.

Presentation design patterns
To move a step forward, we need to decide what the best option is in terms of testable code. The main topic here is application design, in other words, how we can build our code base following a general guideline while keeping testability in mind. In software engineering, there's nothing better than not reinventing the wheel; we can rely on a safe and reliable resource: design patterns. Wikipedia provides a good definition for the term design pattern (http://en.wikipedia.org/wiki/Software_design_pattern): In software engineering, a design pattern is a general reusable solution to a commonly occurring problem within a given context in software design. A design pattern is not a finished design that can be transformed directly into source or machine code. It is a description or template for how to solve a problem that can be used in many different situations. Patterns are formalized best practices that the programmer can use to solve common problems when designing an application or system. There are dozens of specific design patterns, but we also need something that is related to the presentation layer, because this is where a JavaScript web application belongs. The most important aspect in terms of design and maintainability of a JavaScript web application is a clear separation between the user interface (basically, the HTML markup) and the presentation logic (the JavaScript code that makes a web page dynamic and responsive to user interaction). This is what we learned digging into a typical jQuery web application. At this point, we need to identify an effective implementation of a presentation design pattern and use it in our web applications. In this regard, I have to admit that the JavaScript community has done an extraordinary job in the last two years: up to the present time, there are literally dozens of frameworks and libraries that implement a particular presentation design pattern. We only have to choose the framework that fits our needs; for example, we can start by taking a look at the TodoMVC website (http://todomvc.com/): this is an open source project that shows you how to build the same web application using a different library each time. Most of these libraries implement a so-called MV* design pattern (Knockout.JS does too). MV* means that every such design pattern belongs to a broader family with a common root: Model-View-Controller.
The MVC pattern is one of the oldest and most enduring architectural design patterns: originally designed by Trygve Reenskaug working on Smalltalk-80 back in 1979, it has been heavily refactored since then. Basically, the MVC pattern enforces the isolation of business data (Models) from user interfaces (Views), with a third component (Controllers) that manages the logic and user input. It can be described as follows (Addy Osmani, Learning JavaScript Design Patterns, http://addyosmani.com/resources/essentialjsdesignpatterns/book/#detailmvc):

A Model represented domain-specific data and was ignorant of the user-interface (Views and Controllers). When a model changed, it would inform its observers
A View represented the current state of a Model. The Observer pattern was used for letting the View know whenever the Model was updated or modified
Presentation was taken care of by the View, but there wasn't just a single View and Controller - a View-Controller pair was required for each section or element being displayed on the screen
The Controllers role in this pair was handling user interaction (such as key-presses and actions e.g. clicks), making decisions for the View

This general definition has slightly changed over the years, not only to adapt its implementation to different technologies and programming languages, but also because changes have been made to the Controller part. Model-View-Presenter and Model-View-ViewModel are the best-known alternatives to the MVC pattern. MV* presentation design patterns are a valid answer to our need: an architectural design guideline that promotes separation of concerns and isolation, the two most important factors needed for software testing. In this way, we can separately test models, views, and the third actor, whatever it is (a Controller, Presenter, ViewModel, and so on). On the other hand, adopting a presentation design pattern doesn't mean at all that we cease to use jQuery. jQuery is a great library; we will continue to add its reference to our pages, but we will also integrate its use wisely in a better design context.

Knockout.JS and Model-View-ViewModel
Knockout.JS is one of the most popular JavaScript presentation libraries; it implements the Model-View-ViewModel design pattern. The most important concepts behind Knockout.JS are:

An HTML fragment (or an entire page) is considered as a View. A View is always associated with a JavaScript object called a ViewModel: this is a code representation of the View that contains the data (model) to be shown (in the form of properties) and the commands that handle View events triggered by the user (in the form of methods).
The association between View and ViewModel is built around the concept of data-binding, a mechanism that provides automatic bidirectional synchronization. In the View, it's declared by placing the data-bind attributes into DOM elements; the attributes' value must follow a specific syntax that specifies the nature of the association and the target ViewModel property/method. In the ViewModel, methods are considered as commands, and properties are defined as special objects called observables: their main feature is the capability to notify every state modification.

A ViewModel is a pure-code representation of the View: it contains data to show and commands that handle events triggered by the user.
It's important to remember that a ViewModel shouldn't have any knowledge about the View and the UI: pure-code representation means that a ViewModel shouldn't contain any reference to HTML markup elements (buttons, textboxes, and so on), but only pure JavaScript properties and methods. Model-View-ViewModel's objective is to promote a clear separation between View and ViewModel; this principle is called Separation of Concerns. Why is this so important? The answer is quite easy: because, in this way, a developer can achieve a real separation of responsibilities: the View is only responsible for presenting data to the user and reacting to her/his inputs, and the ViewModel is only responsible for holding the data and providing the presentation logic. The following diagram from Microsoft MSDN depicts the existing relationships between the three pattern actors very well (http://msdn.microsoft.com/en-us/library/ff798384.aspx). Thinking about a web application in these terms leads to ViewModel development without any reference to DOM elements' IDs or any other markup-related code, as in the classic jQuery style. The two main reasons behind this are:

As the web application becomes more complex, the number of DOM elements increases, and it is not uncommon to reach a point where it becomes very difficult to manage all those IDs with the typical jQuery fluent interface style: the JavaScript code base turns into a spaghetti code nightmare very soon.
A clear separation between View and ViewModel allows a new way of working: JavaScript developers can concentrate on the presentation logic; UX experts, on the other hand, can provide HTML markup that focuses on the user interaction and how the web application will look. The two groups can work quite independently and agree on the basic contact points using the data-bind tag attributes.

The key feature of a ViewModel is the observable object: a special object that is capable of notifying its state modifications to any subscribers. There are three types of observable objects:

The basic observable, which is based on JavaScript data types (string, number, and so on)
The computed observable, which is dependent on other observables or computed observables
The observable array, which is a standard JavaScript array with a built-in change notification mechanism

On the View side, we talk about declarative data-binding because we need to place the data-bind attributes inside HTML tags and specify what kind of binding is associated with a ViewModel property/command.

MVVM and unit testing
Why is a clear separation between the user interface and presentation logic a real benefit? There are several possible answers, but, if we want to remain in the unit testing context, we can assert that we can apply proper unit testing specifications to the presentation logic, independently from the concrete user interface. In Model-View-ViewModel, the ViewModel is a pure-code representation of the View. The View itself must remain a thin and simple layer whose job is to present data and receive user interaction. This is a great scenario for unit testing: all the logic in the presentation layer is located in the ViewModel, and this is a JavaScript object. We can definitely test almost everything that takes place in the presentation layer.
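To make these concepts concrete, here is a minimal sketch of a View/ViewModel pair (the property names and initial values are invented for illustration):

<p>Full name: <span data-bind="text: fullName"></span></p>
<input type="text" data-bind="value: firstName" />

function PersonViewModel() {
    var self = this;
    self.firstName = ko.observable('John');
    self.lastName = ko.observable('Doe');
    // A computed observable: re-evaluated automatically
    // whenever firstName or lastName changes
    self.fullName = ko.computed(function () {
        return self.firstName() + ' ' + self.lastName();
    });
}
ko.applyBindings(new PersonViewModel());

Note that the ViewModel contains no reference to DOM elements: the association is declared entirely in the View via the data-bind attributes, so the ViewModel can be instantiated and exercised in a unit test without any HTML at all.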
Ensuring a real separation between View and ViewModel means that we need to follow a particular development procedure:

Think about a web application page as a composition of sub-views: we need to embrace the divide et impera (divide and conquer) principle when we build our user interface; the more specific and simple the sub-views are, the more easily we can test them. Knockout.JS supports this kind of scenario very well.
Write a class for every View and a corresponding class for its ViewModel: the first one is the starting point to instantiate the ViewModel and apply bindings; after all, the user interface (the HTML markup) is what the browser loads initially.
Keep each View class as simple as possible, so simple that it might not even need to be tested; it should be just a container for:
    Its ViewModel instance
    Sub-View instances, in case of a bigger View that is a composition of smaller ones
    Pure user interface code, in case of particular UI JavaScript plugins that cannot take place in the ViewModel and simply provide graphical effects/enrichments (in other words, they don't change the logical functioning)

If we look carefully at a typical ViewModel class implementation, we can see that there are no HTML markup references: no tag names, no tag identifiers, nothing. All of these references are present in the View class implementation. In fact, if we were to test a ViewModel that holds a direct reference to a UI item, we would also need a live instance of the UI; otherwise, accessing that item reference would cause a null reference runtime error during the test. This is not what we want, because it is very difficult to test presentation logic while having to deal with a live instance of the user interface: there are many reasons for this, from the need for a web server that delivers the page to the need for a separate instance of a web browser to load the page. This is not very different from debugging a live page with Mozilla Firebug or Google Chrome Developer Tools. Our objective is test automation, but we also want to run the tests easily and quickly in isolation: we don't want to run the page in any way! An important application asset is the event bus: this is a global object that works as an event/message broker for all the actors that are involved in the web page (Views and ViewModels). The event bus is one of the alternative forms of the Event Collaboration design pattern (http://martinfowler.com/eaaDev/EventCollaboration.html): Multiple components work together by communicating with each other by sending events when their internal state changes (Martin Fowler). The main aspect of an event bus is that: The sender is just broadcasting the event, the sender does not need to know who is interested and who will respond, this loose coupling means that the sender does not have to care about responses, allowing us to add behaviour by plugging new components (Martin Fowler). In this way, we can keep all the different components of a web page completely separated: every View/ViewModel pair sends and receives events, but they don't know anything about all the other pairs. Again, every ViewModel is completely decoupled from its View (remember that the View holds a reference to the ViewModel, but not the other way around), and in this case, it can trigger some events in order to communicate something to the View. Concerning unit testing, loose coupling means that we can test our presentation logic a single component at a time, simply ensuring that events are broadcast when they need to be.
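As a rough sketch (not a full implementation), an event bus can be as small as a publish/subscribe object:

var EventBus = {
    topics: {},
    subscribe: function (topic, listener) {
        // Lazily create the listener list for this topic
        (this.topics[topic] = this.topics[topic] || []).push(listener);
    },
    publish: function (topic, data) {
        (this.topics[topic] || []).forEach(function (listener) {
            listener(data);
        });
    }
};

// A ViewModel broadcasts an event without knowing who is listening
// (the topic name and payload here are purely illustrative):
EventBus.publish('cart:updated', { itemCount: 3 });

In a unit test, we can subscribe a spy function to a topic and simply assert that the expected event was published with the expected payload.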
Event buses can also be mocked, so we don't need to rely on a concrete implementation. In real-world development, the production process is an iterative task. Usually, we need to:

Define a View markup skeleton, without any data-bind attributes.
Start developing classes for the View and the ViewModel, which are empty at the beginning.
Start developing the presentation logic, adding observables to the ViewModel and their respective data bindings in the View.
Start writing test specifications.

This process is repetitive, adding more presentation logic at every iteration, until we reach the final result.

Summary
In this article, you learned about web development using jQuery, presentation design patterns, and unit testing using MVVM.

Resources for Article: Further resources on this subject: Big Data Analysis [Article] Advanced Hadoop MapReduce Administration [Article] HBase Administration, Performance Tuning [Article]
Migrating a WordPress Blog to Middleman and Deploying to Amazon S3

Mike Ball
07 Nov 2014
11 min read
Part 1: Getting up and running with Middleman
Many of today's most prominent web frameworks, such as Ruby on Rails, Django, WordPress, Drupal, Express, and Spring MVC, rely on a server-side language to process HTTP requests, query data at runtime, and serve back dynamically constructed HTML. These platforms are great, yet developers of dynamic web applications often face complex performance challenges under heavy user traffic, independent of the underlying technology. High traffic and frequent requests may stress processing-intensive code or network latency, in effect yielding a poor user experience or a production outage. Static site generators such as Middleman, Jekyll, and Wintersmith offer developers an elegant, highly scalable alternative to complex, dynamic web applications. Such tools perform dynamic processing and HTML construction during build time rather than runtime. These tools produce a directory of static HTML, CSS, and JavaScript files that can be deployed directly to a web server such as Nginx or Apache. This architecture reduces complexity and encourages a sensible separation of concerns; if necessary, user-specific customization can be handled via client-side interaction with third-party satellite services. In this three-part series, we'll walk through how to get started in developing a Middleman site, some basics of Middleman blogging, how to migrate content from an existing WordPress blog, and how to deploy a Middleman blog to production. We will also learn how to create automated tests, continuous integration, and automated deployments. In this part, we'll cover the following:

Creating a basic Middleman project
Middleman configuration basics
A quick overview of the Middleman template system
Creating a basic Middleman blog

Why should you use Middleman?
Middleman is a mature, full-featured static site generator. It supports a strong templating system, numerous Ruby-based HTML templating tools such as ERb and HAML, as well as a Sprockets-based asset pipeline used to manage CSS, JavaScript, and third-party client-side code. Middleman also integrates well with CoffeeScript, SASS, and Compass.

Environment
For this tutorial, I'm using an RVM-installed Ruby 2.1.2. I'm on Mac OS X 10.9.4.

Installing Middleman
Install Middleman via RubyGems:

$ gem install middleman

Create a basic Middleman project called middleman-demo:

$ middleman init middleman-demo

This results in a middleman-demo directory with the following layout:

├── Gemfile
├── Gemfile.lock
├── config.rb
└── source
    ├── images
    │   ├── background.png
    │   └── middleman.png
    ├── index.html.erb
    ├── javascripts
    │   └── all.js
    ├── layouts
    │   └── layout.erb
    └── stylesheets
        ├── all.css
        └── normalize.css

There are 5 directories and 10 files.

A quick tour
Here are a few notes on the middleman-demo layout:

The Ruby Gemfile cites Ruby gem dependencies; Gemfile.lock cites the full dependency chain, including middleman-demo's dependencies' dependencies
The config.rb file houses middleman-demo's configuration
The source directory houses middleman-demo's source code: the templates, style sheets, images, JavaScript, and other source files required by the middleman-demo site

While a Middleman production build is simply a directory of static HTML, CSS, JavaScript, and image files, Middleman sites can be run via a simple web server in development. Run the middleman-demo development server:

$ middleman

Now, the middleman-demo site can be viewed in your web browser at http://localhost:4567.
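When you're ready to generate the deployable static files mentioned above, Middleman's build command compiles the site into a build directory (we won't need this until deployment, but it's worth knowing now):

$ middleman build

Everything under build/ is plain HTML, CSS, and JavaScript that any static web server can serve as-is.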
Set up live-reloading
Middleman comes with the middleman-livereload gem. The gem detects source code changes and automatically reloads the Middleman app. Activate middleman-livereload by uncommenting the following code in config.rb:

# Reload the browser automatically whenever files change
configure :development do
  activate :livereload
end

Restart the middleman server to allow the configuration change to take effect. Now, middleman-demo should automatically reload on changes to config.rb, and your web browser should automatically refresh when you edit the source/* code.

Customize the site's appearance
Middleman offers a mature HTML templating system. The source/layouts directory contains layouts, the common HTML surrounding individual pages and shared across your site. middleman-demo uses ERb as its template language, though Middleman supports other options such as HAML and Slim. Also note that Middleman supports the ability to embed metadata within templates via frontmatter. Frontmatter allows page-specific variables to be embedded via YAML or JSON. These variables are available in a current_page.data namespace. For example, source/index.html.erb contains the following frontmatter specifying a title; it's available to ERb templates as current_page.data.title:

---
title: Welcome to Middleman
---

Currently, middleman-demo is a default Middleman installation. Let's customize things a bit. First, remove all the contents of source/stylesheets/all.css to remove the default Middleman styles. Next, edit source/index.html.erb to be the following:

---
title: Welcome to Middleman Demo
---

<h1>Middleman Demo</h1>

When viewing middleman-demo at http://localhost:4567, you'll now see a largely unstyled HTML document with a single Middleman Demo heading.

Install the middleman-blog plugin
The middleman-blog plugin offers blog functionality to Middleman applications. We'll use middleman-blog in middleman-demo. Add the middleman-blog version 3.5.3 gem dependency to middleman-demo by adding the following to the Gemfile:

gem "middleman-blog", "3.5.3"

Re-install the middleman-demo gem dependencies, which now include middleman-blog:

$ bundle install

Activate middleman-blog and specify a URL pattern at which to serve blog posts by adding the following to config.rb:

activate :blog do |blog|
  blog.prefix = "blog"
  blog.permalink = "{year}/{month}/{day}/{title}.html"
end

Write a quick blog post
Now that all has been configured, let's write a quick blog post to confirm that middleman-blog works. First, create a directory to house the blog posts:

$ mkdir source/blog

The source/blog directory will house markdown files containing blog post content and any necessary metadata. These markdown files highlight a key feature of Middleman: rather than query a relational database within which content is stored, a Middleman application typically reads data from flat files, simple text files (usually markdown) stored within the site's source code repository. Create a markdown file for middleman-demo's first post:

$ touch source/blog/2014-08-20-new-blog.markdown

Next, add the required frontmatter and content to source/blog/2014-08-20-new-blog.markdown:

---
title: New Blog
date: 2014/08/20
tags: middleman, blog
---

Hello world from Middleman!

Features

Rich templating system
Built-in helpers
Easy configuration
Asset pipeline
Lots more

Note that the content is authored in markdown, a plain text syntax, which is evaluated by Middleman as HTML. You can also embed HTML directly in the markdown post files.
GitHub's documentation provides a good overview of markdown. Next, add the following ERb template code to source/index.html.erb to display a list of blog posts on middleman-demo's home page:

<ul>
  <% blog.articles.each do |article| %>
    <li>
      <%= link_to article.title, article.path %>
    </li>
  <% end %>
</ul>

Now, when running middleman-demo and visiting http://localhost:4567, a link to the new blog post is listed on middleman-demo's home page. Clicking the link renders the permalink for the New Blog blog post at blog/2014/08/20/new-blog.html, as is specified in the blog configuration in config.rb.

A few notes on the template code
Note the use of a link_to method. This is a built-in Middleman template helper. Middleman provides template helpers to simplify many common template tasks, such as rendering an anchor tag. In this case, we pass the link_to method two arguments: the intended anchor tag text and the intended href value. In turn, link_to generates the necessary HTML. Also note the use of a blog variable, whose articles method houses an array of all blog posts. Where did this come from? middleman-demo is an instance of Middleman::Application; blog is a method on this instance. To explore other Middleman::Application methods, open middleman-demo via the built-in Middleman console by entering the following in your terminal:

$ middleman console

To view all the methods on blog, including the aforementioned articles method, enter the following within the console:

2.1.2 :001 > blog.methods

To view all the additional methods, beyond blog, available to the Middleman::Application instance, enter the following within the console:

2.1.2 :001 > self.methods

More can be read about all these methods in Middleman::Application's rdoc.info class documentation.

Cleaner URLs
Note that the current new blog URL ends in .html. Let's customize middleman-demo to omit .html from URLs. Add the following to config.rb:

activate :directory_indexes

Now, rather than generating files such as /blog/2014/08/20/new-blog.html, middleman-demo generates files such as /blog/2014/08/20/new-blog/index.html, thus enabling the page to be served by most web servers at a /blog/2014/08/20/new-blog/ path.

Adjusting the templates
Let's adjust our middleman-demo ERb templates a bit. First, note that <h1>Middleman Demo</h1> only displays on the home page; let's make it render on all of the site's pages. Move <h1>Middleman Demo</h1> from source/index.html.erb to source/layouts/layout.erb. Put it just inside the <body> tag:

<body class="<%= page_classes %>">
  <h1>Middleman Demo</h1>
  <%= yield %>
</body>

Next, let's create a custom blog post template. Create the template file:

$ touch source/layouts/post.erb

Add the following to extend the site-wide functionality of source/layouts/layout.erb to source/layouts/post.erb:

<% wrap_layout :layout do %>
  <h2><%= current_article.title %></h2>
  <p>Posted <%= current_article.date.strftime('%B %e, %Y') %></p>
  <%= yield %>
  <ul>
    <% current_article.tags.each do |tag| %>
      <li><a href="/blog/tags/<%= tag %>/"><%= tag %></a></li>
    <% end %>
  </ul>
<% end %>

Note the use of the wrap_layout ERb helper. The wrap_layout ERb helper takes two arguments. The first is the name of the layout to wrap, in this case :layout. The second argument is a Ruby block; the contents of the block are evaluated within the <%= yield %> call of source/layouts/layout.erb.
Next, instruct middleman-demo to use source/layouts/post.erb in serving blog posts by adding the necessary configuration to config.rb:

page "blog/*", :layout => :post

Now, when restarting the Middleman server and visiting http://localhost:4567/blog/2014/08/20/new-blog/, middleman-demo renders a more comprehensive blog template that includes the post's title, date published, and tags. Let's add a simple template to render a tags page that lists relevant tagged content. First, create the template:

$ touch source/tag.html.erb

And add the necessary ERb to list the relevant posts assigned a given tag:

<h2>Posts tagged <%= tagname %></h2>
<ul>
  <% page_articles.each do |post| %>
    <li>
      <a href="<%= post.url %>"><%= post.title %></a>
    </li>
  <% end %>
</ul>

Specify the blog's tag template by editing the blog configuration in config.rb:

activate :blog do |blog|
  blog.prefix = 'blog'
  blog.permalink = "{year}/{month}/{day}/{title}.html"
  # tag template:
  blog.tag_template = "tag.html"
end

Edit config.rb to configure middleman-demo's tag pages to use source/layouts/layout.erb rather than source/layouts/post.erb:

page "blog/tags/*", :layout => :layout

Now, when visiting http://localhost:4567/blog/2014/08/20/new-blog/, you should see a linked list of New Blog's tags. Clicking a tag should correctly render the tags page.

Part 1 recap
Thus far, middleman-demo serves as a basic Middleman-based blog example. It demonstrates Middleman templating, how to set up the middleman-blog plugin, and how to author markdown-based blog posts in Middleman. In part 2, we'll cover migrating content from an existing WordPress blog. We'll also step through establishing an Amazon S3 bucket, building middleman-demo, and deploying to production. In part 3, we'll cover how to create automated tests, continuous integration, and automated deployments.

About this author
Mike Ball is a Philadelphia-based software developer specializing in Ruby on Rails and JavaScript. He works for Comcast Interactive Media, where he helps build web-based TV and video consumption applications.
How to Deploy a Blog with Ghost and Docker

Felix Rabe
07 Nov 2014
6 min read
2013 gave birth to two wonderful Open Source projects: Ghost and Docker. This post will show you what the buzz is all about and how you can use them together. So what are Ghost and Docker, exactly? Ghost is an exciting new blogging platform, written in JavaScript and running on Node.js. It features a simple and modern user experience, as well as very transparent and accessible developer communications. This blog post covers Ghost 0.4.2. Docker is a very useful new development tool for packaging applications together with their dependencies for automated and portable deployment. It is based on Linux Containers (lxc) for lightweight virtualization and AUFS for filesystem layering. This blog post covers Docker 1.1.2.

Install Docker
If you are on Windows or Mac OS X, the easiest way to get started using Docker is Boot2Docker. For Linux and more in-depth instructions, consult one of the Docker installation guides. Go ahead and install Docker via one of the above links, then come back and run the following in your terminal to verify your installation:

docker version

If you get about eight lines of detailed version information, the installation was successful. Just running docker will provide you with a list of commands, and docker help <command> will show a command's usage. If you use Boot2Docker, remember to export DOCKER_HOST=tcp://192.168.59.103:2375. Now, to get the Ubuntu 14.04 base image downloaded (which we'll use in the next sections), run the following command:

docker run --rm ubuntu:14.04 /bin/true

This will take a while, but only for the first time. There are many more Docker images available at the Docker Hub Registry.

Hello Docker
To give you a quick glimpse into what Docker can do for you, run the following command:

docker run --rm ubuntu:14.04 /bin/echo Hello Docker

This runs /bin/echo Hello Docker in its own virtual Ubuntu 14.04 environment, but since it uses Linux Containers instead of booting a complete operating system in a virtual machine, it takes less than a second to complete. Pretty sweet, huh? To run Bash, provide the -ti flags for interactivity:

docker run --rm -ti ubuntu:14.04 /bin/bash

The --rm flag makes sure that the container gets removed after use, so any files you create in that Bash session get removed after logging out. For more details, see the Docker Run Reference.

Build the Ghost image
In the previous section, you've run the ubuntu:14.04 image. In this section, we'll build an image for Ghost that we can then use to quickly launch a new Ghost container. While you could get a pre-made Ghost Docker image, for the sake of learning, we'll build our own. About the terminology: a Docker image is analogous to a program stored on disk, while a Docker container is analogous to a process running in memory. Now create a new directory, such as docker-ghost, with the following files (you can also find them in this Gist on GitHub):

package.json:

{}

This is the bare minimum actually required, and it will be expanded with the current Ghost dependency by the Dockerfile command npm install --save ghost when building the Docker image.

server.js:

#!/usr/bin/env node
var ghost = require('ghost');
ghost({
  config: __dirname + '/config.js'
});

This is all that is required to use Ghost as an NPM module.

config.js:

config = require('./node_modules/ghost/config.example.js');
config.development.server.host = '0.0.0.0';
config.production.server.host = '0.0.0.0';
module.exports = config;

This will make the Ghost server accessible from outside of the Docker container.
Dockerfile:

# DOCKER-VERSION 1.1.2
FROM ubuntu:14.04

# Speed up apt-get according to https://gist.github.com/jpetazzo/6127116
RUN echo "force-unsafe-io" > /etc/dpkg/dpkg.cfg.d/02apt-speedup
RUN echo "Acquire::http {No-Cache=True;};" > /etc/apt/apt.conf.d/no-cache

# Update the distribution
ENV DEBIAN_FRONTEND noninteractive
RUN apt-get update
RUN apt-get upgrade -y

# https://github.com/joyent/node/wiki/Installing-Node.js-via-package-manager
RUN apt-get install -y software-properties-common
RUN add-apt-repository -y ppa:chris-lea/node.js
RUN apt-get update
# git is needed by 'npm install'
RUN apt-get install -y python-software-properties python g++ make nodejs git

ADD . /src
RUN cd /src; npm install --save ghost

ENTRYPOINT ["node", "/src/server.js"]
# Override ubuntu:14.04 CMD directive:
CMD []
EXPOSE 2368

This Dockerfile will create a Docker image with Node.js and the dependencies needed to build the Ghost NPM module, and prepare Ghost to be run via Docker. See the Dockerfile documentation for details on the syntax.

Now build the Ghost image using:

cd docker-ghost
docker build -t ghost-image .

This will take a while, but you might have to press Ctrl-C and re-run the command if you are stuck for more than a couple of minutes at the following step:

> node-pre-gyp install --fallback-to-build

Run Ghost

Now start the Ghost container:

docker run --name ghost-container -d -p 2368:2368 ghost-image

If you run Boot2Docker, you'll have to figure out its IP address:

boot2docker ip

Usually, that's 192.168.59.103, so by going to http://192.168.59.103:2368, you will see your fresh new Ghost blog. Yay! For the admin interface, go to http://192.168.59.103:2368/ghost.

Manage the Ghost container

The following commands will come in handy to manage the Ghost container:

# Show all containers, running and stopped:
docker ps -a

# Show the container logs:
docker logs [-f] ghost-container

# Stop Ghost via a simulated Ctrl-C:
docker kill -s INT ghost-container

# After killing Ghost, this will restart it:
docker start ghost-container

# Remove the container AND THE DATA (!):
docker rm ghost-container

What you'll want to do next

Some steps that are outside the scope of this post, but that you might want to pursue next, are:

- Copy and change the Ghost configuration that currently resides in node_modules/ghost/config.js.
- Move the Ghost content directory into a separate Docker volume to allow for upgrades and data backups (a sketch follows this list).
- Deploy the Ghost image to production on your public server at your hosting provider. You might also want to change the Ghost configuration to match your domain, and change the port to 80 (also shown in the sketch below).
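As a starting point for the volume and port changes, here is a minimal sketch. It assumes the content directory lives at /src/node_modules/ghost/content inside the image built above, and /var/ghost-content is just an example host path:

# Run Ghost with its content directory bind-mounted from the host and
# published on port 80 (assumed paths; adjust to your setup):
docker run --name ghost-container -d \
  -p 80:2368 \
  -v /var/ghost-content:/src/node_modules/ghost/content \
  ghost-image

Note that mounting an empty host directory hides the default content (including the default theme) shipped in the image, so you may want to seed the host directory first, for example by copying the content directory out of a running container with docker cp.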
How I use Ghost with Docker

I run Ghost in Docker successfully over at Named Data Education, a new blog about Named Data Networking. I like the fact that I can replicate an isolated setup identically on that server as well as on my own laptop.

Ghost resources

- Official docs: The Ghost Guide, and the FAQ- / How-To-like User Guide.
- How To Install Ghost, Ghost for Beginners and All About Ghost are a collection of sites that provide more in-depth material on operating a Ghost blog.
- By the same guys: All Ghost Themes.
- Ghost themes on ThemeForest is also a great collection of themes.

Docker resources

- The official documentation provides many guides and references.
- Docker volumes are explained here and in this post by Michael Crosby.

About the Author

Felix Rabe has been programming and working with different technologies and companies at different levels since 1993. Currently he is researching and promoting Named Data Networking (http://named-data.net/), an evolution of the Internet architecture that currently relies on the host-bound Internet Protocol.

You can find our very best Docker content on our dedicated Docker page. Whatever you do with software, Docker will help you do it better.