How-To Tutorials - Web Development

1802 Articles

Customizing Elgg Themes

Packt
27 Oct 2009
8 min read
Why Customize?

Elgg ships with a professional-looking, slick, grayish-blue default theme. Depending on how you want to use Elgg, you may want your network to have a personalized look. If you are using the network purely for your own personal use, you may not care much about how it looks. But if you are using the network as part of your existing online infrastructure, would you really want it to look like every other default Elgg installation on the planet? Of course not! Visitors to your network should easily identify your network and relate it to you. But theming isn't just about glitter. If you thought themes were all about gloss, think again. A theme helps you brand your network. As the underlying technology is the same, a theme is what really separates your network from the others out there.

What Makes Up a Theme?

There are several components that define your network. Let's take a look at the main components and some practices to follow while using them.

Colors: Colors are an important part of your website's identity. If you have an existing website, you'd want your Elgg network to have the same family of colors as your main website. If the two (your website and social network) are very different, the changeover from one to the other could be jarring. While this isn't necessary, maintaining color consistency is good practice.

Graphics: Graphics help you brand the network to make it your own. Every institution has a logo. Using a logo in your Elgg theme is probably the most basic change you'd want to make. But make sure the logo blends in with the theme, that is, it has the same background color.

Code: It takes a little bit of HTML, a sprinkle of PHP, and some CSS magic to manipulate and control a theme.

A CSS file: As the name suggests, this file contains all the CSS decorations. You can choose to alter colors, fonts, and other elements in this file.

A Pageshell file: In this pageshell file, you define the structure of your Elgg network. If you want to change the position of the Search bar, or move from the standard two-column format to a three-column display, this is the file you need to modify.

Front page files: Two files control how the landing page of your Elgg network appears to logged-out or logged-in users.

Optional images folder: This folder houses all the logos and other artwork that'll be directly used by the theme. Please note that this folder does not include any other graphic elements we've covered in previous tutorials, such as your picture or icons for communities.

Controlling Themes

Rather than being single humongous files, themes in Elgg are a bunch of small, manageable files. The CSS decoration is separated from the placement code. Before getting our hands dirty creating a theme, let's take a look at the files that control the look and feel of your network. All themes must have these files.

The Default Template

Elgg ships with a default template that you can find under your Elgg installation; it defines the structure of the files and folders that make up a template. Before we look at the individual files and examine their contents in detail, let's first understand their content in general. All three files, pageshell, frontpage_loggedin, and frontpage_loggedout, are made up of two types of components. Keywords are used to pull content from the database and display it on the page.
Arranging these keywords are the <div> and <span> tags, along with several others like h1, ul, and so on, that have been defined in the CSS file.

What are <div> and <span>?

The <div> and <span> tags are two very important tags, especially when it comes to handling CSS files. In a snap, these two tags are used to style arbitrary sections of HTML. <div> does much the same thing as a paragraph tag <p>, but it divides the page up into larger sections. With <div>, you can also define the style of whole sections of HTML. This is especially useful when you want to give particular sections a different style from the surrounding text. The <span> tag is similar to the <div> tag. It is also used to change the style of the text it encloses. The difference between <span> and <div> is that a span element is inline, and is usually used for a small chunk of inline HTML. Both <div> and <span> work with two important attributes, class and id. The most common use of these containers together with the class or id attributes is to apply layout, color, and other presentation attributes to the page's content with CSS. In forthcoming sections, we'll see how the two container items use their two attributes to influence themes.

The pageshell

Now, let's dive into understanding the themes. Here's an exact replica of the pageshell of the Default template:

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html>
<head>
  <title>{{title}}</title>
  {{metatags}}
</head>
<body>{{toolbar}}
<div id="container"><!-- open container which wraps the header, maincontent and sidebar -->
  <div id="header"><!-- open div header -->
    <div id="header-inner">
      <div id="logo"><!-- open div logo -->
        <h1><a href="{{url}}">{{sitename}}</a></h1>
        <h2>{{tagline}}</h2>
      </div><!-- close div logo -->
      {{searchbox}}
    </div>
  </div><!-- close div header -->
  <div id="content-holder"><!-- open content-holder div -->
    <div id="content-holder-inner">
      <div id="splitpane-sidebar"><!-- open splitpane-sidebar div -->
        <ul><!-- open sidebar lists -->
          {{sidebar}}
        </ul><!-- close sidebar lists -->
      </div><!-- close splitpane-sidebar div -->
      <div id="splitpane-content"><!-- open splitpane-content div -->
        {{messageshell}}
        {{mainbody}}
      </div><!-- close splitpane-content div -->
    </div>
  </div><!-- close content-holder div -->
  <div id="footer"><!-- start footer -->
    <div id="footer-inner">
      <span style="color:#FF934B">{{sitename}}</span>
      <a href="{{url}}content/terms.php">Terms and conditions</a> |
      <a href="{{url}}content/privacy.php">Privacy Policy</a><br />
      <a href="http://elgg.org/"><img src="{{url}}mod/template/icons/elgg_powered.png" title="Elgg powered" border="0" alt="Elgg powered"/></a><br />
      {{perf}}
    </div>
  </div><!-- end footer -->
</div><!-- close container div -->
</body>
</html>

CSS Elements in the pageshell

Phew! That's a lot of mumbo-jumbo. But wait a second! Don't jump to a conclusion! Browse through this section, where we disassemble the file into easy-to-understand chunks. First, we'll go over the elements that control the layout of the pageshell.

<div id="container">: This container wraps the complete page and all its elements, including the header, main content, and sidebar. In the CSS file, this is defined as:

div#container {width:940px;margin:0 auto;padding:0;background:#fff;border-top:1px solid #fff;}

<div id="header">: This houses all the header content, including the logo and search box.
The CSS definition for the header element:

div#header {margin:0;padding:0;text-align:left;background:url({{url}}mod/template/templates/Default_Template/images/header-bg.gif) repeat-x;position:relative;width:100%;height:120px;}

The CSS definition for the logo:

div#header #logo {margin:0px;padding:10px;float:left;}

The search box is controlled by the search-header element:

div#header #search-header {float:right;background:url({{url}}mod/template/templates/Default_Template/images/search_icon.gif) no-repeat left top;width:330px;margin:0;padding:0;position:absolute;top:10px;right:0;}

<div id="header-inner">: While the CSS file of the default template doesn't define the header-inner element, you can use it to control the area allotted to the elements in the header. When this element isn't defined, the logo and search box take up the full area of the header. But if you want padding in the header around all the elements it houses, specify that using this element.

<div id="content-holder">: This wraps the main content area.

#content-holder {padding:0px;margin:0px;width:100%;min-height:500px;overflow:hidden;position:relative;}

<div id="splitpane-sidebar">: In the default theme, the main content area has a two-column layout, split between the content and the sidebar area.

div#splitpane-sidebar {width:220px;margin:0;padding:0;background:#fff url({{url}}mod/template/templates/Default_Template/images/side-back.gif) repeat-y;float:right;}
div#splitpane-content {margin:0;padding:0 0 0 10px;width:690px;text-align:left;color:#000;overflow:hidden;min-height:500px;}

<div id="single-page">: While not used in the Default template, the content area can also have a simple single-page layout, without the sidebar.

div#single-page {margin:0;padding:0 15px 0 0;width:900px;text-align:left;border:1px solid #eee;}

<div id="content-holder-inner">: Just like header-inner, this is used only if you would like a full-page layout but a defined width for the actual content.

<div id="footer">: Wraps the footer of the page, including the links to the terms and conditions and the privacy policy, along with the Elgg powered icon.

div#footer {clear:both;position:relative;background:url({{url}}mod/template/templates/Default_Template/images/footer.gif) repeat-x top;text-align:center;padding:10px 0 0 0;font-size:1em;height:80px;margin:0;color:#fff;width:100%;}

<div id="footer-inner">: Like the other inner elements, this is only used if you would like a full-page layout but want to restrict the width of the actual footer content.


Making a simple Web based SSH client using Node.js and Socket.io

Jakub Mandula
28 Oct 2015
7 min read
If you are reading this post, you probably know what SSH stands for. But just for the sake of formality, here we go: SSH stands for Secure Shell. It is a network protocol for secure access to the shell on a remote computer. You can do much more over SSH besides commanding your computer. You can find further information here: http://en.wikipedia.org/wiki/Secure_Shell.

In this post, we are going to create a very simple web terminal. And when I say simple, I mean it! However much you like colors, it will not support them, because the parsing is just beyond the scope of this post. If you want a good client-side terminal library, use term.js. It is made by the same guy who wrote pty.js, which we will be using. It is able to handle pretty much all key events and COLORS!!!!

Installation

I am going to assume you already have your node and npm installed. First we will install all of the npm packages we will be using:

npm install express pty.js socket.io

Express is a super cool web framework for Node. We are going to use it to serve our static files. I know it is a bit overkill, but I like Express. pty.js is where the magic will be happening. It forks processes into virtual pseudo terminals and provides bindings for communication. Socket.io is what we will use to transmit the data from the web browser to the server and back. It uses modern WebSockets, but provides fallbacks for backward compatibility. Anytime you want to create a real-time application, Socket.io is the way to go.

Planning

First things first, we need to think about what we want the program to do. We want the program to create an instance of a shell on the server (remote machine) and send all of the text to the browser. Back in the browser, we want to capture any user events and send them back to the server shell.

The WebSSH server

This is the code that will power the terminal forwarding. Open a new file named server.js and start by importing all of the libraries:

var express = require('express');
var https = require('https');
var http = require('http');
var fs = require('fs');
var pty = require('pty.js');

Set up express:

// Setup the express app
var app = express();
// Static file serving
app.use("/", express.static("./"));

Next we are going to create the server.

// Creating an HTTP server
var server = http.createServer(app).listen(8080);

If you want to use HTTPS, which you probably will, you need to generate a key and certificate and import them as shown.

var options = {
  key: fs.readFileSync('keys/key.pem'),
  cert: fs.readFileSync('keys/cert.pem')
};

Then use the options object to create the actual server. Notice that this time we are using the https package.

// Create an HTTPS server
var server = https.createServer(options, app).listen(8080);

CAUTION: Even if you use HTTPS, do not use this example program on the Internet. You are not authenticating the client in any way, and are thus providing a free open gate to your computer. Please make sure you only use this on your private network, protected by a firewall!!!

Now bind the socket.io instance to the server:

var io = require('socket.io')(server);
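To underline that caution: if something like this ever had to leave a private network, each connection would at minimum need to be gated. The sketch below is only an illustration of that idea, assuming socket.io 1.x's connection middleware; the SSH_DEMO_TOKEN environment variable and the query-string token are invented for this example, and a shared token is still nowhere near real authentication.

// A hedged illustration only, NOT production auth. Assumes socket.io 1.x.
// A client would connect with something like:
//   io.connect('https://host:8080', { query: 'token=...' })
var SECRET = process.env.SSH_DEMO_TOKEN || 'change-me'; // hypothetical env variable

io.use(function (socket, next) {
  // Reject the handshake unless the client presented the shared token
  if (socket.handshake.query && socket.handshake.query.token === SECRET) {
    return next();
  }
  next(new Error('unauthorized'));
});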
After this, we can set up the place where the magic happens.

// When a new socket connects
io.on('connection', function(socket){
  // Create terminal
  var term = pty.spawn('sh', [], {
    name: 'xterm-color',
    cols: 80,
    rows: 30,
    cwd: process.env.HOME,
    env: process.env
  });
  // Listen on the terminal for output and send it to the client
  term.on('data', function(data){
    socket.emit('output', data);
  });
  // Listen on the client and send any input to the terminal
  socket.on('input', function(data){
    term.write(data);
  });
  // When socket disconnects, destroy the terminal
  socket.on("disconnect", function(){
    term.destroy();
    console.log("bye");
  });
});

In this block, all we do is wait for new connections. When we get one, we spawn a new virtual terminal and start to pump the data from the terminal to the socket and vice versa. After the socket disconnects, we make sure to destroy the terminal. As you may have noticed, I am using the simple sh shell. I did this mainly because I don't have a fancy prompt on it. Because we are not adding any parsing logic, my bash prompt would show up like this:

]0;piman@mothership: ~ _[01;32m✓ [33mpiman_[0m ↣ _[1;34m[~]_[37m$[0m

Eww! But you may use any shell you like. This is all that we need on the server side. Save the file and close it.

Client side

The client side is going to be just a very simple HTML file. Start with some very simple HTML markup:

<!doctype html>
<html>
<head>
<title>SSH Client</title>
<script type="text/javascript" src="//cdnjs.cloudflare.com/ajax/libs/socket.io/1.3.5/socket.io.min.js"></script>
<script type="text/javascript" src="//cdnjs.cloudflare.com/ajax/libs/jquery/2.1.4/jquery.min.js"></script>
<style>
body {
  margin: 0;
  padding: 0;
}
.terminal {
  font-family: monospace;
  color: white;
  background: black;
}
</style>
</head>
<body>
<h1>SSH</h1>
<div class="terminal">
</div>
<script>
</script>
</body>
</html>

I am downloading the client-side libraries jquery and socket.io from cdnjs. All of the client code will be written in the script tag below the terminal div. Surprisingly, the code is very simple:

// Connect to the socket.io server
var socket = io.connect('http://localhost:8080');

// Wait for data from the server
socket.on('output', function (data) {
  // Insert some line breaks where they belong
  // (global regex replaces, so every newline is converted)
  data = data.replace(/\n/g, "<br>");
  data = data.replace(/\r/g, "<br>");
  // Append the data to our terminal
  $('.terminal').append(data);
});

// Listen for user input and pass it to the server
$(document).on("keypress", function(e){
  var char = String.fromCharCode(e.which);
  socket.emit("input", char);
});

Notice that we do not have to explicitly append the text the client types to the terminal, mainly because the server echoes it back anyway. Now we are done! Run the server and open up the URL in your browser.

node server.js

You should see a small prompt and be able to start typing commands. You can now explore your machine from the browser! Remember that our Web Terminal does not support the Tab, Ctrl, Backspace, or Esc characters. Implementing this is your homework.
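As a hint for that homework, here is a minimal sketch of one possible starting point. Special keys such as Backspace and Tab don't produce keypress events in most browsers, so you would listen for keydown as well and translate the key codes yourself; the mapping below is an illustrative assumption, not code from the original post.

// Hedged sketch: forward a couple of special keys via keydown,
// since keypress won't fire for them in most browsers.
$(document).on("keydown", function(e){
  if (e.which === 8) {            // Backspace
    socket.emit("input", "\b");
    e.preventDefault();           // stop the browser navigating back
  } else if (e.which === 9) {     // Tab
    socket.emit("input", "\t");
    e.preventDefault();           // keep focus inside the page
  }
});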
Conclusion

I hope you found this tutorial useful. You can apply the knowledge in any real-time application where communication with the server is critical. All the code is available here. Please note that if you'd like a browser terminal, I strongly recommend term.js. It supports colors and styles and all the basic keys, including Tab, Backspace, and so on. I use it in my PiDashboard project. It is much cleaner and less tedious than the example I have here. I can't wait to see what amazing apps you will invent based on this.

About the Author

Jakub Mandula is a student interested in anything to do with technology, computers, mathematics, or science.


Is web development dying?

Richard Gall
23 May 2018
7 min read
It's not hard to find people asking whether web development is dying. A quick search throws up questions on Quora, Reddit, and other forums. "Is web development a dying profession or does it just smell funny?" asks one Reddit user. The usual suspects in the world of content (Forbes et al.) have responded with their own takes and think pieces on whether web development is dead. And why wouldn't they? I, for one, would never miss out on an opportunity to write something with an outlandish and provocative headline for clicks. So, is web development dying or simply very unwell?

Why do people think web development is dying?

The question might seem a bit overwrought, but there are good reasons for people to ask it. One reason is that getting a website has never been easier or cheaper. Think about it: if you want to create a content site, it doesn't take much to set one up with WordPress. You barely need to be technically literate, let alone a developer. Similarly, if you want an eCommerce store, there are plenty of off-the-shelf solutions that allow people to start running an online business with very little work at all.

Even if you do want a custom solution, you can now get that pretty cheaply. On the Treehouse forums, one user comments that thanks to sites like Squarespace, businesses can now purchase a website for less than £100 (about $135). The commenter remarks that whereas he'd typically charge around £3000 for a complete website build, potential clients are coming back puzzled as to why he would think they'd spend so much when they could get the same result for a fraction of the price. From a professional perspective, this sort of anecdotal evidence indicates that it's becoming more and more difficult to be successful in web development. For all the talk around 'learning to code' and the digital economy, maybe building websites isn't the best area to get into.

Web development is getting easier

When people say web development is dying, they might actually be saying that there isn't as much money in it any more. If freelancers are struggling to charge the rates that they used to, that's because there is someone out there who is going to do it for a lot less money. The reason for this isn't that there's a new generation of web developers able to subsist on a paltry sum of money. It's that the work itself is getting a lot easier. Aside from solutions like WordPress and Shopify, the task of building websites from scratch (sort of scratch) is now easier than it has ever been.

Are templates killing web development?

Templates make everything easy for web developers and designers. Why would you want to do much more than drag and drop templates if you could? If the result looks good and does the job, then why spend time doing more? The more you do yourself, the more you're likely to break things. And the more you break things, the more you've got to fix. Of course, templates are lowering the barrier to entry into web development and design. And while we shouldn't be precious about new web developers entering the industry, it is understandable that many experienced web developers are anxious about what the future might hold. From this perspective, templates aren't killing web development, but they are changing what the profession looks like. And without wishing to sound euphemistic, this is both a challenge and an opportunity for everyone in web development. Whether you're experienced or new to the industry, these changes mean people are going to have to adapt.
Web development isn't dying, it's fragmenting

The way web developers are going to have to adapt is by choosing what path they want to take in their career. Web development as we've always known it is perhaps well and truly dead. Instead, it's fragmenting into specialized areas: design on the one hand, and full-stack development on the other. This means your skill set needs to be unique. In a world where building websites takes very little skill or technical knowledge, specific expertise is vital. This is something journalist Andrew Pierno noted in a blog post on Medium. Pierno writes:

...we are in a scenario where the web developer no longer has the skill set to build that interesting differentiator anymore, particularly if the main value prop is around A.I, computer vision, machine learning, AR, VR, blockchain, etc.

Building websites is no longer remarkable - as we've seen, people who can do it are ubiquitous. But building a native application? That's not quite so easy. Building a mobile app that uses computer vision to compare you to Renaissance paintings - that's even harder to do. These are the sorts of things that are going to be valuable - and these are the sorts of things that web developers are going to need to learn how to do.

Full-stack development and the expansion of the developer skill set

In his piece, Pierno argues that the scope of the web developer's role is shrinking. However, I don't think that's quite right. Yes, it might be fragmenting, but the scope of, say, full-stack development is huge. In fact, full-stack developers need to know a huge range of technologies and tools. If they're to differentiate themselves in the job market, as Pierno suggests they should, they need to know machine learning, they need to know mobile, databases, and maybe even blockchain. From this perspective, it's not hard to see how the 'web' part of web development might be dying. To some extent, as the web becomes more ubiquitous and less of a rarefied 'space' in people's lives, the more we have to get into the detail of how we utilize the technologies around it.

Web development's decline is design's gain

If web development as a discipline is dying, that's only going to make design more important. If, as we saw earlier, building websites is going to become a free-for-all for just about anyone with an internet connection and enough confidence, standards and quality might start to slip. That means the value of someone who understands good design will be higher than ever. As a web developer, you might disappear into the ether of everyone else out there. But if you market yourself as a designer, someone who understands the intricacies of UI and UX implicitly, you immediately start to look a little different. Think of it like a sandwich shop: anyone can start making sandwiches. But to make a great sandwich shop, the type that wins awards and the type that people want to Instagram, requires extra attention to detail. It demands more skill and more culinary awareness.

Maybe web development is dying, but maybe it just needs to change

Clearly, what we call web development is very different in 2018 than it was five years ago. There are a huge number of reasons for this, but perhaps the most important is that it doesn't really make sense to talk about 'the web' any more. Because 'the web' is now an outdated concept, perhaps web development needs to die. Maybe we're holding on to something that is only going to play into the hands of poor design and poor-quality software.
It might even damage the careers of talented engineers and designers. You could make a pretty good comparison between 'the web' and 'big data'. Even reading those words feels oddly outdated today, but they're still at the center of the tech landscape. Big data, for example, is everywhere - it's hard to imagine our lives outside of it, but it doesn't make sense to talk about it in the abstract. Instead, what's interesting is how it's applied, and how engineers make data accessible, usable, and secure. The same is true of the web. It's not dead, but it has certainly assumed a slightly different form. And web development might well be dying, but the world will always need developers and designers. It's simply time to adapt.

Read next

Why is everyone talking about JavaScript fatigue?
Is novelty ruining web development?


Installing jQuery

Packt
04 Jun 2015
25 min read
In this article by Alex Libby, author of the book Mastering jQuery, we will examine some of the options available to help develop your skills even further. (For more resources related to this topic, see here.)

Local or CDN, I wonder…? Which version…? Do I support old IE…? Installing jQuery is a thankless task that has to be done countless times by any developer—it is easy to imagine that person asking some of these questions. It is easy to imagine why most people go with the option of using a Content Delivery Network (CDN) link, but there is more to installing jQuery than taking the easy route! There are more options available, where we can be really specific about what we need to use—we will explore them throughout this article. We'll cover a number of topics, which include:

Downloading and installing jQuery
Customizing jQuery downloads
Building from Git
Using other sources to install jQuery
Adding source map support
Working with Modernizr as a fallback

Intrigued? Let's get started.

Downloading and installing jQuery

As with all projects that require the use of jQuery, we must start somewhere—no doubt you've downloaded and installed jQuery a thousand times; let's just quickly recap to bring ourselves up to speed. If we browse to http://www.jquery.com/download, we can download jQuery using one of two methods: downloading the compressed production version or the uncompressed development version. If we don't need to support old IE (IE6, 7, and 8), then we can choose the 2.x branch. If, however, you still have some diehards who can't (or don't want to) upgrade, then the 1.x branch must be used instead. To include jQuery, we just need to add this link to our page:

<script src="http://code.jquery.com/jquery-X.X.X.js"></script>

Here, X.X.X marks the version number of jQuery or the Migrate plugin that is being used in the page.

Conventional wisdom states that the jQuery plugin (and this includes the Migrate plugin too) should be added to the <head> tag, although there are valid arguments to add it as the last statement before the closing <body> tag; placing it here may help speed up loading times to your site. This argument is not set in stone; there may be instances where placing it in the <head> tag is necessary, and this choice should be left to the developer's requirements. My personal preference is to place it in the <head> tag, as it provides a clean separation of the script (and the CSS) code from the main markup in the body of the page, particularly on lighter sites. I have even seen some developers argue that there is little perceived difference if jQuery is added at the top rather than at the bottom; some systems, such as WordPress, include jQuery in the <head> section too, so either will work. The key here, though, is that if you are perceiving slowness, then move your scripts to just before the closing <body> tag, which is considered better practice.

Using jQuery in a development capacity

A useful point to note at this stage is that best practice recommends that CDN links should not be used within a development capacity; instead, the uncompressed files should be downloaded and referenced locally. Once the site is complete and is ready to be uploaded, then CDN links can be used.

Adding the jQuery Migrate plugin

If you've used any version of jQuery prior to 1.9, then it is worth adding the jQuery Migrate plugin to your pages. The jQuery Core team made some significant changes to jQuery from this version; the Migrate plugin will temporarily restore the functionality until such time that the old code can be updated or replaced. The plugin adds three properties and a method to the jQuery object, which we can use to control its behavior:

jQuery.migrateWarnings: This is an array of string warning messages that have been generated by the code on the page, in the order in which they were generated. Messages appear in the array only once, even if the condition has occurred multiple times, unless jQuery.migrateReset() is called.

jQuery.migrateMute: Set this property to true in order to prevent console warnings from being generated in the debugging version. If this property is set, the jQuery.migrateWarnings array is still maintained, which allows programmatic inspection without console output.

jQuery.migrateTrace: Set this property to false if you want warnings but don't want traces to appear on the console.

jQuery.migrateReset(): This method clears the jQuery.migrateWarnings array and "forgets" the list of messages that have been seen already.

Adding the plugin is equally simple—all you need to do is add a link similar to this, where X.X.X represents the version number of the plugin that is used:

<script src="http://code.jquery.com/jquery-migrate-X.X.X.js"></script>

If you want to learn more about the plugin and obtain the source code, then it is available for download from https://github.com/jquery/jquery-migrate.
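To make those knobs concrete, here is a small sketch of how the properties and method described above might be used while debugging; the surrounding page and the legacy code being exercised are hypothetical, but the properties themselves are the ones the plugin documents.

// Sketch: inspecting Migrate diagnostics in the development build.
jQuery.migrateTrace = false;           // keep the warnings, drop the stack traces
// ... exercise some legacy jQuery code here ...
if (jQuery.migrateWarnings.length) {
  console.log(jQuery.migrateWarnings); // each distinct warning appears only once
  jQuery.migrateReset();               // clear the list and start fresh
}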
Using a CDN

We can equally use a CDN link to provide our jQuery library—the principal link is provided by MaxCDN for the jQuery team, with the current version available at http://code.jquery.com. We can, of course, use CDN links from some alternative sources, if preferred—a reminder of these is as follows:

Google (https://developers.google.com/speed/libraries/devguide#jquery)
Microsoft (http://www.asp.net/ajaxlibrary/cdn.ashx#jQuery_Releases_on_the_CDN_0)
CDNJS (http://cdnjs.com/libraries/jquery/)
jsDelivr (http://www.jsdelivr.com/#!jquery)

Don't forget, though, that if you need to, you can always save a copy of the file provided on the CDN locally and reference that instead. The jQuery CDN will always have the latest version, although it may take a couple of days for updates to appear via the other links.

Using other sources to install jQuery

Right. Okay, let's move on and develop some code! "What's next?" I hear you ask. Aha! If you thought downloading and installing jQuery from the main site was the only way to do this, then you are wrong! After all, this is about mastering jQuery, so you didn't think I would only talk about something that I am sure you are already familiar with, right? Yes, there are more options available to us to install jQuery than simply using the CDN or the main download page. Let's begin by taking a look at using Node. Each demo is based on Windows, as this is the author's preferred platform; alternatives are given, where possible, for other platforms.

Using Node.js to install jQuery

So far, we've seen how to download and reference jQuery, which is to use the download from the main jQuery site or via a CDN. The downside of this method is the manual work required to keep our versions of jQuery up to date! Instead, we can use a package manager to help manage our assets. Node.js is one such system.
Let's take a look at the steps that need to be performed in order to get jQuery installed:

We first need to install Node.js—head over to http://www.nodejs.org in order to download the package for your chosen platform; accept all the defaults when working through the wizard (for Mac and PC).

Next, fire up a Node Command Prompt and then change to your project folder. In the prompt, enter this command:

npm install jquery

Node will fetch and install jQuery—it displays a confirmation message when the installation is complete. You can then reference jQuery by using this link: <name of drive>:\website\node_modules\jquery\dist\jquery.min.js.

Node is now installed and ready for use—although we've installed it in a folder locally, in reality, we will most likely install it within a subfolder of our local web server. For example, if we're running WampServer, we can install it, then copy it into the /wamp/www/js folder, and reference it using http://localhost/js/jquery.min.js. If you want to take a look at the source of the jQuery Node Package Manager (NPM) package, then check out https://www.npmjs.org/package/jquery.

Using Node to install jQuery makes our work simpler, but at a cost. Node.js (and its package manager, NPM) is primarily aimed at installing and managing JavaScript components and expects packages to follow the CommonJS standard. The downside of this is that there is no scope to manage any of the other assets that are often used within websites, such as fonts, images, CSS files, or even HTML pages. "Why would this be an issue?" I hear you ask. Simple: why make life hard for ourselves when we can manage all of these assets automatically and still use Node?

Installing jQuery using Bower

A relatively new addition to the library is support for installation using Bower—based on Node, it's a package manager that takes care of fetching and installing packages from over the Internet. It is designed to be far more flexible about managing the handling of multiple types of assets (such as images, fonts, and CSS files) and does not interfere with how these components are used within a page (unlike Node). For the purpose of this demo, I will assume that you have already installed it; if not, you will need to revisit it before continuing with the following steps:

Bring up the Node Command Prompt, change to the drive where you want to install jQuery, and enter this command:

bower install jquery

This will download and install the script, displaying confirmation of the version installed when it has completed. By default, Bower will install jQuery in its bower_components folder on your PC. Within bower_components/jquery/dist/, we will find an uncompressed version, a compressed release, and a source map file. We can then reference jQuery in our script using this line:

<script src="/bower_components/jquery/jquery.js"></script>

We can take this further, though. If we don't want to install the extra files that come with a Bower installation by default, we can simply enter this in a Command Prompt instead, to install just the minified version 2.1 of jQuery:

bower install http://code.jquery.com/jquery-2.1.0.min.js

Now, we can be really clever at this point; as Bower uses Node's JSON files to control what should be installed, we can use this to be really selective and set Bower to install additional components at the same time.
Let's take a look and see how this will work—in the following example, we'll use Bower to install jQuery 2.1 and 1.11 (the latter to provide support for IE6-8):

In the Node Command Prompt, enter the following command:

bower init

This will prompt you for answers to a series of questions, at which point you can either fill out information or press Enter to accept the defaults.

Look in the project folder; you should find a bower.json file within. Open it in your favorite text editor and then alter the code as shown here:

{
  "ignore": [ "**/.*", "node_modules", "bower_components", "test", "tests" ],
  "dependencies": {
    "jquery-legacy": "jquery#1.11.1",
    "jquery-modern": "jquery#2.1.0"
  }
}

At this point, you have a bower.json file that is ready for use. Bower is built on top of Git, so in order to install jQuery using your file, you would normally need to publish it to the Bower repository. Instead, you can install an additional Bower package, which will allow you to install your custom package without the need to publish it to the Bower repository:

In the Node Command Prompt window, enter the following at the prompt:

npm install -g bower-installer

When the installation is complete, change to your project folder and then enter this command line:

bower-installer

The bower-installer command will now download and install both versions of jQuery. At this stage, you have jQuery installed using Bower. You're free to upgrade or remove jQuery using the normal Bower process at some point in the future. If you want to learn more about how to use Bower, there are plenty of references online; https://www.openshift.com/blogs/day-1-bower-manage-your-client-side-dependencies is a good example of a tutorial that will help you get accustomed to using Bower. In addition, there is a useful article that discusses both Bower and Node, available at http://tech.pro/tutorial/1190/package-managers-an-introductory-guide-for-the-uninitiated-front-end-developer.
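With two versions installed side by side, it can be worth a quick sanity check of which build a given page actually loaded. jQuery exposes its own version string, so a hedged one-liner like the following will tell you (the log message wording is our own, not from the book):

// jQuery reports its version on jQuery.fn.jquery, e.g. "1.11.1" or "2.1.0"
console.log('Loaded jQuery version: ' + jQuery.fn.jquery);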
Bower isn't the only way to install jQuery, though—while we can use it to install multiple versions of jQuery, for example, we're still limited to installing the entire jQuery library. We can improve on this by referencing only the elements we need within the library. Thanks to some extensive work undertaken by the jQuery Core team, we can use the Asynchronous Module Definition (AMD) approach to reference only those modules that are needed within our website or online application.

Using the AMD approach to load jQuery

In most instances, when using jQuery, developers are likely to simply include a reference to the main library in their code. There is nothing wrong with this per se, but it loads a lot of extra code that is surplus to our requirements. A more efficient method, although one that takes a little effort in getting used to, is to use the AMD approach. In a nutshell, the jQuery team has made the library more modular; this allows you to use a loader such as require.js to load individual modules when needed. It's not suitable for every approach, particularly if you are a heavy user of different parts of the library. However, for those instances where you only need a limited number of modules, then this is a perfect route to take. Let's work through a simple example to see what it looks like in practice. Before we start, we need one additional item—the code uses the Fira Sans regular custom font, which is available from Font Squirrel at http://www.fontsquirrel.com/fonts/fira-sans.

Let's make a start using the following steps:

The Fira Sans font doesn't come with a web format by default, so we need to convert the font to use the web font format. Go ahead and upload the FiraSans-Regular.otf file to Font Squirrel's web font generator at http://www.fontsquirrel.com/tools/webfont-generator. When prompted, save the converted file to your project folder in a subfolder called fonts.

We need to install jQuery and RequireJS into our project folder, so fire up a Node.js Command Prompt and change to the project folder. Next, enter these commands one by one, pressing Enter after each:

bower install jquery
bower install requirejs

We need to extract a copy of the amd.html and amd.css files—the HTML file contains some simple markup along with a link to require.js; the amd.css file contains some basic styling that we will use in our demo.

We now need to add in this code block, immediately below the link for require.js—this handles the calls to jQuery and RequireJS, where we're calling in both jQuery and Sizzle, the selector engine for jQuery:

<script>
require.config({
  paths: {
    "jquery": "bower_components/jquery/src",
    "sizzle": "bower_components/jquery/src/sizzle/dist/sizzle"
  }
});
require(["js/app"]);
</script>

Now that jQuery has been defined, we need to call in the relevant modules. In a new file, go ahead and add the following code, saving it as app.js in a subfolder marked js within our project folder:

define(["jquery/core/init", "jquery/attributes/classes"], function($) {
  $("div").addClass("decoration");
});

We used app.js as the filename to tie in with the require(["js/app"]); reference in the code. If all went well, you will see the results of our work when previewing the page in a browser. Although we've only worked with a simple example here, it's enough to demonstrate how easy it is to call only those modules we need to use in our code, rather than calling the entire jQuery library. True, we still have to provide a link to the library, but this is only to tell our code where to find it; our module code weighs in at 29 KB (10 KB when gzipped), against 242 KB for the uncompressed version of the full library!

Now, there may be instances where simply referencing modules using this method isn't the right approach; this may apply if you need to reference lots of different modules regularly. A better alternative is to build a custom version of the jQuery library that only contains the modules that we need to use, with the rest removed during the build. It's a little more involved but worth the effort—let's take a look at what is involved in the process.

Customizing the downloads of jQuery from Git

If we feel so inclined, we can really push the boat out and build a custom version of jQuery using the JavaScript task runner, Grunt. The process is relatively straightforward but involves a few steps; it will certainly help if you have some prior familiarity with Git! The demo assumes that you have already installed Node.js—if you haven't, then you will need to do this first before continuing with the exercise. Okay, let's make a start by performing the following steps:

You first need to install Grunt if it isn't already present on your system—bring up the Node.js Command Prompt and enter this command:

npm install -g grunt-cli

Next, install Git—for this, browse to http://msysgit.github.io/ in order to download the package. Double-click on the setup file to launch the wizard; accepting all the defaults is sufficient for our needs.
If you want more information on how to install Git, head over and take a look at https://github.com/msysgit/msysgit/wiki/InstallMSysGit for more details.

Once Git is installed, clone the jQuery repository (for example, with git clone https://github.com/jquery/jquery.git), then change to the jquery folder from within the Command Prompt and enter this command to download and install the dependencies needed to build jQuery:

npm install

The final stage of the build process is to build the library into the file we all know and love; from the same Command Prompt, enter this command:

grunt

Browse to the jquery folder—within this will be a folder called dist, which contains our custom build of jQuery, ready for use.

If there are modules within the library that we don't need, we can run a custom build. We can set the Grunt task to remove these when building the library, leaving in those that are needed for our project. For a complete list of all the modules that we can exclude, see https://github.com/jquery/jquery#modules. For example, to remove AJAX support from our build, we can run this command in place of the grunt step shown previously:

grunt custom:-ajax

This results in a saving of 30 KB against the original raw version of the file. The JavaScript and map files can now be incorporated into our projects in the usual way. For a detailed tutorial on the build process, this article by Dan Wellman is worth a read (https://www.packtpub.com/books/content/building-custom-version-jquery).

Using a GUI as an alternative

There is an online GUI available that performs much the same task, without the need to install Git or Grunt. It's available at http://projects.jga.me/jquery-builder/, although it is worth noting that it hasn't been updated for a while!

Okay, so we have jQuery installed; let's take a look at one more useful function that will help in the event of debugging errors in our code. Support for source maps has been available within jQuery since version 1.9. Let's take a look at how they work and see a simple example in action.

Adding source map support

Imagine a scenario, if you will, where you've created a killer site, which is running well, until you start getting complaints about problems with some of the jQuery-based functionality that is used on the site. Sounds familiar? Using an uncompressed version of jQuery on a production site is not an option; instead, we can use source maps. Simply put, these map a compressed version of jQuery against the relevant line in the original source. Historically, source maps have given developers a lot of heartache when implementing them, to the extent that the jQuery team had to revert to disabling the automatic use of maps! For best effect, it is recommended that you use a local web server, such as WAMP (PC) or MAMP (Mac), to view this demo, and that you use Chrome as your browser. Source maps are not difficult to implement; let's run through how you can implement them:

Extract a copy of the sourcemap folder and save it to your project area locally.

Press Ctrl + Shift + I to bring up the Developer Tools in Chrome.

Click on Sources, then double-click on the sourcemap.html file in the code window, and finally click on line 17.

Now, run the demo in Chrome—we will see it paused; revert back to the developer toolbar, where line 17 is highlighted.
The relevant calls to the jQuery library are shown on the right-hand side of the screen. If we double-click on the n.event.dispatch entry on the right, Chrome refreshes the toolbar and displays the original source line (highlighted) from the jQuery library.

It is well worth spending the time to get to know source maps—all the latest browsers support them, including IE11. Even though we've only used a simple example here, the principle is exactly the same no matter how much code is used in the site. For a more in-depth tutorial that covers all the browsers, it is worth heading over to http://blogs.msdn.com/b/davrous/archive/2014/08/22/enhance-your-javascript-debugging-life-thanks-to-the-source-map-support-available-in-ie11-chrome-opera-amp-firefox.aspx—it is worth a read!

Adding support for source maps

In the demo we've just previewed, source map support had already been added to the library. It is worth noting, though, that source maps are not included with the current versions of jQuery by default. If you need to download a more recent version or add support for the first time, then follow these steps:

Source maps can be downloaded from the main site using http://code.jquery.com/jquery-X.X.X.min.map, where X.X.X represents the version number of jQuery being used.

Open a copy of the minified version of the library and then add this line at the end of the file:

//# sourceMappingURL=jquery.min.map

Save it and then store it in the JavaScript folder of your project. Make sure you have copies of both the compressed and uncompressed versions of the library within the same folder.

Let's move on and look at one more critical part of loading jQuery: if, for some unknown reason, jQuery becomes completely unavailable, then we can add a fallback position to our site that allows graceful degradation. It's a small but crucial part of any site, and presents a better user experience than your site simply falling over!

Working with Modernizr as a fallback

A best practice when working with jQuery is to ensure that a fallback is provided for the library, should the primary version not be available. (Yes, it's irritating when it happens, but it can happen!) Typically, we might use a little JavaScript for this, such as the document.write fallback shown in the best practice suggestions later in this article. That would work perfectly well but doesn't provide a graceful fallback. Instead, we can use Modernizr to perform the check for us and provide a graceful degradation if all fails. Modernizr is a feature detection library for HTML5/CSS3, which can be used to provide a standardized fallback mechanism in the event of a functionality not being available. You can learn more at http://www.modernizr.com.

As an example, the code might look like this at the end of our website page. We first try to load jQuery using the CDN link, falling back to a local copy if that hasn't worked, or an alternative if both fail:

<body>
<script src="js/modernizr.js"></script>
<script type="text/javascript">
Modernizr.load([{
  load: 'http://code.jquery.com/jquery-2.1.1.min.js',
  complete: function () {
    // Confirm if jQuery was loaded using the CDN link;
    // if not, fall back to the local version
    if ( !window.jQuery ) {
      Modernizr.load('js/jquery-latest.min.js');
    }
  }
},
// This script would wait until the fallback is loaded, before loading
{ load: 'jquery-example.js' }
]);
</script>
</body>

In this way, we can ensure that jQuery either loads locally or from the CDN link—if all else fails, then we can at least make a graceful exit.
Best practices for loading jQuery

So far, we've examined several ways of loading jQuery into our pages, over and above the usual route of downloading the library locally or using a CDN link in our code. Now that we have it installed, it's a good opportunity to cover some of the best practices we should try to incorporate into our pages when loading jQuery:

Always try to use a CDN to include jQuery on your production site. We can take advantage of the high availability and low latency offered by CDN services; the library may already be precached too, avoiding the need to download it again.

Try to implement a fallback on your locally hosted library of the same version. If CDN links become unavailable (and they are not 100 percent infallible), then the local version will kick in automatically, until the CDN link becomes available again:

<script type="text/javascript" src="//code.jquery.com/jquery-1.11.1.min.js"></script>
<script>window.jQuery || document.write('<script src="js/jquery-1.11.1.min.js"><\/script>')</script>

Note that although this will work equally well as using Modernizr, it doesn't provide a graceful fallback if both versions of jQuery should become unavailable. Although one hopes never to be in this position, at least we can use CSS to provide a graceful exit!

Use protocol-relative/protocol-independent URLs; the browser will automatically determine which protocol to use. If HTTPS is not available, then it will fall back to HTTP. If you look carefully at the code in the previous point, it shows a perfect example of a protocol-independent URL, with the call to jQuery from the main jQuery Core site.

If possible, keep all your JavaScript and jQuery inclusions at the bottom of your page—scripts block the rendering of the rest of the page until they have been fully rendered.

Use the jQuery 2.x branch, unless you need to support IE6-8; in this case, use jQuery 1.x instead—do not load multiple jQuery versions.

If you load jQuery using a CDN link, always specify the complete version number you want to load, such as jquery-1.11.1.min.js.

If you are using other libraries, such as Prototype, MooTools, Zepto, and so on, that use the $ sign as well, try not to use $ to call jQuery functions and simply use jQuery instead. You can return control of $ back to the other library with a call to the $.noConflict() function (see the short sketch at the end of this section).

For advanced browser feature detection, use Modernizr.

It is worth noting that there may be instances where it isn't always possible to follow best practices; circumstances may dictate that we need to make allowances for requirements, where best practices can't be used. However, this should be kept to a minimum where possible; one might argue that there are flaws in our design if most of the code doesn't follow best practices!
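Here is that $.noConflict() pattern as a minimal sketch; the jq alias and the selector are hypothetical choices for this illustration, while the pattern itself is standard documented jQuery behavior:

// Hand the $ shortcut back to another library (e.g. Prototype),
// keeping jQuery usable through an alias of our own.
var jq = jQuery.noConflict();

jq(function () {
  // Use the alias exactly as you would use $
  jq('.example').text('jQuery is still available via the jq alias');
});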
Summary

If you thought that the only methods to include jQuery were via a manual download or using a CDN link, then hopefully this article has opened your eyes to some alternatives—let's take a moment to recap what we have learned.

We kicked off with a customary look at how most developers are likely to include jQuery, before quickly moving on to look at other sources. We started with a look at how to use Node, before turning our attention to using the Bower package manager. Next, we had a look at how we can reference individual modules within jQuery using the AMD approach. We then moved on and turned our attention to creating custom builds of the library using Git. We then covered how we can use source maps to debug our code, with a look at enabling support for them within Google's Chrome browser. To round out our journey of loading jQuery, we saw what might happen if we can't load jQuery at all, and how we can get around this by using Modernizr to allow our pages to degrade gracefully. We then finished the article with some of the best practices that we can follow when referencing jQuery.

Resources for Article:

Further resources on this subject:

Using different jQuery event listeners for responsive interaction [Article]
Building a Custom Version of jQuery [Article]
Learning jQuery [Article]


Installing PHP-Nuke

Packt
22 Feb 2010
10 min read
The steps to install and configure PHP-Nuke are simple:

Download and extract the PHP-Nuke files.
Download and apply ChatServ's patches.
Create the database for PHP-Nuke.
Create a database user, and fill the database with data.
Make some simple changes to the PHP-Nuke configuration file.
Copy the PHP-Nuke files to the document root of the web server.
Test it out!

Let's get started.

Downloading PHP-Nuke

The latest version of PHP-Nuke can be downloaded from the phpnuke.org downloads page:

http://www.phpnuke.org/modules.php?name=Downloads&d_op=viewdownload&cid=1

You can also obtain older versions of PHP-Nuke, including version 1.0, from SourceForge:

http://sourceforge.net/project/showfiles.php?group_id=7511&package_id=7622

SourceForge is the world's largest home of open-source projects. Many projects use SourceForge's facilities to host and maintain their projects. You can find almost anything you want on SourceForge—whether it is in a usable state or has been updated recently is another matter.

Extracting PHP-Nuke

Once you have downloaded PHP-Nuke, you should extract the contents of the PHP-Nuke ZIP archive to the root of your c: drive. You will have to create a folder called PHP-Nuke-7.8 in the root of your c: drive. (If you extract the files elsewhere, create the folder PHP-Nuke-7.8 and copy the contents of the main unzipped folder to this new folder.) If you don't have a tool for extracting the files, you can download an evaluation edition (or buy a full edition) of WinZip from www.winzip.com. There are also free, powerful extracting tools such as ZipGenius (http://www.zipgenius.it/index_eng.htm) and 7-Zip (http://sourceforge.net/projects/sevenzip/), among others.

In the PHP-Nuke-7.8 folder, you will find three subfolders called html, sql, and upgrades. The upgrades folder contains scripts that handle upgrading the database between different versions, the sql folder contains the definition of the PHP-Nuke database that we will be working with, and the html folder contains the guts of your PHP-Nuke installation. The html folder contains all the PHP scripts, HTML files, images, CSS stylesheets, and so on that drive PHP-Nuke. Within the html folder are further subfolders; some of these include:

modules: Contains the modules that make up your PHP-Nuke site. Modules are the essence of PHP-Nuke's operation; we look at them from the article Your First Page onwards.
blocks: Contains PHP-Nuke's blocks. Blocks are 'mini-functionality' units and usually provide snippet views of modules. We will look at blocks in the article Managing the Site.
language: Contains PHP-Nuke language files. These allow the language of PHP-Nuke's interface to be changed.
images: Contains images used in the display of the PHP-Nuke site.
themes: Contains the themes for PHP-Nuke. The use of themes allows you to completely change the look of a PHP-Nuke site with the click of a button.
includes, db: Contain code to support the running of PHP-Nuke. The db folder, for example, contains database access code.
admin: Contains code to power the administration area of your site.

Downloading the Patches

No software is without its flaws, and PHP-Nuke is no exception. After a release, the large user community invariably finds problems and potential security holes. Furthermore, PHP-Nuke also contains features such as its forum, which is in fact the phpBB application specially modified to work with PHP-Nuke.
phpBB itself is updated on a regular basis to correct critical security vulnerabilities or to fix other problems, and consequently the corresponding part of PHP-Nuke also needs to be updated. Rather than releasing a new version of PHP-Nuke for these situations, patches for its various parts are released. ChatServ's patches from www.nukeresources.com are mostly concerned with variable validation, in other words, making sure that the variables used in the application are of the right type for storing in the database. This has been an area of weakness with many earlier versions of PHP-Nuke. The patches are often incorporated into subsequent versions of PHP-Nuke, so that each new version becomes more robust.

Note that you don't have to apply the patches, and PHP-Nuke will still work without them. However, by applying them you will have taken a good step towards improving the security of your site. If you navigate to http://www.nukeresources.com, there is a handy menu on the front page to access the patches. To obtain the patch corresponding to your version, click the link and you will be taken to the relevant file (of course, www.nukeresources.com is a PHP-Nuke-powered site!). Click on the Nuke 7.8 link to go to the Downloads page of www.nukeresources.com. On this page, clicking the Download this file Now! link will download the patches for PHP-Nuke 7.8. The name of this file will be of the form 78patched.tar.gz. This is a GZIP-compressed file that contains all the patches that we are about to apply; it can be extracted with WinZip, or any of the other utilities we discussed earlier. The patches are simply modified versions of the original PHP-Nuke files. The original files have been modified to address various security issues that may have been identified since the initial release, or maybe since the last version of the patch.

Applying the Patches

To apply the patches, first we need to extract the 78patched.tar.gz file. We will extract the files into a folder called patches that we will create in the PHP-Nuke-7.8 folder. After extracting the files, copy the contents of the patches folder to your html folder. Do not copy the patches folder itself; copy its contents. The patches folder contains files that replace the files in the default installation, so you will get a Confirm File Replace window. Select Yes for all the files, and when the copying is complete, your installation is ready to go.

We have performed this patching immediately after installing PHP-Nuke, but we could have done it at any time. Doing it at this point is more sensible, as it means that we are working on the most secure version of PHP-Nuke. Also, the patch process we have described here overwrites existing PHP-Nuke installation files. If you have modified these files, then the changes will be lost on applying the patch. Thus, applying the patches later without disturbing any of your changes becomes more demanding.

There is one further thing to watch for after applying the patches. You may find that the patched files have had their permissions set to read-only, and that you are unable to modify them. To modify the files (and we do have to modify at least the config.php file in this article), you will need to remove this setting. You can do this on Windows by right-clicking on the file, selecting Properties from the menu, unchecking the Read-only setting, and clicking the OK button.

We've done almost all the work with the files that we need to; now we turn our attention to creating and populating PHP-Nuke's database.
Preparing the PHP-Nuke Database

We'll be using the phpMyAdmin tool to do our database work. phpMyAdmin is part of the XAMPP installation (detailed in Appendix A), or it can be downloaded from www.phpmyadmin.net if you don't already have it. phpMyAdmin provides a powerful web interface for working with your MySQL databases.

First of all, open your browser and navigate to http://localhost/phpmyadmin/, or wherever your phpMyAdmin installation is located.

Creating the Database

We need to create an empty database for PHP-Nuke to hold all the data about our site. To do this, we simply enter a name for our database into the Create new database textbox. We will call our database nuke. Enter this, and click the Create button.

The name you give doesn't particularly matter, as long as it is not the name of an already existing database. If you try to use the name of an existing database, phpMyAdmin will inform you of this, and no action will be taken. The exact name isn't particularly important at this point because there is another configuration step coming up, which requires us to tell PHP-Nuke the name of the database we've created for it. After clicking Create, the screen will reload and you will be notified of the successful creation of your database.

Creating a Database User

Before we start populating the database, we will create a database user that can access only the PHP-Nuke database. This user is not a human, but will be used by PHP-Nuke to connect to the database while it performs its data-handling activities. The advantage of creating a database user is that it adds an extra level of security to our installation. PHP-Nuke will be able to work only with the data in this database of the MySQL server, and no other. Also, PHP-Nuke will be restricted in the operations it can perform on the tables in the database.

We will need to create a username for this boxed-in user to access the nuke database. Let's call our user nuker and go with the password nukepassword. However, in order to add an extra level of security we will introduce some digits into nukepassword, and some other slight twists, to strengthen it, and so use the word No0kPassv0rd as our database user password.

To create the database user, click the SQL tab, and enter the following into the Run SQL query/queries on database textbox:

GRANT ALL PRIVILEGES ON nuke.* TO nuker@localhost IDENTIFIED BY 'No0kPassv0rd' WITH GRANT OPTION

Click the Go button, and the database user will be created.

Populating the Database

Now we are ready to fill our database with data for PHP-Nuke. This doesn't mean we start typing the data in ourselves; the data comes with the PHP-Nuke installation. It is found in a file called nuke.sql in the sql folder of the PHP-Nuke installation. This file contains a number of SQL statements that define the tables within the database and also fill them with 'raw' data for the site. However, before we fill the database with the tables from this file, we need to make a modification to it.

By default, the name of each database table in PHP-Nuke begins with nuke_. For example, there is a table with the name nuke_stories that holds information about stories, and a table called nuke_topics that holds information about story topics. These are just two of the tables; there are more than 90 in the standard installation.
The word nuke_ is a 'table prefix'. It is used to ensure that there are no clashes between the names of PHP-Nuke's tables and the tables of another application in the same database: the rest of the table name is descriptive of the data stored in the table, and other applications may have similarly named tables. What this does mean is that unless the table prefix is changed, the table names in your PHP-Nuke database will be known to anyone attempting to hack your site. Many of the typical attacks used to damage PHP-Nuke are based around the fact that the names of the tables in the database powering a PHP-Nuke site are known. By changing the table prefix to something less obvious, you take another step towards making your site more secure.

Vue.js 3.0 is ditching JavaScript for TypeScript. What else is new?

Bhagyashree R
01 Oct 2018
5 min read

Last week, Evan You, the creator of Vue.js, gave a summary of what to expect in the coming major release of Vue.js 3.0. To provide better support for TypeScript, the codebase is being rewritten in TypeScript, leaving behind vanilla JS. The new codebase currently targets evergreen browsers such as Google Chrome, and assumes baseline native ES2015 support. Let's see what else this major iteration will bring:

High-level API changes

Template syntax will not see much change, except some tweaks in the scoped slots syntax.
Vue.js 3.0 will come with native support for class-based components. This will provide users with an API that is pleasant to use in native ES2015, without the need for any transpilation or stage-x features.
The Vue.js 3.x codebase will be written in TypeScript, providing improved support for TypeScript.
Support for the 2.x object-based component format will be provided by internally transforming the object to a corresponding class.
Functional components can now be plain functions; however, async components will need to be explicitly created via a helper function.
The virtual DOM format used in render functions will see major changes. Upgrading will be easier if you don't heavily rely on handwritten (non-JSX) render functions in your app.
Mixins will still be supported.

Cleaner and more maintainable source code architecture

To make contributing to Vue.js easier, Vue.js 3.0 is being rewritten from the ground up for a cleaner and more maintainable architecture. To do this, the developers are breaking some internal functionalities into individual packages to isolate the scope of complexity. For example, the observer module will be converted to its own package, with its own public API and tests. As mentioned earlier, the codebase is being rewritten in TypeScript. This makes proficiency in TypeScript a primary prerequisite for contributing to the new codebase. However, the type information and IDE support will enable a new contributor to easily make meaningful contributions.

Proxy-based observation mechanism

Vue.js 3.0 will come with a Proxy-based observer implementation that provides reactivity tracking with full language coverage. This aims to eliminate a number of limitations of the current implementation in Vue.js 2, which is based on Object.defineProperty:

Detection of property addition or deletion
Detection of Array index mutation or .length mutation
Support for Map, Set, WeakMap and WeakSet

Additionally, this new observer will have the following features:

Exposed API for creating observables: This provides a lightweight and simple cross-component state management solution for small to medium scale scenarios.
Lazy observation by default: In Vue.js 3.x, only the data used to render the initially visible part of an app will need to be observed. This eliminates the overhead on app startup if your dataset is huge.
Immutable observables: Immutable versions of a value can be created to prevent mutations even on nested properties, except when the system temporarily unlocks it internally.
Better debugging capabilities: Two new hooks, renderTracked and renderTriggered, are added. These will help you precisely trace when and why a component re-render is tracked or triggered.
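
To see why Proxies remove the defineProperty limitations listed above, here is a minimal, framework-agnostic sketch of Proxy-based change tracking. This is our own illustration, not Vue's actual implementation; the observe and onChange names are invented for the example.

// A minimal sketch of Proxy-based observation (illustrative only, not Vue's code).
// A Proxy can intercept any property set or deletion, including properties
// added after creation, which Object.defineProperty cannot do.
function observe(target, onChange) {
  return new Proxy(target, {
    set(obj, key, value) {
      obj[key] = value;
      onChange('set', key, value); // fires even for brand-new properties
      return true;
    },
    deleteProperty(obj, key) {
      delete obj[key];
      onChange('delete', key); // deletions are detectable too
      return true;
    }
  });
}

const state = observe({ count: 0 }, function (op, key, value) {
  console.log(op, key, value);
});

state.count = 1;      // set count 1
state.newProp = 'x';  // set newProp x (defineProperty-based tracking would miss this)
delete state.count;   // delete count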
Other runtime improvements

Smaller runtime

The new codebase is designed to be tree-shaking friendly. The built-in components and directive runtime helpers will be imported on-demand and are tree-shakable. As a result, the constant baseline size for the new runtime is <10kb gzipped.

Improved performance

In initial benchmarks, the developers are observing up to 100% performance improvement across the board. Vue.js 3.0 will reduce the time spent in JavaScript when your app boots up.

Built-in support for Fragments and Portals

Vue 3.0 will come with built-in support for Fragments and Portals. Fragments are components returning multiple root nodes. Portals are introduced to render a sub-tree in another part of the DOM, instead of inside the component.

Improved slots mechanism

All compiler-generated slots are now functions and are invoked during the child component's render call. This ensures that dependencies in slots are collected as dependencies of the child instead of the parent. This means that:

When slot content changes, only the child re-renders
When the parent re-renders, the child does not have to if its slot content did not change

This improvement provides even more precise change detection at the component tree level.

Custom Renderer API

Using this API you will be able to create custom renderers. With this API, it will be easier for render-to-native projects like Weex and NativeScript Vue to stay up-to-date with upstream changes. This API will also make the creation of custom renderers for various other purposes much easier.

Along with these, the team has announced a few compiler improvements and IE11 support. They haven't revealed any date yet, but we can expect Vue.js 3.0 to release in 2019. To know more, check out the official announcement on Medium.

Vue CLI 3.0 is here as the standard build toolchain behind Vue applications
Introducing Vue Native for building native mobile apps with Vue.js
Testing Single Page Applications (SPAs) using Vue.js developer tools

How to Setup PostgreSQL with Node.js

Antonio Cucciniello
14 Feb 2017
7 min read

Have you ever wanted to add a PostgreSQL database to the backend of your web application? If so, by the end of this tutorial, you should have a PostgreSQL database up and running with your Node.js web application.

PostgreSQL is a popular open source relational database. This tutorial assumes that you have Node and NPM installed on your machine; if you need help installing them, check out this link. First, let's download PostgreSQL.

PostgreSQL

I am writing and testing this tutorial on a Mac, so it will primarily cater to Mac users, but I will include links in the reference section for downloading PostgreSQL on select Linux distributions and Windows as well. If you are on a Mac, however, you can follow these steps.

First, you must have Homebrew. If you do not have it, you may install it by following the directions here. Once Homebrew is installed and working, you can run the following:

$ brew update
$ brew install postgres

This downloads and installs PostgreSQL for you.

Command-line Setup

Now, open a new instance of a terminal by pressing Command+T. Once you have the new tab, you can start a PostgreSQL server with the command:

postgres -D /usr/local/var/postgres

This allows you to use Postgres locally and gives you a logger for all of the commands you run on your databases.

Next, open a new instance of a terminal with Command+T and enter:

$ psql

This is similar to a command center for Postgres. It allows you to create things in your database and plenty more. You can manually enter commands here to set up your environment.

For this example, we will create a database called example. To do that, while in the terminal tab with psql running, enter:

CREATE DATABASE example;

To confirm that the database was made, you should see CREATE DATABASE as the output. Also, to list all databases, you would use \l. Then, we will want to connect to our new database with the command \connect example. This should give you the following message, telling you that you are connected:

You are connected to database "example" as user

In order to store things in this database, we need to create a table. Enter the following:

CREATE TABLE numbers(
   age integer
);

This format is probably confusing if you have never seen it before. It tells Postgres to create a table in this database called numbers, with one column called age, where all items in the age column will be of the data type integer. This should give us the output CREATE TABLE, and if you want to list all tables in a database, you should use the \dt command.

For the sake of this example, we are going to add a row to the table so that we have some data to play with and prove that this works. When you want to add something to a database in PostgreSQL, you use the INSERT command. Enter this command to have the first row in the table equal to 732:

INSERT INTO numbers VALUES (732);

This should give you an output of INSERT 0 1. To check the contents of the table, simply type TABLE numbers;.

Now that we have a database up and running with a table and a value, we can set up our code to access this table and pull the value from it.

Code Setup

In order to follow this example, you will need to make sure that you have the following packages: pg, pg-format, and express. Enter the project directory you plan on working in (and where you have Node and NPM installed).

For pg, use npm install pg. This is a Postgres client for Node.
For pg-format, use npm install pg-format. This allows us to safely make dynamic SQL queries.
For express, use npm install express --save. This allows us to create a quick and basic server.

Now that those packages are installed, we can code!

Actual Code

Let's create a file called app.js as the main entry point of our program. At the top, establish your variables:

const express = require('express')
const app = express()
var pg = require('pg')
var format = require('pg-format')
var PGUSER = 'yourUserName'
var PGDATABASE = 'example'
var age = 732

The first two lines allow us to use the package express and help us make our server. The next two lines allow us to use the packages pg and pg-format. PGUSER is a variable that holds the user for your database; enter your username here in place of yourUserName. PGDATABASE is a variable to hold the name of the database that we would like to connect to. The last variable holds the number that we stored in the database.

Next, add this:

var config = {
  user: PGUSER, // name of the user account
  database: PGDATABASE, // name of the database
  max: 10, // max number of clients in the pool
  idleTimeoutMillis: 30000 // how long a client is allowed to remain idle before being closed
}
var pool = new pg.Pool(config)
var myClient

Here, we establish a config object that tells pg we want to connect to the specified database as the specified user, with a maximum of 10 clients in the client pool, and a timeout of 30,000 milliseconds before an idle client is disconnected from the database. Then, we create a new pool of clients according to that config object. Afterwards, we create a variable called myClient to store the client we get from the database in the next step.

Now, enter the last bit of code here:

pool.connect(function (err, client, done) {
  if (err) console.log(err)
  app.listen(3000, function () {
    console.log('listening on 3000')
  })
  myClient = client
  var ageQuery = format('SELECT * from numbers WHERE age = %L', age)
  myClient.query(ageQuery, function (err, result) {
    if (err) {
      console.log(err)
    }
    console.log(result.rows[0])
  })
})

This tries to connect to the database with one of the clients from the pool. If a client successfully connects to the database, we start our server by listening on a port (here, I use 3000). Then, we get access to our client. Next, we create a variable called ageQuery to make a dynamic SQL query. A query is a command to a database. Here, we are making a SELECT query to the database, checking all rows in the table called numbers where the age column is equal to 732. If that is a successful query (meaning it finds a row with 732 as the value), then we log the answer.

It's now time to test all your hard work! Save the file and run the command in a terminal:

node app.js

Your output should look like this:

listening on 3000
{ age: 732 }
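
Before wrapping up, it is worth noting that the same pool can also write to the table using parameterized queries. The following is a small sketch of ours under the same setup; the value 100 is arbitrary:

// A sketch of inserting a row through the same pool.
// $1 is a pg parameter placeholder, so the value is passed in safely.
pool.query('INSERT INTO numbers VALUES ($1)', [100], function (err, result) {
  if (err) {
    console.log(err)
  } else {
    console.log('inserted one row into numbers')
  }
})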
Conclusion

There you go! You now have a PostgreSQL database connected to your web app. To summarize our work, here is a quick breakdown of what happened:

We installed PostgreSQL through Homebrew.
We started our local PostgreSQL server.
We opened psql in a terminal to use commands manually.
We created a database called example.
We created a table in that database called numbers.
We added a value to that table.
We installed pg, pg-format, and express.
We used Express to create a server.
We created a pool of clients using a config object to access the database.
We queried the table in the database for 732.
We logged the value.

Check out the code for this tutorial on GitHub.

About the Author

Antonio Cucciniello is a software engineer from New Jersey with a background in C, C++, and JavaScript (Node.js). His most recent project, Edit Docs, is an Amazon Echo skill that allows users to edit Google Drive files using their voice. He loves building cool things with software and reading books on self-help and improvement, finance, and entrepreneurship. You can find Antonio on Twitter @antocucciniello and on GitHub.

Role of AngularJS

Packt
16 Dec 2014
7 min read

In this article by Sandeep Kumar Patel, author of Responsive Web Design with AngularJS, we will explore the role of AngularJS in responsive web development. Before going into AngularJS, you will learn about responsive web development in general. Responsive web development can be performed in two ways:

Using the browser sniffing approach
Using the CSS3 media queries approach

(For more resources related to this topic, see here.)

Using the browser sniffing approach

When we view web pages through our browser, the browser sends a user agent string to the server. This string provides information like browser and device details. Reading these details, the browser can be redirected to the appropriate view. This method of reading client details is known as browser sniffing. The browser string carries a lot of different information about the source from where the request was generated. Details of the parameters present in the user agent string are as follows:

Browser name: This represents the actual name of the browser from where the request has originated, for example, Mozilla or Opera.
Browser version: This represents the browser release version from the vendor; for example, Firefox has the latest version 31.
Browser platform: This represents the underlying engine on which the browser is running, for example, Trident or WebKit.
Device OS: This represents the operating system running on the device from where the request has originated, for example, Linux or Windows.
Device processor: This represents the processor type on which the operating system is running, for example, 32 or 64 bit.

A different browser string is generated based on the combination of the device and the type of browser used while accessing a web page. The following table shows examples of browser strings:

Browser: Firefox | Device: Windows desktop | User agent string: Mozilla/5.0 (Windows NT 5.1; rv:31.0) Gecko/20100101 Firefox/31.0
Browser: Chrome | Device: OS X 10 desktop | User agent string: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/29.0.1547.66 Safari/537.36
Browser: Opera | Device: Windows desktop | User agent string: Opera/9.80 (Windows NT 6.0) Presto/2.12.388 Version/12.14
Browser: Safari | Device: OS X 10 desktop | User agent string: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_6_8) AppleWebKit/537.13+ (KHTML, like Gecko) Version/5.1.7 Safari/534.57.2
Browser: Internet Explorer | Device: Windows desktop | User agent string: Mozilla/5.0 (compatible; MSIE 10.6; Windows NT 6.1; Trident/5.0; InfoPath.2; SLCC1; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729; .NET CLR 2.0.50727) 3gpp-gba UNTRUSTED/1.0

AngularJS has features like providers or services which can be very useful for this browser user-agent sniffing and redirection approach. An AngularJS provider can be created and used in the configuration of the routing module. This provider can have reusable properties and methods that can be used to identify the device and route the specific request to the appropriate template view. To discover more about user agent strings on various browser and device combinations, visit http://www.useragentstring.com/pages/Browserlist/.
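
To make that idea concrete, here is a minimal sketch of such a service; it is our own illustration rather than code from the book, and the deviceDetector and isMobile names are invented for the example:

// Illustrative sketch: an AngularJS service that sniffs the user agent string.
angular.module('myApp', [])
  .factory('deviceDetector', function ($window) {
    var ua = $window.navigator.userAgent;
    return {
      raw: ua,
      isMobile: /Mobile|Android|iPhone|iPad/i.test(ua)
    };
  })
  .run(function (deviceDetector) {
    // A routing configuration could branch on this flag
    // to pick a device-specific template view.
    console.log('Mobile device?', deviceDetector.isMobile);
  });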
CSS3 media queries approach

CSS3 brings a new horizon to web application development. One of its key features is media queries, used to develop responsive web applications. Media queries use media types and features as the deciding parameters for applying a style to the current web page.

Media type

CSS3 media queries provide rules for media types, so that different styles can be applied to a web page per type. The media queries specification lists the media types that an implementing browser should support. These media types are as follows:

all: This is used for all media type devices.
aural: This is used for speech and sound synthesizers.
braille: This is used for braille tactile feedback devices.
embossed: This is used for paged braille printers.
handheld: This is used for small or handheld devices, for example, mobile.
print: This is used for printers, for example, an A4-size paper document.
projection: This is used for projection-based devices, such as a projector screen with a slide.
screen: This is used for computer screens, for example, desktop and laptop screens.
tty: This is used for media using a fixed-pitch character grid, such as teletypes and terminals.
tv: This is used for television-type devices, for example, webOS- or Android-based televisions.

A media rule can be declared using the @media keyword with the specific type for the targeted media. The following code shows an example of media rule usage, where the body background color is black and the text is white for the screen media type, and the body background color is white and the text is black for the print media type:

@media screen {
  body {
    background: black;
    color: white;
  }
}

@media print {
  body {
    background: white;
    color: black;
  }
}

An external style sheet can be downloaded and applied to the current page based on the media type, using the HTML link tag. The following code uses the link tag in conjunction with a media type:

<link rel='stylesheet' media='screen' href='<fileName.css>' />

To learn more about different media types, visit https://developer.mozilla.org/en-US/docs/Web/CSS/@media#Media_types.
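
Media queries are not limited to style sheets; they can also be evaluated from JavaScript through the standard window.matchMedia API, which can be handy in an AngularJS application. A small sketch of ours:

// Evaluating a media query from JavaScript with window.matchMedia.
var mql = window.matchMedia('(orientation: portrait)');
console.log('Portrait viewport?', mql.matches);

// React when the result of the query changes.
mql.addListener(function (event) {
  console.log('Portrait now?', event.matches);
});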
Media feature

Conditional styles can be applied to a page based on different features of a device. The features that are supported by CSS3 media queries for applying styles are as follows:

color: Styles can be applied based on the number of bits used for a color component by the device.
color-index: Styles can be applied based on the color lookup table.
aspect-ratio: Styles can be applied based on the aspect ratio of the display area.
device-aspect-ratio: Styles can be applied based on the device aspect ratio.
device-height: Styles can be applied based on the device height. This includes the entire screen.
device-width: Styles can be applied based on the device width. This includes the entire screen.
grid: Styles can be applied based on the device type: bitmap or grid.
height: Styles can be applied based on the height of the device rendering area.
monochrome: Styles can be applied based on the monochrome type. This represents the number of bits used by the device in grey scale.
orientation: Styles can be applied based on the viewport mode: landscape or portrait.
resolution: Styles can be applied based on the pixel density.
scan: Styles can be applied based on the scanning type used by the device for rendering.
width: Specific styles can be applied based on the device screen width.

The following code shows some examples of CSS3 media queries using different device features for conditional styles:

/* for screen devices with a minimum aspect ratio of 0.5 */
@media screen and (min-aspect-ratio: 1/2) {
  img {
    height: 70px;
    width: 70px;
  }
}

/* for all devices in portrait viewport */
@media all and (orientation: portrait) {
  img {
    height: 100px;
    width: 200px;
  }
}

/* for printer devices with a minimum resolution of 300dpi pixel density */
@media print and (min-resolution: 300dpi) {
  img {
    height: 600px;
    width: 400px;
  }
}

To learn more about different media features, visit https://developer.mozilla.org/en-US/docs/Web/CSS/@media#Media_features.

Summary

In this article, you learned about responsive design and the SPA architecture. You now understand the role of the AngularJS library when developing a responsive application. We quickly went through all the important features of AngularJS with coded syntax. In the next chapter, you will set up your AngularJS application and learn to create dynamic routing based on the device.

Resources for Article:

Further resources on this subject:

Best Practices for Modern Web Applications [article]
Important Aspect of AngularJS UI Development [article]
A look into responsive design frameworks [article]

Using Handlebars with Express

Packt
10 Aug 2015
17 min read

In this article written by Paul Wellens, author of the book Practical Web Development, we cover a brief description of the following topics:

Templates
Node.js
Express 4

Templates

Templates come in many shapes or forms. Traditionally, they are non-executable files with some pre-formatted text, used as the basis of a bazillion documents that can be generated with a computer program. I worked on a project where I had to write a program that would take a Microsoft Word template, containing parameters like $first, $name, $phone, and so on, and generate a specific Word document for every student in a school.

Web templating does something very similar. It uses a templating processor that takes data from a source of information, typically a database, and a template, a generic HTML file with some kind of parameters inside. The processor then merges the data and template to generate a bunch of static web pages, or dynamically generates HTML on the fly.

If you have been using PHP to create dynamic webpages, you have been busy with web templating. Why? Because you have been inserting PHP code inside HTML files in between the <?php and ?> strings. Your templating processor was the Apache web server, which has many additional roles. By the time your browser gets to see the result of your code, it is pure HTML. This makes it an example of server-side templating. You could also use Ajax and PHP to transfer data in the JSON format and then have the browser process that data using JavaScript to create the HTML you need. Combine this with using templates and you have client-side templating.

Node.js

What Le Sacre du Printemps by Stravinsky did to the world of classical music, Node.js may have done to the world of web development. At its introduction, it shocked the world. By now, Node.js is considered by many as the coolest thing. Just like Le Sacre is a totally different kind of music—but by now every child who has seen Fantasia has heard it—Node.js is a different way of doing web development.

Rather than writing an application and using a web server to soup up your code to a browser, the application and the web server are one and the same. This may sound scary, but you should not worry, as there is an entire community that has developed modules you can obtain using the npm tool.

Before showing you an example, I need to point out an extremely important feature of Node.js: the language in which you will write your web server and application is JavaScript. So Node.js gives you server-side JavaScript.

Installing Node.js

How to install Node.js will differ depending on your OS, but the result is the same everywhere. It gives you two programs: node and npm.

npm

The node packaging manager (npm) is the tool that you use to look for and install modules. Each time you write code that needs a module, you will have to add a line like this:

var module = require('module');

The module will have to be installed first, or the code will fail. This is how it is done:

npm install module

or

npm -g install module

The latter will attempt to install the module globally, the former in the directory where the command is issued. It will typically install the module in a folder called node_modules.

node

The node program is the command to use to start your Node.js program, for example:

node myprogram.js

node will start and interpret your code. Type Ctrl-C to stop node.
Now create a file myprogram.js containing the following text:

var http = require('http');

http.createServer(function (req, res) {
  res.writeHead(200, {'Content-Type': 'text/plain'});
  res.end('Hello World\n');
}).listen(8080, 'localhost');

console.log('Server running at http://localhost:8080');

So, if you installed Node.js and the required http module, typing node myprogram.js in a terminal window will start up a web server from your console. And, when you type http://localhost:8080 in a browser, you will see the world-famous two-word program example on your screen. This is the equivalent of getting the It works! thing after testing your Apache web server installation. As a matter of fact, if you go to http://localhost:8080/it/does/not/matterwhat, the same will appear. Not very useful maybe, but it is a web server.

Serving up static content

This does not work in a way we are used to. URLs typically point to a file (or a folder, in which case the server looks for an index.html file), foo.html, or bar.php, and when present, it is served up to the client. So what if we want to do this with Node.js? We will need a module. Several exist to do the job. We will use node-static in our example. But first we need to install it:

npm install node-static

In our app, we will now create not only a web server, but a file server as well. It will serve all the files in the local directory public. It is good to have all the so-called static content together in a separate folder. These are basically all the files that will be served up to and interpreted by the client. As we will now end up with a mix of client code and server code, it is a good practice to separate them. When you use the Express framework, you have an option to have Express create these things for you.

So, here is a second, more complete Node.js example, including all its static content.

hello.js, our Node.js app:

var http = require('http');
var static = require('node-static');
var fileServer = new static.Server('./public');

http.createServer(function (req, res) {
  fileServer.serve(req, res);
}).listen(8080, 'localhost');

console.log('Server running at http://localhost:8080');

hello.html is stored in ./public:

<!DOCTYPE html>
<html>
<head>
<meta charset="UTF-8" />
<title>Hello world document</title>
<link href="./styles/hello.css" rel="stylesheet">
</head>
<body>
<h1>Hello, World</h1>
</body>
</html>

hello.css is stored in public/styles:

body {
  background-color: #FFDEAD;
}
h1 {
  color: teal;
  margin-left: 30px;
}
.bigbutton {
  height: 40px;
  color: white;
  background-color: teal;
  margin-left: 150px;
  margin-top: 50px;
  padding: 15px 15px 25px 15px;
  font-size: 18px;
}

So, if we now visit http://localhost:8080/hello, we will see our, by now too familiar, Hello World message with some basic styling, proving that our file server also delivered the CSS file. You can easily take it one step further and add JavaScript and the jQuery library and put them in, for example, public/js/hello.js and public/js/jquery.js respectively.

Too many notes

With Node.js, you only install the modules that you need, so it does not include the kitchen sink by default! You will benefit from that as far as performance goes. Back in California, I was a proud product manager of a PC UNIX product, and one of our coolest value-adds was a tool, called kconfig, that would allow people to customize what would be inside the UNIX kernel, so that it would only contain what was needed. This is what Node.js reminds me of. And it is written in C, as was UNIX. Deja vu.
However, if we wanted our Node.js web server to do everything the Apache Web Server does, we would need a lot of modules. Our application code needs to be added to that as well. That means a lot of modules. Like the critics in the movie Amadeus said: Too many notes.

Express 4

A good way to get the job done with fewer notes is by using the Express framework. On the expressjs.com website, it is called a minimal and flexible Node.js web application framework, providing a robust set of features for building web applications. This is a good way to describe what Express can do for you. It is minimal, so there is little overhead for the framework itself. It is flexible, so you can add just what you need. It gives a robust set of features, which means you do not have to create them yourself, and they have been tested by an ever-growing community.

But we need to get back to templating, so all we are going to do here is explain how to get Express, and give one example.

Installing Express

As Express is also a node module, we install it as such. In your project directory for your application, type:

npm install express

You will notice that a folder called express has been created inside node_modules, and inside that one, there is another collection of node_modules. These are examples of what is called middleware.

In the code example that follows, we assume app.js as the name of our JavaScript file, and app for the variable that you will use in that file for your instance of Express. This is for the sake of brevity. It would be better to use a string that matches your project name.

We will now use Express to rewrite the hello.js example. All static resources in the public directory can remain untouched. The only change is in the node app itself:

var express = require('express');
var path = require('path');
var app = express();

app.set('port', process.env.PORT || 3000);

var options = {
  dotfiles: 'ignore',
  extensions: ['htm', 'html'],
  index: false
};

app.use(express.static(path.join(__dirname, 'public'), options));

app.listen(app.get('port'), function () {
  console.log('Hello express started on http://localhost:' +
    app.get('port') + '; press Ctrl-C to terminate.');
});

This code uses so-called middleware (static) that is included with Express. There is a lot more available from third parties. Well, compared to our Node.js example, it is about the same number of lines. But it looks a lot cleaner, and it does more for us. You no longer need to explicitly include the HTTP module and other such things.

Templating and Express

We need to get back to templating now. Imagine all the JavaScript ecosystem we just described. Yes, we could still put our client JavaScript code in between the <script> tags, but what about the server JavaScript code? There is no such thing as <?javascript ?>!

Node.js and Express support several templating languages that allow you to separate layout and content, and have the template system do the work of fetching the content and injecting it into the HTML. The default templating processor for Express appears to be Jade, which uses its own, albeit more compact than HTML, language. Unfortunately, that would mean that you have to learn yet another syntax to produce something. We propose to use handlebars.js. There are two reasons why we have chosen handlebars.js:

It uses HTML as the language
It is available on both the client and server side

Getting the handlebars module for Express

Several Express modules exist for handlebars.
We happen to like the one with the surprising name express-handlebars. So, we install it as follows:

npm install express-handlebars

Layouts

I almost called this section "templating without templates", as our first example will not use a parameter inside the templates. Most websites consist of several pages, either static or dynamically generated ones. All these pages usually have common parts: a header and footer part, a navigation part or menu, and so on. This is the layout of our site. What distinguishes one page from another, usually, is some part in the body of the page, where the home page has different information than the other pages. With express-handlebars, you can separate layout and content.

We will start with a very simple example. Inside your project folder that contains public, create a folder, views, with a subdirectory layouts. Inside the layouts subfolder, create a file called main.handlebars. This is your default layout. Building on top of the previous example, have it say:

<!doctype html>
<html>
<head>
<title>Handlebars demo</title>
</head>
<link href="./styles/hello.css" rel="stylesheet">
<body>
{{{body}}}
</body>
</html>

Notice the {{{body}}} part. This token will be replaced by HTML. Handlebars escapes HTML. If we want our HTML to stay intact, we use {{{ }}} instead of {{ }}. body is a reserved word for handlebars.

Create, in the folder views, a file called hello.handlebars with the following content. This will be one (of many) examples of the HTML that {{{body}}} will be replaced by:

<h1>Hello, World</h1>

Let's create a few more: june.handlebars with:

<h1>Hello, June Lake</h1>

And bodie.handlebars containing:

<h1>Hello, Bodie</h1>

Our first handlebars example

Now, create a file, handlehello.js, in the project folder. For convenience, we will keep the relevant code of the previous Express example:

var express = require('express');
var path = require('path');
var app = express();
var exphbs = require('express-handlebars');

app.engine('handlebars', exphbs({defaultLayout: 'main'}));
app.set('view engine', 'handlebars');
app.set('port', process.env.PORT || 3000);

var options = {
  dotfiles: 'ignore',
  etag: false,
  extensions: ['htm', 'html'],
  index: false
};

app.use(express.static(path.join(__dirname, 'public'), options));

app.get('/', function(req, res) {
  res.render('hello');   // this is the important part
});

app.get('/bodie', function(req, res) {
  res.render('bodie');
});

app.get('/june', function(req, res) {
  res.render('june');
});

app.listen(app.get('port'), function () {
  console.log('Hello express started on http://localhost:' +
    app.get('port') + '; press Ctrl-C to terminate.');
});

Everything that worked before still works, but if you type http://localhost:3000/, you will see a page with the layout from main.handlebars and {{{body}}} replaced by, you guessed it, the same Hello World with basic markup that looks the same as our hello.html example.

Let's look at the new code. First, of course, we need to add a require statement for our express-handlebars module, giving us an instance of express-handlebars. The next two lines specify what the view engine is for this app and what extension is used for the templates and layouts. We pass one option to express-handlebars, defaultLayout, setting the default layout to be main. This way, we could have different versions of our app with different layouts, for example, one using Bootstrap and another using Foundation.
The res.render calls determine which views need to be rendered, so if you type http://localhost:3000/june, you will get Hello, June Lake, rather than Hello World. But this is not at all useful, as in this implementation, you still have a separate file for each Hello flavor. Let's create a true template instead.

Templates

In the views folder, create a file, town.handlebars, with the following content:

{{!-- Our first template with tokens --}}
<h1>Hello, {{town}} </h1>

Please note the comment line. This is the syntax for a handlebars comment. You could use HTML comments as well, of course, but the advantage of using handlebars comments is that they will not show up in the final output.

Next, add this to your JavaScript file:

app.get('/lee', function(req, res) {
  res.render('town', { town: "Lee Vining"});
});

Now, we have a template that we can use over and over again with a different context, in this example, a different town name. All you have to do is pass a different second argument to the res.render call, and {{town}} in the template will be replaced by the value of town in the object. In general, what is passed as the second argument is referred to as the context.

Helpers

The token can also be replaced by the output of a function. After all, this is JavaScript. In the context of handlebars, we call those helpers. You can write your own, or use some of the cool built-in ones, such as #if and #each.

#if/else

Let us update town.handlebars as follows:

{{#if town}}
<h1>Hello, {{town}} </h1>
{{else}}
<h1>Hello, World </h1>
{{/if}}

This should be self-explanatory. If the variable town has a value, use it; if not, show the world. Note that what comes after #if can only be something that is either true or false, zero or not. The helper does not support a construct such as #if x < y.

#each

A very useful built-in helper is #each, which allows you to walk through an array of things and generate HTML accordingly. This is an example of the code that could be inside your app and the template you could use in your view folder:

app.js code snippet:

var californiapeople = {
  people: [
    {"name":"Adams","first":"Ansel","profession":"photographer","born":"SanFrancisco"},
    {"name":"Muir","first":"John","profession":"naturalist","born":"Scotland"},
    {"name":"Schwarzenegger","first":"Arnold","profession":"governator","born":"Germany"},
    {"name":"Wellens","first":"Paul","profession":"author","born":"Belgium"}
  ]
};

app.get('/californiapeople', function(req, res) {
  res.render('californiapeople', californiapeople);
});

template (californiapeople.handlebars):

<table class="cooltable">
{{#each people}}
   <tr><td>{{first}}</td><td>{{name}}</td>
   <td>{{profession}}</td></tr>
{{/each}}
</table>

Now we are well on our way to doing some true templating. You can also write your own helpers, which is beyond the scope of an introductory article.
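
To give a quick taste anyway (this sketch is ours, not from the book, and the shout helper name is invented), express-handlebars accepts a helpers object when the engine is configured, building on the handlehello.js example:

// A sketch of registering a custom helper with express-handlebars.
app.engine('handlebars', exphbs({
  defaultLayout: 'main',
  helpers: {
    // Usable in a template as {{shout town}}.
    shout: function (text) {
      return String(text).toUpperCase();
    }
  }
}));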
However, before we leave you, there is one cool feature of handlebars you need to know about: partials.

Partials

In web development, where you dynamically generate HTML to be part of a web page, it is often the case that you repetitively need to do the same thing, albeit on a different page. There is a cool feature in express-handlebars that allows you to do that very same thing: partials. Partials are templates you can refer to inside a template, using a special syntax, drastically shortening your code that way. The partials are stored in a separate folder. By default, that would be views/partials, but you can even use subfolders.

Let's redo the previous example but with a partial. So, our template is going to be extremely petite:

{{!-- people.handlebars inside views --}}
{{> peoplepartial }}

Notice the > sign; this is what indicates a partial. Now, here is the familiar-looking partial template:

{{!-- peoplepartial.handlebars inside views/partials --}}
<h1>Famous California people </h1>
<table>
{{#each people}}
<tr><td>{{first}}</td><td>{{name}}</td>
<td>{{profession}}</td></tr>
{{/each}}
</table>

And, following is the JavaScript code that triggers it:

app.get('/people', function(req, res) {
  res.render('people', californiapeople);
});

So, we give it the same context, but the view that is rendered is ridiculously simplistic, as there is a partial underneath that will take care of everything. Of course, these were all examples to demonstrate how handlebars and Express can work together really well, nothing more than that.

Summary

In this article, we talked about using templates in web development. Then, we zoomed in on using Node.js and Express, and introduced Handlebars.js. Handlebars.js is cool, as it lets you separate logic from layout, and you can use it server-side (which is where we focused) as well as client-side. Moreover, you will still be able to use HTML for your views and layouts, unlike with other templating processors.

For those of you new to Node.js, I compared it to what Le Sacre du Printemps was to music. To all of you, I recommend the recording by the Los Angeles Philharmonic and Esa-Pekka Salonen. I had season tickets for this guy and went to his inaugural concert with Mahler's third symphony. PHP had not been written yet, but this particular performance I had heard on the radio while on the road in California, and it was magnificent. Check it out. And also check out Express and handlebars.

Resources for Article:

Let's Build with AngularJS and Bootstrap
The Bootstrap grid system
MODx Web Development: Creating Lists

Creating test suites, specs and expectations in Jest

Packt
12 Aug 2015
7 min read

In this article by Artemij Fedosejev, the author of React.js Essentials, we will take a look at test suites, specs, and expectations. To write a test for JavaScript functions, you need a testing framework. Fortunately, Facebook built their own unit test framework for JavaScript called Jest. It is built on top of Jasmine, another well-known JavaScript test framework. If you're familiar with Jasmine you'll find Jest's approach to testing very similar. However, I'll make no assumptions about your prior experience with testing frameworks and discuss the basics first.

The fundamental idea of unit testing is that you test only one piece of functionality in your application, usually implemented by one function. And you test it in isolation, meaning that all other parts of your application which that function depends on are not used by your tests. Instead, they are imitated by your tests. To imitate a JavaScript object is to create a fake one that simulates the behavior of the real object. In unit testing the fake object is called a mock, and the process of creating it is called mocking.

Jest automatically mocks dependencies when you're running your tests. Better yet, it automatically finds tests to execute in your repository. Let's take a look at an example.

Create a directory called ./snapterest/source/js/utils/ and create a new file called TweetUtils.js within it, with the following contents:

function getListOfTweetIds(tweets) {
  return Object.keys(tweets);
}

module.exports.getListOfTweetIds = getListOfTweetIds;

The TweetUtils.js file is a module with the getListOfTweetIds() utility function for our application to use. Given an object with tweets, getListOfTweetIds() returns an array of tweet IDs. Using the CommonJS module pattern, we export this function:

module.exports.getListOfTweetIds = getListOfTweetIds;

Jest Unit Testing

Now let's write our first unit test with Jest. We'll be testing our getListOfTweetIds() function. Create a new directory: ./snapterest/source/js/utils/__tests__/. Jest will run any tests in any __tests__ directories that it finds within your project structure, so it's important to name your test directories __tests__. Create a TweetUtils-test.js file inside of __tests__:

jest.dontMock('../TweetUtils');

describe('Tweet utilities module', function () {
  it('returns an array of tweet ids', function () {
    var TweetUtils = require('../TweetUtils');
    var tweetsMock = {
      tweet1: {},
      tweet2: {},
      tweet3: {}
    };
    var expectedListOfTweetIds = ['tweet1', 'tweet2', 'tweet3'];
    var actualListOfTweetIds = TweetUtils.getListOfTweetIds(tweetsMock);
    expect(actualListOfTweetIds).toEqual(expectedListOfTweetIds);
  });
});

First we tell Jest not to mock our TweetUtils module:

jest.dontMock('../TweetUtils');

We do this because Jest will automatically mock modules returned by the require() function. In our test we're requiring the TweetUtils module:

var TweetUtils = require('../TweetUtils');

Without the jest.dontMock('../TweetUtils') call, Jest would return an imitation of our TweetUtils module, instead of the real one. But in this case we actually need the real TweetUtils module, because that's what we're testing.

Creating test suites

Next we call a global Jest function, describe(). In our TweetUtils-test.js file we're not just creating a single test; instead, we're creating a suite of tests. A suite is a collection of tests that collectively test a bigger unit of functionality.
For example, a suite can have multiple tests which test all the individual parts of a larger module. In our example, we have a TweetUtils module with a number of utility functions. In that situation we would create a suite for the TweetUtils module and then create tests for each individual utility function, like getListOfTweetIds().

describe defines a suite and takes two parameters:

Suite name: the description of what is being tested: 'Tweet utilities module'.
Suite implementation: the function that implements this suite.

In our example, the suite is:

describe('Tweet utilities module', function () {
  // Suite implementation goes here...
});

Defining specs

How do you create an individual test? In Jest, individual tests are called specs. They are defined by calling another global Jest function, it(). Just like describe(), it() takes two parameters:

Spec name: the title that describes what is being tested by this spec: 'returns an array of tweet ids'.
Spec implementation: the function that implements this spec.

In our example, the spec is:

it('returns an array of tweet ids', function () {
  // Spec implementation goes here...
});

Let's take a closer look at the implementation of our spec:

var TweetUtils = require('../TweetUtils');
var tweetsMock = {
  tweet1: {},
  tweet2: {},
  tweet3: {}
};
var expectedListOfTweetIds = ['tweet1', 'tweet2', 'tweet3'];
var actualListOfTweetIds = TweetUtils.getListOfTweetIds(tweetsMock);
expect(actualListOfTweetIds).toEqual(expectedListOfTweetIds);

This spec tests whether the getListOfTweetIds() method of our TweetUtils module returns an array of tweet IDs when given an object with tweets.

First we import the TweetUtils module:

var TweetUtils = require('../TweetUtils');

Then we create a mock object that simulates the real tweets object:

var tweetsMock = {
  tweet1: {},
  tweet2: {},
  tweet3: {}
};

The only requirement for this mock object is to have tweet IDs as object keys. The values are not important, hence we choose empty objects. Key names are not important either, so we can name them tweet1, tweet2 and tweet3. This mock object doesn't fully simulate the real tweet object. Its sole purpose is to simulate the fact that its keys are tweet IDs.

The next step is to create an expected list of tweet IDs:

var expectedListOfTweetIds = ['tweet1', 'tweet2', 'tweet3'];

We know what tweet IDs to expect because we've mocked a tweets object with the same IDs.

The next step is to extract the actual tweet IDs from our mocked tweets object. For that we use getListOfTweetIds(), which takes the tweets object and returns an array of tweet IDs:

var actualListOfTweetIds = TweetUtils.getListOfTweetIds(tweetsMock);

We pass tweetsMock to that method and store the result in actualListOfTweetIds. The reason this variable is named actualListOfTweetIds is because this list of tweet IDs is produced by the actual getListOfTweetIds() function that we're testing.

Setting Expectations

The final step will introduce us to an important new concept:

expect(actualListOfTweetIds).toEqual(expectedListOfTweetIds);

Let's think about the process of testing. We need to take an actual value produced by the method that we're testing, getListOfTweetIds(), and match it to the expected value that we know in advance. The result of that match will determine if our test has passed or failed.
The reason why we can guess what getListOfTweetIds() will return in advance is because we've prepared the input for it; that's our mock object:

var tweetsMock = {
  tweet1: {},
  tweet2: {},
  tweet3: {}
};

So we can expect the following output from calling TweetUtils.getListOfTweetIds(tweetsMock):

['tweet1', 'tweet2', 'tweet3']

But because something can go wrong inside of getListOfTweetIds(), we cannot guarantee this result; we can only expect it. That's why we need to create an expectation. In Jest, an expectation is built using expect(), which takes an actual value, for example actualListOfTweetIds:

expect(actualListOfTweetIds)

Then we chain it with a matcher function that compares the actual value with the expected value and tells Jest whether the expectation was met:

expect(actualListOfTweetIds).toEqual(expectedListOfTweetIds);

In our example we use the toEqual() matcher function to compare two arrays. A list of all built-in matcher functions is available in the Jest documentation.
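
As a side note of our own: the choice of toEqual() matters here, because toBe() checks reference identity while toEqual() compares structure, and two separately built arrays are never the same reference:

var expected = ['tweet1', 'tweet2', 'tweet3'];
var actual = ['tweet1', 'tweet2', 'tweet3'];

expect(actual).toEqual(expected); // passes: same structure
expect(actual).toBe(actual);      // passes: same reference
expect(actual).toBe(expected);    // fails: two distinct array objects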
And that's how you create a spec. A spec contains one or more expectations. Each expectation tests the state of your code. A spec can be either a passing spec or a failing spec. A spec is a passing spec only when all expectations are met; otherwise it's a failing spec.

Well done, you've written your first testing suite with a single spec that has one expectation. Continue reading React.js Essentials to continue your journey into testing.

Storing Passwords Using FreeRADIUS Authentication

Packt
08 Sep 2011
6 min read

In the previous article we covered the authentication methods used when working with FreeRADIUS. This article by Dirk van der Walt, author of FreeRADIUS Beginner's Guide, teaches methods for storing passwords and how they work. Passwords do not need to be stored in clear text, and it is better to store them in a hashed format. There are, however, limitations to the kind of authentication protocols that can be used when the passwords are stored as a hash, which we will explore in this article.

(For more resources on this subject, see here.)

Storing passwords

Username and password combinations have to be stored somewhere. The following list mentions some of the popular places:

Text files: You should be familiar with this method by now.
SQL databases: FreeRADIUS includes modules to interact with SQL databases. MySQL is very popular and widely used with FreeRADIUS.
Directories: Microsoft's Active Directory or Novell's e-Directory are typical enterprise-size directories. OpenLDAP is a popular open source alternative.

The users file and the SQL database that can be used by FreeRADIUS store the username and password as AVPs. When the value of this AVP is in clear text, it can be dangerous if the wrong person gets hold of it. Let's see how this risk can be minimized.

Hash formats

To reduce this risk, we can store the passwords in a hashed format. A hashed format of a password is like a digital fingerprint of that password's text value. There are many different ways to calculate this hash, for example MD5 or SHA1. The end result of a hash should be a one-way, fixed-length encrypted string that uniquely represents the password. It should be impossible to retrieve the original password out of the hash.

To make the hash even more secure and more immune to dictionary attacks we can add a salt to the function that generates the hash. A salt is randomly generated bits to be used in combination with the password as input to the one-way hash function. With FreeRADIUS we store the salt along with the hash. It is therefore essential to have a random salt with each hash to make a rainbow table attack difficult.

The pap module, which is used for PAP authentication, can use passwords stored in a number of hash formats to authenticate users, including the crypt, MD5, and salted MD5 (SMD5) formats demonstrated below. Both the MD5 and SHA1 hash functions can be used with a salt to make them more secure.

Time for action – hashing our password

We will replace the Cleartext-Password AVP in the users file with a more secure hashed password AVP in this section. There seems to be general confusion about how the hashed password should be created and presented. We will help you clarify this issue in order to produce working hashes for each format. A valuable URL to assist us with the hashes is the OpenLDAP FAQ:

http://www.openldap.org/faq/data/cache/419.html

There are a few sections that show how to create different types of password hashes. We can adapt this for our own use in FreeRADIUS.

Crypt-Password

Crypt password hashes have their origins in Unix computing. Stronger hashing methods are preferred over crypt, although crypt is still widely used. The following Perl one-liner will produce a crypt password for passme with the salt value of 'salt':

#> perl -e 'print(crypt("passme","salt")."\n");'

Use this output and change Alice's check entry in the users file from:

"alice" Cleartext-Password := "passme"

to:

"alice" Crypt-Password := "sa85/iGj2UWlA"

Restart the FreeRADIUS server in debug mode. Run the authentication request against it again.
Ensure that pap now uses the crypt password by looking for the following line in the FreeRADIUS debug feedback:

[pap] Using CRYPT password "sa85/iGj2UWlA"

MD5-Password

The MD5 hash is often used to check the integrity of a file. When downloading a Linux ISO image you are also typically supplied with the MD5 sum of the file. You can then confirm the integrity of the file by using the md5sum command. We can also generate an MD5 hash from a password. We will use Perl to generate and encode the MD5 hash in the correct format required by the pap module. The creation of this password hash involves external Perl modules, which you may have to install before the script can be used. The following steps will show you how:

Create a Perl script with the following contents; we'll name it 4088_04_md5.pl:

#!/usr/bin/perl -w
use strict;
use Digest::MD5;
use MIME::Base64;
unless ($ARGV[0]) {
    print "Please supply a password to create a MD5 hash from.\n";
    exit;
}
my $ctx = Digest::MD5->new;
$ctx->add($ARGV[0]);
print encode_base64($ctx->digest, '') . "\n";

Make the 4088_04_md5.pl file executable:

chmod 755 4088_04_md5.pl

Get the MD5 password for passme:

./4088_04_md5.pl passme

Use this output and update Alice's entry in the users file to:

"alice" MD5-Password := "ugGBYPwm4MwukpuOBx8FLQ=="

Restart the FreeRADIUS server in debug mode. Run the authentication request against it again. Ensure that pap now uses the MD5 password by looking for the following line in the FreeRADIUS debug feedback:

[pap] Using MD5 encryption.

SMD5-Password

This is an MD5 password with salt. The creation of this password hash also involves external Perl modules, which you may have to install before the script can be used.

Create a Perl script with the following contents; we'll name it 4088_04_smd5.pl:

#!/usr/bin/perl -w
use strict;
use Digest::MD5;
use MIME::Base64;
unless (($ARGV[0]) && ($ARGV[1])) {
    print "Please supply a password and salt to create a salted MD5 hash from.\n";
    exit;
}
my $ctx = Digest::MD5->new;
$ctx->add($ARGV[0]);
my $salt = $ARGV[1];
$ctx->add($salt);
print encode_base64($ctx->digest . $salt, '') . "\n";

Make the 4088_04_smd5.pl file executable:

chmod 755 4088_04_smd5.pl

Get the SMD5 value for passme using a salt value of 'salt':

./4088_04_smd5.pl passme salt

Remember that you should use a random value for the salt. We only used 'salt' here for the demonstration. Use this output and update Alice's entry in the users file to:

"alice" SMD5-Password := "Vr6uPTrGykq4yKig67v5kHNhbHQ="

Restart the FreeRADIUS server in debug mode. Run the authentication request against it again. Ensure that pap now uses the SMD5 password by looking for the following line in the FreeRADIUS debug feedback:

[pap] Using SMD5 encryption.
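If Perl is not your tool of choice, the same salted-MD5 construction is easy to reproduce elsewhere. The following Node.js sketch is purely illustrative — it is not part of the FreeRADIUS toolchain, and the file name smd5.js is our own — but it follows the same recipe as the Perl script above (MD5 digest of password plus salt, with the salt appended before base64 encoding):

// smd5.js - illustrative Node.js equivalent of the 4088_04_smd5.pl script above.
// Usage: node smd5.js passme salt
const crypto = require('crypto');

function smd5(password, salt) {
  // Hash the password concatenated with the salt.
  const digest = crypto.createHash('md5')
    .update(password)
    .update(salt)
    .digest();
  // Append the salt to the digest and base64-encode the result,
  // so the salt is stored along with the hash, as FreeRADIUS expects.
  return Buffer.concat([digest, Buffer.from(salt)]).toString('base64');
}

console.log(smd5(process.argv[2], process.argv[3]));

Running node smd5.js passme salt should print Vr6uPTrGykq4yKig67v5kHNhbHQ=, matching the value we placed in the users file.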


How To Get Started with Redux in React Native

Emilio Rodriguez
04 Apr 2016
5 min read
In mobile development there is a need for architectural frameworks, but complex frameworks designed for web environments can end up hurting the development process or even the performance of our app. Because of this, some time ago I decided to introduce into all of my React Native projects the leanest framework I have ever worked with: Redux.

Redux is basically a state container for JavaScript apps. It is 100 percent library-agnostic, so you can use it with React, Backbone, or any other view library. Moreover, it is really small and has no dependencies, which makes it an awesome tool for React Native projects.

Step 1: Install Redux in your React Native project.

Redux can be added as an npm dependency to your project. Just navigate to your project's main folder and type:

npm install --save react-redux

At the time this article was written, React Native still depended on React Redux 3.1.0, since later versions depend on React 0.14, which is not fully compatible with React Native. Because of this, you will need to force version 3.1.0 as the one your project depends on.

Step 2: Set up a Redux-friendly folder structure.

Setting up the folder structure for your project is of course up to each developer, but you need to take into account that you will be maintaining a number of actions, reducers, and components. It's also useful to keep separate folders for your API and utility functions so these don't mix with your app's core functionality. With this in mind, my preferred structure under the src folder in any React Native project is a folder each for actions, reducers, components, api, and utils.

Step 3: Create your first action.

In this article we will implement a simple login feature to illustrate how to integrate Redux inside React Native. A good place to start is the action: a basic function called from the component whenever we want the whole state of the app to change (for example, moving from the logged-out state to the logged-in state). To keep this example as concise as possible we won't make any API calls to a backend — only the pure Redux integration will be explained.

Our action creator is a simple function returning an object (the action itself) with a type attribute expressing what happened in the app. No business logic should be placed here; action creators should be plain and descriptive.

Step 4: Create your first reducer.

Reducers are in charge of updating the state of the app. Unlike Flux, Redux has only one store for the whole app, but it will be conveniently name-spaced automatically by Redux once the reducers have been applied.

In our example, the user reducer needs to know when the user is logged in. Because of that, it imports the LOGIN_SUCCESS constant we defined in our actions and exports a default function, which Redux will call every time an action occurs in the app. Redux automatically passes in the current state of the app and the action that occurred. It is up to the reducer to decide whether it needs to modify the state based on the action.type. That's why a reducer is almost always a function containing a switch statement, which modifies and returns the state based on what action occurred. Both pieces are sketched below.

It's important to note that Redux compares object references to identify when the state has changed. Because of this, the state should be cloned before any modification.
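The original article showed these two files as screenshots; here is a minimal sketch of what they might look like. The LOGIN_SUCCESS constant and the login/userReducers names come from the surrounding text, but the file paths and exact code are our reconstruction, not the author's original:

// src/actions/index.js - a minimal action type and action creator.
export const LOGIN_SUCCESS = 'LOGIN_SUCCESS';

// The action creator stays plain and descriptive: no business logic,
// just an object with a type attribute describing what happened.
export function login() {
  return { type: LOGIN_SUCCESS };
}

// src/reducers/index.js - the user reducer described in Step 4.
import { LOGIN_SUCCESS } from '../actions';

const initialState = { loggedIn: false };

export default function userReducers(state = initialState, action) {
  switch (action.type) {
    case LOGIN_SUCCESS:
      // Clone the state rather than mutating it: Redux relies on
      // object references to detect that the state has changed.
      return Object.assign({}, state, { loggedIn: true });
    default:
      return state;
  }
}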
It's also worth knowing that the action passed to the reducers can contain attributes other than type. For example, in a more complex login, the user's first name and last name can be added to the action by the action creator and used by the reducer to update the state of the app.

Step 5: Create your component.

This step is almost pure React Native coding. We need a component to trigger the action and to respond to the change of state in the app. In our case it will be a simple View containing a button that disappears when logged in. This is a normal React Native component except for some pieces of Redux boilerplate (a sketch of the full component appears at the end of this article):

The three import lines at the top require everything we need from Redux.

mapStateToProps and mapDispatchToProps are two functions bound to the component with connect: this tells Redux that this component needs to be passed a piece of the state (everything under userReducers) and all the actions available in the app. Just by doing this, we have access to the login action (as used in onLoginButtonPress) and to the state of the app (as used in the !this.props.user.loggedIn statement).

Step 6: Glue it all together from your index.ios.js.

For Redux to apply its magic, some initialization must be done in the main file of your React Native project (index.ios.js). This is pure boilerplate and only done once: Redux needs to inject a store holding the app state into the app. To do so, it requires a Provider wrapping the whole app. This store is basically a combination of reducers. For this article we only need one reducer, but a full app will include many others, and each of them should be passed into the combineReducers function to be taken into account by Redux whenever an action is triggered. This wiring is also sketched at the end of the article.

About the Author

Emilio Rodriguez started working as a software engineer for Sun Microsystems in 2006. Since then, he has focused his efforts on building a number of mobile apps with React Native while contributing to the React Native project. These contributions helped him understand how deep and powerful this framework is.
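As promised above, here are sketches of the component and the store wiring from Steps 5 and 6. The originals were shown as screenshots, so the file paths, component name, and app name are our assumptions; only the connect boilerplate, the userReducers key, and the login action come from the text (the redux package itself is also assumed to be installed alongside react-redux):

// src/components/LoginView.js - a View with a button that disappears once logged in.
import React, { Component } from 'react';
import { View, Text, TouchableOpacity } from 'react-native';
import { connect } from 'react-redux';
import * as actions from '../actions';

class LoginView extends Component {
  onLoginButtonPress() {
    this.props.login();
  }

  render() {
    return (
      <View>
        {!this.props.user.loggedIn &&
          <TouchableOpacity onPress={() => this.onLoginButtonPress()}>
            <Text>Log in</Text>
          </TouchableOpacity>}
      </View>
    );
  }
}

// Pass this component the state under 'userReducers' and the available actions.
const mapStateToProps = (state) => ({ user: state.userReducers });
const mapDispatchToProps = { login: actions.login };

export default connect(mapStateToProps, mapDispatchToProps)(LoginView);

// index.ios.js - inject the store (a combination of reducers) via a Provider.
import React, { Component } from 'react';
import { AppRegistry } from 'react-native';
import { createStore, combineReducers } from 'redux';
import { Provider } from 'react-redux';
import userReducers from './src/reducers';
import LoginView from './src/components/LoginView';

const store = createStore(combineReducers({ userReducers }));

class App extends Component {
  render() {
    return (
      <Provider store={store}>
        <LoginView />
      </Provider>
    );
  }
}

AppRegistry.registerComponent('ReduxExample', () => App);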


Building a Web Service with Laravel 5

Kunal Chaudhari
24 Apr 2018
15 min read
A web service is an application that runs on a server and allows a client (such as a browser) to remotely write/retrieve data to/from the server over HTTP. In this article we will cover the following topics:

Using Laravel to create a web service
Writing database migrations and seed files
Creating API endpoints to make data publicly accessible
Serving images from Laravel

The interface of a web service is one or more API endpoints, sometimes protected with authentication, that return data in an XML or JSON payload. Web services are a speciality of Laravel, so it won't be hard to create one for Vuebnb. We'll use routes for our API endpoints and represent the listings with Eloquent models that Laravel will seamlessly synchronize with the database. Laravel also has built-in features to support API architectures such as REST, though we won't need those for our simple use case.

Mock data

The mock listing data is in the file database/data.json. This file includes a JSON-encoded array of 30 objects, with each object representing a different listing. Having built the listing page prototype, you'll no doubt recognize many of the same properties on these objects, including the title, address, and description.

database/data.json:

[
  {
    "id": 1,
    "title": "Central Downtown Apartment with Amenities",
    "address": "...",
    "about": "...",
    "amenity_wifi": true,
    "amenity_pets_allowed": true,
    "amenity_tv": true,
    "amenity_kitchen": true,
    "amenity_breakfast": true,
    "amenity_laptop": true,
    "price_per_night": "$89",
    "price_extra_people": "No charge",
    "price_weekly_discount": "18%",
    "price_monthly_discount": "50%"
  },
  {
    "id": 2,
    ...
  },
  ...
]

Each mock listing also includes several images of the room. Images aren't really part of a web service, but they will be stored in a public folder in our app and served as needed.

Database

Our web service requires a database table for storing the mock listing data. To set this up we'll need to create a schema and migration. We'll then create a seeder that will load and parse our mock data file and insert it into the database, ready for use in the app.

Migration

A migration is a special class that contains a set of actions to run against the database, such as creating or modifying a database table. Migrations ensure your database gets set up identically every time you create a new instance of your app, for example, when installing in production or on a teammate's machine. To create a new migration, use the make:migration Artisan CLI command. The argument of the command should be a snake-cased description of what the migration will do:

$ php artisan make:migration create_listings_table

You'll now see your new migration in the database/migrations directory. You'll notice the filename has a timestamp prefix, such as 2017_06_20_133317_create_listings_table.php. The timestamp allows Laravel to determine the proper order of the migrations, in case it needs to run more than one at a time.
Your new migration declares a class that extends Migration. It overrides two methods: up, which is used to add new tables, columns, or indexes to your database; and down, which is used to delete them. We'll implement these methods shortly.

2017_06_20_133317_create_listings_table.php:

<?php

use Illuminate\Support\Facades\Schema;
use Illuminate\Database\Schema\Blueprint;
use Illuminate\Database\Migrations\Migration;

class CreateListingsTable extends Migration
{
    public function up()
    {
        //
    }

    public function down()
    {
        //
    }
}

Schema

A schema is a blueprint for the structure of a database. For a relational database such as MySQL, the schema organizes data into tables and columns. In Laravel, schemas are declared with the Schema facade's create method.

We'll now make a schema for a table to hold Vuebnb listings. The columns of the table will match the structure of our mock listing data. Note that we set a default false value for the amenities and allow the prices to have a NULL value; all other columns require a value. The schema goes inside our migration's up method, and we fill out down with a call to Schema::drop.

2017_06_20_133317_create_listings_table.php:

public function up()
{
    Schema::create('listings', function (Blueprint $table) {
        $table->primary('id');
        $table->unsignedInteger('id');
        $table->string('title');
        $table->string('address');
        $table->longText('about');

        // Amenities
        $table->boolean('amenity_wifi')->default(false);
        $table->boolean('amenity_pets_allowed')->default(false);
        $table->boolean('amenity_tv')->default(false);
        $table->boolean('amenity_kitchen')->default(false);
        $table->boolean('amenity_breakfast')->default(false);
        $table->boolean('amenity_laptop')->default(false);

        // Prices
        $table->string('price_per_night')->nullable();
        $table->string('price_extra_people')->nullable();
        $table->string('price_weekly_discount')->nullable();
        $table->string('price_monthly_discount')->nullable();
    });
}

public function down()
{
    Schema::drop('listings');
}

A facade is an object-oriented design pattern for creating a static proxy to an underlying class in the service container. The facade is not meant to provide any new functionality; its only purpose is to provide a more memorable and easily readable way of performing a common action. Think of it as an object-oriented helper function.

Execution

Now that we've set up our new migration, let's run it with this Artisan command:

$ php artisan migrate

You should see output like this in the Terminal:

Migrating: 2017_06_20_133317_create_listings_table
Migrated:  2017_06_20_133317_create_listings_table

To confirm the migration worked, let's use Tinker to show the new table structure. If you've never used Tinker, it's a REPL tool that allows you to interact with a Laravel app on the command line. When you enter a command into Tinker it is evaluated as if it were a line in your app code.

First, open the Tinker shell:

$ php artisan tinker

Now enter a PHP statement for evaluation. Let's use the DB facade's select method to run an SQL DESCRIBE query to show the table structure:

>>> DB::select('DESCRIBE listings;');

The output is quite verbose so I won't reproduce it here, but you should see an object with all your table details, confirming the migration worked.

Seeding mock listings

Now that we have a database table for our listings, let's seed it with the mock data.
To do so we'll need to do the following:

Load the database/data.json file
Parse the file
Insert the data into the listings table

Creating a seeder

Laravel includes a seeder class that we can extend, called Seeder. Use this Artisan command to implement it:

$ php artisan make:seeder ListingsTableSeeder

When we run the seeder, any code in the run method is executed.

database/seeds/ListingsTableSeeder.php:

<?php

use Illuminate\Database\Seeder;

class ListingsTableSeeder extends Seeder
{
    public function run()
    {
        //
    }
}

Loading the mock data

Laravel provides a File facade that allows us to open files from disk as simply as File::get($path). To get the full path to our mock data file we can use the base_path() helper function, which returns the path to the root of our application directory as a string. It's then trivial to convert this JSON file to a PHP array using the built-in json_decode function. Once the data is an array, it can be directly inserted into the database, given that the column names of the table are the same as the array keys.

database/seeds/ListingsTableSeeder.php:

public function run()
{
    $path = base_path() . '/database/data.json';
    $file = File::get($path);
    $data = json_decode($file, true);
}

Inserting the data

In order to insert the data, we'll use the DB facade again. This time we'll call the table method, which returns an instance of Builder. The Builder class is a fluent query builder that allows us to query the database by chaining constraints, for example, DB::table(...)->where(...)->join(...) and so on. Let's use the insert method of the builder, which accepts an array of column names and values.

database/seeds/ListingsTableSeeder.php:

public function run()
{
    $path = base_path() . '/database/data.json';
    $file = File::get($path);
    $data = json_decode($file, true);
    DB::table('listings')->insert($data);
}

Executing the seeder

To execute the seeder we must call it from the DatabaseSeeder.php file, which is in the same directory.

database/seeds/DatabaseSeeder.php:

<?php

use Illuminate\Database\Seeder;

class DatabaseSeeder extends Seeder
{
    public function run()
    {
        $this->call(ListingsTableSeeder::class);
    }
}

With that done, we can use the Artisan CLI to execute the seeder:

$ php artisan db:seed

You should see the following output in your Terminal:

Seeding: ListingsTableSeeder

We'll again use Tinker to check our work. There are 30 listings in the mock data, so to confirm the seed was successful, let's check for 30 rows in the database:

$ php artisan tinker
>>> DB::table('listings')->count();
# Output: 30

Finally, let's inspect the first row of the table just to be sure its content is what we expect:

>>> DB::table('listings')->get()->first();

Here is the output:

=> {#732
     +"id": 1,
     +"title": "Central Downtown Apartment with Amenities",
     +"address": "No. 11, Song-Sho Road, Taipei City, Taiwan 105",
     +"about": "...",
     +"amenity_wifi": 1,
     +"amenity_pets_allowed": 1,
     +"amenity_tv": 1,
     +"amenity_kitchen": 1,
     +"amenity_breakfast": 1,
     +"amenity_laptop": 1,
     +"price_per_night": "$89",
     +"price_extra_people": "No charge",
     +"price_weekly_discount": "18%",
     +"price_monthly_discount": "50%"
   }

If yours looks like that, you're ready to move on!

Listing model

We've now successfully created a database table for our listings and seeded it with mock listing data. How do we access this data from the Laravel app? We saw how the DB facade lets us execute queries on our database directly, but Laravel provides a more powerful way to access data: the Eloquent ORM.
Eloquent ORM

Object-Relational Mapping (ORM) is a technique for converting data between incompatible systems in object-oriented programming languages. Relational databases such as MySQL can only store scalar values, such as integers and strings, organized within tables. We want to make use of rich objects in our app, though, so we need a means of robust conversion.

Eloquent is the ORM implementation used in Laravel. It uses the active record design pattern, where a model is tied to a single database table, and an instance of the model is tied to a single row.

To create a model in Laravel using Eloquent ORM, simply extend the Illuminate\Database\Eloquent\Model class using Artisan:

$ php artisan make:model Listing

This generates a new file.

app/Listing.php:

<?php

namespace App;

use Illuminate\Database\Eloquent\Model;

class Listing extends Model
{
    //
}

How do we tell the ORM what table to map to, and what columns to include? By default, the Model class uses the class name (Listing), in lowercase (listing), as the table name, and it uses all the fields from the table. Now, any time we want to load our listings we can use code such as this, anywhere in our app:

<?php

// Load all listings
$listings = \App\Listing::all();

// Iterate listings, echo the address
foreach ($listings as $listing) {
    echo $listing->address . "\n";
}

/*
 * Output:
 *
 * No. 11, Song-Sho Road, Taipei City, Taiwan 105
 * 110, Taiwan, Taipei City, Xinyi District, Section 5, Xinyi Road, 7
 * No. 51, Hanzhong Street, Wanhua District, Taipei City, Taiwan 108
 * ...
 */

Casting

The data types in a MySQL database don't completely match up to those in PHP. For example, how does an ORM know if a database value of 0 is meant to be the number 0 or the Boolean value false? An Eloquent model can be given a $casts property to declare the data type of any specific attribute. $casts is an array of key/value pairs, where the key is the name of the attribute being cast and the value is the data type we want to cast to. For the listings table, we will cast the amenities attributes as Booleans.

app/Listing.php:

<?php

namespace App;

use Illuminate\Database\Eloquent\Model;

class Listing extends Model
{
    protected $casts = [
        'amenity_wifi' => 'boolean',
        'amenity_pets_allowed' => 'boolean',
        'amenity_tv' => 'boolean',
        'amenity_kitchen' => 'boolean',
        'amenity_breakfast' => 'boolean',
        'amenity_laptop' => 'boolean'
    ];
}

Now these attributes will have the correct type, making our model more robust:

echo gettype($listing->amenity_wifi); // boolean

Public interface

The final piece of our web service is the public interface that will allow a client app to request the listing data. Since the Vuebnb listing page is designed to display one listing at a time, we'll at least need an endpoint to retrieve a single listing.

Let's now create a route that will match any incoming GET request to the URI /api/listing/{listing}, where {listing} is an ID. We'll put this in the routes/api.php file, where routes are automatically given the /api/ prefix and have middleware optimized for use in a web service by default. We'll use a closure function to handle the route. The function will have a $listing argument, which we type hint as an instance of the Listing class, that is, our model. Laravel's service container will resolve this as an instance with the ID matching {listing}. We can then encode the model as JSON and return it as a response.
routes/api.php:

<?php

use App\Listing;

Route::get('listing/{listing}', function (Listing $listing) {
    return $listing->toJson();
});

We can test this works by using the curl command from the Terminal:

$ curl http://vuebnb.test/api/listing/1

The response will be the listing with ID 1 as a JSON payload.

Controller

We'll be adding more routes to retrieve the listing data as the project progresses. It's a best practice to use a controller class for this functionality to keep a separation of concerns. Let's create one with the Artisan CLI:

$ php artisan make:controller ListingController

We'll then move the functionality from the route into a new method, get_listing_api.

app/Http/Controllers/ListingController.php:

<?php

namespace App\Http\Controllers;

use Illuminate\Http\Request;
use App\Listing;

class ListingController extends Controller
{
    public function get_listing_api(Listing $listing)
    {
        return $listing->toJson();
    }
}

For the Route::get method we can pass a string as the second argument instead of a closure function. The string should be in the form [controller]@[method], for example, ListingController@get_listing_web. Laravel will correctly resolve this at runtime.

routes/api.php:

<?php

Route::get('/listing/{listing}', 'ListingController@get_listing_api');

Images

As stated at the beginning of the article, each mock listing comes with several images of the room. These images are not in the project code and must be copied from a parallel directory in the code base called images. Copy the contents of this directory into the public/images folder:

$ cp -a ../images/. ./public/images

Once you've copied these files, public/images will have 30 sub-folders, one for each mock listing. Each of these folders contains exactly four main images and a thumbnail image.

Accessing images

Files in the public directory can be directly requested by appending their relative path to the site URL. For example, the default CSS file, public/css/app.css, can be requested at http://vuebnb.test/css/app.css. The advantage of using the public folder, and the reason we've put our images there, is to avoid having to create any logic for accessing them. A frontend app can then directly reference the images in an img tag. You may think it's inefficient for our web server to serve images like this, and you'd be right. You can test that it works by opening one of the mock listing images in your browser: http://vuebnb.test/images/1/Image_1.jpg.

Image links

The payload for each listing in the web service should include links to these new images so a client app knows where to find them. Let's add the image paths to our listing API payload so it looks like this:

{
  "id": 1,
  "title": "...",
  "description": "...",
  ...
  "image_1": "http://vuebnb.test/images/1/Image_1.jpg",
  "image_2": "http://vuebnb.test/images/1/Image_2.jpg",
  "image_3": "http://vuebnb.test/images/1/Image_3.jpg",
  "image_4": "http://vuebnb.test/images/1/Image_4.jpg"
}

To implement this, we'll use our model's toArray method to make an array representation of the model. We can then easily add new fields. Each mock listing has exactly four images, numbered 1 to 4, so we can use a for loop and the asset helper to generate fully-qualified URLs to files in the public folder. We finish by creating an instance of the Response class: we call the response helper, use its json method, and pass in our array of fields, returning the result.
app/Http/Controllers/ListingController.php:

public function get_listing_api(Listing $listing)
{
    $model = $listing->toArray();
    for ($i = 1; $i <= 4; $i++) {
        $model['image_' . $i] = asset(
            'images/' . $listing->id . '/Image_' . $i . '.jpg'
        );
    }
    return response()->json($model);
}

The /api/listing/{listing} endpoint is now ready for consumption by a client app.

To summarize, we built a web service with Laravel to make the data publicly accessible. This involved setting up a database table using a migration and schema, then seeding the database with mock listing data. We then created a public interface for the web service using routes.

You enjoyed an excerpt from Full-Stack Vue.js 2 and Laravel 5, written by Anthony Gore, which will help you bring the frontend and backend together with Vue, Vuex, and Laravel.

Read More

Testing RESTful Web Services with Postman
How to develop RESTful web services in Spring
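To see the finished endpoint from the client's perspective, here is a short browser-side sketch of consuming it. This is our own illustration rather than part of the book excerpt; it assumes the vuebnb.test development host used throughout the article:

// client.js - fetch a single listing from the web service and read its fields.
fetch('http://vuebnb.test/api/listing/1')
  .then((response) => response.json())
  .then((listing) => {
    console.log(listing.title);   // "Central Downtown Apartment with Amenities"
    console.log(listing.image_1); // fully-qualified image URL added by the controller
  });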

Anatomy of a WordPress Plugin

Packt
25 Mar 2011
7 min read
WordPress 3 Plugin Development Essentials — create your own powerful, interactive plugins to extend and add features to your WordPress site.

WordPress is a popular content management system (CMS), most renowned for its use as a blogging/publishing application. According to the usage statistics tracker BuiltWith (http://builtwith.com), WordPress is considered to be the most popular blogging software on the planet — not bad for something that has only been around officially since 2003.

Before we develop any substantial plugins of our own, let's take a few moments to look at what other people have done, so we get an idea of what the final product might look like. By this point, you should have a fresh version of WordPress installed and running somewhere for you to play with. It is important that your installation of WordPress is one with which you can tinker. In this article by Brian Bondari and Everett Griffiths, authors of WordPress 3 Plugin Development Essentials, we will purposely break a few things to help see how they work, so please don't try anything in this article on a live production site.

Deconstructing an existing plugin: "Hello Dolly"

WordPress ships with a simple plugin named "Hello Dolly". Its name is a whimsical take on the programmer's obligatory "Hello, World!", and it is trotted out only for pedantic explanations like the one that follows (unless, of course, you really do want random lyrics by Jerry Herman to grace your administration screens).

Activating the plugin

Let's activate this plugin so we can have a look at what it does:

Browse to your WordPress Dashboard at http://yoursite.com/wp-admin/.
Navigate to the Plugins section.
Under the Hello Dolly title, click on the Activate link.

You should now see a random lyric appear in the top-right portion of the Dashboard. Refresh the page a few times to get the full effect.

Examining the hello.php file

Now that we've tried out the "Hello Dolly" plugin, let's have a closer look. In your favorite text editor, open up the /wp-content/plugins/hello.php file. Can you identify the following integral parts?

The Information Header, which describes details about the plugin (author and description). This is contained in a large PHP /* comment */.
User-defined functions, such as the hello_dolly() function.
The add_action() and/or add_filter() functions, which hook a WordPress event to a user-defined function.

It looks pretty simple, right? That's all you need for a plugin:

An information header
Some user-defined functions
add_action() and/or add_filter() functions

Now that we've identified the critical component parts, let's examine them in more detail.

Information header

Don't just skim this section thinking it's a waste of breath on self-explanatory header fields. Unlike a normal PHP file, in which comments are purely optional, in WordPress plugin and theme files the Information Header is required! It is this block of text that causes a file to show up on WordPress' radar so that you can activate it or deactivate it. If your plugin is missing a valid information header, you cannot use it!

Exercise — breaking the header

To reinforce that the information header is an integral part of a plugin, try the following exercise:

In your WordPress Dashboard, ensure that the "Hello Dolly" plugin has been activated.
If applicable, use your preferred (s)FTP program to connect to your WordPress installation.
Using your text editor, temporarily delete the information header from /wp-content/plugins/hello.php and save the file (you can save the header elsewhere for now).
Refresh the Plugins page in your browser.

You should get a warning from WordPress stating that the plugin does not have a valid header. After you've seen the tragic consequences, put the header information back into the hello.php file. This should make it abundantly clear to you that the information header is absolutely vital for every WordPress plugin. If your plugin has multiple files, the header should be inside the primary file — in this article we use index.php as our primary file, but many plugins use a file named after the plugin as their primary file.

Location, name, and format

The header itself is similar in form and function to other content management systems, such as Drupal's module.info files or Joomla's XML module configurations — it offers a way to store additional information about a plugin in a standardized format. The values can be extended, but the most common header values are listed below:

Plugin Name: The displayed name of the plugin
Plugin URI: Destination of the "Visit plugin site" link
Description: Main block of text describing the plugin
Author: Listed below the plugin name
Author URI: Together with "Author", this creates a link to the author's site
Version: Use this to track your changes over time

For more information about header blocks, see the WordPress codex at http://codex.wordpress.org/File_Header.

In order for a PHP file to show up in WordPress' Plugins menu:

The file must have a .php extension.
The file must contain the information header somewhere in it (preferably at the beginning).
The file must be either in the /wp-content/plugins directory, or in a subdirectory of the plugins directory. It cannot be more deeply nested.

Understanding the Includes

When you activate a plugin, the name of the file containing the information header is stored in the WordPress database. Each time a page is requested, WordPress goes through a laundry list of PHP files it needs to load, so activating a plugin ensures that your own files are on that list. To help illustrate this concept, let's break WordPress again.

Exercise – parse errors

Try the following exercise:

Ensure that the "Hello Dolly" plugin is active.
Open the /wp-content/plugins/hello.php file in your text editor.
Immediately before the line that contains function hello_dolly_get_lyric, type in some gibberish text, such as "asdfasdf", and save the file.
Reload the plugins page in your browser.

This should generate a parse error, something like:

Parse error: syntax error, unexpected T_FUNCTION in /path/to/wordpress/html/wp-content/plugins/hello.php on line 16

Yikes! Your site is now broken. Why did this happen? We introduced errors into the plugin's main file (hello.php), so including it caused PHP and WordPress to choke. Delete the gibberish line from the hello.php file and save it to return the plugin to normal.

The parse error only occurs if there is an error in an active plugin. Deactivated plugins are not included by WordPress and therefore their code is not parsed. You can try the same exercise after deactivating the plugin and you'll notice that WordPress does not raise any errors.

Bonus for the curious

In case you're wondering exactly where and how WordPress stores the information about activated plugins, have a look in the database.
Using your MySQL client, you can browse the wp_options table or execute the following query:

SELECT option_value FROM wp_options WHERE option_name='active_plugins';

The active plugins are stored as a serialized PHP hash, referencing the file containing the header. The following is an example of what the serialized hash might contain if you had activated a plugin named "Bad Example". You can use PHP's unserialize() function to parse the contents of this string into a PHP variable, as in the following script:

<?php
$active_plugin_str = 'a:1:{i:0;s:27:"bad-example/bad-example.php";}';
print_r( unserialize($active_plugin_str) );
?>

And here's its output:

Array
(
    [0] => bad-example/bad-example.php
)


Shapefiles in Leaflet

Packt
18 Aug 2014
5 min read
This article, written by Paul Crickard III, the author of Leaflet.js Essentials, describes the use of shapefiles in Leaflet. It shows how a shapefile can be used to create geographical features on a map, and how shapefiles can be used to add a pop up or for styling purposes. (For more resources related to this topic, see here.)

Using shapefiles in Leaflet

A shapefile is the most common geographic file type you are likely to encounter. A shapefile is not a single file, but rather several files used to create geographic features on a map. When you download a shapefile, you will have .shp, .shx, and .dbf files at a minimum; these contain the geometry, the index, and a database of attributes. Your shapefile will most likely also include a projection file (.prj) that tells the application the projection of the data, so the coordinates make sense to the application. In the examples, you will also have a .shp.xml file that contains metadata and two spatial index files, .sbn and .sbx.

To find shapefiles, you can usually search for open data and a city name. In this example, we will be using a shapefile from ABQ Data, the City of Albuquerque data portal. You can find more data at http://www.cabq.gov/abq-data. When you download a shapefile, it will most likely be in the ZIP format because it contains multiple files.

To open a shapefile in Leaflet using the leaflet-shpfile plugin, follow these steps:

First, add references to two JavaScript files. The first, leaflet-shpfile, is the plugin; the second is the shapefile parser it depends on, shp.js:

<script src="leaflet.shpfile.js"></script>
<script src="shp.js"></script>

Next, create a new shapefile layer and add it to the map, passing the path of the zipped shapefile to the layer:

var shpfile = new L.Shapefile('council.zip');
shpfile.addTo(map);

Your map should now display the shapefile.

Performing the preceding steps adds the shapefile to the map, but you will not be able to see any individual feature properties. When you create a shapefile layer, you specify the data, followed by the options, which are passed to the L.geoJson class. The following code shows you how to add a pop up to your shapefile layer:

var shpfile = new L.Shapefile('council.zip', {
    onEachFeature: function (feature, layer) {
        layer.bindPopup("<a href='" + feature.properties.WEBPAGE + "'>Page</a><br>" +
            "<a href='" + feature.properties.PICTURE + "'>Image</a>");
    }
});

In the preceding code, you pass council.zip to the shapefile and, for options, you use the onEachFeature option, which takes a function. In this case, you use an anonymous function and bind the pop up to the layer. In the text of the pop up, you concatenate your HTML with the name of the property you want to display, using the format feature.properties.NAME-OF-PROPERTY. To find the names of the properties in a shapefile, you can open the .dbf file and look at the column headers. However, this can be cumbersome, and you may want to add all of the shapefiles in a directory without knowing their contents.
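One assumption running through these snippets is an existing Leaflet map instance named map, created before the steps above. A minimal setup might look like the following sketch, where the Albuquerque coordinates and the OpenStreetMap tile URL are our illustrative choices rather than the author's original code:

// Create a map in the div with id="map", centered on Albuquerque.
var map = L.map('map').setView([35.0844, -106.6504], 11);

// Add a standard OpenStreetMap base layer underneath the shapefile layers.
L.tileLayer('https://tile.openstreetmap.org/{z}/{x}/{y}.png', {
    attribution: '&copy; OpenStreetMap contributors'
}).addTo(map);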
If you do not know the names of the properties for a given shapefile, the following example shows you how to retrieve them and display each property with its value in a pop up:

var shpfile = new L.Shapefile('council.zip', {
    onEachFeature: function (feature, layer) {
        var holder = [];
        for (var key in feature.properties) {
            holder.push(key + ": " + feature.properties[key] + "<br>");
        }
        var popupContent = holder.join("");
        layer.bindPopup(popupContent);
    }
});
shpfile.addTo(map);

In the preceding code, you first create an array to hold all of the lines in your pop up, one for each key/value pair. Next, you run a for loop that iterates through the properties object, concatenating each key name with its value and a line break, and pushing each line into the array. You then join all of the elements into a single string. When you use the .join() method, it separates each element of the array in the new string with a comma; passing empty quotes removes the commas. Lastly, you bind the pop up with the string as its content and add the shapefile to the map.

The shapefile also takes a style option. You can pass any of the Path class options, such as the color, opacity, or stroke, to change the appearance of the layer. The following code creates a red polygon with a black outline and makes it slightly transparent:

var shpfile = new L.Shapefile('council.zip', {
    style: function (feature) {
        return { color: "black", fillColor: "red", fillOpacity: 0.75 };
    }
});

Summary

In this article, we learned how shapefiles can be added to a map, how pop ups are attached to the resulting features, and how the layer can be styled. You will also learn how to connect to an ESRI server that has an exposed REST service.

Resources for Article:

Further resources on this subject:
Getting started with Leaflet [Article]
Using JavaScript Effects with Joomla! [Article]
Quick start [Article]