How-To Tutorials - Web Development


Server-side Swift: Building a Slack Bot, Part 1

Peter Zignego
12 Oct 2016
5 min read
As a remote iOS developer, I love Slack. It's my meeting room and my water cooler over the course of a work day. If you're not familiar with Slack, it is a group communication tool popular in Silicon Valley and beyond. What makes Slack valuable beyond replacing email as the go-to communication method for businesses is that it is more than chat; it is a platform. Thanks to Slack's open attitude toward developers with its API, hundreds of developers have been building what have become known as Slack bots.

There are many different libraries available to help you start writing your Slack bot, covering a wide range of programming languages. I wrote a library in Apple's new programming language (Swift) for this very purpose, called SlackKit. SlackKit wasn't very practical initially—it only ran on iOS and OS X. On the modern web, you need to support Linux to deploy on Amazon Web Services, Heroku, or hosted server companies such as Linode and Digital Ocean. But last June, Apple open sourced Swift, including official support for Linux (Ubuntu 14 and 15 specifically). This made it possible to deploy Swift code on Linux servers, and developers hit the ground running to build out the infrastructure needed to make Swift a viable language for server applications.

Even with this huge developer effort, it is still early days for server-side Swift. Apple's Linux Foundation port is a huge undertaking, as is the work to get libdispatch, a concurrency framework that provides much of the underpinning for Foundation, up and running. In addition to rough official tooling, writing code for server-side Swift can be a bit like hitting a moving target, with biweekly snapshot releases and multiple, ABI-incompatible versions to target.

Zewo to Sixty on Linux

Fortunately, there are some good options for deploying Swift code on servers right now, even with Apple's libraries in flux. I'm going to focus on one in particular: Zewo. Zewo is modular by design, allowing us to use the Swift Package Manager to pull in only what we need instead of a monolithic framework. It's open source and has a great community of developers that spans the globe. If you're interested in the world of server-side Swift, you should get involved! Oh, and of course they have a Slack. Using Zewo and a few other open source libraries, I was able to build a version of SlackKit that runs on Linux.

A Swift Tutorial

In this two-part post series I will detail a step-by-step guide to writing a Slack bot in Swift and deploying it to Heroku. I'm going to be using OS X, but this is also achievable on Linux using the editor of your choice.

Prerequisites

Install Homebrew:
/usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
Install swiftenv:
brew install kylef/formulae/swiftenv
Configure your shell:
echo 'if which swiftenv > /dev/null; then eval "$(swiftenv init -)"; fi' >> ~/.bash_profile
Download and install the latest Zewo-compatible snapshot:
swiftenv install DEVELOPMENT-SNAPSHOT-2016-05-09-a
swiftenv local DEVELOPMENT-SNAPSHOT-2016-05-09-a
Install and link OpenSSL:
brew install openssl
brew link openssl --force

Let's Keep Score

The sample application we'll be building is a leaderboard for Slack, like PlusPlus++ by Betaworks. It works like this: add a point for every @thing++, subtract a point for every @thing--, and show a leaderboard when asked @botname leaderboard. First, we need to create the directory for our application and initialize the basic project structure.
mkdir leaderbot && cd leaderbot
swift build --init

Next, we need to edit Package.swift to add our dependency, SlackKit:

import PackageDescription

let package = Package(
    name: "Leaderbot",
    targets: [],
    dependencies: [
        .Package(url: "https://github.com/pvzig/SlackKit.git", majorVersion: 0, minor: 0),
    ]
)

SlackKit is dependent on several Zewo libraries, but thanks to the Swift Package Manager, we don't have to worry about importing them explicitly. Then we need to build our dependencies:
swift build
And our development environment (we need to pass in some linker flags so that swift build knows where to find the version of OpenSSL we installed via Homebrew and the C modules that some of our Zewo libraries depend on):
swift build -Xlinker -L$(pwd)/.build/debug/ -Xswiftc -I/usr/local/include -Xlinker -L/usr/local/lib -X
In Part 2, I will show all of the Swift code, how to get an API token, how to test the app and deploy it on Heroku, and finally how to launch it.

Disclaimer

The Linux version of SlackKit should be considered an alpha release. It's a fun tech demo to show what's possible with Swift on the server, not something to be relied upon. Feel free to report issues you come across.

About the author

Peter Zignego is an iOS developer in Durham, North Carolina. He writes at bytesized.co, tweets @pvzig, and freelances at Launch Software.


Frontend development with Bootstrap 4

Packt
06 Oct 2016
19 min read
In this article, Bass Jobsen, author of the book Bootstrap 4 Site Blueprints, explains that Bootstrap's popularity as a frontend web development framework is easy to understand. It provides a palette of user-friendly, cross-browser-tested solutions for the most standard UI conventions. Its ready-made, community-tested combination of HTML markup, CSS styles, and JavaScript plugins greatly speeds up the task of developing a frontend web interface, and it yields a pleasing result out of the gate. With the fundamental elements in place, we can customize the design on top of a solid foundation. (For more resources related to this topic, see here.)

However, not all that is popular, efficient, and effective is good. Too often, a handy tool can generate and reinforce bad habits; not so with Bootstrap, at least not necessarily so. Those who have followed it from the beginning know that its first release and early updates occasionally favored pragmatic efficiency over best practices. The fact is that some best practices, including semantic markup, mobile-first design, and performance-optimized assets, require extra time and effort for implementation.

Quantity and quality

If handled well, I feel that Bootstrap is a boon for the web development community in terms of quality and efficiency. Since developers are attracted to the web development framework, they become part of a coding community that draws them increasingly to the current best practices. From the start, Bootstrap has encouraged the implementation of tried, tested, and future-friendly CSS solutions, from Nicolas Gallagher's CSS normalize to CSS3's displacement of image-heavy design elements. It has also supported (if not always modeled) HTML5 semantic markup.

Improving with age

With the release of v2.0, Bootstrap took responsive design into the mainstream, ensuring that its interface elements could travel well across devices, from desktops to tablets to handhelds. With the v3.0 release, Bootstrap stepped up its game again by providing the following features:
The responsive grid was now mobile-first friendly
Icons now utilize web fonts and, thus, were mobile- and retina-friendly
With the drop of support for IE7, markup and CSS conventions were now leaner and more efficient
Since version 3.2, autoprefixer was required to build Bootstrap

This article is about the v4.0 release. This release contains many improvements and also some new components, while some other components and plugins are dropped. In the following overview, you will find the most important improvements and changes in Bootstrap 4:
Less (Leaner CSS) has been replaced with Sass.
CSS code has been refactored to avoid tag and child selectors.
There is an improved grid system with a new grid tier to better target mobile devices.
The navbar has been replaced.
It has opt-in flexbox support.
It has a new HTML reset module called Reboot. Reboot extends Nicolas Gallagher's CSS normalize and handles the box-sizing: border-box declarations.
jQuery plugins are written in ES6 now and come with UMD support.
There is improved auto-placement of tooltips and popovers, thanks to the help of a library called Tether.
It has dropped support for Internet Explorer 8, which enables us to swap pixels with rem and em units.
It has added the Card component, which replaces the Wells, thumbnails, and Panels in earlier versions.
It has dropped the icons in the font format from the Glyphicon Halflings set.
The Affix plugin is dropped, and it can be replaced with the position: sticky polyfill (https://github.com/filamentgroup/fixed-sticky).
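Two of the changes above, the ES6 rewrite of the jQuery plugins and the Tether-based placement of tooltips and popovers, do not change how you opt in to those plugins. As a quick illustration (this snippet is not from the original article), tooltips remain opt-in and are typically enabled with a small piece of jQuery once the page's scripts have loaded:

// Minimal sketch (not from the article): enabling Bootstrap tooltips.
// Assumes jQuery, Tether, and Bootstrap's own JavaScript are already loaded on the page.
$(function () {
  // Initialize a tooltip for every element that declares data-toggle="tooltip".
  $('[data-toggle="tooltip"]').tooltip();
});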
The power of Sass

When working with Bootstrap, there is the power of Sass to consider. Sass is a preprocessor for CSS. It extends the CSS syntax with variables, mixins, and functions and helps you keep your CSS code DRY (Don't Repeat Yourself). Sass was originally written in Ruby. Nowadays, a fast port of Sass written in C++, called libSass, is available. Bootstrap uses the modern SCSS syntax for Sass instead of the older indented syntax of Sass.

Using Bootstrap CLI

You will be introduced to Bootstrap CLI. Instead of using Bootstrap's bundled build process, you can also start a new project by running the Bootstrap CLI. Bootstrap CLI is the command-line interface for Bootstrap 4. It includes some built-in example projects, but you can also use it to employ and deliver your own projects. You'll need the following software installed to get started with Bootstrap CLI:
Node.js 0.12+: Use the installer provided on the NodeJS website, which can be found at http://nodejs.org/. With Node installed, run [sudo] npm install -g grunt bower
Git: Use the installer for your OS. Windows users can also try Git
Gulp is another task runner for the Node.js system. Note that if you prefer Gulp over Grunt, you should install gulp instead of grunt with the following command: [sudo] npm install -g gulp bower

The Bootstrap CLI is installed through npm by running the following command in your console:
npm install -g bootstrap-cli
This will add the bootstrap command to your system.

Preparing a new Bootstrap project

After installing the Bootstrap CLI, you can create a new Bootstrap project by running the following command in your console:
bootstrap new --template empty-bootstrap-project-gulp
Enter the name of your project for the question "What's the project called? (no spaces)". A new folder with the project name will be created. After the setup process, the directory and file structure of your new project folder should look as shown in the following figure:
The project folder also contains a Gulpfile.js file. Now, you can run the bootstrap watch command in your console and start editing the html/pages/index.html file. The HTML templates are compiled with Panini. Panini is a flat file compiler that helps you to create HTML pages with consistent layouts and reusable partials with ease. You can read more about Panini at http://foundation.zurb.com/sites/docs/panini.html.

Responsive features and breakpoints

Bootstrap has four breakpoints at 544, 768, 992, and 1200 pixels by default. At these breakpoints, your design may adapt to and target specific devices and viewport sizes. Bootstrap's mobile-first and responsive grid(s) also use these breakpoints. You can read more about the grids later on. You can use these breakpoints to specify and name the viewport ranges. The extra small (xs) range is for portrait phones with a viewport smaller than 544 pixels, the small (sm) range is for landscape phones with viewports smaller than 768 pixels, the medium (md) range is for tablets with viewports smaller than 992 pixels, the large (lg) range is for desktops with viewports between 992 and 1200 pixels, and finally the extra-large (xl) range is for desktops with a viewport wider than 1200 pixels. The breakpoints are in pixel values, as the viewport pixel size does not depend on the font size and modern browsers have already fixed some zooming bugs.
Some people claim that em values should be preferred. To learn more about this, check out the following link: http://zellwk.com/blog/media-query-units/. Those who still prefer em values over pixel values can simply change the $grid-breakpoints variable declaration in the scss/includes/_variables.scss file. To use em values for media queries, the SCSS code should look as follows:

$grid-breakpoints: (
  // Extra small screen / phone
  xs: 0,
  // Small screen / phone
  sm: 34em, // 544px
  // Medium screen / tablet
  md: 48em, // 768px
  // Large screen / desktop
  lg: 62em, // 992px
  // Extra large screen / wide desktop
  xl: 75em // 1200px
);

Note that you also have to change the $container-max-widths variable declaration. You should change or modify Bootstrap's variables in the local scss/includes/_variables.scss file, as explained at http://bassjobsen.weblogs.fm/preserve_settings_and_customizations_when_updating_bootstrap/. This will ensure that your changes are not overwritten when you update Bootstrap.

The new Reboot module and Normalize.css

When talking about cascade in CSS, there will, no doubt, be a mention of the browser default settings getting a higher precedence than the author's preferred styling. In other words, anything that is not defined by the author will be assigned a default styling set by the browser. The default styling may differ for each browser, and this behavior plays a major role in many cross-browser issues. To prevent these sorts of problems, you can perform a CSS reset. CSS or HTML resets set a default author style for commonly used HTML elements to make sure that browser default styles do not mess up your pages or render your HTML elements differently in other browsers. Bootstrap uses Normalize.css, written by Nicolas Gallagher. Normalize.css is a modern, HTML5-ready alternative to CSS resets and can be downloaded from http://necolas.github.io/normalize.css/. It lets browsers render all elements more consistently and makes them adhere to modern standards. Together with some other styles, Normalize.css forms the new Reboot module of Bootstrap.

Box-sizing

The Reboot module also sets the global box-sizing value from content-box to border-box. The box-sizing property is the one that sets the CSS box model used for calculating the dimensions of an element. In fact, box-sizing is not new in CSS, but nonetheless, switching your code to box-sizing: border-box will make your work a lot easier. When using the border-box setting, the calculated width of an element includes its border width and padding. For example, an element declared with a width of 200 pixels, 10 pixels of padding, and a 2-pixel border still occupies 200 pixels on screen; only its inner content box shrinks, to 176 pixels. So, changing the border width or padding of an element won't break your layouts.

Predefined CSS classes

Bootstrap ships with predefined CSS classes for everything. You can build a mobile-first responsive grid for your project by only using div elements and the right grid classes. CSS classes for styling other elements and components are also available. Consider the styling of a button in the following HTML code:
<button class="btn btn-warning">Warning!</button>
Now, your button will be as shown in the following screenshot:
You will notice that Bootstrap uses two classes to style a single button. The first is the .btn class, which gives the button the general button layout styles. The second class is the .btn-warning class, which sets the custom colors of the button.

Creating a local Sass structure

Before we can start with compiling Bootstrap's Sass code into CSS code, we have to create some local Sass or SCSS files. First, create a new scss subdirectory in your project directory.
In the scss directory, create your main project file called app.scss. Then, create a new subdirectory in the new scss directory named includes. Now, you will have to copy bootstrap.scss and _variables.scss from the Bootstrap source code in the bower_components directory to the new scss/includes directory as follows: cp bower_components/bootstrap/scss/bootstrap.scss scss/includes/_bootstrap.scss cp bower_components/bootstrap/scss/_variables.scss scss/includes/ You will notice that the bootstrap.scss file has been renamed to _bootstrap.scss, starting with an underscore, and has become a partial file now. Import the files you have copied in the previous step into the app.scss file, as follows: @import "includes/variables"; @import "includes/bootstrap"; Then, open the scss/includes/_bootstrap.scss file and change the import part for the Bootstrap partial files so that the original code in the bower_components directory will be imported here. Note that we will set the include path for the Sass compiler to the bower_components directory later on. The @import statements should look as shown in the following SCSS code: // Core variables and mixins @import "bootstrap/scss/variables"; @import "bootstrap/scss/mixins"; // Reset and dependencies @import "bootstrap/scss/normalize"; You're importing all of Bootstrap's SCSS code in your project now. When preparing your code for production, you can consider commenting out the partials that you do not require for your project. Modification of scss/includes/_variables.scss is not required, but you can consider removing the !default declarations because the real default values are set in the original _variables.scss file, which is imported after the local one. Note that the local scss/includes/_variables.scss file does not have to contain a copy of all of the Bootstrap's variables. Having them all just makes it easier to modify them for customization; it also ensures that your default values do not change when you are updating Bootstrap. Setting up your project and requirements For this project, you'll use the Bootstrap CLI again, as it helps you create a setup for your project comfortably. Bootstrap CLI requires you to have Node.js and Gulp already installed on your system. Now, create a new project by running the following command in your console: bootstrap new Enter the name of your project and choose the An empty new Bootstrap project. Powered by Panini, Sass and Gulp. template. Now your project is ready to start with your design work. However, before you start, let's first go through the introduction to Sass and the strategies for customization. The power of Sass in your project Sass is a preprocessor for CSS code and is an extension of CSS3, which adds nested rules, variables, mixins, functions, selector inheritance, and more. Creating a local Sass structure Before we can start with compiling Bootstrap's Sass code into CSS code, we have to create some local Sass or SCSS files. First, create a new scss subdirectory in your project directory. In the scss directory, create your main project file and name it app.scss. 
Using the CLI and running the code from GitHub Install the Bootstrap CLI using the following commands in your console: [sudo] npm install -g gulp bower npm install bootstrap-cli --global Then, use the following command to set up a Bootstrap 4 Weblog project: bootstrap new --repo https://github.com/bassjobsen/bootstrap-weblog.git The following figure shows the end result of your efforts: Turning our design into a WordPress theme WordPress is a very popular CMS (Content Management System) system; it now powers 25 percent of all sites across the web. WordPress is a free and open source CMS system and is based on PHP. To learn more about WordPress, you can also visit Packt Publishing’s WordPress Tech Page at https://www.packtpub.com/tech/wordpress. Now let's turn our design into a WordPress theme. There are many Bootstrap-based themes that we could choose from. We've taken care to integrate Bootstrap's powerful Sass styles and JavaScript plugins with the best practices found for HTML5. It will be to our advantage to use a theme that does the same. We'll use the JBST4 theme for this exercise. JBST4 is a blank WordPress theme built with Bootstrap 4. Installing the JBST 4 theme Let's get started by downloading the JBST theme. Navigate to your wordpress/wp-content/themes/ directory and run the following command in your console: git clone https://github.com/bassjobsen/jbst-4-sass.git jbst-weblog-theme Then navigate to the new jbst-weblog-theme directory and run the following command to confirm whether everything is working: npm install gulp Download from GitHub You can download the newest and updated version of this theme from GitHub too. You will find it at https://github.com/bassjobsen/jbst-weblog-theme. JavaScript events of the Carousel plugin Bootstrap provides custom events for most of the plugins' unique actions. The Carousel plugin fires the slide.bs.carousel (at the beginning of the slide transition) and slid.bs.carousel (at the end of the slide transition) events. You can use these events to add custom JavaScript code. You can, for instance, change the background color of the body on the events by adding the following JavaScript into the js/main.js file: $('.carousel').on('slide.bs.carousel', function () { $('body').css('background-color','#'+(Math.random()*0xFFFFFF<<0).toString(16)); }); You will notice that the gulp watch task is not set for the js/main.js file, so you have to run the gulp or bootstrap watch command manually after you are done with the changes. For more advanced changes of the plugin's behavior, you can overwrite its methods by using, for instance, the following JavaScript code: !function($) { var number = 0; var tmp = $.fn.carousel.Constructor.prototype.cycle; $.fn.carousel.Constructor.prototype.cycle = function (relatedTarget) { // custom JavaScript code here number = (number % 4) + 1; $('body').css('transform','rotate('+ number * 90 +'deg)'); tmp.call(this); // call the original function }; }(jQuery); The preceding JavaScript sets the transform CSS property without vendor prefixes. The autoprefixer only prefixes your static CSS code. For full browser compatibility, you should add the vendor prefixes in the JavaScript code yourself. Bootstrap exclusively uses CSS3 for its animations, but Internet Explorer 9 doesn’t support the necessary CSS properties. Adding drop-down menus to our navbar Bootstrap’s JavaScript Dropdown Plugin enables you to create drop-down menus with ease. You can also add these drop-down menus in your navbar too. 
Open the html/includes/header.html file in your text editor. You will notice that the Gulp build process uses the Panini HTML compiler to compile our HTML templates into HTML pages. Panini is powered by the Handlebars template language. You can use helpers, iterations, and custom data in your templates. In this example, you'll use the power of Panini to build the navbar items with drop-down menus. First, create a html/data/productgroups.yml file that contains the titles of the navbar items:
Shoes
Clothing
Accessories
Women
Men
Kids
All Departments
The preceding code is written in the YAML format. YAML is a human-readable data serialization language that takes concepts from programming languages and ideas from XML; you can read more about it at http://yaml.org/. Using the data described in the preceding code, you can use the following HTML and template code to build the navbar items:

<ul class="nav navbar-nav navbar-toggleable-sm collapse" id="collapsiblecontent">
  {{#each productgroups}}
  <li class="nav-item dropdown {{#ifCond this 'Shoes'}}active{{/ifCond}}">
    <a class="nav-link dropdown-toggle" data-toggle="dropdown" href="#" role="button" aria-haspopup="true" aria-expanded="false">
      {{ this }}
    </a>
    <div class="dropdown-menu">
      <a class="dropdown-item" href="#">Action</a>
      <a class="dropdown-item" href="#">Another action</a>
      <a class="dropdown-item" href="#">Something else here</a>
      <div class="dropdown-divider"></div>
      <a class="dropdown-item" href="#">Separated link</a>
    </div>
  </li>
  {{/each}}
</ul>

The preceding code uses a (for) each loop to build the seven navbar items; each item gets the same drop-down menu. The Shoes menu item gets the active class. Handlebars, and therefore Panini, does not support conditional comparisons by default. The if statement can only handle a single value, but you can add a custom helper to enable conditional comparisons. The custom helper that enables us to use the ifCond statement can be found in the html/helpers/ifCond.js file; a sketch of it appears below. Read my blog post, How to set up Panini for different environments, at http://bassjobsen.weblogs.fm/set-panini-different-environments/, to learn more about Panini and custom helpers.
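The following is a minimal sketch of what that helper could look like; it is an assumption rather than the author's actual code, since the article does not include the contents of ifCond.js. Panini loads each JavaScript file in its helpers folder as a Handlebars helper named after the file:

// Hypothetical sketch of html/helpers/ifCond.js; the real implementation may differ.
// A Handlebars block helper that renders its body only when the two values are equal.
module.exports = function (a, b, options) {
  if (a === b) {
    return options.fn(this);    // render the {{#ifCond}} ... {{/ifCond}} block
  }
  return options.inverse(this); // render the {{else}} block, if there is one
};

With a helper like this in place, {{#ifCond this 'Shoes'}}active{{/ifCond}} in the template above adds the active class only to the Shoes item.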
The HTML code for the drop-down menu is in accordance with the code for drop-down menus as described for the Dropdown plugin at http://getbootstrap.com/components/dropdowns/. The navbar collapses for smaller screen sizes. By default, the drop-down menus look the same on all grids.

Now, you will use your Bootstrap skills to build an Angular 2 app. Angular 2 is the successor of AngularJS. You can read more about Angular 2 at https://angular.io/. It is a toolset for building the framework that is most suited to your application development; it lets you extend HTML vocabulary for your application. The resulting environment is extraordinarily expressive, readable, and quick to develop. Angular is maintained by Google and a community of individuals and corporations. I have also published the source for an Angular 2 with Bootstrap 4 starting point at GitHub. You will find it at the following URL: https://github.com/bassjobsen/angular2-bootstrap4-website-builder. You can install it by simply running the following command in your console:
git clone https://github.com/bassjobsen/angular2-bootstrap4-website-builder.git yourproject
Next, navigate to the new folder, run the following commands, and verify that it works:
npm install
npm start

Other tools to deploy Bootstrap 4

A Brunch skeleton using Bootstrap 4 is available at https://github.com/bassjobsen/brunch-bootstrap4. Brunch is a frontend web app build tool that builds, lints, compiles, concatenates, and shrinks your HTML5 apps. Read more about Brunch at the official website, which can be found at http://brunch.io/. You can try Brunch by running the following commands in your console:
npm install -g brunch
brunch new -s https://github.com/bassjobsen/brunch-bootstrap4
Notice that the first command requires administrator rights to run. After installing the tool, you can run the following command to build your project:
brunch build
The preceding command will create a new public/index.html file, after which you can open it in your browser. You'll find that it looks like this:

Yeoman

Yeoman is another build tool. It's a command-line utility that allows the creation of projects utilizing scaffolding templates, called generators. A Yeoman generator that scaffolds out a frontend Bootstrap 4 web app can be found at the following URL: https://github.com/bassjobsen/generator-bootstrap4. You can run the Yeoman Bootstrap 4 generator by running the following commands in your console:
npm install -g yo
npm install -g generator-bootstrap4
yo bootstrap4
grunt serve
Again, note that the first two commands require administrator rights. The grunt serve command runs a local web server at http://localhost:9000. Point your browser to that address and check whether it looks as follows:

Summary

Beyond this, there are a plethora of resources available for pushing further with Bootstrap. The Bootstrap community is an active and exciting one. This is truly an exciting point in the history of frontend web development. Bootstrap has made a mark in history, and for a good reason. Check out my GitHub pages at http://github.com/bassjobsen for new projects and updated sources or ask me a question on Stack Overflow (http://stackoverflow.com/users/1596547/bass-jobsen).

Resources for Article:
Further resources on this subject:
Gearing Up for Bootstrap 4 [article]
Creating a Responsive Magento Theme with Bootstrap 3 [article]
Responsive Visualizations Using D3.js and Bootstrap [article]


Getting Organized with NPM and Bower

Packt
06 Oct 2016
13 min read
In this article by Philip Klauzinski and John Moore, the authors of the book Mastering JavaScript Single Page Application Development, we will learn about the basics of NPM and Bower. JavaScript was the bane of the web development industry during the early days of the browser-rendered Internet. Now, it powers hugely impactful libraries such as jQuery, and JavaScript-rendered content (as opposed to server-side-rendered content) is even indexed by many search engines. What was once largely considered an annoying language used primarily to generate popup windows and alert boxes has now become, arguably, the most popular programming language in the world. (For more resources related to this topic, see here.)

Not only is JavaScript now more prevalent than ever in frontend architecture, but it has become a server-side language as well, thanks to the Node.js runtime. We have also now seen the proliferation of document-oriented databases, such as MongoDB, which store and return JSON data. With JavaScript present throughout the development stack, the door is now open for JavaScript developers to become full-stack developers without the need to learn a traditional server-side language. Given the right tools and know-how, any JavaScript developer can create single page applications (SPAs) written entirely in the language they know best, and they can do so using an architecture such as MEAN (MongoDB, Express, AngularJS, and Node.js).

Organization is key to the development of any complex single page application. If you don't get organized from the beginning, you are sure to introduce an inordinate number of regressions to your app. The Node.js ecosystem will help you do this with a full suite of indispensable and open source tools, three of which we will discuss here. In this article, you will learn about:
Node Package Manager
The Bower front-end package manager

What is Node Package Manager?

Within any full-stack JavaScript environment, Node Package Manager (NPM) will be your go-to tool for setting up your development environment and managing server-side libraries. NPM can be used within both global and isolated environment contexts. We will first explore the use of NPM globally.

Installing Node.js and NPM

NPM is a component of Node.js, so before you can use it, you must install Node.js. You can find installers for both Mac and Windows at nodejs.org. Once you have Node.js installed, using NPM is incredibly easy and is done from the command-line interface (CLI). Start by ensuring you have the latest version of NPM installed, as it is updated more often than Node.js itself:
$ npm install -g npm
When using NPM, the -g option will apply your changes to your global environment. In this case, you want your version of NPM to apply globally. As stated previously, NPM can be used to manage packages both globally and within isolated environments. Therefore, we want essential development tools to be applied globally so that you can use them in multiple projects on the same system. On Mac and some Unix-based systems, you may have to run the npm command as the superuser (prefix the command with sudo) in order to install packages globally, depending on how NPM was installed. If you run into this issue and wish to remove the need to prefix npm with sudo, see docs.npmjs.com/getting-started/fixing-npm-permissions.

Configuring your package.json file

For any project you develop, you will keep a local package.json file to manage your Node.js dependencies.
This file should be stored at the root of your project directory, and it will only pertain to that isolated environment. This allows you to have multiple Node.js projects with different dependency chains on the same system. When beginning a new project, you can automate the creation of the package.json file from the command line: $ npm init Running npm init will take you through a series of JSON property names to define through command-line prompts, including your app's name, version number, description, and more. The name and version properties are required, and your Node.js package will not install without them being defined. Several of the properties will have a default value given within parentheses in the prompt so that you may simply hit Enter to continue. Other properties will simply allow you to hit Enter with a blank entry and will not be saved to the package.json file or be saved with a blank value: name: (my-app) version: (1.0.0) description: entry point: (index.js) The entry point prompt will be defined as the main property in package.json and is not necessary unless you are developing a Node.js application. In our case, we can forgo this field. The npm init command may in fact force you to save the main property, so you will have to edit package.json afterward to remove it; however, that field will have no effect on your web app. You may also choose to create the package.json file manually using a text editor if you know the appropriate structure to employ. Whichever method you choose, your initial version of the package.json file should look similar to the following example: { "name": "my-app", "version": "1.0.0", "author": "Philip Klauzinski", "license": "MIT", "description": "My JavaScript single page application." } If you want your project to be private and want to ensure that it does not accidently get published to the NPM registry, you may want to add the private property to your package.json file and set it to true. Additionally, you may remove some properties that only apply to a registered package: { "name": "my-app", "author": "Philip Klauzinski", "description": "My JavaScript single page application.", "private": true } Once you have your package.json file set up the way you like it, you can begin installing Node.js packages locally for your app. This is where the importance of dependencies begins to surface. NPM dependencies There are three types of dependencies that can be defined for any Node.js project in your package.json file: dependencies, devDependencies, and peerDependencies. For the purpose of building a web-based SPA, you will only need to use the devDependencies declaration. The devDependencies ones are those that are required for developing your application, but not required for its production environment or for simply running it. If other developers want to contribute to your Node.js application, they will need to run npm install from the command line to set up the proper development environment. For information on the other types of dependencies, see docs.npmjs.com. When adding devDependencies to your package.json file, the command line again comes to the rescue. Let's use the installation of Browserify as an example: $ npm install browserify --save-dev This will install Browserify locally and save it along with its version range to the devDependencies object in your package.json file. 
Once installed, your package.json file should look similar to the following example: { "name": "my-app", "version": "1.0.0", "author": "Philip Klauzinski", "license": "MIT", "devDependencies": { "browserify": "^12.0.1" } } The devDependencies object will store each package as key-value pairs, in which the key is the package name and the value is the version number or version range. Node.js uses semantic versioning, where the three digits of the version number represent MAJOR.MINOR.PATCH. For more information on semantic version formatting, see semver.org. Updating your development dependencies You will notice that the version number of the installed package is preceded by a caret (^) symbol by default. This means that package updates will only allow patch and minor updates for versions above 1.0.0. This is meant to prevent major version changes from breaking your dependency chain when updating your packages to the latest versions. To update your devDependencies and save the new version numbers, you will enter the following from the command line. $ npm update --save-dev Alternatively, you can use the -D option as a shortcut for --save-dev: $ npm update -D To update all globally installed NPM packages to their latest versions, run npm update with the -g option: $ npm update -g For more information on semantic versioning within NPM, see docs.npmjs.com/misc/semver. Now that you have NPM set up and you know how to install your development dependencies, you can move on to installing Bower. Bower Bower is a package manager for frontend web assets and libraries. You will use it to maintain your frontend stack and control version chains for libraries such as jQuery, AngularJS, and any other components necessary to your app's web interface. Installing Bower Bower is also a Node.js package, so you will install it using NPM, much like you did with the Browserify example installation in the previous section, but this time you will be installing the package globally. This will allow you to run bower from the command line anywhere on your system without having to install it locally for each project. $ npm install -g bower You can alternatively install Bower locally as a development dependency so that you may maintain different versions of it for different projects on the same system, but this is generally not necessary. $ npm install bower --save-dev Next, check that Bower is properly installed by querying the version from the command line. $ bower -v Bower also requires the Git version control system (VCS) to be installed on your system in order to work with packages. This is because Bower communicates directly with GitHub for package management data. If you do not have Git installed on your system, you can find instructions for Linux, Mac, and Windows at git-scm.com. Configuring your bower.json file The process of setting up your bower.json file is comparable to that of the package.json file for NPM. It uses the same JSON format, has both dependencies and devDependencies, and can also be automatically created. $ bower init Once you type bower init from the command line, you will be prompted to define several properties with some defaults given within parentheses: ? name: my-app ? version: 0.0.0 ? description: My app description. ? main file: index.html ? what types of modules does this package expose? (Press <space> to? what types of modules does this package expose? globals ? keywords: my, app, keywords ? authors: Philip Klauzinski ? license: MIT ? homepage: http://gui.ninja ? 
set currently installed components as dependencies? No ? add commonly ignored files to ignore list? Yes ? would you like to mark this package as private which prevents it from being accidentally published to the registry? Yes These questions may vary depending on the version of Bower you install. Most properties in the bower.json file are not necessary unless you are publishing your project to the Bower registry, indicated in the final prompt. You will most likely want to mark your package as private unless you plan to register it and allow others to download it as a Bower package. Once you have created the bower.json file, you can open it in a text editor and change or remove any properties you wish. It should look something like the following example: { "name": "my-app", "version": "0.0.0", "authors": [ "Philip Klauzinski" ], "description": "My app description.", "main": "index.html", "moduleType": [ "globals" ], "keywords": [ "my", "app", "keywords" ], "license": "MIT", "homepage": "http://gui.ninja", "ignore": [ "**/.*", "node_modules", "bower_components", "test", "tests" ], "private": true } If you wish to keep your project private, you can reduce your bower.json file to two properties before continuing: { "name": "my-app", "private": true } Once you have the initial version of your bower.json file set up the way you like it, you can begin installing components for your app. Bower components location and the .bowerrc file Bower will install components into a directory named bower_components by default. This directory will be located directly under the root of your project. If you wish to install your Bower components under a different directory name, you must create a local system file named .bowerrc and define the custom directory name there: { "directory": "path/to/my_components" } An object with only a single directory property name is all that is necessary to define a custom location for your Bower components. There are many other properties that can be configured within a .bowerrc file. For more information on configuring Bower, see bower.io/docs/config/. Bower dependencies Bower also allows you to define both the dependencies and devDependencies objects like NPM. The distinction with Bower, however, is that the dependencies object will contain the components necessary for running your app, while the devDependencies object is reserved for components that you might use for testing, transpiling, or anything that does not need to be included in your frontend stack. Bower packages are managed using the bower command from the CLI. This is a user command, so it does not require super user (sudo) permissions. Let's begin by installing jQuery as a frontend dependency for your app: $ bower install jquery --save The --save option on the command line will save the package and version number to the dependencies object in bower.json. Alternatively, you can use the -S option as a shortcut for --save: $ bower install jquery -S Next, let's install the Mocha JavaScript testing framework as a development dependency: $ bower install mocha --save-dev In this case, we will use --save-dev on the command line to save the package to the devDependencies object instead. 
Your bower.json file should now look similar to the following example: { "name": "my-app", "private": true, "dependencies": { "jquery": "~2.1.4" }, "devDependencies": { "mocha": "~2.3.4" } } Alternatively, you can use the -D option as a shortcut for --save-dev: $ bower install mocha –D You will notice that the package version numbers are preceded by the tilde (~) symbol by default, in contrast to the caret (^) symbol, as is the case with NPM. The tilde serves as a more stringent guard against package version updates. With a MAJOR.MINOR.PATCH version number, running bower update will only update to the latest patch version. If a version number is composed of only the major and minor versions, bower update will update the package to the latest minor version. Searching the Bower registry All registered Bower components are indexed and searchable through the command line. If you don't know the exact package name of a component you wish to install, you can perform a search to retrieve a list of matching names. Most components will have a list of keywords within their bower.json file so that you can more easily find the package without knowing the exact name. For example, you may want to install PhantomJS for headless browser testing: $ bower search phantomjs The list returned will include any package with phantomjs in the package name or within its keywords list: phantom git://github.com/ariya/phantomjs.git dt-phantomjs git://github.com/keesey/dt-phantomjs qunit-phantomjs-runner git://github.com/jonkemp/... parse-cookie-phantomjs git://github.com/sindresorhus/... highcharts-phantomjs git://github.com/pesla/highcharts-phantomjs.git mocha-phantomjs git://github.com/metaskills/mocha-phantomjs.git purescript-phantomjs git://github.com/cxfreeio/purescript-phantomjs.git You can see from the returned list that the correct package name for PhantomJS is in fact phantom and not phantomjs. You can then proceed to install the package now that you know the correct name: $ bower install phantom --save-dev Now, you have Bower installed and know how to manage your frontend web components and development tools, but how do you integrate them into your SPA? This is where Grunt comes in. Summary Now that you have learned to set up an optimal development environment with NPM and supply it with frontend dependencies using Bower, it's time to start learning more about building a real app. Resources for Article: Further resources on this subject: API with MongoDB and Node.js [article] Tips & Tricks for Ext JS 3.x [article] Responsive Visualizations Using D3.js and Bootstrap [article]


Bootstrap and Angular: Saying Hello!

Packt
04 Oct 2016
6 min read
In this article by Sergey Akopkokhyants, author of the book Learning Web Development with Bootstrap and Angular (Second Edition), we will establish a development environment for the simplest application possible. (For more resources related to this topic, see here.)

Development environment setup

It's time to set up your development environment. This process is one of the most overlooked and often frustrating parts of learning to program because developers don't want to think about it. Developers must know the nuances of how to install and configure many different programs before they start real development. Everyone's computer is different; as a result, the same setup may not work on your computer. We will expose and eliminate all of these problems by defining the various pieces of environment you need to set up.

Defining shell

The shell is a required part of your software development environment. We will use the shell to install software, run commands to build and start the web server, and bring your web project to life. If your computer runs the Linux operating system, you will use the shell called Terminal. There are many Linux-based distributions out there that use diverse desktop environments, but most of them use the equivalent keyboard shortcut to open the Terminal. Use the keyboard shortcut Ctrl + Alt + T to open Terminal in Linux. If you have a Mac computer running OS X, you will use the Terminal shell as well. Use the keyboard shortcut Command + Space to open Spotlight, then type Terminal to search for it and run it. If you have a computer running the Windows operating system, you can use the standard command prompt, but we can do better. In a minute I will show you how to install Git on your computer, and you will get Git Bash for free. You can open a Terminal with the Git Bash shell program on Windows. I will use the bash shell for all exercises in this book whenever I need to work in the Terminal.

Installing Node.js

Node.js is the technology we will use as a cross-platform runtime environment for running server-side web applications. It is a combination of a native, platform-independent runtime based on Google's V8 JavaScript engine and a huge number of modules written in JavaScript. Node.js ships with different connectors and libraries that help you use HTTP, TLS, compression, file system access, raw TCP and UDP, and more. As a developer, you can write your own modules in JavaScript and run them inside the Node.js engine. The Node.js runtime makes it easy to build networked, event-driven application servers. The terms package and library are synonymous in JavaScript, so we will use them interchangeably. Node.js makes wide use of the JavaScript Object Notation (JSON) format for data exchange between the server and client sides because it is readily expressed and parsed, notably without the complexities of XML, SOAP, and other data exchange formats. You can use Node.js for the development of service-oriented applications that do something different than web servers. One of the most popular service-oriented applications is Node Package Manager (NPM), which we will use to manage library dependencies and deployment systems, and which underlies many platform-as-a-service (PaaS) providers for Node.js. If you do not have Node.js installed on your computer, you should download the pre-built installer from https://nodejs.org/en/download. You can start using Node.js immediately after installation. Open the Terminal and type:
node --version
Node.js must respond with the version number of the installed runtime:
v4.4.3
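To make the idea of event-driven application servers concrete, here is a minimal example; it is not part of the original article, and the file name is only an illustration. It uses nothing but Node.js's built-in http module and answers every request with a small JSON payload:

// hello-server.js (hypothetical example, not from the article)
const http = require('http');

const server = http.createServer((request, response) => {
  // This callback runs for every incoming request: the event-driven model described above.
  response.writeHead(200, { 'Content-Type': 'application/json' });
  response.end(JSON.stringify({ message: 'Hello from Node.js' }));
});

// Start listening; run it with "node hello-server.js" and open http://localhost:3000/
server.listen(3000);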
Setting up NPM

NPM is a package manager for JavaScript. You can use it to find, share, and reuse packages of code from many developers across the world. The number of packages grows dramatically every day and is now more than 250K. NPM is a Node.js package manager and uses Node.js to run itself. NPM is included in the Node.js setup bundle and is available right after installation. Open the Terminal and type:
npm --version
NPM must answer your command with a version number:
2.15.1
The following command gives us information about the Node.js and NPM install:
npm config list
There are two ways to install NPM packages: locally or globally. In cases when you would like to use the package as a tool, it is better to install it globally:
npm install --global <package_name>
If you need to find the folder with globally installed packages, you can use the following command:
npm config get prefix
Installing global packages is important, but it is best avoided if not needed. Mostly, you will install packages locally:
npm install <package_name>
You may find locally installed packages in the node_modules folder of your project.

Installing Git

You missed a lot if you are not familiar with Git. Git is a distributed version control system, and each Git working directory is a full-fledged repository. It keeps the complete history of changes and has full version tracking capabilities. Each repository is entirely independent of network access or a central server. You can install Git on your computer via a set of pre-built installers available on the official website https://git-scm.com/downloads. After installation, you can open the Terminal and type:
git --version
Git must respond with a version number:
git version 2.8.1.windows.1
As I said, for developers who use computers running the Windows operating system, you now have Git Bash free on your system.

Code editor

You can imagine how many programs for code editing exist, but today we will talk only about Visual Studio Code from Microsoft, which is free, open source, and runs everywhere. You can use any program you prefer for development, but I use only Visual Studio Code in our future exercises, so please install it from http://code.visualstudio.com/Download.

Summary

In this article, we learned about the shell concept, how to install Node.js and Git, and how to set up Node packages.

Resources for Article:
Further resources on this subject:
Gearing Up for Bootstrap 4 [article]
API with MongoDB and Node.js [article]
Mapping Requirements for a Modular Web Shop App [article]


Extending Yii

Packt
03 Oct 2016
14 min read
Introduction

In this article by Dmitry Eliseev, the author of the book Yii Application Development Cookbook Third Edition, we will see three Yii extensions—helpers, behaviors, and components. In addition, we will learn how to make your extension reusable and useful for the community and will focus on the many things you should do in order to make your extension as efficient as possible. (For more resources related to this topic, see here.)

Helpers

There are a lot of built-in framework helpers, like StringHelper in the yii\helpers namespace. It contains sets of helpful static methods for manipulating strings, files, arrays, and other subjects. In many cases, for additional behavior you can create your own helper and put any static functions into one. For example, we will implement a number helper in this recipe.

Getting ready

Create a new yii2-app-basic application by using composer, as described in the official guide at http://www.yiiframework.com/doc-2.0/guide-start-installation.html.

How to do it…

Create the helpers directory in your project and write the NumberHelper class:

<?php
namespace app\helpers;

class NumberHelper
{
    public static function format($value, $decimal = 2)
    {
        return number_format($value, $decimal, '.', ',');
    }
}

Add the actionNumbers method into SiteController:

<?php
...
class SiteController extends Controller
{
    …
    public function actionNumbers()
    {
        return $this->render('numbers', ['value' => 18878334526.3]);
    }
}

Add the views/site/numbers.php view:

<?php
use app\helpers\NumberHelper;
use yii\helpers\Html;

/* @var $this yii\web\View */
/* @var $value float */

$this->title = 'Numbers';
$this->params['breadcrumbs'][] = $this->title;
?>
<div class="site-numbers">
    <h1><?= Html::encode($this->title) ?></h1>
    <p>
        Raw number:<br />
        <b><?= $value ?></b>
    </p>
    <p>
        Formatted number:<br />
        <b><?= NumberHelper::format($value) ?></b>
    </p>
</div>

Open the action and see this result: In other cases you can specify another count of decimal numbers; for example: NumberHelper::format($value, 3)

How it works…

Any helper in Yii2 is just a set of functions implemented as static methods in corresponding classes. You can use one to implement any different format of output for manipulations with values of any variable, and for other cases. Note: Usually, static helpers are light-weight clean functions with a small count of arguments. Avoid putting your business logic and other complicated manipulations into helpers. Use widgets or other components instead of helpers in other cases.

See also

For more information about helpers, refer to http://www.yiiframework.com/doc-2.0/guide-helper-overview.html. And for examples of built-in helpers, see sources in the helpers directory of the framework; refer to https://github.com/yiisoft/yii2/tree/master/framework/helpers.

Creating model behaviors

There are many similar solutions in today's web applications. Leading products such as Google's Gmail are defining nice UI patterns; one of these is soft delete. Instead of a permanent deletion with multiple confirmations, Gmail allows users to immediately mark messages as deleted and then easily undo it. The same behavior can be applied to any object such as blog posts, comments, and so on. Let's create a behavior that will allow marking models as deleted, restoring models, selecting not yet deleted models, deleted models, and all models. In this recipe we'll follow a test-driven development approach to plan the behavior and test if the implementation is correct.
Getting ready Create a new yii2-app-basic application by using composer, as described in the official guide at http://www.yiiframework.com/doc-2.0/guide-start-installation.html. Create two databases for working and for tests. Configure Yii to use the first database in your primary application in config/db.php. Make sure the test application uses a second database in tests/codeception/config/config.php. Create a new migration: <?php use yiidbMigration; class m160427_103115_create_post_table extends Migration { public function up() { $this->createTable('{{%post}}', [ 'id' => $this->primaryKey(), 'title' => $this->string()->notNull(), 'content_markdown' => $this->text(), 'content_html' => $this->text(), ]); } public function down() { $this->dropTable('{{%post}}'); } } Apply the migration to both working and testing databases: ./yii migrate tests/codeception/bin/yii migrate Create a Post model: <?php namespace appmodels; use appbehaviorsMarkdownBehavior; use yiidbActiveRecord; /** * @property integer $id * @property string $title * @property string $content_markdown * @property string $content_html */ class Post extends ActiveRecord { public static function tableName() { return '{{%post}}'; } public function rules() { return [ [['title'], 'required'], [['content_markdown'], 'string'], [['title'], 'string', 'max' => 255], ]; } } How to do it… Let's prepare a test environment, starting with defining the fixtures for the Post model. Create the tests/codeception/unit/fixtures/PostFixture.php file: <?php namespace apptestscodeceptionunitfixtures; use yiitestActiveFixture; class PostFixture extends ActiveFixture { public $modelClass = 'appmodelsPost'; public $dataFile = '@tests/codeception/unit/fixtures/data/post.php'; } Add a fixture data file in tests/codeception/unit/fixtures/data/post.php: <?php return [ [ 'id' => 1, 'title' => 'Post 1', 'content_markdown' => 'Stored *markdown* text 1', 'content_html' => "<p>Stored <em>markdown</em> text 1</p>n", ], ]; Then, we need to create a test case tests/codeception/unit/MarkdownBehaviorTest: . .php: <?php namespace apptestscodeceptionunit; use appmodelsPost; use apptestscodeceptionunitfixturesPostFixture; use yiicodeceptionDbTestCase; class MarkdownBehaviorTest extends DbTestCase { public function testNewModelSave() { $post = new Post(); $post->title = 'Title'; $post->content_markdown = 'New *markdown* text'; $this->assertTrue($post->save()); $this->assertEquals("<p>New <em>markdown</em> text</p>n", $post->content_html); } public function testExistingModelSave() { $post = Post::findOne(1); $post->content_markdown = 'Other *markdown* text'; $this->assertTrue($post->save()); $this->assertEquals("<p>Other <em>markdown</em> text</p>n", $post->content_html); } public function fixtures() { return [ 'posts' => [ 'class' => PostFixture::className(), ] ]; } } Run unit tests: codecept run unit MarkdownBehaviorTest and ensure that tests have not passed Codeception PHP Testing Framework v2.0.9 Powered by PHPUnit 4.8.27 by Sebastian Bergmann and contributors. Unit Tests (2) --------------------------------------------------------------------------- Trying to test ... MarkdownBehaviorTest::testNewModelSave Error Trying to test ... MarkdownBehaviorTest::testExistingModelSave Error --------------------------------------------------------------------------- Time: 289 ms, Memory: 16.75MB Now we need to implement a behavior, attach it to the model, and make sure the test passes. Create a new directory, behaviors. 
Under this directory, create the MarkdownBehavior class:

<?php
namespace app\behaviors;

use yii\base\Behavior;
use yii\base\Event;
use yii\base\InvalidConfigException;
use yii\db\ActiveRecord;
use yii\helpers\Markdown;

class MarkdownBehavior extends Behavior
{
    public $sourceAttribute;
    public $targetAttribute;

    public function init()
    {
        if (empty($this->sourceAttribute) || empty($this->targetAttribute)) {
            throw new InvalidConfigException('Source and target must be set.');
        }
        parent::init();
    }

    public function events()
    {
        return [
            ActiveRecord::EVENT_BEFORE_INSERT => 'onBeforeSave',
            ActiveRecord::EVENT_BEFORE_UPDATE => 'onBeforeSave',
        ];
    }

    public function onBeforeSave(Event $event)
    {
        if ($this->owner->isAttributeChanged($this->sourceAttribute)) {
            $this->processContent();
        }
    }

    private function processContent()
    {
        $model = $this->owner;
        $source = $model->{$this->sourceAttribute};
        $model->{$this->targetAttribute} = Markdown::process($source);
    }
}

Let's attach the behavior to the Post model:

class Post extends ActiveRecord
{
    ...
    public function behaviors()
    {
        return [
            'markdown' => [
                'class' => MarkdownBehavior::className(),
                'sourceAttribute' => 'content_markdown',
                'targetAttribute' => 'content_html',
            ],
        ];
    }
}

Run the tests and make sure they pass:

Codeception PHP Testing Framework v2.0.9
Powered by PHPUnit 4.8.27 by Sebastian Bergmann and contributors.

Unit Tests (2) ---------------------------------------------------------------------------
Trying to test ... MarkdownBehaviorTest::testNewModelSave          Ok
Trying to test ... MarkdownBehaviorTest::testExistingModelSave     Ok
---------------------------------------------------------------------------
Time: 329 ms, Memory: 17.00MB

That's it. We've created a reusable behavior and can use it in all future projects by just attaching it to a model.

How it works…

Let's start with the test case. Since we want to work with a set of models, we define fixtures. The fixture data is put into the database each time a test method is executed. We prepare the unit tests to specify how the behavior should work:

First, we test the processing of new model content. The behavior must convert Markdown text from a source attribute to HTML and store the result in a target attribute.

Second, we test the updated content of an existing model. After changing the Markdown content and saving the model, we must get updated HTML content.

Now let's move to the interesting implementation details. In a behavior, we can add our own methods, which will be mixed into the model that the behavior is attached to. We can also subscribe to the owner component's events; here we use this to add our own listener:

public function events()
{
    return [
        ActiveRecord::EVENT_BEFORE_INSERT => 'onBeforeSave',
        ActiveRecord::EVENT_BEFORE_UPDATE => 'onBeforeSave',
    ];
}

And now we can implement this listener:

public function onBeforeSave(Event $event)
{
    if ($this->owner->isAttributeChanged($this->sourceAttribute)) {
        $this->processContent();
    }
}

In all of these methods, we can use the owner property to get the object the behavior is attached to. In general, we can attach a behavior to any model, controller, application, or other component that extends the yii\base\Component class. We can also attach the same behavior more than once to process different attributes:

class Post extends ActiveRecord
{
    ...
    public function behaviors()
    {
        return [
            [
                'class' => MarkdownBehavior::className(),
                'sourceAttribute' => 'description_markdown',
                'targetAttribute' => 'description_html',
            ],
            [
                'class' => MarkdownBehavior::className(),
                'sourceAttribute' => 'content_markdown',
                'targetAttribute' => 'content_html',
            ],
        ];
    }
}

Besides this, we can also extend the yii\base\AttributeBehavior class, as yii\behaviors\TimestampBehavior does, to update specified attributes on any event.

See also

To learn more about behaviors and events, refer to the following pages:

http://www.yiiframework.com/doc-2.0/guide-concept-behaviors.html
http://www.yiiframework.com/doc-2.0/guide-concept-events.html

For more information about Markdown syntax, refer to http://daringfireball.net/projects/markdown/.

Creating components

If you have some code that looks like it can be reused but you don't know whether it's a behavior, a widget, or something else, it's most probably a component. A component should be inherited from the yii\base\Component class. Later on, the component can be attached to the application and configured using the components section of a configuration file. That's the main benefit compared to using just a plain PHP class. We also get behaviors, events, getters, and setters support.

For our example, we'll implement a simple Exchange application component that will be able to get currency rates from the http://fixer.io site, attach it to the application, and use it.

Getting ready

Create a new yii2-app-basic application by using composer, as described in the official guide at http://www.yiiframework.com/doc-2.0/guide-start-installation.html.

How to do it…

To get a currency rate, our component should send an HTTP GET query to a service URL, like http://api.fixer.io/2016-05-14?base=USD. The service must return all supported rates for the nearest working day:

{
    "base":"USD",
    "date":"2016-05-13",
    "rates": {
        "AUD":1.3728,
        "BGN":1.7235,
        ...
        "ZAR":15.168,
        "EUR":0.88121
    }
}

The component should extract the required currency from the JSON response and return the target rate.

Create a components directory in your application structure.
Create the component class with the following interface:

<?php
namespace app\components;

use yii\base\Component;

class Exchange extends Component
{
    public function getRate($source, $destination, $date = null)
    {
    }
}

Implement the component's functionality:

<?php
namespace app\components;

use yii\base\Component;
use yii\base\InvalidConfigException;
use yii\base\InvalidParamException;
use yii\caching\Cache;
use yii\di\Instance;
use yii\helpers\Json;

class Exchange extends Component
{
    /**
     * @var string remote host
     */
    public $host = 'http://api.fixer.io';

    /**
     * @var bool cache results or not
     */
    public $enableCaching = false;

    /**
     * @var string|Cache component ID
     */
    public $cache = 'cache';

    public function init()
    {
        if (empty($this->host)) {
            throw new InvalidConfigException('Host must be set.');
        }
        if ($this->enableCaching) {
            $this->cache = Instance::ensure($this->cache, Cache::className());
        }
        parent::init();
    }

    public function getRate($source, $destination, $date = null)
    {
        $this->validateCurrency($source);
        $this->validateCurrency($destination);
        $date = $this->validateDate($date);
        $cacheKey = $this->generateCacheKey($source, $destination, $date);
        if (!$this->enableCaching || ($result = $this->cache->get($cacheKey)) === false) {
            $result = $this->getRemoteRate($source, $destination, $date);
            if ($this->enableCaching) {
                $this->cache->set($cacheKey, $result);
            }
        }
        return $result;
    }

    private function getRemoteRate($source, $destination, $date)
    {
        $url = $this->host . '/' . $date . '?base=' . $source;
        $response = Json::decode(file_get_contents($url));
        if (!isset($response['rates'][$destination])) {
            throw new \RuntimeException('Rate not found.');
        }
        return $response['rates'][$destination];
    }

    private function validateCurrency($source)
    {
        if (!preg_match('#^[A-Z]{3}$#s', $source)) {
            throw new InvalidParamException('Invalid currency format.');
        }
    }

    private function validateDate($date)
    {
        if (!empty($date) && !preg_match('#\d{4}-\d{2}-\d{2}#s', $date)) {
            throw new InvalidParamException('Invalid date format.');
        }
        if (empty($date)) {
            $date = date('Y-m-d');
        }
        return $date;
    }

    private function generateCacheKey($source, $destination, $date)
    {
        return [__CLASS__, $source, $destination, $date];
    }
}

Attach our component in the config/console.php or config/web.php configuration files:

'components' => [
    'cache' => [
        'class' => 'yii\caching\FileCache',
    ],
    'exchange' => [
        'class' => 'app\components\Exchange',
        'enableCaching' => true,
    ],
    // ...
    'db' => $db,
],

We can now use the new component directly or via the get method:

echo Yii::$app->exchange->getRate('USD', 'EUR');
echo Yii::$app->get('exchange')->getRate('USD', 'EUR', '2014-04-12');

Create a demonstration console controller:

<?php
namespace app\commands;

use Yii;
use yii\console\Controller;

class ExchangeController extends Controller
{
    public function actionTest($currency, $date = null)
    {
        echo Yii::$app->exchange->getRate('USD', $currency, $date) . PHP_EOL;
    }
}

And try to run some commands:

$ ./yii exchange/test EUR
> 0.90196

$ ./yii exchange/test EUR 2015-11-24
> 0.93888

$ ./yii exchange/test OTHER
> Exception 'yii\base\InvalidParamException' with message 'Invalid currency format.'

$ ./yii exchange/test EUR 2015/24/11
> Exception 'yii\base\InvalidParamException' with message 'Invalid date format.'

$ ./yii exchange/test ASD
> Exception 'RuntimeException' with message 'Rate not found.'

As a result, you will see rate values in the success cases and specific exceptions in the error cases. In addition to creating your own components, you can do more.
Overriding existing application components

Most of the time there will be no need to create your own application components, since other types of extensions, such as widgets or behaviors, cover almost all types of reusable code. However, overriding core framework components is a common practice and can be used to customize the framework's behavior for your specific needs without hacking into the core.

For example, to be able to format numbers using the Yii::$app->formatter->asNumber($value) method instead of the NumberHelper::format method from the Helpers recipe, follow these steps:

Extend the yii\i18n\Formatter component as follows:

<?php
namespace app\components;

class Formatter extends \yii\i18n\Formatter
{
    public function asNumber($value, $decimal = 2)
    {
        return number_format($value, $decimal, '.', ',');
    }
}

Override the class of the built-in formatter component:

'components' => [
    // ...
    'formatter' => [
        'class' => 'app\components\Formatter',
    ],
    // ...
],

Right now, we can use this method directly:

echo Yii::$app->formatter->asNumber(1534635.2, 3);

or as a new format for the GridView and DetailView widgets:

<?= \yii\grid\GridView::widget([
    'dataProvider' => $dataProvider,
    'columns' => [
        'id',
        'created_at:datetime',
        'title',
        'value:number',
    ],
]) ?>

You can extend any existing component in this way without overwriting its source code.

How it works…

To be able to attach a component to an application, it must extend the yii\base\Component class. Attaching is as simple as adding a new array to the components section of the configuration. There, the class value specifies the component's class, and all other values are set on the component through its corresponding public properties and setter methods.

The implementation itself is very straightforward: we wrap the http://api.fixer.io calls in a convenient API with validation and caching. We can access our class by its component name using Yii::$app; in our case, it will be Yii::$app->exchange.

See also

For official information about components, refer to http://www.yiiframework.com/doc-2.0/guide-concept-components.html. For the NumberHelper class sources, see the Helpers recipe.

Summary

In this article we learned about three kinds of Yii extensions—helpers, behaviors, and components. Helpers contain sets of helpful static methods for manipulating strings, files, arrays, and other subjects. Behaviors allow you to enhance the functionality of an existing component class without needing to change the class's inheritance. Components are the main building blocks of Yii applications. A component is an instance of yii\base\Component or one of its derived classes, and using a component mainly involves accessing its properties and raising/handling its events.

Resources for Article: Further resources on this subject: Creating an Extension in Yii 2 [article] Atmosfall – Managing Game Progress with Coroutines [article] Optimizing Games for Android [article]

Build a Universal JavaScript App, Part 2

John Oerter
30 Sep 2016
10 min read
In this post series, we will walk through how to write a universal (or isomorphic) JavaScript app. Part 1 covered what a universal JavaScript application is, why it is such an exciting concept, and the first two steps for creating our app, which are serving post data and adding React. In this second part of the series, we walk through steps 3-6, which are client-side routing with React Router, server rendering, data flow refactoring, and data loading of the app. Let's get started. Step 3: Client-side routing with React Router git checkout client-side-routing && npm install Now that we're pulling and displaying posts, let's add some navigation to individual pages for each post. To do this, we will turn our list of posts from step 2 (see the Part 1 post) into links that are always present on the page. Each post will live at http://localhost:3000/:postId/:postSlug. We can use React Router and a routes.js file to set up this structure: // components/routes.js import React from 'react' import { Route } from 'react-router' import App from './App' import Post from './Post' module.exports = ( <Route path="/" component={App}> <Route path="/:postId/:postName" component={Post} /> </Route> ) We've changed the render method in App.js to render links to posts instead of just <li> tags: // components/App.js import React from 'react' import { Link } from 'react-router' const allPostsUrl = '/api/post' class App extends React.Component { constructor(props) { super(props) this.state = { posts: [] } } ... render() { const posts = this.state.posts.map((post) => { const linkTo = `/${post.id}/${post.slug}`; return ( <li key={post.id}> <Link to={linkTo}>{post.title}</Link> </li> ) }) return ( <div> <h3>Posts</h3> <ul> {posts} </ul> {this.props.children} </div> ) } } export default App And we'll add a Post.js component to render each post's content: // components/Post.js import React from 'react' class Post extends React.Component { constructor(props) { super(props) this.state = { title: '', content: '' } } fetchPost(id) { const request = new XMLHttpRequest() request.open('GET', '/api/post/' + id, true) request.setRequestHeader('Content-type', 'application/json'); request.onload = () => { if (request.status === 200) { const response = JSON.parse(request.response) this.setState({ title: response.title, content: response.content }); } } request.send(); } componentDidMount() { this.fetchPost(this.props.params.postId) } componentWillReceiveProps(nextProps) { this.fetchPost(nextProps.params.postId) } render() { return ( <div> <h3>{this.state.title}</h3> <p>{this.state.content}</p> </div> ) } } export default Post The componentDidMount() and componentWillReceiveProps() methods are important because they let us know when we should fetch a post from the server. componentDidMount() will handle the first time the Post.js component is rendered, and then componentWillReceiveProps() will take over as React Router handles rerendering the component with different props. Run npm run build:client && node server.js again to build and run the app. You will now be able to go to http://localhost:3000 and navigate around to the different posts. However, if you try to refresh on a single post page, you will get something like Cannot GET /3/debugging-node-apps.
That's because our Express server doesn't know how to handle that kind of route. React Router is handling it completely on the front end. Onward to server rendering! Step 4: Server rendering git checkout server-rendering && npm install Okay, now we're finally getting to the good stuff. In this step, we'll use React Router to help our server take application requests and render the appropriate markup. To do that, we need to also build a server bundle like we build a client bundle, so that the server can understand JSX. Therefore, we've added the below webpack.server.config.js: // webpack.server.config.js var fs = require('fs') var path = require('path') module.exports = { entry: path.resolve(__dirname, 'server.js'), output: { filename: 'server.bundle.js' }, target: 'node', // keep node_module paths out of the bundle externals: fs.readdirSync(path.resolve(__dirname, 'node_modules')).concat([ 'react-dom/server', 'react/addons', ]).reduce(function (ext, mod) { ext[mod] = 'commonjs ' + mod return ext }, {}), node: { __filename: true, __dirname: true }, module: { loaders: [ { test: /.js$/, exclude: /node_modules/, loader: 'babel-loader?presets[]=es2015&presets[]=react' } ] } } We've also added the following code to server.js: // server.js import React from 'react' import { renderToString } from 'react-dom/server' import { match, RouterContext } from 'react-router' import routes from './components/routes' const app = express() ... app.get('*', (req, res) => { match({ routes: routes, location: req.url }, (err, redirect, props) => { if (err) { res.status(500).send(err.message) } else if (redirect) { res.redirect(redirect.pathname + redirect.search) } else if (props) { const appHtml = renderToString(<RouterContext {...props} />) res.send(renderPage(appHtml)) } else { res.status(404).send('Not Found') } }) }) function renderPage(appHtml) { return ` <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <title>Universal Blog</title> </head> <body> <div id="app">${appHtml}</div> <script src="/bundle.js"></script> </body> </html> ` } ... Using React Router's match function, the server can find the appropriate requested route, renderToString, and send the markup down the wire. Run npm start to build the client and server bundles and start the app. Fantastic right? We're not done yet. Even though the markup is being generated on the server, we're still fetching all the data client side. Go ahead and click through the posts with your dev tools open, and you'll see the requests. It would be far better to load the data while we're rendering the markup instead of having to request it separately on the client. Since server rendering and universal apps are still bleeding-edge, there aren't really any established best practices for data loading. If you're using some kind of Flux implementation, there may be some specific guidance. But for this use case, we will simply grab all the posts and feed them through our app. In order to this, we first need to do some refactoring on our current architecture. Step 5: Data Flow Refactor git checkout data-flow-refactor && npm install It's a little weird how each post page has to make a request to the server for its content, even though the App component already has all the posts in its state. A better solution would be to have an App simply pass the appropriate content down to the Post component. 
// components/routes.js import React from 'react' import { Route } from 'react-router' import App from './App' import Post from './Post' module.exports = ( <Route path="/" component={App}> <Route path="/:postId/:postName" /> </Route> ) In our routes.js, we've made the Post route a componentless route. It's still a child of the App route, but now has to completely rely on the App component for rendering. Below are the changes to App.js: // components/App.js ... render() {    const posts = this.state.posts.map((post) => {      const linkTo = `/${post.id}/${post.slug}`;      return (        <li key={post.id}>          <Link to={linkTo}>{post.title}</Link>        </li>      )    })    const { postId, postName } = this.props.params;    let postTitle, postContent    if (postId && postName) {      const post = this.state.posts.find(p => p.id == postId)      postTitle = post.title      postContent = post.content    }    return (      <div>        <h3>Posts</h3>        <ul>          {posts}        </ul>        {postTitle && postContent ? (          <Post title={postTitle} content={postContent} />        ) : (          <h1>Welcome to the Universal Blog!</h1>        )}      </div>    ) } } export default App If we are on a post page, then props.params.postId and props.params.postName will both be defined and we can use them to grab the desired post and pass the data on to the Post component to be rendered. If those properties are not defined, then we're on the home page and can simply render a greeting. Now, our Post.js component can be a simple stateless functional component that simply renders its properties. // components/Post.js import React from 'react' const Post = ({title, content}) => ( <div> <h3>{title}</h3> <p>{content}</p> </div>) export default Post With that refactoring complete, we're ready to implement data loading. Step 6: Data Loading git checkout data-loading && npm install For this final step, we just need to make two small changes in server.js and App.js: // server.js ... app.get('*', (req, res) => { match({ routes: routes, location: req.url }, (err, redirect, props) => { if (err) { res.status(500).send(err.message) } else if (redirect) { res.redirect(redirect.pathname + redirect.search) } else if (props) { const routerContextWithData = ( <RouterContext {...props} createElement={(Component, props) => { return <Component posts={posts} {...props} /> }} /> ) const appHtml = renderToString(routerContextWithData) res.send(renderPage(appHtml)) } else { res.status(404).send('Not Found') } }) }) ... // components/App.js import React from 'react' import Post from './Post' import { Link, IndexLink } from 'react-router' const allPostsUrl = '/api/post' class App extends React.Component { constructor(props) { super(props) this.state = { posts: props.posts || [] } } ... In server.js, we're changing how the RouterContext creates elements by overwriting its createElement function and passing in our data as additional props. These props will get passed to any component that is matched by the route, which in this case will be our App component. Then, when the App component is initialized, it sets its posts state property to what it got from props or an empty array. That's it! Run npm start one last time, and cruise through your app. You can even disable JavaScript, and the app will automatically degrade to requesting whole pages. Thanks for reading! About the author John Oerter is a software engineer from Omaha, Nebraska, USA. 
He has a passion for continuous improvement and learning in all areas of software development, including Docker, JavaScript, and C#. He blogs at here.

Learning How to Manage Records in Visualforce

Packt
29 Sep 2016
7 min read
In this article by Keir Bowden, author of the book Visualforce Development Cookbook - Second Edition, we will cover two recipes: styling fields as required and styling table columns as required. One of the common use cases for Visualforce pages is to simplify, streamline, or enhance the management of sObject records. In this article, we will use Visualforce to carry out some more advanced customization of the user interface—redrawing the form to change available picklist options, or capturing different information based on the user's selections. (For more resources related to this topic, see here.) Styling fields as required Standard Visualforce input components, such as <apex:inputText />, can take an optional required attribute. If set to true, the component will be decorated with a red bar to indicate that it is required, and form submission will fail if a value has not been supplied, as shown in the following screenshot: In the scenario where one or more inputs are required and there are additional validation rules, for example, when one of either the Email or Phone fields must be defined for a contact, this can lead to a drip-feed of error messages to the user. This is because the inputs make repeated unsuccessful attempts to submit the form, each time getting slightly further in the process. Now, we will create a Visualforce page that allows a user to create a contact record. The Last Name field is captured through a non-required input decorated with a red bar identical to that created for required inputs. When the user submits the form, the controller validates that the Last Name field is populated and that one of the Email or Phone fields is populated. If any of the validations fail, details of all errors are returned to the user. Getting ready This topic makes use of a controller extension, so this must be created before the Visualforce page. How to do it… Navigate to the Apex Classes setup page by clicking on Your Name | Setup | Develop | Apex Classes. Click on the New button. Paste the contents of the RequiredStylingExt.cls Apex class from the code downloaded into the Apex Class area. Click on the Save button. Navigate to the Visualforce setup page by clicking on Your Name | Setup | Develop | Visualforce Pages. Click on the New button. Enter RequiredStyling in the Label field. Accept the default RequiredStyling that is automatically generated for the Name field. Paste the contents of the RequiredStyling.page file from the code downloaded into the Visualforce Markup area and click on the Save button. Navigate to the Visualforce setup page by clicking on Your Name | Setup | Develop | Visualforce Pages. Locate the entry for the RequiredStyling page and click on the Security link. On the resulting page, select which profiles should have access and click on the Save button. How it works… Opening the following URL in your browser displays the RequiredStyling page to create a new contact record: https://<instance>/apex/RequiredStyling. Here, <instance> is the Salesforce instance specific to your organization, for example, na6.salesforce.com.
Clicking on the Save button without populating any of the fields results in the save failing with a number of errors: The Last Name field is constructed from a label and text input component rather than a standard input field, as an input field would enforce the required nature of the field and stop the submission of the form: <apex:pageBlockSectionItem > <apex:outputLabel value="Last Name"/> <apex:outputPanel id="detailrequiredpanel" layout="block" styleClass="requiredInput"> <apex:outputPanel layout="block" styleClass="requiredBlock" /> <apex:inputText value="{!Contact.LastName}"/> </apex:outputPanel> </apex:pageBlockSectionItem> The required styles are defined in the Visualforce page rather than relying on any existing Salesforce style classes to ensure that if Salesforce changes the names of its style classes, this does not break the page. The controller extension save action method carries out validation of all fields and attaches error messages to the page for all validation failures: if (String.IsBlank(cont.name)) { ApexPages.addMessage(new ApexPages.Message( ApexPages.Severity.ERROR, 'Please enter the contact name')); error=true; } if ( (String.IsBlank(cont.Email)) && (String.IsBlank(cont.Phone)) ) { ApexPages.addMessage(new ApexPages.Message( ApexPages.Severity.ERROR, 'Please supply the email address or phone number')); error=true; } Styling table columns as required When maintaining records that have required fields through a table, using regular input fields can end up with an unsightly collection of red bars striped across the table. Now, we will create a Visualforce page to allow a user to create a number of contact records via a table. The contact Last Name column header will be marked as required, rather than the individual inputs. Getting ready This topic makes use of a custom controller, so this will need to be created before the Visualforce page. How to do it… First, create the custom controller by navigating to the Apex Classes setup page by clicking on Your Name | Setup | Develop | Apex Classes. Click on the New button. Paste the contents of the RequiredColumnController.cls Apex class from the code downloaded into the Apex Class area. Click on the Save button. Next, create a Visualforce page by navigating to the Visualforce setup page by clicking on Your Name | Setup | Develop | Visualforce Pages. Click on the New button. Enter RequiredColumn in the Label field. Accept the default RequiredColumn that is automatically generated for the Name field. Paste the contents of the RequiredColumn.page file from the code downloaded into the Visualforce Markup area and click on the Save button. Navigate to the Visualforce setup page by clicking on Your Name | Setup | Develop | Visualforce Pages. Locate the entry for the RequiredColumn page and click on the Security link. On the resulting page, select which profiles should have access and click on the Save button. How it works… Opening the following URL in your browser displays the RequiredColumn page: https://<instance>/apex/RequiredColumn. Here, <instance> is the Salesforce instance specific to your organization, for example, na6.salesforce.com. The Last Name column header is styled in red, indicating that this is a required field. 
Attempting to create a record where only First Name is specified results in an error message being displayed against the Last Name input for the particular row: The Visualforce page sets the required attribute on the inputField components in the Last Name column to false, which removes the red bar from the component: <apex:column > <apex:facet name="header"> <apex:outputText styleclass="requiredHeader" value="{!$ObjectType.Contact.fields.LastName.label}" /> </apex:facet> <apex:inputField value="{!contact.LastName}" required="false"/> </apex:column> The Visualforce page custom controller Save method checks if any of the fields in the row are populated, and if this is the case, it checks that the last name is present. If the last name is missing from any record, an error is added. If an error is added to any record, the save does not complete: if ( (!String.IsBlank(cont.FirstName)) || (!String.IsBlank(cont.LastName)) ) { // a field is defined - check for last name if (String.IsBlank(cont.LastName)) { error=true; cont.LastName.addError('Please enter a value'); } String.IsBlank() is used as this carries out three checks at once: to check that the supplied string is not null, it is not empty, and it does not only contain whitespace. Summary Thus in this article we successfully mastered the techniques to style fields and table columns as per the custom needs. Resources for Article: Further resources on this subject: Custom Components in Visualforce [Article] Visualforce Development with Apex [Article] Using Spring JMX within Java Applications [Article]

Build a Universal JavaScript App, Part 1

John Oerter
27 Sep 2016
8 min read
In this two part post series, we will walk through how to write a universal (or isomorphic) JavaScript app. This first part will cover what a universal JavaScript application is, why it is such an exciting concept, and the first two steps for creating the app, which are serving post data and adding React. In Part 2 of this series we walk through steps 3-6, which are client-side routing with React Router, server rendering, data flow refactoring, and data loading of the app. What is a Universal JavaScript app? To put it simply, a universal JavaScript app is an application that can render itself on the client and the server. It combines the features of traditional server-side MVC frameworks (Rails, ASP.NET MVC, and Spring MVC), where markup is generated on the server and sent to the client, with the features of SPA frameworks (Angular, Ember, Backbone, and so on), where the server is only responsible for the data and the client generates markup. Universal or Isomorphic? There has been some debate in the JavaScript community over the terms "universal" and "isomorphic" to describe apps that can run on the client and server. I personally prefer the term "universal," simply because it's a more familiar word and makes the concept easier to understand. If you're interested in this discussion, you can read the below articles: Isomorphic JavaScript: The Future of Web Apps by Spike Brehm popularizes the term "isomorphic". Universal JavaScript by Michael Jackson puts forth the term "universal" as a better alternative. Is "Isomorphic JavaScript" a good term? by Dr. Axel Rauschmayer says that maybe certain applications should be called isomorphic and others should be called universal. What are the advantages? Switching between one language on the server and JavaScript on the client can harm your productivity. JavaScript is a unique language that, for better or worse, behaves in a very different way from most server-side languages. Writing universal JavaScript apps allows you to simplify your workflow and immerse yourself in JavaScript. If you're writing a web application today, chances are that you're writing a lot of JavaScript anyway. Why not dive in? Node continues to improve with better performance and more features thanks to V8 and it's well run community, and npm is a fantastic package manager with thousands of quality packages available. There is tremendous brain power being devoted to JavaScript right now. Take advantage of it! On top of that, maintainability of a universal app is better because it allows more code reuse. How many times have you implemented the same validation logic in your server and front end code? Or rewritten utility functions? With some careful architecture and decoupling, you can write and test code once that will work on the server and client. Performance SPAs are great because they allow the user to navigate applications without waiting for full pages to be sent down from the server. The cost, however, is longer wait times for the application to be initialized on the first load because the browser needs to receive all the assets needed to run the full app up front. What if there are rarely visited areas in your app? Why should every client have to wait for the logic and assets needed for those areas? This was the problem Netflix solved using universal JavaScript. MVC apps have the inverse problem. Each page only has the markup, assets, and JavaScript needed for that page, but the trade-off is round trips to the server for every page. 
SEO Another disadvantage of SPAs is their weakness on SEO. Although web crawlers are getting better at understanding JavaScript, a site generated on the server will always be superior. With universal JavaScript, any public-facing page on your site can be easily requested and indexed by search engines. Building an Example Universal JavaScript App Now that we've gained some background on universal JavaScript apps, let's walk through building a very simple blog website as an example. Here are the tools we'll use: Express React React Router Babel Webpack I've chosen these tools because of their popularity and ease of accomplishing our task. I won't be covering how to use Redux or other Flux implementations because, while useful in a production application, they are not necessary for demoing how to create a universal app. To keep things simple, we will forgo a database and just store our data in a flat file. We'll also keep the Webpack shenanigans to a minimum and only do what is necessary to transpile and bundle our code. You can grab the code for this walkthrough at here, and follow along. There are branches for each step along the way. Be sure to run npm install for each step. Let's get started! Step 1: Serving Post Data git checkout serving-post-data && npm install We're going to start off slow, and simply set up the data we want to serve. Our posts are stored in the posts.js file, and we just have a simple Express server in server.js that takes requests at /api/post/{id}. Snippets of these files are below. // posts.js module.exports = [ ... { id: 2, title: 'Expert Node', slug: 'expert-node', content: 'Street art 8-bit photo booth, aesthetic kickstarter organic raw denim hoodie non kale chips pour-over occaecat. Banjo non ea, enim assumenda forage excepteur typewriter dolore ullamco. Pickled meggings dreamcatcher ugh, church-key brooklyn portland freegan normcore meditation tacos aute chicharrones skateboard polaroid. Delectus affogato assumenda heirloom sed, do squid aute voluptate sartorial. Roof party drinking vinegar franzen mixtape meditation asymmetrical. Yuccie flexitarian est accusamus, yr 3 wolf moon aliqua mumblecore waistcoat freegan shabby chic. Irure 90's commodo, letterpress nostrud echo park cray assumenda stumptown lumbersexual magna microdosing slow-carb dreamcatcher bicycle rights. Scenester sartorial duis, pop-up etsy sed man bun art party bicycle rights delectus fixie enim. Master cleanse esse exercitation, twee pariatur venmo eu sed ethical. Plaid freegan chambray, man braid aesthetic swag exercitation godard schlitz. Esse placeat VHS knausgaard fashion axe cred. In cray selvage, waistcoat 8-bit excepteur duis schlitz. Before they sold out bicycle rights fixie excepteur, drinking vinegar normcore laboris 90's cliche aliqua 8-bit hoodie post-ironic. Seitan tattooed thundercats, kinfolk consectetur etsy veniam tofu enim pour-over narwhal hammock plaid.' }, ... ] // server.js ... app.get('/api/post/:id?', (req, res) => { const id = req.params.id if (!id) { res.send(posts) } else { const post = posts.find(p => p.id == id); if (post) res.send(post) else res.status(404).send('Not Found') } }) ... You can start the server by running node server.js, and then request all posts by going to localhost:3000/api/post or a single post by id such as localhost:3000/api/post/0. Great! Let's move on. Step 2: Add React git checkout add-react && npm install Now that we have the data exposed via a simple web service, let's use React to render a list of posts on the page. 
Before we get there, however, we need to set up webpack to transpile and bundle our code. Below is our simple webpack.config.js to do this: // webpack.config.js var webpack = require('webpack') module.exports = { entry: './index.js', output: { path: 'public', filename: 'bundle.js' }, module: { loaders: [ { test: /.js$/, exclude: /node_modules/, loader: 'babel-loader?presets[]=es2015&presets[]=react' } ] } } All we're doing is bundling our code with index.js as an entry point and writing the bundle to a public folder that will be served by Express. Speaking of index.js, here it is: // index.js import React from 'react' import { render } from 'react-dom' import App from './components/app' render ( <App />, document.getElementById('app') ) And finally, we have App.js: // components/App.js import React from 'react' const allPostsUrl = '/api/post' class App extends React.Component { constructor(props) { super(props) this.state = { posts: [] } } componentDidMount() { const request = new XMLHttpRequest() request.open('GET', allPostsUrl, true) request.setRequestHeader('Content-type', 'application/json'); request.onload = () => { if (request.status === 200) { this.setState({ posts: JSON.parse(request.response) }); } } request.send(); } render() { const posts = this.state.posts.map((post) => { return <li key={post.id}>{post.title}</li> }) return ( <div> <h3>Posts</h3> <ul> {posts} </ul> </div> ) } } export default App Once the App component is mounted, it sends a request for the posts, and renders them as a list. To see this step in action, build the webpack bundle first with npm run build:client. Then, you can run node server.js just like before. http://localhost:3000 will now display a list of our posts. Conclusion Now that React has been added, take a look at Part 2 where we cover client-side routing with React Router, server rendering, data flow refactoring, and data loading of the app. About the author John Oerter is a software engineer from Omaha, Nebraska, USA. He has a passion for continuous improvement and learning in all areas of software development, including Docker, JavaScript, and C#. He blogs at here.

Using model serializers to eliminate duplicate code

Packt
23 Sep 2016
12 min read
In this article by Gastón C. Hillar, author of, Building RESTful Python Web Services, we will cover the use of model serializers to eliminate duplicate code and use of default parsing and rendering options. (For more resources related to this topic, see here.) Using model serializers to eliminate duplicate code The GameSerializer class declares many attributes with the same names that we used in the Game model and repeats information such as the types and the max_length values. The GameSerializer class is a subclass of the rest_framework.serializers.Serializer, it declares attributes that we manually mapped to the appropriate types, and overrides the create and update methods. Now, we will create a new version of the GameSerializer class that will inherit from the rest_framework.serializers.ModelSerializer class. The ModelSerializer class automatically populates both a set of default fields and a set of default validators. In addition, the class provides default implementations for the create and update methods. In case you have any experience with Django Web Framework, you will notice that the Serializer and ModelSerializer classes are similar to the Form and ModelForm classes. Now, go to the gamesapi/games folder folder and open the serializers.py file. Replace the code in this file with the following code that declares the new version of the GameSerializer class. The code file for the sample is included in the restful_python_chapter_02_01 folder. from rest_framework import serializers from games.models import Game class GameSerializer(serializers.ModelSerializer): class Meta: model = Game fields = ('id', 'name', 'release_date', 'game_category', 'played') The new GameSerializer class declares a Meta inner class that declares two attributes: model and fields. The model attribute specifies the model related to the serializer, that is, the Game class. The fields attribute specifies a tuple of string whose values indicate the field names that we want to include in the serialization from the related model. There is no need to override either the create or update methods because the generic behavior will be enough in this case. The ModelSerializer superclass provides implementations for both methods. We have reduced boilerplate code that we didn’t require in the GameSerializer class. We just needed to specify the desired set of fields in a tuple. Now, the types related to the game fields is included only in the Game class. Press Ctrl + C to quit Django’s development server and execute the following command to start it again. python manage.py runserver Using the default parsing and rendering options and move beyond JSON The APIView class specifies default settings for each view that we can override by specifying appropriate values in the gamesapi/settings.py file or by overriding the class attributes in subclasses. As previously explained the usage of the APIView class under the hoods makes the decorator apply these default settings. Thus, whenever we use the decorator, the default parser classes and the default renderer classes will be associated with the function views. By default, the value for the DEFAULT_PARSER_CLASSES is the following tuple of classes: ( 'rest_framework.parsers.JSONParser', 'rest_framework.parsers.FormParser', 'rest_framework.parsers.MultiPartParser' ) When we use the decorator, the API will be able to handle any of the following content types through the appropriate parsers when accessing the request.data attribute. 
application/json application/x-www-form-urlencoded multipart/form-data When we access the request.data attribute in the functions, Django REST Framework examines the value for the Content-Type header in the incoming request and determines the appropriate parser to parse the request content. If we use the previously explained default values, Django REST Framework will be able to parse the previously listed content types. However, it is extremely important that the request specifies the appropriate value in the Content-Type header. We have to remove the usage of the rest_framework.parsers.JSONParser class in the functions to make it possible to be able to work with all the configured parsers and stop working with a parser that only works with JSON. The game_list function executes the following two lines when request.method is equal to 'POST': game_data = JSONParser().parse(request) game_serializer = GameSerializer(data=game_data) We will remove the first line that uses the JSONParser and we will pass request.data as the data argument for the GameSerializer. The following line will replace the previous lines: game_serializer = GameSerializer(data=request.data) The game_detail function executes the following two lines when request.method is equal to 'PUT': game_data = JSONParser().parse(request) game_serializer = GameSerializer(game, data=game_data) We will make the same edits done for the code in the game_list function. We will remove the first line that uses the JSONParser and we will pass request.data as the data argument for the GameSerializer. The following line will replace the previous lines: game_serializer = GameSerializer(game, data=request.data) By default, the value for the DEFAULT_RENDERER_CLASSES is the following tuple of classes: ( 'rest_framework.renderers.JSONRenderer', 'rest_framework.renderers.BrowsableAPIRenderer', ) When we use the decorator, the API will be able to render any of the following content types in the response through the appropriate renderers when working with the rest_framework.response.Response object. application/json text/html By default, the value for the DEFAULT_CONTENT_NEGOTIATION_CLASS is the rest_framework.negotiation.DefaultContentNegotiation class. When we use the decorator, the API will use this content negotiation class to select the appropriate renderer for the response based on the incoming request. This way, when a request specifies that it will accept text/html, the content negotiation class selects the rest_framework.renderers.BrowsableAPIRenderer to render the response and generate text/html instead of application/json. We have to replace the usages of both the JSONResponse and HttpResponse classes in the functions with the rest_framework.response.Response class. The Response class uses the previously explained content negotiation features, renders the received data into the appropriate content type and returns it to the client. Now, go to the gamesapi/games folder folder and open the views.py file. Replace the code in this file with the following code that removes the JSONResponse class, uses the @api_view decorator for the functions and the rest_framework.response.Response class. The modified lines are highlighted. The code file for the sample is included in the restful_python_chapter_02_02 folder. 
from rest_framework.parsers import JSONParser from rest_framework import status from rest_framework.decorators import api_view from rest_framework.response import Response from games.models import Game from games.serializers import GameSerializer @api_view(['GET', 'POST']) def game_list(request): if request.method == 'GET': games = Game.objects.all() games_serializer = GameSerializer(games, many=True) return Response(games_serializer.data) elif request.method == 'POST': game_serializer = GameSerializer(data=request.data) if game_serializer.is_valid(): game_serializer.save() return Response(game_serializer.data, status=status.HTTP_201_CREATED) return Response(game_serializer.errors, status=status.HTTP_400_BAD_REQUEST) @api_view(['GET', 'PUT', 'POST']) def game_detail(request, pk): try: game = Game.objects.get(pk=pk) except Game.DoesNotExist: return Response(status=status.HTTP_404_NOT_FOUND) if request.method == 'GET': game_serializer = GameSerializer(game) return Response(game_serializer.data) elif request.method == 'PUT': game_serializer = GameSerializer(game, data=request.data) if game_serializer.is_valid(): game_serializer.save() return Response(game_serializer.data) return Response(game_serializer.errors, status=status.HTTP_400_BAD_REQUEST) elif request.method == 'DELETE': game.delete() return Response(status=status.HTTP_204_NO_CONTENT) After you save the previous changes, run the following command: http OPTIONS :8000/games/ The following is the equivalent curl command: curl -iX OPTIONS :8000/games/ The previous command will compose and send the following HTTP request: OPTIONS http://localhost:8000/games/. The request will match and run the views.game_list function, that is, the game_list function declared within the games/views.py file. We added the @api_view decorator to this function, and therefore, it is capable of determining the supported HTTP verbs, parsing and rendering capabilities. The following lines show the output: HTTP/1.0 200 OK Allow: GET, POST, OPTIONS, PUT Content-Type: application/json Date: Thu, 09 Jun 2016 21:35:58 GMT Server: WSGIServer/0.2 CPython/3.5.1 Vary: Accept, Cookie X-Frame-Options: SAMEORIGIN { "description": "", "name": "Game Detail", "parses": [ "application/json", "application/x-www-form-urlencoded", "multipart/form-data" ], "renders": [ "application/json", "text/html" ] } The response header includes an Allow key with a comma-separated list of HTTP verbs supported by the resource collection as its value: GET, POST, OPTIONS. As our request didn’t specify the allowed content type, the function rendered the response with the default application/json content type. The response body specifies the Content-type that the resource collection parses and the Content-type that it renders. Run the following command to compose and send and HTTP request with the OPTIONS verb for a game resource. Don’t forget to replace 3 with a primary key value of an existing game in your configuration: http OPTIONS :8000/games/3/ The following is the equivalent curl command: curl -iX OPTIONS :8000/games/3/ The previous command will compose and send the following HTTP request: OPTIONS http://localhost:8000/games/3/. The request will match and run the views.game_detail function, that is, the game_detail function declared within the games/views.py file. We also added the @api_view decorator to this function, and therefore, it is capable of determining the supported HTTP verbs, parsing and rendering capabilities. 
The following lines show the output: HTTP/1.0 200 OK Allow: GET, POST, OPTIONS Content-Type: application/json Date: Thu, 09 Jun 2016 20:24:31 GMT Server: WSGIServer/0.2 CPython/3.5.1 Vary: Accept, Cookie X-Frame-Options: SAMEORIGIN { "description": "", "name": "Game List", "parses": [ "application/json", "application/x-www-form-urlencoded", "multipart/form-data" ], "renders": [ "application/json", "text/html" ] } The response header includes an Allow key with comma-separated list of HTTP verbs supported by the resource as its value: GET, POST, OPTIONS, PUT. The response body specifies the content-type that the resource parses and the content-type that it renders, with the same contents received in the previous OPTIONS request applied to a resource collection, that is, to a games collection. When we composed and sent POST and PUT commands, we had to use the use the -H "Content-Type: application/json" option to indicate curl to send the data specified after the -d option as application/json instead of the default application/x-www-form-urlencoded. Now, in addition to application/json, our API is capable of parsing application/x-www-form-urlencoded and multipart/form-data data specified in the POST and PUT requests. Thus, we can compose and send a POST command that sends the data as application/x-www-form-urlencoded with the changes made to our API. We will compose and send an HTTP request to create a new game. In this case, we will use the -f option for HTTPie that serializes data items from the command line as form fields and sets the Content-Type header key to the application/x-www-form-urlencoded value. http -f POST :8000/games/ name='Toy Story 4' game_category='3D RPG' played=false release_date='2016-05-18T03:02:00.776594Z' The following is the equivalent curl command. Notice that we don’t use the -H option and curl will send the data in the default application/x-www-form-urlencoded: curl -iX POST -d '{"name":"Toy Story 4", "game_category":"3D RPG", "played": "false", "release_date": "2016-05-18T03:02:00.776594Z"}' :8000/games/ The previous commands will compose and send the following HTTP request: POST http://localhost:8000/games/ with the Content-Type header key set to the application/x-www-form-urlencoded value and the following data: name=Toy+Story+4&game_category=3D+RPG&played=false&release_date=2016-05-18T03%3A02%3A00.776594Z The request specifies /games/, and therefore, it will match '^games/$' and run the views.game_list function, that is, the updated game_detail function declared within the games/views.py file. As the HTTP verb for the request is POST, the request.method property is equal to 'POST', and therefore, the function will execute the code that creates a GameSerializer instance and passes request.data as the data argument for its creation. The rest_framework.parsers.FormParser class will parse the data received in the request, the code creates a new Game and, if the data is valid, it saves the new Game. If the new Game was successfully persisted in the database, the function returns an HTTP 201 Created status code and the recently persisted Game serialized to JSON in the response body. 
The following lines show an example response for the HTTP request, with the new Game object in the JSON response: HTTP/1.0 201 Created Allow: OPTIONS, POST, GET Content-Type: application/json Date: Fri, 10 Jun 2016 20:38:40 GMT Server: WSGIServer/0.2 CPython/3.5.1 Vary: Accept, Cookie X-Frame-Options: SAMEORIGIN { "game_category": "3D RPG", "id": 20, "name": "Toy Story 4", "played": false, "release_date": "2016-05-18T03:02:00.776594Z" } After the changes we made in the code, we can run the following command to see what happens when we compose and send an HTTP request with an HTTP verb that is not supported: http PUT :8000/games/ The following is the equivalent curl command: curl -iX PUT :8000/games/ The previous command will compose and send the following HTTP request: PUT http://localhost:8000/games/. The request will match and try to run the views.game_list function, that is, the game_list function declared within the games/views.py file. The @api_view decorator we added to this function doesn’t include 'PUT' in the string list with the allowed HTTP verbs, and therefore, the default behavior returns a 405 Method Not Allowed status code. The following lines show the output with the response from the previous request. A JSON content provides a detail key with a string value that indicates the PUT method is not allowed. HTTP/1.0 405 Method Not Allowed Allow: GET, OPTIONS, POST Content-Type: application/json Date: Sat, 11 Jun 2016 00:49:30 GMT Server: WSGIServer/0.2 CPython/3.5.1 Vary: Accept, Cookie X-Frame-Options: SAMEORIGIN { "detail": "Method "PUT" not allowed." } Summary This article covers the use of model serializers and how it is effective in removing duplicate code. Resources for Article: Further resources on this subject: Making History with Event Sourcing [article] Implementing a WCF Service in the Real World [article] WCF – Windows Communication Foundation [article]

How to Build and Deploy a Node App with Docker

John Oerter
20 Sep 2016
7 min read
How many times have you deployed your app that was working perfectly in your local environment to production, only to see it break? Whether it was directly related to the bug or feature you were working on, or another random issue entirely, this happens all too often for most developers. Errors like this not only slow you down, but they're also embarrassing. Why does this happen? Usually, it's because your development environment on your local machine is different from the production environment you're deploying to. The tenth factor of the Twelve-Factor App is Dev/prod parity. This means that your development, staging, and production environments should be as similar as possible. The authors of the Twelve-Factor App spell out three "gaps" that can be present. They are: The time gap: A developer may work on code that takes days, weeks, or even months to go into production. The personnel gap: Developers write code, ops engineers deploy it. The tools gap: Developers may be using a stack like Nginx, SQLite, and OS X, while the production deployment uses Apache, MySQL, and Linux. (Source) In this post, we will mostly focus on the tools gap, and how to bridge that gap in a Node application with Docker. The Tools Gap In the Node ecosystem, the tools gap usually manifests itself either in differences in Node and npm versions, or differences in package dependency versions. If a package author publishes a breaking change in one of your dependencies or your dependencies' dependencies, it is entirely possible that your app will break on the next deployment (assuming you reinstall dependencies with npm install on every deployment), while it runs perfectly on your local machine. Although you can work around this issue using tools like npm shrinkwrap, adding Docker to the mix will streamline your deployment life cycle and minimize broken deployments to production. Why Docker? Docker is unique because it can be used the same way in development and production. When you enable the architecture of your app to run inside containers, you can easily scale out and create small containers that can be composed together to make one awesome system. Then, you can mimic this architecture in development so you never have to guess how your app will behave in production. In regards to the time gap and the personnel gap, Docker makes it easier for developers to automate deployments, thereby decreasing time to production and making it easier for full-stack teams to own deployments. Tools and Concepts When developing inside Docker containers, the two most important concepts are docker-compose and volumes. docker-compose helps define multi-container environments and gives you the ability to run them with one command. Here are some of the most often used docker-compose commands: docker-compose build: Builds images for services defined in docker-compose.yml docker-compose up: Creates and starts services. This is the same as running docker-compose create && docker-compose start docker-compose run: Runs a one-off command inside a container Volumes allow you to mount files from the host machine into the container. When the files on your host machine change, they change inside the container as well. This is important so that we don't have to constantly rebuild containers during development every time we make a change. You can also use a tool like nodemon to automatically restart the node app on changes. Let's walk through some tips and tricks for developing Node apps inside Docker containers.
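As a quick illustration of the nodemon idea mentioned above, you could add a development-only override file that swaps the container command for nodemon. This is only a sketch and assumes nodemon is installed as a devDependency; the web service name and file layout refer to the docker-compose.yml and Dockerfile that we set up in the next sections:

# docker-compose.dev.yml (sketch, assumes nodemon is a devDependency)
web:
  command: ./node_modules/.bin/nodemon index.js

# Run it by merging the files (the -f merge technique is covered in the
# Production section at the end of this post):
# docker-compose -f docker-compose.yml -f docker-compose.dev.yml up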
Set up Dockerfile and docker-compose.yml

When you start a new project with Docker, you'll first want to define a barebones Dockerfile and docker-compose.yml to get you started. Here's an example Dockerfile:

    FROM node:6.2.1

    RUN useradd --user-group --create-home --shell /bin/false app-user

    ENV HOME=/home/app-user

    USER app-user
    WORKDIR $HOME/app

This Dockerfile displays two best practices:

- Favor exact version tags over floating tags such as latest. Node releases come often these days, and you don't want to implicitly upgrade when building your container on another machine. By specifying a version such as 6.2.1, you ensure that anyone who builds the image will always be working from the same node version.
- Create a new user to run the app inside the container. Without this step, everything would run under root in the container. You certainly wouldn't do that on a physical machine, so don't do it in Docker containers either.

Here's an example starter docker-compose.yml:

    web:
      build: .
      volumes:
        - .:/home/app-user/app

Pretty simple, right? Here we are telling Docker to build the web service based on our Dockerfile and create a volume from our current host directory to /home/app-user/app inside the container. This simple setup lets you build the container with docker-compose build and then run bash inside it with docker-compose run --rm web /bin/bash. Now, it's essentially the same as if you were SSH'd into a remote server or working off a VM, except that any file you create inside the container will be on your host machine and vice versa.

With that in mind, you can bootstrap your Node app from inside your container using npm init -y and npm shrinkwrap. Then, you can install any modules you need, such as Express.

Install node modules on build

With that done, we need to update our Dockerfile to install dependencies from npm when the image is built. Here is the updated Dockerfile:

    FROM node:6.2.1

    RUN useradd --user-group --create-home --shell /bin/false app-user

    ENV HOME=/home/app-user

    COPY package.json npm-shrinkwrap.json $HOME/app/
    RUN chown -R app-user:app-user $HOME/*

    USER app-user
    WORKDIR $HOME/app
    RUN npm install

Notice that we had to change the ownership of the copied files to app-user. This is because files copied into a container are automatically owned by root.

Add a volume for the node_modules directory

We also need to make an update to our docker-compose.yml to make sure that our modules are installed inside the container properly:

    web:
      build: .
      volumes:
        - .:/home/app-user/app
        - /home/app-user/app/node_modules

Without adding a data volume to /home/app-user/app/node_modules, the node_modules wouldn't exist at runtime in the container because our host directory, which won't contain the node_modules directory, would be mounted and hide the node_modules directory that was created when the container was built. For more information, see this Stack Overflow post.

Running your app

Once you've got an entry point to your app ready to go, simply add it as a CMD in your Dockerfile:

    CMD ["node", "index.js"]

This will automatically start your app on docker-compose up.

Running tests inside your container is easy as well:

    docker-compose run --rm web npm test

You could easily hook this into CI.

Production

Now going to production with your Docker-powered Node app is a breeze! Just use docker-compose again. You will probably want to define another docker-compose.yml that is especially written for production use. This means removing volumes, binding to different ports, setting NODE_ENV=production, and so on.
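What exactly goes into such a production-specific file will vary from app to app; the following is only a rough, illustrative sketch of the kind of override described above, with the port mapping and environment value chosen purely for demonstration:

    # docker-compose.production.yml (illustrative sketch)
    web:
      build: .
      ports:
        - '80:3000'
      environment:
        - NODE_ENV=production

Note that no host volume is mounted here: in production, the code that was baked into the image at build time is what should run, rather than whatever happens to be sitting on the host.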
Once you have a production config file, you can tell docker-compose to use it, like so:

    docker-compose -f docker-compose.yml -f docker-compose.production.yml up

The -f flag lets you specify a list of files that are merged in the order specified.

Here is a complete Dockerfile and docker-compose.yml for reference:

    # Dockerfile
    FROM node:6.2.1

    RUN useradd --user-group --create-home --shell /bin/false app-user

    ENV HOME=/home/app-user

    COPY package.json npm-shrinkwrap.json $HOME/app/
    RUN chown -R app-user:app-user $HOME/*

    USER app-user
    WORKDIR $HOME/app
    RUN npm install

    CMD ["node", "index.js"]

    # docker-compose.yml
    web:
      build: .
      ports:
        - '3000:3000'
      volumes:
        - .:/home/app-user/app
        - /home/app-user/app/node_modules

About the author

John Oerter is a software engineer from Omaha, Nebraska, USA. He has a passion for continuous improvement and learning in all areas of software development, including Docker, JavaScript, and C#. He blogs here.

Hello TDD!

Packt
14 Sep 2016
6 min read
In this article by Gaurav Sood, the author of the book Scala Test-Driven Development, we cover the basics of Test-Driven Development. We will explore:

- What is Test-Driven Development?
- What is the need for Test-Driven Development?
- Brief introduction to Scala and SBT

(For more resources related to this topic, see here.)

What is Test-Driven Development?

Test-Driven Development, or TDD as it is commonly referred to, is the practice of writing your tests before writing any application code. This consists of the following iterative steps: write a failing test, write just enough application code to make the test pass, refactor, and start again with the next test. This process is also referred to as Red-Green-Refactor-Repeat. TDD became more prevalent with the use of the agile software development process, though it can be used just as easily with any of Agile's predecessors, such as Waterfall. Though TDD is not specifically mentioned in the agile manifesto (http://agilemanifesto.org), it has become a standard methodology used with agile. Saying this, you can still use agile without using TDD.

Why TDD?

The need for TDD arises from the fact that there can be constant changes to the application code. This becomes more of a problem when we are using an agile development process, as it is inherently an iterative development process. Here are some of the advantages which underpin the need for TDD:

- Code quality: Tests by themselves make the programmer more confident of their code. Programmers can be sure of the syntactic and semantic correctness of their code.
- Evolving architecture: A purely test-driven application code gives way to an evolving architecture. This means that we do not have to predefine our architectural boundaries and design patterns. As the application grows, so does the architecture. This results in an application that is flexible towards future changes.
- Avoids over-engineering: Tests that are written before the application code define and document the boundaries. These tests also document the requirements and application code. Agile purists normally regard comments inside the code as a smell. According to them, your tests should document your code. Since all the boundaries are predefined in the tests, it is hard to write application code which breaches these boundaries. This, however, assumes that TDD is followed religiously.
- Paradigm shift: When I had started with TDD, I noticed that the first question I asked myself after looking at the problem was, "How can I solve it?" This, however, is counterproductive. TDD forces the programmer to think about the testability of the solution before its implementation. To understand how to test a problem would mean a better understanding of the problem and its edge cases. This in turn can result in refinement of the requirements or discovery of some new requirements. Now it has become impossible for me not to think about the testability of the problem before the solution. Now the first question I ask myself is, "How can I test it?"
- Maintainable code: I have always found it easier to work on an application that has historically been test-driven rather than on one that is not. Why? Only because when I make a change to the existing code, the existing tests make sure that I do not break any existing functionality. This results in highly maintainable code, where many programmers can collaborate simultaneously.
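To make the Red-Green-Refactor cycle concrete, here is a minimal sketch using ScalaTest. The Calculator object and the test wording are illustrative assumptions, not examples taken from the book:

    import org.scalatest.FlatSpec

    // Red: this spec is written first; Calculator does not exist yet,
    // so the build fails - that is the "red" step.
    class CalculatorSpec extends FlatSpec {
      "A Calculator" should "add two numbers" in {
        assert(Calculator.add(2, 3) == 5)
      }
    }

    // Green: write just enough production code to make the test pass.
    object Calculator {
      def add(a: Int, b: Int): Int = a + b
    }

    // Refactor: with the test passing, the implementation can be cleaned up
    // safely, and the cycle repeats for the next requirement.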
Brief introduction to Scala and SBT

Let us look at Scala and SBT briefly. It is assumed that the reader is familiar with Scala, and therefore we will not go into the depth of it.

What is Scala

Scala is a general-purpose programming language. Scala is an acronym for Scalable Language. This reflects the vision of its creators of making Scala a language that grows with the programmer's experience of it. The fact that Scala and Java objects can be freely mixed makes the transition from Java to Scala quite easy. Scala is also a full-blown functional language. Unlike Haskell, which is a pure functional language, Scala allows interoperability with Java and support for object-oriented programming. Scala also allows the use of both pure and impure functions. Impure functions have side effects like mutation, I/O, and exceptions. The purist approach to Scala programming encourages the use of pure functions only. Scala is a type-safe JVM language that incorporates both object-oriented and functional programming into an extremely concise, logical, and extraordinarily powerful language.

Why Scala?

Here are some advantages of using Scala:

- A functional solution to a problem is always better: This is my personal view and open for contention. Elimination of mutation from application code allows the application to be run in parallel across hosts and cores without any deadlocks.
- Better concurrency model: Scala has an actor model that is better than Java's model of locks on threads.
- Concise code: Scala code is more concise than its more verbose cousin, Java.
- Type safety/static typing: Scala does type checking at compile time.
- Pattern matching: Case statements in Scala are super powerful.
- Inheritance: Mixin traits are great, and they definitely reduce code repetition.

There are other features of Scala, like closures and monads, which will need more understanding of functional language concepts to learn.

Scala Build Tool

Scala Build Tool (SBT) is a build tool that allows compiling, running, testing, packaging, and deployment of your code. SBT is mostly used with Scala projects, but it can just as easily be used for projects in other languages. Here, we will be using SBT as a build tool for managing our project and running our tests. SBT is written in Scala and can use many of the features of the Scala language. Build definitions for SBT are also written in Scala. These definitions are both flexible and powerful. SBT also allows the use of plugins and dependency management. If you have used a build tool like Maven or Gradle in any of your previous incarnations, you will find SBT a breeze.

Why SBT?

- Better dependency management
- Ivy-based dependency management
- Only-update-on-request model
- Can launch REPL in project context
- Continuous command execution
- Scala language support for creating tasks

Resources for learning Scala

Here are a few of the resources for learning Scala:

- http://www.scala-lang.org/
- https://www.coursera.org/course/progfun
- https://www.manning.com/books/functional-programming-in-scala
- http://www.tutorialspoint.com/scala/index.htm

Resources for SBT

Here are a few of the resources for learning SBT:

- http://www.scala-sbt.org/
- https://twitter.github.io/scala_school/sbt.html

Summary

In this article we learned what TDD is and why to use it. We also learned about Scala and SBT.

Resources for Article:

Further resources on this subject:

- Overview of TDD [article]
- Understanding TDD [article]
- Android Application Testing: TDD and the Temperature Converter [article]

Gearing Up for Bootstrap 4

Packt
12 Sep 2016
28 min read
In this article by Benjamin Jakobus and Jason Marah, the authors of the book Mastering Bootstrap 4, we will be discussing the key points about Bootstrap as a web development framework that helps developers build web interfaces. Originally conceived at Twitter in 2011 by Mark Otto and Jacob Thornton, the framework is now open source and has grown to be one of the most popular web development frameworks to date. Being freely available for private, educational, and commercial use meant that Bootstrap quickly grew in popularity. Today, thousands of organizations rely on Bootstrap, including NASA, Walmart, and Bloomberg. According to BuiltWith.com, over 10% of the world's top 1 million websites are built using Bootstrap (http://trends.builtwith.com/docinfo/Twitter-Bootstrap). As such, knowing how to use Bootstrap will be an important skill and serve as a powerful addition to any web developer's tool belt.

(For more resources related to this topic, see here.)

The framework itself consists of a mixture of JavaScript and CSS, and provides developers with all the essential components required to develop a fully functioning web user interface. Over the course of the book, we will be introducing you to all of the most essential features that Bootstrap has to offer by teaching you how to use the framework to build a complete website from scratch. As CSS and HTML alone are already the subject of entire books in themselves, we assume that you, the reader, have at least a basic knowledge of HTML, CSS, and JavaScript.

We begin this article by introducing you to our demo website—MyPhoto. This website will accompany us throughout the book, and serve as a practical point of reference. Therefore, all lessons learned will be taught within the context of MyPhoto. We will then discuss the Bootstrap framework, listing its features and contrasting the current release to the last major release (Bootstrap 3). Last but not least, this article will help you set up your development environment. To ensure equal footing, we will guide you towards installing the right build tools, and precisely detail the various ways in which you can integrate Bootstrap into a project.

To summarize, this article will do the following:

- Introduce you to what exactly we will be doing
- Explain what is new in the latest version of Bootstrap, and how the latest version differs from the previous major release
- Show you how to include Bootstrap in our web project

Introducing our demo project

The book will teach you how to build a complete Bootstrap website from scratch. We will build and improve the website's various sections as we progress through the book. The concept behind our website is simple: to develop a landing page for photographers. Using this landing page, (hypothetical) users will be able to exhibit their wares and services.

While building our website, we will be making use of the same third-party tools and libraries that you would if you were working as a professional software developer. We chose these tools and plugins specifically because of their widespread use. Learning how to use and integrate them will save you a lot of work when developing websites in the future. Specifically, the tools that we will use to assist us throughout the development of MyPhoto are Bower, node package manager (npm), and Grunt.

From a development perspective, the construction of MyPhoto will teach you how to use and apply all of the essential user interface concepts and components required to build a fully functioning website.
Among other things, you will learn how to do the following:

- Use the Bootstrap grid system to structure the information presented on your website.
- Create a fixed, branded, navigation bar with animated scroll effects.
- Use an image carousel for displaying different photographs, implemented using Bootstrap's carousel.js and jumbotron (jumbotron is a design principle for displaying important content). It should be noted that carousels are becoming an increasingly unpopular design choice; however, they are still heavily used and are an important feature of Bootstrap. As such, we do not argue for or against the use of carousels, as their effectiveness depends very much on how they are used, rather than on whether they are used.
- Build custom tabs that allow users to navigate across different contents.
- Use and apply Bootstrap's modal dialogs.
- Apply a fixed page footer.
- Create forms for data entry using Bootstrap's input controls (text fields, text areas, and buttons) and apply Bootstrap's input validation styles.
- Make best use of Bootstrap's context classes.
- Create alert messages and learn how to customize them.
- Rapidly develop interactive data tables for displaying product information.
- How to use drop-down menus, custom fonts, and icons.

In addition to learning how to use Bootstrap 4, the development of MyPhoto will introduce you to a range of third-party libraries such as Scrollspy (for scroll animations), SalvattoreJS (a library for complementing our Bootstrap grid), Animate.css (for beautiful CSS animations, such as fade-in effects, at https://daneden.github.io/animate.css/) and Bootstrap DataTables (for rapidly displaying data in tabular form).

The website itself will consist of different sections:

- A Welcome section
- An About section
- A Services section
- A Gallery section
- A Contact Us section

The development of each section is intended to teach you how to use a distinct set of features found in third-party libraries. For example, by developing the Welcome section, you will learn how to use Bootstrap's jumbotron and alert dialogs along with different font and text styles, while the About section will show you how to use cards. The Services section of our project introduces you to Bootstrap's custom tabs. That is, you will learn how to use Bootstrap's tabs to display a range of different services offered by our website. Following on from the Services section, you will need to use rich imagery to really show off the website's sample services. You will achieve this by really mastering Bootstrap's responsive core along with Bootstrap's carousel and third-party jQuery plugins. Last but not least, the Contact Us section will demonstrate how to use Bootstrap's form elements and helper functions. That is, you will learn how to use Bootstrap to create stylish HTML forms, how to use form fields and input groups, and how to perform data validation.

Finally, toward the end of the book, you will learn how to optimize your website, and integrate it with the popular JavaScript frameworks AngularJS (https://angularjs.org/) and React (http://facebook.github.io/react/). As entire books have been written on AngularJS alone, we will only cover the essentials required for the integration itself.

Now that you have glimpsed a brief overview of MyPhoto, let's examine Bootstrap 4 in more detail, and discuss what makes it so different to its predecessor. Take a look at the following screenshot:

Figure 1.1: A taste of what is to come: the MyPhoto landing page.
What Bootstrap 4 Alpha 4 has to offer

Much has changed since Twitter's Bootstrap was first released on August 19th, 2011. In essence, Bootstrap 1 was a collection of CSS rules offering developers the ability to lay out their website, create forms, buttons, and help with general appearance and site navigation. With respect to these core features, Bootstrap 4 Alpha 4 is still much the same as its predecessors. In other words, the framework's focus is still on allowing developers to create layouts, and helping to develop a consistent appearance by providing stylings for buttons, forms, and other user interface elements. How it helps developers achieve and use these features, however, has changed entirely. Bootstrap 4 is a complete rewrite of the entire project, and, as such, ships with many fundamental differences to its predecessors. Along with Bootstrap's major features, we will be discussing the most striking differences between Bootstrap 3 and Bootstrap 4 in the subsections below.

Layout

Possibly the most important and widely used feature is Bootstrap's ability to lay out and organize your page. Specifically, Bootstrap offers the following:

- Responsive containers.
- Responsive breakpoints for adjusting page layout in response to differing screen sizes.
- A 12 column grid layout for flexibly arranging various elements on your page.
- Media objects that act as building blocks and allow you to build your own structural components.
- Utility classes that allow you to manipulate elements in a responsive manner. For example, you can use the layout utility classes to hide elements, depending on screen size.

Content styling

Just like its predecessor, Bootstrap 4 overrides the default browser styles. This means that many elements, such as lists or headings, are padded and spaced differently. The majority of overridden styles only affect spacing and positioning; however, some elements may also have their border removed. The reason behind this is simple: to provide users with a clean slate upon which they can build their site. Building on this clean slate, Bootstrap 4 provides styles for almost every aspect of your webpage, such as buttons (Figure 1.2), input fields, headings, paragraphs, special inline texts, such as keyboard input (Figure 1.3), figures, tables, and navigation controls. Aside from this, Bootstrap offers state styles for all input controls, for example, styles for disabled buttons or toggled buttons.

Take a look at the following screenshot:

Figure 1.2: The six button styles that come with Bootstrap 4 are btn-primary, btn-secondary, btn-success, btn-danger, btn-link, btn-info, and btn-warning.

Take a look at the following screenshot:

Figure 1.3: Bootstrap's content styles. In the preceding example, we see inline styling for denoting keyboard input.

Components

Aside from layout and content styling, Bootstrap offers a large variety of reusable components that allow you to quickly construct your website's most fundamental features. Bootstrap's UI components encompass all of the fundamental building blocks that you would expect a web development toolkit to offer: modal dialogs, progress bars, navigation bars, tooltips, popovers, a carousel, alerts, drop-down menus, input groups, tabs, pagination, and components for emphasizing certain contents.

Let's have a look at the following modal dialog screenshot:

Figure 1.4: Various Bootstrap 4 components in action. In the screenshot above we see a sample modal dialog, containing an info alert, some sample text, and an animated progress bar.
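As a quick taste of the grid system mentioned under Layout above, the following markup is a minimal, illustrative sketch; the column split and the placeholder content are assumptions made for demonstration, not part of the MyPhoto project:

    <div class="container">
      <div class="row">
        <!-- 8 + 4 = 12 columns: rendered side by side on medium screens and up,
             stacked on top of each other on smaller screens -->
        <div class="col-md-8">Main content</div>
        <div class="col-md-4">Sidebar</div>
      </div>
    </div>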
Mobile support

Similar to its predecessor, Bootstrap 4 allows you to create mobile friendly websites without too much additional development work. By default, Bootstrap is designed to work across all resolutions and screen sizes, from mobile, to tablet, to desktop. In fact, Bootstrap's mobile first design philosophy implies that its components must display and function correctly at the smallest screen size possible. The reasoning behind this is simple. Think about developing a website without consideration for small mobile screens. In this case, you are likely to pack your website full of buttons, labels, and tables. You will probably only discover any usability issues when a user attempts to visit your website using a mobile device, only to find a small webpage that is crowded with buttons and forms. At this stage, you will be required to rework the entire user interface to allow it to render on smaller screens. For precisely this reason, Bootstrap promotes a bottom-up approach, forcing developers to get the user interface to render correctly on the smallest possible screen size, before expanding upwards.

Utility classes

Aside from ready-to-go components, Bootstrap offers a large selection of utility classes that encapsulate the most commonly needed style rules. For example, rules for aligning text, hiding an element, or providing contextual colors for warning text.

Cross-browser compatibility

Bootstrap 4 supports the vast majority of modern browsers, including Chrome, Firefox, Opera, Safari, Internet Explorer (version 9 and onwards; Internet Explorer 8 and below are not supported), and Microsoft Edge.

Sass instead of Less

Both Less and Sass (Syntactically Awesome Stylesheets) are CSS extension languages. That is, they are languages that extend the CSS vocabulary with the objective of making the development of many, large, and complex style sheets easier. Although Less and Sass are fundamentally different languages, the general manner in which they extend CSS is the same—both rely on a preprocessor. As you produce your build, the preprocessor is run, parsing the Less/Sass script and turning your Less or Sass instructions into plain CSS. Less is the official Bootstrap 3 build, while Bootstrap 4 has been developed from scratch, and is written entirely in Sass. Both Less and Sass are compiled into CSS to produce a single file, bootstrap.css. It is this CSS file that we will be primarily referencing throughout this book (with the exception of Chapter 3, Building the Layout). Consequently, you will not be required to know Sass in order to follow this book. However, we do recommend that you take a 20 minute introductory course on Sass if you are completely new to the language. Rest assured, if you already know CSS, you will not need more time than this. The language's syntax is very close to normal CSS, and its elementary concepts are similar to those contained within any other programming language.

From pixel to root em

Unlike its predecessor, Bootstrap 4 no longer uses pixel (px) as its unit of typographic measurement. Instead, it primarily uses root em (rem). The reasoning behind choosing rem is based on a well known problem with px; that is, websites using px may render incorrectly, or not as originally intended, as users change the size of the browser's base font. Using a unit of measurement that is relative to the page's root element helps address this problem, as the root element will be scaled relative to the browser's base font. In turn, a page will be scaled relative to this root element.

Typographic units of measurement

Simply put, typographic units of measurement determine the size of your font and elements. The most commonly used units of measurement are px and em. The former is an abbreviation for pixel, and uses a reference pixel to determine a font's exact size. This means that, for displays of 96 dots per inch (dpi), 1 px will equal an actual pixel on the screen. For higher resolution displays, the reference pixel will result in the px being scaled to match the display's resolution. For example, specifying a font size of 100 px will mean that the font is exactly 100 pixels in size (on a display with 96 dpi), irrespective of any other element on the page. Em is a unit of measurement that is relative to the parent of the element to which it is applied. So, for example, if we were to have two nested div elements, the outer element with a font size of 100 px and the inner element with a font size of 2 em, then the inner element's font size would translate to 200 px (as in this case 1 em = 100 px). The problem with using a unit of measurement that is relative to parent elements is that it increases your code's complexity, as the nesting of elements makes size calculations more difficult. The recently introduced rem measurement aims to address both em's and px's shortcomings by combining their two strengths—instead of being relative to a parent element, rem is relative to the page's root element.
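The nested-div example above can be expressed directly in CSS. The following sketch is illustrative only (the selectors and sizes are made up for demonstration, and this is not Bootstrap source code), but it shows why rem is easier to reason about than em:

    /* Illustrative only - not Bootstrap source code */
    html   { font-size: 16px; }   /* the root element; rem is always relative to this */

    .outer { font-size: 100px; }  /* the parent element */
    .inner { font-size: 2em; }    /* em is relative to the parent: 2 x 100px = 200px */
    .note  { font-size: 2rem; }   /* rem is relative to the root: 2 x 16px = 32px,
                                     regardless of how deeply .note is nested */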
No more support for Internet Explorer 8

As was already implicit in the feature summary above, the latest version of Bootstrap no longer supports Internet Explorer 8. As such, the decision to only support newer versions of Internet Explorer was a reasonable one, as not even Microsoft itself provides technical support and updates for Internet Explorer 8 anymore (as of January 2016). Furthermore, Internet Explorer 8 does not support rem, meaning that Bootstrap 4 would have been required to provide a workaround. This in turn would most likely have implied a large amount of additional development work, with the potential for inconsistencies. Lastly, responsive website development for Internet Explorer 8 is difficult, as the browser does not support CSS media queries. Given these three factors, dropping support for this version of Internet Explorer was the most sensible path of action.

A new grid tier

Bootstrap's grid system consists of a series of CSS classes and media queries that help you lay out your page. Specifically, the grid system helps alleviate the pain points associated with horizontal and vertical positioning of a page's contents and the structure of the page across multiple displays. With Bootstrap 4, the grid system has been completely overhauled, and a new grid tier has been added with a breakpoint of 480 px and below. We will be talking about tiers, breakpoints, and Bootstrap's grid system extensively in this book.

Bye-bye GLYPHICONS

Bootstrap 3 shipped with a nice collection of over 250 font icons, free of use. In an effort to make the framework more lightweight (and because font icons are considered bad practice), the GLYPHICON set is no longer available in Bootstrap 4.

Bigger text – No more panels, wells, and thumbnails

The default font size in Bootstrap 4 is 2 px bigger than in its predecessor, increasing from 14 px to 16 px. Furthermore, Bootstrap 4 replaced panels, wells, and thumbnails with a new concept—cards.
To readers unfamiliar with the concept of wells, a well is a UI component that allows developers to highlight text content by applying an inset shadow effect to the element to which it is applied. A panel too serves to highlight information, but by applying padding and rounded borders. Cards serve the same purpose as their predecessors, but are less restrictive as they are flexible enough to support different types of content, such as images, lists, or text. They can also be customized to use footers and headers.

Take a look at the following screenshot:

Figure 1.5: The Bootstrap 4 card component replaces existing wells, thumbnails, and panels.

New and improved form input controls

Bootstrap 4 introduces new form input controls—a color chooser, a date picker, and a time picker. In addition, new classes have been introduced, improving the existing form input controls. For example, Bootstrap 4 now allows for input control sizing, as well as classes for denoting block and inline level input controls. However, one of the most anticipated new additions is Bootstrap's input validation styles, which used to require third-party libraries or a manual implementation, but are now shipped with Bootstrap 4 (see Figure 1.6 below).

Take a look at the following screenshot:

Figure 1.6: The new Bootstrap 4 input validation styles, indicating the successful processing of input.

Last but not least, Bootstrap 4 also offers custom forms in order to provide even more cross-browser UI consistency across input elements (Figure 1.7). As noted in the Bootstrap 4 Alpha 4 documentation, the input controls are:

"built on top of semantic and accessible markup, so they're solid replacements for any default form control" – Source: http://v4-alpha.getbootstrap.com/components/forms/

Take a look at the following screenshot:

Figure 1.7: Custom Bootstrap input controls that replace the browser defaults in order to ensure cross-browser UI consistency.

Customization

The developers behind Bootstrap 4 have put specific emphasis on customization throughout the development of Bootstrap 4. As such, many new variables have been introduced that allow for the easy customization of Bootstrap. Using the $enabled-*- Sass variables, one can now enable or disable specific global CSS preferences.

Setting up our project

Now that we know what Bootstrap has to offer, let us set up our project.

Create a new project directory named MyPhoto. This will become our project root directory.

Create a blank index.html file and insert the following HTML code:

    <!DOCTYPE html>
    <html lang="en">
      <head>
        <meta charset="utf-8">
        <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
        <meta http-equiv="x-ua-compatible" content="ie=edge">
        <title>MyPhoto</title>
      </head>
      <body>
        <div class="alert alert-success">
          Hello World!
        </div>
      </body>
    </html>

Note the three meta tags:

- The first tag tells the browser that the document in question is utf-8 encoded.
- Since Bootstrap optimizes its content for mobile devices, the subsequent meta tag is required to help with viewport scaling.
- The last meta tag forces the document to be rendered using the latest document rendering mode available if viewed in Internet Explorer.

Open the index.html in your browser. You should see just a blank page with the words Hello World.

Now it is time to include Bootstrap. At its core, Bootstrap is a glorified CSS style sheet. Within that style sheet, Bootstrap exposes very powerful features of CSS with an easy-to-use syntax.
It being a style sheet, you include it in your project as you would with any other style sheet that you might develop yourself. That is, open the index.html and directly link to the style sheet.

Viewport scaling

The term viewport refers to the available display size to render the contents of a page. The viewport meta tag allows you to define this available size. Viewport scaling using meta tags was first introduced by Apple and, at the time of writing, is supported by all major browsers. Using the width parameter, we can define the exact width of the user's viewport. For example, <meta name="viewport" content="width=320px"> will instruct the browser to set the viewport's width to 320 px. The ability to control the viewport's width is useful when developing mobile-friendly websites; by default, mobile browsers will attempt to fit the entire page onto their viewports by zooming out as far as possible. This allows users to view and interact with websites that have not been designed to be viewed on mobile devices. However, as Bootstrap embraces a mobile-first design philosophy, a zoom out will, in fact, result in undesired side-effects. For example, breakpoints will no longer work as intended, as they now deal with the zoomed out equivalent of the page in question. This is why explicitly setting the viewport width is so important. By writing content="width=device-width, initial-scale=1, shrink-to-fit=no", we are telling the browser the following:

- To set the viewport's width equal to whatever the actual device's screen width is.
- We do not want any zoom, initially.
- We do not wish to shrink the content to fit the viewport.

For now, we will use the Bootstrap builds hosted on Bootstrap's official Content Delivery Network (CDN). This is done by including the following HTML tag into the head of your HTML document (the head of your HTML document refers to the contents between the <head> opening tag and the </head> closing tag):

    <link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/4.0.0-alpha.4/css/bootstrap.min.css">

Bootstrap relies on jQuery, a JavaScript framework that provides a layer of abstraction in an effort to simplify the most common JavaScript operations (such as element selection and event handling). Therefore, before we include the Bootstrap JavaScript file, we must first include jQuery. Both inclusions should occur just before the </body> closing tag:

    <script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.4/jquery.min.js"></script>
    <script src="https://maxcdn.bootstrapcdn.com/bootstrap/4.0.0-alpha.4/js/bootstrap.min.js"></script>

Note that, while these scripts could, of course, be loaded at the top of the page, loading scripts at the end of the document is considered best practice to speed up page loading times and to avoid JavaScript issues preventing the page from being rendered. The reason behind this is that browsers do not download all dependencies in parallel (although a certain number of requests are made asynchronously, depending on the browser and the domain). Consequently, forcing the browser to download dependencies early on will block page rendering until these assets have been downloaded. Furthermore, ensuring that your scripts are loaded last will ensure that once you invoke Document Object Model (DOM) operations in your scripts, you can be sure that your page's elements have already been rendered. As a result, you can avoid checks that ensure the existence of given elements.

What is a Content Delivery Network?
The objective behind any Content Delivery Network (CDN) is to provide users with content that is highly available. This means that a CDN aims to provide you with content, without this content ever (or rarely) becoming unavailable. To this end, the content is often hosted using a large, distributed set of servers. The BootstrapCDN basically allows you to link to the Bootstrap style sheet so that you do not have to host it yourself.

Save your changes and reload the index.html in your browser. The Hello World string should now contain a green background:

Figure 1.5: Our "Hello World" styled using Bootstrap 4.

Now that the Bootstrap framework has been included in our project, open your browser's developer console (if using Chrome on Microsoft Windows, press Ctrl + Shift + I. On Mac OS X you can press cmd + alt + I). As Bootstrap requires another third-party library, Tether, for displaying popovers and tooltips, the developer console will display an error (Figure 1.6).

Take a look at the following screenshot:

Figure 1.6: Chrome's Developer Tools can be opened by going to View, selecting Developer and then clicking on Developer Tools. At the bottom of the page, a new view will appear. Under the Console tab, an error will indicate an unmet dependency.

Tether is available via the Cloudflare CDN, and consists of both a CSS file and a JavaScript file. As before, we should include the JavaScript file at the bottom of our document while we reference Tether's style sheet from inside our document head:

    <!DOCTYPE html>
    <html lang="en">
      <head>
        <meta charset="utf-8">
        <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
        <meta http-equiv="x-ua-compatible" content="ie=edge">
        <title>MyPhoto</title>
        <link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/4.0.0-alpha.4/css/bootstrap.min.css">
        <link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/tether/1.3.1/css/tether.min.css">
      </head>
      <body>
        <div class="alert alert-success">
          Hello World!
        </div>
        <script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.4/jquery.min.js"></script>
        <script src="https://cdnjs.cloudflare.com/ajax/libs/tether/1.3.1/js/tether.min.js"></script>
        <script src="https://maxcdn.bootstrapcdn.com/bootstrap/4.0.0-alpha.4/js/bootstrap.min.js"></script>
      </body>
    </html>

While CDNs are an important resource, there are several reasons why, at times, using a third party CDN may not be desirable:

- CDNs introduce an additional point of failure, as you now rely on third-party servers.
- The privacy and security of users may be compromised, as there is no guarantee that the CDN provider does not inject malicious code into the libraries that are being hosted. Nor can one be certain that the CDN does not attempt to track its users.
- Certain CDNs may be blocked by the Internet Service Providers of users in different geographical locations.
- Offline development will not be possible when relying on a remote CDN.
- You will not be able to optimize the files hosted by your CDN. This loss of control may affect your website's performance (although typically you are more often than not offered an optimized version of the library through the CDN).

Instead of relying on a CDN, we could manually download the jQuery, Tether, and Bootstrap project files. We could then copy these builds into our project root and link them to the distribution files.
The disadvantage of this approach is the fact that maintaining a manual collection of dependencies can quickly become very cumbersome, and next to impossible as your website grows in size and complexity. As such, we will not manually download the Bootstrap build. Instead, we will let Bower do it for us. Bower is a package management system, that is, a tool that you can use to manage your website's dependencies. It automatically downloads, organizes, and (upon command) updates your website's dependencies. To install Bower, head over to http://bower.io/.

How do I install Bower?

Before you can install Bower, you will need two other tools: Node.js and Git. The latter is a version control tool—in essence, it allows you to manage different versions of your software. To install Git, head over to http://git-scm.com/ and select the installer appropriate for your operating system. NodeJS is a JavaScript runtime environment needed for Bower to run. To install it, simply download the installer from the official NodeJS website: https://nodejs.org/

Once you have successfully installed Git and NodeJS, you are ready to install Bower. Simply type the following command into your terminal:

    npm install -g bower

This will install Bower for you, using the JavaScript package manager npm, which happens to be used by, and is installed with, NodeJS.

Once Bower has been installed, open up your terminal, navigate to the project root folder you created earlier, and fetch the bootstrap build:

    bower install bootstrap#v4.0.0-alpha.4

This will create a new folder structure in our project root:

    bower_components
      bootstrap
        Gruntfile.js
        LICENSE
        README.md
        bower.json
        dist
        fonts
        grunt
        js
        less
        package.js
        package.json

We will explain all of these various files and directories later on in this book. For now, you can safely ignore everything except for the dist directory inside bower_components/bootstrap/. Go ahead and open the dist directory. You should see three sub directories:

    css
    fonts
    js

The name dist stands for distribution. Typically, the distribution directory contains the production-ready code that users can deploy. As its name implies, the css directory inside dist includes the ready-for-use style sheets. Likewise, the js directory contains the JavaScript files that compose Bootstrap. Lastly, the fonts directory holds the font assets that come with Bootstrap.

To reference the local Bootstrap CSS file in our index.html, modify the href attribute of the link tag that points to the bootstrap.min.css:

    <link rel="stylesheet" href="bower_components/bootstrap/dist/css/bootstrap.min.css">

Let's do the same for the Bootstrap JavaScript file:

    <script src="bower_components/bootstrap/dist/js/bootstrap.min.js"></script>

Repeat this process for both jQuery and Tether. To install jQuery using Bower, use the following command:

    bower install jquery

Just as before, a new directory will be created inside the bower_components directory:

    bower_components
      jquery
        AUTHORS.txt
        LICENSE.txt
        bower.json
        dist
        sizzle
        src

Again, we are only interested in the contents of the dist directory, which, among other files, will contain the compressed jQuery build jquery.min.js.
Reference this file by modifying the src attribute of the script tag that currently points to Google's jquery.min.js, replacing the URL with the path to our local copy of jQuery:

    <script src="bower_components/jquery/dist/jquery.min.js"></script>

Last but not least, repeat the steps already outlined above for Tether:

    bower install tether

Once the installation completes, a folder structure similar to the ones for Bootstrap and jQuery will have been created. Verify the contents of bower_components/tether/dist and replace the CDN Tether references in our document with their local equivalents. The final index.html should now look as follows:

    <!DOCTYPE html>
    <html lang="en">
      <head>
        <meta charset="utf-8">
        <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
        <meta http-equiv="x-ua-compatible" content="ie=edge">
        <title>MyPhoto</title>
        <link rel="stylesheet" href="bower_components/bootstrap/dist/css/bootstrap.min.css">
        <link rel="stylesheet" href="bower_components/tether/dist/css/tether.min.css">
      </head>
      <body>
        <div class="alert alert-success">
          Hello World!
        </div>
        <script src="bower_components/jquery/dist/jquery.min.js"></script>
        <script src="bower_components/tether/dist/js/tether.min.js"></script>
        <script src="bower_components/bootstrap/dist/js/bootstrap.min.js"></script>
      </body>
    </html>

Refresh the index.html in your browser to make sure that everything works.

What IDE and browser should I be using when following the examples in this book?

While we recommend a JetBrains IDE or Sublime Text along with Google Chrome, you are free to use whatever tools and browser you like. Our taste in IDE and browser is subjective on this matter. However, keep in mind that Bootstrap 4 does not support Internet Explorer 8 or below. As such, if you do happen to use Internet Explorer 8, you should upgrade it to the latest version.

Summary

Aside from introducing you to our sample project MyPhoto, this article was concerned with outlining Bootstrap 4, highlighting its features, and discussing how this new version of Bootstrap differs from the last major release (Bootstrap 3). The article provided an overview of how Bootstrap can assist developers in the layout, structuring, and styling of pages. Furthermore, we noted how Bootstrap provides access to the most important and widely used user interface controls through the form of components that can be integrated into a page with minimal effort. By providing an outline of Bootstrap, we hope that the framework's intrinsic value in assisting in the development of modern websites has become apparent to the reader. Furthermore, during the course of the wider discussion, we highlighted and explained some important concepts in web development, such as typographic units of measurement or the definition, purpose and justification of the use of Content Delivery Networks. Last but not least, we detailed how to include Bootstrap and its dependencies inside an HTML document.

Mapping Requirements for a Modular Web Shop App

Packt
07 Sep 2016
11 min read
In this article by Branko Ajzele, author of the book Modular Programming with PHP 7, we will see that building a software application from the ground up requires diverse skills, as it involves more than just writing code. Writing down functional requirements and sketching out a wireframe are often among the first steps in the process, especially if we are working on a client project. These steps are usually done by roles other than the developer, as they require certain insight into the client business case, user behavior, and alike. Being part of a larger development team means that we, as developers, usually get requirements, designs, and wireframes, then start coding against them. Delivering projects by oneself makes it tempting to skip these steps and get our hands started with code alone. More often than not, this is an unproductive approach. Laying down functional requirements and a few wireframes is a skill worth knowing and following, even if one is just a developer.

(For more resources related to this topic, see here.)

Later in this article, we will go over a high-level application requirement, alongside a rough wireframe. In this article, we will be covering the following topics:

- Defining application requirements
- Wireframing
- Defining a technology stack

Defining application requirements

We need to build a simple, but responsive web shop application. In order to do so, we need to lay out some basic requirements. The types of requirements we are interested in at the moment are those that touch upon interactions between a user and a system. The two most common techniques to specify requirements in regards to user usage are the use case and the user story. The user stories are a less formal, yet descriptive enough way to outline these requirements. Using user stories, we encapsulate the customer and store manager actions as mentioned here.

A customer should be able to do the following:

- Browse through static info pages (about us, customer service)
- Reach out to the store owner via a contact form
- Browse the shop categories
- See product details (price, description)
- See the product image with a large view (zoom)
- See items on sale
- See best sellers
- Add the product to the shopping cart
- Create a customer account
- Update customer account info
- Retrieve a lost password
- Check out
- See the total order cost
- Choose among several payment methods
- Choose among several shipment methods
- Get an email notification after an order has been placed
- Check order status
- Cancel an order
- See order history

A store manager should be able to do the following:

- Create a product (with the minimum following attributes: title, price, sku, url-key, description, qty, category, and image)
- Upload a picture to the product
- Update and delete a product
- Create a category (with the minimum following attributes: title, url-key, description, and image)
- Upload a picture to a category
- Update and delete a category
- Be notified if a new sales order has been created
- Be notified if a new sales order has been canceled
- See existing sales orders by their statuses
- Update the status of the order
- Disable a customer account
- Delete a customer account

User stories are a convenient high-level way of writing down application requirements, especially useful as an agile mode of development.

Wireframing

With user stories laid out, let's shift our focus to actual wireframing. For reasons we will get into later on, our wireframing efforts will be focused around the customer perspective. There are numerous wireframing tools out there, both free and commercial.
Some commercial tools, like https://ninjamock.com, which we will use for our examples, still provide a free plan. This can be very handy for personal projects, as it saves us a lot of time.

The starting point of every web application is its home page. The following wireframe illustrates our web shop app's homepage:

Here we can see a few sections determining the page structure. The header is comprised of a logo, category menu, and user menu. The requirements don't say anything about category structure, and we are building a simple web shop app, so we are going to stick to a flat category structure, without any sub-categories. The user menu will initially show Register and Login links, until the user is actually logged in, in which case the menu will change as shown in the following wireframes. The content area is filled with best sellers and on sale items, each of which have an image, title, price, and Add to Cart button defined. The footer area contains links to mostly static content pages, and a Contact Us page.

The following wireframe illustrates our web shop app's category page:

The header and footer areas remain conceptually the same across the entire site. The content area has now changed to list products within any given category. Individual product areas are rendered in the same manner as they are on the home page. Category names and images are rendered above the product list. The width of a category image gives some hints as to what type of images we should be preparing and uploading onto our categories.

The following wireframe illustrates our web shop app's product page:

The content area here now changes to list individual product information. We can see a large image placeholder, title, sku, stock status, price, quantity field, Add to Cart button, and product description being rendered. The IN STOCK message is to be displayed when an item is available for purchase and OUT OF STOCK when an item is no longer available. This is to be related to the product quantity attribute. We also need to keep in mind the "See the product image with a big view (zoom)" requirement, where clicking on an image would zoom into it.

The following wireframe illustrates our web shop app's register page:

The content area here now changes to render a registration form. There are many ways that we can implement the registration system. More often than not, the minimal amount of information is asked on a registration screen, as we want to get the user in as quickly as possible. However, let's proceed as if we are trying to get more complete user information right here on the registration screen. We ask not just for an e-mail and password, but for entire address information as well.

The following wireframe illustrates our web shop app's login page:

The content area here now changes to render a customer login and forgotten password form. We provide the user with Email and Password fields in case of login, or just an Email field in case of a password reset action.

The following wireframe illustrates our web shop app's customer account page:

The content area here now changes to render the customer account area, visible only to logged in customers. Here we see a screen with two main pieces of information: the customer information being one, and order history being the other. The customer can change their e-mail, password, and other address information from this screen. Furthermore, the customer can view, cancel, and print all of their previous orders. The My Orders table lists orders top to bottom, from newest to oldest.
Though not specified by the user stories, the order cancelation should work only on pending orders. This is something that we will touch upon in more detail later on. This is also the first screen that shows the state of the user menu when the user is logged in. We can see a dropdown showing the user's full name, My Account, and Sign Out links. Right next to it, we have the Cart (%s) link, which is to list the exact quantities in the cart.

The following wireframe illustrates our web shop app's checkout cart page:

The content area here now changes to render the cart in its current state. If the customer has added any products to the cart, they are to be listed here. Each item should list the product title, individual price, quantity added, and subtotal. The customer should be able to change quantities and press the Update Cart button to update the state of the cart. If 0 is provided as the quantity, clicking the Update Cart button will remove such an item from the cart. Cart quantities should at all times reflect the state of the header menu Cart (%s) link. The right-hand side of the screen shows a quick summary of the current order total value, alongside a big, clear Go to Checkout button.

The following wireframe illustrates our web shop app's checkout cart shipping page:

The content area here now changes to render the first step of the checkout process, the shipping information collection. This screen should not be accessible to non-logged in customers. The customer can provide us with their address details here, alongside a shipping method selection. The shipping method area lists several shipping methods. On the right-hand side, the collapsible order summary section is shown, listing the current items in the cart. Below it, we have the cart subtotal value and a big, clear Next button. The Next button should trigger only when all of the required information is provided, in which case it should take us to the payment information on the checkout cart payment page.

The following wireframe illustrates our web shop app's checkout cart payment page:

The content area here now changes to render the second step of the checkout process, the payment information collection. This screen should not be accessible to non-logged in customers. The customer is presented with a list of available payment methods. For the simplicity of the application, we will focus only on flat/fixed payments, nothing robust such as PayPal or Stripe. On the right-hand side of the screen, we can see a collapsible Order summary section, listing the current items in the cart. Below it, we have the order totals section, individually listing Cart Subtotal, Standard Delivery, Order Total, and a big, clear Place Order button. The Place Order button should trigger only when all of the required information is provided, in which case it should take us to the checkout success page.

The following wireframe illustrates our web shop app's checkout success page:

The content area here now changes to output the checkout successful message. Clearly this page is only visible to logged in customers that have just finished the checkout process. The order number is clickable and links to the My Account area, focusing on the exact order. By reaching this screen, both the customer and the store manager should receive a notification email, as per the Get email notification after order has been placed and Be notified if the new sales order has been created requirements. With this, we conclude our customer facing wireframes.
In regards to store manager user story requirements, we will simply define a landing administration interface for now, as shown in the following screenshot:

Using the framework later on, we will get a complete auto-generated CRUD interface for the multiple Add New and List & Manage links. The access to this interface and its links will be controlled by the framework's security component, since this user will not be a customer or any user in the database as such.

Defining a technology stack

Once the requirements and wireframes are set, we can focus our attention on the selection of a technology stack. Choosing the right one in this case is more a matter of preference, as the application requirements for the most part can be easily met by any one of those frameworks. Our choice, however, falls onto Symfony. Aside from PHP frameworks, we still need a CSS framework to deliver some structure, styling, and responsiveness within the browser on the client side. Since the focus of this book is on PHP technologies, let's just say we choose the Foundation CSS framework for that task.

Summary

Creating web applications can be a tedious and time consuming task. Web shops are probably one of the most robust and intensive types of application out there, as they encompass a great deal of features. There are many components involved in delivering the final product: from the database and server side (PHP) code to the client side (HTML, CSS, and JavaScript) code. In this article, we started off by defining some basic user stories which in turn defined high-level application requirements for our small web shop. Adding wireframes to the mix helped us to visualize the customer facing interface, while the store manager interface is to be provided out of the box by the framework. We further glossed over two of the most popular frameworks that support modular application design. We turned our attention to Symfony as the server side technology and Foundation as a client side responsive framework.

Resources for Article:

Further resources on this subject:

- Running Simpletest and PHPUnit [article]
- Understanding PHP basics [article]
- PHP Magic Features [article]

How to integrate Angular 2 with React

Mary Gualtieri
01 Sep 2016
5 min read
It can be overwhelming to choose which framework is the best to get the job done in JavaScript, because you have so many options out there. In previous years, we have seen two popular frameworks come to fame: React and Angular. React has gained a lot of popularity because it is what Facebook and Instagram are built on. So, which one do you use? If you ask most JavaScript developers, they advise you to use one or the other, and that comes down to personal choice. Let's go ahead and explore Angular and React, and actually explore how the two can work together.

A common misconception about React is that it is a full JavaScript framework in competition with Angular. But it actually is not. React is a user interface library; it is just the view in an 'MVC' framework, with a little bit of a controller in it. In other words, React is a template (an Angular term for view) where you can add some controller logic. It is the same idea as integrating jQuery UI into JavaScript. React aids Angular, which is a frontend framework, and makes it more efficient for the user. This is because you can write a reusable component that can be plugged into an application.

Angular has many pros, and that's what makes it so appealing to a lot of developers and companies. In my personal experience, Angular has been very powerful for making solid applications. However, one of the cons that bothers me about Angular is how it goes about executing a template. I always want to practice writing DRY code, and that can be a problem with Angular. You can end up writing a bunch of HTML that is complicated and difficult to read, and you can also end up causing a spaghetti effect in your CSS.

One of Angular's big strengths is its watchers and binders. When executed correctly in well-thought-out places, they can be great for fast binding and good responsiveness. But like every human, we all make errors, and when misused, you can have performance issues from binding way too many elements in your HTML. This leads to a very slow and laggy application that no one wants to use. But there is a way to rectify this: you can use React to compensate for Angular's downfalls.

React was designed to work really well with other libraries and makes rendering views or templates much faster. The beauty of React is how it uses a more efficient algorithm on the virtual DOM. In plain terms, it allows you to change the parts of the application that need to be updated without having to touch the rest of the application. You send a command to update the user interface and React compares these changes to the existing DOM.

Instead of theorizing on how Angular and React complement one another, let's see how it looks in code. First, let's create an HTML file that has an Angular script and a React script. (Remember, your relative path may be different from the example.) Next, let's create a React component that renders a string that is entered by the user. What is happening here is that we are using React to render our model. We created a component that renders the props passed to it. Then we create an Angular directive and controller to start the app. The directive calls the React component and tells it to render.

But let's take a look at another example, one that integrates Angular 2. We can demonstrate how a React component can be self-contained but still be injected into the Angular 2 world. Angular 2 has an optional lifecycle hook, ngOnInit(), which is a natural place to trigger the code that renders a React component.
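A minimal sketch of this pattern in TypeScript is shown below, assuming the file is compiled with the JSX flag enabled (a .tsx file) and that react, react-dom, and @angular/core are installed. The names ReactTitle, HostComponent, and the title text are illustrative assumptions rather than code from the original article, and the mount-point argument passed to initialize is simply one way of telling React where to render:

// Illustrative sketch: a self-contained React component plus an Angular 2 host component.
import * as React from 'react';
import * as ReactDOM from 'react-dom';
import { Component, ElementRef, OnInit } from '@angular/core';

interface TitleProps {
  title: string;
}

// The React component renders whatever title text it receives as a prop.
class ReactTitle extends React.Component<TitleProps, {}> {
  render() {
    return <h3>{this.props.title}</h3>;
  }

  // Static helper so the Angular side can trigger the React render
  // without needing to know anything else about React's API.
  static initialize(title: string, mountPoint: HTMLElement) {
    ReactDOM.render(<ReactTitle title={title} />, mountPoint);
  }
}

// The Angular 2 host component hands its own element over to React in ngOnInit.
@Component({
  selector: 'react-title-host',
  template: ''
})
export class HostComponent implements OnInit {
  constructor(private elementRef: ElementRef) {}

  ngOnInit() {
    // Pass the title text down; React renders it as a prop inside our host element.
    ReactTitle.initialize('Rendered by React, hosted by Angular 2', this.elementRef.nativeElement);
  }
}

The static helper keeps all of the React-specific code inside the React component itself, so the Angular host only has to make a single function call, which is what makes the component self-contained and easy to drop into an existing Angular 2 application.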
As you can see, the host component defines an implementation for the ngOnInit handler, where you can call the static initialize function. If you look closely, the initialize method is passed a title text that is then handed down to the React component as a prop.

Another item to consider that unites React and Angular is TypeScript. This is a big deal, because we can manage both Angular code and React code in the same compilation step. When it comes down to it, TypeScript is what gets compiled down to regular JavaScript. One thing that you have to remember to do is tell the compiler that you are using JSX by specifying the JSX flag.

To conclude, Angular will always remain a very popular framework for developers. React is a great way to render templates faster and create a more efficient user experience. React is a great complement to Angular and will only enhance your application.

About the author

Mary Gualtieri is a full-stack web developer and web designer who enjoys all aspects of the web and creating a pleasant user experience. Web development, specifically frontend development, is an interest of hers because it challenges her to think outside of the box and solve problems, all while constantly learning. She can be found on GitHub at MaryGualtieri.


Adding Charts to Dashboards

Packt
19 Aug 2016
11 min read
In this article by Vince Sesto, author of the book Learning Splunk Web Framework, we will study adding charts to dashboards. We have a development branch to work from and we are going to work further with the SimpleXMLDashboard dashboard. We should already be on our development server environment as we have just switched over to our new development branch. We are going to create a new bar chart, showing the daily NASA site access for our top educational users. We will change the label of the dashboard and finally place an average overlay on top of our chart:

(For more resources related to this topic, see here.)

Get into the local directory of our Splunk App, and into the views directory where all our Simple XML code is for all our dashboards:

cd $SPLUNK_HOME/etc/apps/nasa_squid_web/local/data/ui/views

We are going to work on the simplexmldashboard.xml file. Open this file with a text editor or your favorite code editor. Don't forget, you can also use the Splunk Code Editor if you are not comfortable with other methods. It is not compulsory to indent and nest your Simple XML code, but it is a good idea to have consistent indentation and commenting to make sure your code is clear and stays as readable as possible.

Let's start by changing the name of the dashboard that is displayed to the user. Change line 2 to the following line of code (don't include the line numbers):

2   <label>Educational Site Access</label>

Move down to line 16 and you will see that we have closed off our row element with a </row>. We are going to add in a new row where we will place our new chart. After the sixteenth line, add the following three lines to create a new row element, a new panel to add our chart, and finally, open up our new chart element:

17 <row>
18 <panel>
19 <chart>

The next two lines will give our chart a title and we can then open up our search:

20 <title>Top Educational User</title>
21 <search>

To create a new search, just like we would enter in the Splunk search bar, we will use the query tag as listed with our next line of code. In our search element, we can also set the earliest and latest times for our search, but in this instance we are using the entire data source:

22 <query>index=main sourcetype=nasasquidlogs | search calclab1.math.tamu.edu | stats count by MonthDay </query>
23 <earliest>0</earliest>
24 <latest></latest>
25 </search>

We have completed our search and we can now modify the way the chart will look on our panel with the option chart elements. In our next four lines of code, we set the chart type as a column chart, set the legend to the bottom of the chart area, remove any master legend, and finally set the height as 250 pixels:

26 <option name="charting.chart">column</option>
27 <option name="charting.legend.placement">bottom</option>
28 <option name="charting.legend.masterLegend">null</option>
29 <option name="height">250px</option>

We need to close off the chart, panel, row, and finally the dashboard elements. Make sure you only close off the dashboard element once:

30 </chart>
31 </panel>
32 </row>
33 </dashboard>

We have done a lot of work here. We should be saving and testing our code for every 20 or so lines that we add, so save your changes. And as we mentioned earlier in the article, we want to refresh our cache by entering the following URL into our browser:

http://<host:port>/debug/refresh

When we view our page, we should see a new column chart at the bottom of our dashboard showing the usage per day for the calclab1.math.tamu.edu domain.
But we're not done with that chart yet. We want to put a line overlay showing the average site access per day for our user. Open up simplexmldashboard.xml again and change our query in line 22 to the following:

22 <query>index=main sourcetype=nasasquidlogs | search calclab1.math.tamu.edu | stats count by MonthDay| eventstats avg(count) as average | eval average=round(average,0)</query>

Simple XML contains some special characters, which are ', <, >, and &. If you intend to use advanced search queries, you may need to use these characters, and if so you can do so by either using their HTML entity or using the CDATA tags, where you can wrap your query with <![CDATA[ and ]]>.

We now need to add two new option lines into our Simple XML code. After line 29, add the following two lines, without replacing all of the closing elements that we previously entered. The first will set the chart overlay field to be displayed for the average field; the next will set the color of the overlay:

30 <option name="charting.chart.overlayFields">average</option>
31 <option name="charting.fieldColors">{"count": 0x639BF1, "average":0xFF5A09}</option>

Save your new changes, refresh the cache, and then reload your page. You should be seeing something similar to the following screenshot:

The Simple XML of charts

As we can see from our example, it is relatively easy to create and configure our charts using Simple XML. When we completed the chart, we used five options to configure the look and feel of the chart, but there are many more that we can choose from. Our chart element always needs to be in between its two parent elements, which are row and panel. Within our chart element, we always start with a title for the chart and a search to power the chart. We can then set additional optional settings for earliest and latest, and then a list of options to configure the look and feel, as we have demonstrated below. If these options are not specified, default values are provided by Splunk:

1 <chart>
2 <title></title>
3 <search>
4 <query></query>
5 <earliest>0</earliest>
6 <latest></latest>
7 </search>
8 <option name=""></option>
9 </chart>

There is a long list of options that can be set for our charts; the following is a list of the more important options to know:

charting.chart: This is where you set the chart type, with area, bar, bubble, column, fillerGauge, line, markerGauge, pie, radialGauge, and scatter being the charts that you can choose from.
charting.backgroundColor: Set the background color of your chart with a Hex color value.
charting.drilldown: Set to either all or none. This allows the chart to be clicked on to allow the search to be drilled down for further information.
charting.fieldColors: This can map a color to a field, as we did with our average field in the preceding example.
charting.fontColor: Set the value of the font color in the chart with a Hex color value.
height: The height of the chart in pixels. The value must be between 100 and 1,000 pixels.

A lot of the options seem to be self-explanatory, but a full list of options and a description can be found in the Splunk reference material at the following URL: http://docs.splunk.com/Documentation/Splunk/latest/Viz/ChartConfigurationReference.

Expanding our Splunk App with maps

We will now go through another example in our NASA Squid and Web Data App to run through a more complex type of visualization to present to our user.
We will use the Basic Dashboard that we created, but we will change the Simple XML to give it a more meaningful name, and then set up a map to show our users where our requests are actually coming from. Maps use a map element and don't rely on the chart element that we have been using. The Simple XML code for the dashboard we created earlier in this article looks like the following:

<dashboard>
<label>Basic Dashboard</label>
</dashboard>

So let's get to work and give our Basic Dashboard a little "bling":

Get into the local directory of our Splunk App, and into the views directory where all our Simple XML code is for our Basic Dashboard:

cd $SPLUNK_HOME/etc/apps/nasa_squid_web/local/data/ui/views

Open the basic_dashboard.xml file with a text editor or your favorite code editor. Don't forget, you can also use the Splunk Code Editor if you are not comfortable with other methods. We might as well remove all of the code that is in there, because it is going to look completely different from the way it did originally.

Now start by setting up your dashboard and label elements, with a label that will give you more information on what the dashboard contains:

1 <dashboard>
2 <label>Show Me Your Maps</label>

Open your row, panel, and map elements, and set a title for the new visualization. Make sure you use the map element and not the chart element:

3 <row>
4 <panel>
5 <map>
6 <title>User Locations</title>

We can now add our search query within our search elements. We will only search for IP addresses in our data and use the geostats Splunk function to extract a latitude and longitude from the data:

7 <search>
8 <query>index=main sourcetype="nasasquidlogs" | search From=1* | iplocation From | geostats latfield=lat longfield=lon count by From</query>
9 <earliest>0</earliest>
10 <latest></latest>
11 </search>

The search query that we have in our Simple XML code is more advanced than the previous queries we have implemented. If you need further details on the functions provided in the query, please refer to the Splunk search documentation at the following location: http://docs.splunk.com/Documentation/Splunk/6.4.1/SearchReference/WhatsInThisManual.

Now all we need to do is close off all our elements, and that is all that is needed to create our new visualization of IP address requests:

12 </map>
13 </panel>
14 </row>
15 </dashboard>

If your dashboard looks similar to the image below, I think it looks pretty good. But there is more we can do with our code to make it look even better. We can set extra options in our Simple XML code to zoom in, only display a certain part of the map, set the size of the markers, and finally set the minimum and maximum zoom levels for the screen.

The map looks pretty good, but it seems that a lot of the traffic is being generated by users in the USA. Let's have a look at setting some extra configurations in our Simple XML to change the way the map displays to our users. Get back to our basic_dashboard.xml file and add the following options:

After our search element is closed off, we can add the following options. First we will set the maximum clusters to be displayed on our map as 100. This will hopefully speed up our map being displayed, and allow all the data points to be viewed further with the drilldown option:

12 <option name="mapping.data.maxClusters">100</option>
13 <option name="mapping.drilldown">all</option>

We can now set our central point for the map to load, using latitude and longitude values.
In this instance, we are going to set the heart of the USA as our central point. We are also going to set our zoom value as 4, which will zoom in a little further from the default of 2:

14 <option name="mapping.map.center">(38.48,-102)</option>
15 <option name="mapping.map.zoom">4</option>

Remember that we need to have our map, panel, row, and dashboard elements closed off. Save the changes and reload the cache. Let's see what is now displayed:

Your map should now be displaying a little faster than it originally did. It will be focused on the USA, where the bulk of the traffic is coming from. The map element has numerous options to use and configure, and a full list can be found at the following Splunk reference page: http://docs.splunk.com/Documentation/Splunk/latest/Viz/PanelreferenceforSimplifiedXML.

Summary

In this article we covered Simple XML charts and how to expand our Splunk App with maps.

Resources for Article:

Further resources on this subject:
The Splunk Web Framework [article]
Splunk's Input Methods and Data Feeds [article]
The Splunk Interface [article]