
How-To Tutorials - Server-Side Web Development

406 Articles

Before You Begin

Packt
13 Jul 2016
14 min read
In this article, Ashley Chiasson, the author of the book Mastering Articulate Storyline, provides an introduction to the purpose of the book and to best practices related to e-learning product development. In this article, we will cover the following topics:

- Pushing Articulate Storyline to the limit
- Best practices
- How to be mindful of reusability
- Methods for organizing your project
- The differences between storyboarding and rapid development
- Ways of streamlining your development

Pushing Articulate Storyline to the limit

The purpose of this book is really to get you comfortable with pushing Articulate Storyline to its limits. Doing this may also broaden your imagination, allowing you to push your creativity to its limits. There are so many things you can do within Storyline, and a lot of those features, interactions, or functions are overlooked because they just aren't used all that often. Oftentimes, the basic functionality overshadows the more advanced functions because it is easier, it often addresses the need, and it takes less time to learn. That's understandable, but this book is going to open your mind to many more things possible within this tool. You'll get excited, frustrated, excited again, and probably frustrated a few more times, but with all of the practical activities for you to follow along with (and/or reverse engineer), you'll be mastering Articulate Storyline and pushing it to its limits in no time! If you don't quite get one of the concepts explained, don't worry. You'll always have access to this book and the activity downloads as a handy reference or refresher.

Best practices

Before you get too far into your development, it's important to take some steps to streamline your approach by establishing best practices—doing this will help you become more organized and efficient. Everyone has their own process, so this is by no means a prescribed format for the proper way of doing things. These are just some recommendations, from personal experience, that have proven effective for an e-learning developer. Please note that these best practices are not necessarily Storyline-related; they are best practices to consider ahead of development within any e-learning project.

Your best practices will likely be project-specific, depending on how your clients or your organization's internal processes work. Sometimes you'll be provided with a storyboard ahead of development, and sometimes you'll be expected to develop rapidly. Sometimes you'll be provided with all multimedia ahead of development, and sometimes you'll be provided with multimedia after an alpha review. You may want to do a content dump at the beginning of your development process, or you may want to work through each slide from start to finish before moving on. Through experience and observation of what other developers are doing, you will learn how to define and adapt your best practices.

When a new project comes along, it's always a good idea to employ some form of organization. There are many great reasons for this, some of which include being mindful of reusability, maintaining and organizing project and file structure, and streamlining your development process. This article aims to provide you with as much information as necessary to ensure that you are effectively organizing your projects for enhanced efficiency, along with an understanding of why these methods should always be considered best practices.
How to be mindful of reusability

When I think about reusability in e-learning, I think about objects and content that can be reused in a variety of contexts. Developers often run into this when working on large projects or in industries that involve trade-specific content. When working on multiple projects within one sector, you may come across assets used previously in one course (for example, a 3D model of an aircraft) that may be reused in another course with the same content base. Being able to reuse content and/or assets can come in handy, as it can save you resources in the long run. Reusing previously established assets (if permitted to do so, of course) reduces the amount of development time various departments and/or individuals need to spend.

Best practices for reusability might include creating your own content repository and defining a file naming convention that will make it easy for you to quickly find what you're looking for. If you're extra savvy, you can create a metadata-coded database, but that might require more effort than you have available. While it does take extra time to either come up with a file naming convention or apply metadata tagging to all assets within your repository, the goal is to make your life easier in the long run. Much like the dreaded administrative tasks required of small business owners, it's not the most sought-after task, but it's a necessary one, especially if you truly want to optimize efficiency!

Within Articulate Storyline, you may want to maintain a repository of themes and interactions, as you can use elements of these assets for future development and they can save you a lot of time. Most projects, in the early stages, require an initial prototype for the client to sign off on the general look and feel. In this prototyping phase, having a repository of themes and interactions can really make the process a lot smoother, because you can call on previous work in order to easily facilitate the elemental design of a new project. Storyline allows you to import content from many sources (for example, PowerPoint, Articulate Engage, Articulate Quizmaker, and more), so you shouldn't feel limited to just reusing Storyline interactions and/or themes. Just structure your repository in an organized manner, and you will be able to easily locate the files and file types that you're looking to use at a later date.

Another great Articulate Storyline feature, when it comes to reusability, is question banks! Most courses contain questions, knowledge checks, assessments, or whatever you want to call them, but all too seldom do people think about compiling these questions in one neat area for reuse later on. Instead, people often add new question slides, add the question, and go on their merry development way. If you're one of those people, you need to STOP. Your life will be entirely changed by the concept of question banks—if not entirely, at least a little bit, or at least the part of your life that dabbles in development will be changed in some small way. Question banks allow you to create a bank of questions (who would have thought?) and call on these questions at any time for placement within your story—reusability at its finest, at least in Storyline.

Methods for organizing your project

Organizing your project is a necessary evil. Surely there is someone out there who loves this process, but for others who just want to develop all day and all night, there may be a smaller emphasis placed on organization.
However, you can take some simple steps to organize your project, and these can be reused for future projects. Within Storyline, the organizational emphasis of this article will be placed on using Story View and optimizing the use of scenes. These are two elements of Storyline that, depending on the size of your project, can make a world of difference when it comes to making sense of all the content you've authored and making the structure of your content more palatable.

Using the Story View

Story View is such a great feature of Storyline! It provides you with a bird's eye view of your project, or story, and essentially shows you a visual blueprint of all scenes and slides. This is particularly helpful in projects that involve a lot of branching. Instead of seeing the individual parts, you're seeing the parts as they represent the whole—Gestalt psychologists would be proud! You can also use Story View to plan out the movement of existing scenes or slides if content isn't lining up quite the way you want it to.

Optimizing scene use

Scenes play a very big role in maintaining organization within your story. They serve to group slides into smaller segments of the entire story and are typically defined using logical breaks. However, it's up to you how you decide to group your slides. If the story you're working on consists of multiple topics or modules, each topic or module would logically become a new scene. Visually, scenes work in tandem with Story View: while you're in Story View, you can clearly see the various scenes and move things around appropriately. Functionally, scenes serve to create submenus in the main Storyline menu, but you can change this if you don't want to see each scene delineated in the menu.

From an organization and control perspective, scenes can help you rein in unwieldy and overwhelming content. This particularly comes in handy with large courses, where you can easily lose your place when trying to track down a specific slide in a sea of 150 slides. In this sense, scenes allow you to chunk content into more manageable pieces within your story and will likely save you development and revision time. Using scenes will also help when it comes to previewing your story. Instead of having to wait for 150 slides to load each time you preview, you can choose to preview a scene and will only have to wait for the slides in that scene to load—perhaps 15 slides of the entire course instead of 150. Scenes really are a magical thing!

Asset management

Asset management is just what it sounds like—managing your assets. Your assets may come in many forms, for example, media assets (your draft and/or completed images/video/audio), customer-furnished assets (files provided by the client, which could be raw images/video/audio/PowerPoint/Word documents, and so on), or content output (outputs from whichever authoring tool you're using). If you've worked on large projects, you will likely relate to how unwieldy these assets can become if you don't have a system in place for keeping everything organized. This is where the management element comes into play.

Structuring your folders

Setting up a consistent folder structure is really important when it comes to managing your assets. Structuring your folders may seem like a daunting administrative task, but once you determine a structure that works well for you and your projects, you can copy the structure for each project.
So yes, there is a little bit of up-front effort, but the headache it will save you in the long run, when it comes to tracking down assets for reuse, is worth it! Again, this folder structure is in no way prescribed; it is a recommendation, and one that has worked well. It looks something like the following:

```
Project Folder
├── 100 Project Management
│   ├── Meeting Minutes
│   ├── Action Tracking
│   ├── Risk Management
│   ├── Contracts
│   └── Invoices
├── 200 Development
│   ├── Client-Furnished Information (CFI)
│   ├── Scripts and Storyboards
│   │   ├── Scripts
│   │   │   └── Audio Narration
│   │   └── Storyboards
│   ├── Media
│   │   ├── Video
│   │   ├── Audio
│   │   │   ├── Draft Audio
│   │   │   └── Final Audio
│   │   └── Images
│   ├── Flash Output
│   └── Quality Assurance
└── 300 Client
    ├── Delivered
    ├── Review Comments
    └── Final
```

It may look overwhelming, but it's really not that bad. There are likely more elements accounted for here than you may need for your project, but all the main elements are included, and you can customize it as you see fit. This is how the folder structure breaks down:

- 100 Project Management: Depending on how large the project is, this folder may have subfolders, for example: Meeting Minutes, Action Tracking, Risk Management, Contracts, and Invoices.
- 200 Development: This folder typically contains subfolders related to development, for example: Client-Furnished Information (CFI); Scripts and Storyboards (Scripts, with Audio Narration, and Storyboards); Media (Video, Audio with Draft Audio and Final Audio subfolders, and Images); Flash Output; and Quality Assurance.
- 300 Client: This folder will include anything sent to the client for review, for example: Delivered, Review Comments, and Final.

Within these folders, there may be other subfolders, but this is the general structure that has proven effective for me. When it comes to filenames, you may wish to follow a file naming convention dictated by the client, or follow an internal file naming convention that indicates the project, type of media, asset number, and version number, for example, PROJECT_A_001_01. If there are multiple courses for one project, you may also want to add an arbitrary course number to keep tabs on which asset belongs to which course. Once a file naming convention has been determined, these filenames can be managed within a spreadsheet, housed within the main 200 > Media folder.

The basic goal of this recommended folder structure is to organize your course assets and break them into three groups to further help with the organization. If this folder structure sounds like it might be functional for your purposes, go ahead and download a ready-made version of the folder structure.

Storyboarding and rapid prototyping

Storyboarding and rapid prototyping will likely make their way into your development glossary, if they haven't already, so they're important concepts to discuss when it comes to streamlining your development. Through experience, you'll learn how each of these concepts can help you become more efficient, and this section will discuss some benefits and detriments of both.

Storyboarding is a process wherein the sequence of an e-learning project is laid out visually or textually. This process allows instructional designers to lay out the e-learning project to indicate screens, topics, teaching points, onscreen text, and media descriptions. Storyboards are not limited to just those elements (there are many variations), but the previously mentioned elements are the ones most commonly represented within a storyboard. Other elements may include the audio narration script, assessment items, high-level learning objectives, filenames, source/reference images, or screenshots illustrating the anticipated media asset or screen to be developed. The good thing about storyboarding is that it allows you to organize the content and provides documentation that may be reviewed prior to entry into an authoring environment.
Storyboarding provides subject matter experts with a great opportunity to iron out textual content and ensure accuracy, and it can help developers by reducing small text changes once in the authoring environment. These small changes are just that, small, but they add up quickly and can throw a wrench into your well-oiled, efficient development machine. Storyboarding also has its downsides. It is an extra step in the development process and may be perceived by potential clients as an additional and unnecessary expense. Because storyboards do not depict the final product, reviewers may have difficulty reviewing the content, as they cannot contextualize it without being able to see the final product. This is especially true when it comes to reviewing a storyboard involving complex branching scenarios.

Rapid prototyping, on the other hand, involves working within the authoring environment, in this case Articulate Storyline, to develop your e-learning project slide by slide. This may occur when developing an initial prototype, but it may also occur throughout the lifecycle of the project as a means of eliminating the storyboarding step from the development process. With rapid prototyping, reviewers have the added context of visuals and functionality. They are able to review a proposed version of the end product, and as such, their review comments may become more streamlined and their review may take less time to conduct. However, reviewers may also get overloaded by visual stimuli, which may hamper their ability to review for content accuracy. Additionally, rapid prototyping may become less rapid when it comes to revising complex interactions.

In both situations, there are clear advantages and disadvantages, so a best practice is to determine an appropriate way forward with regard to development and understand which process best suits the project you are authoring.

Streamlining your development

Storyline provides you with many ways to streamline your development. A sampling of topics discussed includes the following:

- Setting up auto-save
- Setting up defaults
- Keyboard shortcuts
- Dockable panels
- Using the format painter
- Using the eyedropper
- Cue points
- Duplicating objects
- Naming objects

Summary

This article introduced you to the concept of pushing Articulate Storyline 2 to its limits, provided you with some tips and tricks when it comes to best practices and being mindful of reusability, identified a functional folder structure and explained the importance that organization will play in your Storyline development, explained the difference between storyboarding and rapid prototyping, and gave you a taste of some topics that may help you streamline your development process. You are now armed with all of my best advice for staying productive and organized, and you should be ready to start a new Storyline project!


Using Node.js dependencies in NW.js

Max Gfeller
19 Nov 2015
6 min read
NW.js (formerly known as node-webkit) is a framework that makes it possible to write multi-platform desktop applications using the technologies you already know well: HTML, CSS, and JavaScript. It bundles a Chromium and a Node (or io.js) runtime and provides additional APIs to implement native-like features such as real menu bars or desktop notifications.

A big advantage of having a Node/io.js runtime is being able to make use of all the modules that are available to Node developers. We can distinguish three different types of modules that we can use.

Internal modules

Node comes with a solid set of internal modules, such as fs or http. It is built on the UNIX philosophy of doing only one thing and doing it very well, so you won't find too much functionality in Node core. The following modules are shipped with Node:

- assert: used for writing unit tests
- buffer: raw memory allocation used for dealing with binary data
- child_process: spawn and use child processes
- cluster: take advantage of multi-core systems
- crypto: cryptographic functions
- dgram: use datagram sockets
- dns: perform DNS lookups
- domain: handle multiple different IO operations as a single group
- events: provides the EventEmitter
- fs: operations on the file system
- http: perform http queries and create http servers
- https: perform https queries and create https servers
- net: asynchronous network wrapper
- os: basic operating-system related utility functions
- path: handle and transform file paths
- punycode: deal with punycode domain names
- querystring: deal with query strings
- stream: abstract interface implemented by various objects in Node
- timers: setTimeout, setInterval, and so on
- tls: encrypted stream communication
- url: URL resolution and parsing
- util: various utility functions
- vm: sandbox to run Node code in
- zlib: bindings to Gzip/Gunzip, Deflate/Inflate, and DeflateRaw/InflateRaw

These are documented in the official Node API documentation and can all be used within NW.js. Please take care that Chromium already defines a crypto global, so when using the crypto module in the webkit context you should assign it to a variable like crypt rather than crypto:

```js
var crypt = require('crypto');
```

The following example shows how we would read a file and use its contents using Node's modules:

```js
var fs = require('fs');

fs.readFile(__dirname + '/file.txt', function (error, contents) {
  if (error) return console.error(error);
  console.log(contents);
});
```

3rd party JavaScript modules

Soon after Node itself was started, Isaac Schlueter, a friend of creator Ryan Dahl, started working on a package manager for Node. As Node's popularity reached new highs, a lot of packages were added to the npm registry, and it soon became the fastest growing package registry. At the time of this writing, there are over 169,000 packages in the registry and nearly two billion downloads each month. The npm registry is now also slowly evolving from being "only" a package manager for Node into a package manager for all things JavaScript. Most of these packages can also be used inside NW.js applications.

Your application's dependencies are defined in your package.json file, in the dependencies (or devDependencies) section:

```json
{
  "name": "my-cool-application",
  "version": "1.0.0",
  "dependencies": {
    "lodash": "^3.1.2"
  },
  "devDependencies": {
    "uglify-js": "^2.4.3"
  }
}
```

In the dependencies field you find all the modules that are required to run your application, while the devDependencies field lists only the modules required while developing the application.
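To illustrate (a hypothetical sketch, not from the original article; it assumes the lodash dependency declared above has already been installed into node_modules/, as covered in the next section), using such a module takes a single require() call:

```js
// Hypothetical usage sketch: assumes lodash (declared in "dependencies"
// above) has been installed into node_modules/.
var _ = require('lodash');

// _.chunk splits an array into groups of the given size.
var pages = _.chunk(['a', 'b', 'c', 'd'], 2);
console.log(pages); // [ [ 'a', 'b' ], [ 'c', 'd' ] ]
```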
Installing a module is fairly easy, and the best way to do it is with the npm install command:

```
npm install lodash --save
```

The install command directly downloads the latest version into your node_modules/ folder. The --save flag means that this dependency should also be written directly into your package.json file. You can also define a specific version to download by using the following notation:

```
npm install lodash@1.*
```

or even:

```
npm install lodash@1.1
```

How does Node's require() work?

You need to deal with two different contexts in NW.js, and it is really important to always know which context you are currently in, as it changes the way the require() function works. When you load a module using Node's require() function, this module runs in the Node context. That means you have the same globals as you would have in a pure Node script, but you can't access the globals from the browser, for example, document or window. If you write JavaScript code inside a <script> tag in your HTML, or if you include a script inside your HTML using <script src="">, then this code runs in the webkit context. There you have access to all the browser globals.

In the webkit context

The require() function is a module loading system defined by the CommonJS Modules 1.0 standard and directly implemented in Node core. To offer the same smooth experience, you get a modified require() method that works in webkit, too. Whenever you want to include a certain module from the webkit context, for example, directly from an inline script in your index.html file, you need to specify the path directly from the root of your project. Let's assume the following folder structure:

```
- app/
  - app.js
  - foo.js
  - bar.js
- index.html
```

If you want to include the app/app.js file directly in your index.html, you need to include it like this:

```html
<script type="text/javascript">
  var app = require('./app/app.js');
</script>
```

If you need to use a module from npm, you can simply require() it, and NW.js will figure out where the corresponding node_modules/ folder is located.

In the node context

In Node, when you use relative paths, require() will always try to locate the module relative to the file you are requiring it from. If we take the example from above, we could require the foo.js module from app.js like this:

```js
var foo = require('./foo');
```

About the Author

Max Gfeller is a passionate web developer and JavaScript enthusiast. He is making awesome things at Cylon and can be found on Twitter @mgefeller.


Eloquent… without Laravel!

Packt
23 Jul 2015
9 min read
In this article by Francesco Malatesta, the author of the book Learning Laravel's Eloquent, we will learn everything about Eloquent, starting from the very basics and going through models, relationships, and other topics.

You have probably started to like Eloquent and think about implementing it in your next project. In fact, creating an application without a single SQL query is tempting. Maybe you have also shown it to your boss and convinced him/her to use it in your next production project. However, there is a little problem. Yeah, the next project isn't so new. It already exists, and, despite everything, it doesn't use Laravel! You start to shiver. This is so sad, because you spent the last week studying this new ORM, a really cool one.

There is always a solution! You are a developer! Also, the solution is not so hard to find. If you want, you can use Eloquent without Laravel. Actually, Laravel is not a monolithic framework. It is made up of several separate parts, which are combined together to build something greater. However, nothing prevents you from using only selected packages in another application.

So, what are we going to see in this article? First of all, we will explore the structure of the database package and see what is inside it. Then, you will learn how to install the illuminate/database package separately for your project and how to configure it for the first use. Then, you will encounter some examples. First of all, we will look at the Eloquent ORM. You will learn how to define models and use them. Having done this, as a little extra, I will show you how to use the Query Builder (remember that the illuminate/database package isn't just Eloquent). Maybe you would also enjoy the Schema Builder class. I will cover it, don't worry! We will cover the following:

- Exploring the directory structure
- Installing and configuring the database package
- Using the ORM
- Using the Query and Schema Builders
- Summary

Exploring the directory structure

As I mentioned before, the key step in order to use Eloquent in your application without Laravel is to use the illuminate/database package. So, before we install it, let's examine it a little. You can see the package contents here: https://github.com/illuminate/database. This is what you will probably see:

| Folder | Description |
| --- | --- |
| Capsule | The capsule manager is a fundamental component. It instantiates the service container and loads some dependencies. |
| Connectors | The database package can communicate with various DB systems, for instance, SQLite, MySQL, or PostgreSQL. Every type of database has its own connector. This is the folder in which you will find them. |
| Console | The database package isn't just Eloquent with a bunch of connectors. In this specific folder, you will find everything related to console commands, such as artisan db:seed or artisan migrate. |
| Eloquent | Every single Eloquent class is placed here. |
| Migrations | Don't confuse this with the Console folder. Every class related to migrations is stored here. When you type artisan migrate in your terminal, you are calling a class that is placed here. |
| Query | The Query Builder is placed here. |
| Schema | Everything related to the Schema Builder is placed here. |

In the main folder, you will also find some other files. However, don't worry, as you don't need to know what they are.
If you open the composer.json file, take a look at the "require" section:

```json
"require": {
    "php": ">=5.4.0",
    "illuminate/container": "5.1.*",
    "illuminate/contracts": "5.1.*",
    "illuminate/support": "5.1.*",
    "nesbot/carbon": "~1.0"
}
```

As you can see, the database package has some prerequisites that you can't avoid. However, the container is quite small, and the same goes for contracts (just some interfaces) and illuminate/support. Eloquent uses Carbon (https://github.com/briannesbitt/Carbon) to deal with dates in a smarter way. So, if you are seeing this for the first time and you are confused, don't worry! Everything is all right. Now that you know what you can find in this package, let's see how to install it and configure it for the first time.

Installing and configuring the database package

Let's start with the setup. First of all, we will install the package using Composer, as usual. After that, we will configure the capsule manager in order to get started.

Installing the package

Installing the illuminate/database package is really easy. All you have to do is add "illuminate/database" to the "require" section of your composer.json file, like this:

```json
"require": {
    "illuminate/database": "5.0.*"
}
```

Then type composer update into your terminal and wait for a few seconds. Another way is to include it with the shortcut in your project folder, obviously from the terminal:

```
composer require illuminate/database
```

No matter which method you choose, you have just installed the package.

Configuring the package

Time to use the capsule manager! In your project, you will use something like this to get started:

```php
use Illuminate\Database\Capsule\Manager as Capsule;

$capsule = new Capsule;

$capsule->addConnection([
    'driver'    => 'mysql',
    'host'      => 'localhost',
    'database'  => 'database',
    'username'  => 'root',
    'password'  => 'password',
    'charset'   => 'utf8',
    'collation' => 'utf8_unicode_ci',
    'prefix'    => '',
]);

// Set the event dispatcher used by Eloquent models... (optional)
use Illuminate\Events\Dispatcher;
use Illuminate\Container\Container;

$capsule->setEventDispatcher(new Dispatcher(new Container));
```

The config syntax I used is exactly the same as you can find in the config/database.php configuration file. The only difference is that this time you are explicitly using an instance of the capsule manager in order to do everything. In the second part of the code, I am setting up the event dispatcher. You must do this if events are required by your project. However, events are not included by default in this package, so you will have to manually add the illuminate/events dependency to your composer.json file.

Now, the final step! Add this code to your setup file:

```php
// Make this Capsule instance available globally via static methods... (optional)
$capsule->setAsGlobal();

// Setup the Eloquent ORM... (optional; unless you've used setEventDispatcher())
$capsule->bootEloquent();
```

With setAsGlobal() called on the capsule manager, you can set it as a global component in order to use it with static methods. You may like it or not; the choice is yours. The final line starts up Eloquent, so you will need it. However, this is also an optional instruction. In some situations, you may need the Query Builder only. After that, there is nothing else to do! Your application is now configured with the database package (and Eloquent)!

Using the ORM

Using the Eloquent ORM in a non-Laravel application is not a big change. All you have to do is declare your model as you are used to doing.
Then, you need to call it and use it as you are used to. Here is a perfect example of what I am talking about:

```php
use Illuminate\Database\Eloquent\Model;

class Book extends Model
{
    // some attributes here...
    protected $table = 'my_books_table';

    // some scopes here...
    public function scopeNewest()
    {
        // query here...
    }
}
```

Exactly as you did with Laravel, the package you are using is the same. So, no worries! If you want to use the model you just created, then use the following:

```php
$books = Book::newest()->take(5)->get();
```

This also applies to relationships, observers, and so on. Everything is the same. In order to use the database package and ORM exactly as you would in Laravel, remember to set up the project structure in a way that follows the PSR-4 autoloading convention.

Using the Query and Schema Builders

It's not just about the ORM; with the database package, you can also use the Query and the Schema Builders. Let's discover how!

The Query Builder

The Query Builder is also very easy to use. The only difference, this time, is that you are passing through the capsule manager object, like this:

```php
$books = Capsule::table('books')
    ->where('title', '=', "Michael Strogoff")
    ->first();
```

However, the result is still the same. Also, if you like the DB facade in Laravel, you can use the capsule manager class in the same way:

```php
$book = Capsule::select('select title, pages_count from books where id = ?', array(12));
```

The Schema Builder

Now, without Laravel, you don't have migrations. However, you can still use the Schema Builder without Laravel, like this:

```php
Capsule::schema()->create('books', function ($table) {
    $table->increments('id');
    $table->string('title', 30);
    $table->integer('pages_count');
    $table->decimal('price', 5, 2);
    $table->text('description');
    $table->timestamps();
});
```

Previously, you used to call the create() method of the Schema facade. This time it is a little different: you will use the create() method chained to the schema() method of the Capsule class. Obviously, you can use any Schema class method this way. For instance, you could call something like the following:

```php
Capsule::schema()->table('books', function ($table) {
    $table->string('title', 50)->change();
    $table->decimal('special_price', 5, 2);
});
```

And you are good to go! Remember that if you want to unlock some Schema Builder-specific features, you will need to install other dependencies. For example, do you want to rename a column? You will need the doctrine/dbal dependency package.

Summary

I decided to add this article because many people ask me how to use Eloquent without Laravel. Mostly because they like the framework, but they can't migrate an already started project in its entirety. Also, I think that it's cool to know, in a certain sense, what you can find under the hood. It is always just about curiosity. Curiosity opens new paths, and you have a choice to solve a problem in a new and more elegant way. In these few pages, I just scratched the surface. I want to give you some advice: explore the code. The best way to write good code is to read good code.


Introduction to JavaScript

Packt
10 Nov 2016
15 min read
In this article by Simon Timms, the author of the book Mastering JavaScript Design Patterns - Second Edition, we will explore the history of JavaScript and how it came to be the important language that it is today.

JavaScript is an evolving language that has come a long way from its inception. Possibly more than any other programming language, it has grown and changed with the growth of the World Wide Web. As JavaScript has evolved and grown in importance, the need to apply rigorous methods to its construction has also grown.

The road to JavaScript

We'll never know how language first came into being. Did it slowly evolve from a series of grunts and guttural sounds made during grooming rituals? Perhaps it developed to allow mothers and their offspring to communicate. Both of these are theories, all but impossible to prove. Nobody was around to observe our ancestors during that important period. In fact, the general lack of empirical evidence led the Linguistic Society of Paris to ban further discussions on the topic, seeing it as unsuitable for serious study.

The early days

Fortunately, programming languages have developed in recent history, and we've been able to watch them grow and change. JavaScript has one of the more interesting histories of modern programming languages. During what must have been an absolutely frantic 10 days in May of 1995, a programmer at Netscape wrote the foundation for what would grow up to be modern JavaScript.

At the time, Netscape was involved in the first of the browser wars with Microsoft. The vision for Netscape was far grander than simply developing a browser. They wanted to create an entire distributed operating system making use of Sun Microsystems' recently released Java programming language. Java was a much more modern alternative to the C++ Microsoft was pushing. However, Netscape didn't have an answer to Visual Basic. Visual Basic was an easier-to-use programming language, targeted at developers with less experience. It avoided some of the difficulties around memory management that make C and C++ notoriously difficult to program. Visual Basic also avoided strict typing and overall allowed more leeway.

Brendan Eich was tasked with developing Netscape's repartee to VB. The project was initially codenamed Mocha, but was renamed LiveScript before the Netscape 2.0 beta was released. By the time the full release was available, Mocha/LiveScript had been renamed JavaScript to tie it into the Java applet integration. Java applets were small applications that ran in the browser. They had a different security model from the browser itself and so were limited in how they could interact with both the browser and the local system. It is quite rare to see applets these days, as much of their functionality has become part of the browser. Java was riding a popular wave at the time, and any relationship to it was played up.

The name has caused much confusion over the years. JavaScript is a very different language from Java. JavaScript is an interpreted language with loose typing, which runs primarily on the browser. Java is a language that is compiled to bytecode, which is then executed on the Java Virtual Machine. It has applicability in numerous scenarios, from the browser (through the use of Java applets), to the server (Tomcat, JBoss, and so on), to full desktop applications (Eclipse, OpenOffice, and so on). In most laypersons' minds, the confusion remains.
JavaScript turned out to be really quite useful for interacting with the web browser. It was not long until Microsoft had also adopted JavaScript into their Internet Explorer to complement VBScript. The Microsoft implementation was known as JScript. By late 1996, it was clear that JavaScript was going to be the winning web language for the near future. In order to limit the amount of language deviation between implementations, Sun and Netscape began working with the European Computer Manufacturers Association (ECMA) to develop a standard to which future versions of JavaScript would need to comply. The standard was released very quickly (very quickly in terms of how rapidly standards organizations move), in July of 1997. On the off chance that you have not seen enough names yet for JavaScript, the standard version was called ECMAScript, a name which still persists in some circles.

Unfortunately, the standard only specified the very core parts of JavaScript. With the browser wars raging, it was apparent that any vendor that stuck with only the basic implementation of JavaScript would quickly be left behind. At the same time, there was much work going on to establish a standard Document Object Model (DOM) for browsers. The DOM was, in effect, an API for a web page that could be manipulated using JavaScript. For many years, every JavaScript script would start by attempting to determine the browser on which it was running. This would dictate how to address elements in the DOM, as there were dramatic deviations between each browser. The spaghetti of code that was required to perform simple actions was legendary. I remember reading a year-long 20-part series on developing a Dynamic HTML (DHTML) drop-down menu such that it would work on both Internet Explorer and Netscape Navigator. The same functionality can now be achieved with pure CSS, without even having to resort to JavaScript.

DHTML was a popular term in the late 1990s and early 2000s. It really referred to any web page that had some sort of dynamic content that was executed on the client side. It has fallen out of use, as the popularity of JavaScript has made almost every page a dynamic one.

Fortunately, the efforts to standardize JavaScript continued behind the scenes. Versions 2 and 3 of ECMAScript were released in 1998 and 1999. It looked like there might finally be some agreement between the various parties interested in JavaScript. Work began in early 2000 on ECMAScript 4, which was to be a major new release.

A pause

Then, disaster struck. The various groups involved in the ECMAScript effort had major disagreements about the direction JavaScript was to take. Microsoft seemed to have lost interest in the standardization effort. It was somewhat understandable, as it was around that time that Netscape self-destructed and Internet Explorer became the de-facto standard. Microsoft implemented parts of ECMAScript 4 but not all of it. Others implemented more fully-featured support, but without the market leader on board, developers didn't bother using them. Years passed without consensus and without a new release of ECMAScript. However, as frequently happens, the evolution of the Internet could not be stopped by a lack of agreement between major players. Libraries such as jQuery, Prototype, Dojo, and Mootools papered over the major differences in browsers, making cross-browser development far easier. At the same time, the amount of JavaScript used in applications increased dramatically.
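To give a flavor of what those libraries were papering over, here is a hedged reconstruction of the era's browser-sniffing boilerplate (an illustrative sketch, not code from the original article):

```js
// Illustrative reconstruction of late-1990s cross-browser DOM access.
// document.all was Internet Explorer-specific, while document.layers
// existed only in Netscape Navigator 4.
function getElement(id) {
  if (document.getElementById) {   // standards-compliant browsers
    return document.getElementById(id);
  } else if (document.all) {       // Internet Explorer 4
    return document.all[id];
  } else if (document.layers) {    // Netscape Navigator 4
    return document.layers[id];
  }
  return null;
}
```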
The way of GMail

The turning point was, perhaps, the release of Google's GMail application in 2004. Although XMLHTTPRequest, the technology behind Asynchronous JavaScript and XML (AJAX), had been around for about five years when GMail was released, it had not been well used. When GMail was released, I was totally knocked off my feet by how smooth it was. We've grown used to applications that avoid full reloads, but at the time, it was a revolution. To make applications like that work, a great deal of JavaScript is needed.

AJAX is a method by which small chunks of data are retrieved from the server by a client instead of refreshing the entire page. The technology allows for more interactive pages that avoid the jolt of full page reloads.

The popularity of GMail was the trigger for a change that had been brewing for a while. Increasing JavaScript acceptance and standardization pushed us past the tipping point for the acceptance of JavaScript as a proper language. Up until that point, much of the use of JavaScript was for performing minor changes to the page and for validating form input. I joke with people that in the early days of JavaScript, the only function name that was used was Validate().

Applications such as GMail that have a heavy reliance on AJAX and avoid full page reloads are known as Single Page Applications, or SPAs. By minimizing the changes to the page contents, users have a more fluid experience. By transferring only a JavaScript Object Notation (JSON) payload instead of HTML, the amount of bandwidth required is also minimized. This makes applications appear to be snappier. In recent years, there have been great advances in frameworks that ease the creation of SPAs. AngularJS, backbone.js, and ember are all Model View Controller-style frameworks. They have gained great popularity in the past two to three years and provide some interesting use of patterns. These frameworks are the evolution of years of experimentation with JavaScript best practices by some very smart people.

JSON is a human-readable serialization format for JavaScript. It has become very popular in recent years, as it is easier and less cumbersome than previously popular formats such as XML. It lacks many of the companion technologies and strict grammatical rules of XML, but makes up for it in simplicity.
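Put together, the AJAX pattern and a JSON payload look something like the following (a hypothetical sketch, not code from the original article; the '/api/messages' endpoint and the 'inbox' element are assumed examples):

```js
// Hypothetical sketch of the AJAX + JSON pattern described above.
var xhr = new XMLHttpRequest();
xhr.open('GET', '/api/messages');
xhr.onload = function () {
  if (xhr.status === 200) {
    // Parse a small JSON payload instead of reloading the whole page
    var messages = JSON.parse(xhr.responseText);
    document.getElementById('inbox').textContent =
      messages.length + ' new messages';
  }
};
xhr.send();
```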
At the same time as the frameworks using JavaScript are evolving, the language is too. 2015 saw the release of a much-vaunted new version of JavaScript that had been under development for some years. Initially called ECMAScript 6, the final name ended up being ECMAScript-2015. It brought with it some great improvements to the ecosystem, and browser vendors are rushing to adopt the standard. Because of the complexity of adding new language features to a code base, coupled with the fact that not everybody is on the cutting edge of browsers, a number of other languages that transcompile to JavaScript are gaining popularity. CoffeeScript is a Python-like language that strives to improve the readability and brevity of JavaScript. Developed by Google, Dart is being pushed as an eventual replacement for JavaScript; its construction addresses some of the optimizations that are impossible in traditional JavaScript, and until a Dart runtime is sufficiently popular, Google provides a Dart-to-JavaScript transcompiler. TypeScript is a Microsoft project that adds some ECMAScript-2015 and even some ECMAScript-201X syntax, as well as an interesting typing system, to JavaScript. It aims to address some of the issues that large JavaScript projects present.

The point of this discussion about the history of JavaScript is twofold: first, it is important to remember that languages do not develop in a vacuum. Both human languages and computer programming languages mutate based on the environments in which they are used. It is a popularly held belief that the Inuit people have a great number of words for "snow", as it was so prevalent in their environment. This may or may not be true, depending on your definition of the word and exactly who makes up the Inuit people. There are, however, a great number of examples of domain-specific lexicons evolving to meet the requirements for exact definitions in narrow fields. One need look no further than a specialty cooking store to see the great number of variants of items which a layperson such as myself would call a pan.

The Sapir–Whorf hypothesis is a hypothesis within the linguistics domain, which suggests that not only is language influenced by the environment in which it is used, but also that language influences its environment. Also known as linguistic relativity, the theory is that one's cognitive processes differ based on how the language is constructed. Cognitive psychologist Keith Chen has proposed a fascinating example of this. In a very highly-viewed TED talk, Dr. Chen suggested that there is a strong positive correlation between languages that lack a future tense and those that have high savings rates (https://www.ted.com/talks/keith_chen_could_your_language_affect_your_ability_to_save_money/transcript). The hypothesis at which Dr. Chen arrived is that when your language does not have a strong sense of connection between the present and the future, this leads to more reckless behavior in the present. Thus, understanding the history of JavaScript puts one in a better position to understand how and where to make use of JavaScript.

The second reason I explored the history of JavaScript is that it is absolutely fascinating to see how quickly such a popular tool has evolved. At the time of writing, it has been about 20 years since JavaScript was first built, and its rise to popularity has been explosive. What more exciting thing is there than to work in an ever-evolving language?

JavaScript everywhere

Since the GMail revolution, JavaScript has grown immensely. The renewed browser wars, which pit Internet Explorer and Edge against Chrome and against Firefox, have led to a number of very fast JavaScript interpreters. Brand new optimization techniques have been deployed, and it is not unusual to see JavaScript compiled to machine-native code for the added performance it gains. However, as the speed of JavaScript has increased, so has the complexity of the applications built using it.

JavaScript is no longer simply a language for manipulating the browser, either. The JavaScript engine behind the popular Chrome browser has been extracted and is now at the heart of a number of interesting projects, such as Node.js. Node.js started off as a highly asynchronous method of writing server-side applications. It has grown greatly and has a very active community supporting it. A wide variety of applications have been built using the Node.js runtime. Everything from build tools to editors has been built on the base of Node.js.
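A first Node.js program shows how little ceremony server-side JavaScript needs (a minimal sketch using only the standard http module; the port number is an arbitrary example):

```js
// Minimal Node.js HTTP server using only the standard http module.
var http = require('http');

http.createServer(function (request, response) {
  response.writeHead(200, { 'Content-Type': 'text/plain' });
  response.end('Hello from server-side JavaScript\n');
}).listen(3000);

console.log('Listening on http://localhost:3000');
```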
Recently, the JavaScript engine for Microsoft Edge, ChakraCore, was also open sourced and can be embedded in Node.js as an alternative to Google's V8. SpiderMonkey, the Firefox equivalent, is also open source and is making its way into more tools.

JavaScript can even be used to control microcontrollers. The Johnny-Five framework is a programming framework for the very popular Arduino. It brings a much simpler approach to programming the devices than the traditional low-level languages usually used for them. Using JavaScript and Arduino opens up a world of possibilities, from building robots to interacting with real-world sensors.

All of the major smartphone platforms (iOS, Android, and Windows Phone) have an option to build applications using JavaScript. The tablet space is much the same, with tablets supporting programming using JavaScript. Even the latest version of Windows provides a mechanism for building applications using JavaScript.

JavaScript is becoming one of the most important languages in the world. Although language usage statistics are notoriously difficult to calculate, every single source that attempts to develop a ranking puts JavaScript in the top 10:

| Language index | Rank of JavaScript |
| --- | --- |
| Langpop.com | 4 |
| Statisticbrain.com | 4 |
| Codeval.com | 6 |
| TIOBE | 8 |

What is more interesting is that most of these rankings suggest that the usage of JavaScript is on the rise. The long and short of it is that JavaScript is going to be a major language in the next few years. More and more applications are being written in JavaScript, and it is the lingua franca for any sort of web development. Jeff Atwood, the developer of the popular Stack Overflow website, created Atwood's Law regarding the wide adoption of JavaScript:

"Any application that can be written in JavaScript, will eventually be written in JavaScript" – Atwood's Law, Jeff Atwood

This insight has been proven to be correct time and time again. There are now compilers, spreadsheets, word processors—you name it—all written in JavaScript.

As the applications that make use of JavaScript increase in complexity, the developer may stumble upon many of the same issues as have been encountered in traditional programming languages: how can we write this application to be adaptable to change? This brings us to the need for properly designing applications. No longer can we simply throw a bunch of JavaScript into a file and hope that it works properly. Nor can we rely on libraries such as jQuery to save us. Libraries can only provide additional functionality and contribute nothing to the structure of an application. At least some attention must now be paid to how to construct the application to be extensible and adaptable. The real world is ever-changing, and any application that is unable to change to suit the changing world is likely to be left in the dust. Design patterns provide some guidance in building adaptable applications, which can shift with changing business needs.

Summary

JavaScript has an interesting history and is really coming of age. With server-side JavaScript taking off and large JavaScript applications becoming common, there is a need for more diligence in building JavaScript applications.
For more information on JavaScript, you can check out other books by Packt, mentioned as follows:

- Mastering JavaScript Promises: https://www.packtpub.com/application-development/mastering-javascript-promises
- Mastering JavaScript High Performance: https://www.packtpub.com/web-development/mastering-javascript-high-performance
- JavaScript: Functional Programming for JavaScript Developers: https://www.packtpub.com/web-development/javascript-functional-programming-javascript-developers


Object-Oriented JavaScript

Packt
21 Dec 2016
9 min read
In this article by Ved Antani, the author of the book Object Oriented JavaScript - Third Edition, we will learn what you need to know about object-oriented JavaScript. In this article, we will cover the following topics:

- ECMAScript 5
- ECMAScript 6
- Object-oriented programming

ECMAScript 5

One of the most important milestones in ECMAScript revisions was ECMAScript 5 (ES5), officially accepted in December 2009. The ECMAScript 5 standard is implemented and supported on all major browsers and server-side technologies. ES5 was a major revision because, apart from several important syntactic changes and additions to the standard libraries, ES5 also introduced several new constructs in the language. For instance, ES5 introduced some new objects and properties, and also the so-called strict mode. Strict mode is a subset of the language that excludes deprecated features. The strict mode is opt-in and not required, meaning that if you want your code to run in the strict mode, you will declare your intention using (once per function, or once for the whole program) the following string:

```js
"use strict";
```

This is just a JavaScript string, and it's OK to have strings floating around unassigned to any variable. As a result, older browsers that don't speak ES5 will simply ignore it, so this strict mode is backwards compatible and won't break older browsers. For backwards compatibility, all the examples in this book work in ES3, but at the same time, all the code in the book is written so that it will run without warnings in ES5's strict mode.

Strict mode in ES6

While strict mode is optional in ES5, all ES6 modules and classes are strict by default. As you will see soon, most of the code we write in ES6 resides in a module; hence, strict mode is enforced by default. However, it is important to understand that all other constructs do not have implicit strict mode enforced. There were efforts to make newer constructs, such as arrow and generator functions, also enforce strict mode, but it was later decided that doing so would result in very fragmented language rules and code.

ECMAScript 6

The ECMAScript 6 revision took a long time to finish and was finally accepted on 17th June, 2015. ES6 features are slowly becoming part of major browsers and server technologies. It is possible to use transpilers to compile ES6 to ES5 and use the code in environments that do not yet support ES6 completely. ES6 substantially upgrades JavaScript as a language and brings in very exciting syntactical changes and language constructs. Broadly, there are two kinds of fundamental changes in this revision of ECMAScript, which are as follows:

- Improved syntax for existing features and additions to the standard library; for example, classes and promises
- New language features; for example, generators

ES6 allows you to think differently about your code. New syntax changes can let you write code that is cleaner, easier to maintain, and does not require special tricks. The language itself now supports several constructs that required third-party modules earlier. Language changes introduced in ES6 need a serious rethink of the way we have been coding in JavaScript. A note on the nomenclature: ECMAScript 6, ES6, and ECMAScript 2015 are the same thing, and the names are used interchangeably.
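To make the flavor of these changes concrete, here is a small ES6 sketch (a hypothetical example, not taken from the book):

```js
// Hypothetical ES6 sketch: classes, arrow functions, and template strings.
// Code inside an ES6 class body runs in strict mode by default, as noted above.
class Bird {
  constructor(name) {
    this.name = name;
  }
  describe() {
    return `${this.name} is a bird`;
  }
}

const birds = ['hummingbird', 'eagle'].map(name => new Bird(name));
birds.forEach(bird => console.log(bird.describe()));
```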
Browser support for ES6

The majority of browsers and server frameworks are on their way toward implementing ES6 features. You can check out what is supported and what is not at http://kangax.github.io/compat-table/es6/. Though ES6 is not fully supported on all the browsers and server frameworks, we can start using almost all features of ES6 with the help of transpilers. Transpilers are source-to-source compilers. ES6 transpilers allow you to write code in ES6 syntax and compile/transform it into equivalent ES5 syntax, which can then be run on browsers that do not support the entire range of ES6 features. The de facto ES6 transpiler at the moment is Babel.

Object-oriented programming

Let's take a moment to review what people mean when they say object-oriented, and what the main features of this programming style are. Here's a list of some concepts that are most often used when talking about object-oriented programming (OOP):

- Object, method, and property
- Class
- Encapsulation
- Inheritance
- Polymorphism

Let's take a closer look at each one of these concepts. If you're new to the object-oriented programming lingo, these concepts might sound too theoretical, and you might have trouble grasping or remembering them from one reading. Don't worry, it does take a few tries, and the subject can be a little dry at a conceptual level.

Objects

As the name object-oriented suggests, objects are important. An object is a representation of a thing (someone or something), and this representation is expressed with the help of a programming language. The thing can be anything—a real-life object, or a more convoluted concept. Taking a common object, a cat, for example, you can see that it has certain characteristics—color, name, weight, and so on—and can perform some actions—meow, sleep, hide, escape, and so on. The characteristics of the object are called properties in OOP-speak, and the actions are called methods.

Classes

In real life, similar objects can be grouped based on some criteria. A hummingbird and an eagle are both birds, so they can be classified as belonging to some made-up Birds class. In OOP, a class is a blueprint or a recipe for an object. Another name for object is instance, so we can say that the eagle is one concrete instance of the general Birds class. You can create different objects using the same class, because a class is just a template, while the objects are concrete instances based on the template.

There's a difference between JavaScript and the classic OO languages such as C++ and Java. You should be aware right from the start that in JavaScript, there are no classes; everything is based on objects. JavaScript has the notion of prototypes, which are also objects. In a classic OO language, you'd say something like—create a new object for me called Bob, which is of class Person. In a prototypal OO language, you'd say—I'm going to take this object called Bob's dad that I have lying around (on the couch in front of the TV?) and reuse it as a prototype for a new object that I'll call Bob.
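In code, that prototypal sentence might look like this (a hypothetical sketch, not an example from the book):

```js
// Hypothetical sketch of prototypal thinking: reuse an existing object
// as the prototype for a new one, instead of instantiating a class.
var bobsDad = {
  lastName: 'Smith',
  talk: function () {
    return 'My last name is ' + this.lastName;
  }
};

// Object.create returns a new object whose prototype is bobsDad
var bob = Object.create(bobsDad);
bob.firstName = 'Bob';

console.log(bob.talk()); // inherited from bobsDad: "My last name is Smith"
```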
Encapsulation
Encapsulation is another OOP-related concept, which illustrates the fact that an object contains (encapsulates) the following:

Data (stored in properties)
The means to do something with the data (using methods)

One other term that goes together with encapsulation is information hiding. This is a rather broad term and can mean different things, but let's see what people usually mean when they use it in the context of OOP. Imagine an object, say, an MP3 player. You, as the user of the object, are given some interface to work with, such as buttons and a display. You use the interface in order to get the object to do something useful for you, like playing a song. How exactly the device works on the inside you don't know and, most often, don't care. In other words, the implementation of the interface is hidden from you. The same thing happens in OOP when your code uses an object by calling its methods. It doesn't matter whether you coded the object yourself or it came from a third-party library; your code doesn't need to know how the methods work internally. In compiled languages, you can't actually read the code that makes an object work. In JavaScript, because it's an interpreted language, you can see the source code, but the concept is still the same: you work with the object's interface without worrying about its implementation. Another aspect of information hiding is the visibility of methods and properties. In some languages, objects can have public, private, and protected methods and properties. This categorization defines the level of access the users of the object have. For example, only the methods of the same object have access to the private methods, while anyone has access to the public ones. In JavaScript, all methods and properties are public, but we'll see that there are ways to protect the data inside an object and achieve privacy.

Inheritance
Inheritance is an elegant way to reuse existing code. For example, you can have a generic object, Person, which has properties such as name and date_of_birth, and which also implements walk, talk, sleep, and eat functionality. Then you figure out that you need another object called Programmer. You could reimplement all the methods and properties that a Person object has, but it is smarter to just say that the Programmer object inherits from Person and save yourself some work. The Programmer object only needs to implement more specific functionality, such as a writeCode method, while reusing all of the Person object's functionality. In classical OOP, classes inherit from other classes, but in JavaScript, as there are no classes, objects inherit from other objects. When an object inherits from another object, it usually adds new methods to the inherited ones, thus extending the old object. Often, the following phrases can be used interchangeably: B inherits from A, and B extends A. Also, the object that inherits can pick one or more methods and redefine them, customizing them for its own needs. This way, the interface stays the same and the method name is the same, but when called on the new object, the method behaves differently. This way of redefining how an inherited method works is known as overriding.

Polymorphism
In the preceding example, a Programmer object inherited all of the methods of the parent Person object. This means that both objects provide a talk method, among others. Now imagine that somewhere in your code there's a variable called Bob, and it just so happens that you don't know whether Bob is a Person object or a Programmer object. You can still call the talk method on the Bob object and the code will work. This ability to call the same method on different objects, and have each of them respond in its own way, is called polymorphism.
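As a small, hedged sketch of my own (again, not from the book), here are inheritance, overriding, and polymorphism expressed with plain objects and prototypes:

    'use strict';

    var person = {
        talk: function () {
            return 'Hi, I am a person';
        },
        walk: function () {
            return 'walking...';
        }
    };

    // programmer inherits from person and extends it...
    var programmer = Object.create(person);
    programmer.writeCode = function () {
        return 'typing furiously';
    };

    // ...and overrides the inherited talk method.
    programmer.talk = function () {
        return 'Hi, I am a programmer';
    };

    // Polymorphism: we call talk without knowing which kind of
    // object bob is; each object responds in its own way.
    [person, programmer].forEach(function (bob) {
        console.log(bob.talk());
    });
    // "Hi, I am a person"
    // "Hi, I am a programmer"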
Summary
In this article, you learned how JavaScript came to be and where it is today. You were also introduced to ECMAScript 5 and ECMAScript 6, and we discussed some core object-oriented programming concepts.

Resources for Article:

Further resources on this subject:
Diving into OOP Principles [article]
Prototyping JavaScript [article]
Developing Wiki Seek Widget Using Javascript [article]
Introduction to ASP.NET Core Web API

Packt
07 Mar 2018
13 min read
In this article by Mithun Pattankar and Malendra Hurbuns, the authors of the book Mastering ASP.NET Web API, we will start with a quick recap of MVC. We will be looking at the following topics:

Quick recap of the MVC framework
Why Web APIs were incepted and how they evolved
Introduction to .NET Core
Overview of the ASP.NET Core architecture

(For more resources related to this topic, see here.)

Quick recap of the MVC framework
Model-View-Controller (MVC) is a powerful and elegant way of separating concerns within an application, and it applies itself extremely well to web applications. With ASP.NET MVC, it translates roughly as follows:

Models (M): These are the classes that represent the domain you are interested in. These domain objects often encapsulate data stored in a database, as well as code that manipulates the data and enforces domain-specific business logic. With ASP.NET MVC, this is most likely a data access layer of some kind, using a tool such as Entity Framework, NHibernate, or classic ADO.NET.
View (V): This is a template used to dynamically generate HTML.
Controller (C): This is a special class that manages the relationship between the View and the Model. It responds to user input, talks to the Model, and decides which view to render (if any). In ASP.NET MVC, this class is conventionally denoted by the suffix Controller.

Why Web APIs were incepted and how they evolved
Looking back to the days when the ASP.NET ASMX-based XML web service was widely used for building service-oriented applications, it was the easiest way to create a SOAP-based service that could be used by both .NET and non-.NET applications. It was available only over HTTP. Around 2006, Microsoft released Windows Communication Foundation (WCF). WCF was, and still is, a powerful technology for building SOA-based applications; it was a giant leap in the Microsoft .NET world. WCF was flexible enough to be configured as an HTTP service, a Remoting service, a TCP service, and so on. Using WCF contracts, we could keep the entire business logic code base the same and expose the service as HTTP-based or non-HTTP-based, via SOAP or non-SOAP. Until around 2010, ASMX-based XML web services and WCF services were widely used in client-server applications; in fact, everything was running smoothly. But developers in both the .NET and non-.NET communities started to feel the need for a completely new SOA technology for client-server applications. Some of the reasons were as follows:

With applications in production, the amount of data exchanged during communication started to explode, and transferring it over the network consumed significant bandwidth. SOAP, lightweight to some extent, started to show signs of payload growth; SOAP packets of a few KB were becoming data transfers of a few MB.
Consuming SOAP services led to huge application sizes because of WSDL and proxy generation. This was even worse when they were used in web applications.
Any change to a SOAP service meant repeating the proxy generation to consume it again. This wasn't an easy task for developers.
JavaScript-based web frameworks were being released and gaining ground as a much simpler way of doing web development, and consuming SOAP-based services was not an optimal fit.
Handheld devices such as tablets and smartphones were becoming popular. They ran more focused applications and needed a very lightweight, service-oriented approach.
Browser-based Single Page Applications (SPAs) were gaining ground very rapidly, and SOAP-based services were quite heavy for them.
Microsoft released REST-based WCF components that could be configured to respond in JSON or XML, but it was still WCF, a heavy technology to use.
Applications were no longer just large enterprise services; there was a need for more focused, lightweight services that could be up and running in a few days and were much easier to use.

Any developer who watched the evolution of SOA technologies such as ASMX, WCF, or anything SOAP-based felt the need for much lighter, HTTP-based services. HTTP-only, JSON-compatible, POCO-based lightweight services were the need of the hour, and the concept of the Web API started gaining momentum.

What is a Web API?
A Web API is a programmatic interface to a system that is accessed via standard HTTP methods and headers. A Web API can be accessed by a variety of HTTP clients, including browsers and mobile devices.
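As a quick, hedged illustration of that definition (the endpoint URL and JSON shape here are invented for the example), any JavaScript-capable client can consume such a service with nothing more than an HTTP request, using the fetch API available in modern browsers:

    // Hypothetical endpoint; a real Web API publishes its own routes.
    var url = 'https://example.com/api/customers/42/orders';

    fetch(url, { headers: { 'Accept': 'application/json' } })
        .then(function (response) {
            if (!response.ok) {
                throw new Error('HTTP ' + response.status);
            }
            return response.json(); // lightweight JSON, no WSDL or proxies
        })
        .then(function (orders) {
            console.log('Received ' + orders.length + ' orders');
        })
        .catch(function (err) {
            console.error('Request failed:', err);
        });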
For a Web API to be a successful HTTP-based service, it needed strong web infrastructure: hosting, caching, concurrency, logging, security, and so on. One of the best pieces of web infrastructure was none other than ASP.NET. ASP.NET, in either its Web Forms or MVC flavor, was widely adopted, so this solid, mature web infrastructure was the natural base to extend into a Web API. Microsoft responded to community needs by creating ASP.NET Web API: a super-simple yet very powerful framework for building HTTP-only, JSON-by-default web services without all the fuss of WCF. ASP.NET Web API can be used to build REST-based services in a matter of minutes, and they can easily be consumed from any frontend technology. Because it used IIS (mostly) for hosting, caching, concurrency, and similar features, it became quite popular. It was launched in 2012 with the basic needs of HTTP-based services covered, such as convention-based routing and HTTP request and response messages. Later, Microsoft released the much bigger and better ASP.NET Web API 2, along with ASP.NET MVC 5, in Visual Studio 2013. ASP.NET Web API 2 evolved at a much faster pace with the following features.

Installed via NuGet
Installing Web API 2 was made simpler by using NuGet: create an empty ASP.NET or MVC project and then run this command in the NuGet Package Manager Console:

    Install-Package Microsoft.AspNet.WebApi

Attribute Routing
The initial release of Web API was based on convention-based routing, meaning we define one or more route templates and work around them. It's simple and without much fuss, as the routing logic lives in a single place and is applied across all controllers. Real-world applications are more complicated, with resources (controllers/actions) that have child resources: customers having orders, books having authors, and so on. In such cases, convention-based routing does not scale. Web API 2 introduced a new concept called Attribute Routing, which uses attributes in the programming language to define routes. One straightforward advantage is that the developer has full control over how the URIs for the Web API are formed. Here is a quick snippet of Attribute Routing:

    [Route("customers/{customerId}/orders")]
    public IEnumerable<Order> GetOrdersByCustomer(int customerId) { ... }

For more on this, read Attribute Routing in ASP.NET Web API 2 (https://www.asp.net/web-api/overview/web-api-routing-and-actions/attribute-routing-in-web-api-2).

OWIN self-host
ASP.NET Web API lives on the ASP.NET framework, which leads one to think that it can be hosted on IIS only. Web API 2 came with a new hosting package:

    Microsoft.AspNet.WebApi.OwinSelfHost

With this package, it can be self-hosted outside IIS using OWIN/Katana.

CORS (Cross-Origin Resource Sharing)
For any Web API, developed with .NET or non-.NET technologies, that is meant to be used across different web frameworks, enabling CORS is a must. A must-read on CORS and ASP.NET Web API 2: https://www.asp.net/web-api/overview/security/enabling-cross-origin-requests-in-web-api.
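To see why this matters, here is a hedged browser-side sketch (the origins and endpoint are invented for the example): a page served from one origin calling an API on another will be blocked by the browser unless the API sends the appropriate CORS headers:

    // Page loaded from https://app.example.com calling a different origin.
    fetch('https://api.example.org/api/values')
        .then(function (response) { return response.json(); })
        .then(function (data) { console.log(data); })
        .catch(function (err) {
            // Without an Access-Control-Allow-Origin header on the API's
            // response, the browser rejects the cross-origin call and the
            // promise is rejected, landing us here.
            console.error('Blocked or failed cross-origin request:', err);
        });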
IHttpActionResult and Web API OData improvements are a few other notable features that helped Web API 2 evolve into a strong technology for developing HTTP-based services. ASP.NET Web API 2 has become more powerful over the years with C# language improvements such as asynchronous programming using async/await, LINQ, Entity Framework integration, dependency injection with DI frameworks, and so on.

ASP.NET into the open source world
Every technology has to evolve with growing needs and with advancements in the hardware, network, and software industries, and ASP.NET Web API is no exception. Some of the evolution ASP.NET Web API needed to undergo, from the perspective of the developer community, enterprises, and end users, was as follows:

ASP.NET MVC and Web API are both part of the ASP.NET stack, but their implementations and code bases are different. A unified code base reduces the burden of maintaining them.
Web APIs are consumed by various clients, such as web applications, native apps, hybrid apps, and desktop applications, built with different technologies (.NET or non-.NET). But what about developing Web APIs in a cross-platform way, without always relying on the Windows OS and the Visual Studio IDE?
Open sourcing the ASP.NET stack allows it to be adopted on a much bigger scale, and end users benefit from open source innovation.

We saw why Web APIs were incepted, how they evolved into a powerful HTTP-based service technology, and which evolutions were still needed. With these thoughts, Microsoft made an entry into the world of open source by launching .NET Core and ASP.NET Core 1.0.

What is .NET Core?
.NET Core is a cross-platform, free and open source managed software framework similar to the .NET Framework. It consists of CoreCLR, a complete cross-platform runtime implementation of the CLR. .NET Core 1.0 was released on 27 June 2016, along with Visual Studio 2015 Update 3, which enables .NET Core development. In simpler terms, .NET Core applications can be developed, tested, and deployed across platforms such as Windows, Linux flavors, and macOS. With the help of .NET Core, we don't really need the Windows OS, or the Visual Studio IDE in particular, to develop ASP.NET web applications, command-line apps, libraries, and UWP apps. In short, these are the .NET Core components:

CoreCLR: A virtual machine that manages the execution of .NET programs. CoreCLR stands for Core Common Language Runtime; it includes the garbage collector, the JIT compiler, the base .NET data types, and many low-level classes.
CoreFX: The foundational .NET Core libraries, with classes for collections, file systems, console, XML, async, and many others.
CoreRT: The .NET Core runtime optimized for AOT (ahead-of-time compilation) scenarios, with the accompanying .NET Native compiler toolchain. Its main responsibility is the native compilation of code written in any of our favorite .NET programming languages.

.NET Core shares a subset of the original .NET Framework, plus it comes with its own set of APIs that are not part of the .NET Framework. This results in some shared APIs that can be used by both .NET Core and the .NET Framework. A .NET Core application can easily work on the existing .NET Framework, but not vice versa.
.NET Core provides a CLI (Command-Line Interface) as an execution entry point for operating systems, and it provides developer services such as compilation and package management. The following are interesting points to know about .NET Core:

.NET Core can be installed cross-platform, on Windows, Linux, and macOS. It can be used in device, cloud, and embedded/IoT scenarios.
The Visual Studio IDE is not mandatory for working with .NET Core, but when working on the Windows OS we can leverage our existing IDE knowledge.
.NET Core is modular, meaning that instead of assemblies, developers deal with NuGet packages.
.NET Core relies on its package manager to receive updates, because a cross-platform technology can't rely on Windows Update.
To learn .NET Core, we just need a shell, a text editor, and the runtime installed.
.NET Core comes with flexible deployment: it can be included in your app or installed side by side, user- or machine-wide.
.NET Core apps can also be self-hosted/run as standalone apps.

.NET Core supports four cross-platform scenarios: ASP.NET Core web apps, command-line apps, libraries, and Universal Windows Platform apps. It does not implement Windows Forms or WPF, which render the standard GUI for desktop software on Windows. At present, only the C# programming language can be used to write .NET Core apps; F# and VB support are on the way. We will primarily focus on ASP.NET Core web apps, which include MVC and Web API. CLI apps and libraries will be covered briefly.

What is ASP.NET Core?
ASP.NET Core is a new open source and cross-platform framework for building modern cloud-based web applications using .NET. It is completely open source; you can download it from GitHub. It's cross-platform, meaning you can develop ASP.NET Core apps on Linux/macOS, and of course on Windows. ASP.NET was first released almost 15 years ago with the .NET Framework, and since then it has been adopted by millions of developers for large and small applications alike. ASP.NET has evolved to gain many capabilities, and with .NET Core being cross-platform, ASP.NET took a huge leap beyond the boundaries of the Windows OS environment for the development and deployment of web applications.

ASP.NET Core architecture overview (see the figure: ASP.NET Core architecture overview)
A high-level overview of ASP.NET Core provides the following insights:

ASP.NET Core runs on both the full .NET Framework and .NET Core.
ASP.NET Core applications built on the full .NET Framework can only be developed and deployed on the Windows OS/Server.
When using .NET Core, applications can be developed and deployed on the platform of your choice; the logos of Windows, Linux, and macOS indicate the platforms on which you can work with ASP.NET Core.
On a non-Windows machine, ASP.NET Core uses the .NET Core libraries to run applications. Obviously you won't have the full set of .NET libraries, but most of them are available.
Developers working on ASP.NET Core can easily switch to working on any machine; they are not confined to the Visual Studio 2015 IDE.
ASP.NET Core can run with different versions of .NET Core.

ASP.NET Core brings many foundational improvements beyond being cross-platform; we gain the following advantages by using it:

Totally modular: ASP.NET Core takes a totally modular approach to application development; every component needed to build an application is well factored into NuGet packages. Add only the required packages through NuGet to keep the overall application lightweight.
ASP.NET Core is no longer based on System.Web.dll.
Choose your editors and tools: The Visual Studio IDE was used to develop ASP.NET applications on a Windows box, but now that we have moved beyond the Windows world, we need IDEs, editors, and tools for developing ASP.NET applications on Linux/macOS. Microsoft developed a powerful, lightweight code editor for almost any type of web application, called Visual Studio Code. ASP.NET Core is a framework for which we don't even need Visual Studio IDE or Visual Studio Code to develop applications; we can also use editors such as Sublime or Vim. To work with C# code in those editors, install and use the OmniSharp plugin. OmniSharp is a set of tooling, editor integrations, and libraries that together create an ecosystem that allows you to have a great programming experience no matter what your editor and operating system of choice may be.
Integration with modern web frameworks: ASP.NET Core has powerful, seamless integration with modern web frameworks such as Angular, Ember, Node.js, and Bootstrap. Using Bower and npm, we can work with these modern web frameworks.
Cloud ready: ASP.NET Core apps are cloud ready thanks to the configuration system; an app transitions seamlessly from on-premises to the cloud.
Built-in dependency injection.
Can be hosted on IIS, self-hosted in your own process, or hosted on nginx.
A new lightweight and modular HTTP request pipeline.
A unified code base for the Web UI and Web APIs. We will see more of this when we explore the anatomy of an ASP.NET Core application.

Summary
In this article, we covered the MVC framework and introduced .NET Core and its architecture.
Simple ToDo list web application with node.js, Express, and Riot

Pedro Narciso García Revington
07 Nov 2016
10 min read
The frontend space is indeed crowded, but none of the more popular solutions are really convincing to me. I feel Angular is bloated, and the two-way binding is not for me. I also do not like React and its syntax. Riot is, as stated by its creators, "a React-like user interface micro-library" with simpler syntax that is five times smaller than React.

What we are going to learn
We are going to build a simple Riot application backed by Express, using Jade as our template language. The backend will expose a simple REST API, which we will consume from the UI. We are not going to use any other dependencies such as jQuery, so this is also a good chance to try XMLHttpRequest2. I deliberately omitted a client package manager or bundler such as webpack or jspm because I want to focus on Express.js + Riot.js. For the same reason, the application data is persisted in memory.

Requirements
You just need a recent version of Node.js (4+), a text editor of your choice, and some JS, Express, and website development knowledge.

Project layout
Under my project directory we are going to have three directories:

* public: for assets such as the riot.js library itself.
* views: common in most Express setups, this is where we put the markup.
* client: this directory will host the Riot tags (we will see more of that later).

We will also have package.json, our project manifest, and an app.js file containing the Express application. Our Express server exposes a REST API; its code can be found in api.js. Here is how the layout of the final project looks:

    ├── api.js
    ├── app.js
    ├── client
    │   ├── tag-todo.jade
    │   └── tag-todo.js
    ├── package.json
    ├── node_modules
    ├── public
    │   └── js
    │       ├── client.js
    │       └── riot.js
    └── views
        └── index.jade

Project setup
Create your project directory and, from there, run the following to install the Node.js dependencies:

    $ npm init -y
    $ npm install --save body-parser express jade

And create the application directories:

    $ mkdir -p views public/js client

Start!
Let's start by creating the Express application file, app.js:

    'use strict';
    const express = require('express'),
        app = express(),
        bodyParser = require('body-parser');

    // Set the views directory and template engine
    app.set('views', __dirname + '/views');
    app.set('view engine', 'jade');

    // Set our static directory for public assets like client scripts
    app.use(express.static('public'));

    // Parses the body on incoming requests
    app.use(bodyParser.json());

    // Pretty prints HTML output
    app.locals.pretty = true;

    // Define our main route, HTTP "GET /" which will print "hello"
    app.get('/', function (req, res) {
        res.send('hello');
    });

    // Start listening for connections
    app.listen(3000, function (err) {
        if (err) {
            console.error('Cannot listen at port 3000', err);
        }
        console.log('Todo app listening at port 3000');
    });

The app object we just created is a plain Express application. After setting up the application, we call the listen function, which creates an HTTP server listening on port 3000. To test the setup, open another terminal, cd to the project directory, and run $ node app.js. Open a web browser and load http://localhost:3000; can you read "hello"? Node.js will not reload the site if you change the files, so I recommend you install nodemon. Nodemon monitors your code and reloads the site on every change you make to the JS source code. The command $ npm install -g nodemon installs the program on your computer globally, so you can run it from any directory.
Okay, kill our previously created server and start a new one with $ nodemon app.js.

Our first Riot tag
Riot allows you to encapsulate your UI logic in "custom tags". Tag syntax is pretty straightforward. Judge for yourself:

    <employee>
        <span>{ name }</span>
    </employee>

Custom tags can contain code and can be nested, as shown in the next snippet:

    <employeeList>
        <employee each="{ items }" onclick={ gotoEmployee } />
        <script>
            gotoEmployee(e) {
                var item = e.item;
                // do something
            }
        </script>
    </employeeList>

This mechanism enables you to build complex functionality from simple units. Of course, you can find more information in the Riot documentation. In the next steps, we will create our first tag: ./client/tag-todo.jade. Oh, we have not yet downloaded Riot! Here is the non-minified Riot + compiler download. Download it to ./public/js/riot.js. The next step is to create our index view and tell our app to serve it. Locate the / route handler, remove res.send('hello'), and update it to:

    // Define our main route, HTTP "GET /", which renders the index view
    app.get('/', function (req, res) {
        res.render('index');
    });

Now, create the ./views/index.jade file:

    doctype html
    html
      head
        script(src="/js/riot.js")
      body
        h1 ToDo App
        todo

Go to your browser and reload the page. You can read the big "ToDo App", but nothing else. There is a <todo></todo> tag there, but since the browser does not understand it, the tag is not rendered. Let's tell Riot to mount the tag. Mounting means Riot will use <todo></todo> as a placeholder for our (not yet written) todo tag:

    doctype html
    html
      head
        script(src="/js/riot.js")
      body
        h1 ToDo App
        script(type="riot/tag" src="/tags/todo.tag")
        todo
        script.
          riot.mount('todo');

Open your browser's dev console and reload the page. riot.mount failed because there was no todo.tag. Tags can be served in many ways, but I chose to serve them as regular Express templates. Of course, you can serve them as static assets or bundled instead. Just below the / route handler, add the /tags/:name.tag handler:

    // "/" route handler
    app.get('/', function (req, res) {
        res.render('index');
    });

    // tag route handler
    app.get('/tags/:name.tag', function (req, res) {
        var name = 'tag-' + req.params.name;
        res.render('../client/' + name);
    });

Now create the tag in ./client/tag-todo.jade:

    todo
      form(onsubmit="{ add }")
        input(type="text", placeholder="Needs to be done", name="todo")

And reload the browser again. The errors are gone, and there is a new form in your browser. onsubmit="{ add }" is part of Riot's syntax and means "on submit, call the add function". You can mix implementation with the markup, but I prefer to split markup from code. In Jade (as in any other template language), it is trivial to include other files, which is exactly what we are going to do. Update the file to:

    todo
      form(onsubmit="{ add }")
        input(type="text", placeholder="Needs to be done", name="todo")
      script
        include tag-todo.js

And create ./client/tag-todo.js with this snippet:

    'use strict';
    var self = this;
    var api = self.opts;

When the tag gets mounted by Riot, it gets a context. That is the reason for var self = this;. That context can include the opts object. The opts object can be anything of your choice, defined at the time you mount the tag. Let's say we have an API object and pass it to riot.mount as the second argument when we mount the tag, that is, riot.mount('todo', api). Then, at the time the tag is rendered, this.opts will point to the api object. This is the mechanism we are going to use to expose our client API to the todo tag.
Our form is still waiting for the add function, so edit tag-todo.js again and append the following:

    self.add = function (e) {
        var title = self.todo.value;
        console.log('New ToDo', title);
    };

Reload the page, type something in the text field, and hit Enter. The expected message should appear in your browser's dev console.

Implementing our REST API
We are ready to implement our REST API on the Express side. Create the ./api.js file and add:

    'use strict';
    const express = require('express');
    var app = module.exports = express();

    // simple in-memory DB
    var db = [];

    // handle ToDo creation
    app.post('/', function (req, res) {
        db.push({
            title: req.body.title,
            done: false
        });
        let todoID = db.length - 1;
        // mountpath = /api/todos/
        res.location(app.mountpath + todoID);
        res.status(201).end();
    });

    // handle ToDo updates
    app.put('/', function (req, res) {
        db[req.body.id] = req.body;
        res.location('/' + req.body.id);
        res.status(204).end();
    });

Our API supports ToDo creation and update, and it is architected as an Express sub-application. To mount it, we just need to update app.js for the last time. Update the require block at the top of app.js to:

    const express = require('express'),
        api = require('./api'),
        app = express(),
        bodyParser = require('body-parser');

And mount the api sub-application just before app.listen:

    // Mount the api sub application
    app.use('/api/todos/', api);
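The tutorial's API stops at create and update. As a hedged sketch of where you could take it next (this handler is my addition, not part of the original article), a read endpoint for listing the stored ToDos would slot into ./api.js alongside the other two handlers:

    // handle ToDo listing (hypothetical extension, not in the tutorial)
    app.get('/', function (req, res) {
        // Return every stored ToDo together with its array index as an id.
        var todos = db.map(function (todo, id) {
            return { id: id, title: todo.title, done: todo.done };
        });
        res.json(todos);
    });

With something like this in place, the UI could fetch and render existing items on page load instead of starting empty.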
We said we would implement a client for our API. It should expose two functions, create and update, and live at ./public/js/client.js. Here is its source:

    'use strict';
    (function (api) {
        var url = '/api/todos/';

        function extractIDFromResponse(xhr) {
            var location = xhr.getResponseHeader('location');
            var result = +location.slice(url.length);
            return result;
        }

        api.create = function createToDo(title, callback) {
            var xhr = new XMLHttpRequest();
            var todo = {
                title: title,
                done: false
            };
            xhr.open('POST', url);
            xhr.setRequestHeader('Content-Type', 'application/json');
            xhr.onload = function () {
                if (xhr.status === 201) {
                    todo.id = extractIDFromResponse(xhr);
                }
                return callback(null, xhr, todo);
            };
            xhr.send(JSON.stringify(todo));
        };

        api.update = function updateToDo(todo, callback) {
            var xhr = new XMLHttpRequest();
            xhr.open('PUT', url);
            xhr.setRequestHeader('Content-Type', 'application/json');
            xhr.onload = function () {
                if (xhr.status === 200) {
                    console.log('200');
                }
                return callback(null, xhr, todo);
            };
            xhr.send(JSON.stringify(todo));
        };
    })(this.todoAPI = {});

Okay, time to load the API client into the UI and share it with our tag. Modify the index view to include it as a dependency:

    doctype html
    html
      head
        script(src="/js/riot.js")
      body
        h1 ToDo App
        script(type="riot/tag" src="/tags/todo.tag")
        script(src="/js/client.js")
        todo
        script.
          riot.mount('todo', todoAPI);

We are now loading the API client and passing it as a reference to the todo tag. Our last change today is to update the add function to consume the API. Reload the browser, type something into the textbox, and hit Enter. Nothing new happens, because our add function is not yet using the API. We need to update ./client/tag-todo.js as follows:

    'use strict';
    var self = this;
    var api = self.opts;
    self.items = [];

    self.add = function (e) {
        var title = self.todo.value;
        api.create(title, function (err, xhr, todo) {
            if (xhr.status === 201) {
                self.todo.value = '';
                self.items.push(todo);
                self.update();
            }
        });
    };

We have augmented self with an array of items. Every time we create a new ToDo task (after we get the 201 code from the server), we push the new ToDo object into the array, because we are going to print that list of items. In Riot, we can loop over the items by adding the each attribute to any tag. Finally, update ./client/tag-todo.jade:

    todo
      form(onsubmit="{ add }")
        input(type="text", placeholder="Needs to be done", name="todo")
      ul
        li(each="{items}")
          span {title}
      script
        include tag-todo.js

Finally! Reload the page and create a ToDo!

Next steps
You can find the complete source code for this article here. The final version of the code also implements a done/undone button, which you can try to implement by yourself.

About the author
Pedro Narciso García Revington is a Senior Full Stack Developer with 10+ years of experience in high scalability and availability, microservices, automated deployments, data processing, CI, (T,B,D)DD, and polyglot persistence.
Structuring Your Projects

Packt
24 Nov 2016
20 min read
In this article written by Serghei Iakovlev and David Schissler, authors of the book Phalcon Cookbook, we will cover:

Choosing the best place for an implementation
Automation of routine tasks
Creating the application structure by using code generation tools

(For more resources related to this topic, see here.)

Introduction
In this article, you will learn that when creating new projects, developers often face questions such as which components to create, where to place them in the application structure, what each component should implement, what naming convention to follow, and so on. Actually, creating custom components isn't a difficult matter; we will sort it out in this article. We will create our own component, which will display different menus on your site depending on where we are in the application. From one project to another, a developer's work is usually repeated. This holds true for tasks such as creating the project structure and configuration, and creating data models, controllers, views, and so on. For those tasks, we will discover the power of Phalcon Developer Tools and how to use them. You will learn how to create an application skeleton with one command, and even how to create a fully functional application prototype in less than 10 minutes without writing a single line of code. Developers often come up against situations where they need to create a lot of predefined code templates. Until you are really familiar with the framework, it can be useful to do everything manually, but all of us would like to reduce repetitive tasks. Phalcon tries to help by providing an easy and, at the same time, flexible code generation tool named Phalcon Developer Tools. These tools help you simplify the creation of CRUD components for a regular application; you can create working code in a matter of seconds without writing it yourself. Often, when creating an application using a framework, we need to extend or add functionality to the framework components. We don't have to reinvent the wheel by rewriting those components. We can use class inheritance and extensibility, but often this approach does not work. In such cases, it is better to use additional layers between the main application and the framework by creating a middleware layer. The term middleware has a wide range of meanings, but in the context of PHP web applications it means code that is called in turn on each request. We will look into the main principles of creating and using middleware in your application. We will not get into each solution in depth; instead, we will work with tasks that are common to most projects and with implementations extending Phalcon.

Choosing the best place for an implementation
Let's pretend you want to add a custom component. As the case may be, this component allows you to change your site's navigation menu. For example, when you have a Sign In link on your navigation menu and you are logged in, that link needs to change to Sign Out. Then you ask yourself: where is the best place in the project to put the code, where should the files go, how should the classes be named, and how do you make the autoloader find them?

Getting ready…
For successful implementation of this recipe, you must have your application deployed.
By this we mean that you need to have a web server installed and configured to handle requests to your application; the application must be able to receive requests and must have the necessary components implemented, such as controllers, views, and a bootstrap file. For this recipe, we assume that our application is located in the apps directory. If this is not the case, you should change this part of the path in the examples shown in this article.

How to do it…
Follow these steps to complete this recipe:

Create the app/library directory, if you haven't got one, where user components will be stored. Next, create the Elements component (app/library/Elements.php). This class extends Phalcon\Mvc\User\Component. Generally, this is not necessary, but it helps to get access to application services quickly. The contents of Elements.php should be:

    <?php

    namespace Library;

    use Phalcon\Mvc\User\Component;
    use Phalcon\Mvc\View\Simple as View;

    class Elements extends Component
    {
        public function __construct()
        {
            // ...
        }

        public function getMenu()
        {
            // ...
        }
    }

Now we register this class in the Dependency Injection container. We use a shared instance in order to prevent creating a new instance on each service resolution:

    $di->setShared('elements', function () {
        return new Library\Elements();
    });

If your session service is not initialized yet, it's time to do so in your bootstrap file. We use a shared instance here for the same reason:

    $di->setShared('session', function () {
        $session = new Phalcon\Session\Adapter\Files();
        $session->start();
        return $session;
    });

Create the templates directory within the directory holding your views (views/templates). Then you need to tell the class autoloader about the new namespace we have just introduced. Let's do it in the following way:

    $loader->registerNamespaces([
        // The APP_PATH constant should point to the project's root
        'Library' => APP_PATH . '/apps/library/',
        // ...
    ]);

Add the following code right after the opening body tag in the main layout of your application:

    <div class="container">
      <div class="navbar navbar-inverse">
        <div class="container-fluid">
          <div class="navbar-header">
            <button type="button" class="navbar-toggle collapsed" data-toggle="collapse" data-target="#blog-top-menu" aria-expanded="false">
              <span class="sr-only">Toggle navigation</span>
              <span class="icon-bar"></span>
              <span class="icon-bar"></span>
              <span class="icon-bar"></span>
            </button>
            <a class="navbar-brand" href="#">Blog 24</a>
          </div>
          <?php echo $this->elements->getMenu(); ?>
        </div>
      </div>
    </div>

Next, we need to create a template for displaying the top menu. Let's create it in views/templates/topMenu.phtml:

    <div class="collapse navbar-collapse" id="blog-top-menu">
      <ul class="nav navbar-nav">
        <li class="active">
          <a href="#">Home</a>
        </li>
      </ul>
      <ul class="nav navbar-nav navbar-right">
        <li>
          <?php if ($this->session->get('identity')): ?>
            <a href="#">Sign Out</a>
          <?php else: ?>
            <a href="#">Sign In</a>
          <?php endif; ?>
        </li>
      </ul>
    </div>

Now, let's put the component to work. First, create the protected field $simpleView and initialize it in the constructor:

    public function __construct()
    {
        $this->simpleView = new View();
        $this->simpleView->setDI($this->getDI());
    }

And finally, implement the getMenu method as follows:

    public function getMenu()
    {
        $this->simpleView->setViewsDir($this->view->getViewsDir());
        return $this->simpleView->render('templates/topMenu');
    }

Open the main page of your site to ensure that your top menu is rendered.
How it works…
The main idea of our component is to generate a top menu, displaying the correct menu option depending on the situation, that is, whether the user is authorized or not. We create the user component, Elements, putting it in a place specially designed for the purpose. Of course, when creating the library directory and placing a new class there, we should tell the autoloader about the new namespace, which is exactly what we have done. However, we should take note of one important peculiarity: if you want quick access to your components, even in HTML templates (like $this->elements), then you should put the components in the DI container. Therefore, we put our component, Library\Elements, in the container under the name elements. Since our component inherits from Phalcon\Mvc\User\Component, we are able to access all registered application services just by their names. For example, the instruction $this->view can be written in the longer form $this->getDI()->getShared('view'), but the first one is obviously more concise. Although not strictly necessary, for application structure purposes it is better to use a separate directory for views that are not connected directly to specific controllers and actions. In our case, the views/templates directory serves this purpose. We create an HTML template for menu rendering and place it in views/templates/topMenu.phtml. When the getMenu method is called, our component renders the topMenu.phtml view and returns the HTML. In getMenu, we get the current path for all our views and set it on the Phalcon\Mvc\View\Simple component created earlier in the constructor. In the topMenu view we access the session component, which we placed in the DI container earlier. When generating the menu, we check whether the user is authorized or not; in the former case, we use the Sign Out menu item, and in the latter case we display the menu item with an invitation to Sign In.

Automation of routine tasks
The Phalcon project provides you with a great tool named Developer Tools. It helps automate repetitive tasks by means of code generation for components as well as the project skeleton. Most of the components of your application can be created with only one command. In this recipe, we will consider the Developer Tools installation and configuration in depth.

Getting Ready…
Before you begin work on this recipe, you should have a DBMS configured and a web server installed and configured for handling requests from your application. You may need to configure a virtual host (this is optional) for your application, which will receive and handle requests. You should be able to open your newly created project in a browser at http://{your-host-here}/appname or http://{your-host-here}/, where your-host-here is the name of your project. You should have Git installed, too. In this recipe, we assume that your operating system is Linux. The Developer Tools installation instructions for Mac OS X and Windows are similar; you can find the link to the complete documentation for Mac OS X and Windows at the end of this recipe. We used the Terminal to create the database tables and chose MySQL as our RDBMS. Your setup might vary. The choice of a tool for creating tables in your database, as well as of a particular DBMS, is yours. Note that the syntax for creating a table with DBMSs other than MySQL may vary.
How to do it…
Follow these steps to complete this recipe:

Clone Developer Tools into your home directory:

    git clone git@github.com:phalcon/phalcon-devtools.git devtools

Go to the newly created devtools directory, run the ./phalcon.sh command, and wait for a message about successful installation:

    $ ./phalcon.sh
    Phalcon Developer Tools Installer
    Make sure phalcon.sh is in the same dir as phalcon.php and that you are running this with sudo or as root.
    Installing Devtools...
    Working dir is: /home/user/devtools
    Generating symlink...
    Done. Devtools installed!

Run the phalcon command without arguments to see the available command list and your current Phalcon version:

    $ phalcon
    Phalcon DevTools (3.0.0)
    Available commands:
      commands   (alias of: list, enumerate)
      controller (alias of: create-controller)
      model      (alias of: create-model)
      module     (alias of: create-module)
      all-models (alias of: create-all-models)
      project    (alias of: create-project)
      scaffold   (alias of: create-scaffold)
      migration  (alias of: create-migration)
      webtools   (alias of: create-webtools)

Now, let's create our project. Go to the folder where you plan to create the project and run the following command:

    $ phalcon project myapp simple

Open the website you have just created in your browser. You should see a message about the successful installation. Create a database for your project:

    mysql -e 'CREATE DATABASE myapp' -u root -p

You will need to configure the application to connect to the database. Open the file app/config/config.php and correct the database connection configuration. Pay attention to the baseUri parameter if you have not configured a virtual host for your project; its value must be / or /myapp/. As a result, your configuration file should look like this:

    <?php
    use Phalcon\Config;

    defined('APP_PATH') || define('APP_PATH', realpath('.'));

    return new Config([
        'database' => [
            'adapter'  => 'Mysql',
            'host'     => 'localhost',
            'username' => 'root',
            'password' => '',
            'dbname'   => 'myapp',
            'charset'  => 'utf8',
        ],
        'application' => [
            'controllersDir' => APP_PATH . '/app/controllers/',
            'modelsDir'      => APP_PATH . '/app/models/',
            'migrationsDir'  => APP_PATH . '/app/migrations/',
            'viewsDir'       => APP_PATH . '/app/views/',
            'pluginsDir'     => APP_PATH . '/app/plugins/',
            'libraryDir'     => APP_PATH . '/app/library/',
            'cacheDir'       => APP_PATH . '/app/cache/',
            'baseUri'        => '/myapp/',
        ]
    ]);

Now that you have configured database access, let's create a users table in your database. Create the users table and fill it with primary data:

    CREATE TABLE `users` (
        `id` INT(11) unsigned NOT NULL AUTO_INCREMENT,
        `email` VARCHAR(128) NOT NULL,
        `first_name` VARCHAR(64) DEFAULT NULL,
        `last_name` VARCHAR(64) DEFAULT NULL,
        `created_at` TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
        PRIMARY KEY (`id`),
        UNIQUE KEY `users_email` (`email`)
    ) ENGINE=InnoDB DEFAULT CHARSET=utf8;

    INSERT INTO `users` (`email`, `first_name`, `last_name`) VALUES
        ('john@doe.com', 'John', 'Doe'),
        ('janie@doe.com', 'Janie', 'Doe');

After that, we need to create a new controller, UsersController. This controller must provide us with the main CRUD actions on the Users model and, if necessary, display data with the appropriate views.
Let's do it with just one command:

    $ phalcon scaffold users

In your web browser, open the URL associated with the newly created Users resource and try to find one of the users from our database table, at http://{your-host-here}/myapp/users (or http://{your-host-here}/users, depending on how you have configured your server for application request handling). Finally, open your project in your file manager to see the whole project structure created by Developer Tools:

    ├── app
    │   ├── cache
    │   ├── config
    │   ├── controllers
    │   ├── library
    │   ├── migrations
    │   ├── models
    │   ├── plugins
    │   ├── schemas
    │   └── views
    │       ├── index
    │       ├── layouts
    │       └── users
    └── public
        ├── css
        ├── files
        ├── img
        ├── js
        └── temp

How it works…
We installed Developer Tools with only two commands, git clone and ./phalcon.sh. This is all we need to start using this powerful code generation tool. Next, using only one command, we created a fully functional application environment. At this stage, the application doesn't represent anything outstanding in terms of features, but we have saved the time of manually creating the application structure. Developer Tools did that for you! If, after this command completes, you examine your newly created project, you may notice that the primary application configuration has also been generated, including the bootstrap file. Actually, the phalcon project command has additional options that we have not demonstrated in this recipe; we are focusing on the main commands. Enter the help command to see all available project creation options:

    $ phalcon project help

In the modern world, you can hardly find a web application that works without access to a database, and our application isn't an exception. We created a database for our application, and then we created a users table and filled it with primary data. Of course, we needed to supply our application with the database access parameters, as well as the database name, in the app/config/config.php file. After the successful database and table creation, we used the scaffold command for predefined code template generation: particularly, the Users controller with all the main CRUD actions, all the necessary views, and the Users model. As before, we used only one command to generate all those files. Phalcon Developer Tools is equipped with a good number of useful tools. To see all the available options, you can use the help command. We have taken only a few minutes to create the first version of our application. Instead of spending time on repetitive tasks (such as creating an application skeleton), we can now use that time for more exciting work. Phalcon Developer Tools helps us save time where possible. But wait, there is more! The project is evolving and becomes more feature-rich day by day. If you have any problems, you can always visit the project on GitHub at https://github.com/phalcon/phalcon-devtools and search for a solution.

There's more…
You can find more information on Phalcon Developer Tools installation for Windows and OS X at https://docs.phalconphp.com/en/latest/reference/tools.html. More detailed information on web server configuration can be found at https://docs.phalconphp.com/en/latest/reference/install.html.

Creating the application structure by using code generation tools
In the following recipe, we will discuss the available code generation tools that can be used for creating a multi-module application. We don't need to create the application structure and main components manually.
Getting Ready…
Before you begin, you need to have Git installed, as well as a DBMS (for example, MySQL, PostgreSQL, or SQLite), the Phalcon PHP extension (usually named php5-phalcon), and a PHP extension offering database connectivity using PDO (for example, php5-mysql, php5-pgsql, or php5-sqlite). You also need to be able to create tables in your database. To accomplish the following recipe, you will require Phalcon Developer Tools. If you already have it installed, you may skip the first three steps related to the installation and go to the fourth step. In this recipe, we assume that your operating system is Linux. The Developer Tools installation instructions for Mac OS X and Windows are similar; you can find the link to the complete documentation for Mac OS X and Windows at the end of this recipe. We used the Terminal to create the database tables and chose MySQL as our RDBMS. Your setup might vary; the choice of a tool for creating tables in your database, as well as of a particular DBMS, is yours. Note that the syntax for creating a table with DBMSs other than MySQL may vary.

How to do it…
Follow these steps to complete this recipe:

First, you need to decide where to install Developer Tools. Suppose you are going to place Developer Tools in your home directory; then go to your home directory and run the following command:

    git clone git@github.com:phalcon/phalcon-devtools.git

Now browse to the newly created phalcon-devtools directory and run the following command to ensure that there are no problems:

    ./phalcon.sh

Now that you have Developer Tools installed, browse to the directory where you intend to create your project and run the following command:

    phalcon project blog modules

If there were no errors during the previous step, create a HelpController by running the following command:

    phalcon controller Help --base-class=ControllerBase --namespace=Blog\Frontend\Controllers

Open the newly generated HelpController in the apps/frontend/controllers/HelpController.php file to ensure that you have the needed controller, as well as the initial indexAction. Open the database configuration in the Frontend module, blog/apps/frontend/config/config.php, and edit the database configuration according to your current environment. Enter the name and password of an existing database user that has access to the database, and the application database name. You can also change the database adapter that your application needs. If you do not have a database ready, you can create one now. Now that you have configured database access, let's create a users table in your database.
Create the users table and fill it with the primary data: CREATE TABLE `users` ( `id` INT(11) unsigned NOT NULL AUTO_INCREMENT, `email` VARCHAR(128) NOT NULL, `first_name` VARCHAR(64) DEFAULT NULL, `last_name` VARCHAR(64) DEFAULT NULL, `created_at` TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP, PRIMARY KEY (`id`), UNIQUE KEY `users_email` (`email`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8; INSERT INTO `users` (`email`, `first_name`, `last_name`) VALUES ('john@doe.com', 'John', 'Doe'), ('janie@doe.com', 'Janie', 'Doe'); Next, let's create Controller, Views, Layout, and Model by using the scaffold command: phalcon scaffold users --ns- controllers=Blog\Frontend\Controllers —ns- models=Blog\Frontend\Models Open the newly generated UsersController located in the apps/frontend/controllers/UsersController.php file to ensure you have generated all actions needed for user search, editing, creating, displaying, and deleting. To check if all actions work as designed, if you have a web server installed and configured for this recipe, you can go to http://{your-server}/users/index. In so doing, you can make sure that the required Users model is created in the apps/frontend/models/Users.php file, all the required views are created in the apps/frontend/views/users folder, and the user layout is created in the apps/frontend/views/layouts folder. If you have a web server installed and configured for displaying the newly created site, go to http://{your-server}/users/search to ensure that the users from our table are shown. How it works… In the world of programming, code generation is designed to lessen the burden of manually creating repeated code by using predefined code templates. The Phalcon framework provides perfect code generation tools which come with Phalcon Developer Tools. We start with the installation of Phalcon Developer Tools. Note, that if you already have Developer Tools installed, you should skip the steps involving these installation. Next, we generate a fully functional MVC application, which implements the multi-module principle. One command is enough to get a working application at once. We save ourselves the trouble of creating the application directory structure, creating the bootstrap file, creating all the required files, and setting up the initial application structure. For that end, we use only one command. It's really great, isn't it? Our next step is creating a controller. In our example, we use HelpController, which displays just such an approach to creating controllers. Next, we create the table users in our database and fill it with data. With that done, we use a powerful tool for generating predefined code templates, which is called Scaffold. Using only one command in the Terminal, we generate the controller UsersController with all the necessary actions and appropriate views. Besides this, we get the Users model and required layout. If you have a web server configured you can check out the work of Developer Tools at http://{your-server}/users/index. When we use the Scaffold command, the generator determines the presence and names of our table fields. Based on these data, the tool generates a model, as well as views with the required fields. The generator provides you with ready-to-use code in the controller, and you can change this code according to your needs. However, even if you don't change anything, you can use your controller safely. You can search for users, edit and delete them, create new users, and view them. 
And all of this was made possible with one command. We have discussed only some of the features of code generation; actually, Phalcon Developer Tools has many more. For help on the available commands, you can use the phalcon command (without arguments).

There's more…
For more detailed information on the installation and configuration of PDO in PHP, visit http://php.net/manual/en/pdo.installation.php. You can find detailed Phalcon Developer Tools installation instructions at https://docs.phalconphp.com/en/latest/reference/tools.html. For more information on scaffolding, refer to https://en.wikipedia.org/wiki/Scaffold_(programming).

Summary
In this article, you learned about the automation of routine tasks and creating the application structure.

Resources for Article:

Further resources on this subject:
Using Phalcon Models, Views, and Controllers [Article]
Phalcon's ORM [Article]
Planning and Structuring Your Test-Driven iOS App [Article]
Quick User Authentication Setup with Django

Packt
10 May 2016
21 min read
In this article by Asad Jibran Ahmed, author of the book Django Project Blueprints, we are going to start with a simple blogging platform in Django. In recent years, Django has emerged as one of the clear leaders among web frameworks. When most people decide to start using a web framework, their searches lead them to either Ruby on Rails or Django. Both are mature, stable, and extensively used. It appears that the decision to use one or the other depends mostly on which programming language you're familiar with: Rubyists go with RoR, and Pythonistas go with Django. In terms of features, both can be used to achieve the same results, although they have different approaches to how things are done. One of the most popular platforms these days is Medium, widely used by a number of high-profile bloggers. Its popularity stems from its elegant theme and simple-to-use interface. I'll walk you through creating a similar application in Django, with a few surprise features that most blogging platforms don't have. This will give you a taste of things to come and show you just how versatile Django can be. Before starting any software development project, it's a good idea to have a rough roadmap of what we would like to achieve. Here's a list of features that our blogging platform will have:

Users should be able to register an account and create their blogs
Users should be able to tweak the settings of their blogs
There should be a simple interface for users to create and edit blog posts
Users should be able to share their blog posts on other blogs on the platform

I know this seems like a lot of work, but Django comes with a couple of contrib packages that speed up our work considerably.

(For more resources related to this topic, see here.)

The contrib packages
The contrib packages are a part of Django containing some very useful applications that the Django developers decided should ship with Django. The included applications provide an impressive set of features, including some that we'll be using in this application:

Admin: This is a full-featured CMS that can be used to manage the content of a Django site. The Admin application is an important reason for Django's popularity. We'll use it to provide an interface for site administrators to moderate and manage the data in our application.
Auth: This provides user registration and authentication without requiring us to do any work. We'll use this module to allow users to sign up, sign in, and manage their profiles in our application.
Sites: This framework provides utilities that help us run multiple Django sites using the same code base. We'll use this feature to allow each user to have their own blog, with content that can be shared between multiple blogs.

There are a lot more goodies in the contrib module. I suggest you take a look at the complete list at https://docs.djangoproject.com/en/stable/ref/contrib/#contrib-packages. I usually end up using at least three of the contrib packages in all my Django projects. They provide often-required features such as user registration and management, and they free you to work on the core parts of your project, providing a solid foundation to build upon.

Setting up our development environment
Let's start by creating the directory structure for our project, setting up the virtual environment, and configuring some basic Django settings that need to be set up in every project. Let's call our blogging platform BlueBlog. To start a new project, you first need to open up your terminal program.
On Mac OS X, it is the built-in Terminal. On Linux, the terminal is named separately for each distribution, but you should have no trouble finding it; try searching your program list for the word "terminal" and something relevant should show up. On Windows, the terminal program is called Command Prompt. You'll need to start the relevant program depending on your operating system. If you are using Windows, some things will need to be done differently from what the book shows. If you are using Mac OS X or Linux, the commands shown here should work without any problems.

Open the relevant terminal program for your operating system, create the directory structure for our project, and cd into the root project directory using the following commands:

> mkdir -p blueblog
> cd blueblog

Next, let's create the virtual environment, install Django, and start our project:

> pyvenv blueblogEnv
> source blueblogEnv/bin/activate
> pip install django
> django-admin.py startproject blueblog src

With this out of the way, we're ready to start developing our blogging platform.

Database settings
Open up the settings file found at $PROJECT_DIR/src/blueblog/settings.py in your favorite editor and make sure that the DATABASES settings variable matches this:

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),
    }
}

In order to initialize the database file, run the following commands:

> cd src
> python manage.py migrate

Staticfiles settings
The last step in setting up our development environment is configuring the staticfiles contrib application. The staticfiles application provides a number of features that make it easy to manage the static files (CSS, images, and JavaScript) of your projects. While our usage will be minimal, you should look at the Django documentation for staticfiles in further detail, as it is used quite heavily in most real-world Django projects. You can find the documentation at https://docs.djangoproject.com/en/stable/howto/static-files/.

In order to set up the staticfiles application, we have to configure a few settings in the settings.py file. First, make sure that django.contrib.staticfiles is added to INSTALLED_APPS. Django should have done this by default. Next, set STATIC_URL to whatever URL you want your static files to be served from. I usually leave this at the default value, '/static/'. This is the URL that Django will put in your templates when you use the static template tag to get the path to a static file.

Base template
Next, let's set up a base template that all the other templates in our application will inherit from. I prefer to have templates that are used by more than one application of a project in a directory named templates in the project source folder. To set this up, add os.path.join(BASE_DIR, 'templates') to the DIRS array of the TEMPLATES configuration dictionary in the settings file, and then create a directory named templates in $PROJECT_ROOT/src. Next, using your favorite text editor, create a file named base.html in the new folder with the following content:

<html>
<head>
    <title>BlueBlog</title>
</head>
<body>
    {% block content %}
    {% endblock %}
</body>
</html>

Just as Python classes can inherit from other classes, Django templates can also inherit from other templates. And just as Python classes can have functions overridden by their subclasses, Django templates can also define blocks that child templates can override.
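As a quick illustration of what this looks like in practice (this hypothetical about.html page is not part of our project), a template inheriting from base.html might be as simple as this:

{% extends "base.html" %}

{% block content %}
<h1>About BlueBlog</h1>
<p>A micro-blogging platform built with Django.</p>
{% endblock %}

Everything outside the block comes from the parent template; only the content block is replaced.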
Our base.html template provides one block, called content, for inheriting templates to override. The reason for using template inheritance is code reuse. We should put HTML that we want to be visible on every page of our site, such as headers, footers, copyright notices, and meta tags, in the base template. Then, any template inheriting from it will automatically get all this common HTML included, and we will only need to override the HTML code for the blocks that we want to customize. You'll see this principle of creating and overriding blocks in base templates used throughout the projects in this book.

User accounts
With the database setup out of the way, let's start creating our application. If you remember, the first thing on our list of features is to allow users to register accounts on our site. As I've mentioned before, we'll be using the auth package from the Django contrib packages to provide user account features.

In order to use the auth package, we'll need to add it to our INSTALLED_APPS list in the settings file (found at $PROJECT_ROOT/src/blueblog/settings.py). In the settings file, find the line defining INSTALLED_APPS and make sure that the 'django.contrib.auth' string is part of the list. It should be by default but if, for some reason, it's not there, add it manually. You'll see that Django has included the auth package and a couple of other contrib applications in the list by default. A new Django project includes these applications because almost all Django projects end up using them. If you need to add the auth application to the list, remember to use quotes to surround the application name.

We also need to make sure that the MIDDLEWARE_CLASSES list contains django.contrib.sessions.middleware.SessionMiddleware, django.contrib.auth.middleware.AuthenticationMiddleware, and django.contrib.auth.middleware.SessionAuthenticationMiddleware. These middleware classes give us access to the logged-in user in our views and also make sure that if I change the password for my account, I'm logged out from all other devices that I previously logged on to. As you learn more about the various contrib applications and their purpose, you can start removing any that you know you won't need in your project.
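For reference, the relevant settings entries should look something like the following sketch; your generated settings.py will likely list a few more entries than shown here, and possibly in a different order:

INSTALLED_APPS = [
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.staticfiles',
]

MIDDLEWARE_CLASSES = [
    'django.contrib.sessions.middleware.SessionMiddleware',
    'django.contrib.auth.middleware.AuthenticationMiddleware',
    'django.contrib.auth.middleware.SessionAuthenticationMiddleware',
]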
Now, let's add the URLs, views, and templates that allow users to register with our application.

The user accounts app
In order to create the various views, URLs, and templates related to user accounts, we'll start a new application. To do so, type the following in your command line:

> python manage.py startapp accounts

This should create a new accounts folder in the src folder. We'll add the code that deals with user accounts in files found in this folder. To let Django know that we want to use this application in our project, add the application name (accounts) to the INSTALLED_APPS setting variable, making sure to surround it with quotes.

Account registration
The first feature that we will work on is user registration. Let's start by writing the code for the registration view in accounts/views.py. Make the contents of views.py match what is shown here:

from django.contrib.auth.forms import UserCreationForm
from django.core.urlresolvers import reverse
from django.views.generic import CreateView


class UserRegistrationView(CreateView):
    form_class = UserCreationForm
    template_name = 'user_registration.html'

    def get_success_url(self):
        return reverse('home')

I'll explain what each line of this code is doing in a bit. First, I'd like you to get to a state where you can register a new user and see for yourself how the flow works.

Next, we'll create the template for this view. In order to create the template, you first need to create a new folder called templates in the accounts folder. The name of the folder is important, as Django automatically searches for templates in folders of that name. To create this folder, just type the following:

> mkdir accounts/templates

Next, create a new file called user_registration.html in the templates folder and type in the following code:

{% extends "base.html" %}

{% block content %}
<h1>Create New User</h1>
<form action="" method="post">{% csrf_token %}
    {{ form.as_p }}
    <input type="submit" value="Create Account" />
</form>
{% endblock %}

Finally, remove the existing code in blueblog/urls.py and replace it with this:

from django.conf.urls import include
from django.conf.urls import url
from django.contrib import admin
from django.views.generic import TemplateView

from accounts.views import UserRegistrationView

urlpatterns = [
    url(r'^admin/', include(admin.site.urls)),
    url(r'^$', TemplateView.as_view(template_name='base.html'), name='home'),
    url(r'^new-user/$', UserRegistrationView.as_view(), name='user_registration'),
]

That's all the code that we need to get user registration in our project! Let's do a quick demonstration. Run the development server by typing as follows:

> python manage.py runserver

In your browser, visit http://127.0.0.1:8000/new-user/ and you'll see a user registration form. Fill this in and click on submit. You'll be taken to a blank page on successful registration. If there are some errors, the form will be shown again with the appropriate error messages. Let's verify that our new account was indeed created in our database.

For the next step, we will need an administrator account. The Django auth contrib application can assign permissions to user accounts. The user with the highest level of permission is called the superuser. The superuser account has free rein over the application and can perform any administrator actions. To create a superuser account, run this command:

> python manage.py createsuperuser

As you already have the runserver command running in your terminal, you will need to quit it first by pressing Ctrl + C in the terminal. You can then run the createsuperuser command in the same terminal. After running the createsuperuser command, you'll need to start the runserver command again to browse the site. If you want to keep the runserver command running and run the createsuperuser command in a new terminal window, you will need to make sure that you activate the virtual environment for this application by running the same source blueblogEnv/bin/activate command that we ran earlier when we created our new project.

After you have created the account, visit http://127.0.0.1:8000/admin/ and log in with the admin account. You will see a link titled Users. Click on this, and you should see a list of users registered in our app. It will include the user that you just created.
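If you prefer the command line to the admin screens, here is a quick sketch of how you could check for the new user from the Django shell instead (the output will depend on the usernames you chose):

> python manage.py shell
>>> from django.contrib.auth.models import User
>>> User.objects.all()  # lists all registered users, including the new one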
Congrats! In most other frameworks, getting to this point with a working user registration feature would take a lot more effort. Django, with its batteries-included approach, allows us to do the same with a minimum of effort. Next, I'll explain what each line of code that you wrote does.

Generic views
Here's the code for the user registration view again:

class UserRegistrationView(CreateView):
    form_class = UserCreationForm
    template_name = 'user_registration.html'

    def get_success_url(self):
        return reverse('home')

Our view is pretty short for something that does so much work. That's because instead of writing code from scratch to handle all the work, we use one of the most useful features of Django, generic views. Generic views are base classes included with Django that provide functionality commonly required by a lot of web apps. The power of generic views comes from the ability to customize them to a great degree with ease. You can read more about Django generic views in the documentation available at https://docs.djangoproject.com/en/1.9/topics/class-based-views/.

Here, we're using the CreateView generic view. This generic view can display a ModelForm using a template and, on submission, can either redisplay the page with errors if the form data was invalid or call the save method on the form and redirect the user to a configurable URL. CreateView can be configured in a number of ways. If you want the ModelForm to be created automatically from some Django model, just set the model attribute to the model class, and the form will be generated automatically from the fields of the model. If you want the form to show only certain fields from the model, use the fields attribute to list the fields that you want, exactly like you'd do while creating a ModelForm.

In our case, instead of having the ModelForm generated automatically, we're providing one of our own, UserCreationForm. We do this by setting the form_class attribute on the view. This form, which is part of the auth contrib app, provides the fields and a save method that can be used to create a new user. You'll see that this theme of composing solutions from small reusable parts provided by Django is a common practice in Django web app development and, in my opinion, one of the best features of the framework.

Finally, we define a get_success_url function that does a simple reverse of a URL name and returns the generated URL. CreateView calls this function to get the URL to redirect the user to when a valid form is submitted and saved successfully. To get something up and running quickly, we left out a real success page and just redirected the user to a blank page. We'll fix this later.
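To make the model-based configuration described above concrete, here's a sketch of what a hypothetical view would look like if we let CreateView build the form from the User model itself. We don't use this in BlueBlog, since UserCreationForm also takes care of password confirmation and hashing for us:

from django.contrib.auth.models import User
from django.core.urlresolvers import reverse
from django.views.generic import CreateView


class NaiveUserCreateView(CreateView):
    model = User                    # generate the ModelForm from this model
    fields = ['username', 'email']  # expose only these model fields
    template_name = 'user_registration.html'

    def get_success_url(self):
        return reverse('home')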
Templates and URLs
The template, which extends the base template that we created earlier, simply displays the form passed to it by CreateView using the form.as_p method, which you might have seen in the simple Django projects you may have worked on before.

The urls.py file is a bit more interesting. You should be familiar with most of it: the parts where we include the admin site URLs and the one where we assign our view a URL. It's the usage of TemplateView that I want to explain here. Like CreateView, TemplateView is another generic view provided to us by Django. As the name suggests, this view can render and display a template to the user. It has a number of customization options. The most important one is template_name, which tells it which template to render and display to the user. We could have created another view class that subclassed TemplateView and customized it by setting attributes and overriding functions like we did for our registration view. However, I wanted to show you another method of using a generic view in Django. If you only need to customize some basic parameters of a generic view (in this case, we only wanted to set the template_name parameter), you can just pass the values as key=value pairs as function keyword arguments to the as_view method of the class when including it in the urls.py file. Here, we pass the template name that the view renders when the user accesses its URL. As we just needed a placeholder URL to redirect the user to, we simply use the blank base.html template.

This technique of customizing generic views by passing key/value pairs only makes sense when you're interested in customizing very basic attributes, like we do here. In case you want more complicated customizations, I advise you to subclass the view; otherwise, you will quickly end up with messy code that is difficult to maintain.

Login and logout
With registration out of the way, let's write the code to provide users with the ability to log in and log out. To start, the user needs some way to get to the login and registration pages from any page on the site. To do this, we'll add header links to our template. This is the perfect opportunity to demonstrate how template inheritance can lead to much cleaner and less repetitive code in our templates. Add the following lines right after the body tag in our base.html file:

{% block header %}
<ul>
    <li><a href="">Login</a></li>
    <li><a href="">Logout</a></li>
    <li><a href="{% url "user_registration"%}">Register Account</a></li>
</ul>
{% endblock %}

If you open the home page for our site now (at http://127.0.0.1:8000/), you should see that we now have three links on what was previously a blank page. Click on the Register Account link. You'll see the registration form we had before and the same three links again. Note how we only added these links to the base.html template. However, as the user registration template extends the base template, it got those links without any effort on our part. This is where template inheritance really shines.

You might have noticed that the href for the login/logout links is empty. Let's start with the login part.

Login view
Let's define the URL first. In blueblog/urls.py, import the login view from the auth app:

from django.contrib.auth.views import login

Next, add this to the urlpatterns list:

url(r'^login/$', login, {'template_name': 'login.html'}, name='login'),

Then, create a new file in accounts/templates called login.html. Put in the following content:

{% extends "base.html" %}

{% block content %}
<h1>Login</h1>
<form action="{% url "login" %}" method="post">{% csrf_token %}
    {{ form.as_p }}

    <input type="hidden" name="next" value="{{ next }}" />
    <input type="submit" value="Submit" />
</form>
{% endblock %}

Finally, open up the blueblog/settings.py file and add the following line to the end of the file:

LOGIN_REDIRECT_URL = '/'

Let's go over what we've done here. First, notice that instead of creating our own code to handle the login feature, we used the view provided by the auth app. We import it using from django.contrib.auth.views import login. Next, we associate it with the login/ URL. If you remember the user registration part, we passed the template name to the home page view as a keyword parameter in the as_view() function. That approach is used for class-based views. For old-style view functions, we can pass a dictionary to the url function, which is passed as keyword arguments to the view. Here, we use the template that we created in login.html.
If you look at the documentation for the login view (https://docs.djangoproject.com/en/stable/topics/auth/default/#django.contrib.auth.views.login), you'll see that on successfully logging in, it redirects the user to settings.LOGIN_REDIRECT_URL. By default, this setting has a value of /accounts/profile/. As we don't have such a URL defined, we change the setting to point to our home page URL instead. Next, let's define the logout view.

Logout view
In blueblog/urls.py, import the logout view:

from django.contrib.auth.views import logout

Add the following to the urlpatterns list:

url(r'^logout/$', logout, {'next_page': '/login/'}, name='logout'),

That's it. The logout view doesn't need a template; it just needs to be configured with a URL to redirect the user to after logging them out. We just redirect the user back to the login page.

Navigation links
Having added the login/logout views, we need to make the links we added to our navigation menu earlier take the user to those views. Change the list of links that we had in templates/base.html to the following:

<ul>
    {% if request.user.is_authenticated %}
    <li><a href="{% url "logout" %}">Logout</a></li>
    {% else %}
    <li><a href="{% url "login" %}">Login</a></li>
    <li><a href="{% url "user_registration"%}">Register Account</a></li>
    {% endif %}
</ul>

This will show the Login and Register Account links to the user if they aren't already logged in. If they are logged in, which we check using the request.user.is_authenticated function, they are only shown the Logout link. You can test all of these links yourself and see how little code was needed to make such a major feature of our site work. This is all possible because of the contrib applications that Django provides.

Summary
In this article, we started a simple blogging platform in Django. We set up the database, the staticfiles application, and a base template. We also created a user accounts app with registration, login/logout views, and navigation links.

Resources for Article:

Further resources on this subject:
- Setting up a Complete Django E-commerce store in 30 minutes [article]
- "D-J-A-N-G-O... The D is silent." - Authentication in Django [article]
- Test-driven API Development with Django REST Framework [article]

Create Collisions on Objects

Packt
19 Feb 2016
10 min read
In this article by Julien Moreau-Mathis, the author of the book Babylon.JS Essentials, you will learn about creating and customizing scenes using the materials and meshes provided by the designer. (For more resources related to this topic, see here.)

In this article, let's play with the gameplay itself and create interactions with the objects in the scene by creating collisions and physics simulations. Collisions are important to add realism to your scenes, for example if you want to walk around without crossing through walls. Moreover, to make your scenes more alive, let's introduce physics simulation with Babylon.js and, finally, see how easy it is to integrate these two notions into your scenes.

We are going to discuss the following topics:

- Checking collisions in a scene
- Physics simulation

Checking collisions in a scene
Starting from the concept, configuring and checking collisions in a scene can be done without any mathematical notions. We all have an intuition for gravity and ellipsoids, even if the word "ellipsoid" is not necessarily familiar.

How collisions work in Babylon.js?
Let's start with a simple scene composed of a camera, a light, a plane, and a box. The goal is to block the current camera of the scene so that it does not cross the objects in the scene. In other words, we want the camera to stay above the plane and to stop moving forward if it collides with the box.

To perform these actions, the collisions system provided by Babylon.js uses what we call a collider. A collider is based on the bounding box of an object. A bounding box simply represents the minimum and maximum positions of a mesh's vertices and is computed automatically by Babylon.js. This means that you don't have to do anything particularly tricky to configure collisions on objects. All the collisions are based on these bounding boxes of each mesh to determine whether the camera should be blocked or not. You'll also see that the physics engines use the same kind of collider to simulate physics.

If the geometry of a mesh has to be changed by modifying the vertices' positions, the bounding box information can be refreshed by calling the following code:

myMesh.refreshBoundingInfo();

To draw the bounding box of an object, simply set the .showBoundingBox property of the mesh instance to true with the following code:

myMesh.showBoundingBox = true;

Configuring collisions in a scene
Let's practice with the Babylon.js collisions engine itself. You'll see that the engine is particularly well hidden, as you only have to enable checks on scenes and objects.

First, configure the scene to enable collisions and wake the engine up. If the following property is false, all the subsequent properties will be ignored by Babylon.js and the collisions engine will be in stand-by mode. This makes it easy to enable or disable collisions in a scene without modifying any other properties:

scene.collisionsEnabled = true; // Enable collisions in scene

Next, configure the camera to check collisions. The collisions engine will check collisions for all the rendered cameras that have collisions enabled. Here, we have only one camera to configure:

camera.checkCollisions = true; // Check collisions for THIS camera

To finish, just set the .checkCollisions property of each mesh (the plane and the box) to true to activate collisions:

plane.checkCollisions = true;
box.checkCollisions = true;

Now, the collisions engine will check the collisions in the scene for the camera on both the plane and box meshes.
You guessed it: if you want to enable collisions only on the plane and want the camera to walk across the box, you just have to set the .checkCollisions property of the box to false:

plane.checkCollisions = true;
box.checkCollisions = false;

Configuring gravity and ellipsoid
In the previous section, the camera checks collisions on the plane and box but is not subject to a famous force named gravity. To enrich the collisions in your scene, you can apply the gravitational force, for example, to go down the stairs.

First, enable gravity on the camera by setting the .applyGravity property to true:

camera.applyGravity = true; // Enable gravity on the camera

Customize the gravity direction by setting a BABYLON.Vector3 to the .gravity property of your scene:

scene.gravity = new BABYLON.Vector3(0.0, -9.81, 0.0); // To stay on earth

Of course, the gravity in space would be as follows:

scene.gravity = BABYLON.Vector3.Zero(); // No gravity in space

Don't hesitate to play with the values in order to adjust the gravity to your scene's referential.

The last parameter to enrich the collisions in your scene is the camera's ellipsoid. The ellipsoid represents the camera's dimensions in the scene. In other words, it adjusts the collisions according to the X, Y, and Z axes of the ellipsoid (an ellipsoid is represented by a BABYLON.Vector3). For example, say the camera must measure 1.80 m in height (the Y axis), and the minimum distance to collide on the X (sides) and Z (forward) axes must be 1 m. The ellipsoid must then be (X=1, Y=1.8, Z=1). Simply set the .ellipsoid property of the camera as follows:

camera.ellipsoid = new BABYLON.Vector3(1, 1.8, 1);

The default value of the cameras' ellipsoid is (X=0.5, Y=1.0, Z=0.5). As with gravity, don't hesitate to adjust the X, Y, and Z values to your scene's referential.

Physics simulation
Physics simulation is pretty different from the collisions system, as it does not apply to the cameras but to the objects of the scene themselves. In other words, if physics is enabled (and configured) on a box, the box will interact with the other meshes in the scene and will try to reproduce real physical movements. For example, let's take a sphere in the air. If you apply physics to the sphere, it will fall until it collides with another mesh and, according to the given parameters, will tend to bounce and roll around the scene. The example files reproduce this behavior with a sphere that falls onto the box in the middle of the scene.

Enabling physics in Babylon.js
In Babylon.js, physics simulations can be done using plugins only. Two plugins are available, using either the Cannon.js framework or the Oimo.js framework. These two frameworks are included in the Babylon.js GitHub repository in the dist folder.
Each scene has its own physics simulation system, which can be enabled with the following lines:

var gravity = new BABYLON.Vector3(0, -9.81, 0);
scene.enablePhysics(gravity, new BABYLON.OimoJSPlugin());
// or
scene.enablePhysics(gravity, new BABYLON.CannonJSPlugin());

The .enablePhysics(gravity, plugin) function takes the following two arguments:

- The gravitational force of the scene to apply to objects
- The plugin to use:
  - Oimo.js: the new BABYLON.OimoJSPlugin() plugin
  - Cannon.js: the new BABYLON.CannonJSPlugin() plugin

To disable physics in a scene, simply call the .disablePhysicsEngine() function using the following code:

scene.disablePhysicsEngine();

Impostors
Once physics simulation is enabled in a scene, you can configure the physics properties (or physics states) of the scene meshes. To configure the physics properties of a mesh, the BABYLON.Mesh class provides a function named .setPhysicsState(impostor, options). The parameters are as follows:

- The impostor: Each kind of mesh has its own impostor according to its form. For example, a box will tend to slip while a sphere will tend to roll. There are several types of impostors.
- The options: These options define the values used in the physics equations. They comprise the mass, friction, and restitution.

Let's take a box named box with a mass of 1 and set its physics properties:

box.setPhysicsState(BABYLON.PhysicsEngine.BoxImpostor, { mass: 1 });

That's all; the box is now configured to interact with other configured meshes by following the physics equations. Let's imagine that the box is in the air: it will fall until it collides with another configured mesh.

Now, let's take a sphere named sphere with a mass of 2 and set its physics properties:

sphere.setPhysicsState(BABYLON.PhysicsEngine.SphereImpostor, { mass: 2 });

You'll notice that the sphere, which is a particular kind of mesh, has its own impostor (the sphere impostor). In contrast to the box, the physics equations of the plugin will make the sphere roll, while the box will slip on other meshes. According to their masses, if the box and sphere collide, the heavier sphere will tend to push the box harder.

The available impostors in Babylon.js are as follows:

- The box impostor: BABYLON.PhysicsEngine.BoxImpostor
- The sphere impostor: BABYLON.PhysicsEngine.SphereImpostor
- The plane impostor: BABYLON.PhysicsEngine.PlaneImpostor
- The cylinder impostor: BABYLON.PhysicsEngine.CylinderImpostor

In fact, in Babylon.js, the box, plane, and cylinder impostors are treated the same by the Cannon.js and Oimo.js plugins. Regardless of the impostor parameter, the options parameter is the same. The options consist of three parameters, which are as follows:

- The mass: This represents the mass of the mesh in the world. The heavier the mesh is, the harder it is to stop its movement.
- The friction: This represents the force opposed to meshes in contact. In other words, it represents how slippery the mesh is. To give you a reference point, glass has a friction equal to 1. We can consider the friction to be in the range [0, 1].
- The restitution: This represents how much the mesh will bounce on others. Imagine a ping-pong ball and its table. If the table's material is a carpet, the restitution will be small, but if the table's material is glass, the restitution will be maximal. A realistic interval for the restitution is [0, 1].

In the example files, these parameters are set, and if you play with them, you'll see that these three parameters are linked together in the physics equations.
Applying a force (impulse) on a mesh
At any moment, you can apply a new force or impulse to a configured mesh. Let's take an explosion: a box is located at the coordinates (X=0, Y=0, Z=0) and an explosion happens below the box at the coordinates (X=0, Y=-5, Z=0). In real life, the box should be pushed up: this action is possible by calling a function named .applyImpulse(force, contactPoint), provided by the BABYLON.Mesh class. Once the mesh is configured with its options and impostor, you can, at any moment, call this function to apply a force to the object. The parameters are as follows:

- The force: This represents the force on the X, Y, and Z axes
- The contact point: This represents the origin of the force, located on the X, Y, and Z axes

For example, the explosion generates a force only on Y that is equal to 10 (an arbitrary value) and has its origin at the coordinates (X=0, Y=-5, Z=0):

mesh.applyImpulse(new BABYLON.Vector3(0, 10, 0), new BABYLON.Vector3(0, -5, 0));

Once the impulse is applied to the mesh (only once), the box is pushed up and will then fall according to its physics options (mass, friction, and restitution).

Summary
You are now ready to configure collisions in your scenes and simulate physics. Whether through code or through the work of artists, you have learned the pipeline to make your scenes more alive.

Resources for Article:

Further resources on this subject:
- Building Games with HTML5 and Dart [article]
- Fundamentals of XHTML MP in Mobile Web Development [article]
- MODx Web Development: Creating Lists [article]
Using Mock Objects to Test Interactions

Packt
23 Apr 2015
25 min read
In this article by Siddharta Govindaraj, author of the book Test-Driven Python Development, we will look at the Event class. The Event class is very simple: receivers can register with the event to be notified when the event occurs. When the event fires, all the receivers are notified of the event. (For more resources related to this topic, see here.)

A more detailed description is as follows:

- Event classes have a connect method, which takes a method or function to be called when the event fires
- When the fire method is called, all the registered callbacks are called with the same parameters that are passed to the fire method

Writing tests for the connect method is fairly straightforward; we just need to check that the receivers are being stored properly. But how do we write the tests for the fire method? This method does not change any state or store any value that we can assert on. The main responsibility of this method is to call other methods. How do we test that this is being done correctly?

This is where mock objects come into the picture. Unlike ordinary unit tests that assert on object state, mock objects are used to test that the interactions between multiple objects occur as they should.

Hand writing a simple mock
To start with, let us look at the code for the Event class so that we can understand what the tests need to do. The following code is in the file event.py in the source directory:

class Event:
    """A generic class that provides signal/slot functionality"""

    def __init__(self):
        self.listeners = []

    def connect(self, listener):
        self.listeners.append(listener)

    def fire(self, *args, **kwargs):
        for listener in self.listeners:
            listener(*args, **kwargs)

The way this code works is fairly simple. Classes that want to get notified of the event should call the connect method and pass a function. This registers the function for the event. Then, when the event is fired using the fire method, all the registered functions are notified of the event. The following is a walk-through of how this class is used:

>>> def handle_event(num):
...     print("I got number {0}".format(num))
...
>>> event = Event()
>>> event.connect(handle_event)
>>> event.fire(3)
I got number 3
>>> event.fire(10)
I got number 10

As you can see, every time the fire method is called, all the functions that were registered with the connect method get called with the given parameters.

So, how do we test the fire method? The walk-through above gives a hint. What we need to do is create a function, register it using the connect method, and then verify that the function got notified when the fire method was called. The following is one way to write such a test:

import unittest
from ..event import Event


class EventTest(unittest.TestCase):
    def test_a_listener_is_notified_when_an_event_is_raised(self):
        called = False

        def listener():
            nonlocal called
            called = True

        event = Event()
        event.connect(listener)
        event.fire()
        self.assertTrue(called)

Put this code into the test_event.py file in the tests folder and run the test. The test should pass. The following is what we are doing:

- First, we create a variable named called and set it to False.
- Next, we create a dummy function. When the function is called, it sets called to True.
- Finally, we connect the dummy function to the event and fire the event.
- If the dummy function was successfully called when the event was fired, then the called variable will have been changed to True, and we assert that the variable is indeed what we expected.

The dummy function we created above is an example of a mock. A mock is simply an object that is substituted for a real object in the test case. The mock records some information, such as whether it was called and what parameters were passed, and we can then assert that the mock was called as expected.

Talking about parameters, we should write a test that checks that the parameters are being passed correctly. The following is one such test:

    def test_a_listener_is_passed_right_parameters(self):
        params = ()

        def listener(*args, **kwargs):
            nonlocal params
            params = (args, kwargs)

        event = Event()
        event.connect(listener)
        event.fire(5, shape="square")
        self.assertEquals(((5, ), {"shape": "square"}), params)

This test is the same as the previous one, except that it saves the parameters, which are then used in the assert to verify that they were passed properly.

At this point, we can see some repetition coming up in the way we set up the mock function and then save some information about the call. We can extract this code into a separate class as follows:

class Mock:
    def __init__(self):
        self.called = False
        self.params = ()

    def __call__(self, *args, **kwargs):
        self.called = True
        self.params = (args, kwargs)

Once we do this, we can use our Mock class in our tests as follows:

class EventTest(unittest.TestCase):
    def test_a_listener_is_notified_when_an_event_is_raised(self):
        listener = Mock()
        event = Event()
        event.connect(listener)
        event.fire()
        self.assertTrue(listener.called)

    def test_a_listener_is_passed_right_parameters(self):
        listener = Mock()
        event = Event()
        event.connect(listener)
        event.fire(5, shape="square")
        self.assertEquals(((5, ), {"shape": "square"}), listener.params)

What we have just done is create a simple mocking class that is quite lightweight and good for simple uses. However, there are often times when we need much more advanced functionality, such as mocking a series of calls or checking the order of specific calls. Fortunately, Python has us covered with the unittest.mock module that is supplied as a part of the standard library.

Using the Python mocking framework
The unittest.mock module provided by Python is an extremely powerful mocking framework, yet at the same time it is very easy to use. Let us redo our tests using this library. First, we need to import the mock module at the top of our file as follows:

from unittest import mock

Next, we rewrite our first test as follows:

class EventTest(unittest.TestCase):
    def test_a_listener_is_notified_when_an_event_is_raised(self):
        listener = mock.Mock()
        event = Event()
        event.connect(listener)
        event.fire()
        self.assertTrue(listener.called)

The only change that we've made is to replace our own custom Mock class with the mock.Mock class provided by Python. That is it. With that single line change, our test is now using the inbuilt mocking class. The unittest.mock.Mock class is the core of the Python mocking framework. All we need to do is instantiate the class and pass it in where it is required. The mock will record whether it was called in the called instance variable.
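Here's a quick interactive sketch of that recording behavior, outside of any test case:

>>> from unittest import mock
>>> listener = mock.Mock()
>>> listener.called
False
>>> _ = listener(5, shape="square")
>>> listener.called
True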
How do we check that the right parameters were passed? Let us look at the rewrite of the second test, as follows:

    def test_a_listener_is_passed_right_parameters(self):
        listener = mock.Mock()
        event = Event()
        event.connect(listener)
        event.fire(5, shape="square")
        listener.assert_called_with(5, shape="square")

The mock object automatically records the parameters that were passed in. We can assert on the parameters by using the assert_called_with method on the mock object. The method will raise an assertion error if the parameters don't match what was expected. In case we are not interested in testing the parameters (maybe we just want to check that the method was called), we can pass the value mock.ANY. This value will match any parameter passed in.

There is a subtle difference in the way normal assertions are called compared to assertions on mocks. Normal assertions are defined as a part of the unittest.TestCase class. Since our tests inherit from that class, we call the assertions on self, for example, self.assertEquals. On the other hand, the mock assertion methods are a part of the mock object, so you call them on the mock object, for example, listener.assert_called_with.

Mock objects have the following four assertions available out of the box:

- assert_called_with: This method asserts that the last call was made with the given parameters
- assert_called_once_with: This assertion checks that the method was called exactly once and with the given parameters
- assert_any_call: This checks that the given call was made at some point during the execution
- assert_has_calls: This assertion checks that a list of calls occurred

The four assertions are subtly different, and that shows up when the mock has been called more than once. The assert_called_with method only checks the last call, so if there was more than one call, the previous calls will not be asserted. The assert_any_call method will check whether a call with the given parameters occurred anytime during execution. The assert_called_once_with assertion asserts a single call, so if the mock was called more than once during execution, this assert will fail. The assert_has_calls assertion can be used to assert that a set of calls with the given parameters occurred. Note that there might have been more calls than what we checked for in the assertion, but the assertion will still pass as long as the given calls are present.

Let us take a closer look at the assert_has_calls assertion. Here is how we can write the same test using this assertion:

    def test_a_listener_is_passed_right_parameters(self):
        listener = mock.Mock()
        event = Event()
        event.connect(listener)
        event.fire(5, shape="square")
        listener.assert_has_calls([mock.call(5, shape="square")])

The mocking framework internally uses _Call objects to record calls. The mock.call function is a helper to create these objects. We just call it with the expected parameters to create the required call objects. We can then use these objects in the assert_has_calls assertion to assert that the expected call occurred. This method is useful when the mock was called multiple times and we want to assert only some of the calls.
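To see the difference between these assertions in action, here is a small sketch with a mock that is called twice:

from unittest import mock

m = mock.Mock()
m(1)
m(2)

m.assert_called_with(2)   # passes: only the last call is checked
m.assert_any_call(1)      # passes: this call happened at some point
m.assert_has_calls([mock.call(1), mock.call(2)])  # passes: both calls are present

# m.assert_called_once_with(2) would raise an AssertionError here,
# because the mock was called twice during execution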
Mocking objects
While testing the Event class, we only needed to mock out single functions. A more common use of mocking is to mock a class. Take a look at the implementation of the Alert class in the following:

class Alert:
    """Maps a Rule to an Action, and triggers the action if the rule
    matches on any stock update"""

    def __init__(self, description, rule, action):
        self.description = description
        self.rule = rule
        self.action = action

    def connect(self, exchange):
        self.exchange = exchange
        dependent_stocks = self.rule.depends_on()
        for stock in dependent_stocks:
            exchange[stock].updated.connect(self.check_rule)

    def check_rule(self, stock):
        if self.rule.matches(self.exchange):
            self.action.execute(self.description)

Let's break down how this class works:

- The Alert class takes a Rule and an Action in the initializer.
- When the connect method is called, it takes all the dependent stocks and connects to their updated event.
- The updated event is an instance of the Event class that we saw earlier. Each Stock class has an instance of this event, and it is fired whenever a new update is made to that stock.
- The listener for this event is the self.check_rule method of the Alert class.
- In this method, the alert checks whether the new update caused the rule to be matched.
- If the rule matched, it calls the execute method on the Action. Otherwise, nothing happens.

This class has a few requirements, as shown in the following, that need to be met. Each of these needs to be made into a unit test:

- If a stock is updated, the class should check if the rule matches
- If the rule matches, then the corresponding action should be executed
- If the rule doesn't match, then nothing happens

There are a number of different ways in which we could test this; let us go through some of the options.

The first option is not to use mocks at all. We could create a rule, hook it up to a test action, and then update the stock and verify that the action was executed. The following is what such a test would look like:

import unittest
from datetime import datetime
from unittest import mock

from ..alert import Alert
from ..rule import PriceRule
from ..stock import Stock


class TestAction:
    executed = False

    def execute(self, description):
        self.executed = True


class AlertTest(unittest.TestCase):
    def test_action_is_executed_when_rule_matches(self):
        exchange = {"GOOG": Stock("GOOG")}
        rule = PriceRule("GOOG", lambda stock: stock.price > 10)
        action = TestAction()
        alert = Alert("sample alert", rule, action)
        alert.connect(exchange)
        exchange["GOOG"].update(datetime(2014, 2, 10), 11)
        self.assertTrue(action.executed)

This is the most straightforward option, but it requires a bit of code to set up, and there is the TestAction class that we need to create just for the test case. Instead of creating a test action, we could replace it with a mock action. We can then simply assert on the mock that it got executed. The following code shows this variation of the test case:

    def test_action_is_executed_when_rule_matches(self):
        exchange = {"GOOG": Stock("GOOG")}
        rule = PriceRule("GOOG", lambda stock: stock.price > 10)
        action = mock.MagicMock()
        alert = Alert("sample alert", rule, action)
        alert.connect(exchange)
        exchange["GOOG"].update(datetime(2014, 2, 10), 11)
        action.execute.assert_called_with("sample alert")

A couple of observations about this test. If you notice, the action is not the usual Mock object that we have been using so far, but a MagicMock object.
A MagicMock object is like a Mock object, but it has special support for Python's magic methods, such as __str__ and __len__, which are present on all classes. If we don't use MagicMock, we may sometimes get errors or strange behavior if the code uses any of these methods. The following example illustrates the difference:

>>> from unittest import mock
>>> mock_1 = mock.Mock()
>>> mock_2 = mock.MagicMock()
>>> len(mock_1)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: object of type 'Mock' has no len()
>>> len(mock_2)
0

In general, we will be using MagicMock in most places where we need to mock a class. Using Mock is a good option when we need to mock standalone functions, or in rare situations where we specifically don't want a default implementation for the magic methods.

The other observation about the test is the way methods are handled. In the test above, we created a mock action object, but we didn't specify anywhere that this mock class should contain an execute method or how it should behave. In fact, we don't need to. When a method or attribute is accessed on a mock object, Python conveniently creates a mock method and adds it to the mock class. Therefore, when the Alert class calls the execute method on our mock action object, that method is added to our mock action. We can then check that the method was called by asserting on action.execute.called.

The downside of Python's behavior of automatically creating mock methods when they are accessed is that a typo or a change in interface can go unnoticed. For example, suppose we rename the execute method in all the Action classes to run. But if we run our test cases, they still pass. Why do they pass? Because the Alert class calls the execute method, and the test only checks that the execute method was called, which it was. The test does not know that the name of the method has been changed in all the real Action implementations and that the Alert class will not work when integrated with the actual actions.

To avoid this problem, Python supports using another class or object as a specification. When a specification is given, the mock object only creates the methods that are present in the specification. All other method or attribute accesses will raise an error. Specifications are passed to the mock at initialization time via the spec parameter. Both the Mock and MagicMock classes support setting a specification. The following code example shows the difference when the spec parameter is set compared to a default Mock object:

>>> from unittest import mock
>>> class PrintAction:
...     def run(self, description):
...         print("{0} was executed".format(description))
...

>>> mock_1 = mock.Mock()
>>> mock_1.execute("sample alert") # Does not give an error
<Mock name='mock.execute()' id='54481752'>

>>> mock_2 = mock.Mock(spec=PrintAction)
>>> mock_2.execute("sample alert") # Gives an error
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Python34\lib\unittest\mock.py", line 557, in __getattr__
    raise AttributeError("Mock object has no attribute %r" % name)
AttributeError: Mock object has no attribute 'execute'

Notice in the above example that mock_1 goes ahead and executes the execute method without any error, even though the method has been renamed to run in the PrintAction. On the other hand, by giving a spec, the call to the nonexistent execute method raises an exception.
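As a related convenience (not used in the book's example code), the mock module also provides the create_autospec function, which builds a mock whose attributes, and even their call signatures, are taken from the spec object. A quick sketch:

from unittest import mock

class PrintAction:
    def run(self, description):
        print("{0} was executed".format(description))

action = mock.create_autospec(PrintAction)
action.run("sample alert")                    # fine: run exists on the spec
action.run.assert_called_with("sample alert")
# action.execute("sample alert") would raise an AttributeError, and calling
# run with the wrong number of arguments would raise a TypeError, because
# autospeccing also checks the signature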
Mocking return values
The second variant above showed how we could use a mock Action class in the test instead of a real one. In the same way, we can also use a mock rule instead of creating a PriceRule in the test. The alert calls the rule to see whether the new stock update caused the rule to be matched. What the alert does depends on whether the rule returns True or False. All the mocks we've created so far have not had to return a value; we were just interested in whether the right call was made or not. If we mock the rule, then we will have to configure it to return the right value for the test. Fortunately, Python makes that very simple to do. All we have to do is set the return value as a parameter in the constructor of the mock object, as follows:

>>> matches = mock.Mock(return_value=True)
>>> matches()
True
>>> matches(4)
True
>>> matches(4, "abcd")
True

As we can see above, the mock just blindly returns the set value, irrespective of the parameters. Even the type or number of parameters is not considered. We can use the same procedure to set the return value of a method in a mock object, as follows:

>>> rule = mock.MagicMock()
>>> rule.matches = mock.Mock(return_value=True)
>>> rule.matches()
True
>>>

There is another way to set the return value, which is very convenient when dealing with methods in mock objects. Each mock object has a return_value attribute. We simply set this attribute to the return value, and every call to the mock will return that value, as shown in the following:

>>> from unittest import mock
>>> rule = mock.MagicMock()
>>> rule.matches.return_value = True
>>> rule.matches()
True
>>>

In the example above, the moment we access rule.matches, Python automatically creates a mock matches object and puts it in the rule object. This allows us to directly set the return value in one statement, without having to create a mock for the matches method.

Now that we've seen how to set the return value, we can go ahead and change our test to use a mocked rule object, as shown in the following:

    def test_action_is_executed_when_rule_matches(self):
        exchange = {"GOOG": Stock("GOOG")}
        rule = mock.MagicMock(spec=PriceRule)
        rule.matches.return_value = True
        rule.depends_on.return_value = {"GOOG"}
        action = mock.MagicMock()
        alert = Alert("sample alert", rule, action)
        alert.connect(exchange)
        exchange["GOOG"].update(datetime(2014, 2, 10), 11)
        action.execute.assert_called_with("sample alert")

There are two calls that the Alert makes to the rule: one to the depends_on method and the other to the matches method. We set the return value for both of them, and the test passes.

In case no return value is explicitly set for a call, the default return value is a new mock object. The mock object is different for each method that is called, but is consistent for a particular method. This means that if the same method is called multiple times, the same mock object will be returned each time.
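We can verify that default-return behavior interactively with a quick sketch:

>>> from unittest import mock
>>> m = mock.MagicMock()
>>> m.matches() is m.matches()
True
>>> m.matches() is m.depends_on()
False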
Mocking side effects
Finally, we come to the Stock class. This is the final dependency of the Alert class. We're currently creating Stock objects in our test, but we could replace them with mock objects just like we did for the Action and PriceRule classes. The Stock class is again slightly different in behavior from the other two mock objects. The update method doesn't just return a value; its primary behavior in this test is to trigger the updated event. Only if this event is triggered will the rule check occur. In order to do this, we must tell our mock stock class to fire the event when the update method is called. Mock objects have a side_effect attribute to enable us to do just this.

There are many reasons we might want to set a side effect. Some of them are as follows:

- We may want to call another method, like in the case of the Stock class, which needs to fire the event when the update method is called.
- To raise an exception: this is particularly useful when testing error situations. Some errors, such as a network timeout, might be very difficult to simulate, and it is better to test using a mock that simply raises the appropriate exception.
- To return multiple values: these may be different values each time the mock is called, or specific values depending on the parameters passed.

Setting the side effect is just like setting the return value. The only difference is that the side effect is a lambda function. When the mock is executed, the parameters are passed to the lambda function and the lambda is executed. The following is how we would use this with a mocked-out Stock class:

    def test_action_is_executed_when_rule_matches(self):
        goog = mock.MagicMock(spec=Stock)
        goog.updated = Event()
        goog.update.side_effect = lambda date, value: \
            goog.updated.fire(self)
        exchange = {"GOOG": goog}
        rule = mock.MagicMock(spec=PriceRule)
        rule.matches.return_value = True
        rule.depends_on.return_value = {"GOOG"}
        action = mock.MagicMock()
        alert = Alert("sample alert", rule, action)
        alert.connect(exchange)
        exchange["GOOG"].update(datetime(2014, 2, 10), 11)
        action.execute.assert_called_with("sample alert")

So what is going on in that test?

- First, we create a mock of the Stock class instead of using the real one.
- Next, we add in the updated event. We need to do this because the Stock class creates this attribute at runtime in the __init__ scope. Because the attribute is set dynamically, MagicMock does not pick up the attribute from the spec parameter. We set an actual Event object here. We could set it as a mock as well, but it is probably overkill to do that.
- Finally, we set the side effect for the update method in the mock stock object. The lambda takes the two parameters that the method does. In this particular example, we just want to fire the event, so the parameters aren't used in the lambda. In other cases, we might want to perform different actions based on the values of the parameters. Setting the side_effect attribute allows us to do that.

Just like with the return_value attribute, the side_effect attribute can also be set in the constructor. Run the test and it should pass.

The side_effect attribute can also be set to an exception or a list. If it is set to an exception, then the given exception will be raised when the mock is called, as shown in the following:

>>> m = mock.Mock()
>>> m.side_effect = Exception()
>>> m()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Python34\lib\unittest\mock.py", line 885, in __call__
    return _mock_self._mock_call(*args, **kwargs)
  File "C:\Python34\lib\unittest\mock.py", line 941, in _mock_call
    raise effect
Exception
If it is set to a list, then the mock will return the next element of the list each time it is called. This is a good way to mock a function that has to return different values each time it is called, as shown in the following:

>>> m = mock.Mock()
>>> m.side_effect = [1, 2, 3]
>>> m()
1
>>> m()
2
>>> m()
3
>>> m()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Python34\lib\unittest\mock.py", line 885, in __call__
    return _mock_self._mock_call(*args, **kwargs)
  File "C:\Python34\lib\unittest\mock.py", line 944, in _mock_call
    result = next(effect)
StopIteration

As we have seen, the mocking framework's method of handling side effects using the side_effect attribute is very simple, yet quite powerful.
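One more sketch before moving on: because the parameters are passed to the side_effect callable, we can return different values depending on the arguments. The price_lookup function here is hypothetical, purely for illustration:

from unittest import mock

price_lookup = mock.Mock()
price_lookup.side_effect = lambda symbol: 11 if symbol == "GOOG" else 5

price_lookup("GOOG")   # returns 11
price_lookup("AAPL")   # returns 5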
Like mocks, they record the calls made to them so that we can assert on them later, but unlike the other three kinds of test double, spies do not replace any functionality: after recording a call, they pass it through to the original code, so the execution pattern is unchanged.
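Python's mock library has no dedicated spy class, but the wraps argument to Mock gives spy-like behavior: calls are recorded and then delegated to the wrapped object. A minimal sketch (the Calculator class is invented for the example):

>>> from unittest import mock
>>> class Calculator:
...     def add(self, a, b):
...         return a + b
...
>>> spy = mock.Mock(wraps=Calculator())
>>> spy.add(2, 3)  # the real method still runs and returns its result
5
>>> spy.add.assert_called_with(2, 3)  # and the call was recorded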
Summary

In this article, you looked at how to use mocks to test interactions between objects. You saw how to write mocks by hand, and then how to use the mocking framework provided in the Python standard library.

Resources for Article:

Further resources on this subject:
Analyzing a Complex Dataset [article]
Solving problems – closest good restaurant [article]
Importing Dynamic Data [article]
RESTful Java Web Services Design
Packt
18 Nov 2009
5 min read
We'll leave the RESTful implementation for a later article. Our sample application is a micro-blogging web service (similar to Twitter), where users create accounts and then post entries. While designing our application, we'll also define a set of steps that can be applied to designing any software system that needs to be deployed as a RESTful web service.

Designing a RESTful web service

Designing RESTful web services is not that different from designing traditional web applications. We still have business requirements, we still have users who want to do things with data, and we still have hardware constraints and software architectures to deal with. The main difference, however, is that we look at the requirements to tease out resources, and we forget about the specific actions to be taken on those resources. We can think of RESTful web service design as being similar to Object Oriented Design (OOD): in OOD, we try to identify objects from the data we want to represent, together with the actions that an object can have. But the similarities end at the data structure definition, because with RESTful web services we already have specific calls that are part of the protocol itself.

The underlying RESTful web service design principles can be summarized in the following four steps:

1. Requirements gathering—this step is similar to traditional software requirements gathering practices.
2. Resource identification—this step is similar to OOD, where we identify objects, except that we don't worry about messaging between objects.
3. Resource representation definition—because we exchange representations between clients and servers, we should define what kind of representation to use. Typically we use XML, but JSON has gained popularity. That's not to say that we can't use any other form of resource representation—on the contrary, we could use XHTML or even a binary representation, though we let the requirements guide our choice.
4. URI definition—with resources in place, we need to define the API, which consists of the URIs that clients and servers use to exchange resource representations.

This design process is not static. These are iterative steps that gravitate around resources. Let's say that during the URI definition step we discover that one of the URI's responses is not covered by any of the resources we have identified; then we go back and define a suitable resource. In most cases, however, we find that the resources we already have cover most of our needs, and we just have to combine existing resources into a meta-resource to take care of the new requirement.

Requirements of sample web service

The RESTful web service we design in this article is a social networking web application similar to Twitter. We follow an OOD process mixed with an agile philosophy for designing and coding our applications. This means that we create just enough documentation to be useful, but not so much that we spend an inordinate amount of time deciphering it during the implementation phase. As with any application, we begin by listing the main business requirements, for which we have the following use cases (these are the main functions of our application):

- A web user creates an account with a username and a password (creating an account means that the user is now registered).
- Registered users post blog entries to their accounts. We limit messages to 140 characters.
- Registered and non-registered users view all blog entries.
- Registered and non-registered users view user profiles.
- Registered users update their user profiles; for example, a user updates their password.
- Registered and non-registered users search for terms in all blog entries.

However simple this example may be, social networking sites work on these same principles: users sign up for accounts to post personal updates or information. Our intention here, though, is not to fully replicate Twitter or to build a complete social networking application; what we are trying to outline is a set of requirements that will test our understanding of RESTful web service design and implementation. The core value of social networking sites lies in the connections between users and what those connections mean within the community: users tend to follow people with similar interests, so the connections between users create targeted distribution networks. In the graph theory sense, these connections form a random graph, where nodes are users and edges are connections between them. This is what is referred to as the social graph.

Resource identification

Out of the use cases listed above, we now need to define the service's resources. From reading the requirements, we see that we need users and messages. Users appear in two ways: a single user and a list of users. Additionally, users have the ability to post blog entries in the form of messages of no more than 140 characters, which means we need resources for a single message and a list of messages. In sum, we identify the following resources:

- User
- List of users
- Message
- List of messages
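Although the URI definition step is left for later, here is a rough sketch of what it might produce for these four resources. The paths and methods below are illustrative assumptions based on the use cases above, not a final API:

GET  /users              (retrieve the list of users)
POST /users              (create an account)
GET  /users/{username}   (retrieve a single user profile)
GET  /messages           (retrieve the list of messages)
POST /messages           (post a new 140-character entry)
GET  /messages/{id}      (retrieve a single message)

In a design like this, the search use case could be handled by a query parameter on the list resource, such as /messages?q=term, rather than by a separate resource.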
Exceptions and Logging in Apache Struts 2
Packt
16 Oct 2009
8 min read
Handling exceptions in Struts 2

Struts 2 provides a declarative exception handling mechanism that can be configured globally (for an entire package) or for a specific action. This capability can reduce the amount of exception handling code necessary inside actions under some circumstances, most notably when underlying systems, such as our services, throw runtime exceptions (exceptions that we don't need to wrap in a try/catch or declare that a method throws). To sum it up, we can map exception classes to Struts 2 results. The exception handling mechanism depends on the exception interceptor; if we modify our interceptor stack, we must keep that in mind. In general, removing the exception interceptor isn't recommended.

Global exception mappings

Setting up a global exception handler result is as easy as adding a global exception mapping element to a Struts 2 configuration file package definition and configuring its result. For example, to catch generic runtime exceptions, we could add the following:

<global-exception-mappings>
    <exception-mapping result="runtime"
                       exception="java.lang.RuntimeException"/>
</global-exception-mappings>

This means that if a java.lang.RuntimeException (or a subclass) is thrown, the framework will take us to the runtime result. The runtime result may be declared as a global result, in an action configuration, or both; the most specific result will be used. This implies that an action's result configuration might take precedence over a global exception mapping. For example, consider the global exception mapping shown in the previous code snippet. If we configure an action as follows, and a RuntimeException is thrown, we'll see the locally defined runtime result, even if there is a global runtime result:

<action name="except1"
        class="com.packt.s2wad.ch09.examples.exceptions.Except1">
    <result name="runtime">/WEB-INF/jsps/ch9/exceptions/except1-runtime.jsp</result>
    ...

This can occasionally lead to confusion if a result name happens to collide with a result used for an exception. However, this can happen with global results anyway (a case where a naming convention for global results can be handy).

Action-specific exception mappings

In addition to overriding the result used for an exception mapping, we can also override a global exception mapping on a per-action basis. For example, if an action needs to use a result named runtime2 as the destination of a RuntimeException, we can configure an exception mapping specific to that action:

<action name="except2"
        class="com.packt.s2wad.ch09.examples.exceptions.Except1">
    <exception-mapping result="runtime2"
                       exception="java.lang.RuntimeException"/>
    ...

As with our earlier examples, the runtime2 result may be configured either as a global result or as an action-specific result.

Accessing the exception

We have many options regarding how to handle exceptions. We can show the user a generic "Something horrible has happened!" page, we can take the user back and allow them to retry the operation or refill the input form, and so on. The appropriate course of action depends on the application and, most likely, on the type of exception. We can also display exception-specific information. The exception interceptor pushes an exception encapsulation object onto the stack with two properties: exception and exceptionStack.
While the stack trace is probably not appropriate for user-level error pages, the exception can be used to help create a useful error message, provide I18N property keys for messages (or values used in messages), suggest possible remedies, and so on. The simplest example of accessing the exception property from our JSP is to display the exception message. For example, we might throw a RuntimeException as follows:

throw new RuntimeException("Runtime thrown from ThrowingAction");

Our exception result page could then access the message using the usual property tag (or JSTL, if we're taking advantage of Struts 2's custom request processor):

<s:property value="exception.message"/>

The underlying action is still available on the stack—it's the next object on the value stack. It can be accessed from the JSP as usual, as long as we're not trying to access properties named exception or exceptionStack, which would be masked by the exception holder. (We can still access an action property named exception using OGNL's immediate stack index notation—[1].exception.)

Architecting exceptions and exception handling

We have pretty good control over what is displayed for our application exceptions. It is customizable based on exception type, and may be overridden on a per-action basis. However, to make use of this flexibility, we need a well-thought-out exception policy in our application. There are some general principles we can follow to make this easier.

Checked versus unchecked exceptions

Before we start, let's recall that Java offers two main types of exceptions: checked and unchecked. Checked exceptions are exceptions that we must either declare with a throws keyword or handle in a try/catch block. Unchecked exceptions are runtime exceptions (RuntimeException or one of its subclasses). It isn't always clear which type we should use when writing our code or creating our exceptions. It's been the subject of much debate over the years, but some guidelines have become apparent. One clear thing about checked exceptions is that they aren't always worth the aggravation they cause; they tend to be useful when the caller has a reasonable chance of recovering from the exception. Another issue with checked exceptions is that unless they're caught and wrapped in a more abstract exception (coming up next), we're actually circumventing some of the benefits of encapsulation: when exceptions are declared as being thrown all the way up a call hierarchy, every class involved is forced to know something about the class throwing the exception. It's relatively rare that this exposure is justifiable.

Application-specific exceptions

One of the more useful exception techniques is to create application-specific exception classes. A compelling feature of providing our own exception classes is that we can include useful diagnostic information in the exception class itself. These classes are like any other Java class: they can contain methods, properties, and constructors. For example, let's assume a service that throws an exception when the user calling the service doesn't have access rights to it. One way to create and throw this exception would be as follows:

throw new RuntimeException("User " + user.getId()
    + " does not have access to the 'update' service.");

However, there are some issues with this approach. It's awkward from the Struts 2 standpoint: because it's a RuntimeException, we have only one option for handling the exception, namely mapping a RuntimeException to a result.
Yes, we could map the exception type per-action, but that gets unwieldy, and it doesn't help if we need to map two different kinds of RuntimeException to two different results. Another potential issue arises if we have a process that examines exceptions and does something useful with them. For example, we might send an e-mail with user details based on the exception above. That would mean parsing the exception message, pulling out the user ID, and using it to get the user details for inclusion in the e-mail.

This is where we'd create an exception class of our own, subclassed from RuntimeException. The class would encapsulate exception-related information and provide a mechanism to differentiate between the different types of exceptions. A third benefit comes when we wrap lower-level exceptions, for example, a Spring-related exception. Rather than create a Spring dependency up the entire call chain, we wrap it in our own exception, abstracting away the lower-level exception. This allows us to change the underlying implementation and aggregate differing exception types under one (or more) application-specific exceptions.

One way of creating the above scenario would be to create an exception class that takes a User object and a message as its constructor arguments:

package com.packt.s2wad.ch09.exceptions;

public class UserAccessException extends RuntimeException {
    private User user;
    private String msg;

    public UserAccessException(User user, String msg) {
        this.user = user;
        this.msg = msg;
    }

    public String getMessage() {
        return "User " + user.getId() + " " + msg;
    }
}

We can now create an exception mapping for a UserAccessException (as well as a generic RuntimeException if we need it). In addition, the exception carries with it the information needed to create useful messages:

throw new UserAccessException(user,
    "does not have access to the 'update' service.");

The code could be made even safer, in the sense of ensuring that the exception is only used in the ways in which it is intended, by adding an enum to the class to encapsulate the reasons the exception can be thrown, including the text for each reason. We'll add the following inside our UserAccessException:

public enum Reason {
    NO_ROLE("does not have role"),
    NO_ACCESS("does not have access");

    private String message;

    private Reason(String message) {
        this.message = message;
    }

    public String getMessage() { return message; }
}

We'll also modify the constructor and the getMessage() method to use the new Reason enumeration:

public UserAccessException(User user, Reason reason) {
    this.user = user;
    this.reason = reason;
}

public String getMessage() {
    return String.format("User %d %s.",
        user.getId(), reason.getMessage());
}

Now, when we throw the exception, we know we're using the exception class correctly (at least type-wise), and the string message for each exception reason is encapsulated within the exception class itself:

throw new UserAccessException(user,
    UserAccessException.Reason.NO_ACCESS);

With Java 5's static imports, it might make even more sense to create static helper methods in the exception class, leading to concise but understandable code:

throw userHasNoAccess(user);
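The userHasNoAccess helper itself isn't shown here; a plausible sketch, assuming the Reason enum above, is a static factory method inside UserAccessException:

// Hypothetical helper method; with
// "import static com.packt.s2wad.ch09.exceptions.UserAccessException.userHasNoAccess;"
// callers can write "throw userHasNoAccess(user);" as shown above.
public static UserAccessException userHasNoAccess(User user) {
    return new UserAccessException(user, Reason.NO_ACCESS);
}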
Running Multiple MySQL Server Instances in Parallel on a Linux Server
Packt
01 Oct 2010
5 min read
MySQL Admin Cookbook
99 great recipes for mastering MySQL configuration and administration

- Set up MySQL to perform administrative tasks such as efficiently managing data and database schema, improving the performance of MySQL servers, and managing user credentials
- Deal with typical performance bottlenecks and lock-contention problems
- Restrict access sensibly and regain access to your database in case of loss of administrative user credentials
- Part of Packt's Cookbook series: each recipe is a carefully organized sequence of instructions to complete the task as efficiently as possible

Read more about this book

(For more resources on MySQL, see here.)

Introduction

On most Linux setups, MySQL comes as a readymade installation package, making it easy to get started. It is, however, a little more complicated to run multiple instances in parallel, a setup that is often handy for development. This is because, in contrast to Windows, MySQL is usually not installed in a self-contained directory; most Linux distribution packages spread it across the appropriate system folders for programs, configuration files, and so on. You can, however, also install MySQL in its own directory, for example, if you need to use a version not available as a prepared package for your Linux distribution. While this gives you the greatest flexibility, as a downside you will have to take care of wiring up your MySQL server with the operating system manually. For example, you will need to hook up the startup and shutdown scripts with the appropriate facilities of your distribution.

In more recent distributions, you can make use of a tool called mysqld_multi, a solution that lets you set up multiple instances of MySQL daemons with varying configurations. In this recipe, we will show you how to set up two parallel MySQL servers, listening on different TCP ports and using separate data directories for their respective databases.

Getting ready

This recipe is based on an Ubuntu Linux machine with the 8.04 LTS version. mysqld_multi comes with the MySQL packages for that operating system. If you are using another distribution, you need to make sure you have mysqld_multi installed in order to follow along; refer to your distribution's package repositories for information on which packages you need to install. You will also need an operating system user with sufficient privileges to edit the MySQL configuration file—typically /etc/mysql/my.cnf on Ubuntu—and restart services. As for AppArmor or SELinux, we assume these have been disabled before you start, to simplify the process.

How to do it...

1. Locate and open the my.cnf configuration file in a text editor.
2. Create the following two sections in the file:

# mysqld_multi test, instance 1
[mysqld1]
server-id=10001
socket=/var/run/mysqld/mysqld1.sock
port=23306
pid-file=/var/run/mysqld/mysqld1.pid
datadir=/var/lib/mysql1
log_bin=/var/log/mysql1/mysql1-bin.log

# mysqld_multi test, instance 2
[mysqld2]
server-id=10002
socket=/var/run/mysqld/mysqld2.sock
port=33306
pid-file=/var/run/mysqld/mysqld2.pid
datadir=/var/lib/mysql2
log_bin=/var/log/mysql2/mysql2-bin.log

3. Save the configuration file.
4. Issue the following command to verify that the two sections are found by mysqld_multi:

$ sudo mysqld_multi report

5. Initialize the data directories:

$ sudo mysql_install_db --user=mysql --datadir=/var/lib/mysql1
$ sudo mysql_install_db --user=mysql --datadir=/var/lib/mysql2

6. Start both instances and verify that they have been started:

$ sudo mysqld_multi start 1
$ sudo mysqld_multi report

7. Connect to both instances and verify their settings:

$ mysql -S /var/run/mysqld/mysqld1.sock
mysql> SHOW VARIABLES LIKE 'server_id';

How it works...

mysqld_multi uses a single configuration file for all MySQL server instances, but inside that file each instance has its individual [mysqld] section with its specific options. mysqld_multi then takes care of launching the MySQL executable with the correct options from its corresponding section. The sections are distinguished by a positive number directly appended to the word mysqld in the section header. You can specify all the usual MySQL configuration file options in these sections, just as you would for a single instance. Make sure, however, to specify at least the minimum set of options shown in the recipe steps above, as these are required to be unique for every single instance.

There's more...

Some special preparation might be needed, depending on the particular operating system you are using.

Turning off AppArmor / SELinux for Linux distributions

If your system uses the AppArmor or SELinux security features, you will need to make sure these are either turned off while you try this out, or configured (for permanent use once your configuration has been finished) to allow access to the newly defined directories and files. See the documentation for your respective Linux distribution for more details on how to do this.

Windows

On Windows, running multiple server instances is usually more straightforward. MySQL is normally installed in a separate, self-contained folder. To run two or more independent server instances, you only need to install a Windows service for each of them and point each service to an individual configuration file.

Considering the alternative MySQL Sandbox project

As an alternative to mysqld_multi, you might want to have a look at MySQL Sandbox, which offers a different approach to hosting multiple independent MySQL installations on a single operating system. While mysqld_multi manages multiple configurations in a single file, MySQL Sandbox aims at completely separating MySQL installations from each other, easily allowing even several MySQL releases to run side by side. For more details, visit the project's website at http://mysqlsandbox.net.

Preventing invalid date values from being stored in DATE or DATETIME columns

In this recipe, we will show you how to configure MySQL so that invalid dates are rejected when a client attempts to store them in a DATE or DATETIME column, using a combination of flags for the SQL mode setting. See the There's more... section of this recipe for some more detailed information on the server mode setting in general and on how to use it on a per-session basis.
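The recipe steps are cut off in this excerpt, but as a hedged illustration of the kind of SQL mode flags it is describing, the following session-level sketch uses standard MySQL modes (the exact combination the recipe recommends may differ, and the events table is invented for the example):

SET SESSION sql_mode = 'STRICT_ALL_TABLES,NO_ZERO_DATE,NO_ZERO_IN_DATE';

-- With these flags set, an invalid calendar date is rejected with an
-- error instead of being silently coerced to the zero date:
INSERT INTO events (happened_on) VALUES ('2014-02-30');
-- ERROR 1292 (22007): Incorrect date value: '2014-02-30' ...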
Getting ready
Mapping Requirements for a Modular Web Shop App
Packt
07 Sep 2016
11 min read
In this article by Branko Ajzele, author of the book Modular Programming with PHP 7, we will see that building a software application from the ground up requires diverse skills, as it involves more than just writing down code. Writing down functional requirements and sketching out a wireframe are often among the first steps in the process, especially if we are working on a client project. These steps are usually done by roles other than the developer, as they require certain insight into the client's business case, user behavior, and the like. Being part of a larger development team means that we, as developers, usually get requirements, designs, and wireframes, and then start coding against them. Delivering projects by oneself makes it tempting to skip these steps and get our hands started with code alone. More often than not, this is an unproductive approach. Laying down functional requirements and a few wireframes is a skill worth knowing and following, even if one is just a developer.

(For more resources related to this topic, see here.)

Later in this article, we will go over a high-level application requirement, alongside a rough wireframe. In this article, we will be covering the following topics:

- Defining application requirements
- Wireframing
- Defining a technology stack

Defining application requirements

We need to build a simple but responsive web shop application. In order to do so, we need to lay out some basic requirements. The types of requirements we are interested in at the moment are those that touch upon interactions between a user and a system. The two most common techniques for specifying requirements with regard to user interaction are use cases and user stories. User stories are a less formal, yet descriptive enough, way to outline these requirements. Using user stories, we encapsulate the customer and store manager actions as mentioned here.

A customer should be able to do the following:

- Browse through static info pages (about us, customer service)
- Reach out to the store owner via a contact form
- Browse the shop categories
- See product details (price, description)
- See the product image with a large view (zoom)
- See items on sale
- See best sellers
- Add the product to the shopping cart
- Create a customer account
- Update customer account info
- Retrieve a lost password
- Check out
- See the total order cost
- Choose among several payment methods
- Choose among several shipment methods
- Get an email notification after an order has been placed
- Check order status
- Cancel an order
- See order history

A store manager should be able to do the following:

- Create a product (with the minimum following attributes: title, price, sku, url-key, description, qty, category, and image)
- Upload a picture to the product
- Update and delete a product
- Create a category (with the minimum following attributes: title, url-key, description, and image)
- Upload a picture to a category
- Update and delete a category
- Be notified if a new sales order has been created
- Be notified if a new sales order has been canceled
- See existing sales orders by their statuses
- Update the status of the order
- Disable a customer account
- Delete a customer account

User stories are a convenient, high-level way of writing down application requirements, and they are especially useful in an agile mode of development.

Wireframing

With the user stories laid out, let's shift our focus to actual wireframing. For reasons we will get into later on, our wireframing efforts will be focused on the customer perspective. There are numerous wireframing tools out there, both free and commercial.
Some commercial tools, like https://ninjamock.com, which we will use for our examples, still provide a free plan. This can be very handy for personal projects, as it saves us a lot of time.

The starting point of every web application is its home page. The home page wireframe shows a few sections determining the page structure. The header comprises a logo, a category menu, and a user menu. The requirements don't say anything about category structure, and we are building a simple web shop app, so we are going to stick to a flat category structure without any sub-categories. The user menu will initially show Register and Login links until the user actually logs in, in which case the menu will change as shown in the later wireframes. The content area is filled with best sellers and on-sale items, each of which has an image, title, price, and Add to Cart button defined. The footer area contains links to mostly static content pages and a Contact Us page.

The category page wireframe keeps the header and footer areas conceptually the same across the entire site. The content area now changes to list the products within any given category. Individual product areas are rendered in the same manner as on the home page. Category names and images are rendered above the product list. The width of a category image gives some hints as to what type of images we should be preparing and uploading onto our categories.

The product page wireframe shows the content area listing individual product information. We can see a large image placeholder, title, sku, stock status, price, quantity field, Add to Cart button, and product description being rendered. The IN STOCK message is to be displayed when an item is available for purchase, and OUT OF STOCK when an item is no longer available; this relates to the product quantity attribute. We also need to keep in mind the "See the product image with a large view (zoom)" requirement, where clicking on an image zooms into it.

The register page wireframe shows the content area rendering a registration form. There are many ways we can implement the registration system. More often than not, a minimal amount of information is asked for on a registration screen, as we want to get the user in as quickly as possible. However, let's proceed as if we are trying to get more complete user information right here on the registration screen: we ask not just for an e-mail and password, but for the entire address information as well.

The login page wireframe shows the content area rendering a customer login and forgotten password form. We provide the user with Email and Password fields in the case of login, or just an Email field in the case of a password reset action.

The customer account page wireframe shows the customer account area, visible only to logged-in customers. Here we see a screen with two main pieces of information: the customer information, and the order history. The customer can change their e-mail, password, and other address information from this screen. Furthermore, the customer can view, cancel, and print all of their previous orders. The My Orders table lists orders top to bottom, from newest to oldest.
Though not specified by the user stories, order cancelation should work only on pending orders; this is something we will touch upon in more detail later on. This is also the first screen that shows the state of the user menu when the user is logged in: we can see a dropdown showing the user's full name, My Account, and Sign Out links. Right next to it, we have the Cart (%s) link, which lists the exact quantity in the cart.

The checkout cart page wireframe shows the content area rendering the cart in its current state. If the customer has added any products to the cart, they are listed here. Each item should list the product title, individual price, quantity added, and subtotal. The customer should be able to change quantities and press the Update Cart button to update the state of the cart. If 0 is provided as the quantity, clicking the Update Cart button will remove the item from the cart. Cart quantities should at all times reflect the state of the header menu Cart (%s) link. The right-hand side of the screen shows a quick summary of the current order total value, alongside a big, clear Go to Checkout button.

The checkout cart shipping page wireframe shows the first step of the checkout process, the collection of shipping information. This screen should not be accessible to non-logged-in customers. The customer provides their address details here, alongside a shipping method selection. The shipping method area lists several shipping methods. On the right-hand side, a collapsible order summary section is shown, listing the current items in the cart. Below it, we have the cart subtotal value and a big, clear Next button. The Next button should trigger only when all of the required information is provided, in which case it takes us to the payment information on the checkout cart payment page.

The checkout cart payment page wireframe shows the second step of the checkout process, the collection of payment information. This screen should not be accessible to non-logged-in customers. The customer is presented with a list of available payment methods. For the simplicity of the application, we will focus only on flat/fixed payments, nothing robust such as PayPal or Stripe. On the right-hand side of the screen, we can see a collapsible Order summary section, listing the current items in the cart. Below it, we have the order totals section, individually listing Cart Subtotal, Standard Delivery, Order Total, and a big, clear Place Order button. The Place Order button should trigger only when all of the required information is provided, in which case it takes us to the checkout success page.

The checkout success page wireframe shows the content area outputting the checkout success message. Clearly, this page is only visible to logged-in customers that have just finished the checkout process. The order number is clickable and links to the My Account area, focusing on the exact order. By reaching this screen, both the customer and the store manager should receive a notification email, as per the "Get email notification after order has been placed" and "Be notified if a new sales order has been created" requirements. With this, we conclude our customer-facing wireframes.
In regard to the store manager user story requirements, we will simply define a landing administration interface for now. Using the framework later on, we will get a complete auto-generated CRUD interface for the multiple Add New and List & Manage links. Access to this interface and its links will be controlled by the framework's security component, since this user will not be a customer or, as such, any user in the database.

Defining a technology stack

Once the requirements and wireframes are set, we can focus our attention on the selection of a technology stack. Choosing the right one is, in this case, more a matter of preference, as the application requirements can, for the most part, be met easily by any of the popular PHP frameworks. Our choice, however, falls on Symfony. Aside from the PHP framework, we still need a CSS framework to deliver some structure, styling, and responsiveness within the browser on the client side. Since the focus of this book is on PHP technologies, let's just say we choose the Foundation CSS framework for that task.

Summary

Creating web applications can be a tedious and time-consuming task. Web shops are probably among the most robust and intensive types of application out there, as they encompass a great number of features. There are many components involved in delivering the final product: from the database, through server-side (PHP) code, to client-side (HTML, CSS, and JavaScript) code. In this article, we started off by defining some basic user stories, which in turn defined the high-level application requirements for our small web shop. Adding wireframes to the mix helped us visualize the customer-facing interface, while the store manager interface is to be provided out of the box by the framework. We further glossed over two of the most popular frameworks that support modular application design, turning our attention to Symfony as the server-side technology and Foundation as a client-side responsive framework.

Resources for Article:

Further resources on this subject:
Running Simpletest and PHPUnit [article]
Understanding PHP basics [article]
PHP Magic Features [article]