
How-To Tutorials - Server-Side Web Development

406 Articles

Working with the Vue-router plugin for SPAs

Pravin Dhandre
07 Jun 2018
5 min read
Single-Page Applications (SPAs) are web applications that load a single HTML page and update that page dynamically based on the user's interaction with the app. SPAs use AJAX and HTML5 to create fluid, responsive web applications without constant page reloads. In this tutorial, we will show a step-by-step approach to installing Vue-router, an extremely powerful plugin for building Single-Page Applications. This article is an excerpt from a book written by Mike Street, titled Vue.js 2.x by Example.

Similar to how you add Vue and Vuex to your applications, you can either include the library directly from unpkg, or head to the following URL and download a local copy for yourself: https://unpkg.com/Vue-router. Add the JavaScript to a new HTML document, along with Vue and your application's JavaScript, and create an application container element, your view, as well. In the following example, I have saved the Vue-router JavaScript file as router.js.

Initialize Vue-router in your JavaScript application

We are now ready to add Vue-router and utilize its power. Before we do that, however, we need to create some very simple components which we can load and display based on the URL. As we are going to be loading the components with the router, we don't need to register them with Vue.component; instead, we create JavaScript objects with the same properties as a Vue component.

For this first exercise, we are going to create two pages: a Home page and an About page. Found on most websites, these should help give you context as to what is loading where and when. Create two templates in your HTML page for us to use. Don't forget to encapsulate all your content in one "root" element (represented here by the wrapping div tags). You also need to ensure you declare the templates before your application's JavaScript is loaded. We've created a Home page template with an id of homepage, and an About page, containing some lorem ipsum placeholder text, with an id of about. Next, create two components in your JavaScript which reference these two templates.

The next step is to give the router a placeholder in which to render the components in the view. This is done by using a custom <router-view> HTML element. Using this element gives you control over where your content will render; it allows us to have a header and footer right in the app view, without needing to deal with messy templates or include the components themselves. Add a header, main, and footer element to your app. Give yourself a logo in the header and credits in the footer; in the main HTML element, place the router-view placeholder. Everything in the app view is optional, except the router-view, but it gives you an idea of how the router HTML element can be implemented into a site structure.

The next stage is to initialize Vue-router and instruct Vue to use it. Create a new instance of VueRouter and add it to the Vue instance, similar to how we added Vuex in the previous section. We now need to tell the router about our routes (or paths), and which component it should load when it encounters each one. Create an object inside the Vue-router instance with a key of routes and an array as the value; this array needs to include an object for each route. Each route object contains a path and a component key. The path is a string of the URL that you want to load the component on. Vue-router serves up components on a first-come, first-served basis.
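The individual code listings did not survive in this excerpt, so here is a consolidated, hedged sketch of the setup described above. The template ids (homepage and about) and the router.js filename come from the text; the rest of the markup is an assumption:

```html
<!-- index.html: a minimal sketch of the structure described in the text -->
<div id="app">
  <header><h1>My Logo</h1></header>
  <main>
    <router-view></router-view>
  </main>
  <footer><p>Credits</p></footer>
</div>

<!-- Templates must be declared before the application JavaScript loads -->
<template id="homepage">
  <div>
    <h1>Home</h1>
  </div>
</template>

<template id="about">
  <div>
    <h1>About</h1>
    <p>Lorem ipsum dolor sit amet, consectetur adipiscing elit.</p>
  </div>
</template>

<script src="vue.js"></script>
<script src="router.js"></script>
<script>
  // Plain objects with the same properties as Vue components;
  // no Vue.component registration is needed, as the text notes
  const Home = { template: '#homepage' };
  const About = { template: '#about' };

  const router = new VueRouter({
    routes: [
      { path: '/', component: Home },
      { path: '/about', component: About }
    ]
  });

  new Vue({
    el: '#app',
    router: router
  });
</script>
```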
For example, if there are several routes with the same path, the first one encountered is used. Ensure each route has a leading slash; this tells the router it is a root page and not a sub-page. Press save and view your app in the browser. You should be presented with the content of the Home template component.

If you observe the URL, you will notice that on page load a hash and forward slash (#/) are appended to the path. This is the router creating a method for browsing the components and utilizing the address bar. If you change this to the path of your second route, #/about, you will see the contents of the About component.

Vue-router is also able to use the JavaScript history API to create prettier URLs. For example, yourdomain.com/index.html#/about would become yourdomain.com/about. This is activated by adding mode: 'history' to your VueRouter instance.

With this, you should now be familiar with Vue-router and how to initialize it to create new routes and paths for the different pages of your website. Do check out the book Vue.js 2.x by Example to start developing building blocks for a successful e-commerce website.
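For reference, the history-mode change described above is a one-line addition to the router instance from the earlier sketch:

```js
const router = new VueRouter({
  mode: 'history', // use the HTML5 history API instead of hash (#/) URLs
  routes: [
    { path: '/', component: Home },
    { path: '/about', component: About }
  ]
});
```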


Laravel 5.0 Essentials

Packt
12 Aug 2016
9 min read
In this article by Alfred Nutile, from the book Laravel 5.x Cookbook, we will learn the following topics:

- Setting up Travis to auto deploy when all is passing
- Working with your .env file
- Testing your app on Production with Behat

(For more resources related to this topic, see here.)

Setting up Travis to Auto Deploy when all is Passing

Level 0 of any work should be getting a deployment workflow set up. What that means in this case is that a push to GitHub will trigger our Continuous Integration (CI), and then, if the tests are passing, the CI triggers the deployment. In this example I am not going to hit the URL Forge gives you; instead, I am going to send an artifact to S3 and then have it call CodeDeploy to deploy this artifact.

Getting ready…

You really need to see the section before this one; otherwise, continue knowing that this will make no sense.

How to do it…

1. Install the travis command-line tool in Homestead, as noted in the docs at https://github.com/travis-ci/travis.rb#installation. Make sure to use Ruby 2.x:

sudo apt-get install ruby2.0-dev
sudo gem install travis -v 1.8.2 --no-rdoc --no-ri

2. Then, in the recipe folder, I run the command:

travis setup codedeploy

3. I answer all the questions, keeping in mind that the KEY and SECRET are the ones we made for the IAM user in the previous section, and that the S3 KEY is the filename, not the KEY we used above. So, in my case, I just use the filename latest.zip again, since it sits inside the recipe-artifact bucket.

4. Finally, I open the .travis.yml file, which the above command modifies, and update the before_deploy area so that the zip command ignores my .env file; otherwise, it would overwrite the file on the server.

How it works…

Well, if you did the CodeDeploy section before this one, you will know this is not as easy as it looks. After all the previous work, we are able, with the one command travis setup codedeploy, to securely punch in all the info needed to get a passing build to deploy. So, after phpunit reports that things are passing, we are ready. With that said, we had to have a lot of things in place: an S3 bucket to put the artifact in, permissions with the KEY and SECRET to access the artifact and CodeDeploy, and a CodeDeploy Group and Application to deploy to. All of this is covered in the previous section. After that, it is just the magic of Travis and CodeDeploy working together to make this look so easy.

See also…

- Travis Docs: https://docs.travis-ci.com/user/deployment/codedeploy
- https://github.com/travis-ci/travis.rb
- https://github.com/travis-ci/travis.rb#installation

Working with Your .env File

The workflow around this can be tricky: going from local, to TravisCI, to CodeDeploy, and then to AWS, without storing your info in .env on GitHub, can be a challenge. What I will show here are some tools and techniques to do this well.

Getting ready…

A base install is fine; I will use the existing install to show some tricks around this.

How to do it…

1. Minimize keys by using conventions as much as possible, for example in config/queue.php (so I can have one or more queues) and config/filesystems.php.

2. Use config files as much as possible. For example, for settings that are currently in my .env, I can add a config/marvel.php file (see the sketch after this list) so my .env can be trimmed down to a few KEY=VALUE pairs; later on, I can call those values with Config::get('marvel.MARVEL_API_VERSION') and Config::get('marvel.MARVEL_API_BASE_URL').

3. Now, to easily send settings to Staging or Production, use the EnvDeployer library:

composer require alfred-nutile-inc/env-deployer:dev-master

Follow the readme.md for that library.
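Here is a hedged sketch of what that convention-driven config file might contain; the two key names come from the article, while the env() defaults are assumptions:

```php
<?php
// config/marvel.php: a sketch; the defaults shown here are assumptions
return [
    'MARVEL_API_VERSION'  => env('MARVEL_API_VERSION', 'v1'),
    'MARVEL_API_BASE_URL' => env('MARVEL_API_BASE_URL', 'http://gateway.marvel.com'),
];
```

With this in place, code reads Config::get('marvel.MARVEL_API_VERSION'), and the .env file only needs to carry the values that actually differ per environment.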
Then, as it says in the docs, set up your config file so that it matches the destination IP/URL, username, and path for those services; I end up with the config file config/envdeployer.php. Now, the trick to this library is that you start to enter KEY=VALUE pairs into your .env file stacked on top of each other, for example for my database settings, so that I can type:

php artisan envdeployer:push production

This will push your .env over SSH to production and swap in the related @production values for each KEY they are placed above.

How it works…

The first mindset to follow is conventions: before you put a new KEY=VALUE pair into the .env file, sit back and figure out defaults and conventions around what you already must have in this file. For example, among the must-haves are APP_ENV, and then I always have APP_NAME, so those two together do a lot to name databases, queues, buckets, and so on around those existing KEYs. It really does add up. Whether you are working alone or on a team, focus on these conventions, and then use the config/some.php file workflow to set up defaults. Then libraries like the one I used above push this info around with ease; kind of like Heroku, you can command-line these settings up to the servers as needed.

See also…

- Laravel Validator for the .env file: https://packagist.org/packages/mathiasgrimm/laravel-env-validator
- Laravel 5 Fundamentals: Environments and Configuration: https://laracasts.com/series/laravel-5-fundamentals/episodes/6

Testing Your App on Production with Behat

So your app is now on Production! Start clicking away at hundreds of little and big features to make sure everything went okay, or, better yet, run Behat! Behat on production? It sounds crazy, but I will cover some tips on how to do this, including how to set up some remote conditions and clean up when you are done.

Getting ready…

Any app will do. In my case, I am going to hit production with some tests I made earlier.

How to do it…

1. Tag a Behat test @smoke, or just a Scenario that you know is safe to run on Production, for example features/home/search.feature.

2. Update behat.yml, adding a profile called production, and then run:

vendor/bin/behat -shome_ui --tags=@smoke --profile=production

I run an Artisan command to run all of these. You will then see it hit the production URL, running only the Scenarios you feel are safe for Behat.

3. Another method is to log in as a demo user. After logging in as that user, you can see only the data that is related to that user, so you can test authenticated-level data and interactions. For example, in database/seeds/UserTableSeeder.php, add the demo user to the run method. Then update your .env and push that setting up to Production:

php artisan envdeployer:push production

4. Then we update our behat.yml file to run this test, features/auth/login.feature, even on Production. Now we need to commit our work and push to GitHub so that TravisCI can deploy the changes. Since this is a seed and not a migration, I need to rerun the seeds on production. Since this is a new site and no one has used it, this is fine, but of course this would have been a migration if I had to do it later in the application's life.

5. Now let's run this test from our Vagrant box:

vendor/bin/behat -slogin_ui --profile=production

It fails, though, because in features/bootstrap/LoginPageUIContext.php I am setting up the start of this test for my local database, not the remote database. So I basically need to create a way to set up the state of the world on the remote server.
6. Generate a controller for that setup:

php artisan make:controller SetupBehatController

Update that controller to do the setup, and add its route in app/Http/routes.php. Then update the Behat test features/bootstrap/LoginPageUIContext.php to call it.

7. And we should do some cleanup! First, add a new method to features/bootstrap/LoginPageUIContext.php. Then add that tag to the Scenarios it relates to in features/auth/login.feature. Then, as before, add the controller, app/Http/Controllers/CleanupBehatController.php, and its route. Then push, and we are ready to test this user with fresh state and clean up when the tests are done! In this case, I could test editing the Profile from one state to another.

How it works…

Not too hard! Now we have a workflow that can save us a ton of clicking around Production after every deployment. To begin with, I add the tag @smoke to tests I consider safe for production. What does safe mean? Basically, read-only tests that I know will not affect the site's data. Using the @smoke tag, I have a consistent way to mark Suites or Scenarios as safe to run on Production. But then I take it a step further and create a way to test authenticated state, like making a Favorite or updating a Profile! By using some simple routes and a demo user, I can begin to test many other things on my long list of features to consider after every deploy. All of this happens thanks to the configurability of Behat and how it lets me manage different Profiles and Suites in the behat.yml file! Lastly, I tie into the fact that Behat has hooks: in this case, I tie into @AfterScenario by adding it to my annotation, and I add another tag, @profile, so the hook only runs if the Scenario has that tag. That is it! Thanks to Behat, hooks, and how easy it is to make routes in Laravel, I can easily take care of a large percentage of what would otherwise be a tedious process after every deployment.

See also…

- Behat docs on hooks: http://docs.behat.org/en/v3.0/guides/3.hooks.html
- Saucelabs, for the behat.yml settings later, so you can test your site on numerous devices: https://saucelabs.com/

Summary

This article covered setting up Travis, working with .env files, and testing with Behat.
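To make the Behat recipe concrete, here is a hypothetical sketch of the setup controller and its route; the article does not show its own version, so everything here beyond the SetupBehatController name is an assumption:

```php
<?php
// app/Http/Controllers/SetupBehatController.php (hypothetical sketch)
namespace App\Http\Controllers;

use App\User;

class SetupBehatController extends Controller
{
    // Reset the demo user to a known state so a remote Behat run starts clean.
    public function setup()
    {
        User::updateOrCreate(
            ['email' => 'demo@example.com'],          // assumed demo account
            ['name' => 'Demo', 'password' => bcrypt('secret')]
        );

        return response()->json(['status' => 'ok']);
    }
}
```

Wired up in app/Http/routes.php with something like Route::get('behat/setup', 'SetupBehatController@setup');, the Behat context can then hit that URL before the Scenario runs.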


Laracon US 2019 highlights: Laravel 6 release update, Laravel Vapor, and more

Bhagyashree R
26 Jul 2019
5 min read
Laracon US 2019, probably the biggest Laravel conference, wrapped up yesterday. Laravel's creator, Taylor Otwell, kick-started the event by talking about the next major release, Laravel 6. He also showcased a project he has been working on called Laravel Vapor, a full-featured serverless management and deployment dashboard for PHP/Laravel.

https://twitter.com/taylorotwell/status/1154168986180997125

This was a two-day event, from July 24-25, hosted at Times Square, NYC. The event brings together people passionate about creating applications with Laravel, the open-source PHP web framework. Many exciting talks were hosted at this game-themed event. Evan You, the creator of Vue, was there presenting what's coming in Vue.js 3.0. Caleb Porzio, a developer at Tighten Co., showcased a Laravel framework named Livewire that enables you to build dynamic user interfaces with vanilla PHP. Keith Damiani, a Principal Programmer at Tighten, talked about graph database use cases. You can watch this highlights video compiled by Romega Digital to get a quick overview of the event: https://www.youtube.com/watch?v=si8fHDPYFCo&feature=youtu.be

Laravel 6 coming next month

Since its birth, Laravel has followed a custom versioning system, and it has been on 5.x releases for the past four years. The team has now decided to switch to semantic versioning. The framework currently stands at version 5.8, and instead of calling the new release 5.9, the team has decided to go with Laravel 6, which is scheduled for next month. Otwell emphasized that they decided to make this change to bring more consistency to the ecosystem, as all the optional Laravel components, such as Cashier, Dusk, Valet, and Socialite, already use semantic versioning. This does not mean that there will be any "paradigm shift" in which developers have to rewrite their stuff. "This does mean that any breaking change would necessitate a new version, which means that the next version will be 7.0," he added.

With the new version comes new branding

Laravel gets a fresh look with every major release, ranging from updated logos to a redesigned website. Initially, this was a volunteer effort, with different people pitching in to help give Laravel a fresh look. Now that Laravel has some substantial backing, Otwell has worked with Focus Lab, one of the top digital design agencies in the US. Together, they have come up with a new logo and a brand-new website. The website looks easy to navigate and also provides improved documentation to give developers a good reading experience.

Source: Laravel

Laravel Vapor, a robust serverless deployment platform for Laravel

After giving a brief on version 6 and the updated branding, Otwell showcased his new project, Laravel Vapor. Currently, developers use Forge for provisioning and deploying their PHP applications on DigitalOcean, Linode, AWS, and more. It provides painless Virtual Private Server (VPS) management, is best suited for medium and small projects, and performs well with basic load balancing. However, it lacks a few features that would be helpful for building bigger projects, such as autoscaling, and developers still have to worry about updating their operating systems and PHP versions. To address these limitations, Otwell created this deployment platform. Here are some of the advantages Laravel Vapor comes with:

Better scalability: Otwell's demo showed that it can handle over half a million requests with an average response time of 12 ms.

Facilitates collaboration: Vapor is built around teams.
You can create as many teams as you require while paying for just one plan.

Fine-grained control: It gives you fine-grained control over what each team member can do across all the resources Vapor manages.

A "vanity URL" for different environments: Vapor gives you a staging domain, which you can access with what Otwell calls a "vanity URL." It enables you to immediately access your applications with "a nice domain that you can share with your coworkers until you are ready to assign a custom domain," says Otwell.

Environment metrics: Vapor provides many environment metrics that give you an overview of an application environment. These include how many HTTP requests the application has received in the last 24 hours, how many CLI invocations there have been, the average duration of those, how much they cost on Lambda, and more.

Logs: You can review and search your recent logs right from the Vapor UI, which also auto-updates when a new entry comes into the log.

Databases: With Vapor, you can provision two types of databases: fixed-size and serverless. With a fixed-size database, you pick its specifications, such as vCPU and RAM. With a serverless database, you do not select these specifications, and it automatically scales according to demand.

Caches: You can create Redis clusters right from the Vapor UI with as many nodes as you want. It supports the creation and management of elastic Redis cache clusters, which can be scaled without experiencing any downtime. You can attach them to any of the team's projects and use them with multiple projects at the same time.

To watch the entire demonstration by Otwell, check out this video: https://www.youtube.com/watch?v=XsPeWjKAUt0&feature=youtu.be


CRUD Applications using Laravel 4

Packt
19 Dec 2013
18 min read
(For more resources related to this topic, see here.)

Getting familiar with Laravel 4

Let's begin the journey and install Laravel 4. If everything is installed correctly, you will be greeted by a beautiful welcome screen when you hit your browser with http://localhost/laravel/public or http://localhost/<installeddirectory>/public.

Now that we have installed Laravel correctly, you may be wondering: how can I use Laravel? How do I create apps with Laravel? Or why and how is this screen shown to us? What's behind the scenes? How does Laravel 4 set this screen for us? So let's review that.

When you visit http://localhost/laravel/public, Laravel 4 detects that you are requesting the default route, which is "/". If you are not familiar with the MVC world, you may be wondering what a route is. Let me explain. In traditional web applications, we use a URL with a page name, for example:

http://www.shop.com/products.php

The preceding URL will be bound to the page products.php on the web server hosting shop.com. We can assume that it displays all the products from the database. Now say, for example, we want to display only the books category out of all the products. You will say, "Hey, it's easy!" Just add the category ID to the URL as follows:

http://www.shop.com/products.php?cat=1

Then put a filter in products.php that checks whether the category ID is passed. This sounds perfect, but what about pagination and other categories? Soon clients will ask you to change one of your category page layouts, and you will hack your code some more. And your application URLs will look like the following:

http://www.shop.com/products.php?cat=2
http://www.shop.com/products.php?cat=3&page=1&total=20
http://www.shop.com/products.php?cat=3&page=1&total=20&layout=1

If you look at your code after six months, you will be looking at one huge products.php page with all of your business and view code mixed in one large file. You won't remember those easy hacks you did in order to manage client requests. On top of that, a client or the client's SEO executive might ask you why all the URLs are so badly formatted, and why they are not human friendly. In a way, they are right. Your URLs are not as pretty as the following:

http://www.shop.com/products
http://www.shop.com/products/books
http://www.shop.com/products/cloths

The preceding URLs are human friendly. Users can easily change categories themselves. In addition, your client's SEO executives will love you for those URLs, just as a search engine likes them. You might be puzzled now: how do you do that? Here, my friend, MVC (Model View Controller) comes into the picture. MVC frameworks are meant specifically for doing this; it's one of the core goals of using an MVC framework in web development.

So let's go back to our topic, routing. Routing means decoupling your URL request and assigning it to some specific action via your controller/route. In the Laravel MVC world, you register all your routes in a route file and assign an action to them. All your routes are generally found at /app/routes.php. If you open your newly downloaded Laravel installation's routes.php file, you will notice the following code:

Route::get('/', function() {
    return View::make('hello');
});

The preceding code registers a route for "/", the default URL, with the view /app/views/hello.php. Here, the view is just an .html file. Generally, view files are used for managing your presentation logic.
So check /app/views/hello.php, or better, let's create an about page for our application ourselves. Let's register an about route by adding the following code to app/routes.php:

Route::get('about', function() {
    return View::make('about');
});

We will need to create a view at app/views/about.php. So create the file and insert the following code into it:

<!doctype html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>About my little app</title>
</head>
<body>
    <h1>Hello Laravel 4!</h1>
    <p>Welcome to the Awesomeness!</p>
</body>
</html>

Now head over to your browser and run http://localhost/laravel/public/about. You will be greeted with the following output:

Hello Laravel 4! Welcome to the Awesomeness!

Isn't it easy? You can define your route and a separate view for each type of request. Now you might be thinking: what about Controllers, since the C in MVC stands for Controller? And isn't it difficult to create routes and views for each action? What advantage do we gain from the preceding pattern compared to the traditional one-file-based method?

Well, first, you are organizing your code much better, as you will have actions responding to specific URLs mapped in the route file. Any developer can read the routes and see what's going on in your code; developers do not have to check many files to see which files are using which code. Your presentation logic is separated, so if a designer wants to change something, he will know he needs to look at the view folder of your application.

Now, about Controllers: they allow us to group related actions into a single class. So, in a typical MVC project, there will be one user Controller that is responsible for all user-related actions, such as registering, logging in, editing a profile, and changing the password. Generally, routes are used for small applications or for creating static pages quickly; Controllers provide more in-depth options to create a group of methods that belong to a specific class related to the application.

Here is how we can create Controllers in Laravel 4. Open your app/routes.php file and add the following code:

Route::get('contact', 'PagesController@contact');

The preceding code will register the http://yourapp.com/contact URL with the contact method of the Pages Controller. So let's write the Pages Controller. Create a file PagesController.php at /app/controllers/ in your Laravel 4 installation directory. The following are the contents of the PagesController.php file:

<?php

class PagesController extends BaseController {

    public function contact()
    {
        return View::make('hello');
    }

}

Here, BaseController is a class provided by Laravel so we can place our Controllers' shared logic in a common class. It extends the framework's Controller class and provides the Controller functionality. You can check BaseController.php in the controllers directory to add shared logic.

Controllers versus routes

So you are wondering now: what's the difference between Controllers and routes? Which one should you use? Here are the differences between Controllers and routes:

- A disadvantage of routes is that you can't share code between routes, as routes work via Closure functions, and the scope of a function is bound within the function.
- Controllers give a structure to your code. You can define your system in well-grouped classes, which are divided in such a way that it makes sense, for example, users, dashboard, products, and so on.
- Compared to routes, Controllers have only one disadvantage: you have to create a file for each Controller. However, if you think in terms of organizing the code in a large application, it makes more sense to use Controllers.

Creating a simple CRUD application with Laravel 4

Now that we have a basic understanding of how we can create pages, let's create a simple CRUD application with Laravel 4. The application we want to create will manage the users of our application. We will create the following list of features:

- List users (read users from the database)
- Create new users
- Edit user information
- Delete user information
- Add pagination to the list of users

To start off, we need to set up a database. If you have phpMyAdmin installed with your local web server setup, head over to http://localhost/phpmyadmin; if you don't have phpMyAdmin installed, use the MySQL admin tool Workbench to connect to your database and create a new database.

Now we need to configure Laravel 4 to connect to our database. Head over to your Laravel 4 application folder, open /app/config/database.php, change the MySQL array, and match your current database settings. Here is the MySQL database array from the database.php file:

'mysql' => array(
    'driver'    => 'mysql',
    'host'      => 'localhost',
    'database'  => '<yourdbname>',
    'username'  => 'root',
    'password'  => '<yourmysqlpassword>',
    'charset'   => 'utf8',
    'collation' => 'utf8_unicode_ci',
    'prefix'    => '',
),

Now we are ready to work with the database in our application. Let's first create the users database table via the following SQL query, from phpMyAdmin or any MySQL database admin tool:

CREATE TABLE IF NOT EXISTS `users` (
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `username` varchar(255) COLLATE utf8_unicode_ci NOT NULL,
  `password` varchar(255) COLLATE utf8_unicode_ci NOT NULL,
  `email` varchar(255) COLLATE utf8_unicode_ci NOT NULL,
  `phone` varchar(255) COLLATE utf8_unicode_ci NOT NULL,
  `name` varchar(255) COLLATE utf8_unicode_ci NOT NULL,
  `created_at` timestamp NOT NULL DEFAULT '0000-00-00 00:00:00',
  `updated_at` timestamp NOT NULL DEFAULT '0000-00-00 00:00:00',
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci AUTO_INCREMENT=3;

Now let's seed some data into the users table so that when we fetch the users, we won't get empty results. Run the following query in your database admin tool:

INSERT INTO `users` (`id`, `username`, `password`, `email`, `phone`, `name`, `created_at`, `updated_at`) VALUES
(1, 'john', 'johndoe', 'johndoe@gmail.com', '123456', 'John', '2013-06-07 08:13:28', '2013-06-07 08:13:28'),
(2, 'amy', 'amy.deg', 'amy@outlook.com', '1234567', 'amy', '2013-06-07 08:14:49', '2013-06-07 08:14:49');

Listing the users: reading users from the database

To read users from the database, we need the following:

- A route that will lead to our page
- A Controller that will handle our method
- An Eloquent Model that will connect to the database
- A view that will display our records in the template

So let's create our route at /app/routes.php. Add the following line to the routes.php file:

Route::resource('users', 'UserController');

As you noticed previously, we had Route::get for displaying our page Controller, but now we are using resource. So what's the difference? In general, we face two types of requests during web projects: GET and POST.
We generally use these HTTP request types to manipulate our pages; that is, you check whether the page has any POST variables set, and if not, you display the user form to enter data. As the user submits the form, it sends a POST request, since we generally define the <form method="post"> tag in our pages. Based on the page's request type, we then set the code to perform actions such as inserting user data into our database or filtering records. What Laravel provides is that we can simply tap into either a GET or POST request via routes and send it to the appropriate method. Here is an example of that:

Route::get('/register', 'UserController@showUserRegistration');
Route::post('/register', 'UserController@saveUser');

See, the difference here is that we are registering the same URL, /register, but we are defining its GET method so Laravel can call the UserController class's showUserRegistration method. If it's the POST method, Laravel will call the saveUser method of the UserController class.

You might be wondering what the benefit of this is. Well, six months later, if you want to know how something happens in your app, you can just check the routes.php file and tell which Controller, and which method of that Controller, handles the part you are interested in, whether you are developing it further or fixing a bug. Even another developer who is new to your project will be able to understand how things work and can easily help move your project along, because he can understand the structure of your application by checking routes.php.

Now imagine the routes you would need for editing, deleting, or displaying a user. A resource Controller saves you from this trouble: a single line of route will map multiple RESTful actions to our resource Controller. It automatically maps the following actions to HTTP verbs:

| HTTP verb | Action |
|-----------|--------|
| GET       | READ   |
| POST      | CREATE |
| PUT       | UPDATE |
| DELETE    | DELETE |

On top of that, you can actually generate your Controller via the artisan command line, using the following command:

$ php artisan controller:make UserController

This will generate UserController.php with all the RESTful empty methods, so you will have an empty structure to play with. Here is what we will have after the preceding command:

class UserController extends BaseController {

    /**
     * Display a listing of the resource.
     *
     * @return Response
     */
    public function index()
    {
        //
    }

    /**
     * Show the form for creating a new resource.
     *
     * @return Response
     */
    public function create()
    {
        //
    }

    /**
     * Store a newly created resource in storage.
     *
     * @return Response
     */
    public function store()
    {
        //
    }

    /**
     * Display the specified resource.
     *
     * @param  int  $id
     * @return Response
     */
    public function show($id)
    {
        //
    }

    /**
     * Show the form for editing the specified resource.
     *
     * @param  int  $id
     * @return Response
     */
    public function edit($id)
    {
        //
    }

    /**
     * Update the specified resource in storage.
     *
     * @param  int  $id
     * @return Response
     */
    public function update($id)
    {
        //
    }

    /**
     * Remove the specified resource from storage.
     *
     * @param  int  $id
     * @return Response
     */
    public function destroy($id)
    {
        //
    }

}

Now let's try to understand the relationship our single line of route declaration created with our generated Controller:

| HTTP verb | Path             | Controller action/method |
|-----------|------------------|--------------------------|
| GET       | /users           | index                    |
| GET       | /users/create    | create                   |
| POST      | /users           | store                    |
| GET       | /users/{id}      | show (individual record) |
| GET       | /users/{id}/edit | edit                     |
| PUT       | /users/{id}      | update                   |
| DELETE    | /users/{id}      | destroy                  |

As you can see, a resource Controller really makes your work easy. You don't have to create lots of routes.
Also, Laravel 4's artisan command-line generator can generate resourceful Controllers, so you will write very little boilerplate code. You can also use the following command, launched from the root of your project, to view the list of all the routes in your project:

$ php artisan routes

Now let's get back to our basic task: reading users. We now know that we have UserController.php at /app/controllers with the index method, which will be executed when somebody hits http://localhost/laravel/public/users. So let's edit the Controller file to fetch data from the database. As you might remember, we will need a Model to do that. But how do we define one, and what is the use of Models? You might be wondering, can't we just run queries? Well, Laravel does support queries through the DB class, but Laravel also has Eloquent, which gives us our table as a database object; and what's great about objects is that we can play around with their methods.

So let's create a Model. If you check the path /app/models/User.php, you will find a User Model already defined; it's there because Laravel provides some basic user authentication out of the box. Generally, you can create your Model using the following code:

class User extends Eloquent {}

Now, in your Controller, you can fetch the user objects using the following code:

$users = User::all();
$users->toArray();

Yes! It's that simple. No database connection! No queries! Isn't it magic? It's the simplicity of Eloquent objects that many people like in Laravel. But you have the following questions, right?

- How does the Model know which table to fetch from?
- How does the Controller know what a user is?
- How does fetching user records work? We don't have all the methods in the User class, so how did it work?

Well, Models in Laravel use the lowercase, plural name of the class as the table name unless another name is explicitly specified. So, in our case, User was converted to the lowercase plural users and used as the table bound to the User class. Models are loaded automatically by Laravel, so you don't have to include a reference to the Model file. Each Model inherits from an Eloquent instance that resolves the methods defined in the Model.php file at vendor/laravel/framework/src/Illuminate/Database/Eloquent/, such as all, insert, update, and delete. Our User class inherits those methods, and as a result we can fetch records via User::all().

So now let's fetch users from our database via the Eloquent object. I am updating the index method in our app/controllers/UserController.php, as per the REST convention we are using via the resource Controller, it is the method responsible for the listing:

public function index()
{
    $users = User::all();
    return View::make('users.index', compact('users'));
}

Now let's look at the View part. Before that, we need to know about Blade. Blade is a templating engine provided by Laravel. Blade has a very simple syntax, and you can identify most Blade expressions within your view files as they begin with @. To print anything with Blade, you can use the {{ $var }} syntax. Its PHP-equivalent syntax would be:

<?php echo $var; ?>

Now, back to our view. First of all, we need to create a view file at /app/views/users/index.blade.php, as our statement returns the view from users.index. We are passing a compact users array to this view.
So here is our index.blade.php file:

@extends('layouts.user')

@section('main')
<h1>All Users</h1>
<p>{{ link_to_route('users.create', 'Add new user') }}</p>
@if ($users->count())
<table class="table table-striped table-bordered">
    <thead>
        <tr>
            <th>Username</th>
            <th>Password</th>
            <th>Email</th>
            <th>Phone</th>
            <th>Name</th>
        </tr>
    </thead>
    <tbody>
        @foreach ($users as $user)
        <tr>
            <td>{{ $user->username }}</td>
            <td>{{ $user->password }}</td>
            <td>{{ $user->email }}</td>
            <td>{{ $user->phone }}</td>
            <td>{{ $user->name }}</td>
            <td>{{ link_to_route('users.edit', 'Edit', array($user->id), array('class' => 'btn btn-info')) }}</td>
            <td>
                {{ Form::open(array('method' => 'DELETE', 'route' => array('users.destroy', $user->id))) }}
                {{ Form::submit('Delete', array('class' => 'btn btn-danger')) }}
                {{ Form::close() }}
            </td>
        </tr>
        @endforeach
    </tbody>
</table>
@else
There are no users
@endif
@stop

Let's go through the code line by line. In the first line, we are extending the user layout via the Blade template syntax @extends. What actually happens here is that Laravel will first load the layout file at /app/views/layouts/user.blade.php. Here is our user.blade.php file's code:

<!doctype html>
<html>
<head>
    <meta charset="utf-8">
    <link href="//netdna.bootstrapcdn.com/twitter-bootstrap/2.3.1/css/bootstrap-combined.min.css" rel="stylesheet">
    <style>
        table form { margin-bottom: 0; }
        form ul { margin-left: 0; list-style: none; }
        .error { color: red; font-style: italic; }
        body { padding-top: 20px; }
    </style>
</head>
<body>
    <div class="container">
        @if (Session::has('message'))
        <div class="flash alert">
            <p>{{ Session::get('message') }}</p>
        </div>
        @endif
        @yield('main')
    </div>
</body>
</html>

In this file, we load the Twitter Bootstrap framework for styling our page, and via @yield('main') we can load the main section from the view that is loaded. So here, when we load http://localhost/laravel/public/users, Laravel will first load the user.blade.php layout view, and then the main section will be loaded from index.blade.php.

Back in our index.blade.php, we have the main section defined with @section('main'), which Laravel merges into the layout file where we put the @yield('main') directive. We use Laravel's link_to_route method to link to our route, that is, /users/create; this helper generates an HTML link with the correct URL. In the next step, we loop through all the user records and display them in a simple tabular format. Now, if you have followed everything, you will be greeted by a page listing the seeded users.


Tools in TypeScript

Packt
02 Jan 2017
14 min read
In this article by Nathan Rozentals, author of the book Mastering TypeScript, Second Edition, you will learn how to build enterprise-ready, industrial-strength web applications using TypeScript and leading JavaScript frameworks. In this article, we will cover the following topics:

- What is TypeScript?
- The benefits of TypeScript

(For more resources related to this topic, see here.)

What is TypeScript?

TypeScript is both a language and a set of tools to generate JavaScript. It was designed by Anders Hejlsberg at Microsoft (the designer of C#) as an open source project to help developers write enterprise-scale JavaScript. TypeScript generates JavaScript; it's as simple as that. Instead of requiring a completely new runtime environment, TypeScript-generated JavaScript can reuse all of the existing JavaScript tools, frameworks, and wealth of libraries that are available for JavaScript. The TypeScript language and compiler, however, bring the development of JavaScript closer to a more traditional object-oriented experience.

ECMAScript

JavaScript as a language has been around for a long time, and is governed by a language feature standard. The language defined in this standard is called ECMAScript, and each JavaScript interpreter must deliver functions and features that conform to this standard. The definition of this standard helped the growth of JavaScript and the web in general, and allowed websites to render correctly on many different browsers on many different operating systems. The ECMAScript standard was published in 1999 and is known as ECMA-262, third edition.

With the popularity of the language, and the explosive growth of internet applications, the ECMAScript standard needed to be revised and updated. This process resulted in a draft specification for ECMAScript, called the fourth edition. Unfortunately, this draft suggested a complete overhaul of the language, and was not well received. Eventually, leaders from Yahoo, Google, and Microsoft tabled an alternate proposal which they called ECMAScript 3.1. This proposal was numbered 3.1 as it was a smaller feature set of the third edition, and sat between editions three and four of the standard. This proposal was eventually adopted as the fifth edition of the standard, and was called ECMAScript 5. The ECMAScript fourth edition was never published, but it was decided to merge the best features of both the fourth edition and the 3.1 feature set into a sixth edition named ECMAScript Harmony.

The TypeScript compiler has a parameter that can switch between different versions of the ECMAScript standard. TypeScript currently supports ECMAScript 3, ECMAScript 5, and ECMAScript 6. When the compiler runs over your TypeScript, it will generate compile errors if the code you are attempting to compile is not valid for that standard. The team at Microsoft has committed to following the ECMAScript standards in any new versions of the TypeScript compiler, so as new editions are adopted, the TypeScript language and compiler will follow suit.

The benefits of TypeScript

To give you a flavor of the benefits of TypeScript (and this is by no means the full list), let's have a very quick look at some of the things that TypeScript brings to the table:

- A compilation step
- Strong or static typing
- Type definitions for popular JavaScript libraries
- Encapsulation
- Private and public member variables, and decorators

Compiling

One of the most frustrating things about JavaScript development is the lack of a compilation step.
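With TypeScript, by contrast, compilation is a single command, and the --target flag selects the ECMAScript edition mentioned above (a minimal sketch; hello.ts is a placeholder file name):

```
tsc --target ES5 hello.ts
```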
JavaScript is an interpreted language, and therefore needs to be run in order to test that it is valid. Every JavaScript developer will tell horror stories of hours spent trying to find bugs in their code, only to find that they had missed a closing brace }, a simple comma ,, or even a double quote " where there should have been a single quote '. Even worse, the real headaches arrive when you misspell a property name or unwittingly reassign a global variable.

TypeScript will compile your code and generate compilation errors where it finds these sorts of syntax errors. This is obviously very useful, and can help to highlight errors before the JavaScript is run. In large projects, programmers often need to do large code merges, and with today's tools doing automatic merges, it is surprising how often the compiler will pick up these types of errors. While tools that do this sort of syntax checking, such as JSLint, have been around for years, it is obviously beneficial to have these tools integrated into your IDE. Using TypeScript in a continuous integration environment will also fail a build completely when compilation errors are found, further protecting your programmers against these types of bugs.

Strong typing

JavaScript is not strongly typed. It is a very dynamic language, as it allows objects to change their properties and behavior on the fly. As an example of this, consider the following code:

var test = "this is a string";
test = 1;
test = function(a, b) { return a + b; }

On the first line of the preceding code, the variable test is bound to a string. It is then assigned a number, and finally is redefined to be a function that expects two parameters. Traditional object-oriented languages, however, will not allow the type of a variable to change; hence, they are called strongly typed languages. While all of the preceding code is valid JavaScript and could be justified, it is quite easy to see how this could cause runtime errors during execution. Imagine that you were responsible for writing a library function to add two numbers, and then another developer inadvertently reassigned your function to subtract these numbers instead. These sorts of errors may be easy to spot in a few lines of code, but they become increasingly difficult to find and fix as your code base and your development team grow.

Another feature of strong typing is that the IDE you are working in understands what type of variable you are working with, and can bring better autocomplete or Intellisense options to the fore.

TypeScript's syntactic sugar

TypeScript introduces a very simple syntax to check the type of an object at compile time. This syntax has been referred to as syntactic sugar, or more formally, type annotations. Consider the following TypeScript code:

var test: string = "this is a string";
test = 1;
test = function(a, b) { return a + b; }

Note that on the first line of this code snippet, we have introduced a colon : and a string keyword between our variable and its assignment. This type annotation syntax means that we are setting the type of our variable to be string, and that any code that does not adhere to this rule will generate a compile error. Running the preceding code through the TypeScript compiler will generate two errors:

hello.ts(3,1): error TS2322: Type 'number' is not assignable to type 'string'.
hello.ts(4,1): error TS2322: Type '(a: any, b: any) => any' is not assignable to type 'string'.

The first error is fairly obvious.
We have specified that the variable test is a string, and therefore attempting to assign a number to it will generate a compile error. The second error is similar to the first, and is, in essence, saying that we cannot assign a function to a string. In this way, the TypeScript compiler introduces strong, or static, typing to your JavaScript code, giving you all of the benefits of a strongly typed language. TypeScript is therefore described as a superset of JavaScript.

Type definitions for popular JavaScript libraries

As we have seen, TypeScript has the ability to annotate JavaScript and bring strong typing to the JavaScript development experience. But how do we strongly type existing JavaScript libraries? The answer is surprisingly simple: by creating a definition file. TypeScript uses files with a .d.ts extension as a sort of header file, similar to languages such as C++, to superimpose strong typing on existing JavaScript libraries. These definition files hold information that describes each available function and variable, along with their associated type annotations.

Let's have a quick look at what a definition would look like. As an example, I have lifted a function from the popular Jasmine unit testing framework, called describe:

var describe = function(description, specDefinitions) {
    return jasmine.getEnv().describe(description, specDefinitions);
};

Note that this function has two parameters: description and specDefinitions. But JavaScript does not tell us what sort of variables these are. We would need to have a look at the Jasmine documentation to figure out how to call this function. If we head over to http://jasmine.github.io/2.0/introduction.html, we will see an example of how to use this function:

describe("A suite", function () {
    it("contains spec with an expectation", function () {
        expect(true).toBe(true);
    });
});

From the documentation, then, we can easily see that the first parameter is a string and the second parameter is a function. But there is nothing in JavaScript that forces us to conform to this API. As mentioned before, we could easily call this function with two numbers, or inadvertently switch the parameters around, sending a function first and a string second. We will obviously start getting runtime errors if we do this, but TypeScript, using a definition file, can generate compile-time errors before we even attempt to run this code. Let's have a look at a piece of the jasmine.d.ts definition file:

declare function describe(
    description: string,
    specDefinitions: () => void
): void;

This is the TypeScript definition for the describe function. Firstly, declare function describe tells us that we can use a function called describe, but that the implementation of this function will be provided at runtime. Clearly, the description parameter is strongly typed to a string, and the specDefinitions parameter is strongly typed to be a function that returns void. TypeScript uses the parenthesis syntax () to declare functions, and the arrow syntax to show the return type of the function. So () => void is a function that does not return anything. Finally, the describe function itself returns void.
If our code were to try and pass in a function as the first parameter, and a string as the second parameter (clearly breaking the definition of this function), as shown in the following example:

describe(() => { /* function body */ }, "description");

TypeScript will generate the following error:

hello.ts(11,11): error TS2345: Argument of type '() => void' is not assignable to parameter of type 'string'.

This error is telling us that we are attempting to call the describe function with invalid parameters, and it clearly shows that TypeScript will generate errors if we attempt to use external JavaScript libraries incorrectly.

DefinitelyTyped

Soon after TypeScript was released, Boris Yankov started a GitHub repository to house definition files, called DefinitelyTyped (http://definitelytyped.org). This repository has now become the first port of call for integrating external libraries into TypeScript, and it currently holds definitions for over 1,600 JavaScript libraries.

Encapsulation

One of the fundamental principles of object-oriented programming is encapsulation: the ability to define data, as well as a set of functions that can operate on that data, in a single component. Most programming languages have the concept of a class for this purpose, providing a way to define a template for data and related functions. Let's first take a look at a simple TypeScript class definition:

class MyClass {
    add(x, y) {
        return x + y;
    }
}

var classInstance = new MyClass();
var result = classInstance.add(1, 2);
console.log(`add(1,2) returns ${result}`);

This code is pretty simple to read and understand. We have created a class named MyClass, with a simple add function. To use this class, we simply create an instance of it and call the add function with two arguments.

JavaScript, unfortunately, does not have a class statement, but instead uses functions to reproduce the functionality of classes. Encapsulation through classes is accomplished by either using the prototype pattern or by using the closure pattern. Understanding prototypes and the closure pattern, and using them correctly, is considered a fundamental skill when writing enterprise-scale JavaScript.

A closure is essentially a function that refers to independent variables. This means that variables defined within a closure function remember the environment in which they were created. This provides JavaScript with a way to define local variables and provide encapsulation. Writing the MyClass definition in the preceding code, using a closure in JavaScript, would look something like the following:

var MyClass = (function () {
    // the self-invoking function is the environment
    // that will be remembered by the closure
    function MyClass() {
        // MyClass is the inner function, the closure
    }
    MyClass.prototype.add = function (x, y) {
        return x + y;
    };
    return MyClass;
}());

var classInstance = new MyClass();
var result = classInstance.add(1, 2);
console.log("add(1,2) returns " + result);

We start with a variable called MyClass, and assign it to a function that is executed immediately (note the }()); syntax near the bottom of the closure definition). This syntax is a common way to write JavaScript in order to avoid leaking variables into the global namespace. We then define a new function named MyClass, and return this new function to the outer calling function. We then use the prototype keyword to inject a new function into the MyClass definition. This function is named add and takes two parameters, returning their sum.
The last few lines of the code show how to use this closure in JavaScript: create an instance of the closure type, and then execute the add function. Running this code will log add(1,2) returns 3 to the console, as expected. Comparing the JavaScript code with the TypeScript code, we can easily see how simple the TypeScript looks next to the equivalent JavaScript. Remember how we mentioned that JavaScript programmers can easily misplace a brace { or a bracket (? Have a look at the last line in the closure definition: }());. Getting one of these brackets or braces wrong can take hours of debugging to find.

Public and private accessors

A further object-oriented principle used in encapsulation is the concept of data hiding, that is, the ability to have public and private variables. Private variables are meant to be hidden from the user of a particular class, as these variables should only be used by the class itself. Inadvertently exposing these variables can easily cause runtime errors.

Unfortunately, JavaScript does not have a native way of declaring variables private. While this functionality can be emulated using closures, a lot of JavaScript programmers simply use the underscore character _ to denote a private variable. At runtime, though, if you know the name of a private variable, you can easily assign a value to it. Consider the following JavaScript code:

var MyClass = (function () {
    function MyClass() {
        this._count = 0;
    }
    MyClass.prototype.countUp = function () {
        this._count++;
    };
    MyClass.prototype.getCountUp = function () {
        return this._count;
    };
    return MyClass;
}());

var test = new MyClass();
test._count = 17;
console.log("countUp : " + test.getCountUp());

The MyClass variable is actually a closure, with a constructor function, a countUp function, and a getCountUp function. The variable _count is supposed to be a private member variable that is used only within the scope of the closure. Using the underscore naming convention gives the user of this class some indication that the variable is private, but JavaScript will still allow you to manipulate the variable _count. Take a look at the second-to-last line of the code snippet: we are explicitly setting the value of _count to 17, which is allowed by JavaScript but not desired by the original creator of the class. The output of this code would be countUp : 17.

TypeScript, however, introduces the public and private keywords that can be used on class member variables. Trying to access a class member variable that has been marked as private will generate a compile-time error. As an example of this, the preceding JavaScript code can be written in TypeScript as follows:

class CountClass {
    private _count: number;
    constructor() {
        this._count = 0;
    }
    countUp() {
        this._count++;
    }
    getCount() {
        return this._count;
    }
}

var countInstance = new CountClass();
countInstance._count = 17;

On the second line of our code snippet, we have declared a private member variable named _count. Again, we have a constructor, a countUp, and a getCount function. If we compile this file, the compiler will generate an error:

hello.ts(39,15): error TS2341: Property '_count' is private and only accessible within class 'CountClass'.

This error is generated because we are trying to access the private variable _count in the last line of the code. The TypeScript compiler, therefore, is helping us to adhere to public and private accessors by generating a compile error when we inadvertently break this rule.
Summary

In this article, we took a quick look at what TypeScript is and what benefits it can bring to the JavaScript development experience.

Resources for Article:

Further resources on this subject:

Introducing Object Oriented Programming with TypeScript [article]
Understanding Patterns and Architectures in TypeScript [article]
Writing SOLID JavaScript code with TypeScript [article]

How to publish Microservice as a service onto a Docker

Pravin Dhandre
12 Apr 2018
6 min read
In today's tutorial, we will show a step-by-step approach to publishing a prebuilt microservice onto a Docker swarm so you can scale and control it further. Once the swarm is ready, we can use it to create services that we can use for scaling, but first, we will create a shared registry in our swarm. Then, we will build a microservice and publish it into a Docker.

Creating a registry

When we create a swarm's service, we specify an image that will be used, but when we ask Docker to create the instances of the service, it will use the swarm master node to do it. If we have built our Docker images on our machine, they are not available on the master node of the swarm, so we will create a registry service that we can use to publish our images and reference them when we create our own services.

First, let's create the registry service:

docker service create --name registry --publish 5000:5000 registry

Now, if we list our services, we should see our registry created:

docker service ls
ID            NAME      MODE        REPLICAS  IMAGE            PORTS
os5j0iw1p4q1  registry  replicated  1/1       registry:latest  *:5000->5000/tcp

And if we visit http://localhost:5000/v2/_catalog, we should just get:

{"repositories":[]}

This Docker registry is ephemeral, meaning that if we stop the service or start it again, our images will disappear. For that reason, we recommend using it only for development. For a real registry, you may want to use an external service. We discussed some of them in Chapter 7, Creating Dockers.

Creating a microservice

In order to create a microservice, we will use Spring Initializr, as we have been doing in previous chapters. We can start by visiting the URL https://start.spring.io/:

Creating a project in Spring Initializr

We have chosen to create a Maven Project using Kotlin and Spring Boot 2.0.0 M7. We chose the Group to be com.Microservices and the Artifact to be chapter08. For Dependencies, we have set Web. Now, we can click Generate Project to download it as a zip file. After we unzip it, we can open it with IntelliJ IDEA to start working on our project. After a few minutes, our project will be ready and we can open the Maven window to see the different lifecycle phases, the Maven plugins, and their goals.

We covered how to use Spring Initializr, Maven, and IntelliJ IDEA in Chapter 2, Getting Started with Spring Boot 2.0. You can check out this chapter to learn about topics not covered in this section.

Now, we will add a RestController to our project. In IntelliJ IDEA, in the Project window, right-click our com.Microservices.chapter08 package and then choose New | Kotlin File/Class. In the pop-up window, we will set its name as HelloController and in the Kind drop-down, choose Class. Let's modify our newly created controller:

package com.microservices.chapter08

import org.springframework.web.bind.annotation.GetMapping
import org.springframework.web.bind.annotation.RestController
import java.util.*
import java.util.concurrent.atomic.AtomicInteger

@RestController
class HelloController {
    private val id: String = UUID.randomUUID().toString()

    companion object {
        val total: AtomicInteger = AtomicInteger()
    }

    @GetMapping("/hello")
    fun hello() = "Hello I'm $id and I have been called ${total.incrementAndGet()} time(s)"
}

This small controller will display a message with a GET request to the /hello URL. The message will display a unique id that we establish when the object is created and it will show how many times it has been called. We have used AtomicInteger to guarantee that concurrent requests increment our counter safely.
Finally, we will rename application.properties in the resources folder, using Shift + F6, to application.yml, and edit it:

logging.level.org.springframework.web: DEBUG

This sets the log level to DEBUG for the Spring Web framework, so that we can view the request information we need in our logs. Now, we can run our microservice, either using the IntelliJ IDEA Maven Project window with the spring-boot:run goal, or by executing it on the command line:

mvnw spring-boot:run

Either way, when we visit the http://localhost:8080/hello URL, we should see something like:

Hello I'm a0193477-b9dd-4a50-85cc-9d9405e02299 and I have been called 1 time(s)

Doing consecutive calls will increase the number, and at the end of our log we can see the entries for the several requests that we have made. Now, we will stop the microservice to create our Docker.

Creating our Docker

Now that our microservice is running, we need to create a Docker. To simplify the process, we will just use a Dockerfile. To create this file, at the top of the Project window right-click chapter08 and then select New | File and type Dockerfile in the drop-down menu. In the next window, click OK and the file will be created. Now, we can add this to our Dockerfile:

FROM openjdk:8-jdk-alpine
ADD target/*.jar app.jar
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-jar","app.jar"]

In this case, we tell the JVM to use /dev/urandom instead of /dev/random to generate the random numbers required by our ID generation, and this will make the operation much faster. You may think that urandom is less secure than random, but read this article for more information: https://www.2uo.de/myths-about-urandom/.

From our microservice directory, we will use the command line to create our package:

mvnw package

Then, we will create a Docker for it:

docker build . -t hello

Now, we need to tag our image in our shared registry service, using the following command:

docker tag hello localhost:5000/hello

Then, we will push our image to the shared registry service:

docker push localhost:5000/hello

Creating the service

Finally, we will add the microservice as a service to our swarm, exposing the 8080 port:

docker service create --name hello-service --publish 8080:8080 localhost:5000/hello

Now, we can navigate to http://localhost:8080/hello to get the same results as before, but now we run our microservice in a Docker swarm.

If you found this tutorial useful, do check out the book Hands-On Microservices with Kotlin to work with Kotlin, Spring and Spring Boot for building reactive and cloud-native microservices. Also check out other posts:

How to publish Docker and integrate with Maven
Understanding Microservices
How to build Microservices using REST framework
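One last practical note on the service we just published: since the stated goal was to scale and control it, here is a brief, hedged sketch using the standard Docker swarm commands (the replica count of three is an arbitrary example):

# Scale the service out to three replicas
docker service scale hello-service=3

# Verify that the replicas have been scheduled
docker service ps hello-service

# Repeated calls may now be answered by different instances
curl http://localhost:8080/hello

Because each controller instance generates its own UUID at creation time, the changing id in the responses is a simple way to observe the swarm's load balancing at work.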

How to build a cross-platform desktop application with Node.js and Electron

Mika Turunen
14 Oct 2015
9 min read
Do you want to make a desktop application, but you have only mastered web development so far? Or maybe you feel overwhelmed by all of the different APIs that different desktop platforms have to offer? Or maybe you want to write a beautiful application in HTML5 and JavaScript and have it working on the desktop? Maybe you want to port an existing web application to the desktop? Well, luckily for us, there are a number of alternatives, and we are going to look into Node.js and Electron to help us get our HTML5 and JavaScript running on the desktop side with no hiccups.

What are the different parts in an Electron application

Commonly, all of the different components in Electron are running in either the main process (backend) or the rendering process (frontend). The main process can communicate with different parts of the operating system if there's a need for that, and the rendering process mainly just focuses on showing the content, pretty much like in any HTML5 application you find on the Internet. The processes communicate with each other through IPC (inter-process communication), which in Node.js terms is just a super simple event emitter and nothing else. You can send events and listen for events. You can get the complete source code from here for this post.

Let's start working on it

You need to have Node.js installed, and you can install it from https://nodejs.org/. Now that you have Node.js installed you can start focusing on creating the application. First of all, create an empty directory where you will be placing your code.

# Open up your favourite terminal, command-line tool or any other alternative as we'll be running quite a bit of commands
# Create the directory
mkdir /some/location/that/works/in/your/system
# Go into the directory
cd /some/location/that/works/in/your/system
# Now we need to initialize it for our Electron and Node work
npm init

npm will start asking you questions about the application we are about to make. You can just hit Enter and not answer any of them if you feel like it. We can fill them in manually once we know a bit more about our application. Now we should have a directory structure with the following files in it:

package.json

And that's it, nothing else. We'll start by creating two new files in your favorite text editor or IDE. The files are (leave the files empty):

main.js
index.html

Drop all of the files into the same directory as package.json for easier handling of everything for now. main.js will be our main process file, which is the connecting layer to the underlying desktop operating system for our Electron application. At this point we need to install Electron as a dependency for our application, which is really easy. Just write:

npm install --save electron-prebuilt

Alternatively, if you cloned/downloaded the associated GitHub repository, you can just go into the directory and write:

npm install

This will install all dependencies from package.json, including electron-prebuilt. Now we have Electron's prebuilt binaries installed as a direct dependency for our application and we can run our application on our platform. It's wise to manually update the package.json file that the npm init command generated for us. Open up the package.json file and modify the scripts block to look like this (or if it's missing, create it):

"main": "main.js",
"scripts": {
    "start": "electron ."
},
The whole package.json file should be roughly something like this (taken from the tutorial repo I linked earlier):

{
  "name": "",
  "version": "1.0.0",
  "description": "",
  "main": "main.js",
  "scripts": {
    "start": "electron ."
  },
  "repository": {
  },
  "keywords": [
  ],
  "author": "",
  "license": "MIT",
  "bugs": {
  },
  "homepage": "",
  "dependencies": {
    "electron-prebuilt": "^0.25.3"
  }
}

The main property in the file points to main.js, and the scripts section's start property tells it to run the command "electron .", which essentially tells Electron to digest the current directory as an application. Electron hardwires the property main as the main process for the application. This means that main.js is now our main process, just like we wanted.

Main and rendering process

We need to write the main process JavaScript and the rendering process HTML to get our application to start. Let's start with the main process, main.js. You can also find all of the code below in the tutorial repository linked earlier. The code has been peppered with a good amount of comments to give a deeper understanding of what is going on in the code and what different parts do in the context of Electron.

// Loads Electron specific app that is not commonly available for node or io.js
var app = require("app");
// Inter process communication -- Used to communicate from Main process (this)
// to the actual rendering process (index.html) -- not really used in this example
var ipc = require("ipc");
// Loads the Electron specific module for browser handling
var BrowserWindow = require("browser-window");
// Report crashes to our server.
var crashReporter = require("crash-reporter");

// Keep a global reference of the window object; if you don't, the window will
// be closed automatically when the JavaScript object is garbage collected
var mainWindow = null;

// Quit when all windows are closed.
app.on("window-all-closed", function() {
    // OS X specific check
    if (process.platform != "darwin") {
        app.quit();
    }
});

// This event will be called when Electron has finished initialization and is
// ready for creating browser windows.
app.on("ready", function() {
    crashReporter.start();

    // Create the browser window (where the application's visual parts will be)
    mainWindow = new BrowserWindow({
        width: 800,
        height: 600
    });

    // Building the file path to the index.html
    mainWindow.loadUrl("file://" + __dirname + "/index.html");

    // Emitted when the window is closed.
    // The function just dereferences the mainWindow so garbage collection can
    // pick it up
    mainWindow.on("closed", function() {
        mainWindow = null;
    });
});

You can now start the application, but it'll just show an empty window since we have nothing to render in the rendering process. Let's fix that by populating our index.html with some content:

<!DOCTYPE html>
<html>
<head>
    <title>Hello Tutorial!</title>
</head>
<body>
    <h2>Tutorial</h2>
    We are using node.js <script>document.write(process.version)</script>
    and Electron <script>document.write(process.versions["electron"])</script>.
</body>
</html>

Because this is an Electron application, the rendering process has access to the Node.js/io.js runtime: the line document.write(process.version) is actually a call to the Node.js process object. This is one of the great things about Electron: we are essentially bridging the gap between desktop applications and HTML5 applications.
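Before running it, it's worth sketching the IPC mechanism mentioned earlier, since our main.js already loads the ipc module without using it. The following is a minimal, hedged sketch in the style of the 0.25.x-era API used in this post; the channel names ping and pong are our own illustrative choices:

// In main.js (main process): listen for a message from the renderer
// and answer back on another channel
ipc.on("ping", function (event) {
    event.sender.send("pong");
});

// In index.html (rendering process), inside a <script> tag:
var ipc = require("ipc");
ipc.on("pong", function () {
    console.log("pong received from the main process");
});
ipc.send("ping");

Note that in later Electron versions this module was split into ipcMain and ipcRenderer, so treat the above as a sketch for the version pinned in our package.json. Now to run the application: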
npm start

There is a huge list of different desktop environment integration possibilities you can explore with Electron, and you can read more about them in the Electron documentation at http://electron.atom.io/. Obviously this is still far from a complete application, but this should give you an understanding of how to work with Electron, how it behaves, and what you can do with it. You can start using your favorite JavaScript/CSS frontend framework in index.html to build a great looking GUI for your new desktop application, and you can also use all Node.js-specific npm modules in the backend along with the desktop environment integration. Maybe we'll look into writing a great looking GUI for our application with some additional desktop environment integration in another post.

Packaging and distributing Electron applications

Applications can be packaged into distributable operating-system-specific containers, for example .exe files, allowing them to run on machines that have no development setup. The packaging process is fairly simple and well documented in Electron's documentation, and it is out of scope for this post, but worth the look if you want to package your application. To understand more of the application distribution and packaging process, read Electron's official documentation on it here.

Electron and its current use

Electron is still really fresh and right out of GitHub's hands, but it's already been adopted by quite a few companies, and there are a number of applications already built on top of it.

Companies using Electron:

Slack
Microsoft
Github

Applications built with Electron or using Electron:

Visual Studio Code - Microsoft's Visual Studio Code
Hearthdash - Hearthdash is a card tracking application for Hearthstone.
Monu - Process monitoring app
Kart - Frontend for RetroArch
Friends - P2P chat powered by the web

Final words on Electron

It's obvious that Electron is still taking its first baby steps, but it's hard to deny the fact that more and more user interfaces will be written in different web technologies with HTML5, and this is one of the great starts for it. It'll be interesting to see how the gap between desktop and web applications develops as time goes on, and people like you and me will be playing a key role in the development of future applications. With the help of technologies like Electron, desktop application development just got that much easier.

For more Node.js content, look no further than our dedicated page!

About the author

Mika Turunen is a software professional hailing from the frozen cold Finland. He spends a good part of his day playing with emerging web and cloud related technologies, but he also has a big knack for games and game development. His hobbies include game collecting, game development and games in general. When he's not playing with technology he is spending time with his two cats and growing his beard.

Apple announces ‘WebKit Tracking Prevention Policy’ that considers web tracking as a security vulnerability

Bhagyashree R
19 Aug 2019
5 min read
Inspired by Mozilla's anti-tracking policy, Apple has announced its intention to implement the WebKit Tracking Prevention Policy in Safari, the details of which it shared last week. This policy outlines the types of tracking techniques that will be prevented in WebKit to ensure user privacy. The anti-tracking mitigations listed in this policy will be applied "universally to all websites, or based on algorithmic, on-device classification."

https://twitter.com/webkit/status/1161782001839607809

Web tracking is the collection of user data over multiple web pages and websites, which can be linked to individual users via a unique user identifier. All your previous interactions with any website could be recorded and recalled with the help of a tracking system like cookies. The data tracked includes the things you have searched for, the websites you have visited, the things you have clicked on, the movements of your mouse around a web page, and more.

Organizations and companies rely heavily on web tracking to gain insight into their user behavior and preferences. One of the main purposes of these insights is user profiling and targeted marketing. While this user tracking helps businesses, it can be pervasive and used for other sinister purposes. In the recent past, we have seen many companies, including big tech like Facebook and Google, involved in several scandals related to violating user online privacy. For instance, Facebook's Cambridge Analytica scandal and Google's cookie case.

Apple aims to create "a healthy web ecosystem, with privacy by design"

The WebKit Tracking Prevention Policy will prevent several tracking techniques, including cross-site tracking, stateful tracking, covert stateful tracking, navigational tracking, fingerprinting, covert tracking, and other unknown techniques that do not fall under these categories. WebKit will limit the capability of using a tracking technique in case it is not possible to prevent it without undue harm to the user. If this also does not help, users will be asked for their consent.

Apple will treat any attempt to subvert the anti-tracking methods as a security vulnerability. "We treat circumvention of shipping anti-tracking measures with the same seriousness as an exploitation of security vulnerabilities," Apple wrote. It warns that it may add more restrictions, without prior notice, against parties who attempt to circumvent the tracking prevention methods. Apple further mentioned that there won't be any exception even if you have a valid use for a technique that is also used for tracking. The announcement reads, "But WebKit often has no technical means to distinguish valid uses from tracking, and doesn't know what the parties involved will do with the collected data, either now or in the future."

WebKit Tracking Prevention Policy's unintended impact

With the implementation of this policy, Apple warns of certain unintended repercussions as well. Among the possibly affected features are funding websites using targeted or personalized advertising, federated login using a third-party login provider, fraud prevention, and more. In cases of tradeoffs, WebKit will prioritize user benefits over current website practices. Apple promises to limit this unintended impact and might update the tracking prevention methods to permit certain use cases. In the future, it will also come up with new web technologies that will allow these practices without compromising user online privacy, such as the Storage Access API and Privacy-Preserving Ad Click Attribution.
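To give a flavour of the privacy-preserving replacements mentioned above, here is a hedged sketch of how an embedded third-party frame can ask for cookie access under the Storage Access API; the promise-based calls are the documented entry points, while the logging and control flow are our own illustration:

// Inside a cross-site iframe, typically run from a user-initiated
// event handler such as a click (a requirement for the request to succeed)
document.hasStorageAccess().then(function (hasAccess) {
    if (hasAccess) {
        return Promise.resolve();
    }
    // Explicitly ask the browser (and potentially the user) for access
    return document.requestStorageAccess();
}).then(function () {
    console.log("Storage access granted; first-party cookies are available");
}).catch(function () {
    console.log("Storage access was denied");
});

The design choice here is that access is granted per-frame and only after an explicit request, which lets legitimate federated login flows keep working while denying the silent cross-site cookie reads that classic trackers depend on.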
What users are saying about Apple's anti-tracking policy

At a time when there is increasing concern regarding user online privacy, this policy comes as a blessing. Many users are appreciating this move, while some do fear that it will affect some user-friendly features. In an ongoing discussion on Hacker News, a user commented, "The fact that this makes behavioral targeting even harder makes me very happy." Some others also believe that focusing on online tracking protection methods will give browsers an edge over Google's Chrome. A user said, "One advantage of Google's dominance and their business model being so reliant on tracking, is that it's become the moat for its competitors: investing energy into tracking protection is a good way for them to gain a competitive advantage over Google, since it's a feature that Google will not be able to copy. So as long as Google's competitors remain in business, we'll probably at least have some alternatives that take privacy seriously."

When asked about the added restrictions that will be applied if a party is found circumventing tracking prevention, a member of the WebKit team commented, "We're willing to do specifically targeted mitigations, but only if we have to. So far, nearly everything we've done has been universal or algorithmic. The one exception I know of was to delete tracking data that had already been planted by known circumventors, at the same time as the mitigation to stop anyone else from using that particular hole (HTTPS supercookies)."

Some users had questions about the features that will be impacted by the introduction of this policy. A user wrote, "While I like the sentiment, I hate that Safari drops cookies after a short period of non-use. I wind up having to re-login to sites constantly while Chrome does it automatically." Another user added, "So what is going to happen when Apple succeeds in making it impossible to make any money off advertisements shown to iOS users on the web? I'm currently imagining a future where publishers start to just redirect iOS traffic to install their app, where they can actually make money. Good news for the walled garden, I guess?"

Read Apple's official announcement to know more about the WebKit Tracking Prevention Policy.

Firefox Nightly now supports Encrypted Server Name Indication (ESNI) to prevent 3rd parties from tracking your browsing history
All about Browser Fingerprinting, the privacy nightmare that keeps web developers awake at night
Apple proposes a "privacy-focused" ad click attribution model for counting conversions without tracking users

Converting XML to PDF

Packt
23 Oct 2009
5 min read
XML is the most suitable format for data exchange, but not for data presentation. Adobe's PDF and Microsoft Excel's spreadsheet are the commonly used formats for data presentation. If you receive an XML file containing data that needs to be included in a PDF or Excel spreadsheet file, you need to convert the XML file to the relevant format. Some of the commonly used XML-to-PDF conversion tools/APIs are discussed in the following table:

Tool/API - Description
iText - iText is a Java library for generating a PDF document. iText may be downloaded from http://www.lowagie.com/iText/.
Stylus Studio XML Publisher - XML Publisher is a report designer, which supports many data sources, including XML, to generate PDF reports.
Stylus Studio XML Editor - XML Editor supports XML conversion to PDF.
Apache FOP - The Apache project provides an open source FO processor called Apache FOP to render an XSL-FO document as a PDF document. We will discuss the Apache FOP processor in this chapter.
XMLMill for Java - XMLMill may be used to generate PDF documents from XML data combined with XSL and XSLT. XMLMill may be downloaded from http://www.xmlmill.com/.
RenderX XEP - RenderX provides an XSL-FO processor called XEP that may be used to generate a PDF document.

We can convert an XML file's data to a PDF document using any of these tools/APIs in JDeveloper 11g. Here, we will use the Apache FOP API. The XSL specification consists of two components: a language for transforming XML documents (XSLT), and an XML syntax for specifying formatting objects (XSL-FO). Using XSL-FO, the layout, fonts, and representations of the data may be formatted. Apache FOP (Formatting Objects Processor) is a print formatter for converting XSL formatting objects (XSL-FO) to an output format such as PDF, PCL, PS, SVG, XML, Print, AWT, MIF, or TXT.

In this article, we will convert an XML document to PDF using XSL-FO and the FOP processor in Oracle JDeveloper 11g. The procedure to create a PDF document from an XML file using the Apache FOP processor in JDeveloper is as follows:

Create an XML document.
Create an XSL stylesheet.
Convert the XML document to an XSL-FO document.
Convert the XSL-FO document to a PDF file.

Setting the environment

We need to download the FOP JAR file fop-0.20.5-bin.zip (or a later version) from http://archive.apache.org/dist/xmlgraphics/fop/binaries/ and extract the ZIP file to a directory. To develop an XML-to-PDF conversion application, we need to create an application (ApacheFOP, for example) and a project (ApacheFOP, for example) in JDeveloper.

In the project, add an XML document, catalog.xml, with File | New. In the New Gallery window select Categories | General | XML and Items | XML Document. Click on OK. In the Create XML File window specify a File Name, catalog.xml, and click on OK. A catalog.xml file gets added to the ApacheFOP project. Copy the following catalog.xml listing to catalog.xml:

<?xml version="1.0" encoding="UTF-8"?>
<catalog title="Oracle Magazine" publisher="Oracle Publishing">
    <journal edition="September-October 2008">
        <article>
            <title>Share 2.0</title>
            <author>Alan Joch</author>
        </article>
        <article>
            <title>Restrictions Apply</title>
            <author>Alan Joch</author>
        </article>
    </journal>
    <journal edition="March-April 2008">
        <article>
            <title>Oracle Database 11g Redux</title>
            <author>Tom Kyte</author>
        </article>
        <article>
            <title>Declarative Data Filtering</title>
            <author>Steve Muench</author>
        </article>
    </journal>
</catalog>

We also need to add an XSL stylesheet to convert the XML document to an XSL-FO document.
Create an XSL stylesheet with File | New. In the New Gallery window, select Categories | General | XML and Items | XSL Stylesheet. Click on OK. In the Create XSL File window specify a File Name (catalog.xsl) and click on OK. A catalog.xsl file gets added to the ApacheFOP project.

To convert the XML document to an XSL-FO document and subsequently create a PDF file from the XSL-FO file, we need a Java application. Add a Java class, XMLToPDF.java, with File | New. In the New Gallery window select Categories | General and Items | Java Class. Click on OK. In the Create Java Class window specify a class Name (XMLToPDF, for example) and click on OK. A Java class gets added to the ApacheFOP project. The directory structure of the FOP application is shown in the following illustration:

Next, add the FOP JAR files to the project. Select the project node (the ApacheFOP node) and then Tools | Project Properties. In the Project Properties window, select Libraries and Classpath. Add the Oracle XML Parser v2 library with the Add Library button. The JAR files required to develop an FOP application are listed in the following table:

JAR File - Description
<FOP>/fop-0.20.5/build/fop.jar - Apache FOP API
<FOP>/fop-0.20.5/lib/batik.jar - Graphics classes
<FOP>/fop-0.20.5/lib/avalon-framework-cvs-20020806.jar - Logger classes
<FOP>/fop-0.20.5/lib/xercesImpl-2.2.1.jar - The DOMParser and the SAXParser classes
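The article breaks off before showing the conversion class itself, so the following is a hedged sketch of what XMLToPDF.java could look like using the classic FOP 0.20.5 Driver API; the file locations are illustrative assumptions, and a real catalog.xsl would need to produce XSL-FO output:

import java.io.BufferedOutputStream;
import java.io.File;
import java.io.FileOutputStream;
import java.io.OutputStream;

import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.sax.SAXResult;
import javax.xml.transform.stream.StreamSource;

import org.apache.fop.apps.Driver;

public class XMLToPDF {
    public static void main(String[] args) throws Exception {
        // Illustrative file locations; adjust to your project layout
        File xmlFile = new File("catalog.xml");
        File xslFile = new File("catalog.xsl");
        OutputStream out = new BufferedOutputStream(
                new FileOutputStream(new File("catalog.pdf")));
        try {
            // The FOP 0.20.5 Driver renders XSL-FO input as PDF
            Driver driver = new Driver();
            driver.setRenderer(Driver.RENDER_PDF);
            driver.setOutputStream(out);

            // Apply catalog.xsl to catalog.xml, piping the resulting
            // XSL-FO document straight into FOP's content handler
            Transformer transformer = TransformerFactory.newInstance()
                    .newTransformer(new StreamSource(xslFile));
            transformer.transform(new StreamSource(xmlFile),
                    new SAXResult(driver.getContentHandler()));
        } finally {
            out.close();
        }
    }
}

Chaining the XSLT transformation into the Driver's content handler avoids writing an intermediate .fo file to disk, which is why this pattern was commonly used with the 0.20.x releases.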

Oracle Web Services Manager: Authentication and Authorization

Packt
23 Oct 2009
6 min read
Here, we will see:

Steps involved in the authentication and authorization process
Learning file authentication and authorization
Implementing Active Directory authentication and authorization
Details of the policy template

Steps involved in the authentication and authorization process

Oracle Web Services Manager can authenticate web services requests by validating the credentials against a data store. The credentials (for example, username and password, SAML token, certificate, and so on) that are attached to the web services request will be validated against a data store, such as a file system, database, Active Directory, or any LDAP-compliant directory. Once authentication is successful, the next step is to perform authorization by validating the username against a set of pre-defined groups which have access to the web service.

The following figure shows the process where the user accesses an application which acts as a client for the web service. The client application then attaches the username and password to make the web service request. The username and password are then validated against a file system or LDAP directory by Oracle WSM, either using the gateway or the agent. The authentication and authorization against different directory stores can be configured using Oracle WSM policy steps. Oracle Web Services Manager has predefined policy steps for:

File Authenticate and Authorize
Active Directory Authenticate and Authorize
LDAP Authenticate and Authorize

In the previous figure, the Oracle WSM Gateway is used to protect the web services and externalize the security. In order to authenticate and authorize requests to web services, the web services can be registered within the gateway, and the request pipeline of the gateway will validate the credentials and authorize the access before it forwards the request to the actual web service provider. The gateway steps for authentication and authorization can be summarized as:

Log incoming request (optional)
Extract credentials (get the credentials from the SOAP message or HTTP header)
Authenticate (file authenticate, Active Directory authenticate, and so on)
Authorize (file authorize, Active Directory authorize, and so on)
Request is forwarded to the web service provider

The response from the web service also follows a similar response pipeline, where you can implement the logging, encryption, or signing of the response. While it is not required to implement any steps in the response pipeline, there should be a response pipeline even if it's doing nothing.

Oracle WSM: File Authenticate and Authorize

Oracle Web Services Manager can authenticate web services requests against a file that has the list of usernames and passwords. In this example, the username and password information is part of the SOAP message; however, one can also send the username and password as an HTTP header, or it can be any XML data that is part of the web services message. While file-based authentication can easily be compromised, it is often used as a jump start or testing process to validate the authentication and authorization process. Authentication and authorization of web service requests against a file requires three main steps, and these are described below.
There is a default log step which will log all the request and response messages, and you can also include that log step at any point to log messages. The three main steps are:

Extract Credentials
File Authenticate
File Authorize

The first step to authenticate a web service request against a password file (file authenticate) is to extract the username and password credentials from the SOAP message. The client application attaches the username and password to the SOAP message, as per the UserName token profile. In the policy to authenticate the web service against the file, add the step in the request process to extract credentials. Since this is a web service request, as opposed to an HTTP post, configure the Credentials location to WS-BASIC (refer to the following screenshot).

Note: WS-BASIC means that it is WS-Security compliant. WS-Security is the OASIS specification that specifies how security tokens are inserted as a part of the SOAP message. In other words, WS-BASIC means that the username and password can be found in the SOAP message, as per the username token profile of the WS-Security specification.

Once the credentials are extracted, the next step is to validate them against the file. The default implementation of Oracle WSM File Authenticate requires each entry to hold the username and password separated by a colon, with the password stored as a hash value using the MD5 or SHA1 algorithm. In order to authenticate the credentials against the data store, the next step is to configure the File Authenticate step in Oracle WSM. In this step, the options are straightforward. We have to configure the location of the password file and the hash algorithm format as either md5 or SHA1 (see the next screenshot). A sample entry with username and password is:

bob:{MD5}jK2x5HPF1b3NIjcmjdlDNA==

You can use the wsmadmin tool (provided as part of the Oracle WSM standalone install or the SOA suite). Type:

wsmadmin md5encode bob password c:\.htpasswd

Now that the authentication steps are configured, the next step is to configure the authorization policy step to ensure that only valid users can access the web service. For the file authorization method, it is no different from the file authenticate method, that is, even the user-to-role mappings are kept in a file. The following figure shows the File Authorize policy step. In this step, we have to define the location of the XML file that contains the users-to-roles mapping, and also the list of roles that should be allowed to access the service. The roles XML file should look like:

<?xml version='1.0' encoding='utf-8'?>
<UserRoles>
    <user username="joe" roles="guest"/>
    <user username="Bob" roles="Admin,guest"/>
</UserRoles>

In the previous XML file, the list of roles the user belongs to is defined as the value of the roles attribute and is comma separated. Now that we have completed the steps to extract credentials, authenticate the request, and also authorize the request, the next step is to save the policy steps and commit the policy changes. Once the policy is committed, any request to that web service will require a username and password, and that user should have the necessary privileges to access the service.

Oracle WSM: Active Directory Authenticate and Authorize

In the previous section, we discussed authenticating and authorizing web service requests against a file. Though it's an easy start, security based on a file system can be easily compromised and will be tough to maintain.
Authentication and authorization of web services are better handled when integrated with a native LDAP directory, such as Active Directory, so that the AD administrator can manage users and group membership. In this section, we will discuss how to authenticate and authorize web service requests against an Active Directory. Active-Directory-based authentication and authorization of web service requests involves the same steps as file-based authentication and authorization, and they are (a conceptual sketch of the underlying directory check follows the list):

Extract Credentials
Active Directory Authenticate
Active Directory Authorize
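The Active Directory Authenticate step is configured through the Oracle WSM console rather than in code, but conceptually it performs an LDAP bind with the extracted credentials. As a hedged illustration of that underlying check, here is a minimal JNDI sketch; the host name and DN values are invented placeholders, not Oracle WSM configuration:

import java.util.Hashtable;

import javax.naming.Context;
import javax.naming.NamingException;
import javax.naming.directory.InitialDirContext;

public class LdapAuthCheck {
    public static boolean authenticate(String userDn, String password) {
        Hashtable<String, String> env = new Hashtable<String, String>();
        env.put(Context.INITIAL_CONTEXT_FACTORY,
                "com.sun.jndi.ldap.LdapCtxFactory");
        // Placeholder host and port for the Active Directory server
        env.put(Context.PROVIDER_URL, "ldap://ad.example.com:389");
        env.put(Context.SECURITY_AUTHENTICATION, "simple");
        env.put(Context.SECURITY_PRINCIPAL, userDn);
        env.put(Context.SECURITY_CREDENTIALS, password);
        try {
            // A successful bind means the credentials are valid
            new InitialDirContext(env).close();
            return true;
        } catch (NamingException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // Example DN; real deployments map the extracted username to a DN
        boolean ok = authenticate(
                "CN=bob,CN=Users,DC=example,DC=com", "password");
        System.out.println("Authenticated: " + ok);
    }
}

The advantage of delegating this check to the directory, as Oracle WSM does, is that password policies, lockouts, and group membership stay under the AD administrator's control instead of being duplicated in a flat file.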

CRUD Operations in REST

Packt
16 Sep 2015
11 min read
In this article by Ludovic Dewailly, the author of Building a RESTful Web Service with Spring, we will learn how requests to retrieve data from a RESTful endpoint, created to access the rooms in a sample property management system, are typically mapped to the HTTP GET method in RESTful web services. We will expand on this by implementing some of the endpoints to support all the CRUD (Create, Read, Update, Delete) operations. In this article, we will cover the following topics:

Mapping the CRUD operations to the HTTP methods
Creating resources
Updating resources
Deleting resources
Testing the RESTful operations
Emulating the PUT and DELETE methods

Mapping the CRUD operations to HTTP methods

The HTTP 1.1 specification defines the following methods:

OPTIONS: This method represents a request for information about the communication options available for the requested URI. This is, typically, not directly leveraged with REST. However, this method can be used as a part of the underlying communication. For example, this method may be used when consuming web services from a web page (as a part of the Cross-Origin Resource Sharing mechanism).
GET: This method retrieves the information identified by the request URI. In the context of RESTful web services, this method is used to retrieve resources. This is the method used for read operations (the R in CRUD).
HEAD: The HEAD requests are semantically identical to the GET requests except the body of the response is not transmitted. This method is useful for obtaining meta-information about resources. Similar to the OPTIONS method, this method is not typically used directly in REST web services.
POST: This method is used to instruct the server to accept the entity enclosed in the request as a new resource. The create operations are typically mapped to this HTTP method.
PUT: This method requests the server to store the enclosed entity under the request URI. To support the updating of REST resources, this method can be leveraged. As per the HTTP specification, the server can create the resource if the entity does not exist. It is up to the web service designer to decide whether this behavior should be implemented or resource creation should only be handled by POST requests.
DELETE: The last operation not yet mapped is the deletion of resources. The HTTP specification defines a DELETE method that is semantically aligned with the deletion of RESTful resources.
TRACE: This method is used to perform actions on web servers. These actions are often aimed at aiding the development and testing of HTTP applications. The TRACE requests aren't usually mapped to any particular RESTful operations.
CONNECT: This HTTP method is defined to support HTTP tunneling through a proxy server. Since it deals with transport layer concerns, this method has no natural semantic mapping to the RESTful operations.

The RESTful architecture does not mandate the use of HTTP as a communication protocol. Furthermore, even if HTTP is selected as the underlying transport, no provisions are made regarding the mapping of the RESTful operations to the HTTP methods. Developers could feasibly support all operations through POST requests. This being said, the following CRUD-to-HTTP-method mapping is commonly used in REST web services:

Operation - HTTP method
Create - POST
Read - GET
Update - PUT
Delete - DELETE

Our sample web service will use these HTTP methods to support CRUD operations.
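As a quick preview of what that mapping looks like from a client's point of view, the following hedged curl sketch exercises each verb against the rooms endpoint we are about to build; the localhost URL and JSON fields follow the examples used later in this article:

# Create a room
curl -X POST http://localhost:8080/rooms \
     -H "Content-Type: application/json" \
     -d '{"name": "Cool Room", "description": "A room that is very cool indeed", "room_category_id": 1}'

# Read a single room and the whole collection
curl http://localhost:8080/rooms/2
curl http://localhost:8080/rooms

# Update a room
curl -X PUT http://localhost:8080/rooms/2 \
     -H "Content-Type: application/json" \
     -d '{"id": 2, "name": "Cool Room", "description": "Updated", "room_category_id": 1}'

# Delete a room
curl -X DELETE http://localhost:8080/rooms/2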
The rest of this article will illustrate how to build such operations.

Creating resources

The inventory component of our sample property management system deals with rooms, and we have already built an endpoint to access the rooms. Let's take a look at how to define an endpoint to create new resources:

@RestController
@RequestMapping("/rooms")
public class RoomsResource {
    @RequestMapping(method = RequestMethod.POST)
    public ApiResponse addRoom(@RequestBody RoomDTO room) {
        Room newRoom = createRoom(room);
        return new ApiResponse(Status.OK, new RoomDTO(newRoom));
    }
}

We've added a new method to our RoomsResource class to handle the creation of new rooms. @RequestMapping is used to map requests to the Java method. Here we map the POST requests to addRoom(). Not specifying a value (that is, a path) in @RequestMapping is equivalent to using "/". We pass the new room as @RequestBody. This annotation instructs Spring to map the body of the incoming web request to the method parameter. Jackson is used here to convert the JSON request body to a Java object. With this new method, POSTing a request to http://localhost:8080/rooms with the following JSON body will result in the creation of a new room:

{
    "name": "Cool Room",
    "description": "A room that is very cool indeed",
    "room_category_id": 1
}

Our new method will return the newly created room:

{
    "status": "OK",
    "data": {
        "id": 2,
        "name": "Cool Room",
        "room_category_id": 1,
        "description": "A room that is very cool indeed"
    }
}

We could decide to return only the ID of the new resource in response to the resource creation. However, since we may sanitize or otherwise manipulate the data that was sent over, it is a good practice to return the full resource.

Quickly testing endpoints

For the purpose of quickly testing our newly created endpoint, let's look at testing the new rooms created using Postman. Postman (https://www.getpostman.com) is a Google Chrome plugin extension that provides tools to build and test web APIs. The following screenshot illustrates how Postman can be used to test this endpoint:

In Postman, we specify the URL to send the POST request to, http://localhost:8080/rooms, with the "application/json" content type header and the body of the request. Sending this request will result in a new room being created and returned, as shown in the following:

We have successfully added a room to our inventory service using Postman. It is equally easy to create incomplete requests to ensure our endpoint performs any necessary sanity checks before persisting data into the database.

JSON versus form data

Posting forms is the traditional way of creating new entities on the web and could easily be used to create new RESTful resources. We can change our method to the following:

@RequestMapping(method = RequestMethod.POST,
        consumes = MediaType.APPLICATION_FORM_URLENCODED_VALUE)
public ApiResponse addRoom(String name, String description, long roomCategoryId) {
    Room room = createRoom(name, description, roomCategoryId);
    return new ApiResponse(Status.OK, new RoomDTO(room));
}

The main difference from the previous method is that we tell Spring to map form requests (that is, requests with the application/x-www-form-urlencoded content type) instead of JSON requests. In addition, rather than expecting an object as a parameter, we receive each field individually. By default, Spring will use the Java method attribute names to map incoming form inputs. Developers can change this behavior by annotating an attribute with @RequestParam("…") to specify the input name.
In situations where the main web service consumer is a web application, using form requests may be more applicable. In most cases, however, the former approach is more in line with RESTful principles and should be favored. Besides, when complex resources are handled, form requests will prove cumbersome to use. From a developer's standpoint, it is easier to delegate object mapping to a third-party library such as Jackson. Now that we have created a new resource, let's see how we can update it.

Updating resources

Choosing URI formats is an important part of designing RESTful APIs. As seen previously, rooms are accessed using the /rooms/{roomId} path and created under /rooms. You may recall that, as per the HTTP specification, PUT requests can result in the creation of entities if they do not exist. The decision to create new resources on update requests is up to the service designer. It does, however, affect the choice of path to be used for such requests. Semantically, PUT requests update entities stored under the supplied request URI. This means the update requests should use the same URI as the GET requests: /rooms/{roomId}. However, this approach hinders the ability to support resource creation on update, since no room identifier will be available. The alternative path we can use is /rooms, with the room identifier passed in the body of the request. With this approach, the PUT requests can be treated as POST requests when the resource does not contain an identifier. Given that the first approach is semantically more accurate, we will choose not to support resource creation on update, and we will use the following path for the PUT requests: /rooms/{roomId}

Update endpoint

The following method provides the necessary endpoint to modify the rooms:

@RequestMapping(value = "/{roomId}", method = RequestMethod.PUT)
public ApiResponse updateRoom(@PathVariable long roomId,
        @RequestBody RoomDTO updatedRoom) {
    try {
        Room room = updateRoom(updatedRoom);
        return new ApiResponse(Status.OK, new RoomDTO(room));
    } catch (RecordNotFoundException e) {
        return new ApiResponse(Status.ERROR, null,
                new ApiError(999, "No room with ID " + roomId));
    }
}

As discussed in the beginning of this article, we map update requests to the HTTP PUT verb. Annotating this method with @RequestMapping(value = "/{roomId}", method = RequestMethod.PUT) instructs Spring to direct the PUT requests here. The room identifier is part of the path and mapped to the first method parameter. In a fashion similar to the resource creation requests, we map the body to our second parameter with the use of @RequestBody.

Testing update requests

With Postman, we can quickly create a test case to update the room we created. To do so, we send a PUT request with the following body:

{
    "id": 2,
    "name": "Cool Room",
    "description": "A room that is really very cool indeed",
    "room_category_id": 1
}

The resulting response will be the updated room, as shown here:

{
    "status": "OK",
    "data": {
        "id": 2,
        "name": "Cool Room",
        "room_category_id": 1,
        "description": "A room that is really very cool indeed."
    }
}

Should we attempt to update a nonexistent room, the server will generate the following response:

{
    "status": "ERROR",
    "error": {
        "error_code": 999,
        "description": "No room with ID 3"
    }
}

Since we do not support resource creation on update, the server returns an error indicating that the resource cannot be found.

Deleting resources

It will come as no surprise that we will use the DELETE verb to delete REST resources.
Similarly, the reader will have already figured out that the path for delete requests will be /rooms/{roomId}. The Java method that deals with room deletion is as follows:

@RequestMapping(value = "/{roomId}", method = RequestMethod.DELETE)
public ApiResponse deleteRoom(@PathVariable long roomId) {
    try {
        Room room = inventoryService.getRoom(roomId);
        inventoryService.deleteRoom(room.getId());
        return new ApiResponse(Status.OK, null);
    } catch (RecordNotFoundException e) {
        return new ApiResponse(Status.ERROR, null,
                new ApiError(999, "No room with ID " + roomId));
    }
}

By declaring the request mapping method to be RequestMethod.DELETE, Spring will make this method handle the DELETE requests. Since the resource is deleted, returning it in the response would not make a lot of sense. Service designers may choose to return a boolean flag to indicate that the resource was successfully deleted. In our case, we leverage the status element of our response to carry this information back to the consumer. The response to deleting a room will be as follows:

{
    "status": "OK"
}

With this operation, we now have a full-fledged CRUD API for our Inventory Service. Before we conclude this article, let's discuss how REST developers can deal with situations where not all HTTP verbs can be utilized.

HTTP method override

In certain situations (for example, when the service or its consumers are behind an overzealous corporate firewall, or if the main consumer is a web page), only the GET and POST HTTP methods might be available. In such cases, it is possible to emulate the missing verbs by passing a custom header in the requests. For example, resource updates can be handled using POST requests by setting a custom header (for example, X-HTTP-Method-Override) to PUT to indicate that we are emulating a PUT request via a POST request. The following method will handle this scenario:

@RequestMapping(value = "/{roomId}", method = RequestMethod.POST,
        headers = {"X-HTTP-Method-Override=PUT"})
public ApiResponse updateRoomAsPost(@PathVariable("roomId") long id,
        @RequestBody RoomDTO updatedRoom) {
    return updateRoom(id, updatedRoom);
}

By setting the headers attribute on the mapping annotation, Spring request routing will intercept the POST requests with our custom header and invoke this method. Normal POST requests will still map to the Java method we had put together to create new rooms.

Summary

In this article, we've completed the implementation of our sample RESTful web service by adding all the CRUD operations necessary to manage the room resources. We've discussed how to organize URIs to best embody the REST principles and looked at how to quickly test endpoints using Postman. Now that we have a fully working component of our system, we can take some time to discuss performance.

Resources for Article:

Further resources on this subject:

Introduction to Spring Web Application in No Time [article]
Aggregators, File exchange Over FTP/FTPS, Social Integration, and Enterprise Messaging [article]
Time Travelling with Spring [article]

Getting started with Django RESTful Web Services

Sugandha Lahoti
27 Mar 2018
19 min read
In this article, we will kick-start by learning about RESTful Web Services and subsequently learn how to create models and perform migration, serialization, and deserialization in Django. Here's a quick glance at what to expect in this article:

Defining the requirements for a RESTful Web Service
Analyzing and understanding Django tables and the database
Controlling serialization and deserialization in Django

Defining the requirements for a RESTful Web Service

Imagine a team of developers working on a mobile app for iOS and Android that requires a RESTful Web Service to perform CRUD operations with toys. We definitely don't want to use a mock web service and we don't want to spend time choosing and configuring an ORM (short for Object-Relational Mapping). We want to quickly build a RESTful Web Service and have it ready as soon as possible to start interacting with it in the mobile app.

We really want the toys to persist in a database but we don't need it to be production-ready. Therefore, we can use the simplest possible relational database, as long as we don't have to spend time performing complex installations or configurations. Django REST framework, also known as DRF, will allow us to easily accomplish this task and start making HTTP requests to the first version of our RESTful Web Service. In this case, we will work with a very simple SQLite database, the default database for a new Django REST framework project.

First, we must specify the requirements for our main resource: a toy. We need the following attributes or fields for a toy entity:

An integer identifier
A name
An optional description
A toy category description, such as action figures, dolls, or playsets
A release date
A Boolean value indicating whether the toy has been on the online store's homepage at least once

In addition, we want to have a timestamp with the date and time of the toy's addition to the database table, which will be generated to persist toys.

In a RESTful Web Service, each resource has its own unique URL. In our web service, each toy will have its own unique URL. The following table shows the HTTP verbs, the scope, and the semantics of the methods that our first version of the web service must support. Each method is composed of an HTTP verb and a scope. All the methods have a well-defined meaning for toys and collections:

HTTP verb - Scope - Semantics
GET - Toy - Retrieve a single toy
GET - Collection of toys - Retrieve all the stored toys in the collection, sorted by their name in ascending order
POST - Collection of toys - Create a new toy in the collection
PUT - Toy - Update an existing toy
DELETE - Toy - Delete an existing toy

In the previous table, the GET HTTP verb appears twice but with two different scopes: toys and collection of toys. The first row shows a GET HTTP verb applied to a toy, that is, to a single resource. The second row shows a GET HTTP verb applied to a collection of toys, that is, to a collection of resources.

We want our web service to be able to differentiate collections from a single resource of the collection in the URLs. When we refer to a collection, we will use a slash (/) as the last character for the URL, as in http://localhost:8000/toys/. When we refer to a single resource of the collection we won't use a slash (/) as the last character for the URL, as in http://localhost:8000/toys/5.

Let's consider that http://localhost:8000/toys/ is the URL for the collection of toys. If we add a number to the previous URL, we identify a specific toy with an ID or primary key equal to the specified numeric value.
For example, http://localhost:8000/toys/42 identifies the toy with an ID equal to 42.

We have to compose and send an HTTP request with the POST HTTP verb and the http://localhost:8000/toys/ request URL to create a new toy and add it to the toys collection. In this example, our RESTful Web Service will work with JSON (short for JavaScript Object Notation), and therefore we have to provide the JSON key-value pairs with the field names and the values to create the new toy. As a result of the request, the server will validate the provided values for the fields, make sure that it is a valid toy, and persist it in the database. The server will insert a new row with the new toy in the appropriate table and it will return a 201 Created status code and a JSON body with the recently added toy serialized to JSON, including the assigned ID that was automatically generated by the database and assigned to the toy object:

POST http://localhost:8000/toys/

We have to compose and send an HTTP request with the GET HTTP verb and the http://localhost:8000/toys/{id} request URL to retrieve the toy whose ID matches the specified numeric value in {id}. For example, if we use the request URL http://localhost:8000/toys/25, the server will retrieve the toy whose ID matches 25. As a result of the request, the server will retrieve a toy with the specified ID from the database and create the appropriate toy object in Python. If a toy is found, the server will serialize the toy object into JSON, return a 200 OK status code, and return a JSON body with the serialized toy object. If no toy matches the specified ID, the server will return only a 404 Not Found status:

GET http://localhost:8000/toys/{id}

We have to compose and send an HTTP request with the PUT HTTP verb and the request URL http://localhost:8000/toys/{id} to retrieve the toy whose ID matches the value in {id} and replace it with a toy created with the provided data. In addition, we have to provide the JSON key-value pairs with the field names and the values to create the new toy that will replace the existing one. As a result of the request, the server will validate the provided values for the fields, make sure that it is a valid toy, and replace the one that matches the specified ID with the new one in the database. The ID for the toy will be the same after the update operation. The server will update the existing row in the appropriate table and it will return a 200 OK status code and a JSON body with the recently updated toy serialized to JSON. If we don't provide all the necessary data for the new toy, the server will return a 400 Bad Request status code. If the server doesn't find a toy with the specified ID, the server will only return a 404 Not Found status:

PUT http://localhost:8000/toys/{id}

We have to compose and send an HTTP request with the DELETE HTTP verb and the request URL http://localhost:8000/toys/{id} to remove the toy whose ID matches the specified numeric value in {id}. For example, if we use the request URL http://localhost:8000/toys/34, the server will delete the toy whose ID matches 34. As a result of the request, the server will retrieve a toy with the specified ID from the database and create the appropriate toy object in Python. If a toy is found, the server will request the ORM to delete the toy row associated with this toy object and the server will return a 204 No Content status code.
Creating the first Django model

Now, we will create a simple Toy model in Django, which we will use to represent and persist toys. Open the toys/models.py file. The following lines show the initial code for this file, with just one import statement and a comment that indicates we should create the models:

from django.db import models

# Create your models here.

The following lines show the new code that creates a Toy class, specifically, a Toy model, in the toys/models.py file. The code file for the sample is included in the hillar_django_restful_02_01 folder, in the restful01/toys/models.py file:

from django.db import models


class Toy(models.Model):
    created = models.DateTimeField(auto_now_add=True)
    name = models.CharField(max_length=150, blank=False, default='')
    description = models.CharField(max_length=250, blank=True, default='')
    toy_category = models.CharField(max_length=200, blank=False, default='')
    release_date = models.DateTimeField()
    was_included_in_home = models.BooleanField(default=False)

    class Meta:
        ordering = ('name',)

The Toy class is a subclass of the django.db.models.Model class and defines the following attributes: created, name, description, toy_category, release_date, and was_included_in_home. Each of these attributes represents a database column or field. We specified the field types, maximum lengths, and defaults for many of the attributes. The class declares a Meta inner class that declares an ordering attribute and sets its value to a tuple of strings whose first value is the 'name' string. This way, the inner class indicates to Django that, by default, we want the results ordered by the name attribute in ascending order.

Running the initial migration

Now, it is necessary to create the initial migration for the new Toy model we recently coded. We will also synchronize the SQLite database for the first time. By default, Django uses the popular self-contained and embedded SQLite database, and therefore we don't need to make changes to the initial ORM configuration. In this example, we will only use SQLite and its default configuration; of course, we will upgrade to another database after we have a sample web service built with Django.

We just need to run the following Python script in the virtual environment that we activated in the previous chapter. Make sure you are in the restful01 folder within the main folder for the virtual environment when you run the following command:

python manage.py makemigrations toys

The following lines show the output generated after running the previous command:

Migrations for 'toys':
  toys/migrations/0001_initial.py:
    - Create model Toy

The output indicates that the restful01/toys/migrations/0001_initial.py file includes the code to create the Toy model. The following lines show the code for this file, which was automatically generated by Django.
The code file for the sample is included in the hillar_django_restful_02_01 folder, in the restful01/toys/migrations/0001_initial.py file:

# -*- coding: utf-8 -*-
# Generated by Django 1.11.5 on 2017-10-08 05:19
from __future__ import unicode_literals

from django.db import migrations, models


class Migration(migrations.Migration):

    initial = True

    dependencies = [
    ]

    operations = [
        migrations.CreateModel(
            name='Toy',
            fields=[
                ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('created', models.DateTimeField(auto_now_add=True)),
                ('name', models.CharField(default='', max_length=150)),
                ('description', models.CharField(blank=True, default='', max_length=250)),
                ('toy_category', models.CharField(default='', max_length=200)),
                ('release_date', models.DateTimeField()),
                ('was_included_in_home', models.BooleanField(default=False)),
            ],
            options={
                'ordering': ('name',),
            },
        ),
    ]

Understanding migrations

The automatically generated code defines a subclass of the django.db.migrations.Migration class named Migration, which defines an operation that creates the Toy model's table and includes it in the operations attribute. The call to the migrations.CreateModel method specifies the model's name, the fields, and the options to instruct the ORM to create a table that will allow the underlying database to persist the model. The fields argument is a list of tuples that includes information about the field name, the field type, and additional attributes, based on the data we provided in our model, that is, in the Toy class.

Now, run the following Python script to apply all the generated migrations. Make sure you are in the restful01 folder within the main folder for the virtual environment when you run the following command:

python manage.py migrate

The following lines show the output generated after running the previous command:

Operations to perform:
  Apply all migrations: admin, auth, contenttypes, sessions, toys
Running migrations:
  Applying contenttypes.0001_initial... OK
  Applying auth.0001_initial... OK
  Applying admin.0001_initial... OK
  Applying admin.0002_logentry_remove_auto_add... OK
  Applying contenttypes.0002_remove_content_type_name... OK
  Applying auth.0002_alter_permission_name_max_length... OK
  Applying auth.0003_alter_user_email_max_length... OK
  Applying auth.0004_alter_user_username_opts... OK
  Applying auth.0005_alter_user_last_login_null... OK
  Applying auth.0006_require_contenttypes_0002... OK
  Applying auth.0007_alter_validators_add_error_messages... OK
  Applying auth.0008_alter_user_username_max_length... OK
  Applying sessions.0001_initial... OK
  Applying toys.0001_initial... OK

After we run the previous command, we will notice that the root folder for our restful01 project now has a db.sqlite3 file that contains the SQLite database. We can use the SQLite command line, or any other application that allows us to easily check the contents of an SQLite database, to inspect the tables that Django generated.

The first migration generates many tables required by Django and its installed apps before running the code that creates the table for the Toy model. These tables provide support for user authentication, permissions, groups, logs, and migration management. We will work with the models related to these tables after we add more features and security to our web services. After the migration process creates all these Django tables in the underlying database, the first migration runs the Python code that creates the table required to persist our model. Thus, the last line of the Running migrations section displays Applying toys.0001_initial.
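At this point, we can already persist toys through Django's ORM, even before the web service exists. The following interactive session is a quick sanity check, not part of the book's code; the toy values are invented, and the printed results depend on the state of your database:

python manage.py shell

>>> from django.utils import timezone
>>> from toys.models import Toy
>>> toy = Toy.objects.create(
...     name='Snowboard T-Rex',
...     description='A T-Rex riding a snowboard',
...     toy_category='Action figures',
...     release_date=timezone.now(),
...     was_included_in_home=False)
>>> toy.pk
1
>>> Toy.objects.count()
1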
Analyzing the database

In most modern Linux distributions and in macOS, SQLite is already installed, and therefore you can run the sqlite3 command-line utility. In Windows, if you want to work with the sqlite3.exe command-line utility, you have to download the bundle of command-line tools for managing SQLite database files from the downloads section of the SQLite webpage at http://www.sqlite.org/download.html. For example, the ZIP file that includes the command-line tools for version 3.20.1 is sqlite-tools-win32-x86-3200100.zip. The name of the file changes with the SQLite version. You just need to make sure that you download the bundle of command-line tools and not the ZIP file that provides the SQLite DLLs. After you unzip the file, you can include the folder that contains the command-line tools in the PATH environment variable, or you can access the sqlite3.exe command-line utility by specifying its full path.

Run the following command to list the generated tables. The first argument, db.sqlite3, specifies the file that contains the SQLite database, and the second argument indicates the command that we want the sqlite3 command-line utility to run against the specified database:

sqlite3 db.sqlite3 ".tables"

The following lines show the output of the previous command, with the list of tables that Django generated in the SQLite database:

auth_group                  django_admin_log
auth_group_permissions      django_content_type
auth_permission             django_migrations
auth_user                   django_session
auth_user_groups            toys_toy
auth_user_user_permissions

The following command will allow you to check the contents of the toys_toy table after we compose and send HTTP requests to the RESTful Web Service and the web service performs CRUD operations on the toys_toy table:

sqlite3 db.sqlite3 "SELECT * FROM toys_toy ORDER BY name;"

Instead of working with the SQLite command-line utility, you can use a GUI tool to check the contents of the SQLite database. DB Browser for SQLite is a useful, free, multiplatform GUI tool that allows us to easily check the database contents of an SQLite database in Linux, macOS, and Windows. You can read more information about this tool and download its different versions from http://sqlitebrowser.org. Once you have installed the tool, you just need to open the db.sqlite3 file, and you can check the database structure and browse the data in the different tables. After we start working with the first version of our web service, you can also check the contents of the toys_toy table with this tool.

The SQLite database engine and the database file name are specified in the restful01/settings.py Python file. The following lines show the declaration of the DATABASES dictionary, which contains the settings for all the databases that Django uses. The nested dictionary maps the database named default to the django.db.backends.sqlite3 database engine and the db.sqlite3 database file located in the BASE_DIR folder (restful01):

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),
    }
}

After we execute the migrations, the SQLite database will have the following tables. Django uses prefixes to identify the modules and applications that each table belongs to. The tables that start with the auth_ prefix belong to the Django authentication module.
The table that starts with the toys_ prefix belongs to our toys application. If we add more models to our toys application, Django will create new tables with the toys_ prefix:

auth_group: Stores authentication groups
auth_group_permissions: Stores permissions for authentication groups
auth_permission: Stores permissions for authentication
auth_user: Stores authentication users
auth_user_groups: Stores authentication user groups
auth_user_user_permissions: Stores permissions for authentication users
django_admin_log: Stores the Django administrator log
django_content_type: Stores Django content types
django_migrations: Stores the scripts generated by Django migrations and the date and time at which they were applied
django_session: Stores Django sessions
toys_toy: Persists the Toy model
sqlite_sequence: Stores sequences for SQLite primary keys with autoincrement fields

Understanding the table generated by Django

The toys_toy table persists the Toy class we recently created in the database, specifically, the Toy model. Django's integrated ORM generated the toys_toy table based on our Toy model. Run the following command to retrieve the SQL used to create the toys_toy table:

sqlite3 db.sqlite3 ".schema toys_toy"

The following lines show the output of the previous command, together with the SQL that the migrations process executed to create the toys_toy table that persists the Toy model. The next lines are formatted to make it easier to understand the SQL code; notice that the output of the command is formatted differently:

CREATE TABLE IF NOT EXISTS "toys_toy"
(
    "id" integer NOT NULL PRIMARY KEY AUTOINCREMENT,
    "created" datetime NOT NULL,
    "name" varchar(150) NOT NULL,
    "description" varchar(250) NOT NULL,
    "toy_category" varchar(200) NOT NULL,
    "release_date" datetime NOT NULL,
    "was_included_in_home" bool NOT NULL
);

The toys_toy table has the following columns (also known as fields), all of them not nullable, with their SQLite types:

id: The integer primary key, an autoincrement row
created: datetime
name: varchar(150)
description: varchar(250)
toy_category: varchar(200)
release_date: datetime
was_included_in_home: bool

Controlling serialization and deserialization

Our RESTful Web Service has to be able to serialize and deserialize the Toy instances into JSON representations. In Django REST framework, we just need to create a serializer class for the Toy instances to manage serialization to JSON and deserialization from JSON. Now, we will dive deep into the serialization and deserialization process in Django REST framework. It is very important to understand how it works, because it is one of the most important components of all the RESTful Web Services we will build.

Django REST framework uses a two-phase process for serialization. The serializers are mediators between the model instances and Python primitives. Parsers and renderers act as mediators between Python primitives and HTTP requests and responses. We will configure our mediator between the Toy model instances and Python primitives by creating a subclass of the rest_framework.serializers.Serializer class, declaring the fields and the necessary methods to manage serialization and deserialization. We will repeat some of the information about the fields that we have included in the Toy model, so that we understand all the things that we can configure in a subclass of the Serializer class. However, we will work with shortcuts, which will reduce boilerplate code, in later examples.
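As a preview of one such shortcut, the whole serializer we are about to write by hand could later be collapsed into a ModelSerializer subclass, which infers the fields and the default create and update implementations from the model. The following is only a sketch of what that shortcut might look like, with a hypothetical class name, not the code we will use in this article:

from rest_framework import serializers
from toys.models import Toy


class ToyModelSerializer(serializers.ModelSerializer):
    # ModelSerializer derives the fields and the default
    # create/update behavior from the Toy model
    class Meta:
        model = Toy
        fields = ('pk', 'name', 'description', 'release_date',
                  'toy_category', 'was_included_in_home')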
We will write less code in later examples by using the ModelSerializer class. Now, go to the restful01/toys folder and create a new Python code file named serializers.py. The following lines show the code that declares the new ToySerializer class. The code file for the sample is included in the hillar_django_restful_02_01 folder, in the restful01/toys/serializers.py file:

from rest_framework import serializers
from toys.models import Toy


class ToySerializer(serializers.Serializer):
    pk = serializers.IntegerField(read_only=True)
    name = serializers.CharField(max_length=150)
    description = serializers.CharField(max_length=250)
    release_date = serializers.DateTimeField()
    toy_category = serializers.CharField(max_length=200)
    was_included_in_home = serializers.BooleanField(required=False)

    def create(self, validated_data):
        return Toy.objects.create(**validated_data)

    def update(self, instance, validated_data):
        instance.name = validated_data.get('name', instance.name)
        instance.description = validated_data.get('description', instance.description)
        instance.release_date = validated_data.get('release_date', instance.release_date)
        instance.toy_category = validated_data.get('toy_category', instance.toy_category)
        instance.was_included_in_home = validated_data.get('was_included_in_home', instance.was_included_in_home)
        instance.save()
        return instance

The ToySerializer class declares the attributes that represent the fields we want to be serialized. Notice that we have omitted the created attribute that was present in the Toy model. When there is a call to the save method that ToySerializer inherits from the serializers.Serializer superclass, the overridden create and update methods define how to create a new instance or update an existing instance. In fact, these methods must be implemented in our class, because they only raise a NotImplementedError exception in their base declarations in the serializers.Serializer superclass.

The create method receives the validated data in the validated_data argument. The code creates and returns a new Toy instance based on the received validated data. The update method receives an existing Toy instance that is being updated and the new validated data in the instance and validated_data arguments. The code updates the values of the attributes of the instance with the updated attribute values retrieved from the validated data. Finally, the code calls the save method on the updated Toy instance and returns the updated and saved instance.

We designed a RESTful Web Service to interact with a simple SQLite database and perform CRUD operations with toys. We defined the requirements for our web service and understood the tasks performed by each HTTP method and different scope.

You read an excerpt from the book Django RESTful Web Services, written by Gaston C. Hillar. This book helps developers build complex RESTful APIs from scratch with Django and the Django REST framework. The code bundle for the article is hosted on GitHub.

Using Node.js and Hadoop to store distributed data

Harri Siirak
25 Sep 2015
5 min read
Hadoop is a well-known open-source software framework for distributed storage and distributed processing of very large data sets on computer clusters built from commodity hardware. It's designed with the fundamental assumption that hardware failures can (and will) happen and thus should be automatically handled in software by the framework.

Under the hood, it uses HDFS (Hadoop Distributed File System) for data storage. HDFS can store large files across multiple machines, and it achieves reliability by replicating the data across multiple hosts (the default replication factor is 3 and can be configured to be higher when needed). Note that HDFS is designed for mostly immutable files and may not be suitable for systems requiring concurrent write operations. Its target usage is not restricted to MapReduce jobs; it can also be used for cost-effective and reliable data storage.

In the following examples, I am going to give you an overview of how to establish connections to HDFS storage (the namenode) and how to perform basic operations on the data. As you can probably guess, I'm using Node.js to build these examples. Node.js is a platform built on Chrome's JavaScript runtime for easily building fast, scalable network applications. Node.js uses an event-driven, non-blocking I/O model that makes it lightweight and efficient, perfect for data-intensive real-time applications that run across distributed devices. So it's really ideal for what I want to show you next.

Two popular libraries for accessing HDFS in Node.js are node-hdfs and webhdfs. The first one uses Hadoop's native libhdfs library and protocol to communicate with the Hadoop namenode, although it no longer seems to be maintained and doesn't support the Stream API. The second one uses WebHDFS, which defines a public HTTP REST API, directly built into Hadoop's core (namenodes and datanodes alike), which permits clients to access Hadoop from multiple languages without installing Hadoop, and supports all HDFS user operations, including reading files, writing to files, making directories, changing permissions, and renaming. More details about the WebHDFS REST API, its implementation details, and its response codes/types can be found here.

At this point, I'm assuming that you have a Hadoop cluster up and running. There are plenty of good tutorials out there showing how to set up and run a Hadoop cluster (single and multi node).

Installing and using the webhdfs library

webhdfs implements most of the REST API calls, although it does not yet support Hadoop delegation tokens. It's also Stream API compatible, which makes its usage pretty straightforward and easy. Detailed examples and use cases for the other supported calls can be found here.

Install webhdfs from npm:

npm install webhdfs

Create a new script named webhdfs-client.js:

// Include webhdfs module
var WebHDFS = require('webhdfs');

// Create a new webhdfs client
var hdfs = WebHDFS.createClient({
  user: 'hduser', // Hadoop user
  host: 'localhost', // Namenode host
  port: 50070 // Namenode port
});

module.exports = hdfs;

Here we initialized a new webhdfs client with options, including the namenode's host and port that we are connecting to.
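Beyond streams, the client also exposes callback-style helpers for file system housekeeping. The following sketch assumes that the mkdir and readdir calls behave as described in the webhdfs README; check the library's documentation for the exact signatures before relying on them:

// Include the client created above
var hdfs = require('./webhdfs-client');

// Create a remote directory
hdfs.mkdir('/user/hduser/input', function onMkdir (err) {
  if (err) {
    return console.error(err);
  }

  // List the contents of the parent directory
  hdfs.readdir('/user/hduser', function onReaddir (err, files) {
    if (err) {
      return console.error(err);
    }
    console.log(files);
  });
});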
Let's proceed with a more detailed example.

Storing file data in HDFS

Create a new script named webhdfs-write-test.js and add the code below:

// Include created client
var hdfs = require('./webhdfs-client');

// Include fs module for local file system operations
var fs = require('fs');

// Initialize readable stream from local file
// Change this to a real path in your file system
var localFileStream = fs.createReadStream('/path/to/local/file');

// Initialize writable stream to HDFS target
var remoteFileStream = hdfs.createWriteStream('/path/to/remote/file');

// Pipe data to HDFS
localFileStream.pipe(remoteFileStream);

// Handle errors
remoteFileStream.on('error', function onError (err) {
  // Do something with the error
});

// Handle finish event
remoteFileStream.on('finish', function onFinish () {
  // Upload is done
});

Basically, what we are doing here is initializing a readable file stream from the local filesystem and piping its contents seamlessly into a remote HDFS target. webhdfs exposes error and finish events, which we handle above.

Reading file data from HDFS

Let's retrieve the data that we just stored in HDFS. Create a new script named webhdfs-read-test.js and add the code below:

var hdfs = require('./webhdfs-client');
var fs = require('fs');

// Initialize readable stream from HDFS source
var remoteFileStream = hdfs.createReadStream('/path/to/remote/file');

// Variable for storing data; start with an empty buffer
var data = new Buffer(0);

remoteFileStream.on('error', function onError (err) {
  // Do something with the error
});

remoteFileStream.on('data', function onChunk (chunk) {
  // Concat received data chunk
  data = Buffer.concat([ data, chunk ]);
});

remoteFileStream.on('finish', function onFinish () {
  // Download is done
  // Print received data
  console.log(data.toString());
});

What's next?

Now that we have data in the Hadoop cluster, we can start processing it by spawning MapReduce jobs, and when the data is processed, we can retrieve the output. In the second part of this article, I'm going to give you an overview of how Node.js can be used as part of MapReduce jobs.

About the author

Harri is a senior Node.js/JavaScript developer among a talented team of full-stack developers who specialize in building scalable and secure Node.js based solutions. He can be found on GitHub at harrisiirak.

Building Games with HTML5 and Dart

Packt
21 Sep 2015
19 min read
In this article, written by Ivo Balbaert, author of the book Learning Dart - Second Edition, you will learn to create a well-known memory game. You will design a model first and work your way up from a modest beginning to a completely functional game, step by step. You will also learn how to enhance the attractiveness of web games with audio and video techniques. The following topics will be covered in this article:

The model for the memory game
Spiral 1 – drawing the board
Spiral 2 – drawing cells
Spiral 3 – coloring the cells
Spiral 4 – implementing the rules
Spiral 5 – game logic (bringing in the time element)
Spiral 6 – some finishing touches
Spiral 7 – using images

The model for the memory game

When started, the game presents a board with square cells. Every cell hides an image that can be seen by clicking on the cell, but the image disappears quickly. You must remember where the images are, because they come in pairs. If you quickly click on two cells that hide the same picture, the cells will "flip over" and the pictures will stay visible. The objective of the game is to turn over all the pairs of matching images in a very short time.

After some thinking, we came up with the following model, which describes the data handled by the application. In our game, we have a number of pictures, which could belong to a Catalog. For example, a travel catalog with a collection of photos from our trips, or something similar. Furthermore, we have a collection of cells, and each cell hides a picture. Also, we have a structure that we will call Memory, which contains the cells in a grid of rows and columns. You can import the model from the game_memory_json.txt file that contains its JSON representation. (Figure: a conceptual model of the memory game.)

The Catalog ID is its name, which is mandatory, but the description is optional. The Picture ID consists of the sequence number within the Catalog. The imageUri field stores the location of the image file. width and height are optional properties, since they may be derived from the image file. The size may be small, medium, or large, to help select an image. The ID of a Memory is its name within the Catalog; the collection of cells is determined by the memory length, for example, 4 cells per side. Each cell is of the same length, cellLength, which is a property of the memory. A memory is recalled when all the image pairs are discovered. Some statistics must be kept, such as the recall count, the best recall time in seconds, and the number of cell clicks needed to recover the whole image (minTryCount). The Cell has the row and column coordinates, and also the coordinates of its twin with the same image. Once the model is discussed and improved, model views may be created: a Board would be a view of the Memory concept, and a Box would be a view of the Cell concept. The application would be based on the Catalog concept. If there is no need to browse the photos of a catalog and display them within a page, there would be no corresponding view. Now, we can start developing this game from scratch.
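It may help to see the conceptual model as code before we start. The following Dart sketch is one possible translation of the Catalog and Picture concepts described above; it is an illustration only, since the article itself implements only the Memory and Cell classes:

class Catalog {
  String id; // the catalog's name, mandatory
  String description; // optional
  List<Picture> pictures = [];

  Catalog(this.id, [this.description]);
}

class Picture {
  int id; // sequence number within the catalog
  String imageUri; // location of the image file
  int width, height; // optional, derivable from the file
  String size; // 'small', 'medium' or 'large'

  Picture(this.id, this.imageUri);
}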
Spiral 1 – drawing the board

The app starts with main() in educ_memory_game.dart:

library memory;

import 'dart:html';

part 'board.dart';

void main() {
  // Get a reference to the canvas.
  CanvasElement canvas = querySelector('#canvas'); (1)
  new Board(canvas); (2)
}

We'll draw the board on a canvas element, so we need the reference obtained in line (1). The Board view is represented in code by its own Board class in the board.dart file. Since everything happens on this board, we construct its object with canvas as an argument (line (2)). Our game board will be periodically drawn as a rectangle in line (4), by using the animationFrame method from the Window class in line (3):

part of memory;

class Board {
  CanvasElement canvas;
  CanvasRenderingContext2D context;
  num width, height;

  Board(this.canvas) {
    context = canvas.getContext('2d');
    width = canvas.width;
    height = canvas.height;
    window.animationFrame.then(gameLoop); (3)
  }

  void gameLoop(num delta) {
    draw();
    window.animationFrame.then(gameLoop);
  }

  void draw() {
    clear();
    border();
  }

  void clear() {
    context.clearRect(0, 0, width, height);
  }

  void border() {
    context..rect(0, 0, width, height)..stroke(); (4)
  }
}

This is our first result: the game board.

Spiral 2 – drawing cells

In this spiral, we will give our app code some structure: Board is a view, so board.dart is moved to the view folder. We will also introduce the Memory class from our model, in its own memory.dart code file in the model folder. So, we will have to change the part statements to the following:

part 'model/memory.dart';
part 'view/board.dart';

The Board view needs to know about Memory. So, we will include it in the Board class and construct its object in the Board constructor:

new Board(canvas, new Memory(4));

The Memory class is still very rudimentary, with only its length property:

class Memory {
  num length;
  Memory(this.length);
}

Our Board class now also needs a method to draw the lines, which we decided to make private because it is specific to Board, as are the clear() and border() methods:

void draw() {
  _clear();
  _border();
  _lines();
}

The _lines method is quite straightforward; first draw it on a piece of paper and translate it to code using moveTo and lineTo. Remember that x runs from the top-left corner to the right, and y runs from the top-left corner to the bottom:

void _lines() {
  var gap = height / memory.length;
  var x, y;
  for (var i = 1; i < memory.length; i++) {
    x = gap * i;
    y = x;
    context
      ..moveTo(x, 0)
      ..lineTo(x, height)
      ..moveTo(0, y)
      ..lineTo(width, y);
  }
}

The result is a nice grid: a board with cells.

Spiral 3 – coloring the cells

To simplify, we will start by using colors instead of pictures in the grid. Up until now, we hadn't implemented the cell from the model. Let's do that in model/cell.dart. We start simple by saying that the Cell class has row, column, and color properties, and that it belongs to a Memory object passed in its constructor:

class Cell {
  int row, column;
  String color;
  Memory memory;
  Cell(this.memory, this.row, this.column);
}

Because we need a collection of cells, it is a good idea to make a Cells class, which contains a List. We give it an add method and also an iterator, so that we are able to use a for…in statement to loop over the collection:

class Cells {
  List _list;
  Cells() {
    _list = new List();
  }

  void add(Cell cell) {
    _list.add(cell);
  }

  Iterator get iterator => _list.iterator;
}

We will need colors that are randomly assigned to the cells, along with some utility variables and methods that do not specifically belong to the model and don't need a class. Hence, we will code them in a folder called util. To specify the colors for the cells, we will use two utility variables: a colorList variable with the color names, and a colorMap variable that maps the names to their RGB values.
Refer to util/color.dart; later on, we can choose some fancier colors:

var colorList = ['black', 'blue', //other colors
];

var colorMap = {'black': '#000000', 'blue': '#0000ff', //...
};

To generate (pseudo) random values (ints, doubles, or Booleans), Dart has the Random class from dart:math. We will use the nextInt method, which takes an integer (the maximum value) and returns a positive random integer in the range from 0 (inclusive) to max (exclusive). We build upon this in util/random.dart to make methods that give us a random color:

int randomInt(int max) => new Random().nextInt(max);

randomListElement(List list) => list[randomInt(list.length)];

String randomColor() => randomListElement(colorList);

String randomColorCode() => colorMap[randomColor()];

Our Memory class now contains an instance of the Cells class:

Cells cells;

We build this in the Memory constructor in a nested for loop, where each cell is successively instantiated with a row and column, given a random color, and added to cells:

Memory(this.length) {
  cells = new Cells();
  var cell;
  for (var x = 0; x < length; x++) {
    for (var y = 0; y < length; y++) {
      cell = new Cell(this, x, y);
      cell.color = randomColor();
      cells.add(cell);
    }
  }
}

We can draw a rectangle and fill it with a color at the same time, so we realize that we don't need to draw lines as we did in the previous spiral! The _boxes method is called from the draw animation: with a for…in statement, we loop over the collection of cells and call the _colorBox method, which draws and colors each cell:

void _boxes() {
  for (Cell cell in memory.cells) {
    _colorBox(cell);
  }
}

void _colorBox(Cell cell) {
  var gap = height / memory.length;
  var x = cell.row * gap;
  var y = cell.column * gap;
  context
    ..beginPath()
    ..fillStyle = colorMap[cell.color]
    ..rect(x, y, gap, gap)
    ..fill()
    ..stroke()
    ..closePath();
}

Spiral 4 – implementing the rules

However, wait! Our game can only work if the same color appears in exactly two cells: a cell and its twin cell. Moreover, a cell can be hidden or not: can the color be seen or not? To take care of this, the Cell class gets two new attributes:

Cell twin;
bool hidden = true;

The _colorBox method in the Board class can now show the color of the cell when hidden is false (line (2)); when hidden is true (the default state), a neutral gray color will be used for the cell (line (1)):

static const String COLOR_CODE = '#f0f0f0';

We also gave the gap variable a better name, boxSize:

void _colorBox(Cell cell) {
  var x = cell.column * boxSize;
  var y = cell.row * boxSize;
  context.beginPath();
  if (cell.hidden) {
    context.fillStyle = COLOR_CODE; (1)
  } else {
    context.fillStyle = colorMap[cell.color]; (2)
  }
  // same code as in Spiral 3
}

Lines (1) and (2) could also be stated more succinctly with the ? ternary operator. Remember that the drawing changes because the _colorBox method is called via draw at 60 frames per second, and the board can react to a mouse click. In this spiral, we will show a cell when it is clicked, together with its twin cell, and then they will stay visible. Attaching an event handler for this is easy. We add the following line to the Board constructor:

querySelector('#canvas').onMouseDown.listen(onMouseDown);

The onMouseDown event handler has to know on which cell the click occurred. The mouse event e contains the coordinates of the click in its e.offset.x and e.offset.y properties (lines (3) and (4)).
We will obtain the cell's row and column by using the truncating division operator ~/, dividing the y value (which gives the row) and the x value (which gives the column) by boxSize:

void onMouseDown(MouseEvent e) {
  int row = e.offset.y ~/ boxSize; (3)
  int column = e.offset.x ~/ boxSize; (4)
  Cell cell = memory.getCell(row, column); (5)
  cell.hidden = false; (6)
  cell.twin.hidden = false; (7)
}

Memory has a collection of cells. To get the cell with a specified row and column value, we add a getCell method to Memory and call it in line (5). When we have the cell, we set its hidden property and that of its twin cell to false (lines (6) and (7)). The getCell method must return the cell at the given row and column. It loops through all the cells in line (8) and checks whether each cell is positioned at that row and column (line (9)). If yes, it returns that cell:

Cell getCell(int row, int column) {
  for (Cell cell in cells) { (8)
    if (cell.intersects(row, column)) { (9)
      return cell;
    }
  }
}

For this purpose, we add an intersects method to the Cell class. This checks whether the cell's row and column match the given row and column (see line (10)):

bool intersects(int row, int column) {
  if (this.row == row && this.column == column) { (10)
    return true;
  }
  return false;
}

Now, we have already added a lot of functionality, but the drawing of the board needs some more thinking: how do we give a cell (and its twin cell) a random color that is not yet used, and how do we randomly attach a cell to a twin cell that is not yet used? To this end, we have to make the constructor of Memory a lot more intelligent:

Memory(this.length) {
  if (length.isOdd) { (1)
    throw new Exception(
        'Memory length must be an even integer: $length.');
  }
  cells = new Cells();
  var cell, twinCell;
  for (var x = 0; x < length; x++) {
    for (var y = 0; y < length; y++) {
      cell = getCell(y, x); (2)
      if (cell == null) { (3)
        cell = new Cell(this, y, x);
        cell.color = _getFreeRandomColor(); (4)
        cells.add(cell);
        twinCell = _getFreeRandomCell(); (5)
        cell.twin = twinCell; (6)
        twinCell.twin = cell;
        twinCell.color = cell.color;
        cells.add(twinCell);
      }
    }
  }
}

The number of pairs, given by ((length * length) / 2), must be a whole number. This is only true if the length parameter of Memory itself is even, so we check it in line (1). Again, we code a nested loop and get the cell at each row and column (line (2)). If the cell at that position has not yet been made (line (3)), we construct it and assign its color and twin. In line (4), we call _getFreeRandomColor to get a color that is not yet used:

String _getFreeRandomColor() {
  var color;
  do {
    color = randomColor();
  } while (usedColors.any((c) => c == color)); (7)
  usedColors.add(color); (8)
  return color;
}

The do…while loop continues as long as the color is already in the list of usedColors. On exiting the loop, we have found an unused color, which is added to usedColors in line (8) and also returned. We then have to set everything for the twin cell. We search for a free one with the _getFreeRandomCell method in line (5). Here, the do…while loop continues until a (row, column) position is found where cell == null, meaning that we haven't yet created a cell there (line (9)).
We will promptly do this in line (10):

Cell _getFreeRandomCell() {
  var row, column;
  Cell cell;
  do {
    row = randomInt(length);
    column = randomInt(length);
    cell = getCell(row, column);
  } while (cell != null); (9)
  return new Cell(this, row, column); (10)
}

From line (6) onwards, the properties of the twin cell are set, and the twin is added to the list. This is all we need to produce the following result: paired colored cells.

Spiral 5 – game logic (bringing in the time element)

Our app isn't playable yet:

When a cell is clicked, its color must only show for a short period of time (say, one second)
When a cell and its twin cell are clicked within a certain time interval, they must remain visible

All of this is coded in the mouseDown event handler, and we also need a lastCellClicked variable of the Cell type in the Board class. Of course, this is exactly the cell we get in the mouseDown event handler, so we set it in line (5) of the following code snippet:

void onMouseDown(MouseEvent e) {
  // same code as in Spiral 4 -
  if (cell.twin == lastCellClicked && lastCellClicked.shown) { (1)
    lastCellClicked.hidden = false; (2)
    if (memory.recalled) memory.hide(); (3)
  } else {
    new Timer(const Duration(milliseconds: 1000),
        () => cell.hidden = true); (4)
  }
  lastCellClicked = cell; (5)
}

In line (1), we check whether the last clicked cell was the twin cell and whether it is still shown. Then, we make sure in line (2) that it stays visible. shown is a new getter in the Cell class to make the code more readable: bool get shown => !hidden;. If at that moment all the cells are shown (the memory is recalled), we hide them again in line (3). If the last clicked cell was not the twin cell, we hide the current cell after one second in line (4). recalled is a simple getter (read-only property) in the Memory class, and it makes use of a Boolean variable in Memory that is initialized to false (_recalled = false;):

bool get recalled {
  if (!_recalled) {
    if (cells.every((c) => c.shown)) { (6)
      _recalled = true;
    }
  }
  return _recalled;
}

In line (6), we test whether every cell is shown; if so, this variable is set to true (the game is over). every is a new method in the Cells class, and a nice functional way to write it is as follows:

bool every(Function f) => _list.every(f);

The hide method is straightforward: hide every cell and reset the _recalled variable to false:

hide() {
  for (final cell in cells) cell.hidden = true;
  _recalled = false;
}

This is it, our game works!

Spiral 6 – some finishing touches

A working program always gives its developer a sense of joy, and rightfully so. However, that doesn't mean you can leave the code as it is. On the contrary, carefully review your code for some time to see whether there is room for improvement or optimization. For example, are the names you used clear enough? The color of a hidden cell is now named simply COLOR_CODE in board.dart; renaming it to HIDDEN_CELL_COLOR_CODE makes its meaning explicit. The List object used in the Cells class can be declared as List<Cell>, applying the fact that Dart lists are generic. The parameter of the every method in the Cells class can be made more precise: it is a function that accepts a cell and returns a bool. Our onMouseDown event handler contains our game logic, so it is very important to tune it if possible.
After some thought, we see that the code from the previous spiral can be improved; in the following line, the second condition after && is, in fact, unnecessary:

if (cell.twin == lastCellClicked && lastCellClicked.shown) {...}

When the player has guessed everything correctly, showing the completed screen for a few seconds is more satisfactory (line (2)). So, this portion of our event handler code changes to:

if (cell.twin == lastCellClicked) { (1)
  lastCellClicked.hidden = false;
  if (memory.recalled) { // game over
    new Timer(const Duration(milliseconds: 5000),
        () => memory.hide()); (2)
  }
} else if (cell.twin.hidden) {
  new Timer(const Duration(milliseconds: 800),
      () => cell.hidden = true);
}

Why not show a "YOU HAVE WON!" banner? We will do this by drawing the text on the canvas (line (3)), so we must do it in the draw() method (otherwise, it would disappear after INTERVAL milliseconds):

void draw() {
  _clear();
  _boxes();
  if (memory.recalled) { // game over
    context.font = "bold 25px sans-serif";
    context.fillStyle = "red";
    context.fillText("YOU HAVE WON !", boxSize, boxSize * 2); (3)
  }
}

Then, the same game with the same configuration can be played again. We can make it more obvious that a cell is hidden by decorating it with a small circle in the _colorBox method (line (4)):

if (cell.hidden) {
  context.fillStyle = HIDDEN_CELL_COLOR_CODE;
  var centerX = cell.column * boxSize + boxSize / 2;
  var centerY = cell.row * boxSize + boxSize / 2;
  var radius = 4;
  context.arc(centerX, centerY, radius, 0, 2 * PI, false); (4)
}

We do want to give our player a chance to start over by supplying a Play again button. The easiest way is to simply refresh the screen (line (5)) by adding this code to the startup script:

void main() {
  canvas = querySelector('#canvas');
  ButtonElement play = querySelector('#play');
  play.onClick.listen(playAgain);
  new Board(canvas, new Memory(4));
}

playAgain(Event e) {
  window.location.reload(); (5)
}

Spiral 7 – using images

One improvement that certainly comes to mind is the use of pictures instead of colors, as shown in the Using images screenshot. How difficult would that be? It turns out to be surprisingly easy, because we already have the game logic firmly in place!

In the images folder, we supply a number of game pictures. Instead of the color property, we give the cell a String property named image, which will contain the name of the picture file. We then replace util/color.dart with util/images.dart, which contains an imageList variable with the image filenames. In util/random.dart, we replace the color methods with the following code:

String randomImage() => randomListElement(imageList);

The changes to memory.dart are also straightforward: replace the usedColors list with List usedImages = []; and the _getFreeRandomColor method with _getFreeRandomImage, which uses the new list and method:

List usedImages = [];

String _getFreeRandomImage() {
  var image;
  do {
    image = randomImage();
  } while (usedImages.any((i) => i == image));
  usedImages.add(image);
  return image;
}

In board.dart, we replace _colorBox(cell) with _imageBox(cell). The only new thing is how to draw the image on the canvas. For this, we need ImageElement objects. Here, we have to be careful to create these objects only once, and not over and over again in every draw cycle, because that produces a flickering screen.
We will store the ImageElement objects in a Map:

var imageMap = new Map<String, ImageElement>();

Then, we populate the map in the Board constructor with a for…in loop over memory.cells:

for (var cell in memory.cells) {
  ImageElement image = new Element.tag('img'); (1)
  image.src = 'images/${cell.image}'; (2)
  imageMap[cell.image] = image; (3)
}

We create a new ImageElement object in line (1), giving it the complete file path to the image file as its src property in line (2), and store it in imageMap in line (3). Each image file is then loaded into memory only once, so the images are effectively cached and no unnecessary network access occurs. In the draw cycle, we load the image from imageMap and draw it in the current cell with the drawImage method in line (4):

if (cell.hidden) {
  // see previous code
} else {
  ImageElement image = imageMap[cell.image];
  context.drawImage(image, x, y); // resize to cell size (4)
}

Perhaps you can think of other improvements? Why not let the player specify the game difficulty by asking for the number of boxes (it is 16 now)? Check whether the input is the square of an even number. Do you have enough colors to choose from? Perhaps dynamically building a list with enough random colors would be a better idea. Calculating and storing the statistics discussed in the model would also make the game more attractive. Another enhancement from the model would be to support different catalogs of pictures. Go ahead and exercise your Dart skills!

Summary

By thoroughly investigating two games that apply everything we have covered so far, your Dart star begins to shine. For other Dart games, visit http://www.builtwithdart.com/projects/games/. You can find more information on building games at http://www.dartgamedevs.org/.

Resources for Article:

Further resources on this subject:

Slideshow Presentations [article]
Dart with JavaScript [article]
Practical Dart [article]

How to Build and Deploy a Node App with Docker

John Oerter
20 Sep 2016
7 min read
How many times have you deployed your app that was working perfectly in your local environment to production, only to see it break? Whether it was directly related to the bug or feature you were working on, or another random issue entirely, this happens all too often for most developers. Errors like this not only slow you down, but they're also embarrassing.

Why does this happen? Usually, it's because your development environment on your local machine is different from the production environment you're deploying to. The tenth factor of the Twelve-Factor App is Dev/prod parity. This means that your development, staging, and production environments should be as similar as possible. The authors of the Twelve-Factor App spell out three "gaps" that can be present. They are:

The time gap: A developer may work on code that takes days, weeks, or even months to go into production.
The personnel gap: Developers write code, ops engineers deploy it.
The tools gap: Developers may be using a stack like Nginx, SQLite, and OS X, while the production deployment uses Apache, MySQL, and Linux. (Source)

In this post, we will mostly focus on the tools gap, and how to bridge that gap in a Node application with Docker.

The Tools Gap

In the Node ecosystem, the tools gap usually manifests itself either in differences in Node and npm versions, or in differences in package dependency versions. If a package author publishes a breaking change in one of your dependencies or your dependencies' dependencies, it is entirely possible that your app will break on the next deployment (assuming you reinstall dependencies with npm install on every deployment), while it runs perfectly on your local machine. Although you can work around this issue using tools like npm shrinkwrap, adding Docker to the mix will streamline your deployment life cycle and minimize broken deployments to production.

Why Docker?

Docker is unique because it can be used the same way in development and production. When you enable the architecture of your app to run inside containers, you can easily scale out and create small containers that can be composed together to make one awesome system. Then, you can mimic this architecture in development so you never have to guess how your app will behave in production. In regards to the time gap and the personnel gap, Docker makes it easier for developers to automate deployments, thereby decreasing time to production and making it easier for full-stack teams to own deployments.

Tools and Concepts

When developing inside Docker containers, the two most important concepts are docker-compose and volumes. docker-compose helps define multi-container environments and gives you the ability to run them with one command. Here are some of the more often used docker-compose commands:

docker-compose build: Builds images for services defined in docker-compose.yml
docker-compose up: Creates and starts services. This is the same as running docker-compose create && docker-compose start
docker-compose run: Runs a one-off command inside a container

Volumes allow you to mount files from the host machine into the container. When the files on your host machine change, they change inside the container as well. This is important so that we don't have to constantly rebuild containers during development every time we make a change. You can also use a tool like nodemon to automatically restart the node app on changes.
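For instance, during development you might point the service's command at nodemon instead of node. The following sketch is not part of the original article; it assumes nodemon is listed in your devDependencies, so that it ends up in the container's node_modules, and it relies on docker-compose automatically merging a docker-compose.override.yml file on docker-compose up:

# docker-compose.override.yml (development only, illustrative)
web:
  build: .
  command: ./node_modules/.bin/nodemon index.js
  volumes:
    - .:/home/app-user/app
    - /home/app-user/app/node_modules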
Let's walk through some tips and tricks for developing Node apps inside Docker containers.

Set up Dockerfile and docker-compose.yml

When you start a new project with Docker, you'll first want to define a barebones Dockerfile and docker-compose.yml to get you started. Here's an example Dockerfile:

FROM node:6.2.1

RUN useradd --user-group --create-home --shell /bin/false app-user

ENV HOME=/home/app-user

USER app-user
WORKDIR $HOME/app

This Dockerfile displays two best practices:

Favor exact version tags over floating tags such as latest. New Node versions are released often these days, and you don't want to implicitly upgrade when building your container on another machine. By specifying a version such as 6.2.1, you ensure that anyone who builds the image will always be working from the same Node version.
Create a new user to run the app inside the container. Without this step, everything would run as root in the container. You certainly wouldn't do that on a physical machine, so don't do it in Docker containers either.

Here's an example starter docker-compose.yml:

web:
  build: .
  volumes:
    - .:/home/app-user/app

Pretty simple, right? Here we are telling Docker to build the web service based on our Dockerfile and create a volume from our current host directory to /home/app-user/app inside the container. This simple setup lets you build the container with docker-compose build and then run bash inside it with docker-compose run --rm web /bin/bash. Now, it's essentially the same as if you were SSH'd into a remote server or working off a VM, except that any file you create inside the container will be on your host machine and vice versa.

With that in mind, you can bootstrap your Node app from inside your container using npm init -y and npm shrinkwrap. Then, you can install any modules you need, such as Express.

Install node modules on build

With that done, we need to update our Dockerfile to install dependencies from npm when the image is built. Here is the updated Dockerfile:

FROM node:6.2.1

RUN useradd --user-group --create-home --shell /bin/false app-user

ENV HOME=/home/app-user

COPY package.json npm-shrinkwrap.json $HOME/app/
RUN chown -R app-user:app-user $HOME/*

USER app-user
WORKDIR $HOME/app
RUN npm install

Notice that we had to change the ownership of the copied files to app-user. This is because files copied into a container are automatically owned by root.

Add a volume for the node_modules directory

We also need to make an update to our docker-compose.yml to make sure that our modules are installed inside the container properly:

web:
  build: .
  volumes:
    - .:/home/app-user/app
    - /home/app-user/app/node_modules

Without adding a data volume for /home/app-user/app/node_modules, the node_modules directory wouldn't exist at runtime in the container, because our host directory, which won't contain node_modules, would be mounted over it and hide the node_modules directory that was created when the container was built. For more information, see this Stack Overflow post.

Running your app

Once you've got an entry point to your app ready to go, simply add it as a CMD in your Dockerfile:

CMD ["node", "index.js"]

This will automatically start your app on docker-compose up.

Running tests inside your container is easy as well:

docker-compose run --rm web npm test

You could easily hook this into CI.

Production

Now going to production with your Docker-powered Node app is a breeze! Just use docker-compose again. You will probably want to define another docker-compose.yml that is written especially for production use. This means removing volumes, binding to different ports, setting NODE_ENV=production, and so on.
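The article doesn't show the production file itself, so here is a hedged sketch of what a docker-compose.production.yml might contain under those guidelines; the port mapping and restart policy are assumptions you should adjust to your setup:

# docker-compose.production.yml (illustrative)
web:
  build: .
  ports:
    - '80:3000'
  environment:
    - NODE_ENV=production
  restart: always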
Once you have a production config file, you can tell docker-compose to use it, like so:

docker-compose -f docker-compose.yml -f docker-compose.production.yml up

The -f flag lets you specify a list of files that are merged in the order specified.

Here is a complete Dockerfile and docker-compose.yml for reference:

# Dockerfile
FROM node:6.2.1

RUN useradd --user-group --create-home --shell /bin/false app-user

ENV HOME=/home/app-user

COPY package.json npm-shrinkwrap.json $HOME/app/
RUN chown -R app-user:app-user $HOME/*

USER app-user
WORKDIR $HOME/app
RUN npm install

CMD ["node", "index.js"]

# docker-compose.yml
web:
  build: .
  ports:
    - '3000:3000'
  volumes:
    - .:/home/app-user/app
    - /home/app-user/app/node_modules

About the author

John Oerter is a software engineer from Omaha, Nebraska, USA. He has a passion for continuous improvement and learning in all areas of software development, including Docker, JavaScript, and C#. He blogs here.