Search icon CANCEL
Subscription
0
Cart icon
Your Cart (0 item)
Close icon
You have no products in your basket yet
Save more on your purchases! discount-offer-chevron-icon
Savings automatically calculated. No voucher code required.
Arrow left icon
Explore Products
Best Sellers
New Releases
Books
Videos
Audiobooks
Learning Hub
Newsletter Hub
Free Learning
Arrow right icon
timer SALE ENDS IN
0 Days
:
00 Hours
:
00 Minutes
:
00 Seconds

How-To Tutorials - Web Development

1802 Articles
article-image-features-sitecore
Packt
25 Apr 2016
17 min read
Save for later

Features of Sitecore

Packt
25 Apr 2016
17 min read
In this article by Yogesh Patel, the author of the book, Sitecore Cookbook for Developers, we will discuss about the importance of Sitecore and its good features. (For more resources related to this topic, see here.) Why Sitecore? Sitecore Experience Platform (XP) is not only an enterprise-level content management system (CMS), but rather a web framework or web platform, which is the global leader in experience management. It continues to be very popular because of its highly scalable and robust architecture, continuous innovations, and ease of implementations compared to other CMSs available. It also provides an easier integration with many external platforms such as customer relationship management (CRM), e-commerce, and so on. Sitecore architecture is built with the Microsoft .NET framework and provides greater depth of APIs, flexibility, scalability, performance, and power to developers. It has great out-of-the-box capabilities, but one of its great strengths is the ease of extending these capabilities; hence, developers love Sitecore! Sitecore provides many features and functionalities out of the box to help content owners and marketing teams. These features can be extended and highly customized to meet the needs of your unique business rules. Sitecore provides these features with different user-friendly interfaces for content owners that helps them manage content and media easily and quickly. Sitecore user interfaces are supported on almost every modern browser. In addition, fully customized web applications can be layered in and integrated with other modules and tools using Sitecore as the core platform. It helps marketers to optimize the flow of content continuously for better results and more valuable outcomes. It also provides in-depth analytics, personalized experience to end users, and marketing automation tools, which play a significant role for marketing teams. The following are a few of the many features of Sitecore. CMS based on the .NET Framework Sitecore provides building components on ASP.NET Web forms as well as ASP.NET Model-View-Controller (MVC) frameworks, so developers can choose either approach to match the required architecture. Sitecore provides web controls and sublayouts while working with ASP.NET web forms and view rendering, controller rendering, and models and item rendering while working with the ASP.NET MVC framework. Sitecore also provides two frameworks to prepare user interface (UI) applications for Sitecore clients—Sheer UI and SPEAK. Sheer UI applications are prepared using Extensible Application Markup Language (XAML) and most of the Sitecore applications are prepared using Sheer UI. Sitecore Process Enablement and Accelerator Kit (SPEAK) is the latest framework to develop Sitecore applications with a consistent interface quickly and easily. SPEAK gives you a predefined set of page layouts and components: Component-based architecture Sitecore is built on a component-based architecture, which provides us with loosely coupled independent components. The main advantage of these components is their reusability and loosely coupled independent behaviour. It aims to provide reusability of components at the page level, site level, and Sitecore instance level to support multisite or multitenant sites. Components in Sitecore are built with the normal layered approach, where the components are split into layers such as presentation, business logic, data layer, and so on. Sitecore provides different presenation components, including layouts, sublayouts, web control renderings, MVC renderings, and placeholders. Sitecore manages different components in logical grouping by their templates, layouts, sublayouts, renderings, devices, media, content items, and so on: Layout engine The Sitecore layout engine extends the ASP.NET web application server to merge content with presentation logic dynamically when web clients request resources. A layout can be a web form page (.aspx) or MVC view (.cshtml) file. A layout can have multiple placeholders to place content on predefined places, where the controls are placed. Controls can be HTML markup controls such as a sublayout (.ascx) file, MVC view (.cshtml) file, or other renderings such as web control, controller rendering, and so on, which can contain business logic. Once the request criteria are resolved by the layout engine, such as item, language, and device, the layout engine creates a platform to render different controls and assemble their output to relevant placeholders on the layout. Layout engine provides both static and dynamic binding. So, with dynamic binding, we can have clean HTML markups and reusability of all the controls or components. Binding of controls, layouts, and devices can be applied on Sitecore content items itself, as shown in the following screenshot: Once the layout engine renders the page, you can see how the controls will be bound to the layout, as shown in the following image: The layout engine in Sitecore is reponsible for layout rendering, device detection, rule engine, and personalization: Multilingual support In Sitecore, content can be maintained in any number of languages. It provides easier integration with external translation providers for seamless translation and also supports the dynamic creation of multilingual web pages. Sitecore also supports the language fallback feature on the field, item, and template level, which makes life easier for content owners and developers. It also supports chained fallback. Multi-device support Devices represent different types of web clients that connect to the Internet and place HTTP requests. Each device represents a different type of web client. Each device can have unique markup requirements. As we saw, the layout engine applies the presentation components specified for the context device to the layout details of the context item. In the same way, developers can use devices to format the context item output using different collections of presentation components for various types of web clients. Dynamically assembled content can be transformed to conform to virtually any output format, such as a mobile, tablet, desktop, print, or RSS. Sitecore also supports the device fallback feature so that any web page not supported for the requesting device can still be served through the fallback device. It also supports chained fallback for devices. Multi-site capabilities There are many ways to manage multisites on a single Sitecore installation. For example, you can host multiple regional domains with different regional languages as the default language for a single site. For example, http://www.sitecorecookbook.com will serve English content, http://www.sitecorecookbook.de will serve German content of the same website, and so on. Another way is to create multiple websites for different subsidiaries or franchise of a company. In this approach, you can share some common resources across all the sites such as templates, renderings, user interface elements, and other content or media items, but have unique content and pages so that you can find a separate existence of each website in Sitecore. Sitecore has security capabilities so that each franchise or subsidiary can manage their own website independently without affecting other websites. Developers have full flexibility to re-architect Sitecore's multisite architecture as per business needs. Sitecore also supports multitenant multisite architecture so that each website can work as an individual physical website. Caching Caching plays a very important role in website performance. Sitecore contains multiple levels of caching such as prefetch cache, data cache, item cache, and HTML cache. Apart from this, Sitecore creates different caching such as standard values cache, filtered item cache, registry cache, media cache, user cache, proxy cache, AccessResult cache, and so on. This makes understanding all the Sitecore caches really important. Sitecore caching is a very vast topic to cover; you can read more about it at http://sitecoreblog.patelyogesh.in/2013/06/how-sitecore-caching-work.html. Configuration factory Sitecore is configured using IIS's configuration file, Web.config. Sitecore configuration factory allows you to configure pipelines, events, scheduling agents, commands, settings, properties, and configuration nodes in Web.config files, which can be defined in the /configuration/sitecore path. Configurations inside this path can be spread out between multiple files to make it scalable. This process is often called config patching. Instead of touching the Web.config file, Sitecore provides the Sitecore.config file in the App_ConfigInclude directory, which contains all the important Sitecore configurations. Functionality-specific configurations are split into the number of .config files based, which you can find in its subdirectories. These .config files are merged into a single configuration file at runtime, which you can evaluate using http://<domain>/sitecore/admin/showconfig.aspx. Thus, developers create custom .config files in the App_ConfigInclude directory to introduce, override, or delete settings, properties, configuration nodes, and attributes without touching Sitecore's default .config files. This makes managing .config files very easy from development to deployment. You can learn more about file patching from https://sdn.sitecore.net/upload/sitecore6/60/include_file_patching_facilities_sc6orlater-a4.pdf. Dependency injection in .NET has become very common nowadays. If you want to build a generic and reusable functionality, you will surely go for the inversion of control (IoC) framework. Fortunately, Sitecore provides a solution that will allow you to easily use different IoC frameworks between projects. Using patch files, Sitecore allows you to define objects that will be available at runtime. These nodes are defined under /configuration/sitecore and can be retrieved using the Sitecore API. We can define types, constructors, methods, properties, and their input parameters in logical nodes inside nodes of pipelines, events, scheduling agents, and so on. You can learn more examples of it from http://sitecore-community.github.io/docs/documentation/Sitecore%20Fundamentals/Sitecore%20Configuration%20Factory/. Pipelines An operation to be performed in multiple steps can be carried out using the pipeline system, where each individual step is defined as a processor. Data processed from one processor is then carried to the next processor in arguments. The flow of the pipeline can be defined in XML format in the .config files. You can find default pipelines in the Sitecore.config file or patch file under the <pipelines> node (which are system processes) and the <processors> node (which are UI processes). The following image visualizes the pipeline and processors concept: Each processor in a pipeline contains a method named Process() that accepts a single argument, Sitecore.Pipelines.PipelineArgs, to get different argument values and returns void. A processor can abort the pipeline, preventing Sitecore from invoking subsequent processors. A page request traverses through different pipelines such as <preProcessRequest>, <httpRequestBegin>, <renderLayout>, <httpRequestEnd>, and so on. The <httpRequestBegin> pipeline is the heart of the Sitecore HTTP request execution process. It defines different processors to resolve the site, device, language, item, layout, and so on sequentially, which you can find in Sitecore.config as follows: <httpRequestBegin>   ...   <processor type="Sitecore.Pipelines.HttpRequest.SiteResolver,     Sitecore.Kernel"/>   <processor type="Sitecore.Pipelines.HttpRequest.UserResolver,     Sitecore.Kernel"/>   <processor type="     Sitecore.Pipelines.HttpRequest.DatabaseResolver,     Sitecore.Kernel"/>   <processor type="     Sitecore.Pipelines.HttpRequest.BeginDiagnostics,     Sitecore.Kernel"/>   <processor type="     Sitecore.Pipelines.HttpRequest.DeviceResolver,     Sitecore.Kernel"/>   <processor type="     Sitecore.Pipelines.HttpRequest.LanguageResolver,     Sitecore.Kernel"/>   ... </httpRequestBegin> There are more than a hundred pipelines, and the list goes on increasing after every new version release. Sitecore also allows us to create our own pipelines and processors. Background jobs When you need to do some long-running operations such as importing data from external services, sending e-mails to subscribers, resetting content item layout details, and so on, we can use Sitecore jobs, which are asynchronous operations in the backend that you can monitor in a foreground thread (Job Viewer) of Sitecore Rocks or by creating a custom Sitecore application. The jobs can be invoked from the user interface by users or can be scheduled. Sitecore provides APIs to invoke jobs with many different options available. You can simply create and start a job using the following code: public void Run() {   JobOptions options = new JobOptions("Job Name", "Job Category",     "Site Name", "current object", "Task Method to Invoke", new     object[] { rootItem })   {     EnableSecurity = true,     ContextUser = Sitecore.Context.User,     Priority = ThreadPriority.AboveNormal   };   JobManager.Start(options); } You can schedule tasks or jobs by creating scheduling agents in the Sitecore.config file. You can also set their execution frequency. The following example shows you how Sitecore has configured PublishAgent, which publishes a site every 12 hours and simply executes the Run() method of the Sitecore.Tasks.PublishAgent class: <scheduling>   <agent type="Sitecore.Tasks.PublishAgent" method="Run"     interval="12:00:00">     <param desc="source database">master</param>     <param desc="target database">web</param>     <param desc="mode (full or smart or       incremental)">incremental</param>     <param desc="languages">en, da</param>   </agent> </scheduling> Apart from this, Sitecore also provides you with the facility to define scheduled tasks in the database, which has a great advantage of storing tasks in the database, so that we can handle its start and end date and time. We can use it once or make it recurring as well. Workflow and publishing Workflows are essential to the content author experience. Workflows ensure that items move through a predefined set of states before they become publishable. It is necessary to ensure that content receives the appropriate reviews and approvals before publication to the live website. Apart from workflow, Sitecore provides highly configurable security features, access permissions, and versioning. Sitecore also provides full workflow history like when and by whom the content was edited, reviewed, or approved. It also allows you to restrict publishing as well as identify when it is ready to be published. Publishing is an essential part of working in Sitecore. Every time you edit or create new content, you have to publish it to see it on your live website. When publishing happens, the item is copied from the master database to the web database. So, the content of the web database will be shown on the website. When multiple users are working on different content pages or media items, publishing restrictions and workflows play a vital role to make releases, embargoed, or go-live successful. There are three types of publishing available in Sitecore: Republish: This publishes every item even though items are already published. Smart Publish: Sitecore compares the internal revision identifier of the item in the master and web databases. If both identifiers are different, it means that the item is changed in the master database, hence Sitecore will publish the item or skip the item if identifiers are the same. Incremental Publish: Every modified item is added to the publish queue. Once incremental publishing is done, Sitecore will publish all the items found in the publish queue and clear it. Sitecore also supports the publishing of subitems as well as related items (such as publishing a content item will also publish related media items). Search Sitecore comes with out-of-the-box Lucene support. You can also switch your Sitecore search to Solr, which just needs to install Solr and enable Solr configurations already available. Sitecore by default indexes Sitecore content in Lucene index files. The Sitecore search engine lets you search through millions of items of the content tree quickly with the help of different types of queries with Lucene or Solr indexes. Sitecore provides you with the following functionalities for content search: We can search content items and documents such as PDF, Word, and so on. It allows you to search content items based on preconfigured fields. It provides APIs to create and search composite fields as per business needs. It provides content search APIs to sort, filter, and page search results. We can apply wildcards to search complex results and autosuggest. We can apply boosting to influence search results or elevate results by giving more priority. We can create custom dictionaries and index files, using which we can suggest did you mean kind of suggestions to users. We can apply facets to refine search results as we can see on e-commerce sites. W can apply different analyzers to hunt MoreLikeThis results or similar results. We can tag content or media items to categorize them so that we can use features such as a tag cloud. It provides a scalable user interface to search content items and apply filters and operations to selected search results. It provides different indexing strategies to create transparent and diverse models for index maintenance. In short, Sitecore allows us to implement different searching techniques, which are available in Google or other search engines. Content authors always find it difficult while working with a big number of items. You can read more about Sitecore search at https://doc.sitecore.net/sitecore_experience_platform/content_authoring/searching/searching. Security model Sitecore has the reputation of being very easy to set up the security of users, roles, access rights, and so on. Sitecore follows the .NET security model, so we get all the basic information of the .NET membership in Sitecore, which offers several advantages: A variety of plug-and-play features provided directly by Microsoft The option to replace or extend the default configuration with custom providers It is also possible to store the accounts in different storage areas using several providers simultaneously Sitecore provides item-level and field-level rights and an option to create custom rights as well Dynamic user profile structure and role management is possible just through the user interface, which is simpler and easier compared to pure ASP.NET solutions It provides easier implementation for integration with external systems Even after having an extended wrapper on the .NET solution, we get the same performance as a pure ASP.NET solution Experience analytics and personalization Sitecore contains state-of-the-art Analysis, Insights, Decisions, Automation (AIDA) framework, which is the heart for marketing programs. It provides comprehensive analytics data and reports, insights from every website interaction with rules, behavior-based personalization, and marketing automation. Sitecore collects all the visitor interactions in a real-time, big data repository—Experience Database (xDB)—to increase the availability, scalability, and performance of website. Sitecore Marketing Foundation provides the following features: Sitecore uses MongoDB, a big marketing data repository that collects all customer interactions. It provides real-time data to marketers to automate interactions across all channels. It provides a unified 360 degree view of the individual website visitors and in-depth analytics reports. It provides fundamental analytics measurement components such as goals and events to evaluate the effectiveness of online business and marketing campaigns. It provides comprehensive conditions and actions to achieve conditional and behavioral or predictive personalization, which helps show customers what they are looking for instead of forcing them to see what we want to show. Sitecore collects, evaluates, and processes Omnichannel visitor behavioral patterns, which helps better planned effective marketing campaigns and improved user experience. Sitecore provides an engagement plan to control how your website interacts with visitors. It helps nurture relationships with your visitors by adapting personalized communication based on which state they are falling. Sitecore provides an in-depth geolocation service, helpful in optimizing campaigns through segmentation, personalization, and profiling strategies. The Sitecore Device Detection service is helpful in personalizing user experience or promotions based on the device they use. It provides different dimensions and reports to reflect data on full taxonomy provided in Marketing Control Panel. It provides different charting controls to get smart reports. It has full flexibility for developers to customize or extend all these features. High performance and scalability Sitecore supports heavy content management and content delivery usage with a large volume of data. Sitecore is architected for high performance and unlimited scalability. Sitecore cache engine provides caching on the raw data as well as rendered output data, which gives a high-performance platform. Sitecore uses the event queue concept for scalability. Theoretically, it makes Sitecore scalable to any number of instances under a load balancer. Summary In this article, we discussed about the importance of Sitecore and its good features. We also saw that Sitecore XP is not only an enterprise-level CMS, but also a web platform, which is the global leader in experience management. Resources for Article: Further resources on this subject: Building a Recommendation Engine with Spark [article] Configuring a MySQL linked server on SQL Server 2008 [article] Features and utilities in SQL Developer Data Modeler [article]
Read more
  • 0
  • 0
  • 36942

article-image-hello-world-program
Packt
20 Apr 2016
12 min read
Save for later

Hello World Program

Packt
20 Apr 2016
12 min read
In this article by Manoj Kumar, author of the book Learning Sinatra, we will write an application. Make sure that you have Ruby installed. We will get a basic skeleton app up and running and see how to structure the application. (For more resources related to this topic, see here.) In this article, we will discuss the following topics: A project that will be used to understand Sinatra Bundler gem File structure of the application Responsibilities of each file Before we begin writing our application, let's write the Hello World application. Getting started The Hello World program is as follows: 1require 'sinatra'23 get '/' do4 return 'Hello World!'5 end The following is how the code works: ruby helloworld.rb Executing this from the command line will run the application and the server will listen to the 4567 port. If we point our browser to http://localhost:4567/, the output will be as shown in the following screenshot: The application To understand how to write a Sinatra application, we will take a small project and discuss every part of the program in detail. The idea We will make a ToDo app and use Sinatra along with a lot of other libraries. The features of the app will be as follows: Each user can have multiple to-do lists Each to-do list will have multiple items To-do lists can be private, public, or shared with a group Items in each to-do list can be assigned to a user or group The modules that we build are as follows: Users: This will manage the users and groups List: This will manage the to-do lists Items: This will manage the items for all the to-do lists Before we start writing the code, let's see what the file structure will be like, understand why each one of them is required, and learn about some new files. The file structure It is always better to keep certain files in certain folders for better readability. We could dump all the files in the home folder; however, that would make it difficult for us to manage the code: The app.rb file This file is the base file that loads all the other files (such as, models, libs, and so on) and starts the application. We can configure various settings of Sinatra here according to the various deployment environments. The config.ru file The config.ru file is generally used when we need to deploy our application with different application servers, such as Passenger, Unicorn, or Heroku. It is also easy to maintain the different deployment environment using config.ru. Gemfile This is one of the interesting stuff that we can do with Ruby applications. As we know, we can use a variety of gems for different purposes. The gems are just pieces of code and are constantly updated. Therefore, sometimes, we need to use specific versions of gems to maintain the stability of our application. We list all the gems that we are going to use for our application with their version. Before we discuss how to use this Gemfile, we will talk about gem bundler. Bundler The gem bundler manages the installation of all the gems and their dependencies. Of course, we would need to install the gem bundler manually: gem install bundler This will install the latest stable version of bundler gem. Once we are done with this, we need to create a new file with the name Gemfile (yes, with a capital G) and add the gems that we will use. It is not necessary to add all the gems to Gemfile before starting to write the application. We can add and remove gems as we require; however, after every change, we need to run the following: bundle install This will make sure that all the required gems and their dependencies are installed. It will also create a 'Gemfile.lock' file. Make sure that we do not edit this file. It contains all the gems and their dependencies information. Therefore, we now know why we should use Gemfile. This is the lib/routes.rb path for folder containing the routes file. What is a route? A route is the URL path for which the application serves a web page when requested. For example, when we type http://www.example.com/, the URL path is / and when we type http://www.example.com/something/, /something/ is the URL path. Now, we need to explicitly define all the routes for which we will be serving requests so that our application knows what to return. It is not important to have this file in the lib folder or to even have it at all. We can also write the routes in the app.rb file. Consider the following examples: get '/' do # code end post '/something' do # code end Both of the preceding routes are valid. The get and post method are the HTTP methods. The first code block will be executed when a GET request is made on / and the second one will be executed when a POST request is made on /something. The only reason we are writing the routes in a separate file is to maintain clean code. The responsibility of each file will be clearly understood in the following: models/: This folder contains all the files that define model of the application. When we write the models for our application, we will save them in this folder. public/: This folder contains all our CSS, JavaScript, and image files. views/: This folder will contain all the files that define the views, such as HTML, HAML, and ERB files. The code Now, we know what we want to build. You also have a rough idea about what our file structure would be. When we run the application, the rackup file that we load will be config.ru. This file tells the server what environment to use and which file is the main application to load. Before running the server, we need to write a minimum code. It includes writing three files, as follows: app.rb config.ru Gemfile We can, of course, write these files in any order we want; however, we need to make sure that all three files have sufficient code for the application to work. Let's start with the app.rb file. The app.rb file This is the file that config.ru loads when the application is executed. This file, in turn, loads all the other files that help it to understand the available routes and the underlying model: 1 require 'sinatra' 2 3 class Todo < Sinatra::Base 4 set :environment, ENV['RACK_ENV'] 5 6 configure do 7 end 8 9 Dir[File.join(File.dirname(__FILE__),'models','*.rb')].each { |model| require model } 10 Dir[File.join(File.dirname(__FILE__),'lib','*.rb')].each { |lib| load lib } 11 12 end What does this code do? Let's see what this code does in the following: 1 require 'sinatra' //This loads the sinatra gem into memory. 3 class Todo < Sinatra::Base 4 set :environment, ENV['RACK_ENV'] 5 6 configure do 7 end 8 9 Dir[File.join(File.dirname(__FILE__),'models','*.rb')].each { |model| require model } 10 Dir[File.join(File.dirname(__FILE__),'lib','*.rb')].each { |lib| load lib } 11 12 end This defines our main application's class. This skeleton is enough to start the basic application. We inherit the Base class of the Sinatra module. Before starting the application, we may want to change some basic configuration settings such as logging, error display, user sessions, and so on. We handle all these configurations through the configure blocks. Also, we might need different configurations for different environments. For example, in development mode, we might want to see all the errors; however, in production we don’t want the end user to see the error dump. Therefore, we can define the configurations for different environments. The first step would be to set the application environment to the concerned one, as follows: 4 set :environment, ENV['RACK_ENV'] We will later see that we can have multiple configure blocks for multiple environments. This line reads the system environment RACK_ENV variable and sets the same environment for the application. When we discuss config.ru, we will see how to set RACK_ENV in the first place: 6 configure do 7 end The following is how we define a configure block. Note that here we have not informed the application that to which environment do these configurations need to be applied. In such cases, this becomes the generic configuration for all the environments and this is generally the last configuration block. All the environment-specific configurations should be written before this block in order to avoid code overriding: 9 Dir[File.join(File.dirname(__FILE__),'models','*.rb')].each { |model| require model } If we see the file structure discussed earlier, we can see that models/ is a directory that contains the model files. We need to import all these files in the application. We have kept all our model files in the models/ folder: Dir[File.join(File.dirname(__FILE__),'models','*.rb')] This would return an array of files having the .rb extension in the models folder. Doing this, avoids writing one require line for each file and modifying this file again: 10 Dir[File.join(File.dirname(__FILE__),'lib','*.rb')].each { |lib| load lib } Similarly, we will import all the files in the lib/ folder. Therefore, in short, the app.rb configures our application according to the deployment environment and imports the model files and the other library files before starting the application. Now, let's proceed to write our next file. The config.ru file The config.ru is the rackup file of the application. This loads all the gems and app.rb. We generally pass this file as a parameter to the server, as follows: 1 require 'sinatra' 2 require 'bundler/setup' 3 Bundler.require 4 5 ENV["RACK_ENV"] = "development" 6 7 require File.join(File.dirname(__FILE__), 'app.rb') 8 9 Todo .start! W Working of the code Let's go through each of the lines, as follows: 1 require 'sinatra' 2 require 'bundler/setup' The first two lines import the gems. This is exactly what we do in other languages. The gem 'sinatra' command will include all the Sinatra classes and help in listening to requests, while the bundler gem will manage all the other gems. As we have discussed earlier, we will always use bundler to manage our gems. 3 Bundler.require This line of the code will check Gemfile and make sure that all the gems available match the version and all the dependencies are met. This does not import all the gems as all gems may not be needed in the memory at all times: 5 ENV["RACK_ENV"] = "development" This code will set the system environment RACK_ENV variable to development. This will help the server know which configurations does it need to use. We will later see how to manage a single configuration file with different settings for different environments and use one particular set of configurations for the given environment. If we use version control for our application, config.ru is not version controlled. It has to be customized on whether our environment is development, staging, testing, or production. We may version control a sample config.ru. We will discuss this when we talk about deploying our application. Next, we will require the main application file, as follows: 7 require File.join(File.dirname(__FILE__), 'app.rb') We see here that we have used the File class to include app.rb: File.dirname(__FILE__) It is a convention to keep config.ru and app.rb in the same folder. It is good practice to give the complete file path whenever we require a file in order to avoid breaking the code. Therefore, this part of the code will return the path of the folder containing config.ru. Now, we know that our main application file is in the same folder as config.ru, therefore, we do the following: File.join(File.dirname(__FILE__), 'app.rb') This would return the complete file path of app.rb and the line 7 will load the main application file in the memory. Now, all we need to do is execute app.rb to start the application, as follows: 9 Todo .start! We see that the start! method is not defined by us in the Todo class in app.rb. This is inherited from the Sinatra::Base class. It starts the application and listens to incoming requests. In short, config.ru checks the availability of all the gems and their dependencies, sets the environment variables, and starts the application. The easiest file to write is Gemfile. It has no complex code and logic. It just contains a list of gems and their version details. Gemfile In Gemfile, we need to specify the source from where the gems will be downloaded and the list of the gems. Therefore, let's write a Gemfile with the following lines: 1 source 'https://rubygems.org' 2 gem 'bundler', '1.6.0' 3 gem 'sinatra', '1.4.4' The first line specifies the source. The https://rubygems.org website is a trusted place to download gems. It has a large collection of gems hosted. We can view this page, search for gems that we want to use, read the documentation, and select the exact version for our application. Generally, the latest stable version of bundler is used. Therefore, we search the site for bundler and find out its version. We do the same for the Sinatra gem. Summary In this article, you learned how to build a Hello World program using Sinatra. Resources for Article: Further resources on this subject: Getting Ready for RubyMotion[article] Quick start - your first Sinatra application[article] Building tiny Web-applications in Ruby using Sinatra[article]
Read more
  • 0
  • 0
  • 11808

article-image-creating-your-own-node-module
Soham Kamani
18 Apr 2016
6 min read
Save for later

Creating Your Own Node Module

Soham Kamani
18 Apr 2016
6 min read
Node.js has a great community and one of the best package managers I have ever seen. One of the reasons npm is so great is because it encourages you to make small composable modules, which usually have just one responsibility. Many of the larger, more complex node modules are built by composing smaller node modules. As of this writing, npm has over 219,897 packages. One of the reasons this community is so vibrant is because it is ridiculously easy to make your own node module. This post will go through the steps to create your own node module, as well as some of the best practices to follow while doing so. Prerequisites and Installation node and npm are a given. Additionally, you should also configure your npm author details: npm set init.author.name "My Name" npm set init.author.email "your@email.com" npm set init.author.url "http://your-website.com" npm adduser These are the details that would show up on npmjs.org once you publish. Hello World The reason that I say creating a node module is ridiculously easy is because you only need two files to create the most basic version of a node module. First up, create a package.json file inside of a new folder by running the npm init command. This will ask you to choose a name. Of course, the name you are thinking of might already exist in the npm registry, so to check for this run the command npm ls owner module_name , where module_name is replaced by the namespace you want to check. If it exists, you will get information about the authors: $ npm owner ls forever indexzero <charlie.robbins@gmail.com> bradleymeck <bradley.meck@gmail.com> julianduque <julianduquej@gmail.com> jeffsu <me@jeffsu.com> jcrugzz <jcrugzz@gmail.com> If your namespace is free you would get an error message. Something similar to : $ npm owner ls does_not_exist npm ERR! owner ls Couldnt get owner data does_not_exist npm ERR! Darwin 14.5.0 npm ERR! argv "node" "/usr/local/bin/npm" "owner" "ls" "does_not_exist" npm ERR! node v0.12.4 npm ERR! npm v2.10.1 npm ERR! code E404 npm ERR! 404 Registry returned 404 GET on https://registry.npmjs.org/does_not_exist npm ERR! 404 npm ERR! 404 'does_not_exist' is not in the npm registry. npm ERR! 404 You should bug the author to publish it (or use the name yourself!) npm ERR! 404 npm ERR! 404 Note that you can also install from a npm ERR! 404 tarball, folder, http url, or git url. npm ERR! Please include the following file with any support request: npm ERR! /Users/sohamchetan/Documents/jekyll-blog/npm-debug.log After setting up package.json, add a JavaScript file: module.exports = function(){ return 'Hello World!'; } And that's it! Now execute npm publish . and your node module will be published to npmjs.org. Also, anyone can now install your node module by running npm install --save module_name, where module name is the "name" property contained in package.json. Now anyone can use your module like this : var someModule = require('module_name'); console.log(someModule()); // This will output "Hello World!" Dependencies As stated before, rarely will you find large scale node modules that do not depend on other smaller modules. This is because npm encourages modularity and composability. To add dependancies to your own module, simply install them. For example, one of the most depended upon packages is lodash, a utility library. To add this, run the command : npm install --save lodash Now you can use lodash everywhere in your module by "requiring" it, and when someone else downloads your module, they get lodash bundled along with it as well. Additionally you would want to have some modules purely for development and not for distribution. These are dev-dependencies, and can be installed with the npm install --save-dev command. Dev dependencies will not install when someone else installs your node module. Configuring package.json The package.json file is what contains all the metadata for your node_module. A few fields are filled out automatically (like dependencies or devDependencies during npm installs). There are a few more fields in package.json that you should consider filling out so that your node module is best fitted to its purpose. "main": The relative path of the entry point of your module. Whatever is assigned to module.exports in this file is exported when someone "requires" your module. By default this is the index.js file. "keywords": It’s an array of keywords describing your module. Quite helpful when others from the community are searching for something that your module happens to solve. "license": I normally publish all my packages with an "MIT" licence because of its openness and popularity in the open source community. "version": This is pretty crucial because you cannot publish a node module with the same version twice. Normally, semver versioning should be followed. If you want to know more about the different properties you can set in package.json there's a great interactive guide you can check out. Using Yeoman Generators Although it's really simple to make a basic node module, it can be quite a task to make something substantial using just index.js nd package.json file. In these cases, there's a lot more to do, such as: Writing and running tests. Setting up a CI tool like Travis. Measuring code coverage. Installing standard dev dependencies for testing. Fortunately, there are many Yeoman generators to help you bootstrap your project. Check out generator-nm for setting up a basic project structure for a simple node module. If writing in ES6 is more your style, you can take a look at generator-nm-es6. These generators get your project structure, complete with a testing framework and CI integration so that you don't have to spend all your time writing boilerplate code. About the Author Soham Kamani is a full-stack web developer and electronics hobbyist.  He is especially interested in JavaScript, Python, and IoT.
Read more
  • 0
  • 0
  • 9226

article-image-setting-build-chain-grunt
Packt
18 Apr 2016
24 min read
Save for later

Setting up a Build Chain with Grunt

Packt
18 Apr 2016
24 min read
In this article by Bass Jobsen, author of the book Sass and Compass Designer's Cookbook you will learn the following topics: Installing Grunt Installing Grunt plugins Utilizing the Gruntfile.js file Adding a configuration definition for a plugin Adding the Sass compiler task (For more resources related to this topic, see here.) This article introduces you to the Grunt Task Runner and the features it offers to make your development workflow a delight. Grunt is a JavaScript Task Runner that is installed and managed via npm, the Node.js package manager. You will learn how to take advantage of its plugins to set up your own flexible and productive workflow, which will enable you to compile your Sass code. Although there are many applications available for compiling Sass, Grunt is a more flexible, versatile, and cross-platform tool that will allow you to automate many development tasks, including Sass compilation. It can not only automate the Sass compilation tasks, but also wrap any other mundane jobs, such as linting and minifying and cleaning your code, into tasks and run them automatically for you. By the end of this article, you will be comfortable using Grunt and its plugins to establish a flexible workflow when working with Sass. Using Grunt in your workflow is vital. You will then be shown how to combine Grunt's plugins to establish a workflow for compiling Sass in real time. Grunt becomes a tool to automate integration testing, deployments, builds, and development in which you can use. Finally, by understanding the automation process, you will also learn how to use alternative tools, such as Gulp. Gulp is a JavaScript task runner for node.js and relatively new in comparison to Grunt, so Grunt has more plugins and a wider community support. Currently, the Gulp community is growing fast. The biggest difference between Grunt and Gulp is that Gulp does not save intermediary files, but pipes these files' content in memory to the next stream. A stream enables you to pass some data through a function, which will modify the data and then pass the modified data to the next function. In many situations, Gulp requires less configuration settings, so some people find Gulp more intuitive and easier to learn. In this article, Grunt has been chosen to demonstrate how to run a task runner; this choice does not mean that you will have to prefer the usage of Grunt in your own project. Both the task runners can run all the tasks described in this article. Simply choose the task runner that suits you best. This recipe demonstrates shortly how to compile your Sass code with Gulp. In this article, you should enter your commands in the command prompt. Linux users should open a terminal, while Mac users should run Terminal.app and Window users should use the cmd command for command line usage. Installing Grunt Grunt is essentially a Node.js module; therefore, it requires Node.js to be installed. The goal of this recipe is to show you how to install Grunt on your system and set up your project. Getting ready Installing Grunt requires both Node.js and npm. Node.js is a platform built on Chrome's JavaScript runtime for easily building fast, scalable network applications, and npm is a package manager for Node.js. You can download the Node.js source code or a prebuilt installer for your platform at https://nodejs.org/en/download/. Notice that npm is bundled with node. Also, read the instructions at https://github.com/npm/npm#super-easy-install. How to do it... After installing Node.js and npm, installing Grunt is as simple as running a single command, regardless of the operating system that you are using. Just open the command line or the Terminal and execute the following command: npm install -g grunt-cli That's it! This command will install Grunt globally and make it accessible anywhere on your system. Run the grunt --version command in the command prompt in order to confirm that Grunt has been successfully installed. If the installation is successful, you should see the version of Grunt in the Terminal's output: grunt --version grunt-cli v0.1.11 After installing Grunt, the next step is to set it up for your project: Make a folder on your desktop and call it workflow. Then, navigate to it and run the npm init command to initialize the setup process: mkdir workflow && cd $_ && npm init Press Enter for all the questions and accept the defaults. You can change these settings later. This should create a file called package.json that will contain some information about the project and the project's dependencies. In order to add Grunt as a dependency, install the Grunt package as follows: npm install grunt --save-dev Now, if you look at the package.json file, you should see that Grunt is added to the list of dependencies: ..."devDependencies": {"grunt": "~0.4.5" } In addition, you should see an extra folder created. Called node_modules, it will contain Grunt and other modules that you will install later in this article. How it works... In the preceding section, you installed Grunt (grunt-cli) with the -g option. The -g option installs Grunt globally on your system. Global installation requires superuser or administrator rights on most systems. You need to run only the globally installed packages from the command line. Everything that you will use with the require() function in your programs should be installed locally in the root of your project. Local installation makes it possible to solve your project's specific dependencies. More information about global versus local installation of npm modules can be found at https://www.npmjs.org/doc/faq.html. There's more... Node package managers are available for a wide range of operation systems, including Windows, OSX, Linux, SunOS, and FreeBSD. A complete list of package managers can be found at https://github.com/joyent/node/wiki/Installing-Node.js-via-package-manager. Notice that these package managers are not maintained by the Node.js core team. Instead, each package manager has its own maintainer. See also The npm Registry is a public collection of packages of open source code for Node.js, frontend web apps, mobile apps, robots, routers, and countless other needs of the JavaScript community. You can find the npm Registry at https://www.npmjs.org/. Also, notice that you do not have to use Task Runners to create build chains. Keith Cirkel wrote about how to use npm as a build tool at http://blog.keithcirkel.co.uk/how-to-use-npm-as-a-build-tool/. Installing Grunt plugins Grunt plugins are the heart of Grunt. Every plugin serves a specific purpose and can also work together with other plugins. In order to use Grunt to set up your Sass workflow, you need to install several plugins. You can find more information about these plugins in this recipe's How it works... section. Getting ready Before you install the plugins, you should first create some basic files and folders for the project. You should install Grunt and create a package.json file for your project. Also, create an index.html file to inspect the results in your browser. Two empty folders should be created too. The scss folder contains your Sass code and the css folder contains the compiled CSS code. Navigate to the root of the project, repeat the steps from the Installing Grunt recipe of this article, and create some additional files and directories that you are going to work with throughout the article. In the end, you should end up with the following folder and file structure: How to do it... Grunt plugins are essentially Node.js modules that can be installed and added to the package.json file in the list of dependencies using npm. To do this, follow the ensuing steps: Navigate to the root of the project and run the following command, as described in the Installing Grunt recipe of this article: npm init Install the modules using npm, as follows: npm install grunt-contrib-sass load-grunt-tasks grunt-postcss --save-dev Notice the single space before the backslash in each line. For example, on the second line, grunt-contrib-sass , there is a space before the backslash at the end of the line. The space characters are necessary because they act as separators. The backslash at the end is used to continue the commands on the next line. The npm install command will download all the plugins and place them in the node_modules folder in addition to including them in the package.json file. The next step is to include these plugins in the Gruntfile.js file. How it works... Grunt plugins can be installed and added to the package.json file using the npm install command followed by the name of the plugins separated by a space, and the --save-dev flag: npm install nameOfPlugin1 nameOfPlugin2 --save-dev The --save-dev flag adds the plugin names and a tilde version range to the list of dependencies in the package.json file so that the next time you need to install the plugins, all you need to do is run the npm install command. This command looks for the package.json file in the directory from which it was called, and will automatically download all the specified plugins. This makes porting workflows very easy; all it takes is copying the package.json file and running the npm install command. Finally, the package.json file contains a JSON object with metadata. It is also worth explaining the long command that you have used to install the plugins in this recipe. This command installs the plugins that are continued on to the next line by the backslash. It is essentially equivalent to the following: npm install grunt-contrib-sass –-save-dev npm install load-grunt-tasks –-save-dev npm install grunt-postcss –-save-dev As you can see, it is very repetitive. However, both yield the same results; it is up to you to choose the one that you feel more comfortable with. The node_modules folder contains all the plugins that you install with npm. Every time you run npm install name-of-plugin, the plugin is downloaded and placed in the folder. If you need to port your workflow, you do not need to copy all the contents of the folder. In addition, if you are using a version control system, such as Git, you should add the node_modules folder to the .gitignore file so that the folder and its subdirectories are ignored. There's more... Each Grunt plugin also has its own metadata set in a package.json file, so plugins can have different dependencies. For instance, the grunt-contrib-sass plugin, as described in the Adding the Sass compiler task recipe, has set its dependencies as follows: "dependencies": {     "async": "^0.9.0",     "chalk": "^0.5.1",     "cross-spawn": "^0.2.3",     "dargs": "^4.0.0",     "which": "^1.0.5"   } Besides the dependencies described previously, this task also requires you to have Ruby and Sass installed. In the following list, you will find the plugins used in this article, followed by a brief description: load-grunt-tasks: This loads all the plugins listed in the package.json file grunt-contrib-sass: This compiles Sass files into CSS code grunt-postcss: This enables you to apply one or more postprocessors to your compiled CSS code CSS postprocessors enable you to change your CSS code after compilation. In addition to installing plugins, you can remove them as well. You can remove a plugin using the npm uninstall name-of-plugin command, where name-of-plugin is the name of the plugin that you wish to remove. For example, if a line in the list of dependencies of your package.json file contains grunt-concurrent": "~0.4.2",, then you can remove it using the following command: npm uninstall grunt-concurrent Then, you just need to make sure to remove the name of the plugin from your package.json file so that it is not loaded by the load-grunt-tasks plugin the next time you run a Grunt task. Running the npm prune command after removing the items from the package.json file will also remove the plugins. The prune command removes extraneous packages that are not listed in the parent package's dependencies list. See also More information on the npm version's syntax can be found at https://www. npmjs.org/doc/misc/semver.html  Also, see http://caniuse.com/ for more information on the Can I Use database Utilizing the Gruntfile.js file The Gruntfile.js file is the main configuration file for Grunt that handles all the tasks and task configurations. All the tasks and plugins are loaded using this file. In this recipe, you will create this file and will learn how to load Grunt plugins using it. Getting ready First, you need to install Node and Grunt, as described in the Installing Grunt recipe of this article. You will also have to install some Grunt plugins, as described in the Installing Grunt plugins recipe of this article. How to do it... Once you have installed Node and Grunt, follow these steps: In your Grunt project directory (the folder that contains the package.json file), create a new file, save it as Gruntfile.js, and add the following lines to it: module.exports = function(grunt) {   grunt.initConfig({     pkg: grunt.file.readJSON('package.json'),       //Add the Tasks configurations here.   }); // Define Tasks here }; This is the simplest form of the Gruntfile.js file that only contains two information variables. The next step is to load the plugins that you installed in the Installing Grunt plugins recipe. Add the following lines at the end of your Gruntfile.js file: grunt.loadNpmTasks('grunt-sass'); In the preceding line of code, grunt-sass is the name of the plugin you want to load. That is all it takes to load all the necessary plugins. The next step is to add the configurations for each task to the Gruntfile.js file. How it works... Any Grunt plugin can be loaded by adding a line of JavaScript to the Gruntfile.js file, as follows: grunt.loadNpmTasks('name-of-module'); This line should be added every time a new plugin is installed so that Grunt can access the plugin's functions. However, it is tedious to load every single plugin that you install. In addition, you will soon notice that, as your project grows, the number of configuration lines will increase as well. The Gruntfile.js file should be written in JavaScript or CoffeeScript. Grunt tasks rely on configuration data defined in a JSON object passed to the grunt.initConfig method. JavaScript Object Notation (JSON) is an alternative for XML and used for data exchange. JSON describes name-value pairs written as "name": "value". All the JSON data is separated by commas with JSON objects written inside curly brackets and JSON arrays inside square brackets. Each object can hold more than one name/value pair with each array holding one or more objects. You can also group tasks into one task. Your alias groups of tasks using the following line of code: grunt.registerTask('alias',['task1', 'task2']); There's more... Instead of loading all the required Grunt plugins one by one, you can load them automatically with the load-grunt-tasks plugin. You can install this by using the following command in the root of your project: npm install load-grunt-tasks --save-dev Then, add the following line at the very beginning of your Gruntfile.js file after module.exports: require('load-grunt-tasks')(grunt); Now, your Gruntfile.js file should look like this: module.exports = function(grunt) {   require('load-grunt-tasks')(grunt);   grunt.initConfig({     pkg: grunt.file.readJSON('package.json'),       //Add the Tasks configurations here.   }); // Define Tasks here }; The load-grunt-tasks plugin loads all the plugins specified in the package.json file. It simply loads the plugins that begin with the grunt- prefix or any pattern that you specify. This plugin will also read dependencies, devDependencies, and peerDependencies in your package.json file and load the Grunt tasks that match the provided patterns. A pattern to load specifically chosen plugins can be added as a second parameter. You can load, for instance, all the grunt-contrib tasks with the following code in your Gruntfile.js file: require('load-grunt-tasks')(grunt, {pattern: 'grunt-contrib-*'}); See also Read more about the load-grunt-tasks module at https://github.com/sindresorhus/load-grunt-task Adding a configuration definition for a plugin Any Grunt task needs a configuration definition. The configuration definitions are usually added to the Gruntfile.js file itself and are very easy to set up. In addition, it is very convenient to define and work with them because they are all written in the JSON format. This makes it very easy to spot the configurations in the plugin's documentation examples and add them to your Gruntfile.js file. In this recipe, you will learn how to add the configuration for a Grunt task. Getting ready For this recipe, you will first need to create a basic Gruntfile.js file and install the plugin you want to configure. If you want to install the grunt-example plugin, you can install it using the following command in the root of your project: npm install grunt-example --save-dev How to do it... Once you have created the basic Gruntfile.js file (also refer to the Utilizing the Gruntfile.js file recipe of this article), follow this step: A simple form of the task configuration is shown in the following code. Start by adding it to your Gruntfile.js file wrapped inside grunt.initConfig{}: example: {   subtask: {    files: {      "stylesheets/main.css":      "sass/main.scss"     }   } } How it works... If you look closely at the task configuration, you will notice the files field that specifies what files are going to be operated on. The files field is a very standard field that appears in almost all the Grunt plugins simply due to the fact that many tasks require some or many file manipulations. There's more... The Don't Repeat Yourself (DRY) principle can be applied to your Grunt configuration too. First, define the name and the path added to the beginning of the Gruntfile.js file as follows: app {  dev : "app/dev" } Using the templates is a key in order to avoid hard coded values and inflexible configurations. In addition, you should have noticed that the template has been used using the <%= %> delimiter to expand the value of the development directory: "<%= app.dev %>/css/main.css": "<%= app.dev %>/scss/main.scss"   The <%= %> delimiter essentially executes inline JavaScript and replaces values, as you can see in the following code:   "app/dev/css/main.css": "app/dev/scss/main.scss" So, put simply, the value defined in the app object at the top of the Gruntfile.js file is evaluated and replaced. If you decide to change the name of your development directory, for example, all you need to do is change the app's variable that is defined at the top of your Gruntfile.js file. Finally, it is also worth mentioning that the value for the template does not necessarily have to be a string and can be a JavaScript literal. See also You can read more about templates in the Templates section of Grunt's documentation at http://gruntjs.com/configuring- tasks#templates Adding the Sass compiler task The Sass tasks are the core task that you will need for your Sass development. It has several features and options, but at the heart of it is the Sass compiler that can compile your Sass files into CSS. By the end of this recipe, you will have a good understanding of this plugin, how to add it to your Gruntfile.js file, and how to take advantage of it. In this recipe, the grunt-contrib-sass plugin will be used. This plugin compiles your Sass code by using Ruby Sass. You should use the grunt-sass plugin to compile Sass into CSS with node-sass (LibSass). Getting ready The only requirement for this recipe is to have the grunt-contrib-sass plugin installed and loaded in your Gruntfile.js file. If you have not installed this plugin in the Installing Grunt Plugins recipe of this article, you can do this using the following command in the root of your project: npm install grunt-contrib-sass --save-dev You should also install grunt local by running the following command: npm install grunt --save-dev Finally, your project should have the file and directory, as describe in the Installing Grunt plugins recipe of this article. How to do it... An example of the Sass task configuration is shown in the following code. Start by adding it to your Gruntfile.js file wrapped inside the grunt.initConfig({}) code. Now, your Gruntfile.js file should look as follows: module.exports = function(grunt) {   grunt.initConfig({     //Add the Tasks configurations here.     sass: {                                            dist: {                                            options: {                                       style: 'expanded'         },         files: {                                         'stylesheets/main.css': 'sass/main.scss'  'source'         }       }     }   });     grunt.loadNpmTasks('grunt-contrib-sass');     // Define Tasks here    grunt.registerTask('default', ['sass']);  } Then, run the following command in your console: grunt sass The preceding command will create a new stylesheets/main.css file. Also, notice that the stylesheets/main.css.map file has also been automatically created. The Sass compiler task creates CSS sourcemaps to debug your code by default. How it works... In addition to setting up the task configuration, you should run the Grunt command to test the Sass task. When you run the grunt sass command, Grunt will look for a configuration called Sass in the Gruntfile.js file. Once it finds it, it will run the task with some default options if they are not explicitly defined. Successful tasks will end with the following message: Done, without errors. There's more... There are several other options that you can include in the Sass task. An option can also be set at the global Sass task level, so the option will be applied in all the subtasks of Sass. In addition to options, Grunt also provides targets for every task to allow you to set different configurations for the same task. In other words, if, for example, you need to have two different versions of the Sass task with different source and destination folders, you could easily use two different targets. Adding and executing targets are very easy. Adding more builds just follows the JSON notation, as shown here:    sass: {                                      // Task       dev: {                                    // Target         options: {                               // Target options           style: 'expanded'         },         files: {                                 // Dictionary of files         'stylesheets/main.css': 'sass/main.scss' // 'destination': 'source'         }       },       dist: {                               options: {                        style: 'expanded',           sourcemap: 'none'                  },         files: {                                      'stylesheets/main.min.css': 'sass/main.scss'         }       }     } In the preceding example, two builds are defined. The first one is named dev and the second is called dist. Each of these targets belongs to the Sass task, but they use different options and different folders for the source and the compiled Sass code. Moreover, you can run a particular target using grunt sass:nameOfTarget, where nameOfTarge is the name of the target that you are trying to use. So, for example, if you need to run the dist target, you will have to run the grunt sass:dist command in your console. However, if you need to run both the targets, you could simply run grunt sass and it would run both the targets sequentially. As already mentioned, the grunt-contrib-sass plugin compiles your Sass code by using Ruby Sass, and you should use the grunt-sass plugin to compile Sass to CSS with node-sass (LibSass). To switch to the grunt-sass plugin, you will have to install it locally first by running the following command in your console: npm install grunt-sass Then, replace grunt.loadNpmTasks('grunt-contrib-sass'); with grunt.loadNpmTasks('grunt-sass'); in the Gruntfile.js file; the basic options for grunt-contrib-sass and grunt-sass are very similar, so you have to change the options for the Sass task when switching to grunt-sass. Finally, notice that grunt-contrib-sass also has an option to turn Compass on. See also Please refer to Grunt's documentation for a full list of options, which is available at https://gruntjs/grunt-contrib-sass#options Also, read Grunt's documentation for more details about configuring your tasks and targets at http://gruntjs.com/configuring-tasks#task-configuration-and-targets github.com/ Summary In this article you studied about installing Grunt, installing Grunt plugins, utilizing the Gruntfile.js file, adding a configuration definition for a plugin and adding the Sass compiler task. Resources for Article: Further resources on this subject: Meeting SAP Lumira [article] Security in Microsoft Azure [article] Basic Concepts of Machine Learning and Logistic Regression Example in Mahout [article]
Read more
  • 0
  • 0
  • 35045

Packt
13 Apr 2016
7 min read
Save for later

Nginx "expires" directive – Emitting Caching Headers

Packt
13 Apr 2016
7 min read
In this article by Alex Kapranoff, the author of the book Nginx Troubleshooting, explains how all browsers (and even many non-browser HTTP clients) support client-side caching. It is a part of the HTTP standard, albeit one of the most complex caching to understand. Web servers do not control client-side caching to full extent, obviously, but they may issue recommendations about what to cache and how, in the form of special HTTP response headers. This is a topic thoroughly discussed in many great articles and guides, so we will mention it shortly, and with a lean towards problems you may face and how to troubleshoot them. (For more resources related to this topic, see here.) In spite of the fact that browsers have been supporting caching on their side for at least 20 years, configuring cache headers was always a little confusing mostly due to the fact that there two sets of headers designed for the same purpose but having different scopes and totally different formats. There is the Expires: header, which was designed as a quick and dirty solution and also the new (relatively) almost omnipotent Cache-Control: header, which tries to support all the different ways an HTTP cache could work. This is an example of a modern HTTP request-response pair containing the caching headers. First is the request headers sent from the browser (here Firefox 41, but it does not matter): User-Agent:"Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:41.0) Gecko/20100101 Firefox/41.0" Accept:"text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8" Accept-Encoding:"gzip, deflate" Connection:"keep-alive" Cache-Control:"max-age=0" Then, the response headers is: Cache-Control:"max-age=1800" Content-Encoding:"gzip" Content-Type:"text/html; charset=UTF-8" Date:"Sun, 10 Oct 2015 13:42:34 GMT" Expires:"Sun, 10 Oct 2015 14:12:34 GMT" We highlighted the parts that are relevant. Note that some directives may be sent by both sides of the conversation. First, the browser sent the Cache-Control: max-age=0 header because the user pressed the F5 key. This is an indication that the user wants to receive a response that is fresh. Normally, the request will not contain this header and will allow any intermediate cache to respond with a stale but still nonexpired response. In this case, the server we talked to responded with a gzipped HTML page encoded in UTF-8 and indicated that the response is okay to use for half an hour. It used both mechanisms available, the modern Cache-Control:max-age=1800 header and the very old Expires:Sun, 10 Oct 2015 14:12:34 GMT header. The X-Cache: "EXPIRED" header is not a standard HTTP header but was also probably (there is no way to know for sure from the outside) emitted by Nginx. It may be an indication that there are, indeed, intermediate caching proxies between the client and the server, and one of them added this header for debugging purposes. The header may also show that the backend software uses some internal caching. Another possible source of this header is a debugging technique used to find problems in the Nginx cache configuration. The idea is to use the cache hit or miss status, which is available in one of the handy internal Nginx variables as a value for an extra header and then to be able to monitor the status from the client side. This is the code that will add such a header: add_header X-Cache $upstream_cache_status; Nginx has a special directive that transparently sets up both of standard cache control headers, and it is named expires. This is a piece of the nginx.conf file using the expires directive: location ~* \.(?:css|js)$ { expires 1y; add_header Cache-Control "public"; } First, the pattern uses the so-called noncapturing parenthesis, which is a feature first appeared in Perl regular expressions. The effect of this regexp is the same as of a simpler \.(css|js)$ pattern, but the regular expression engine is specifically instructed not to create a variable containing the actual string from inside the parenthesis. This is a simple optimization. Then, the expires directive declares that the content of the css and js files will expire after a year of storage. The actual headers as received by the client will look like this: Server: nginx/1.9.8 (Ubuntu) Date: Fri, 11 Mar 2016 22:01:04 GMT Content-Type: text/css Last-Modified: Thu, 10 Mar 2016 05:45:39 GMT Expires: Sat, 11 Mar 2017 22:01:04 GMT Cache-Control: max-age=31536000 The last two lines contain the same information in wildly different forms. The Expires: header is exactly one year after the date in the Date: header, whereas Cache-Control: specifies the age in seconds so that the client do the date arithmetics itself. The last directive in the provided configuration extract adds another Cache-Control: header with a value of public explicitly. What this means is that the content of the HTTP resource is not access-controlled and therefore may be cached not only for one particular user but also anywhere else. A simple and effective strategy that was used in offices to minimize consumed bandwidth is to have an office-wide caching proxy server. When one user requested a resource from a website on the Internet and that resource had a Cache-Control: public designation, the company cache server would store that to serve to other users on the office network. This may not be as popular today due to cheap bandwidth, but because history has a tendency to repeat itself, you need to know how and why Cache-Control: public works. The Nginx expires directive is surprisingly expressive. It may take a number of different values. See this table: off This value turns off Nginx cache headers logic. Nothing will be added, and more importantly, existing headers received from upstreams will not be modified. epoch This is an artificial value used to purge a stored resource from all caches by setting the Expires header to "1 January, 1970 00:00:01 GMT". max This is the opposite of the "epoch" value. The Expires header will be equal to "31 December 2037 23:59:59 GMT", and the Cache-Control max-age set to 10 years. This basically means that the HTTP responses are guaranteed to never change, so clients are free to never request the same thing twice and may use their own stored values. Specific time An actual specific time value means an expiry deadline from the time of the respective request. For example, expires 10w; A negative value for this directive will emit a special header Cache-Control: no-cache. "modified" specific time If you add the keyword "modified" before the time value, then the expiration moment will be computed relatively to the modification time of the file that is served. "@" specific time A time with an @ prefix specifies an absolute time-of-day expiry. This should be less than 24 hours. For example, Expires @17h;. Many web applications choose to emit the caching headers themselves, and this is a good thing. They have more information about which resources change often and which never change. Tampering with the headers that you receive from the upstream may or may not be a thing you want to do. Sometimes, adding headers to a response while proxying it may produce a conflicting set of headers and therefore create an unpredictable behavior. The static files that you serve with Nginx yourself should have the expires directive in place. However, the general advice about upstreams is to always examine the caching headers you get and refrain from overoptimizing by setting up more aggressive caching policy. Resources for Article: Further resources on this subject: Nginx service [article] Fine-tune the NGINX Configuration [article] Nginx Web Services: Configuration and Implementation [article]
Read more
  • 0
  • 0
  • 26567

article-image-creating-graphs-and-charts
Packt
12 Apr 2016
17 min read
Save for later

Creating Graphs and Charts

Packt
12 Apr 2016
17 min read
In this article by Bhushan Purushottam Joshi author of the book Canvas Cookbook, highlights data representation in the form of graphs and charts with the following topics: Drawing the axes Drawing a simple equation Drawing a sinusoidal wave Drawing a line graph Drawing a bar graph Drawing a pie chart (For more resources related to this topic, see here.) Drawing the axes In school days, we all might have used a graph paper and drawn a vertical line called y axis and a horizontal line called as x axis. Here, in the first recipe of ours, we do only the drawing of axes. Also, we mark the points at equal intervals. The output looks like this: How to do it… The HTML code is as follows: <html> <head> <title>Axes</title> <script src="graphaxes.js"></script> </head> <body onload=init()> <canvas width="600" height="600" id="MyCanvasArea" style="border:2px solid blue;" tabindex="0"> Canvas tag is not supported by your browser </canvas> <br> <form id="myform"> Select your starting value <select name="startvalue" onclick="init()"> <option value=-10>-10</option> <option value=-9>-9</option> <option value=-8>-8</option> <option value=-7>-7</option> <option value=-6>-6</option> <option value=-5>-5</option> <option value=-4>-4</option> <option value=-3>-3</option> <option value=-2>-2</option> </select> </form> </body> </html> The JavaScript code is as follows: varxMin=-10;varyMin=-10;varxMax=10;varyMax=10; //draw the x-axis varcan;varctx;varxaxisx;varxaxisy;varyaxisx;varyaxisy; varinterval;var length; functioninit(){ can=document.getElementById('MyCanvasArea'); ctx=can.getContext('2d'); ctx.clearRect(0,0,can.width,can.height); varsel=document.forms['myform'].elements['startvalue']; xMin=sel.value; yMin=xMin; xMax=-xMin; yMax=-xMin; drawXAxis(); drawYAxis(); } functiondrawXAxis(){ //x axis drawing and marking on the same xaxisx=10; xaxisy=can.height/2; ctx.beginPath(); ctx.lineWidth=2; ctx.strokeStyle="black"; ctx.moveTo(xaxisx,xaxisy); xaxisx=can.width-10; ctx.lineTo(xaxisx,xaxisy); ctx.stroke(); ctx.closePath(); length=xaxisx-10; noofxfragments=xMax-xMin; interval=length/noofxfragments; //mark the x-axis xaxisx=10; ctx.beginPath(); ctx.font="bold 10pt Arial"; for(vari=xMin;i<=xMax;i++) { ctx.lineWidth=0.15; ctx.strokeStyle="grey"; ctx.fillText(i,xaxisx-5,xaxisy-10); ctx.moveTo(xaxisx,xaxisy-(can.width/2)); ctx.lineTo(xaxisx,(xaxisy+(can.width/2))); ctx.stroke(); xaxisx=Math.round(xaxisx+interval); } ctx.closePath(); } functiondrawYAxis(){ yaxisx=can.width/2; yaxisy=can.height-10; ctx.beginPath(); ctx.lineWidth=2; ctx.strokeStyle="black"; ctx.moveTo(yaxisx,yaxisy); yaxisy=10 ctx.lineTo(yaxisx,yaxisy); ctx.stroke(); ctx.closePath(); yaxisy=can.height-10; length=yaxisy-10; noofxfragments=yMax-yMin; interval=length/noofxfragments; //mark the y-axis ctx.beginPath(); ctx.font="bold 10pt Arial"; for(vari=yMin;i<=yMax;i++) { ctx.lineWidth=0.15; ctx.strokeStyle="grey"; ctx.fillText(i,yaxisx-20,yaxisy+5); ctx.moveTo(yaxisx-(can.height/2),yaxisy); ctx.lineTo((yaxisx+(can.height/2)),yaxisy); ctx.stroke(); yaxisy=Math.round(yaxisy-interval); } ctx.closePath(); } How it works... There are two functions in the JavaScript code viz. drawXAxis and drawYAxis. A canvas is not calibrated the way a graph paper is. A simple calculation is used to do the same. In both the functions, there are two parts. One part draws the axis and the second marks the axis on regular intervals. These are delimited by ctx.beginPath() and ctx.closePath(). In the first part, the canvas width and height are used to draw the axis. In the second part, we do some calculation. The length of the axis is divided by the number of markers to get the interval. If the starting point is -3, then we have -3, -2, -1, 0, 1, 2, and 3 on the axis, which makes 7 marks and 6 parts. The interval is used to generate x and y coordinate value for the starting point and plot the markers. There is more... Try to replace the following: ctx.moveTo(xaxisx,xaxisy-(can.width/2)); (in drawXAxis()) ctx.lineTo(xaxisx,(xaxisy+(can.width/2)));(in drawXAxis()) ctx.moveTo(yaxisx-(can.height/2),yaxisy);(in drawYAxis()) ctx.lineTo((yaxisx+(can.height/2)),yaxisy);(in drawYAxis()) WITH ctx.moveTo(xaxisx,xaxisy-5); ctx.lineTo(xaxisx,(xaxisy+5)); ctx.moveTo(yaxisx-5,yaxisy); ctx.lineTo((yaxisx+5),yaxisy); Also, instead of grey color for markers, you can use red. Drawing a simple equation This recipe is a simple line drawing on a graph using an equation. The output looks like this: How to do it… The HTML code is as follows: <html> <head> <title>Equation</title> <script src="graphaxes.js"></script> <script src="plotequation.js"></script> </head> <body onload=init()> <canvas width="600" height="600" id="MyCanvasArea" style="border:2px solid blue;" tabindex="0"> Canvas tag is not supported by your browser </canvas> <br> <form id="myform"> Select your starting value <select name="startvalue" onclick="init()"> <option value=-10>-10</option> <option value=-9>-9</option> <option value=-8>-8</option> <option value=-7>-7</option> <option value=-6>-6</option> <option value=-5>-5</option> <option value=-4>-4</option> <option value=-3>-3</option> <option value=-2>-2</option> </select> <br> Enter the coeficient(c) for the equation y=cx <input type="text" size=5 name="coef"> <input type="button" value="Click to plot" onclick="plotEquation()"> <input type="button" value="Reset" onclick="init()"> </form> </body> </html> The JavaScript code is as follows: functionplotEquation(){ varcoef=document.forms['myform'].elements['coef']; var s=document.forms['myform'].elements['startvalue']; var c=coef.value; var x=parseInt(s.value); varxPos; varyPos; while(x<=xMax) { y=c*x; xZero=can.width/2; yZero=can.height/2; if(x!=0) xPos=xZero+x*interval; else xPos=xZero-x*interval; if(y!=0) yPos=yZero-y*interval; else yPos=yZero+y*interval; ctx.beginPath(); ctx.fillStyle="blue"; ctx.arc(xPos,yPos,5,Math.PI/180,360*Math.PI/180,false); ctx.fill(); ctx.closePath(); if(x<xMax) { ctx.beginPath(); ctx.lineWidth=3; ctx.strokeStyle="green"; ctx.moveTo(xPos,yPos); nextX=x+1; nextY=c*nextX; if(nextX!=0) nextXPos=xZero+nextX*interval; else nextXPos=xZero-nextX*interval; if(nextY!=0) nextYPos=yZero-nextY*interval; else nextYPos=yZero+nextY*interval; ctx.lineTo(nextXPos,nextYPos); ctx.stroke(); ctx.closePath(); } x=x+1; } } How it works... We use one more script in this recipe. There are two scripts referred by the HTML file. One is the previous recipe named graphaxes.js, and the other one is the current one named plotequation.js. JavaScript allows you to use the variables created in one file into the other, and this is done in this new recipe. You already know how the axes are drawn. This recipe is to plot an equation y=cx, where c is the coefficient entered by the user. We take the minimum of the x value from the drop-down list and calculate the values for y in a loop. We plot the current and next coordinate and draw a line between the two. This happens till we reach the maximum value of x. Remember that the maximum and minimum value of x and y is same. There is more... Try the following: Input positive as well as negative value for coefficient. Drawing a sinusoidal wave This recipe also uses the previous recipe of axes drawing. The output looks like this: How to do it… The HTML code is as follows: <html> <head> <title>Equation</title> <script src="graphaxes.js"></script> <script src="plotSineEquation.js"></script> </head> <body onload=init()> <canvas width="600" height="600" id="MyCanvasArea" style="border:2px solid blue;" tabindex="0"> Canvas tag is not supported by your browser </canvas> <br> <form id="myform"> Select your starting value <select name="startvalue" onclick="init()"> <option value=-10>-10</option> <option value=-9>-9</option> <option value=-8>-8</option> <option value=-7>-7</option> <option value=-6>-6</option> <option value=-5>-5</option> <option value=-4>-4</option> <option value=-3>-3</option> <option value=-2>-2</option> </select> <br> <input type="button" value="Click to plot a sine wave" onclick="plotEquation()"> <input type="button" value="Reset" onclick="init()"> </form> </body> </html> The JavaScript code is as follows: functionplotEquation() { var s=document.forms['myform'].elements['startvalue']; var x=parseInt(s.value); //ctx.fillText(x,100,100); varxPos; varyPos; varnoofintervals=Math.round((2*Math.abs(x)+1)/2); xPos=10; yPos=can.height/2; xEnd=xPos+(2*interval); yEnd=yPos; xCtrl1=xPos+Math.ceil(interval/2); yCtrl1=yPos-200; xCtrl2=xEnd-Math.ceil(interval/2); yCtrl2=yPos+200; drawBezierCurve(ctx,xPos,yPos,xCtrl1,yCtrl1,xCtrl2,yCtrl2,xEnd,yEnd,"red",2); for(vari=1;i<noofintervals;i++) { xPos=xEnd; xEnd=xPos+(2*interval); xCtrl1=xPos+Math.floor(interval/2)+15; xCtrl2=xEnd-Math.floor(interval/2)-15; drawBezierCurve(ctx,xPos,yPos,xCtrl1,yCtrl1,xCtrl2,yCtrl2,xEnd,yEnd,"red",2); } } function drawBezierCurve(ctx,xstart,ystart,xctrl1,yctrl1,xctrl2,yctrl2,xend,yend,color,width) { ctx.strokeStyle=color; ctx.lineWidth=width; ctx.beginPath(); ctx.moveTo(xstart,ystart); ctx.bezierCurveTo(xctrl1,yctrl1,xctrl2,yctrl2,xend,yend); ctx.stroke(); } How it works... We use the Bezier curve to draw the sine wave along the x axis. A bit of calculation using the interval between two points, which encompasses a phase, is done to achieve this. The number of intervals is calculated in the following statement: varnoofintervals=Math.round((2*Math.abs(x)+1)/2); where x is the value in the drop-down list. One phase is initially drawn before the for loop begins. The subsequent phases are drawn in the for loop. The start and end x coordinate changes in every iteration. The ending coordinate for the first sine wave is the first coordinate for the subsequent sine wave. Drawing a line graph Graphs are always informative. The basic graphical representation can be a line graph, which is demonstrated here: How to do it… The HTML code is as follows: <html> <head> <title>A simple Line chart</title> <script src="linechart.js"></script> </head> <body onload=init()> <h1>Your WhatsApp Usage</h1> <canvas width="600" height="500" id="MyCanvasArea" style="border:2px solid blue;" tabindex="0"> Canvas tag is not supported by your browser </canvas> </body> </html> The JavaScript code is as follows: functioninit() { vargCanvas = document.getElementById('MyCanvasArea'); // Ensure that the element is available within the DOM varctx = gCanvas.getContext('2d'); // Bar chart data var data = new Array(7); data[0] = "1,130"; data[1] = "2,140"; data[2] = "3,150"; data[3] = "4,140"; data[4] = "5,180"; data[5] = "6,240"; data[6] = "7,340"; // Draw the bar chart drawLineGraph(ctx, data, 70, 100, (gCanvas.height - 40), 50); } functiondrawLineGraph(ctx, data, startX, barWidth, chartHeight, markDataIncrementsIn) { // Draw the x axis ctx.lineWidth = "3.0"; var max=0; varstartY = chartHeight; drawLine(ctx, startX, startY, startX, 1); drawLine(ctx, startX, startY, 490, startY); for(vari=0,m=0;i<data.length;i++,m+=60) { ctx.lineWidth=0.3; drawLine(ctx,startX,startY-m,490,startY-m) ctx.font="bold 12pt Arial"; ctx.fillText(m,startX-30,startY-m); } for(vari=0,m=0;i<data.length;i++,m+=61) { ctx.lineWidth=0.3; drawLine(ctx, startX+m, startY, startX+m, 1); var values=data[i].split(","); var day; switch(values[0]) { case "1": day="MO"; break; case "2": day="TU"; break; case "3": day="WE"; break; case "4": day="TH"; break; case "5": day="FR"; break; case "6": day="SA"; break; case "7": day="SU"; break; } ctx.fillText(day,startX+m-10, startY+20); } //plot the points and draw lines between them varstartAngle = 0 * (Math.PI/180); varendAngle = 360 * (Math.PI/180); varnewValues; for(vari=0,m=0;i<data.length;i++,m+=60) { ctx.beginPath(); var values=data[i].split(","); varxPos=startX+parseInt(values[0])+m; varyPos=chartHeight-parseInt(values[1]); ctx.arc(xPos, yPos, 5, startAngle,endAngle, false); ctx.fillStyle="red"; ctx.fill(); ctx.fillStyle="blue"; ctx.fillText(values[1],xPos, yPos); ctx.stroke(); ctx.closePath(); if(i>0){ ctx.strokeStyle="green"; ctx.lineWidth=1.5; ctx.moveTo(oldxPos,oldyPos); ctx.lineTo(xPos,yPos); ctx.stroke(); } oldxPos=xPos; oldyPos=yPos; } } functiondrawLine(ctx, startx, starty, endx, endy) { ctx.beginPath(); ctx.moveTo(startx, starty); ctx.lineTo(endx, endy); ctx.closePath(); ctx.stroke(); } How it works... All the graphs in the subsequent recipes also work on an array named data. The array element has two parts: one indicates the day and the second indicates the usage in minutes. A split function down the code splits the element into two independent elements. The coordinates are calculated using a parameter named m, which is used in calculating the value of the x coordinate. The value in minutes and the chart height is used to calculate the position of y coordinate. Inside the loop, there are two coordinates, which are used to draw a line. One in the moveTo() method and the other in the lineTo() method. However, the coordinates oldxPos and oldyPos are not calculated in the first iteration, for the simple reason that we cannot draw a line with a single coordinate. Next iteration onwards, we have two coordinates and then the line is drawn between the prior and current coordinates. There is more... Use your own data Drawing a bar graph Another typical representation, which is widely used, is the bar graph. Here is an output of this recipe: How to do it… The HTML code is as follows: <html> <head> <title>A simple Bar chart</title> <script src="bargraph.js"></script> </head> <body onload=init()> <h1>Your WhatsApp Usage</h1> <canvas width="600" height="500" id="MyCanvasArea" style="border:2px solid blue;" tabindex="0"> Canvas tag is not supported by your browser </canvas> </body> </html> The JavaScript code is as follows: functioninit(){ vargCanvas = document.getElementById('MyCanvasArea'); // Ensure that the element is available within the DOM varctx = gCanvas.getContext('2d'); // Bar chart data var data = new Array(7); data[0] = "MON,130"; data[1] = "TUE,140"; data[2] = "WED,150"; data[3] = "THU,140"; data[4] = "FRI,170"; data[5] = "SAT,250"; data[6] = "SUN,340"; // Draw the bar chart drawBarChart(ctx, data, 70, 100, (gCanvas.height - 40), 50); } functiondrawBarChart(ctx, data, startX, barWidth, chartHeight, markDataIncrementsIn) { // Draw the x and y axes ctx.lineWidth = "3.0"; varstartY = chartHeight; //drawLine(ctx, startX, startY, startX, 30); drawBarGraph(ctx, startX, startY, startX, 30,data,chartHeight); drawLine(ctx, startX, startY, 570, startY); } functiondrawLine(ctx, startx, starty, endx, endy) { ctx.beginPath(); ctx.moveTo(startx, starty); ctx.lineTo(endx, endy); ctx.closePath(); ctx.stroke(); } functiondrawBarGraph(ctx, startx, starty, endx, endy,data,chartHeight) { ctx.beginPath(); ctx.moveTo(startx, starty); ctx.lineTo(endx, endy); ctx.closePath(); ctx.stroke(); var max=0; //code to label x-axis for(i=0;i<data.length;i++) { varxValues=data[i].split(","); varxName=xValues[0]; ctx.textAlign="left"; ctx.fillStyle="#b90000"; ctx.font="bold 15px Arial"; ctx.fillText(xName,startx+i*50+i*20,chartHeight+15,200); var height=parseInt(xValues[1]); if(parseInt(height)>parseInt(max)) max=height; varcolor='#'+Math.floor(Math.random()*16777215).toString(16); drawBar(ctx,startx+i*50+i*20,(chartHeight-height),height,50,color); ctx.fillText(Math.round(height/60)+" hrs",startx+i*50+i*20,(chartHeight-height-20),200); } //title the x-axis ctx.beginPath(); ctx.fillStyle="black"; ctx.font="bolder 20pt Arial"; ctx.fillText("<------------Weekdays------------>",startx+150,chartHeight+35,200); ctx.closePath(); //y-axis labelling varylabels=Math.ceil(max/60); varyvalue=0; ctx.font="bold 15pt Arial"; for(i=0;i<=ylabels;i++) { ctx.textAlign="right"; ctx.fillText(yvalue,startx-5,(chartHeight-yvalue),50); yvalue+=60; } //title the y-axis ctx.beginPath(); ctx.font = 'bolder 20pt Arial'; ctx.save(); ctx.translate(20,70); ctx.rotate(-0.5*Math.PI); varrText = 'Rotated Text'; ctx.fillText("<--------Time in minutes--------->" , 0, 0); ctx.closePath(); ctx.restore(); } functiondrawBar(ctx,xPos,yPos,height,width,color){ ctx.beginPath(); ctx.fillStyle=color; ctx.rect(xPos,yPos,width,height); ctx.closePath(); ctx.stroke(); ctx.fill(); } How it works... The processing is similar to that of a line graph, except that here there are rectangles drawn, which represent bars. Also, the number 1, 2, 3… are represented as day of the week (for example, 1 means Monday). This line in the code: varcolor='#'+Math.floor(Math.random()*16777215).toString(16); is used to generate random colors for the bars. The number 16777215 is a decimal value for #FFFFF. Note that the value of the control variable i is not directly used for drawing the bar. Rather i is manipulated to get the correct coordinates on the canvas and then the bar is drawn using the drawBar() function. drawBar(ctx,startx+i*50+i*20,(chartHeight-height),height,50,color); There is more... Use your own data and change the colors. Drawing a pie chart A share can be easily represented in form of a pie chart. This recipe demonstrates a pie chart: How to do it… The HTML code is as follows: <html> <head> <title>A simple Pie chart</title> <script src="piechart.js"></script> </head> <body onload=init()> <h1>Your WhatsApp Usage</h1> <canvas width="600" height="500" id="MyCanvasArea" style="border:2px solid blue;" tabindex="0"> Canvas tag is not supported by your browser </canvas> </body> </html> The JavaScript code is as follows: functioninit() { var can = document.getElementById('MyCanvasArea'); varctx = can.getContext('2d'); var data = [130,140,150,140,170,250,340]; varcolors = ["crimson", "blue", "yellow", "navy", "aqua", "purple","red"]; var names=["MON","TUE","WED","THU","FRI","SAT","SUN"]; varcenterX=can.width/2; varcenterY=can.height/2; //varcenter = [can.width/2,can.height / 2]; var radius = (Math.min(can.width,can.height) / 2)-50; varstartAngle=0, total=0; for(vari in data) { total += data[i]; } varincrFactor=-(centerX-centerX/2); var angle=0; for (vari = 0; i<data.length; i++){ ctx.fillStyle = colors[i]; ctx.beginPath(); ctx.moveTo(centerX,centerY); ctx.arc(centerX,centerY,radius,startAngle,startAngle+(Math.PI*2*(data[i]/total)),false); ctx.lineTo(centerX,centerY); ctx.rect(centerX+incrFactor,20,20,10); ctx.fill(); ctx.fillStyle="black"; ctx.font="bold 10pt Arial"; ctx.fillText(names[i],centerX+incrFactor,15); ctx.save(); ctx.translate(centerX,centerY); ctx.rotate(startAngle); var dx=Math.floor(can.width*0.5)-100; vardy=Math.floor(can.height*0.20); ctx.fillText(names[i],dx,dy); ctx.restore(); startAngle += Math.PI*2*(data[i]/total); incrFactor+=50; } } How it works... Again the data here is the same, but instead of bars, we use arcs here. The trick is done by changing the end angle as per the data available. Translation and rotation helps in naming the weekdays for the pie chart. There is more... Use your own data and change the colors to get acquainted. Summary Managers make decisions based on the data representations. The data is usually represented in a report form and in the form of graph or charts. The latter representation plays a major role in providing a quick review of the data. In this article, we represent dummy data in the form of graphs and chart. Resources for Article: Further resources on this subject: HTML5 Canvas[article] HTML5: Developing Rich Media Applications using Canvas[article] Building the Untangle Game with Canvas and the Drawing API[article]
Read more
  • 0
  • 0
  • 34650
Unlock access to the largest independent learning library in Tech for FREE!
Get unlimited access to 7500+ expert-authored eBooks and video courses covering every tech area you can think of.
Renews at $19.99/month. Cancel anytime
article-image-advanced-react
Packt
12 Apr 2016
7 min read
Save for later

Advanced React

Packt
12 Apr 2016
7 min read
In this article by Sven A. Robbestad, author of ReactJS Blueprints, we will cover the following topics: Understanding Webpack Adding Redux to your ReactJS app Understanding Redux reducers, actions, and the store (For more resources related to this topic, see here.) Introduction Understanding the tools you use and the libraries you include in your web app is important to make an efficient web application. In this article, we'll look at some of the difficult parts of modern web development with ReactJS, including Webpack and Redux. Webpack is an important tool for modern web developers. It is a module bundler and works by bundling all modules and files within the context of your base folder. Any file within this context is considered a module and attemptes will be made to bundled it. The only exceptions are files placed in designated vendor folders by default, that are node_modules and web_modules files. Files in these folders are explicitly required in your code to be bundled. Redux is an implementation of the Flux pattern. Flux describes how data should flow through your app. Since the birth of the pattern, there's been an explosion in the number of libraries that attempt to execute on the idea. It's safe to say that while many have enjoyed moderate success, none has been as successful as Redux. Configuring Webpack You can configure Webpack to do almost anything you want, including replacing the current code loaded in your browser with the updated code, while preserving the state of the app. Webpack is configured by writing a special configuration file, usually called webpack.config.js. In this file, you specify the entry and output parameters, plugins, module loaders, and various other configuration parameters. A very basic config file looks like this: var webpack = require('webpack'); module.exports = { entry: [ './entry' ], output: { path: './', filename: 'bundle.js' } }; It's executed by issuing this command from the command line: webpack --config webpack.config.js You can even drop the config parameter, as Webpack will automatically look for the presence of webpack.config.js if not specified. In order to convert the source files before bundling, you use module loaders. Adding this section to the Webpack config file will ensure that the babel-loader module converts JavaScript 2015 code to ECMAScript 5: module: { loaders: [{ test: /.js?$/', loader: 'babel-loader', exclude: /node_modules/, query: { presets: ['es2015','react'] } }] } The first option (required), test, is a regex match that tells Webpack which files these loader operates on. The regex tells Webpack to look for files with a period followed by the letters js and then any optional letters (?) before the end ($). This makes sure that the loader reads both plain JavaScript files and JSX files. The second option (required), loader, is the name of the package that we'll use to convert the code. The third option (optional), exclude, is another regex variable used to explicitly ignore a set of folders or files. The final option (optional), query, contains special configuration options for Babel. The recommended way to do it is actually by setting them in a special file called .babelrc. This file will be picked up automatically by Babel when transpiling files. Adding Redux to your ReactJS app When ReactJS was first introduced to the public in late 2013/early 2014, you would often hear it mentioned together with functional programming. However, there's no inherent requirement to write functional code when writing the ReactJS code, and JavaScript itself being a multi-paradigm language is neither strictly functional nor strictly imperative. Redux chose the functional approach, and it's quickly gaining traction as the superior Flux implementation. There are a number of benefits of choosing a functional, which are as follows: No side effects allowed, that is, the operation is stateless Always returns the same output for a given input Ideal for creating recursive operations Ideal for parallel execution Easy to establish the single source of truth Easy to debug Easy to persist the store state for a faster development cycle Easy to create functionality such as undo and redo Easy to inject the store state for server rendering The concept of stateless operations is possibly the number one benefit, as it makes it very easy to reason about the state of your application. This is, however, not the idiomatic Reflux approach, because it's actually designed to create many stores and has the children listen to changes separately. Application state is the only most difficult part of any application, and every single implementation of Flux has attempted to solve this problem. Redux solves it by not actually doing Flux at all but is an amalgamation of the ideas of Flux and the functional programming language Elm. There are three parts to Redux: actions, reducers, and the global store. The store In Redux, there is only one global store. It is an object that holds the state of your entire application. You create a store by passing your root reducing function (or reducer, for short) to a method called createStore. Rather than creating more stores, you use a concept called reducer composition to split data handling logic. You will then need to use a function called combineReducers to create a single root reducer. The createStore function is derived from Redux and is usually called once in the root of your app (or your store file). It is then passed on to your app and then propagated to the app's children. The only way to change the state of the store is to dispatch an action on it. This is not the same as a Flux dispatcher because Redux doesn't have one. You can also subscribe to changes from the store in order to update your components when the store changes state. Actions An action is an object that represents an intention to change the state. It must have a type field that indicates what kind of action is being performed. They can be defined as constants and imported from other modules. Apart from this requirement, the structure of the object is entirely up to you. A basic action object can look like this: { type: 'UPDATE', payload: { value: "some value" } } The payload property is optional and can be an object, as we saw earlier, or any other valid JavaScript type, such as a function or a primitive. Reducers A reducer is a function that accepts an accumulation and a value and returns a new accumulation. In other words, it returns the next state based on the previous state and an action. It must be a pure function, free of side effects, and it does not mutate the existing state. For smaller apps, it's okay to start with a single reducer, and as your app grows, you split off smaller reducers that manage specific parts of your state tree. This is what's called reducer composition and is the fundamental pattern of building apps with Redux. You start with a single reducer, and as your app grows, split it off into smaller reducers that manage specific parts of the state tree. Because reducers are just functions, you can control the order in which they are called, pass additional data, or even make reusable reducers for common tasks such as pagination. It's okay to have multiple reducers. In fact, it's encouraged. Summary In this article, you learned about Webpack and how to configure it. You also learned about adding Redux to your ReactJS app. Apart from this, you learned about Redux's reducers, actions, and the store. Resources for Article: Further resources on this subject: Getting Started with React [article] Reactive Programming and the Flux Architecture [article] Create Your First React Element [article]
Read more
  • 0
  • 0
  • 14716

article-image-mastering-fundamentals
Packt
08 Apr 2016
10 min read
Save for later

Mastering of Fundamentals

Packt
08 Apr 2016
10 min read
In this article by Piotr Sikora, author of the book Professional CSS3, you will master box model, floating's troubleshooting positioning and display types. Readers, after this article, will be more aware of the foundation of HTML and CSS. In this article, we shall cover the following topics: Get knowledge about the traditional box model Basics of floating elements The foundation of positioning elements on webpage Get knowledge about display types (For more resources related to this topic, see here.) Traditional box model Understanding box model is the foundation in CSS theories. You have to know the impact of width, height, margin, and borders on the size of the box and how can you manage it to match the element on a website. Main questions for coders and frontend developers on interviews are based on box model theories. Let's begin this important lesson, which will be the foundation for every subject. Padding/margin/border/width/height The ingredients of final width and height of the box are: Width Height Margins Paddings Borders For a better understanding of box model, here is the image from Chrome inspector: For a clear and better understanding of box model, let's analyze the image: On the image, you can see that, in the box model, we have four edges: Content edge Padding edge Border edge Margin edge The width and height of the box are based on: Width/height of content Padding Border Margin The width and height of the content in box with default box-sizing is controlled by properties: Min-width Max-width Width Min-height Max-height Height An important thing about box model is how background properties will behave. Background will be included in the content section and in the padding section (to padding edge). Let's get a code and try to point all the elements of the box model. HTML: <div class="element">   Lorem ipsum dolor sit amet consecteur </div> CSS: .element {    background: pink;    padding: 10px;    margin: 20px;   width: 100px;   height: 100px;    border: solid 10px black; }   In the browser, we will see the following: This is the view from the inspector of Google Chrome: Let's check how the areas of box model are placed in this specific example: The basic task for interviewed Front End Developer is—the box/element is described with the styles: .box {     width: 100px;     height: 200px;     border: 10px solid #000;     margin: 20px;     padding: 30px; } Please count the final width and height (the real space that is needed for this element) of this element. So, as you can see, the problem is to count the width and height of the box. Ingridients of width: Width Border left Border right Padding left Padding right Additionally, for the width of the space taken by the box: Margin left Margin right Ingridients of height: Height Border top Border bottom Padding top Padding bottom Additionally, for height of the space taken by the box: Margin top Margin bottom So, when you will sum the element, you will have an equation: Width: Box width = width + borderLeft + borderRight + paddingLeft + paddingRight Box width = 100px + 10px + 10px + 30px + 30px = 180px Space width: width = width + borderLeft + borderRight + paddingLeft + paddingRight +  marginLeft + marginRight width = 100px + 10px + 10px + 30px + 30px + 20px + 20 px = 220px Height: Box height = height + borderTop + borderBottom + paddingTop + paddingBottom Box height  = 200px + 10px + 10px + 30px + 30px = 280px Space height: Space height = height + borderTop + borderBottom + paddingTop + paddingBottom +  marginTop + marginBottom Space height = 200px + 10px + 10px + 30px + 30px + 20px + 20px = 320px Here, you can check it in a real browser: Omiting problems with traditional box model (box sizing) The basic theory of box model is pretty hard to learn. You need to remember about all the elements of width/height, even if you set the width and height. The hardest for beginners is the understanding of padding, which shouldn't be counted as a component of width and height. It should be inside the box, and it should impact on these values. To change this behavior to support CSS3 since Internet Explorer 8, box sizing comes to picture. You can set the value: box-sizing: border-box What it gives to you? Finally, the counting of box width and height will be easier because box padding and border is inside the box. So, if we are taking our previous class: .box {     width: 100px;     height: 200px;     border: 10px solid #000;     margin: 20px;     padding: 30px; } We can count the width and height easily: Width = 100px Height = 200px Additionally, the space taken by the box: Space width = 140 px (because of the 20 px margin on both sides: left and right) Space height = 240 px (because of the 20 px margin on both sides: top and bottom) Here is a sample from Chrome: So, if you don't want to repeat all the problems of a traditional box model, you should use it globally for all the elements. Of course, it's not recommended in projects that you are getting in some old project, for example, from new client that needs some small changes:  * { width: 100px; } Adding the preceding code can make more harm than good because of the inheritance of this property for all the elements, which are now based on a traditional box model. But for all the new projects, you should use it. Floating elements Floating boxes are the most used in modern layouts. The theory of floating boxes was still used especially in grid systems and inline lists in CSS frameworks. For example, class and mixin inline-list (in Zurb Foundation framework) are based on floats. Possibilities of floating elements Element can be floated to the left and right. Of course, there is a method that is resetting floats too. The possible values are: float: left; // will float element to left float: right; // will float element to right float: none; // will reset float Most known floating problems When you are using floating elements, you can have some issues. Most known problems with floated elements are: Too big elements (because of width, margin left/right, padding left/right, and badly counted width, which is based on box model) Not cleared floats All of these problems provide a specific effect, which you can easily recognize and then fix. Too big elements can be recognized when elements are not in one line and it should. What you should check first is if the box-sizing: border-box is applied. Then, check the width, padding, and margin. Not cleared floats you can easily recognize when to floating structure some elements from next container are floated. It means that you have no clearfix in your floating container. Define clearfix/class/mixin When I was starting developing HTML and CSS code, there was a method to clear the floats with classes .cb or .clear, both defined as: .clearboth, .cb {     clear: both } This element was added in the container right after all the floated elements. This is important to remember about clearing the floats because the container which contains floating elements won't inherit the height of highest floating element (will have a height equal 0). For example: <div class="container">     <div class="float">         … content ...     </div>     <div class="float">         … content ...     </div>     <div class="clearboth"></div> </div> Where CSS looks like this: .float {     width: 100px;     height: 100px;     float: left; }   .clearboth {     clear: both } Nowadays, there is a better and faster way to clear floats. You can do this with clearfix, which can be defined like this: .clearfix:after {     content: " ";     visibility: hidden;     display: block;     height: 0;     clear: both; } You can use in HTML code: <div class="container clearfix">     <div class="float">         … content ...     </div>     <div class="float">         … content ...     </div> </div> The main reason to switch on clearfix is that you save one tag (with clears both classes). Recommended usage is based on the clearfix mixin, which you can define like this in SASS: =clearfix   &:after     content: " "     visibility: hidden     display: block     height: 0     clear: both So, every time you need to clear floating in some container, you need to invoke it. Let's take the previous code as an example: <div class="container">     <div class="float">         … content ...     </div>     <div class="float">         … content ...     </div> </div> A container can be described as: .container   +clearfix Example of using floating elements The most known usage of float elements is grids. Grid is mainly used to structure the data displayed on a webpage. In this article, let's check just a short draft of grid. Let's create an HTML code: <div class="row">     <div class="column_1of2">         Lorem     </div>     <div class="column_1of2">         Lorem     </div>   </div> <div class="row">     <div class="column_1of3">         Lorem     </div>     <div class="column_1of3">         Lorem     </div>     <div class="column_1of3">         Lorem     </div>   </div>   <div class="row">     <div class="column_1of4">         Lorem     </div>     <div class="column_1of4">         Lorem     </div>     <div class="column_1of4">         Lorem     </div>     <div class="column_1of4">         Lorem     </div> </div> And SASS: *   box-sizing: border-box =clearfix   &:after     content: " "     visibility: hidden     display: block     height: 0     clear: both .row   +clearfix .column_1of2   background: orange   width: 50%   float: left   &:nth-child(2n)     background: red .column_1of3   background: orange   width: (100% / 3)   float: left   &:nth-child(2n)     background: red .column_1of4   background: orange   width: 25%   float: left   &:nth-child(2n)     background: red The final effect: As you can see, we have created a structure of a basic grid. In places where HTML code is placed, Lorem here is a full lorem ipsum to illustrate the grid system. Summary In this article, we studied about the traditional box model and floating elements in detail. Resources for Article: Further resources on this subject: Flexbox in CSS [article] CodeIgniter Email and HTML Table [article] Developing Wiki Seek Widget Using Javascript [article]
Read more
  • 0
  • 0
  • 13522

article-image-using-native-sdks-and-libraries-react-native
Emilio Rodriguez
07 Apr 2016
6 min read
Save for later

Using Native SDKs and Libraries in React Native

Emilio Rodriguez
07 Apr 2016
6 min read
When building an app in React Native we may end up needing to use third-party SDKs or libraries. Most of the time, these are only available in their native version, and, therefore, only accessible as Objective-C or Swift libraries in the case of iOS apps or as Java Classes for Android apps. Only in a few cases these libraries are written in JavaScript and even then, they may need pieces of functionality not available in React Native such as DOM access or Node.js specific functionality. In my experience, this is one of the main reasons driving developers and IT decision makers in general to run away from React Native when considering a mobile development framework for their production apps. The creators of React Native were fully aware of this potential pitfall and left a door open in the framework to make sure integrating third-party software was not only possible but also quick, powerful, and doable by any non-iOS/Android native developer (i.e. most of the React Native developers). As a JavaScript developer, having to write Objective-C or Java code may not be very appealing in the beginning, but once you realize the whole process of integrating a native SDK can take as little as eight lines of code split in two files (one header file and one implementation file), the fear quickly fades away and the feeling of being able to perform even the most complex task in a mobile app starts to take over. Suddenly, the whole power of iOS and Android can be at any React developer’s disposal. To better illustrate how to integrate a third-party SDK we will use one of the easiest to integrate payment providers: Paymill. If we take a look at their site, we notice that only iOS and Android SDKs are available for mobile payments. That should leave out every app written in React Native if it wasn’t for the ability of this framework to communicate with native modules. For the sake of convenience I will focus this article on the iOS module. Step 1: Create two native files for our bridge. We need to create an Objective-C class, which will serve as a bridge between our React code and Paymill’s native SDK. Normally, an Objective-C class is made out of two files, a .m and a .h, holding the module implementation and the header for this module respectively. To create the .h file we can right-click on our project’s main folder in XCode > New File > Header file. In our case, I will call this file PaymillBridge.h. For React Native to communicate with our bridge, we need to make it implement the RTCBridgeModule included in React Native. To do so, we only have to make sure our .h file looks like this: // PaymillBridge.h #import "RCTBridgeModule.h" @interface PaymillBridge : NSObject <RCTBridgeModule> @end We can follow a similar process to create the .m file: Right-click our project’s main folder in XCode > New File > Objective-C file. The module implementation file should include the RCT_EXPORT_MODULE macro (also provided in any React Native project): // PaymillBridge.m @implementation PaymillBridge RCT_EXPORT_MODULE(); @end A macro is just a predefined piece of functionality that can be imported just by calling it. This will make sure React is aware of this module and would make it available for importing in your app. Now we need to expose the method we need in order to use Paymill’s services from our JavaScript code. For this example we will be using Paymill’s method to generate a token representing a credit card based on a public key and some credit card details: generateTokenWithPublicKey. To do so, we need to use another macro provided by React Native: RCT_EXPORT_METHOD. // PaymillBridge.m @implementation PaymillBridge RCT_EXPORT_MODULE(); RCT_EXPORT_METHOD(generateTokenWithPublicKey: (NSString *)publicKey cardDetails:(NSDictionary *)cardDetails callback:(RCTResponseSenderBlock)callback) { //… Implement the call as described in the SDK’s documentation … callback(@[[NSNull null], token]); } @end In this step we will have to write some Objective-C but most likely it would be a very simple piece of code using the examples stated in the SDK’s documentation. One interesting point is how to send data from the native SDK to our React code. To do so you need to pass a callback as you can see I did as the last parameter of our exported method. Callbacks in React Native’s bridges have to be defined as RCTResponseSenderBlock. Once we do this, we can call this callback passing an array of parameters, which will be sent as parameters for our JavaScript function in React Native (in our case we decided to pass two parameters back: an error set to null following the error handling conventions of node.js, and the token generated by Paymill natively). Step 2: Call our bridge from our React Native code. Once the module is properly set up, React Native makes it available in our app just by importing it from our JavaScript code: // PaymentComponent.js var Paymill = require('react-native').NativeModules.PaymillBridge; Paymill.generateTokenWithPublicKey( '56s4ad6a5s4sd5a6', cardDetails, function(error, token){ console.log(token); }); NativeModules holds the list of modules we created implementing the RCTBridgeModule. React Native makes them available by the name we chose for our Objective-C class name (PaymillBridge in our example). Then, we can call any exported native method as a normal JavaScript method from our React Native Component or library. Going Even Further That should do it for any basic SDK, but React Native gives developers a lot more control on how to communicate with native modules. For example, we may want to force the module to be run in the main thread. For that we just need to add an extra method to our native module implementation: // PaymillBridge.m @implementation PaymillBridge //... - (dispatch_queue_t)methodQueue { return dispatch_get_main_queue(); } Just by adding this method to our PaymillBridge.m React Native will force all the functionality related to this module to be run on the main thread, which will be needed when running main-thread-only iOS API. And there is more: promises, exporting constants, sending events to JavaScript, etc. More complex functionality can be found in the official documentation of React Native; the topics covered on this article, however, should solve 80 percent of the cases when implementing most of the third-party SDKs. About the Author Emilio Rodriguez started working as a software engineer for Sun Microsystems in 2006. Since then, he has focused his efforts on building a number of mobile apps with React Native while contributing to the React Native project. These contributions helped his understand how deep and powerful this framework is.
Read more
  • 0
  • 2
  • 33600

article-image-caching-symfony
Packt
05 Apr 2016
15 min read
Save for later

Caching in Symfony

Packt
05 Apr 2016
15 min read
In this article by Sohail Salehi, author of the book, Mastering Symfony, we are going to discuss performance improvement using cache. Caching is a vast subject and needs its own book to be covered properly. However, in our Symfony project, we are interested in two types of caches only: Application cache Database cache We will see what caching facilities are provided in Symfony by default and how we can use them. We are going to apply the caching techniques on some methods in our projects and watch the performance improvement. By the end of this article, you will have a firm understanding about the usage of HTTP cache headers in the application layer and caching libraries. (For more resources related to this topic, see here.) Definition of cache Cache is a temporary place that stores contents that can be served faster when they are needed. Considering that we already have a permanent place on disk to store our web contents (templates, codes, and database tables), cache sounds like a duplicate storage. That is exactly what they are. They are duplicates and we need them because, in return for consuming an extra space to store the same data, they provide a very fast response to some requests. So this is a very good trade-off between storage and performance. To give you an example about how good this deal can be, consider the following image. On the left side, we have a usual client/server request/response model and let's say the response latency is two seconds and there are only 100 users who hit the same content per hour: On the right side, however, we have a cache layer that sits between the client and server. What it does basically is receive the same request and pass it to the server. The server sends a response to the cache and, because this response is new to the cache, it will save a copy (duplicate) of the response and then pass it back to the client. The latency is 2 + 0.2 seconds. However, it doesn't add up, does it? The purpose of using cache was to improve the overall performance and reduce the latency. It has already added more delays to the cycle. With this result, how could it possibly be beneficial? The answer is in the following image: Now, with the response being cached, imagine the same request comes through. (We have about 100 requests/hour for the same content, remember?) This time, the cache layer looks into its space, finds the response, and sends it back to the client, without bothering the server. The latency is 0.2 seconds. Of course, these are only imaginary numbers and situations. However, in the simplest form, this is how cache works. It might not be very helpful on a low traffic website; however, when we are dealing with thousands of concurrent users on a high traffic website, then we can appreciate the value of caching. So, according to the previous images, we can define some terminology and use them in this article as we continue. In the first image, when a client asked for that page, it wasn't exited and the cache layer had to store a copy of its contents for the future references. This is called Cache Miss. However, in the second image, we already had a copy of the contents stored in the cache and we benefited from it. This is called Cache Hit. Characteristics of a good cache If you do a quick search, you will find that a good cache is defined as the one which misses only once. In other words, this cache miss happens only if the content has not been requested before. This feature is necessary but it is not sufficient. To clarify the situation a little bit, let's add two more terminology here. A cache can be in one of the following states: fresh (has the same contents as the original response) and stale (has the old response's contents that have now changed on the server). The important question here is for how long should a cache be kept? We have the power to define the freshness of a cache via a setting expiration period. We will see how to do this in the coming sections. However, just because we have this power doesn't mean that we are right about the content's freshness. Consider the situation shown in the following image: If we cache a content for a long time, cache miss won't happen again (which satisfies the preceding definition), but the content might lose its freshness according to the dynamic resources that might change on the server. To give you an example, nobody likes to read the news of three months ago when they open the BBC website. Now, we can modify the definition of a good cache as follows: A cache strategy is considered to be good if cache miss for the same content happens only once, while the cached contents are still fresh. This means that defining the cache expiry time won't be enough and we need another strategy to keep an eye on cache freshness. This happens via a cache validation strategy. When the server sends a response, we can set the validation rules on the basis of what really matters on the server side, and this way, we can keep the contents stored in the cache fresh, as shown in the following image. We will see how to do this in Symfony soon. Caches in a Symfony project In this article, we will focus on two types of caches: The gateway cache (which is called reverse proxy cache as well) and doctrine cache. As you might have guessed, the gateway cache deals with all of the HTTP cache headers. Symfony comes with a very strong gateway cache out of the box. All you need to do is just activate it in your front controller then start defining your cache expiration and validation strategies inside your controllers. That said, it does not mean that you are forced or restrained to use the Symfony cache only. If you prefer other reverse proxy cache libraries (that is, Varnish or Django), you are welcome to use them. The caching configurations in Symfony are transparent such that you don't need to change a single line inside your controllers when you change your caching libraries. Just modify your config.yml file and you will be good to go. However, we all know that caching is not for application layers and views only. Sometimes, we need to cache any database-related contents as well. For our Doctrine ORM, this includes metadata cache, query cache, and result cache. Doctrine comes with its own bundle to handle these types of caches and it uses a wide range of libraries (APC, Memcached, Redis, and so on) to do the job. Again, we don't need to install anything to use this cache bundle. If we have Doctrine installed already, all we need to do is configure something and then all the Doctrine caching power will be at our disposal. Putting these two caching types together, we will have a big picture to cache our Symfony project: As you can see in this image, we might have a problem with the final cached page. Imagine that we have a static page that might change once a week, and in this page, there are some blocks that might change on a daily or even hourly basis, as shown in the following image. The User dashboard in our project is a good example. Thus, if we set the expiration on the gateway cache to one week, we cannot reflect all of those rapid updates in our project and task controllers. To solve this problem, we can leverage from Edge Side Includes (ESI) inside Symfony. Basically, any part of the page that has been defined inside an ESI tag can tell its own cache story to the gateway cache. Thus, we can have multiple cache strategies living side by side inside a single page. With this solution, our big picture will look as follows: Thus, we are going to use the default Symfony and Doctrine caching features for application and model layers and you can also use some popular third-party bundles for more advanced settings. If you completely understand the caching principals, moving to other caching bundles would be like a breeze. Key players in the HTTP cache header Before diving into the Symfony application cache, let's familiarize ourselves with the elements that we need to handle in our cache strategies. To do so, open https://www.wikipedia.org/ in your browser and inspect any resource with the 304 response code and ponder on request/response headers inside the Network tab: Among the response elements, there are four cache headers that we are interested in the most: expires and cache-control, which will be used for an expiration model, and etag and last-modified, which will be used for a validation model. Apart from these cache headers, we can have variations of the same cache (compressed/uncompressed) via the Vary header and we can define a cache as private (accessible by a specific user) or public (accessible by everyone). Using the Symfony reverse proxy cache There is no complicated or lengthy procedure required to activate the Symfony's gateway cache. Just open the front controller and uncomment the following lines: // web/app.php <?php //... require_once __DIR__.'/../app/AppKernel.php'; //un comment this line require_once __DIR__.'/../app/AppCache.php'; $kernel = new AppKernel('prod', false); $kernel->loadClassCache(); // and this line $kernel = new AppCache($kernel); // ... ?> Now, the kernel is wrapped around the Application Cache layer, which means that any request coming from the client will pass through this layer first. Set the expiration for the dashboard page Log in to your project and click on the Request/Response section in the debug toolbar. Then, scroll down to Response Headers and check the contents: As you can see, only cache-control is sitting there with some default values among the cache headers that we are interested in. When you don't set any value for Cache-Control, Symfony considers the page contents as private to keep them safe. Now, let's go to the Dashboard controller and add some gateway cache settings to the indexAction() method: // src/AppBundle/Controller/DashboardController.php <?php namespace AppBundleController; use SymfonyBundleFrameworkBundleControllerController; use SymfonyComponentHttpFoundationResponse; class DashboardController extends Controller { public function indexAction() { $uId = $this->getUser()->getId(); $util = $this->get('mava_util'); $userProjects = $util->getUserProjects($uId); $currentTasks= $util->getUserTasks($uId, 'in progress'); $response = new Response(); $date = new DateTime('+2 days'); $response->setExpires($date); return $this->render( 'CoreBundle:Dashboard:index.html.twig', array( 'currentTasks' => $currentTasks, 'userProjects' => $userProjects ), $response ); } } You might have noticed that we didn't change the render() method. Instead, we added the response settings as the third parameter of this method. This is a good solution because now we can keep the current template structure and adding new settings won't require any other changes in the code. However, you might wonder what other options do we have? We can save the whole $this->render() method in a variable and assign a response setting to it as follows: // src/AppBundle/Controller/DashboardController.php <?php // ... $res = $this->render( 'AppBundle:Dashboard:index.html.twig', array( 'currentTasks' => $currentTasks, 'userProjects' => $userProjects ) ); $res->setExpires($date); return $res; ?> Still looks like a lot of hard work for a simple response header setting. So let me introduce a better option. We can use the @Cache annotation as follows: // src/AppBundle/Controller/DashboardController.php <?php namespace AppBundleController; use SymfonyBundleFrameworkBundleControllerController; use SensioBundleFrameworkExtraBundleConfigurationCache; class DashboardController extends Controller { /** * @Cache(expires="next Friday") */ public function indexAction() { $uId = $this->getUser()->getId(); $util = $this->get('mava_util'); $userProjects = $util->getUserProjects($uId); $currentTasks= $util->getUserTasks($uId, 'in progress'); return $this->render( 'AppBundle:Dashboard:index.html.twig', array( 'currentTasks' => $currentTasks, 'userProjects' => $userProjects )); } } Have you noticed that the response object is completely removed from the code? With an annotation, all response headers are sent internally, which helps keep the original code clean. Now that's what I call zero-fee maintenance. Let's check our response headers in Symfony's debug toolbar and see what it looks like: The good thing about the @Cache annotation is that they can be nested. Imagine you have a controller full of actions. You want all of them to have a shared maximum age of half an hour except one that is supposed to be private and should be expired in five minutes. This sounds like a lot of code if you going are to use the response objects directly, but with an annotation, it will be as simple as this: <?php //... /** * @Cache(smaxage="1800", public="true") */ class DashboardController extends Controller { public function firstAction() { //... } public function secondAction() { //... } /** * @Cache(expires="300", public="false") */ public function lastAction() { //... } } The annotation defined before the controller class will apply to every single action, unless we explicitly add a new annotation for an action. Validation strategy In the previous example, we set the expiry period very long. This means that if a new task is assigned to the user, it won't show up in his dashboard because of the wrong caching strategy. To fix this issue, we can validate the cache before using it. There are two ways for validation: We can check the content's date via the Last-Modified header: In this technique, we certify the freshness of a content via the time it has been modified. In other words, if we keep track of the dates and times of each change on a resource, then we can simply compare that date with cache's date and find out if it is still fresh. We can use the ETag header as a unique content signature: The other solution is to generate a unique string based on the contents and evaluate the cache's freshness based on its signature. We are going to try both of them in the Dashboard controller and see them in action. Using the right validation header is totally dependent on the current code. In some actions, calculating modified dates is way easier than creating a digital footprint, while in others, going through the date and time function might looks costly. Of course, there are situations where generating both headers are critical. So creating it is totally dependent on the code base and what you are going to achieve. As you can see, we have two entities in the indexAction() method and, considering the current code, generating the ETag header looks practical. So the validation header will look as follows: // src/AppBundle/Controller/DashboardController.php <?php //... class DashboardController extends Controller { /** * @Cache(ETag="userProjects ~ finishedTasks") */ public function indexAction() { //... } } The next time a request arrives, the cache layer looks into the ETag value in the controller, compares it with its own ETag, and calls the indexAction() method; only, there is a difference between these two. How to mix expiration and validation strategies Imagine that we want to keep the cache fresh for 10 minutes and simultaneously keep an eye on any changes over user projects or finished tasks. It is obvious that tasks won't finish every 10 minutes and it is far beyond reality to expect changes on project status during this period. So what we can do to make our caching strategy efficient is that we can combine Expiration and Validation together and apply them to the Dashboard Controller as follows: // src/CoreBundle/Controller/DashboardController.php <?php //... /** * @Cache(expires="600") */ class DashboardController extends Controller { /** * @Cache(ETag="userProjects ~ finishedTasks") */ public function indexAction() { //... } } Keep in mind that Expiration has a higher priority over Validation. In other words, the cache is fresh for 10 minutes, regardless of the validation status. So when you visit your dashboard for the first time, a new cache plus a 302 response (not modified) is generated automatically and you will hit cache for the next 10 minutes. However, what happens after 10 minutes is a little different. Now, the expiration status is not satisfying; thus, the HTTP flow falls into the validation phase and in case nothing happened to the finished tasks status or the your project status, then a new expiration period is generated and you hit the cache again. However, if there is any change in your tasks or project status, then you will hit the server to get the real response, and a new cache from response's contents, new expiration period, and new ETag are generated and stored in the cache layer for future references. Summary In this article, you learned about the basics of gateway and Doctrine caching. We saw how to set expiration and validation strategies using HTTP headers such as Cache-Control, Expires, Last-Modified, and ETag. You learned how to set public and private access levels for a cache and use an annotation to define cache rules in the controller. Resources for Article: Further resources on this subject: User Interaction and Email Automation in Symfony 1.3: Part1 [article] The Symfony Framework – Installation and Configuration [article] User Interaction and Email Automation in Symfony 1.3: Part2 [article]
Read more
  • 0
  • 0
  • 15194
article-image-how-get-started-redux-react-native
Emilio Rodriguez
04 Apr 2016
5 min read
Save for later

How To Get Started with Redux in React Native

Emilio Rodriguez
04 Apr 2016
5 min read
In mobile development there is a need for architectural frameworks, but complex frameworks designed to be used in web environments may end up damaging the development process or even the performance of our app. Because of this, some time ago I decided to introduce in all of my React Native projects the leanest framework I ever worked with: Redux. Redux is basically a state container for JavaScript apps. It is 100 percent library-agnostic so you can use it with React, Backbone, or any other view library. Moreover, it is really small and has no dependencies, which makes it an awesome tool for React Native projects. Step 1: Install Redux in your React Native project. Redux can be added as an npm dependency into your project. Just navigate to your project’s main folder and type: npm install --save react-redux By the time this article was written React Native was still depending on React Redux 3.1.0 since versions above depended on React 0.14, which is not 100 percent compatible with React Native. Because of this, you will need to force version 3.1.0 as the one to be dependent on in your project. Step 2: Set up a Redux-friendly folder structure. Of course, setting up the folder structure for your project is totally up to every developer but you need to take into account that you will need to maintain a number of actions, reducers, and components. Besides, it’s also useful to keep a separate folder for your API and utility functions so these won’t be mixing with your app’s core functionality. Having this in mind, this is my preferred folder structure under the src folder in any React Native project: Step 3: Create your first action. In this article we will be implementing a simple login functionality to illustrate how to integrate Redux inside React Native. A good point to start this implementation is the action, a basic function called from the component whenever we want the whole state of the app to be changed (i.e. changing from the logged out state into the logged in state). To keep this example as concise as possible we won’t be doing any API calls to a backend – only the pure Redux integration will be explained. Our action creator is a simple function returning an object (the action itself) with a type attribute expressing what happened with the app. No business logic should be placed here; our action creators should be really plain and descriptive. Step 4: Create your first reducer. Reducers are the ones in charge of updating the state of the app. Unlike in Flux, Redux only has one store for the whole app, but it will be conveniently name-spaced automatically by Redux once the reducers have been applied. In our example, the user reducer needs to be aware of when the user is logged in. Because of that, it needs to import the LOGIN_SUCCESS constant we defined in our actions before and export a default function, which will be called by Redux every time an action occurs in the app. Redux will automatically pass the current state of the app and the action occurred. It’s up to the reducer to realize if it needs to modify the state or not based on the action.type. That’s why almost every time our reducer will be a function containing a switch statement, which modifies and returns the state based on what action occurred. It’s important to state that Redux works with object references to identify when the state is changed. Because of this, the state should be cloned before any modification. It’s also interesting to know that the action passed to the reducers can contain other attributes apart from type. For example, when doing a more complex login, the user first name and last name can be added to the action by the action created and used by the reducer to update the state of the app. Step 5: Create your component. This step is almost pure React Native coding. We need a component to trigger the action and to respond to the change of state in the app. In our case it will be a simple View containing a button that disappears when logged in. This is a normal React Native component except for some pieces of the Redux boilerplate: The three import lines at the top will require everything we need from Redux ‘mapStateToProps’ and ‘mapDispatchToProps’ are two functions bound with ‘connect’ to the component: this makes Redux know that this component needs to be passed a piece of the state (everything under ‘userReducers’) and all the actions available in the app. Just by doing this, we will have access to the login action (as it is used in the onLoginButtonPress) and to the state of the app (as it is used in the !this.props.user.loggedIn statement) Step 6: Glue it all from your index.ios.js. For Redux to apply its magic, some initialization should be done in the main file of your React Native project (index.ios.js). This is pure boilerplate and only done once: Redux needs to inject a store holding the app state into the app. To do so, it requires a ‘Provider’ wrapping the whole app. This store is basically a combination of reducers. For this article we only need one reducer, but a full app will include many others and each of them should be passed into the combineReducers function to be taken into account by Redux whenever an action is triggered. About the Author Emilio Rodriguez started working as a software engineer for Sun Microsystems in 2006. Since then, he has focused his efforts on building a number of mobile apps with React Native while contributing to the React Native project. These contributions helped his understand how deep and powerful this framework is.
Read more
  • 0
  • 0
  • 44376

article-image-making-app-react-and-material-design
Soham Kamani
21 Mar 2016
7 min read
Save for later

Making an App with React and Material Design

Soham Kamani
21 Mar 2016
7 min read
There has been much progression in the hybrid app development space, and also in React.js. Currently, almost all hybrid apps use cordova to build and run web applications on their platform of choice. Although learning React can be a bit of a steep curve, the benefit you get is that you are forced to make your code more modular, and this leads to huge long-term gains. This is great for developing applications for the browser, but when it comes to developing mobile apps, most web apps fall short because they fail to create the "native" experience that so many users know and love. Implementing these features on your own (through playing around with CSS and JavaScript) may work, but it's a huge pain for even something as simple as a material-design-oriented button. Fortunately, there is a library of react components to help us out with getting the look and feel of material design in our web application, which can then be ported to a mobile to get a native look and feel. This post will take you through all the steps required to build a mobile app with react and then port it to your phone using cordova. Prerequisites and dependencies Globally, you will require cordova, which can be installed by executing this line: npm install -g cordova Now that this is done, you should make a new directory for your project and set up a build environment to use es6 and jsx. Currently, webpack is the most popular build system for react, but if that's not according to your taste, there are many more build systems out there. Once you have your project folder set up, install react as well as all the other libraries you would be needing: npm init npm install --save react react-dom material-ui react-tap-event-plugin Making your app Once we're done, the app should look something like this:   If you just want to get your hands dirty, you can find the source files here. Like all web applications, your app will start with an index.html file: <html> <head> <title>My Mobile App</title> </head> <body> <div id="app-node"> </div> <script src="bundle.js" ></script> </body> </html> Yup, that's it. If you are using webpack, your CSS will be included in the bundle.js file itself, so there's no need to put "style" tags either. This is the only HTML you will need for your application. Next, let's take a look at index.js, the entry point to the application code: //index.js import React from 'react'; import ReactDOM from 'react-dom'; import App from './app.jsx'; const node = document.getElementById('app-node'); ReactDOM.render( <App/>, node ); What this does is grab the main App component and attach it to the app-node DOM node. Drilling down further, let's look at the app.jsx file: //app.jsx'use strict';import React from 'react';import AppBar from 'material-ui/lib/app-bar';import MyTabs from './my-tabs.jsx';let App = React.createClass({ render : function(){ return ( <div> <AppBar title="My App" /> <MyTabs /> </div> ); }});module.exports = App; Following react's philosophy of structuring our code, we can roughly break our app down into two parts: The title bar The tabs below The title bar is more straightforward and directly fetched from the material-ui library. All we have to do is supply a "title" property to the AppBar component. MyTabs is another component that we have made, put in a different file because of the complexity: 'use strict';import React from 'react';import Tabs from 'material-ui/lib/tabs/tabs';import Tab from 'material-ui/lib/tabs/tab';import Slider from 'material-ui/lib/slider';import Checkbox from 'material-ui/lib/checkbox';import DatePicker from 'material-ui/lib/date-picker/date-picker';import injectTapEventPlugin from 'react-tap-event-plugin';injectTapEventPlugin();const styles = { headline: { fontSize: 24, paddingTop: 16, marginBottom: 12, fontWeight: 400 }};const TabsSimple = React.createClass({ render: () => ( <Tabs> <Tab label="Item One"> <div> <h2 style={styles.headline}>Tab One Template Example</h2> <p> This is the first tab. </p> <p> This is to demonstrate how easy it is to build mobile apps with react </p> <Slider name="slider0" defaultValue={0.5}/> </div> </Tab> <Tab label="Item 2"> <div> <h2 style={styles.headline}>Tab Two Template Example</h2> <p> This is the second tab </p> <Checkbox name="checkboxName1" value="checkboxValue1" label="Installed Cordova"/> <Checkbox name="checkboxName2" value="checkboxValue2" label="Installed React"/> <Checkbox name="checkboxName3" value="checkboxValue3" label="Built the app"/> </div> </Tab> <Tab label="Item 3"> <div> <h2 style={styles.headline}>Tab Three Template Example</h2> <p> Choose a Date:</p> <DatePicker hintText="Select date"/> </div> </Tab> </Tabs> )});module.exports = TabsSimple; This file has quite a lot going on, so let’s break it down step by step: We import all the components that we're going to use in our app. This includes tabs, sliders, checkboxes, and datepickers. injectTapEventPlugin is a plugin that we need in order to get tab switching to work. We decide the style used for our tabs. Next, we make our Tabs react component, which consists of three tabs: The first tab has some text along with a slider. The second tab has a group of checkboxes. The third tab has a pop-up datepicker. Each component has a few keys, which are specific to it (such as the initial value of the slider, the value reference of the checkbox, or the placeholder for the datepicker). There are a lot more properties you can assign, which are specific to each component. Building your App For building on Android, you will first need to install the Android SDK. Now that we have all the code in place, all that is left is building the app. For this, make a new directory, start a new cordova project, and add the Android platform, by running the following on your terminal: mkdir my-cordova-project cd my-cordova-project cordova create . cordova platform add android Once the installation is complete, build the code we just wrote previously. If you are using the same build system as the source code, you will have only two files, that is, index.html and bundle.min.js. Delete all the files that are currently present in the www folder of your cordova project and copy those two files there instead. You can check whether your app is working on your computer by running cordova serve and going to the appropriate address on your browser. If all is well, you can build and deploy your app: cordova build android cordova run android This will build and install the app on your Android device (provided it is in debug mode and connected to your computer). Similarly, you can build and install the same app for iOS or windows (you may need additional tools such as XCode or .NET for iOS or Windows). You can also use any other framework to build your mobile app. The angular framework also comes with its own set of material design components. About the Author Soham Kamani is a full-stack web developer and electronics hobbyist.  He is especially interested in JavaScript, Python, and IoT.
Read more
  • 0
  • 0
  • 14657

Packt
17 Mar 2016
9 min read
Save for later

Microservices – Brave New World

Packt
17 Mar 2016
9 min read
In this article by David Gonzalez, author of the book Developing Microservices with Node.js, we will cover the need for microservices, explain the monolithic approach, and study how to build and deploy microservices. (For more resources related to this topic, see here.) Need for microservices The world of software development has evolved quickly over the past 40 years. One of the key points of this evolution has been the size of these systems. From the days of MS-DOS, we taken a hundred-fold leap into our present systems. This growth in size creates a need for better ways of organizing the code and software components. Usually, when a company grows due to business needs, which is known as organic growth, the software gets organized on a monolithic architecture as it is the easiest and quickest way of building software. After few years (or even months), adding new features becomes harder due to the coupled nature of the created software. Monolithic software There are a few companies that have already started building their software using microservices, which is the ideal scenario. The problem is that not all the companies can plan their software upfront. Instead of planning, these companies build the software based on the organic growth experienced: few software components that group business flows by affinity. It is not rare to see companies having two big software components: the user facing website and the internal administration tools. This is usually known as a monolithic software architecture. Some of these companies face big problems when trying to scale the engineering teams. It is hard to coordinate the teams that build, deploy, and maintain a single software component. Clashes on releases and reintroduction of bugs are a common problem that drains a big chunk of energy from the teams. One of the solution to this problem (it also has other benefits) is to split the monolithic software into microservices so that the teams are able to specialize in few smaller modules and autonomous and isolated software components that can be versioned, updated, and deployed without interfering with the rest of the systems of the company. One of the most interesting solutions to this problem is splitting the monolithic architecture into microservices. This enables the engineering team to create isolated and autonomous units of work that are highly specialized in a given task (such as sending e-mails, processing card payment, and so on). Microservices in the real world Microservices are small software components that specialize in one task and work together to achieve a higher-level task. Forget about software for a second and think about how a company works. When someone applies for a job in a company, he applies for a given position: software engineer, systems administrator, or office manager The reason for it can be summarized in one word—specialization. If you are used to working as a software engineer, you will get better with the experience and add more value to the company. The fact that you don’t know how to deal with a customer, won’t affect your performance as it is not your area of expertise and will hardly add any value to your day-to-day work. A microservice is an autonomous unit of work that can execute one task without interfering with other parts of the system, similar to what a job position is to a company. This has a number of benefits that can be used in favor of the engineering team in order to help to scale the systems of a company. Nowadays, hundreds of systems are built using a microservices-oriented architectures, as follows: Netflix: They are one of the most popular streaming services and have built an entire ecosystem of applications that collaborate in order to provide a reliable and scalable streaming system used across the globe. Spotify: They are one of the leading music streaming services in the world and have built this application using microservices. Every single widget of the application (which is a website exposed as a desktop app using Chromium Embedded Framework (CEF)) is a different microservice that can be updated individually. First, there was the monolith A huge percentage (my estimate is around 90%) of the modern enterprise software is built following a monolithic approach. Huge software components that run in a single container and have a well-defined development life cycle that goes completely against the following agile principles, deliver early and deliver often (https://en.wikipedia.org/wiki/Release_early,_release_often): Deliver early: The sooner you fail, the easier it is to recover. If you are working for two years in a software component and then, it is released, there is a huge risk of deviation from the original requirements, which are usually wrong and changing every few days. Deliver often: Everything of the software is delivered to all the stake holders so that they can have their inputs and see the changes reflected in the software. Errors can be fixed in a few days and improvements are identified easily. Companies build big software components instead of smaller ones that work together as it is the natural thing to do, as follows: The developer has a new requirement. He builds a new method on an existing class on the service layer. The method is exposed on the API via HTTP, SOAP, or any other protocol. Now, repeat it by the number of developers in your company and you will obtain something called organic growth. Organic growth is the type of uncontrolled and unplanned growth on software systems under business pressure without an adequate long-term planning, and it is bad. How to tackle the organic growth? The first thing needed to tackle the organic growth is make sure that business and IT are aligned in the company. Usually, in big companies, IT is not seen as a core part of the business. Organizations outsource their IT systems, keeping the cost in mind, but not the quality so that the partners building these software components are focused on one thing: deliver on time and according to the specification, even if it is incorrect. This produces a less-than-ideal ecosystem to respond to the business needs with a working solution for an existing problem. IT is lead by people who barely understand how the systems are built and usually overlook the complexity of the software development. Fortunately, this is a changing tendency as IT systems have become the drivers of 99% of the businesses around the world, but we need to be smarter about how we build them. The first measure to tackle the organic growth is to align IT and business stakeholders in order to work together, educating the non-technical stakeholders is the key to success. If we go back to the example from the previous section (few releases with quite big changes). Can we do it better? Of course, we can. Divide the work into manageable software artifacts that model a single and well-defined business activity and give it an entity. It does not need to be a microservice at this stage, but keeping the logic inside a separated, well-defined, easy testable, and decoupled module will give us a huge advantage towards future changes in the application. Building microservices – The fallback strategy When you design a system, we usually think about the replaceability of the existing components. For example, when using a persistence technology in Java, we tend to lean towards the standards (Java Persistence API (JPA)) so that we can replace the underneath implementation without too much effort. Microservices take the same approach, but they isolate the problem instead of working towards an easy replaceability. Also, e-mailing is something that, although it seems simple, always ends up giving problems. Consider that we want to replace Mandrill with a plain SMTP server, such as Gmail. We don't need to do anything special, we just change the implementation and rollout the new version of our microservice, as follows: var nodemailer = require('nodemailer'); var seneca = require("seneca")(); var transporter = nodemailer.createTransport({ service: 'Gmail', auth: { user: 'info@micromerce.com', pass: 'verysecurepassword' } }); /** * Sends an email including the content. */ seneca.add({area: "email", action: "send"}, function(args, done) { var mailOptions = { from: 'Micromerce Info ✔ <info@micromerce.com>', to: args.to, subject: args.subject, html: args.body }; transporter.sendMail(mailOptions, function(error, info){ if(error){ done({code: e}, null); } done(null, {status: "sent"}); }); }); For the outer world, our simplest version of the e-mail sender is now at all lights, using SMTP through Gmail to deliver our e-mails. We could even rollout one server with this version and send some traffic to it in order to validate our implementation without affecting all the customers (in other words, contain the failure). Deploying microservices Deployment is usually the ugly friend of the software development life cycle party. There is a missing contact point in between development and system administration, which DevOps is going to solve in the following few years (or has already done it and no one told me). The following is the graph showing the cost of fixing software bugs versus the various phases of development: From the continuous integration up to continuous delivery, the process should be automated as much as possible, where as much as possible means 100%. Remember, humans are imperfect…if we rely on humans carrying on a manual repetitive process for a bug-free software, we are walking the wrong path. Remember that a machine will always be error free (as long as the algorithm that is executed is error free) so…why not let a machine control our infrastructure? Summary In this article, we saw how microservices are required in complex software systems, how the monolithic approach is useful, and how to build and deploy microservices. Resources for Article: Further resources on this subject: Making a Web Server in Node.js [article] Node.js Fundamentals and Asynchronous JavaScript [article] An Introduction to Node.js Design Patterns [article]
Read more
  • 0
  • 0
  • 17495
article-image-flexbox-css
Packt
09 Mar 2016
8 min read
Save for later

Flexbox in CSS

Packt
09 Mar 2016
8 min read
In this article by Ben Frain, the author of Responsive Web Design with HTML5 and CSS3, Second Edition, we will look at Flexbox and its uses. In 2015, we have better means to build responsive websites than ever. There is a new CSS layout module called Flexible Box (or Flexbox as it is more commonly known) that now has enough browser support to make it viable for everyday use. It can do more than merely provide a fluid layout mechanism. Want to be able to easily center content, change the source order of markup, and generally create amazing layouts with relevant ease? Flexbox is the layout mechanism for you. (For more resources related to this topic, see here.) Introducing Flexbox Here's a brief overview of Flexbox's superpowers: It can easily vertically center contents It can change the visual order of elements It can automatically space and align elements within a box, automatically assigning available space between them It can make you look 10 years younger (probably not, but in low numbers of empirical tests (me) it has been proven to reduce stress) The bumpy path to Flexbox Flexbox has been through a few major iterations before arriving at the relatively stable version we have today. For example, consider the changes from the 2009 version (http://www.w3.org/TR/2009/WD-css3-flexbox-20090723/), the 2011 version (http://www.w3.org/TR/2011/WD-css3-flexbox-20111129/), and the 2014 version we are basing our examples on (http://www.w3.org/TR/css-flexbox-1/). The syntax differences are marked. These differing specifications mean there are three major implementation versions. How many of these you need to concern yourself with depends on the level of browser support you need. Browser support for Flexbox Let's get this out of the way up front: there is no Flexbox support in Internet Explorer 9, 8, or below. For everything else you'd likely want to support (and virtually all mobile browsers), there is a way to enjoy most (if not all) of Flexbox's features. You can check the support information at http://caniuse.com/. Now, let's look at one of its uses. Changing source order Since the dawn of CSS, there has only been one way to switch the visual ordering of HTML elements in a web page. That was achieved by wrapping elements in something set to display: table and then switching the display property on the items within, between display: table-caption (puts it on top), display: table-footer-group (sends it to the bottom), and display: table-header-group (sends it to just below the item set to display: table-caption). However, as robust as this technique is, it was a happy accident, rather than the true intention of these settings. However, Flexbox has visual source re-ordering built in. Let's have a look at how it works. Consider this markup: <div class="FlexWrapper">     <div class="FlexItems FlexHeader">I am content in the Header.</div>     <div class="FlexItems FlexSideOne">I am content in the SideOne.</div>     <div class="FlexItems FlexContent">I am content in the Content.</div>     <div class="FlexItems FlexSideTwo">I am content in the SideTwo.</div>     <div class="FlexItems FlexFooter">I am content in the Footer.</div> </div> You can see here that the third item within the wrapper has a HTML class of FlexContent—imagine that this div is going to hold the main content for the page. OK, let's keep things simple. We will add some simple colors to more easily differentiate the sections and just get these items one under another in the same order they appear in the markup. .FlexWrapper {     background-color: indigo;     display: flex;     flex-direction: column; }   .FlexItems {     display: flex;     align-items: center;     min-height: 6.25rem;     padding: 1rem; }   .FlexHeader {     background-color: #105B63;    }   .FlexContent {     background-color: #FFFAD5; }   .FlexSideOne {     background-color: #FFD34E; }   .FlexSideTwo {     background-color: #DB9E36; }   .FlexFooter {     background-color: #BD4932; } That renders in the browser like this:   Now, suppose we want to switch the order of .FlexContent to be the first item, without touching the markup. With Flexbox it's as simple as adding a single property/value pair: .FlexContent {     background-color: #FFFAD5;     order: -1; } The order property lets us revise the order of items within a Flexbox simply and sanely. In this example, a value of -1 means that we want it to be before all the others. If you want to switch items around quite a bit, I'd recommend being a little more declarative and add an order number for each. This makes things a little easier to understand when you combine them with media queries. Let's combine our new source order changing powers with some media queries to produce not just a different layout at different sizes but different ordering. As it's generally considered wise to have your main content at the beginning of a document, let's revise our markup to this: <div class="FlexWrapper">     <div class="FlexItems FlexContent">I am content in the Content.</div>     <div class="FlexItems FlexSideOne">I am content in the SideOne.</div>     <div class="FlexItems FlexSideTwo">I am content in the SideTwo.</div>     <div class="FlexItems FlexHeader">I am content in the Header.</div>     <div class="FlexItems FlexFooter">I am content in the Footer.</div> </div> First the page content, then our two sidebar areas, then the header and finally the footer. As I'll be using Flexbox, we can structure the HTML in the order that makes sense for the document, regardless of how things need to be laid out visually. For the smallest screens (outside of any media query), I'll go with this ordering: .FlexHeader {     background-color: #105B63;     order: 1; }   .FlexContent {     background-color: #FFFAD5;     order: 2; }   .FlexSideOne {     background-color: #FFD34E;     order: 3; }   .FlexSideTwo {     background-color: #DB9E36;     order: 4; }   .FlexFooter {     background-color: #BD4932;     order: 5; } Which gives us this in the browser:   And then, at a breakpoint, I'm switching to this: @media (min-width: 30rem) {     .FlexWrapper {         flex-flow: row wrap;     }     .FlexHeader {         width: 100%;     }     .FlexContent {         flex: 1;         order: 3;     }     .FlexSideOne {         width: 150px;         order: 2;     }     .FlexSideTwo {         width: 150px;         order: 4;     }     .FlexFooter {         width: 100%;     } } Which gives us this in the browser: In that example, the shortcut flex-flow: row wrap has been used. That allows the flex items to wrap onto multiple lines. It's one of the poorer supported properties, so depending upon how far back support is needed, it might be necessary to wrap the content and two side bars in another element. Summary There are near endless possibilities when using the Flexbox layout system and due to its inherent "flexiness", it's a perfect match for responsive design. If you've never built anything with Flexbox before, all the new properties and values can seem a little odd and it's sometimes disconcertingly easy to achieve layouts that have previously taken far more work. To double-check implementation details against the latest version of the specification, make sure you check out http://www.w3.org/TR/css-flexbox-1/. I think you'll love building things with Flexbox. To check out the other amazing things you can do with Flexbox, have a look at Responsive Web Design with HTML5 and CSS3, Second Edition. The book also features a plethora of other awesome tips and tricks related to responsive web design. Resources for Article: Further resources on this subject: CodeIgniter Email and HTML Table [article] ASP.Net Site Performance: Improving JavaScript Loading [article] Adding Interactive Course Material in Moodle 1.9: Part 1 [article]
Read more
  • 0
  • 0
  • 12122

Packt
08 Mar 2016
17 min read
Save for later

Magento 2 – the New E-commerce Era

Packt
08 Mar 2016
17 min read
In this article by Ray Bogman and Vladimir Kerkhoff, the authors of the book, Magento 2 Cookbook, we will cover the basic tasks related to creating a catalog and products in Magento 2. You will learn the following recipes: Creating a root catalog Creating subcategories Managing an attribute set (For more resources related to this topic, see here.) Introduction This article explains how to set up a vanilla Magento 2 store. If Magento 2 is totally new for you, then lots of new basic whereabouts are pointed out. If you are currently working with Magento 1, then not a lot has changed since. The new backend of Magento 2 is the biggest improvement of them all. The design is built responsively and has a great user experience. Compared to Magento 1, this is a great improvement. The menu is located vertically on the left of the screen and works great on desktop and mobile environments: In this article, we will see how to set up a website with multiple domains using different catalogs. Depending on the website, store, and store view setup, we can create different subcategories, URLs, and product per domain name. There are a number of different ways customers can browse your store, but one of the most effective one is layered navigation. Layered navigation is located in your catalog and holds product features to sort or filter. Every website benefits from great Search Engine Optimization (SEO). You will learn how to define catalog URLs per catalog. Throughout this article, we will cover the basics on how to set up a multidomain setup. Additional tasks required to complete a production-like setup are out of the scope of this article. Creating a root catalog The first thing that we need to start with when setting up a vanilla Magento 2 website is defining our website, store, and store view structure. So what is the difference between website, store, and store view, and why is it important: A website is the top-level container and most important of the three. It is the parent level of the entire store and used, for example, to define domain names, different shipping methods, payment options, customers, orders, and so on. Stores can be used to define, for example, different store views with the same information. A store is always connected to a root catalog that holds all the categories and subcategories. One website can manage multiple stores, and every store has a different root catalog. When using multiple stores, it is not possible to share one basket. The main reason for this has to do with the configuration setup where shipping, catalog, customer, inventory, taxes, and payment settings are not sharable between different sites. Store views is the lowest level and mostly used to handle different localizations. Every store view can be set with a different language. Besides using store views just for localizations, it can also be used for Business to Business (B2B), hidden private sales pages (with noindex and nofollow), and so on. The option where we use the base link URL, for example, (yourdomain.com/myhiddenpage) is easy to set up. The website, store, and store view structure is shown in the following image: Getting ready For this recipe, we will use a Droplet created at DigitalOcean, https://www.digitalocean.com/. We will be using NGINX, PHP-FPM, and a Composer-based setup including Magento 2 preinstalled. No other prerequisites are required. How to do it... For the purpose of this recipe, let's assume that we need to create a multi-website setup including three domains (yourdomain.com, yourdomain.de, and yourdomain.fr) and separate root catalogs. The following steps will guide you through this: First, we need to update our NGINX. We need to configure the additional domains before we can connect them to Magento. Make sure that all domain names are connected to your server and DNS is configured correctly. Go to /etc/nginx/conf.d, open the default.conf file, and include the following content at the top of your file: map $http_host $magecode { hostnames; default base; yourdomain.de de; yourdomain.fr fr; } Your configuration should look like this now: map $http_host $magecode { hostnames; default base; yourdomain.de de; yourdomain.fr fr; } upstream fastcgi_backend { server 127.0.0.1:9000; } server { listen 80; listen 443 ssl http2; server_name yourdomain.com; set $MAGE_ROOT /var/www/html; set $MAGE_MODE developer; ssl_certificate /etc/ssl/yourdomain-com.cert; ssl_certificate_key /etc/ssl/yourdomain-com.key; include /var/www/html/nginx.conf.sample; access_log /var/log/nginx/access.log; error_log /var/log/nginx/error.log; location ~ /\.ht { deny all; } } Now let's go to the Magento 2 configuration file in /var/www/html/ and open the nginx.conf.sample file. Go to the bottom and look for the following: location ~ (index|get|static|report|404|503)\.php$ Now we add the following lines to the file under fastcgi_pass   fastcgi_backend;: fastcgi_param MAGE_RUN_TYPE website; fastcgi_param MAGE_RUN_CODE $magecode; Your configuration should look like this now (this is only a small section of the bottom section): location ~ (index|get|static|report|404|503)\.php$ { try_files $uri =404; fastcgi_pass fastcgi_backend; fastcgi_param MAGE_RUN_TYPE website; fastcgi_param MAGE_RUN_CODE $magecode; fastcgi_param PHP_FLAG "session.auto_start=off \n suhosin.session.cryptua=off"; fastcgi_param PHP_VALUE "memory_limit=256M \n max_execution_time=600"; fastcgi_read_timeout 600s; fastcgi_connect_timeout 600s; fastcgi_param MAGE_MODE $MAGE_MODE; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include fastcgi_params; } The current setup is using the MAGE_RUN_TYPE website variable. You may change website to store depending on your setup preferences. When changing the variable, you need your default.conf mapping codes as well. Now, all you have to do is restart NGINX and PHP-FPM to use your new settings. Run the following command: service nginx restart && service php-fpm restart Before we continue we need to check whether our web server is serving the correct codes. Run the following command in the Magento 2 web directory: var/www/html/pub echo "<?php header("Content-type: text/plain"); print_r($_SERVER); ?>" > magecode.php Don't forget to update your nginx.conf.sample file with the new magecode code. It's located at the bottom of your file and should look like this: location ~ (index|get|static|report|404|503|magecode)\.php$ { Restart NGINX and open the file in your browser. The output should look as follows. As you can see, the created MAGE_RUN variables are available. Congratulations, you just finished configuring NGINX including additional domains. Now let's continue connecting them in Magento 2. Now log in to the backend and navigate to Stores | All Stores. By default, Magento 2 has one Website, Store, and Store View setup. Now click on Create Website and commit the following details: Name My German Website Code de Next, click on Create Store and commit the following details: Web site My German Website Name My German Website Root Category Default Category (we will change this later)  Next, click on Create Store View and commit the following details: Store My German Website Name German Code de Status Enabled  Continue the same step for the French domain. Make sure that the Code in Website and Store View is fr. The next important step is connecting the websites with the domain name. Navigate to Stores | Configuration | Web | Base URLs. Change the Store View scope at the top to My German Website. You will be prompted when switching; press ok to continue. Now, unset the checkbox called Use Default from Base URL and Base Link URL and commit your domain name. Save and continue the same procedure for the other website. The output should look like this: Save your entire configuration and clear your cache. Now go to Products | Categories and click on Add Root Category with the following data: Name Root German Is Active Yes Page Title My German Website Continue the same step for the French domain. You may add additional information here but it is not needed. Changing the current Root Category called Default Category to Root English is also optional but advised. Save your configuration, go to Stores | All Stores, and change all of the stores to the appropriate Root Catalog that we just created. Every Root Category should now have a dedicated Root Catalog. Congratulations, you just finished configuring Magento 2 including additional domains and dedicated Root Categories. Now let's open a browser and surf to your created domain names: yourdomain.com, yourdomain.de, and yourdomain.fr. How it works… Let's recap and find out what we did throughout this recipe. In steps 1 through 11, we created a multistore setup for .com, .de, and .fr domains using a separate Root Catalog. In steps 1 through 4, we configured the domain mapping in the NGINX default.conf file. Then, we added the fastcgi_param MAGE_RUN code to the nginx.conf.sample file, which will manage what website or store view to request within Magento. In step 6, we used an easy test method to check whether all domains run the correct MAGE_RUN code. In steps 7 through 9, we configured the website, store, and store view name and code for the given domain names. In step 10, we created additional Root Catalogs for the remaining German and French stores. They are then connected to the previously created store configuration. All stores have their own Root Catalog now. There's more… Are you able to buy additional domain names but like to try setting up a multistore? Here are some tips to create one. Depending on whether you are using Windows, Mac OS, or Linux, the following options apply: Windows: Go to C:\Windows\System32\drivers\etc, open up the hosts file as an administrator, and add the following: (Change the IP and domain name accordingly.) 123.456.789.0 yourdomain.de 123.456.789.0 yourdomain.fr 123.456.789.0 www.yourdomain.de 123.456.789.0 www.yourdomain.fr Save the file and click on the Start button; then search for cmd.exe and commit the following: ipconfig /flushdns Mac OS: Go to the /etc/ directory, open the hosts file as a super user, and add the following: (Change the IP and domain name accordingly.) 123.456.789.0 yourdomain.de 123.456.789.0 yourdomain.fr 123.456.789.0 www.yourdomain.de 123.456.789.0 www.yourdomain.fr Save the file and run the following command on the shell: dscacheutil -flushcache Depending on your Mac version, check out the different commands here: http://www.hongkiat.com/blog/how-to-clear-flush-dns-cache-in-os-x-yosemite/ Linux: Go to the /etc/ directory, open the hosts file as a root user, and add the following: (Change the IP and domain name accordingly.) 123.456.789.0 yourdomain.de 123.456.789.0 yourdomain.fr 123.456.789.0 www.yourdomain.de 123.456.789.0 www.yourdomain.fr Save the file and run the following command on the shell: service nscd restart Depending on your Linux version, check out the different commands here: http://www.cyberciti.biz/faq/rhel-debian-ubuntu-flush-clear-dns-cache/ Open your browser and surf to the custom-made domains. These domains work only on your PC. You can copy these IP and domain names on as many PCs as you prefer. This method also works great when you are developing or testing and your production domain is not available on your development environment. Creating subcategories After creating the foundation of the website, we need to set up a catalog structure. Setting up a catalog structure is not difficult, but needs to be thought out well. Some websites have an easy setup using two levels, while others sometimes use five or more subcategories. Always keep in mind the user experience; your customer needs to crawl the pages easily. Keep it simple! Getting ready For this recipe, we will use a Droplet created at DigitalOcean, https://www.digitalocean.com/. We will be using NGINX, PHP-FPM, and a Composer-based setup including Magento 2 preinstalled. No other prerequisites are required. How to do it... For the purpose of this recipe, let's assume that we need to set up a catalog including subcategories. The following steps will guide you through this: First, log in to the backend of Magento 2 and go to Products | Categories. As we have already created Root Catalogs, we start with using the Root English catalog first. Click on the Root English catalog on the left and then select the Add Subcategory button above the menu. Now commit the following and repeat all steps again for the other Root Catalogs: Name Shoes (Schuhe) (Chaussures) Is Active Yes Page Title Shoes (Schuhe) (Chaussures) Name Clothes (Kleider) (Vêtements) Is Active Yes Page Title Clothes (Kleider) (Vêtements) As we have created the first level of our catalog, we can continue with the second level. Now click on the first level that you need to extend with a subcategory and select the Add Subcategory button. Now commit the following and repeat all steps again for the other Root Catalogs: Name Men (Männer) (Hommes) Is Active Yes Page Title Men (Männer) (Hommes) Name Women (Frau) (Femmes) Is Active Yes Page Title Women (Frau) (Femmes) Congratulations, you just finished configuring subcategories in Magento 2. Now let's open a browser and surf to your created domain names: yourdomain.com, yourdomain.de, and yourdomain.fr. Your categories should now look as follows: How it works… Let's recap and find out what we did throughout this recipe. In steps 1 through 4, we created subcategories for the English, German, and French stores. In this recipe, we created a dedicated Root Catalog for every website. This way, every store can be configured using their own tax and shipping rules. There's more… In our example, we only submitted Name, Is Active, and Page Title. You may continue to commit the Description, Image, Meta Keywords, and Meta Description fields. By default, the URL key is the same as the Name field; you can change this depending on your SEO needs. Every category or subcategory has a default page layout defined by the theme. You may need to override this. Go to the Custom Design tab and click the drop-down menu of Page Layout. We can choose from the following options: 1 column, 2 columns with left bar, 2 columns with right bar, 3 columns, or Empty. Managing an attribute set Every product has a unique DNA; some products such as shoes could have different colors, brands, and sizes, while a snowboard could have weight, length, torsion, manufacture, and style. Setting up a website with all the attributes does not make sense. Depending on the products that you sell, you should create attributes that apply per website. When creating products for your website, attributes are the key elements and need to be thought through. What and how many attributes do I need? How many values does one need? All types of questions that could have a great impact on your website and, not to forget, the performance of it. Creating an attribute such as color and having 100 K of different key values stored is not improving your overall speed and user experience. Always think things through. After creating the attributes, we combine them in attribute sets that can be picked when starting to create a product. Some attributes can be used more than once, while others are unique to one product of an attribute set. Getting ready For this recipe, we will use a Droplet created at DigitalOcean, https://www.digitalocean.com/. We will be using NGINX, PHP-FPM, and a Composer-based setup including Magento 2 preinstalled. No other prerequisites are required. How to do it... For the purpose of this recipe, let's assume that we need to create product attributes and sets. The following steps will guide you through this: First, log in to the backend of Magento 2 and go to Stores | Products. As we are using a vanilla setup, only system attributes and one attribute set is installed. Now click on Add New Attribute and commit the following data in the Properties tab: Attribute Properties Default label shoe_size Catalog Input Type for Store Owners Dropdown Values Required No Manage Options (values of your attribute) English Admin French German 4 4 35 35 4.5 4.5 35 35 5 5 35-36 35-36 5.5 5.5 36 36 6 6 36-37 36-37 6.5 6.5 37 37 7 7 37-38 37-38 7.5 7.5 38 38 8 8 38-39 38-39 8.5 8.5 39 39 Advanced Attribute Properties Scope Global Unique Value No Add to Column Options Yes Use in Filer Options Yes As we have already set up a multi-website that sells shoes and clothes, we stick with this. The attributes that we need to sell shoes are: shoe_size, shoe_type, width, color, gender, and occasion. Continue with the rest of the chart accordingly (http://www.shoesizingcharts.com). Click on Save and Continue Edit now and continue on the Manage Labels tab with the following information: Manage Titles (Size, Color, etc.) English French German Size Taille Größe Click on Save and Continue Edit now and continue on the Storefront Properties tab with the following information: Storefront Properties Use in Search No Comparable in Storefront No Use in Layered Navigation Filterable (with result) Use in Search Result Layered Navigation No Position 0 Use for Promo Rule Conditions No Allow HTML Tags on Storefront Yes Visible on Catalog Pages on Storefront Yes Used in Product Listing No Used for Sorting in Product Listing No Click on Save Attribute now and clear the cache. Depending on whether you have set up the index management accordingly through the Magento 2 cronjob, it will automatically update the newly created attribute. The additional shoe_type, width, color, gender, and occasion attributes configuration can be downloaded at https://github.com/mage2cookbook/chapter4. After creating all of the attributes, we combine them in an attribute set called Shoes. Go to Stores | Attribute Set, click on Add Attribute Set, and commit the following data: Edit Attribute Set Name Name Shoes Based On Default Now click on the Add New button in the Groups section and commit the group name called Shoes. The newly created group is now located at the bottom of the list. You may need to scroll down before you see it. It is possible to drag and drop the group higher up in the list. Now drag and drop the created attributes, shoe_size, shoe_type, width, color, gender, and occasion to the group and save the configuration. The notice of the cron job is automatically updated depending on your settings. Congratulations, you just finished creating attributes and attribute sets in Magento 2. This can be seen in the following screenshot: How it works… Let's recap and find out what we did throughout this recipe. In steps 1 through 10, we created attributes that will be used in an attribute set. The attributes and sets are the fundamentals for every website. In steps 1 through 5, we created multiple attributes to define all details about the shoes and clothes that we would like to sell. Some attributes are later used as configurable values on the frontend while others only indicate the gender or occasion. In steps 6 through 9, we connected the attributes to the related attribute set so that when creating a product, all correct elements are available. There's more… After creating the attribute set for Shoes, we continue to create an attribute set for Clothes. Use the following attributes to create the set: color, occasion, apparel_type, sleeve_length, fit, size, length, and gender. Follow the same steps as we did before to create a new attribute set. You may reuse the attributes, color, occasion, and gender. All detailed attributes can be found at https://github.com/mage2cookbook/chapter4#clothes-set. The following is the screenshot of the Clothes attribute set: Summary In this article, you learned how to create a Root Catalog, subcategories, and manage attribute sets. For more information on Magento 2, Refer the following books by Packt Publishing: Magento 2 Development Cookbook (https://www.packtpub.com/web-development/magento-2-development-cookbook) Magento 2 Developer's Guide (https://www.packtpub.com/web-development/magento-2-developers-guide) Resources for Article: Further resources on this subject: Social Media in Magento [article] Upgrading from Magneto 1 [article] Social Media and Magento [article]
Read more
  • 0
  • 0
  • 12025
Modal Close icon
Modal Close icon