
How-To Tutorials - Web Development


Internationalization

Packt
10 Nov 2016
16 min read
In this article by Jérémie Bouchet, author of the book Magento Extensions Development, we will see how to handle internationalization in our extension and how it is handled in a complex extension using an EAV table structure. In this article, we will cover the following topics:

- The EAV approach
- The store relation table
- Translation of template interface texts

The EAV approach

The EAV structure in Magento is used for complex models, such as the customer and product entities. In our extension, if we wanted to add a new field for our events, we would have to add a new column in the main table. With the EAV structure, each attribute is instead stored in a separate table depending on its type, for example, catalog_product_entity, catalog_product_entity_varchar, and catalog_product_entity_int. Each row in the subtables has a foreign key reference to the main table.

In order to handle multiple store views in this structure, a column for the store ID is added to the subtables. Consider a product entity whose main table contains only the main attributes, with the varchar subtable holding the rest: the attribute with ID 70 corresponds to the product name and is linked to entity 1, and there is a different product name row for store view 0 (the default) and store view 2 (in French, in this example).

In order to create an EAV model, you will have to extend the right class in your code. You can base your development on the existing modules, such as customers or products.

Store relation table

In our extension, we will handle the store views scope by using a relation table. This approach is also used for CMS pages and blocks, reviews, ratings, and all the models that are not EAV-based but need to be related to store views.

Creating the new table

The first step is to create the new table to store the new data. Create the [extension_path]/Setup/UpgradeSchema.php file and add the following code:

```php
<?php
namespace Blackbird\TicketBlaster\Setup;

use Magento\Eav\Setup\EavSetup;
use Magento\Eav\Setup\EavSetupFactory;
use Magento\Framework\Setup\UpgradeSchemaInterface;
use Magento\Framework\Setup\ModuleContextInterface;
use Magento\Framework\Setup\SchemaSetupInterface;

/**
 * @codeCoverageIgnore
 */
class UpgradeSchema implements UpgradeSchemaInterface
{
    /**
     * EAV setup factory
     *
     * @var EavSetupFactory
     */
    private $eavSetupFactory;

    /**
     * Init
     *
     * @param EavSetupFactory $eavSetupFactory
     */
    public function __construct(EavSetupFactory $eavSetupFactory)
    {
        $this->eavSetupFactory = $eavSetupFactory;
    }

    public function upgrade(SchemaSetupInterface $setup, ModuleContextInterface $context)
    {
        if (version_compare($context->getVersion(), '1.3.0', '<')) {
            $installer = $setup;
            $installer->startSetup();
            /**
             * Create table 'blackbird_ticketblaster_event_store'
             */
            $table = $installer->getConnection()->newTable(
                $installer->getTable('blackbird_ticketblaster_event_store')
            )->addColumn(
                'event_id',
                \Magento\Framework\DB\Ddl\Table::TYPE_SMALLINT,
                null,
                ['nullable' => false, 'primary' => true],
                'Event ID'
            )->addColumn(
                'store_id',
                \Magento\Framework\DB\Ddl\Table::TYPE_SMALLINT,
                null,
                ['unsigned' => true, 'nullable' => false, 'primary' => true],
                'Store ID'
            )->addIndex(
                $installer->getIdxName('blackbird_ticketblaster_event_store', ['store_id']),
                ['store_id']
            )->addForeignKey(
                $installer->getFkName('blackbird_ticketblaster_event_store', 'event_id', 'blackbird_ticketblaster_event', 'event_id'),
                'event_id',
                $installer->getTable('blackbird_ticketblaster_event'),
                'event_id',
                \Magento\Framework\DB\Ddl\Table::ACTION_CASCADE
            )->addForeignKey(
                $installer->getFkName('blackbird_ticketblaster_event_store', 'store_id', 'store', 'store_id'),
                'store_id',
                $installer->getTable('store'),
                'store_id',
                \Magento\Framework\DB\Ddl\Table::ACTION_CASCADE
            )->setComment(
                'TicketBlaster Event To Store Linkage Table'
            );
            $installer->getConnection()->createTable($table);
            $installer->endSetup();
        }
    }
}
```

The upgrade() method handles all the necessary updates to our database for our extension. In order to run an update only for a given version of the extension, we surround the script with a version_compare() condition.

Once this code is in place, we need to tell Magento that our extension has new database upgrades to process. Open the [extension_path]/etc/module.xml file and change the version number from 1.2.0 to 1.3.0:

```xml
<?xml version="1.0"?>
<config xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:noNamespaceSchemaLocation="../../../../../lib/internal/Magento/Framework/Module/etc/module.xsd">
    <module name="Blackbird_TicketBlaster" setup_version="1.3.0">
        <sequence>
            <module name="Magento_Catalog"/>
            <module name="Blackbird_AnotherModule"/>
        </sequence>
    </module>
</config>
```

In your terminal, run the upgrade by typing the following command:

```
php bin/magento setup:upgrade
```

The new table structure contains two columns, event_id and store_id, and stores which events are available for which store views. If you have previously created events, we recommend emptying the existing blackbird_ticketblaster_event table, because those rows won't have a default store view and this may trigger an error.

Adding the new input to the edit form

In order to select the store views for the content, we will need to add a new input to the edit form. Before running this code, you should add a new store view. Open the [extension_path]/Block/Adminhtml/Event/Edit/Form.php file and add the following code in the _prepareForm() method, below the last addField() call:

```php
/* Check is single store mode */
if (!$this->_storeManager->isSingleStoreMode()) {
    $field = $fieldset->addField(
        'store_id',
        'multiselect',
        [
            'name' => 'stores[]',
            'label' => __('Store View'),
            'title' => __('Store View'),
            'required' => true,
            'values' => $this->_systemStore->getStoreValuesForForm(false, true)
        ]
    );
    $renderer = $this->getLayout()->createBlock(
        'Magento\Backend\Block\Store\Switcher\Form\Renderer\Fieldset\Element'
    );
    $field->setRenderer($renderer);
} else {
    $fieldset->addField(
        'store_id',
        'hidden',
        ['name' => 'stores[]', 'value' => $this->_storeManager->getStore(true)->getId()]
    );
    $model->setStoreId($this->_storeManager->getStore(true)->getId());
}
```

This results in a new multiselect field in the form.

Saving the new data in the new table

Now that we have the form and the database table, we have to write the code that saves the form data. Open the [extension_path]/Model/Event.php file and add the following method at the end of the class:

```php
/**
 * Receive page store ids
 *
 * @return int[]
 */
public function getStores()
{
    return $this->hasData('stores') ? $this->getData('stores') : $this->getData('store_id');
}
```

Open the [extension_path]/Model/ResourceModel/Event.php file and replace all the code with the following code:

```php
<?php
namespace Blackbird\TicketBlaster\Model\ResourceModel;

class Event extends \Magento\Framework\Model\ResourceModel\Db\AbstractDb
{
    [...]
}
```

The afterSave() method handles our insert queries into the new table. The afterLoad() and getLoadSelect() methods handle the new load mode so that the right events are selected. Your new table is now filled when you save your events; the stores are also properly loaded when you go back to your edit form.
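The resource model code above is elided; purely for orientation, here is a minimal sketch of what a save hook persisting the store relation typically looks like in Magento 2, modeled on the pattern the CMS page resource model uses rather than copied from the book (method and table details may differ):

```php
/**
 * Sketch only -- the book's actual ResourceModel\Event code is elided above.
 * Rewrites the event/store link rows after each save.
 */
protected function _afterSave(\Magento\Framework\Model\AbstractModel $object)
{
    $connection = $this->getConnection();
    $table = $this->getTable('blackbird_ticketblaster_event_store');

    // Drop the existing links for this event...
    $connection->delete($table, ['event_id = ?' => (int)$object->getId()]);

    // ...and insert one row per store view selected in the form.
    $rows = [];
    foreach ((array)$object->getStores() as $storeId) {
        $rows[] = ['event_id' => (int)$object->getId(), 'store_id' => (int)$storeId];
    }
    if ($rows) {
        $connection->insertMultiple($table, $rows);
    }
    return parent::_afterSave($object);
}
```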
Showing the store views in the admin grid

In order to inform admin users of the selected store views for each event, we will add a new column to the admin grid. Open the [extension_path]/Model/ResourceModel/Event/Collection.php file and replace all the code with the following code:

```php
<?php
namespace Blackbird\TicketBlaster\Model\ResourceModel\Event;

class Collection extends \Magento\Framework\Model\ResourceModel\Db\Collection\AbstractCollection
{
    [...]
}
```

Open the [extension_path]/view/adminhtml/ui_component/ticketblaster_event_listing.xml file and add the following XML instructions before the closing </filters> tag:

```xml
<filterSelect name="store_id">
    <argument name="optionsProvider" xsi:type="configurableObject">
        <argument name="class" xsi:type="string">Magento\Cms\Ui\Component\Listing\Column\Cms\Options</argument>
    </argument>
    <argument name="data" xsi:type="array">
        <item name="config" xsi:type="array">
            <item name="dataScope" xsi:type="string">store_id</item>
            <item name="label" xsi:type="string" translate="true">Store View</item>
            <item name="captionValue" xsi:type="string">0</item>
        </item>
    </argument>
</filterSelect>
```

Before the actionsColumn tag, add the new column:

```xml
<column name="store_id" class="Magento\Store\Ui\Component\Listing\Column\Store">
    <argument name="data" xsi:type="array">
        <item name="config" xsi:type="array">
            <item name="bodyTmpl" xsi:type="string">ui/grid/cells/html</item>
            <item name="sortable" xsi:type="boolean">false</item>
            <item name="label" xsi:type="string" translate="true">Store View</item>
        </item>
    </argument>
</column>
```

You can refresh your grid page and see the new column added at the end. Magento remembers the previous column order: if you add a new column, it will always be appended at the end of the table, and you will have to reorder the columns manually by dragging and dropping them.

Modifying the frontend event list

Our frontend list (/events) is still listing all the events.
In order to list only the events available for our current store view, we need to change one file. Edit the [extension_path]/Block/EventList.php file and replace the code with the following code:

```php
<?php
namespace Blackbird\TicketBlaster\Block;

use Blackbird\TicketBlaster\Api\Data\EventInterface;
use Blackbird\TicketBlaster\Model\ResourceModel\Event\Collection as EventCollection;
use Magento\Customer\Model\Context;

class EventList extends \Magento\Framework\View\Element\Template implements
    \Magento\Framework\DataObject\IdentityInterface
{
    /**
     * Store manager
     *
     * @var \Magento\Store\Model\StoreManagerInterface
     */
    protected $_storeManager;

    /**
     * @var \Magento\Customer\Model\Session
     */
    protected $_customerSession;

    /**
     * Construct
     *
     * @param \Magento\Framework\View\Element\Template\Context $context
     * @param \Blackbird\TicketBlaster\Model\ResourceModel\Event\CollectionFactory $eventCollectionFactory
     * @param array $data
     */
    public function __construct(
        \Magento\Framework\View\Element\Template\Context $context,
        \Blackbird\TicketBlaster\Model\ResourceModel\Event\CollectionFactory $eventCollectionFactory,
        \Magento\Store\Model\StoreManagerInterface $storeManager,
        \Magento\Customer\Model\Session $customerSession,
        array $data = []
    ) {
        parent::__construct($context, $data);
        $this->_storeManager = $storeManager;
        $this->_eventCollectionFactory = $eventCollectionFactory;
        $this->_customerSession = $customerSession;
    }

    /**
     * @return \Blackbird\TicketBlaster\Model\ResourceModel\Event\Collection
     */
    public function getEvents()
    {
        if (!$this->hasData('events')) {
            $events = $this->_eventCollectionFactory
                ->create()
                ->addOrder(
                    EventInterface::CREATION_TIME,
                    EventCollection::SORT_ORDER_DESC
                )
                ->addStoreFilter($this->_storeManager->getStore()->getId());
            $this->setData('events', $events);
        }
        return $this->getData('events');
    }

    /**
     * Return identifiers for produced content
     *
     * @return array
     */
    public function getIdentities()
    {
        return [\Blackbird\TicketBlaster\Model\Event::CACHE_TAG . '_' . 'list'];
    }

    /**
     * Is logged in
     *
     * @return bool
     */
    public function isLoggedIn()
    {
        return $this->_customerSession->isLoggedIn();
    }
}
```

Note that we have a new property available and instantiated in our constructor: the store manager. Thanks to this class, we can filter our collection by the store view ID, calling the addStoreFilter() method on our events collection.
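The collection code earlier is elided; for reference, an addStoreFilter() implementation conventionally joins the relation table and filters on the store ID. The following is a sketch in the spirit of Magento's CMS collections, not the book's exact code:

```php
/**
 * Sketch only -- joins the store relation table so the collection returns
 * events linked to the given store view (plus the "all store views" row).
 */
public function addStoreFilter($store, $withAdmin = true)
{
    if (!is_array($store)) {
        $store = [(int)$store];
    }
    if ($withAdmin) {
        // Store ID 0 means "available on all store views".
        $store[] = \Magento\Store\Model\Store::DEFAULT_STORE_ID;
    }
    $this->getSelect()->join(
        ['store_table' => $this->getTable('blackbird_ticketblaster_event_store')],
        'main_table.event_id = store_table.event_id',
        []
    )->where('store_table.store_id IN (?)', $store)
     ->group('main_table.event_id');

    return $this;
}
```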
Restricting the frontend access by store view

The events will no longer be listed on our list page if they are not available for the current store view, but they can still be accessed through their direct URL, for example http://[magento_url]/events/view/index/event_id/2. We will change this to restrict the frontend access by store view. Open the [extension_path]/Helper/Event.php file and replace the code with the following code:

```php
<?php
namespace Blackbird\TicketBlaster\Helper;

use Blackbird\TicketBlaster\Api\Data\EventInterface;
use Blackbird\TicketBlaster\Model\ResourceModel\Event\Collection as EventCollection;
use Magento\Framework\App\Action\Action;

class Event extends \Magento\Framework\App\Helper\AbstractHelper
{
    /**
     * @var \Blackbird\TicketBlaster\Model\Event
     */
    protected $_event;

    /**
     * @var \Magento\Framework\View\Result\PageFactory
     */
    protected $resultPageFactory;

    /**
     * Store manager
     *
     * @var \Magento\Store\Model\StoreManagerInterface
     */
    protected $_storeManager;

    /**
     * Constructor
     *
     * @param \Magento\Framework\App\Helper\Context $context
     * @param \Blackbird\TicketBlaster\Model\Event $event
     * @param \Magento\Framework\View\Result\PageFactory $resultPageFactory
     * @SuppressWarnings(PHPMD.ExcessiveParameterList)
     */
    public function __construct(
        \Magento\Framework\App\Helper\Context $context,
        \Blackbird\TicketBlaster\Model\Event $event,
        \Magento\Framework\View\Result\PageFactory $resultPageFactory,
        \Magento\Store\Model\StoreManagerInterface $storeManager
    ) {
        $this->_event = $event;
        $this->_storeManager = $storeManager;
        $this->resultPageFactory = $resultPageFactory;
        parent::__construct($context);
    }

    /**
     * Return an event from given event id.
     *
     * @param Action $action
     * @param null $eventId
     * @return \Magento\Framework\View\Result\Page|bool
     */
    public function prepareResultEvent(Action $action, $eventId = null)
    {
        if ($eventId !== null && $eventId !== $this->_event->getId()) {
            $delimiterPosition = strrpos($eventId, '|');
            if ($delimiterPosition) {
                $eventId = substr($eventId, 0, $delimiterPosition);
            }
            $this->_event->setStoreId($this->_storeManager->getStore()->getId());
            if (!$this->_event->load($eventId)) {
                return false;
            }
        }
        if (!$this->_event->getId()) {
            return false;
        }

        /** @var \Magento\Framework\View\Result\Page $resultPage */
        $resultPage = $this->resultPageFactory->create();
        // We can add our own custom page handles for layout easily.
        $resultPage->addHandle('ticketblaster_event_view');
        // This will generate a layout handle like: ticketblaster_event_view_id_1,
        // giving us a unique handle to target a specific event if we wish to.
        $resultPage->addPageLayoutHandles(['id' => $this->_event->getId()]);
        // Magento is event driven after all, let's remember to dispatch our own event, to help people
        // who might want to add additional functionality, or filter the events somehow!
        $this->_eventManager->dispatch(
            'blackbird_ticketblaster_event_render',
            ['event' => $this->_event, 'controller_action' => $action]
        );
        return $resultPage;
    }
}
```

The setStoreId() method called on our model makes the load only return the event for the given store view. The events are no longer available through their direct URL if we are not on a store view where they are available.

Translation of template interface texts

In order to translate the texts written directly in a template file, in the interface, or in your PHP classes, you need to use the __('Your text here') method. Magento then looks for a corresponding match within all the translation CSV files. There is nothing to declare in XML; you simply have to create a new folder at the root of your module and create the required CSV files.

Create the [extension_path]/i18n folder. Create [extension_path]/i18n/en_US.csv and add the following code:

```
"Event time:","Event time:"
"Please sign in to read more details.","Please sign in to read more details."
"Read more","Read more" Create [extension_path]/i18n/en_US.csv and add the following code: "Event time:","Date de l'évènement :" "Pleasesign in to read more details.","Merci de vous inscrire pour plus de détails." "Read more","Lire la suite" The CSV file contains the correspondences between the key used in the code and the value in its final language. Translation of e-mail templates: creating and translating the e-mails We will add a new form in the Details page to share the event to a friend. The first step is to declare your e-mail template. To declare your e-mail template, create a new [extension_path]/etc/email_templates.xml file and add the following code: <?xml version="1.0"?> <config xsi_noNamespaceSchemaLocation="urn:magento:module:Magento_Email:etc/email_templates.xsd"> <template id="ticketblaster_email_email_template" label="Share Form" file="share_form.html" type="text" module="Blackbird_TicketBlaster" area="adminhtml"/> </config> This XML line declares a new template ID, label, file path, module, and area (frontend or adminhtml). Next, create the corresponding template by creating the [extension_path]/view/adminhtml/email/share_form.html file and add the following code: <!--@subject Share Form@--> <!--@vars { "varpost.email":"Sharer Email", "varevent.title":"Event Title", "varevent.venue":"Event Venue" } @--> <p>{{trans "Your friend %email is sharing an event with you:" email=$post.email}}</p> {{trans "Title: %title" title=$event.title}}<br/> {{trans "Venue: %venue" venue=$event.venue}}<br/> <p>{{trans "View the detailed page: %url" url=$event.url}}</p> Note that in order to translate texts within the HTML file, we use the trans function, which works like the default PHP printf() function. The function will also use our i18n CSV files to find a match for the text. Your e-mail template can also be overridden directly from the backoffice: Marketing | Email templates. 
The e-mail template is ready; we will also add the ability to change it in the system configuration and allow users to determine the sender's e-mail and name. Create the [extension_path]/etc/adminhtml/system.xml file and add the following code:

```xml
<?xml version="1.0"?>
<config xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:noNamespaceSchemaLocation="urn:magento:module:Magento_Config:etc/system_file.xsd">
    <system>
        <section id="ticketblaster" translate="label" type="text" sortOrder="100" showInDefault="1" showInWebsite="1" showInStore="1">
            <label>Ticket Blaster</label>
            <tab>general</tab>
            <resource>Blackbird_TicketBlaster::event</resource>
            <group id="email" translate="label" type="text" sortOrder="50" showInDefault="1" showInWebsite="1" showInStore="1">
                <label>Email Options</label>
                <field id="recipient_email" translate="label" type="text" sortOrder="10" showInDefault="1" showInWebsite="1" showInStore="1">
                    <label>Send Emails To</label>
                    <validate>validate-email</validate>
                </field>
                <field id="sender_email_identity" translate="label" type="select" sortOrder="20" showInDefault="1" showInWebsite="1" showInStore="1">
                    <label>Email Sender</label>
                    <source_model>Magento\Config\Model\Config\Source\Email\Identity</source_model>
                </field>
                <field id="email_template" translate="label comment" type="select" sortOrder="30" showInDefault="1" showInWebsite="1" showInStore="1">
                    <label>Email Template</label>
                    <comment>Email template chosen based on theme fallback when "Default" option is selected.</comment>
                    <source_model>Magento\Config\Model\Config\Source\Email\Template</source_model>
                </field>
            </group>
        </section>
    </system>
</config>
```

Create the [extension_path]/etc/config.xml file and add the following code:

```xml
<?xml version="1.0"?>
<config xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:noNamespaceSchemaLocation="urn:magento:module:Magento_Store:etc/config.xsd">
    <default>
        <ticketblaster>
            <email>
                <recipient_email><![CDATA[hello@example.com]]></recipient_email>
                <sender_email_identity>custom2</sender_email_identity>
                <email_template>ticketblaster_email_email_template</email_template>
            </email>
        </ticketblaster>
    </default>
</config>
```

Thanks to these two files, you can change the configuration for the e-mail template in the Admin panel (Stores | Configuration). Let's create our HTML form and the controller that will handle our submission. Open the existing [extension_path]/view/frontend/templates/view.phtml file and add the following code at the end:

```php
<form action="<?php echo $block->getUrl('events/view/share', array('event_id' => $event->getId())); ?>" method="post" id="form-validate" class="form">
    <h3><?php echo __('Share this event to my friend'); ?></h3>
    <input type="email" name="email" class="input-text" placeholder="email" />
    <button type="submit" class="button"><?php echo __('Share'); ?></button>
</form>
```

Create the [extension_path]/Controller/View/Share.php file and add the following code:

```php
<?php
namespace Blackbird\TicketBlaster\Controller\View;

use Magento\Framework\Exception\NotFoundException;
use Magento\Framework\App\RequestInterface;
use Magento\Store\Model\ScopeInterface;
use Blackbird\TicketBlaster\Api\Data\EventInterface;

class Share extends \Magento\Framework\App\Action\Action
{
    [...]
}
```

This controller will get the necessary configuration entirely from the admin and generate the e-mail to be sent.
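The controller body is elided above; purely as a sketch of the moving parts (not the book's exact code), an execute() method would typically read the configuration values we just declared through ScopeConfigInterface and send the message with Magento's TransportBuilder. This assumes $this->scopeConfig, $this->transportBuilder, $this->storeManager, and $this->event are injected via the constructor:

```php
// Hypothetical sketch of the elided execute() logic -- names and details are assumptions.
public function execute()
{
    $post = $this->getRequest()->getPostValue();
    $storeId = $this->storeManager->getStore()->getId();

    // Read the values declared in system.xml/config.xml (path: section/group/field).
    $templateId = $this->scopeConfig->getValue('ticketblaster/email/email_template', ScopeInterface::SCOPE_STORE, $storeId);
    $sender = $this->scopeConfig->getValue('ticketblaster/email/sender_email_identity', ScopeInterface::SCOPE_STORE, $storeId);

    $transport = $this->transportBuilder
        ->setTemplateIdentifier($templateId)
        ->setTemplateOptions(['area' => \Magento\Framework\App\Area::AREA_FRONTEND, 'store' => $storeId])
        ->setTemplateVars(['post' => $post, 'event' => $this->event->getData()])
        ->setFrom($sender)   // resolves the "Email Sender" identity (e.g. custom2)
        ->addTo($post['email'])
        ->getTransport();
    $transport->sendMessage();

    return $this->resultRedirectFactory->create()->setRefererUrl();
}
```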
Testing our code by sending the e-mail

Go to the page of an event and fill in the form we prepared. When you submit it, Magento will send the e-mail immediately.

Summary

In this article, we addressed all the main processes involved in internationalization. We can now create and control the availability of our events with regard to Magento's store views, and translate the contents of our pages and e-mails.

Resources for Article:

Further resources on this subject:

- Magento Theme Distribution [article]
- Installing Magento [article]
- Magento 2 – the New E-commerce Era [article]

Introducing R, RStudio, and Shiny

Packt
25 Sep 2015
9 min read
In this article by Hernán G. Resnizky, author of the book Learning Shiny, the main objective is to learn how to install all the components needed to build an application in R with Shiny. Additionally, some general ideas about what R is will be covered in order to be able to dive deeper into programming using R. The following topics will be covered:

- A brief introduction to R, RStudio, and Shiny
- Installation of R and Shiny
- General tips and tricks

About R

As stated on the R-project main website: "R is a language and environment for statistical computing and graphics."

R is a successor of S and is a GNU project. This means, briefly, that anyone can have access to its source code and can modify or adapt it to their needs. Nowadays, it is gaining territory over classic commercial software, and it is, along with Python, the most used language for statistics and data science. R's main characteristics include the following:

- Object oriented: R is a language that is composed mainly of objects and functions.
- Can be easily contributed to: Like other GNU projects, R is constantly being enriched by users' contributions, either by making their code accessible via "packages" or libraries, or by editing/improving its source code. There are currently almost 7,000 packages in the common R repository, the Comprehensive R Archive Network (CRAN). Additionally, there are publicly accessible R repositories, such as the Bioconductor project, which contains packages for bioinformatics.
- Runtime execution: Unlike C or Java, R does not need compilation. This means that you can, for instance, write 2 + 2 in the console and it will return the value.
- Extensibility: The R functionalities can be extended through the installation of packages and libraries. Standard proven libraries can be found in the CRAN repositories and are accessible directly from R by typing install.packages().

Installing R

R can be installed on every operating system. It is highly recommended to download the program directly from http://cran.rstudio.com/ when working on Windows or Mac OS. On Ubuntu, R can be easily installed from the terminal as follows:

```
sudo apt-get update
sudo apt-get install r-base
sudo apt-get install r-base-dev
```

The installation of r-base-dev is highly recommended, as it is a package that enables users to compile R packages from source, that is, maintain the packages or install additional R packages directly from the R console using the install.packages() command. To install R on other UNIX-based operating systems, visit the following links:

- http://cran.rstudio.com/
- http://cran.r-project.org/doc/manuals/r-release/R-admin.html#Obtaining-R

A quick guide to R

When working on Windows, R can be launched via its application; after the installation, it is available like any other program on Windows. When working on Linux, you can access the R console directly by typing R on the command line. In both cases, R executes at runtime: you can type in code, press Enter, and the result is given immediately, as follows:

```
> 2+2
[1] 4
```

The R application on any operating system does not provide an easy environment for developing code. For this reason, it is highly recommended (not only for writing web applications in R with Shiny, but for any task you want to perform in R) to use an Integrated Development Environment (IDE).
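To make the runtime-execution and extensibility points concrete, the following short session (the package choice is just an example) installs a CRAN package, loads it, and evaluates expressions interactively:

```r
# Install a package from CRAN, then load it for the current session.
install.packages("ggplot2")
library(ggplot2)

# R evaluates expressions at runtime -- no compilation step needed.
x <- c(1, 2, 3, 4)
mean(x)      # [1] 2.5
summary(x)   # minimum, quartiles, median, and mean in one call
```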
About RStudio

As with other programming languages, there is a huge variety of IDEs available for R. IDEs are applications that make code development easier and clearer for the programmer. RStudio is one of the most important ones for R, and it is especially recommended for writing web applications in R with Shiny, because it contains features specially designed for R. Additionally, RStudio provides facilities to write C++, LaTeX, or HTML documents and integrates them with R code. RStudio also provides version control, project management, and debugging features, among many others.

Installing RStudio

RStudio for desktop computers can be downloaded from its official website at http://www.rstudio.com/products/rstudio/download/, where you can get versions of the software for Windows, Mac OS X, Ubuntu, Debian, and Fedora.

Quick guide to RStudio

Before installing and running RStudio, it is important to have R installed; as RStudio is an IDE and not the programming language itself, it will not work at all without R. At first glance, RStudio's starting view presents four main windows:

- Text editor: This provides facilities for writing R scripts, such as highlighting and a code completer (when hitting Tab, you can see the available options to complete the code written). It is also possible to include R code in an HTML, LaTeX, or C++ piece of code.
- Environment and history: In the Environment section, you can see the active objects in each environment. By clicking on Global Environment (which is the environment shown by default), you can change the environment and see its active objects. In the History tab, the pieces of code executed are stored line by line. You can select one or more lines and send them either to the editor or to the console. In addition, you can look for a specific piece of code by typing it in the textbox in the top-right part of this window.
- Console: This is an exact equivalent of the R console, as described in A quick guide to R.
- Tabs: The different tabs are defined as follows:
  - Files: This consists of a file browser with several additional features (renaming, deleting, and copying). Clicking on a file will open it in the editor or the Environment tab depending on the type of the file. If it is a .rda or .RData file, it will open in both; if it is a text file, it will open in one of them.
  - Plots: Whenever a plot is executed, it will be displayed in this tab.
  - Packages: This shows a list of available and active packages. When a package is active, it appears as checked. Packages can also be installed interactively by clicking on Install Packages.
  - Help: This is a window for looking up and reading the documentation of active packages.
  - Viewer: This enables us to see HTML-generated content within RStudio.

Along with numerous features, RStudio also provides keyboard shortcuts. A few of them are listed as follows:

| Description | Windows/Linux | OS X |
| --- | --- | --- |
| Complete the code. | Tab | Tab |
| Run the selected piece of code. If no piece of code is selected, the active line is run. | Ctrl + Enter | ⌘ + Enter |
| Comment the selected block of code. | Ctrl + Shift + C | ⌘ + / |
| Create a section of code, which can be expanded or collapsed by clicking on the arrow to the left. Additionally, it can be accessed by clicking on it in the bottom-left menu. | ##### | ##### |
| Find and replace. | Ctrl + F | ⌘ + F |

A block of code can be collapsed by clicking on the arrow next to its section name and accessed quickly by clicking on its name in the bottom-left part of the window. The full list of shortcuts can be found at https://support.rstudio.com/hc/en-us/articles/200711853-Keyboard-Shortcuts. For further information about other RStudio features, the full documentation is available at https://support.rstudio.com/hc/en-us/categories/200035113-Documentation.

About Shiny

Shiny is a package created by RStudio which makes it easy to interface R with a web browser. As stated in its official documentation, Shiny is a web application framework for R that makes it incredibly easy to build interactive web applications with R. One of its main advantages is that there is no need to combine R code with HTML/JavaScript code, as the framework already contains prebuilt features that cover the most commonly used functionalities in an interactive web application. There is a wide range of software with web application functionality, especially oriented toward interactive data visualization. What are the advantages of using R/Shiny then, you ask? They are as follows:

- It is free not only in terms of money but, as with all GNU projects, in terms of freedom. As stated on the GNU main page: To understand the concept (GNU), you should think of free as in free speech, not as in free beer. Free software is a matter of the users' freedom to run, copy, distribute, study, change, and improve the software.
- All the possibilities of a powerful language such as R are available. Thanks to its contributive essence, you can develop a web application that can display any R-generated output. This means that you can, for instance, run complex statistical models and return the output in a friendly way in the browser, obtain and integrate data from various sources and formats (for instance, SQL, XML, JSON, and so on) the way you need, and subset, process, and dynamically aggregate the data the way you want. These options are not available (or are much more difficult to accomplish) in most commercial BI tools.

Installing and loading Shiny

As with any other package available in the CRAN repositories, the easiest way to install Shiny is by executing install.packages("shiny"). Due to R's extensibility, many of its packages use elements (mostly functions) from other packages. For this reason, these packages are loaded or installed when a package that depends on them is loaded or installed; this is called a dependency. Shiny (in its 0.10.2.1 version) depends on Rcpp, httpuv, mime, htmltools, and R6. An R session starts with only the minimal packages loaded, so if functions from other packages are used, those packages need to be loaded first. The corresponding command is as follows:

```r
library(shiny)
```

When installing a package, the package name must be quoted; when loading the package, it must be unquoted.
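With Shiny installed and loaded, a minimal single-file app can be run straight from the console to confirm everything works. This is only a sketch using the common ui/server pair pattern; the input bounds and labels are arbitrary:

```r
library(shiny)

# UI: a slider input and a text output.
ui <- fluidPage(
  sliderInput("n", "Pick a number:", min = 1, max = 100, value = 50),
  textOutput("squared")
)

# Server: reacts to the slider and renders its square.
server <- function(input, output) {
  output$squared <- renderText({
    paste(input$n, "squared is", input$n ^ 2)
  })
}

shinyApp(ui = ui, server = server)
```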
Summary

After these instructions, the reader should be able to install all the fundamental elements needed to create a web application with Shiny, and should have acquired at least a general idea of what R and the R project are.

Resources for Article:

Further resources on this subject:

- R – Classification and Regression Trees [article]
- An overview of common machine learning tasks [article]
- Taking Control of Reactivity, Inputs, and Outputs [article]

Testing Single Page Applications (SPAs) using Vue.js developer tools

Pravin Dhandre
25 May 2018
8 min read
Testing, especially for big applications, is paramount – especially when deploying your application to a development environment. Whether you choose unit testing or browser automation, there is a host of articles and books available on the subject. In this tutorial, we cover the usage of the Vue developer tools to test Single Page Applications. We will also touch upon alternative tools, such as Nightwatch.js, Selenium, and TestCafe. This article is an excerpt from a book written by Mike Street, titled Vue.js 2.x by Example.

Using the Vue.js developer tools

The Vue developer tools are available for Chrome and Firefox and can be downloaded from GitHub. Once installed, they become an extension of the browser developer tools; in Chrome, for example, they appear after the Audits tab. The Vue developer tools will only work when you are using Vue in development mode. By default, the un-minified version of Vue has the development mode enabled. However, if you are using the production version of the code, the development tools can be enabled by setting the devtools variable to true in your code:

```javascript
Vue.config.devtools = true
```

We've been using the development version of Vue, so the dev tools should work with all three of the SPAs we have developed. Open the Dropbox example and open the Vue developer tools.

Inspecting Vue components data and computed values

The Vue developer tools give a great overview of the components in use on the page. You can also drill down into the components and preview the data in use on a particular instance. This is perfect for inspecting the properties of each component on the page at any given time. For example, if we inspect the Dropbox app and navigate to the Components tab, we can see the <Root> Vue instance and the <DropboxViewer> component. Clicking the latter reveals all of the data properties of the component, along with any computed properties. This lets us validate whether the structure is constructed correctly, along with the computed path property. Drilling down into each component, we can access individual data objects and computed properties. Using the Vue developer tools to inspect your application is a much more efficient way of validating data while creating your app, as it saves having to place several console.log() statements.

Viewing Vuex mutations and time-travel

Navigating to the next tab, Vuex, allows us to watch store mutations taking place in real time. Every time a mutation is fired, a new line is created in the left-hand panel. This element allows us to view what data is being sent, and what the Vuex store looked like before and after the data had been committed. It also gives you several options to revert, commit, and time-travel to any point. Loading the Dropbox app, several structure mutations immediately populate the left-hand panel, listing the mutation name and the time they occurred. This is the code pre-caching the folders in action. Clicking on each one reveals the Vuex store state, along with a mutation containing the payload sent. The state display reflects the store after the payload has been sent and the mutation committed; to preview what the state looked like before that mutation, select the preceding entry. On each entry, next to the mutation name, you will notice three symbols that allow you to carry out several actions and directly mutate the store in your browser:

- Commit this mutation: This allows you to commit all the data up to that point. It will remove all of the mutations from the dev tools and update the Base State to this point. This is handy if there are several mutations occurring that you wish to keep track of.
- Revert this mutation: This will undo the mutation and all mutations after this point. This allows you to carry out the same actions again and again without pressing refresh or losing your current place. For example, when adding a product to the basket in our shop app, a mutation occurs. Using this option would allow you to remove the product from the basket and undo any following mutations without navigating away from the product page.
- Time-travel to this state: This allows you to preview the app and state at that particular mutation, without reverting any mutations that occur after the selected point.

The mutations tab also allows you to commit or revert all mutations at the top of the left-hand panel. Within the right-hand panel, you can also import and export a JSON-encoded version of the store's state. This is particularly handy when you want to re-test several circumstances and instances without having to reproduce several steps.

Previewing event data

The Events tab of the Vue developer tools works in a similar way to the Vuex tab, allowing you to inspect any events emitted throughout your app. Changing the filters in this app emits an event each time the filter type is updated, along with the filter query. The left-hand panel again lists the name of the event and the time it occurred; the right panel contains information about the event, including its component origin and payload. This data allows you to ensure the event data is as you expected it to be and, if not, helps you locate where the event is being triggered. The Vue dev tools are invaluable, especially as your JavaScript application gets bigger and more complex. Open the shop SPA we developed and inspect the various components and Vuex data to get an idea of how this tool can help you create applications that only commit the mutations they need to and emit only the events they have to.

Testing your Single Page Application

The majority of Vue testing suites revolve around having command-line knowledge and creating a Vue application using the CLI (command-line interface). Along with creating applications in frontend-compatible JavaScript, Vue also has a CLI that allows you to create applications using component-based files. These are files with a .vue extension that contain the template HTML along with the JavaScript required for the component. They also allow you to create scoped CSS – styles that only apply to that component. If you chose to create your app using the CLI, all of the theory and a lot of the practical knowledge you have learned in this book can easily be ported across.

Command-line unit testing

Along with component files, the Vue CLI makes it easier to integrate with command-line unit testing tools, such as Jest, Mocha, Chai, and TestCafe (https://testcafe.devexpress.com/). For example, TestCafe allows you to specify several different tests, from checking whether content exists to clicking buttons to test functionality. An example TestCafe test, checking whether the filtering component in our first app contains the word Filter, would be:

```javascript
test('The filtering contains the word "filter"', async testController => {
    const filterSelector = await new Selector('body > #app > form > label:nth-child(1)');

    await testController.expect(filterSelector.innerText).eql('Filter');
});
```

This test would then evaluate to true or false.
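For the snippet above to run, TestCafe also needs an import and a fixture declaring which page to open. A minimal runnable sketch might look like the following; the URL is a placeholder for wherever your SPA is served:

```javascript
import { Selector } from 'testcafe';

// Hypothetical dev-server URL -- point this at your own app.
fixture('Filtering component').page('http://localhost:8080/');

test('The filtering contains the word "filter"', async testController => {
    const filterSelector = Selector('body > #app > form > label:nth-child(1)');

    await testController.expect(filterSelector.innerText).eql('Filter');
});
```

You would then run it with the TestCafe CLI, for example testcafe chrome tests/filter-test.js.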
Unit tests are generally written in conjunction with the components themselves, allowing components to be reused and tested in isolation. This lets you check that external factors have no bearing on the output of your tests. Most command-line JavaScript testing libraries will integrate with Vue.js; there is a great list available in the awesome-vue GitHub repository (https://github.com/vuejs/awesome-vue#test).

Browser automation

The alternative to command-line unit testing is to automate your browser with a testing suite. This kind of testing is still triggered via the command line, but rather than integrating directly with your Vue application, it opens the page in the browser and interacts with it like a user would. A popular tool for doing this is Nightwatch.js (http://nightwatchjs.org/). You may use this suite for opening your shop, interacting with the filtering component or the product list ordering, and comparing the result. The tests are written in very colloquial English and are not restricted to being on the same domain name or file network as the site to be tested. The library is also language agnostic – it works for any website, regardless of what it is built with. The example Nightwatch.js gives on their website is for opening Google and ensuring the first result of a Google search for rembrandt van rijn is the Wikipedia entry:

```javascript
module.exports = {
    'Demo test Google' : function (client) {
        client
            .url('http://www.google.com')
            .waitForElementVisible('body', 1000)
            .assert.title('Google')
            .assert.visible('input[type=text]')
            .setValue('input[type=text]', 'rembrandt van rijn')
            .waitForElementVisible('button[name=btnG]', 1000)
            .click('button[name=btnG]')
            .pause(1000)
            .assert.containsText('ol#rso li:first-child', 'Rembrandt - Wikipedia')
            .end();
    }
};
```

An alternative to Nightwatch is Selenium (http://www.seleniumhq.org/). Selenium has the advantage of having a Firefox extension that allows you to visually create tests and commands.

We covered the usage of the Vue.js dev tools and learned to build automated tests for your web applications. If you found this tutorial useful, do check out the book Vue.js 2.x by Example, a complete knowledge resource on the process of building single-page applications with Vue.js.

Further reading:

- Building your first Vue.js 2 Web application
- 5 web development tools will matter in 2018

App and web development in 2020: what you need to learn

Richard Gall
19 Dec 2019
6 min read
Web developers and app developers: not sure what you should be learning over the next 12 months? In such a fast-changing industry it can be hard to get a sense of which direction the proverbial wind is blowing, which means that when it comes to your future you can leave yourself in a somewhat weakened position. So, if you're looking for some ideas on how to organize your learning, look no further than this quick list.

Learn how to build microservices

Microservices are the dominant architectural model. There are many different reasons for this - the growth of APIs and web services, containerization - but ultimately what's important is that microservices allow you to build software in a way that's modular. That promotes efficiency and agility which, in turn, empowers developers. So, if you don't yet have experience of developing microservices, you could do a lot worse than learning how to build them in 2020. Indeed, even if you don't have a professional reason to learn more about microservices, exploring the topic through personal projects could be a real benefit to you in the future. Find microservices content in Packt's range of cloud bundles.

Learn a new language

This one might not be at the top of your agenda. If you've been working with JavaScript or Java for years, it seems strange to make time to learn something completely new. Surely, you're probably thinking, I'd rather invest my time and energy into learning something that I can use at work? In fact, learning a new language might just be one of the best things you can do in 2020. Not only will it give your resume a gentle kick and potentially open up new opportunities for you in the future, it will also give you a better holistic understanding of how programming languages work and what their relative limitations and advantages over one another are. It's a cliche that travel broadens the mind, but when it comes to exploring and adventuring into new languages it's certainly true.

But what should you learn? That, really, is down to you, what your background is, and where you want to go. Some options are obvious - if you're a Java developer, Kotlin is the natural next step. Sometimes it's not so obvious - JavaScript developers, for example, might want to learn Go if they're becoming more familiar with backend development, maybe even Rust or C++ if they're feeling particularly adventurous and ambitious. Explore new programming languages: click here to find Packt's programming bundles of eBooks and videos.

Learn a new framework

Even if you don't think learning a new language is appropriate for you, learning a new framework might be a more practical option that you can begin to use immediately. In the middle of the decade, when Angular.js was gaining traction across web development, there was a lot of debate and discussion about the 'best' framework. Fortunately, this discourse has declined as it's become clearer that choosing a framework is really about the best tool for the job, not a badge of personal identity. With this change, being open to learning new tools and frameworks will help you not only in terms of building out your resume, but also in terms of having a wide range of options for solving problems in your day-to-day work. Packt's web development bundles feature a range of eBooks on the most in-demand frameworks and tools.

Re-learn a language you know

Although it's good to learn new things (obviously), we don't talk enough about how valuable it can be to go back and learn something anew.
This is for a couple of different reasons: on the one hand, it's good to be able to review your level of understanding and to get up to speed with new features and capabilities that you might not have previously known about, but it also gives you a chance to explore a language from scratch and try to uncover a new perspective on it. This is particularly useful if you want to learn a new paradigm, like functional programming. Going back to core principles and theory is essential if you're to unlock new levels of performance and control. Although it's easy to dismiss theory as something academic, let's make 2020 the year we properly begin to appreciate the close kinship between theory and practice.

Learn how to go about learning

It's a given that being a developer means lifelong learning. And while we encourage you to take our advice and be open to new frameworks, microservices, and new programming languages, ultimately what you learn starts and ends with you. This isn't easy - in fact, it's probably one of the hardest aspects of working in technology, full stop. That's why, in 2020, you should make it your goal to learn more about learning. This article written by Jenn Schiffer is a couple of years old now, but it articulates this challenge very well. She focuses mainly on some of the anxieties around learning new things and entering new communities, but the overarching point - that the conversation around how and why technologies should be used needs to be clearer - is a good one. True, there's not much you can do about poor documentation or a toxic community, but you can think about your learning in a way that's both open-minded and well structured.

For example, think about what you want or need to do - be reflective about your work and career. Be inquisitive and exploratory in your research - talk to people you know and trust, and read opinions and experiences from people you've never even heard of. By adopting a more intentional approach to the things you learn and the way you learn about them, you'll find that you'll not only be able to learn new skills more efficiently, you'll also enjoy it more too. Explore Packt's full range of eBooks and videos on the Packt store.

Web Development with React and Bootstrap

Packt
04 Jan 2017
18 min read
In this article by Harmeet Singh and Mehul Bhat, the authors of the book Learning Web Development with React and Bootstrap, we are going to see how we can build responsive web applications with the help of Bootstrap and ReactJS.

There are many different ways to build modern web applications with JavaScript and CSS, including a lot of different tool choices and a lot of new theory to learn. This article introduces you to ReactJS and Bootstrap, which you will likely come across as you learn about modern web app development. Both are used for building fast and scalable user interfaces. React is famously known as the view (V) in MVC; when we talk about defining M and C, we need to look elsewhere, or we can use other frameworks like Redux and Flux to handle remote data. The best way to learn code is to write code, so we're going to jump right in. To show you just how easy it is to get up and running with Bootstrap and ReactJS, we're going to cover the theory and see how we can make super simple applications as well as integrate with other applications.

ReactJS

React (sometimes styled React.js or ReactJS) is an open-source JavaScript library which provides a view for data rendered as HTML. Components have typically been used to render React views, which contain additional components specified as custom HTML tags. React gives you a trivial virtual DOM, powerful views without templates, unidirectional data flow, and explicit mutation. It is very methodical about updating the HTML document when data changes, and it provides a clean separation between components in a modern single-page application. As your app comes into existence and develops, it's advantageous to ensure that your components are used in the right manner: a React app consists of reusable components, which makes code reuse, testing, and separation of concerns easy. React is not only the V in MVC; it has stateful components, it handles mapping from input to state changes, and it renders components. In this sense, it does everything that an MVC would do. We will look at React's component life cycle and its different levels later in this article.

Bootstrap

Bootstrap is an open source frontend framework maintained by Twitter for developing responsive websites and web applications. It includes HTML, CSS, and JavaScript code to build user interface components. It's a faster and easier way to develop a powerful, mobile-first user interface. The Bootstrap grid system allows us to create responsive 12-column grids, layouts, and components, and it includes predefined classes for easy layout options (fixed width and full width). Bootstrap has dozens of pre-styled reusable components and custom jQuery plugins, such as button, alerts, dropdown, modal, tooltip, tab, pagination, carousel, badges, and icons. The Bootstrap package includes the compiled and minified versions of its CSS and JS; for our app, we just need the bootstrap.min.css stylesheet and the fonts folder. This stylesheet provides the look and feel of all the components and the responsive layout structure for our application. In the previous version, Bootstrap included icons as images, but in version 3 icons have been replaced with fonts. We can also customize the Bootstrap CSS stylesheet as per the components featured in our application.

React-Bootstrap

The React-Bootstrap JavaScript framework is similar to Bootstrap rebuilt for React: a complete reimplementation of the Bootstrap frontend reusable components in React.
React-Bootstrap has no dependency on any other framework, such as Bootstrap.js or jQuery. This means that if you are using React-Bootstrap, you don't need to include jQuery in your project as a dependency. Using React-Bootstrap, we can be sure that there won't be external JavaScript calls to render the component which might be incompatible with the React DOM render. However, you can still achieve the same functionality and look and feel as Twitter Bootstrap, but with much cleaner code.

Benefits of React-Bootstrap

Compared to Twitter Bootstrap, we can import only the required code/component, which saves a bit of typing and, more importantly, avoids conflicts by compressing the Bootstrap. Further benefits:

- We don't need to think about the different approaches taken by Bootstrap versus React
- It is easy to use
- It encapsulates logic in elements
- It uses JSX syntax
- It avoids conflicts with React's rendering of the virtual DOM
- It is easy to detect DOM changes and update the DOM without any conflict
- It doesn't have any dependency on other libraries, such as jQuery

Bootstrap grid system

Bootstrap is based on a 12-column grid system which includes a powerful responsive structure and a mobile-first fluid grid system that allows us to scaffold our web app with very few elements. In Bootstrap, we have a predefined series of classes to compose rows and columns, so before we start we need to include a <div> tag with the container class to wrap our rows and columns. Otherwise, the framework won't respond as expected, because Bootstrap has written CSS which is dependent on it:

```html
<div class="container"></div>
```

This will centre your web app on the page and control the rows and columns so they respond as expected. There are four class prefixes which help define the behaviour of the columns. All the classes are related to different device screen sizes and react in familiar ways. The following table from http://getbootstrap.com defines the variations between all four classes:

| | Extra small devices: Phones (<768px) | Small devices: Tablets (≥768px) | Medium devices: Desktops (≥992px) | Large devices: Desktops (≥1200px) |
| --- | --- | --- | --- | --- |
| Grid behavior | Horizontal at all times | Collapsed to start, horizontal above breakpoints | Collapsed to start, horizontal above breakpoints | Collapsed to start, horizontal above breakpoints |
| Container width | None (auto) | 750px | 970px | 1170px |
| Class prefix | .col-xs- | .col-sm- | .col-md- | .col-lg- |
| # of columns | 12 | 12 | 12 | 12 |
| Column width | Auto | ~62px | ~81px | ~97px |
| Gutter width | 30px (15px on each side of a column) | 30px | 30px | 30px |
| Nestable | Yes | Yes | Yes | Yes |
| Offsets | Yes | Yes | Yes | Yes |
| Column ordering | Yes | Yes | Yes | Yes |

React components

React is based on a modular build, with encapsulated components that manage their own state, so it will efficiently update and render your components when data changes. In React, component logic is written in JavaScript instead of templates, so you can easily pass rich data through your app and manage state outside the DOM. Using the render() method, we render a component in React that takes input data and returns what you want to display. It can either take HTML tags (strings) or React components (classes).
Let's take a quick look at examples of both:

```javascript
var myReactElement = <div className="hello" />;
ReactDOM.render(myReactElement, document.getElementById('example'));
```

In this example, we are passing HTML as a string into the render method, as we did before creating the <Navbar>:

```javascript
var ReactComponent = React.createClass({/*...*/});
var myReactElement = <ReactComponent someProperty={true} />;
ReactDOM.render(myReactElement, document.getElementById('example'));
```

In the preceding example, we are rendering the component, creating a local variable that starts with an uppercase letter. Using the uppercase versus lowercase convention in React's JSX distinguishes local component classes from HTML tags. So, we can create our React elements or components in two ways: either we can use plain JavaScript with React.createElement, or React's JSX.

React.createElement()

Using JSX in React is completely optional for creating a React app. As we know, we can create elements with React.createElement, which takes three arguments: a tag name or component, a properties object, and a variable number of child elements, which is optional:

```javascript
var profile = React.createElement('li', {className: 'list-group-item'}, 'Profile');
var profileImageLink = React.createElement('a', {className: 'center-block text-center', href: '#'}, 'Image');
var profileImageWrapper = React.createElement('li', {className: 'list-group-item'}, profileImageLink);
var sidebar = React.createElement('ul', {className: 'list-group'}, profile, profileImageWrapper);
ReactDOM.render(sidebar, document.getElementById('sidebar'));
```

In the preceding example, we have used React.createElement to generate a ul/li structure. React already has built-in factories for common DOM HTML tags.

JSX in React

JSX is an extension of JavaScript syntax, and if you observe the syntax or structure of JSX, you will find it similar to XML coding. JSX performs a preprocessor step which adds XML syntax to JavaScript. Though you can certainly use React without JSX, JSX makes React code a lot more neat and elegant. Similar to XML, JSX tags have a tag name, attributes, and children, and if an attribute value is enclosed in quotes, that value becomes a string. The way XML works with balanced opening and closing tags, JSX works similarly, and it also makes large structures easier to read and understand than JavaScript functions and objects.

Advantages of using JSX in React

Take a look at the following points:

- JSX is very simple to understand and think about compared to JavaScript functions
- The markup of JSX is more familiar to non-programmers
- Using JSX, your markup becomes more semantic, organized, and significant

JSX – acquaintance or understanding

In development teams, user interface developers, user experience designers, and quality assurance people are not necessarily familiar with any programming language, but JSX makes their life easy by providing an easy syntax structure which is visually similar to HTML structure. JSX shows a path to indicate and see, through your mind's eye, the structure in a solid and concise way.

JSX – semantics/structured syntax

So far, we have seen how JSX syntax is easy to understand and visualize. Behind this is a big reason for having a semantic syntax structure: JSX turns your JavaScript code into a more standard form, which gives clarity to set your semantic syntax and significant components.
With the help of JSX syntax you can declare the structure of your custom components with the same kind of information you use in HTML syntax, and JSX will do all the magic of transforming that syntax into JavaScript functions. The ReactDOM namespace helps us to use all HTML elements with the help of ReactJS: isn't that an amazing feature? It is. Moreover, the good part is that you can write your own named components with the help of the ReactDOM namespace. Check out the simple HTML markup below and how a JSX component helps you to have semantic markup:

<div className="divider">
  <h2>Questions</h2><hr />
</div>

As you can see in the preceding example, we have wrapped <h2>Questions</h2><hr /> with a <div> tag that has the className divider. So, in a React composite component, you can create a similar structure, and it is as easy as writing HTML with semantic syntax:

<Divider>Questions</Divider>

Composite component

As we know, you can create your custom component with JSX markup, and JSX syntax will transform your component into JavaScript syntax.

Namespace components

This is another feature available in React JSX. We know that JSX is just an extension of JavaScript syntax, and it also provides the ability to use a namespace, so React uses a JSX namespace pattern rather than XML namespacing. By using the standard JavaScript syntax approach, which is object property access, this feature is useful for assigning a component directly as <Namespace.Component /> instead of assigning variables to access components that are stored in an object.

JSXTransformer

JSXTransformer is another tool to compile JSX in the browser. While reading the code, the browser will read the attribute type="text/jsx" in your <script> tag, transform only those scripts that have that type attribute, and then execute the script or the functions written in that file. The code will be executed in the same manner in which React-tools executes it on the server. JSXTransformer is deprecated in the current version of React, but you can still find it on any of the provided CDNs and on Bower. In my opinion, it is better to use the Babel REPL tool to compile JavaScript; it has already been adopted by React and the broader JavaScript community.

Attribute expressions

As you can see in the show/hide example above, we have used an attribute expression to show and hide the message panel. In React, there is a small change in how we write an attribute value: in plain HTML we write the value in quotes (""), but in JSX we have to provide a pair of curly braces ({}):

var showhideToggle = this.state.collapse ? <MessagePanel /> : null;

Boolean attributes

A Boolean attribute has two values, true or false, and if we omit its value in JSX while declaring the attribute, it defaults to true. If we want the attribute value to be false, then we have to use an attribute expression. This scenario comes up regularly when we use HTML form elements, for example with the disabled, required, checked, and readOnly attributes. In the Bootstrap example:

aria-haspopup="true" aria-expanded="true"

// example of writing the disabled attribute in JSX
<input type="button" disabled />;
<input type="button" disabled={true} />;

JavaScript expressions

As seen in the preceding example, you can embed JavaScript expressions in JSX using syntax that will be familiar to any handlebars user; for example, style={displayStyle} allocates the value of the JavaScript variable displayStyle to the element's style attribute.
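To tie the last three ideas together (attribute expressions, Boolean attributes, and embedded JavaScript expressions), here is a small sketch of my own rather than a listing from the book: the MessagePanel component and the collapse state are assumptions carried over from the snippets above, and the toggle handler and markup are made up for illustration:

var MessagePanel = React.createClass({
  render: function() {
    // children are whatever the caller nests inside <MessagePanel>
    return <div className="panel panel-default">{this.props.children}</div>;
  }
});

var ShowHide = React.createClass({
  getInitialState: function() {
    return { collapse: true };
  },
  toggle: function() {
    this.setState({ collapse: !this.state.collapse });
  },
  render: function() {
    // attribute expression: render the panel only while collapse is true
    var showhideToggle = this.state.collapse ? <MessagePanel>Hello!</MessagePanel> : null;
    return (
      // JavaScript expression: an ordinary object assigned to style
      <div style={{ display: 'block' }}>
        {/* value="Toggle" is a quoted string attribute;
            disabled={false} is an explicit Boolean attribute */}
        <input type="button" value="Toggle" disabled={false} onClick={this.toggle} />
        {showhideToggle}
      </div>
    );
  }
});

ReactDOM.render(<ShowHide />, document.getElementById('example'));

Note that React.createClass autobinds toggle to the component instance, which is why it can be passed straight to onClick without any manual binding.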
Styles

As with expressions, you can set styles by assigning an ordinary JavaScript object to the style attribute. So instead of writing CSS syntax, you can write JavaScript code to achieve the same result, with no extra effort.

Events

There is a set of event handlers that you can bind in a way that should look familiar to anybody who knows HTML. Generally, setting properties directly onto a component object is an anti-pattern in the JSX attribute standard:

var component = <Component />;
component.props.foo = x; // bad
component.props.bar = y; // also bad

As shown in the preceding example, this is an anti-pattern and not best practice. If you don't declare the properties as JSX attributes, propTypes won't be set, and errors will be thrown that are difficult to trace. props is a very sensitive part of a component, and you should not mutate it: each prop has a predefined purpose, and you should use it as it is meant to be used, just as we use other JavaScript methods or HTML tags. This doesn't mean that it is impossible to change props; it is possible, but it goes against the standard defined by React, and React will throw an error.

Spread attributes

Let's check out another JSX feature: spread attributes:

var props = {};
props.foo = x;
props.bar = y;
var component = <Component {...props} />;

As you can see in the above example, the properties you have declared have become part of your component's props as well. Reusability of attributes is possible here, and you can also combine them with other attributes. But you have to be very careful with the order in which you declare your attributes, as a later declaration will override an earlier declaration of the same attribute.

Props and state

React components translate your raw data into rich HTML; the props and the state together build on that raw data to keep your UI consistent. OK, let's identify what exactly they are:

- Props and state are both plain JS objects, and a change in either triggers a render update.
- React manages component state by calling setState(data, callback). This method merges data into this.state and re-renders the component to keep our UI up to date. For example, the state of a drop-down menu (visible or hidden).
- React component props, short for "properties", don't change over time. For example, drop-down menu items. Sometimes components only take some data via this.props and render it, which makes the component stateless.
- Using props and state together helps you make an interactive app.

Component life cycle methods

In React, each component has its own specific callback functions. These callback functions play an important role when we are thinking about DOM manipulation or integrating other plugins (such as jQuery) in React. Let's look at some commonly used methods in the life cycle of a component:

- getInitialState(): This method helps you get the initial state of a component.
- componentDidMount: This method is called automatically when a component is rendered or mounted in the DOM for the first time. When integrating JavaScript frameworks, we use this method to perform operations like setTimeout or setInterval, or to send AJAX requests.
- componentWillReceiveProps: This method is used to receive new props.
- componentWillUnmount: This method is invoked before a component is unmounted from the DOM. Clean up the DOM memory elements that were mounted in the componentDidMount method here.
- componentWillUpdate: This method is invoked before the component is updated with new props and state.
- componentDidUpdate: This is invoked immediately after the component has been updated in the DOM.

What is Redux?

As we know, in single page applications (SPAs), when we have to deal with state and time, it can be difficult to handle state over time. Here, Redux helps a lot. How? Because in a JavaScript application, Redux handles two states: one is the data state and the other is the UI state, and it is a standard option for SPAs. Moreover, bear in mind that Redux can be used with AngularJS, jQuery, or React libraries and frameworks.

Now we know what Redux means: in short, Redux is a helping hand for managing state while developing JavaScript applications. We have seen in our previous examples that data flows in one direction only, from the parent level to the child level, which is known as "unidirectional data flow". React has the same flow direction, from data to components, so in this case it would be very difficult for two arbitrary components in React to communicate properly.

Redux's architecture benefits

Compared to other frameworks, it has more benefits:

- It may not have any other side effects
- As we know, binding is not needed, because components cannot interact directly
- States are managed globally, so there is less possibility of mismanagement
- Sometimes, for middleware, it would otherwise be difficult to manage side effects

React top-level API

When we are talking about the React API, it's the starting step to get into the React library. Different ways of using React provide different outputs: using React's script tag will make the top-level APIs available on the React global, using ES6 with npm will allow us to write import React from 'react', and using ES5 with npm will allow us to write var React = require('react'). So, there are multiple ways to initialize React with different features.

Mount/unmount component

It's always recommended to have a custom wrapper API around your API. Suppose we have a single root, or more than one root, and a root will be deleted at some point; with a wrapper, you will not lose it. Facebook has a similar setup, which automatically calls unmountComponentAtNode. I also suggest not calling ReactDOM.render() over and over; the ideal way is to write or use it through a library, so that the application has one place to manage mounting and unmounting. Creating a custom wrapper will help you to manage configuration in one place, such as internationalization, routers, and user data; it would be very painful to set up all the configuration every time in different places.

React integration with other APIs

React integration is nothing but converting a web component to a React component, using JSX, Redux, and other React methods. I would like to share here some of the best practices to follow to get 100% quality output.

Things to remember while creating an application with React

Take a look at the following points to remember:

- Before you start working with React, always remember that it is just a view library, not an MVC framework.
- It is advisable to keep components small when dealing with classes and modules; it makes life easier for code understanding, unit testing, and the long-term maintenance of a component.
- React introduced stateless functional components in version 0.14, and they are recommended; they help you split up your components.
- To avoid a painful journey while dealing with a React-based app, please don't use too much state.
- As I said earlier, React is only a view library, so to deal with the data part I recommend using Redux rather than other Flux frameworks.
- If you want more type safety, always use propTypes, which also help to catch bugs early and act as documentation.
- I recommend the shallow rendering method for testing React components, which allows you to render a single component without touching its child components.
- While dealing with large React applications, always use Webpack, npm, ES6, JSX, and Babel to complete your application.
- If you want to take a deep dive into a React application and its elements, you can use the Redux dev tools.

Summary

To begin with, we saw just how easy it is to get ReactJS and Bootstrap installed with the inclusion of JavaScript files and a style sheet. With Bootstrap, we worked towards having a responsive grid system for different mobile devices and applied the fundamental styles of HTML elements with the inclusion of a few classes and divs. We also saw the framework's new mobile-first responsive design in action, without cluttering up our markup with unnecessary classes or elements.

Resources for Article:

Further resources on this subject: Getting Started with React and Bootstrap [article] Getting Started with ASP.NET Core and Bootstrap 4 [article] Frontend development with Bootstrap 4 [article]
API Gateway and its Need

Packt
21 Feb 2018
9 min read
In this article by Umesh R Sharma, author of the book Practical Microservices, we will cover the API Gateway and the need for it, with simple and short examples.

(For more resources related to this topic, see here.)

Dynamic websites show a lot of information on a single page. A common order-success summary page, for example, shows the cart details and the customer's address; to build it, the frontend has to fire separate queries to the customer detail service and the order detail service. This is a very simple example of having multiple services behind a single page. Since a single microservice has to deal with only one concern, showing a lot of information on one page results in many API calls for the same page. So, a website or mobile page can be very chatty in terms of the calls needed to display its data.

Another problem is that, sometimes, a microservice talks over a protocol other than HTTP, such as Thrift calls and so on. Outside consumers can't directly deal with a microservice over that protocol.

Also, because a mobile screen is smaller than a web page, the data required by a mobile API call differs from a desktop one. A developer may want to return less data to the mobile API, or keep different versions of the API calls for mobile and desktop. So, you could face a problem such as this: each client is calling different web services and keeping track of them, and developers have to maintain backward compatibility because API URLs are embedded in clients, as in a mobile app.

Why do we need the API Gateway?

All of the preceding problems can be addressed with an API Gateway in place. The API Gateway acts as a proxy between the API consumer and the API servers. To address the first problem in that scenario, there will only be one call, such as /successOrderSummary, to the API Gateway. The API Gateway, on behalf of the consumer, calls the order and user detail services, then combines the results and serves them to the client. So basically, it acts as a facade for API calls: one call that may internally make many API calls. The API Gateway serves many purposes, some of which are as follows.

Authentication

API Gateways can take on the overhead of authenticating an API call from outside, after which all the internal calls can drop the security check. If a request comes from inside the VPC, the gateway can remove the security check, decrease the network latency a bit, and let the developer focus more on business logic than on security.

Different protocols

Sometimes, microservices internally use different protocols to talk to each other; these can be Thrift calls, TCP, UDP, RMI, SOAP, and so on. For clients, there can be just one REST-based HTTP call. Clients hit the API Gateway with the HTTP protocol, and the API Gateway can make the internal calls in the required protocols and combine the results from all the web services in the end. It can respond to the client in the required protocol; in most cases, that protocol will be HTTP.

Load-balancing

The API Gateway can work as a load balancer to handle requests in the most efficient manner. It can keep track of the request load it has sent to different nodes of a particular service, and it should be intelligent enough to balance the load between the different nodes of a particular service. With NGINX Plus coming into the picture, NGINX can be a good candidate for the API Gateway; it has many of the features needed to address the problems usually handled by an API Gateway.
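Before moving on to the gateway's remaining responsibilities, here is a minimal Node.js/Express sketch of the facade behaviour described in the /successOrderSummary example above. This is my own illustration, not from the book: the internal service URLs, the choice of node-fetch as HTTP client, and the response shape are all hypothetical stand-ins:

const express = require('express');
const fetch = require('node-fetch'); // hypothetical client choice; any HTTP client would do

const app = express();

// One client-facing endpoint that fans out to two internal services
app.get('/successOrderSummary', async (req, res) => {
  try {
    // call the order and customer services in parallel (URLs are made up)
    const [order, customer] = await Promise.all([
      fetch('http://order-service/orders/latest').then(r => r.json()),
      fetch('http://customer-service/customers/me').then(r => r.json())
    ]);
    // combine both results into a single response for the client
    res.json({ order, customer });
  } catch (err) {
    // a production gateway would add timeouts, retries, and circuit breaking here
    res.status(502).json({ error: 'upstream service failed' });
  }
});

app.listen(8080);

The client sees one HTTP call; the chattiness moves behind the gateway, which is exactly the trade the pattern makes.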
Request dispatching (including service discovery)

One main feature of the gateway is to reduce the communication between the client and the microservices. So, it initiates parallel calls to the microservices if that is what the client requires. From the client side, there will only be one hit: the gateway hits all the required services, waits for the results from all of them, and, after obtaining the responses, combines the results and sends them back to the client. Reactive microservice designs can help you achieve this.

Working with service discovery can provide many extra features. It can indicate which node of a service is the master and which is the slave. The same goes for a DB, where a write request can go to the master and a read request to the slave. This is the basic rule, but users can apply many more rules on the basis of the meta information provided to the API Gateway. The gateway can record the basic response time of each node of a service instance; higher-priority API calls can then be routed to the fastest-responding node. Again, such rules can be defined based on the API Gateway you are using and how it is implemented.

Response transformation

Being the first and single point of entry for all API calls, the API Gateway knows which type of client is calling: a mobile client, a web client, or another external consumer. It can make the internal calls on the client's behalf and give the data to different clients as per their needs and configuration.

Circuit breaker

To handle partial failure, the API Gateway uses a technique called the circuit breaker pattern (a toy sketch of this pattern follows at the end of this section). A failure in one service can cause a cascading failure that flows through all the service calls in the stack. The API Gateway can keep an eye on a threshold for any microservice; if a service passes that threshold, the gateway marks that API as an open circuit and decides not to make the call for a configured time. Hystrix (by Netflix) serves this purpose efficiently; its default is to trip after 20 failed requests in 5 seconds. Developers can also provide a fallback for the open circuit; this fallback can be a dummy service. Once the API starts giving results as expected again, the gateway marks it as a closed circuit.

Pros and cons of the API Gateway

Using the API Gateway has its own pros and cons. In the previous sections, we described the advantages of using the API Gateway already. I will still list them as the pros of the API Gateway.

Pros

- Microservices can focus on business logic
- Clients can get all the data in a single hit
- Authentication, logging, and monitoring can be handled by the API Gateway
- It gives the flexibility to use completely independent protocols between clients and microservices
- It can give tailor-made results, as per the client's needs
- It can handle partial failure

In addition to the pros mentioned above, there are also trade-offs in using this pattern.

Cons

- It can cause performance degradation due to the amount of work happening on the API Gateway
- A discovery service has to be implemented alongside it
- Sometimes, it becomes a single point of failure
- Managing routing is an overhead of the pattern
- It adds an additional network hop to the call
- Overall, it increases the complexity of the system
- Implementing too much logic in the gateway will lead to another dependency problem

So, before using the API Gateway, both of these aspects should be considered; the decision to include the API Gateway in the system increases the cost as well.
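As promised above, here is a toy JavaScript sketch of the circuit breaker idea. The thresholds echo the Hystrix defaults mentioned earlier (20 failures, 5-second window), but the implementation is deliberately simplified to a single failure counter instead of Hystrix's rolling window, and callOrderService and the fallback are hypothetical placeholders:

// A toy circuit breaker around a single service call
class CircuitBreaker {
  constructor(call, maxFailures = 20, resetMs = 5000) {
    this.call = call;          // the wrapped service call, returning a Promise
    this.maxFailures = maxFailures;
    this.resetMs = resetMs;
    this.failures = 0;
    this.openedAt = null;      // timestamp at which the circuit tripped
  }

  async invoke(fallback) {
    // open circuit: skip the real call until the cool-down elapses
    if (this.openedAt && Date.now() - this.openedAt < this.resetMs) {
      return fallback();
    }
    try {
      const result = await this.call();
      this.failures = 0;       // a healthy response closes the circuit again
      this.openedAt = null;
      return result;
    } catch (err) {
      this.failures++;
      if (this.failures >= this.maxFailures) {
        this.openedAt = Date.now(); // trip the circuit
      }
      return fallback();
    }
  }
}

// Usage sketch, inside an async function; callOrderService is hypothetical:
// const breaker = new CircuitBreaker(() => callOrderService());
// const order = await breaker.invoke(() => ({ status: 'unknown' }));

The fallback here plays the role of the "dummy service" described above: the client still gets an answer while the failing service is given time to recover.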
Before putting effort, cost, and management into this pattern, it is recommended to analyze how much you can gain from it.

Example of the API Gateway

In this example, we will show only a sample product page that fetches data from the product detail service to give information about the product. This example could be extended in many directions, but our focus is only on showing how the API Gateway pattern works, so we will try to keep this example simple and small. This example uses Zuul from Netflix as the API Gateway. Spring also has an implementation of Zuul, so we are creating this example with Spring Boot. For the sample API Gateway implementation, we will use http://start.spring.io/ to generate an initial template of our code. Spring Initializr is the project from Spring that helps beginners generate basic Spring Boot code. A user has to set a minimum of configuration and can hit the Generate Project button. If a user wants to set more specific details regarding the project, they can see all the configuration settings by clicking on the Switch to the full version button, as shown in the following screenshot:

Let's create a controller in the same package as the main application class and put the following code in the file:

@SpringBootApplication
@RestController
public class ProductDetailController {

    @Resource
    ProductDetailService pdService;

    @RequestMapping(value = "/product/{id}")
    public ProductDetail getProduct(@PathVariable("id") String id) {
        return pdService.getProductDetailById(id);
    }
}

In the preceding code, there is an assumption of a pdService bean that will interact with a Spring Data repository for product details and fetch the result for the required product ID. Another assumption is that this service is running on port 10000. Just to make sure everything is running, a hit on a URL such as http://localhost:10000/product/1 should give some JSON as a response.

For the API Gateway, we will create another Spring Boot application with Zuul support. Zuul can be activated by just adding the simple @EnableZuulProxy annotation. The following is simple code to start a simple Zuul proxy:

@SpringBootApplication
@EnableZuulProxy
public class ApiGatewayExampleInSpring {
    public static void main(String[] args) {
        SpringApplication.run(ApiGatewayExampleInSpring.class, args);
    }
}

Everything else is managed in configuration. In the application.properties file of the API Gateway, the content will be something like this:

zuul.routes.product.path=/product/**
zuul.routes.product.url=http://localhost:10000
ribbon.eureka.enabled=false
server.port=8080

With this configuration, we are defining a rule such as this: for any request to a URL such as /product/xxx, pass the request on to http://localhost:10000. To the outside world, the call will look like http://localhost:8080/product/1, which will internally be transferred to the service on port 10000. If we defined a spring.application.name variable as product in the product detail microservice, then we wouldn't need to define the path property here (zuul.routes.product.path=/product/**), as Zuul, by default, maps the URL /product to the service with that name. The example taken here for an API Gateway is not very intelligent, but Zuul is a very capable API Gateway. Depending on the routes, filters, and caching defined in Zuul's properties, one can make a very powerful API Gateway.

Summary

In this article, you learned about the API Gateway, the need for it, and its pros and cons, with a code example.
Resources for Article:   Further resources on this subject: What are Microservices? [article] Microservices and Service Oriented Architecture [article] Breaking into Microservices Architecture [article]
Web Scraping with Electron

Dylan Frankland
23 Dec 2016
12 min read
Web scraping is a necessary evil that takes unordered information from the Web and transforms it into something that can be used programmatically. The first big use of this was indexing websites for use in search engines like Google. Whether you like it or not, there will likely be a time in your life, career, academia, or personal projects when you will need to pull information that has no API or other easy way to digest it programmatically. The simplest way to accomplish this is to do something along the lines of a curl request and then RegEx for the necessary information. That had been the standard procedure for much of the history of web scraping, until the single-page application came along. With the advent of Ajax, JavaScript became the mainstay of the Web and prevented much of it from being scraped with traditional methods such as curl, which could only get static server-rendered content. This is where Electron truly comes into play, because it is a full-fledged browser that can be run programmatically.

Electron's advantages

Electron goes beyond the abilities of other programmatic browser solutions: PhantomJS, SlimerJS, or Selenium. This is because Electron is more than just a headless browser or automation framework. PhantomJS (and by extension SlimerJS) runs only headlessly, which prevents most debugging, and because it has its own JavaScript engine, it is not 100% compatible with Node and npm. Selenium, used for automating other browsers, can be a bit of a pain to set up, because it requires setting up a central server and separately installing the browser you would like to scrape with. On the other hand, Electron can do all of the above and more, which makes it much easier to set up, run, and debug as you start to create your scraper.

Applications of Electron

Considering the flexibility of Electron and all the features it provides, it is poised to be used in a variety of development, testing, and data collection situations. As far as development goes, Electron provides all the features that Chrome does with its DevTools, making it easy to inspect, run arbitrary JavaScript, and put in debugger statements. Testing can easily be done on websites that may have issues with bugs in some browsers but work perfectly in a different environment. These tests can easily be hooked up to a continuous integration server too. Data collection, the real reason we are all here, can be done on any type of website, including those with logins, using a familiar API with DevTools for inspection, and optionally run headlessly for automation. What more could one want?

How to scrape with Electron

Because of Electron's ability to integrate with Node, there are many different npm modules at your disposal. This takes most of the legwork out of automating the web scraping process and makes development a breeze. My personal favorite, nightmare, has never let me down. I have used it for tons of different projects, from collecting info and automating OAuth processes to grabbing console errors, and also taking screenshots for visual inspection of entire websites and services. Packt's blog actually gives a perfect situation to use a web scraper. On the blog page, you will find a small single-page application that dynamically loads different blog posts using JavaScript. Using Electron, let's collect the latest blog posts so that we can use that information elsewhere. Full disclosure: Packt's blog server-side renders the first page and could possibly be scraped using traditional methods.
This won't be the case for all single-page applications.

Prerequisites and assumptions

Following this guide, I assume that you already have Node (version 6+) and npm (version 3+) installed, and that you are using some form of a Unix shell. I also assume that you are already inside a directory to do some coding in.

Setup

First of all, you will need to install some npm modules, so create a package.json like this:

{
  "scripts": {
    "start": "DEBUG=nightmare babel-node ./main.js"
  },
  "devDependencies": {
    "babel-cli": "6.18.0",
    "babel-preset-modern-node": "3.2.0",
    "babel-preset-stage-0": "6.16.0"
  },
  "dependencies": {
    "nightmare": "2.8.1"
  },
  "babel": {
    "presets": [
      [
        "modern-node",
        { "version": "6.0" }
      ],
      "stage-0"
    ]
  }
}

Using babel, we can make use of some of the newer JavaScript utilities like async/await, making our code much more readable. nightmare has electron as a dependency, so there is no need to list that in our package.json (so easy). After this file is made, run npm install as usual.

Investigating the content to be scraped

First, we will need to identify the information we want to collect and how to get it. We go to Packt's blog, and we can see a page of different blog posts, but we are not sure of the order in which they appear. We only want the latest and greatest, so let's change the sort order to "Date of Publication (New to Older)". Now we can see that we have only the blog posts that we want. (Figure: changing the sort order.) Next, to make it easier to collect more blog posts, change the number of blog posts shown from 12 to 48. (Figure: changing the post view.) Last, but not least, are the blog posts themselves. (Figure: Packt blog posts.)

Next, we will figure out the selectors that we will be using with nightmare to change the page and collect data. Inspecting the sort order selector, we find something like the following HTML:

<div class="solr-pager-sort-selector">
  <span>Sort By: &nbsp;</span>
  <select class="solr-page-sort-selector-select selectBox" style="display: none;">
    <option value="" selected="">Relevance</option>
    <option value="ds_blog_release_date asc">Date of Publication (Older - Newer)</option>
    <option value="ds_blog_release_date desc">Date of Publication (Newer - Older)</option>
    <option value="sort_title asc">Title (A - Z)</option>
    <option value="sort_title desc">Title (Z - A)</option>
  </select>
  <a class="selectBox solr-page-sort-selector-select selectBox-dropdown" title="" tabindex="0" style="width: 243px; display: inline-block;">
    <span class="selectBox-label" style="width: 220.2px;">Relevance</span>
    <span class="selectBox-arrow">▼</span>
  </a>
</div>

Checking out the "View" options, we see the following HTML:

<div class="solr-pager-rows-selector">
  <span>View: &nbsp;</span>
  <strong>12</strong> &nbsp;
  <a class="solr-page-rows-selector-option" data-rows="24">24</a> &nbsp;
  <a class="solr-page-rows-selector-option" data-rows="48">48</a> &nbsp;
</div>

Finally, the info we need from the post should look similar to this:

<div class="packt-article-line-view float-left">
  <div class="packt-article-line-view-image">
    <a href="/books/content/mapt-v030-release-notes">
      <noscript>
        &lt;img src="//d1ldz4te4covpm.cloudfront.net/sites/default/files/imagecache/ppv4_article_thumb/Mapt FB PPC3_0.png" alt="" title="" class="bookimage imagecache imagecache-ppv4_article_thumb"/&gt;
      </noscript>
      <img src="//d1ldz4te4covpm.cloudfront.net/sites/default/files/imagecache/ppv4_article_thumb/Mapt FB PPC3_0.png" alt="" title="" data-original="//d1ldz4te4covpm.cloudfront.net/sites/default/files/imagecache/ppv4_article_thumb/Mapt FB PPC3_0.png" class="bookimage imagecache imagecache-ppv4_article_thumb" style="opacity: 1;">
      <div class="packt-article-line-view-bg"></div>
      <div class="packt-article-line-view-title">Mapt v.0.3.0 Release Notes</div>
    </a>
  </div>
  <a href="/books/content/mapt-v030-release-notes">
    <div class="packt-article-line-view-text ellipsis">
      <div>
        <p>New features and fixes for the Mapt platform</p>
      </div>
    </div>
  </a>
</div>

Great! Now we have all the information necessary to start collecting data from the page!

Creating the scraper

First things first, create a main.js file in the same directory as the package.json. Now let's put in some initial code to get the ball rolling on the first step:

import Nightmare from 'nightmare';

const URL_BLOG = 'https://www.packtpub.com/books/content/blogs';
const SELECTOR_SORT = '.selectBox.solr-page-sort-selector-select.selectBox-dropdown';

// Viewport must have a width of at least 1040 for the desktop version of Packt's blog
const nightmare = new Nightmare({ show: true }).viewport(1041, 800);

(async () => {
  await nightmare
    .goto(URL_BLOG)
    .wait(SELECTOR_SORT) // Always good to wait before performing an action on an element
    .click(SELECTOR_SORT);
})();

If you try to run this code, you will notice a window pop up with Packt's blog, then resize the window, and then nothing. This is because the sort selector only listens for mousedown events, so we will need to modify our code a little, by changing click to mousedown:

await nightmare
  .goto(URL_BLOG)
  .wait(SELECTOR_SORT) // Always good to wait before performing an action on an element
  .mousedown(SELECTOR_SORT);

Running the code again, after the mousedown, we get a small HTML dropdown, which we can inspect and see the following:

<ul class="selectBox-dropdown-menu selectBox-options solr-page-sort-selector-select-selectBox-dropdown-menu selectBox-options-bottom" style="width: 274px; top: 354.109px; left: 746.188px; display: block;">
  <li class="selectBox-selected"><a rel="">Relevance</a></li>
  <li class=""><a rel="ds_blog_release_date asc">Date of Publication (Older - Newer)</a></li>
  <li class=""><a rel="ds_blog_release_date desc">Date of Publication (Newer - Older)</a></li>
  <li class=""><a rel="sort_title asc">Title (A - Z)</a></li>
  <li class=""><a rel="sort_title desc">Title (Z - A)</a></li>
</ul>

So, to select the option to sort the blog posts by date, we will need to alter our code a little further (again, it does not work with click, so use mousedown plus mouseup):

import Nightmare from 'nightmare';

const URL_BLOG = 'https://www.packtpub.com/books/content/blogs';
const SELECTOR_SORT = '.selectBox.solr-page-sort-selector-select.selectBox-dropdown';
const SELECTOR_SORT_OPTION = '.solr-page-sort-selector-select-selectBox-dropdown-menu > li > a[rel="ds_blog_release_date desc"]';

// Viewport must have a width of at least 1040 for the desktop version of Packt's blog
const nightmare = new Nightmare({ show: true }).viewport(1041, 800);

(async () => {
  await nightmare
    .goto(URL_BLOG)
    .wait(SELECTOR_SORT) // Always good to wait before performing an action on an element
    .mousedown(SELECTOR_SORT)
    .wait(SELECTOR_SORT_OPTION)
    .mousedown(SELECTOR_SORT_OPTION) // Drop down menu doesn't listen for `click` events like normal...
    .mouseup(SELECTOR_SORT_OPTION);
})();

Awesome! We can clearly see the page reload, with posts from newest to oldest.
After we have sorted everything, we need to change the number of blog posts shown (finally, we can use click; lol!):

import Nightmare from 'nightmare';

const URL_BLOG = 'https://www.packtpub.com/books/content/blogs';
const SELECTOR_SORT = '.selectBox.solr-page-sort-selector-select.selectBox-dropdown';
const SELECTOR_SORT_OPTION = '.solr-page-sort-selector-select-selectBox-dropdown-menu > li > a[rel="ds_blog_release_date desc"]';
const SELECTOR_VIEW = '.solr-page-rows-selector-option[data-rows="48"]';

// Viewport must have a width of at least 1040 for the desktop version of Packt's blog
const nightmare = new Nightmare({ show: true }).viewport(1041, 800);

(async () => {
  await nightmare
    .goto(URL_BLOG)
    .wait(SELECTOR_SORT) // Always good to wait before performing an action on an element
    .mousedown(SELECTOR_SORT)
    .wait(SELECTOR_SORT_OPTION)
    .mousedown(SELECTOR_SORT_OPTION) // Drop down menu doesn't listen for `click` events like normal...
    .mouseup(SELECTOR_SORT_OPTION)
    .wait(SELECTOR_VIEW)
    .click(SELECTOR_VIEW);
})();

Collecting the data is next; all we need to do is evaluate parts of the HTML and return the formatted information. The way the evaluate function works is by stringifying the function provided to it and injecting it into Electron's event loop. Because the function gets stringified, only a function that does not rely on outside variables or functions will execute properly. The value returned by this function must also be able to be stringified, so only JSON, strings, numbers, and booleans are applicable.

import Nightmare from 'nightmare';

const URL_BLOG = 'https://www.packtpub.com/books/content/blogs';
const SELECTOR_SORT = '.selectBox.solr-page-sort-selector-select.selectBox-dropdown';
const SELECTOR_SORT_OPTION = '.solr-page-sort-selector-select-selectBox-dropdown-menu > li > a[rel="ds_blog_release_date desc"]';
const SELECTOR_VIEW = '.solr-page-rows-selector-option[data-rows="48"]';

// Viewport must have a width of at least 1040 for the desktop version of Packt's blog
const nightmare = new Nightmare({ show: true }).viewport(1041, 800);

(async () => {
  await nightmare
    .goto(URL_BLOG)
    .wait(SELECTOR_SORT) // Always good to wait before performing an action on an element
    .mousedown(SELECTOR_SORT)
    .wait(SELECTOR_SORT_OPTION)
    .mousedown(SELECTOR_SORT_OPTION) // Drop down menu doesn't listen for `click` events like normal...
    .mouseup(SELECTOR_SORT_OPTION)
    .wait(SELECTOR_VIEW)
    .click(SELECTOR_VIEW)
    .evaluate(() => {
      let posts = [];
      document.querySelectorAll('.packt-article-line-view').forEach(post => {
        posts = posts.concat({
          image: post.querySelector('img').src,
          title: post.querySelector('.packt-article-line-view-title').textContent.trim(),
          link: post.querySelector('a').href,
          description: post.querySelector('p').textContent.trim(),
        });
      });
      return posts;
    })
    .end()
    .then(result => console.log(result));
})();

Once you run your script again, you should get a very long array of objects containing information about each post. There you have it! You have successfully scraped a page using Electron! There are plenty more possibilities and tricks with this, but hopefully this small example gives you a good idea of what can be done.

About the author

Dylan Frankland is a frontend engineer at Narvar. He is an agile web developer, with over 4 years of experience developing and designing for start-ups and medium-sized businesses to create functional, fast, and beautiful web experiences.
Creating a reference generator for a job portal using Breadth First Search (BFS) algorithm

Savia Lobo
05 Apr 2018
11 min read
In this tutorial, we will create a reference generator for a job portal with the help of the Breadth First Search (BFS) algorithm. For instance, say we have a few users who are friends with each other; we will create a node for each of the users and associate each of the nodes with data, such as the user's name and the company they work in. Once we create all the nodes, we will join them based on some predefined relationships between the nodes. Then, we will use these predefined relationships to determine whom a user would have to talk to in order to get referred for a job interview at a company of their choice. For example, A, who works at company X, and B, who works at company Y, are friends; B and C, who works at company Z, are friends. So, if A wants to get referred to company Z, then A talks to B, who can introduce them to C for a referral to company Z.

In most production-level apps, you will not be creating graphs in such a fashion. You can simply use a graph database, which provides a lot of such features out of the box.

Returning to our example, in more technical terms, we have an undirected graph (think of users as nodes and friendships as edges between them), and we want to determine the shortest path from one node to another. To do what we have described so far, we will be using a technique known as Breadth First Search (BFS). BFS is a graph traversal mechanism in which the neighboring nodes are examined or evaluated first, before moving on to the next level. This helps to ensure that the number of links found in the resulting chain is always the minimum, so we always get the shortest possible path from node A to node B.

Although there are other algorithms, such as Dijkstra's, that achieve similar results, we will go with BFS because Dijkstra's is a more complex algorithm that is well suited to cases where each edge has an associated cost. For example, in our case, we would go with Dijkstra's if our users' friendships had weights associated with them, such as acquaintance, friend, and close friend, which would let us associate weights with each of those paths. A good use case for Dijkstra's would be something like a Maps application, which gives you directions from point A to B based on the traffic (that is, the weight or cost associated with each edge) in between.

Creating a bidirectional graph

We can start with the logic for our graph by creating a new file under utils/graph.js, which will hold the edges and then provide a simple shortestPath method to access the graph and apply the BFS algorithm to the graph that is generated, as shown in the following code:

var _ = require('lodash');

class Graph {
  constructor(users) {
    // initialize edges
    this.edges = {};
    // save users for later access
    this.users = users;
    // add users and edges of each
    _.forEach(users, (user) => {
      this.edges[user.id] = user.friends;
    });
  }
}

module.exports = Graph;

Once we add the edges to our graph, it has nodes (user IDs), and edges are defined as the relationship between each user ID and each friend in the friends array, which is available for each user. Forming the graph was an easy task, thanks to the way our data is structured.
In our example dataset, each user has a friends list, as shown in the following code:

[
  { id: 1, name: 'Adam', company: 'Facebook', friends: [2, 3, 4, 5, 7] },
  { id: 2, name: 'John', company: 'Google', friends: [1, 6, 8] },
  { id: 3, name: 'Bill', company: 'Twitter', friends: [1, 4, 5, 8] },
  { id: 4, name: 'Jose', company: 'Apple', friends: [1, 3, 6, 8] },
  { id: 5, name: 'Jack', company: 'Samsung', friends: [1, 3, 7] },
  { id: 6, name: 'Rita', company: 'Toyota', friends: [2, 4, 7, 8] },
  { id: 7, name: 'Smith', company: 'Matlab', friends: [1, 5, 6, 8] },
  { id: 8, name: 'Jane', company: 'Ford', friends: [2, 3, 4, 6, 7] }
]

As you can note in the preceding code, we did not really have to establish a bidirectional edge exclusively here, because if user 1 is a friend of user 2, then user 2 is also a friend of user 1.

Generating pseudocode for the shortest path generation

Before its implementation, let's quickly jot down what we are about to do, so that the actual implementation becomes a lot easier:

INITIALIZE tail to 0 for subsequent iterations
MARK source node as visited
WHILE result not found
  GET neighbors of latest visited node (extracted using tail)
  FOR each of the nodes
    IF node already visited
      RETURN
    MARK node as visited
    IF node is our expected result
      INITIALIZE result with current neighbor node
      WHILE not source node
        BACKTRACK steps by popping users from previously visited path until the source user
      ADD source user to the result
      CREATE and format result variable
    IF result found return control
    NO result found, add user to previously visited path
    ADD friend to queue for BFS in next iteration
  INCREMENT tail for next loop
RETURN NO_RESULT

Implementing the shortest path generation

Let's now create our customized BFS algorithm to parse the graph and generate the shortest possible path for our user to get referred to company A:

var _ = require('lodash');

class Graph {
  constructor(users) {
    // initialize edges
    this.edges = {};
    // save users for later access
    this.users = users;
    // add users and edges of each
    _.forEach(users, (user) => {
      this.edges[user.id] = user.friends;
    });
  }

  shortestPath(sourceUser, targetCompany) {
    // final shortestPath
    var shortestPath;
    // for iterating along the breadth
    var tail = 0;
    // queue of users being visited
    var queue = [ sourceUser ];
    // mark visited users
    var visitedNodes = [];
    // previous path to backtrack steps when shortestPath is found
    var prevPath = {};

    // request is same as response
    if (_.isEqual(sourceUser.company, targetCompany)) {
      return;
    }

    // mark source user as visited so
    // next time we skip the processing
    visitedNodes.push(sourceUser.id);

    // loop queue until match is found
    // OR until the end of queue i.e. no match
    while (!shortestPath && tail < queue.length) {
      // take user breadth first
      var user = queue[tail];
      // take nodes forming edges with user
      var friendsIds = this.edges[user.id];
      // loop over each node
      _.forEach(friendsIds, (friendId) => {
        // result found in previous iteration, so we can stop
        if (shortestPath) return;
        // get all details of node
        var friend = _.find(this.users, ['id', friendId]);
        // if visited already,
        // nothing to recheck so return
        if (_.includes(visitedNodes, friendId)) {
          return;
        }
        // mark as visited
        visitedNodes.push(friendId);
        // if company matched
        if (_.isEqual(friend.company, targetCompany)) {
          // create result path with the matched node
          var path = [ friend ];
          // keep backtracking until source user and add to path
          while (user.id !== sourceUser.id) {
            // add user to shortest path
            path.unshift(user);
            // prepare for next iteration
            user = prevPath[user.id];
          }
          // add source user to the path
          path.unshift(user);
          // format and return shortestPath
          shortestPath = _.map(path, 'name').join(' -> ');
        }
        // break loop if shortestPath found
        if (shortestPath) return;
        // no match found at current user,
        // add it to previous path to help backtracking later
        prevPath[friend.id] = user;
        // add to queue in the order of visit
        // i.e. breadth wise for next iteration
        queue.push(friend);
      });
      // increment counter
      tail++;
    }

    return shortestPath || `No path between ${sourceUser.name} & ${targetCompany}`;
  }
}

module.exports = Graph;

The most important part of the code is when the match is found, as shown in the following block from the preceding code:

// if company matched
if (_.isEqual(friend.company, targetCompany)) {
  // create result path with the matched node
  var path = [ friend ];
  // keep backtracking until source user and add to path
  while (user.id !== sourceUser.id) {
    // add user to shortest path
    path.unshift(user);
    // prepare for next iteration
    user = prevPath[user.id];
  }
  // add source user to the path
  path.unshift(user);
  // format and return shortestPath
  shortestPath = _.map(path, 'name').join(' -> ');
}

Here, we are employing a technique called backtracking, which helps us retrace our steps when the result is found. The idea here is that we add the current state of the iteration to a map whenever the result is not found: the key is the node currently being visited, and the value is the node from which we are visiting it. So, for example, if we visited node 1 from node 3, then the map would contain { 1: 3 } until we visit node 1 from some other node, and when that happens, our map will update to point to the new node from which we got to node 1, such as { 1: newNode }. Once we set up these previous paths, we can easily trace our steps back by looking at this map. By adding some log statements (available only in the GitHub code to avoid confusion), we can easily take a look at the long but simple flow of the data.
Let us take an example from the data set that we defined earlier: when Bill tries to look for friends who can refer him to Toyota, we see the following log statements:

starting the shortest path determination
added 3 to the queue
marked 3 as visited
shortest path not found, moving on to next node in queue: 3
extracting neighbor nodes of node 3 (1,4,5,8)
accessing neighbor 1
mark 1 as visited
result not found, mark our path from 3 to 1
result not found, add 1 to queue for next iteration
current queue content : 3,1
accessing neighbor 4
mark 4 as visited
result not found, mark our path from 3 to 4
result not found, add 4 to queue for next iteration
current queue content : 3,1,4
accessing neighbor 5
mark 5 as visited
result not found, mark our path from 3 to 5
result not found, add 5 to queue for next iteration
current queue content : 3,1,4,5
accessing neighbor 8
mark 8 as visited
result not found, mark our path from 3 to 8
result not found, add 8 to queue for next iteration
current queue content : 3,1,4,5,8
increment tail to 1
shortest path not found, moving on to next node in queue: 1
extracting neighbor nodes of node 1 (2,3,4,5,7)
accessing neighbor 2
mark 2 as visited
result not found, mark our path from 1 to 2
result not found, add 2 to queue for next iteration
current queue content : 3,1,4,5,8,2
accessing neighbor 3
neighbor 3 already visited, return control to top
accessing neighbor 4
neighbor 4 already visited, return control to top
accessing neighbor 5
neighbor 5 already visited, return control to top
accessing neighbor 7
mark 7 as visited
result not found, mark our path from 1 to 7
result not found, add 7 to queue for next iteration
current queue content : 3,1,4,5,8,2,7
increment tail to 2
shortest path not found, moving on to next node in queue: 4
extracting neighbor nodes of node 4 (1,3,6,8)
accessing neighbor 1
neighbor 1 already visited, return control to top
accessing neighbor 3
neighbor 3 already visited, return control to top
accessing neighbor 6
mark 6 as visited
result found at 6, add it to result path ([6])
backtracking steps to 3
we got to 6 from 4
update path accordingly: ([4,6])
add source user 3 to result
form result [3,4,6]
return result
increment tail to 3
return result Bill -> Jose -> Rita

What we basically have here is an iterative process that uses BFS to traverse the graph and backtracking to build up the result. This forms the core of our functionality.

Creating a web server

We can now add a route to access this graph and its corresponding shortestPath method.
Let's first create the route under routes/references and add it as middleware to the web server:

var express = require('express');
var app = express();
var bodyParser = require('body-parser');

// register endpoints
var references = require('./routes/references');

// middleware to parse the body of input requests
app.use(bodyParser.json());

// route middleware
app.use('/references', references);

// start server
app.listen(3000, function () {
  console.log('Application listening on port 3000!');
});

Then, create the route as shown in the following code:

var express = require('express');
var router = express.Router();
var Graph = require('../utils/graph');
var _ = require('lodash');
var userGraph;

// sample set of users with friends
// same as the list shown earlier
var users = [...];

// middleware to create the users graph
router.use(function(req) {
  // form graph
  userGraph = new Graph(users);
  // continue to next step
  req.next();
});

// create the route for generating the reference path
// this can also be a GET request with params, based
// on developer preference
router.route('/')
  .post(function(req, res) {
    // take user ID
    const userId = req.body.userId;
    // target company name
    const companyName = req.body.companyName;
    // extract current user info
    const user = _.find(users, ['id', userId]);
    // get shortest path
    const path = userGraph.shortestPath(user, companyName);
    // return
    res.send(path);
  });

module.exports = router;

Running the reference generator

To test this, simply start the web server by running the npm start command from the root of the project, as shown earlier. Once the server is up and running, you can use any tool you wish to POST a request to your web server, and you will get the response back as expected. This can, of course, be changed to return all the user objects instead of just the names; that could be a fun extension of the example for you to try on your own.

We learned to create a reference generator for a job portal using the Breadth First Search (BFS) algorithm in JavaScript. If you have found this post interesting, do check out the book Hands-On Data Structures and Algorithms with JavaScript to create and employ various data structures in a way that is demanded by your project or use case.
4 key findings from The State of JavaScript 2018 developer survey

Prasad Ramesh
20 Nov 2018
4 min read
Three JavaScript developers surveyed over 20,000 JavaScript developers to find out what's happening within the language and its huge ecosystem. From usage to satisfaction to learning habits, the State of JavaScript 2018 report offers valuable insight into a community that is still going strong, despite a landscape that continues to change. You can check out the results of the survey in detail here, but keep reading to find out 4 things we found interesting about it.

JavaScript developers love ES6 and TypeScript

ES6 and TypeScript were the best received: 86.3% and 46.7% of developers, respectively, have used these languages and would use them again. ClojureScript, Elm, and Flow, however, don't seem to pique many developers' interest these days (unsurprisingly).

React rules the front-end frameworks - Angular's popularity may be dwindling

There has been a big battle between a few frameworks on the front-end side of web development, namely React, Vue, and Angular. The State of JavaScript 2018 survey suggests that React is winning out, with Vue in second position: 64.8% and 28.8% of developers said that they would use React and Vue.js, respectively, again. And Vue is growing in popularity: 46.6% of respondents expressed an interest in learning it. The news wasn't so great for Angular, however, with 33.8% of respondents saying that they wouldn't use Angular again. Ember and Polymer were less well received, as more than 50% of the responses for both indicated no interest in learning them. Preact and Polymer, meanwhile, are perhaps still a little new on the scene: 28.1% and 18.5% of respondents, respectively, had never even heard of these frameworks.

Vue.js 3.0 is ditching JavaScript for TypeScript. Learn more here.

Redux is the most used in the data layer - but JavaScript developers want to learn GraphQL

When it comes to data, Redux is the most popular library, with 47.2% of developers saying that they would use it again. GraphQL is second, with 20.4% of respondents vouching for it. But Redux shouldn't be complacent: 62.5% of developers also want to learn GraphQL. It looks like the Redux and GraphQL debate is going to continue well into 2019. What the consensus will be in 12 months' time is anyone's guess.

Why do React developers love Redux? Find out here.

Express.js popularity confirms Node.js as JavaScript's quiet hero

It was observed that there haven't been any major breakthroughs in this area in recent years. But that is, perhaps, a good thing when you consider the frantic pace of change in other areas of JavaScript. It probably also has a lot to do with the dominance of Node.js in this area. Express, a Node.js framework, is by far the most popular, with 64.7% of developers taking the survey saying they would use it again. Sadly, it appears Meteor is languishing despite its meteoric hype just a few years ago: 49.4% of developers had heard of it, but said they had no interest in learning it.

In conclusion: The landscape is becoming more clearly defined, but the JavaScript developer role is changing

A few years ago, the JavaScript ecosystem was chaotic and almost incoherent. Every week seemed to bring a new framework demanding your attention.
It looks, as we move towards the end of the decade, that things are a lot different now - React has established itself at the forefront of the front end, while TypeScript appears to have embedded itself within the ecosystem too. With GraphQL also generating interest, and competing with Redux, we're seeing a clear shift in what JavaScript developers are doing, and what they're being asked to do. As the stack expands, managing data sources and building for speed and scalability is now a problem right at the heart of JavaScript development, not just on its fringes.
Web Typography

Packt
13 Jul 2016
14 min read
This article by Dario Calonaci, author of Practical Responsive Typography, teaches you about typography: its fascinating mysteries, its sensual shapes, and everything else you wanted to know about it; this article is about to reveal it all for you! Every letter, every curve, and every shape in the written form conveys feelings, so it's important to learn everything about it if you want to be a better designer. You also need to know how readable your text is (and therefore how to set it up following some natural constraints our eyes and minds have built in), how white space influences your message, and how every form should be taken into consideration in the writing of a text; this article will tell you exactly that, plus a little more! You will also learn how to approach all of the above in today's number one medium, the World Wide Web. Since 95 percent of the Web is made of typography, according to Oliver Reichenstein, it's only logical that if you want to approach the Web, you surely need to understand typography better. Through this article, you'll learn all the basics of typography and will be introduced to its core features, such as:

- Anatomy
- Line height
- Families
- Kerning

(For more resources related to this topic, see here.)

Note that typography, the art of drawing with words, is really ancient, dating as much as 3,200 years prior to the mythological appearance of Christ, and the very first book on the matter is the splendid Manuale Tipografico by Giambattista Bodoni, which he self-published in 1818. Taking into consideration all the old data and the new knowledge, everything started back then, and every rule that was born in print is still valid today, even for the different medium that the Web is.

Typefaces classification

The most commonly used type classification is based on technical style, and as such it's the one we are going to analyze and use. The families are as follows:

Serifs

Serifs are referred to as such because of the small details that extend from the ending shapes of the characters; the origin of the word itself is obscure, and various explanations have been given, but none has been accepted as definitive. Their origin can be traced back to the Latin alphabets of Roman times, probably stemming from the flares of the brush marks in corners, which were later chiseled in stone by the carvers. They generally give better readability in print than on a screen, probably because of the better definition and evolution of the former over hundreds of years, while the latter technology is, on an evolutionary path, a newborn. With the latest technologies and high-definition monitors that can rival print definition, multiple scientific studies have been found inconclusive, showing that there is no discernible difference in readability between sans and serifs on the screen, and as of today they are both used on the Web. Within this general definition, there are multiple sub-families, such as Old Style or Humanist.

Old Style or Humanist

The oldest ones, dating as far back as the mid-1400s, are recognized by the diagonal guide on which the characters are built; this is clearly visible, for example, on the e and o of Adobe Jenson.

Transitional serifs

They are neither antique nor modern, they date back to the 1700s, and they are generally numerous. They tend to abandon some of the diagonal stress, but not all of it, especially keeping it in the o. Georgia and Baskerville are some well-known examples.
Modern Serifs

Modern serifs rely on a strong contrast between thick and thin strokes, vertical rather than diagonal stress, and straighter serifs. They appeared in the late 1700s. Bodoni and Didot are certainly the most famous typefaces in this family.

Slab Serifs

Slab serifs have little to no contrast between strokes, thick serifs, and sometimes fixed widths; their underlying structure more closely resembles that of a sans serif. American Typewriter is the most famous typeface in this family.

Sans Serifs

They are named so due to the loss of the decorative serifs; in French, "sans" stands for "without". The sans serif is a more recent invention, born in the late 18th century. They are divided into the following four sub-families:

Grotesque Sans

The earliest of the bunch; its appearance is similar to the serif, with contrasted strokes but without serifs and with angled terminals. Franklin Gothic is one of the most famous typefaces in this family.

Neo-Grotesque Sans

Plain looking, with little to no contrast, small apertures, and horizontal terminals. They are among the most common font styles, ranging from Arial and Helvetica to Univers.

Humanist Sans

They have a friendly tone due to their calligraphic style, with a mixture of character widths and, most of the time, contrasted strokes. Gill Sans is the flag-carrier.

Geometric Sans

Based on rigorous geometric shapes, they are more modern and are used less for body copy: they have a general simplicity, but their characters are harder to read. Futura is certainly the most famous geometric font.

Script typefaces

Scripts are based upon handwriting, with a cursive aspect and connected letterforms, and are usually classified into two sub-families: formal scripts and casual scripts. Two further categories, monospaced typefaces and display typefaces, are covered right after them.

Formal script

Formal scripts are reminiscent of the handwritten letterforms common in the 17th and 18th centuries; sometimes they are also based on the handwriting of famous people. They are commonly used for elevated, highly elegant designs and are certainly unusable for long body copy. Kunstler Script is a relatively recent formal script.

Casual script

This is less precise and tends to resemble a more modern, faster handwriting. They are as recent as the mid-twentieth century. Mistral is certainly the most famous casual script.

Monospaced typefaces

Almost all the aforementioned families are proportional in style (each character takes up space proportional to its width). In a monospaced typeface every character has the same width; narrower ones, such as i, simply gain white space around them, sometimes resulting in odd appearances. Due to their spacing, they aren't advised for body copy, since monospacing can bring unwanted visual imbalance to the text. Courier is certainly the best-known monospaced typeface.

Display typefaces

This is the broadest category. Display typefaces are aimed at short pieces of copy meant to draw attention, and they rarely follow rules, borrowing from every one of the above families and expressing every mood. Recently, even blackletters (the very first fonts designed for the very first physical printing presses) have been placed under this category. Danube and Val are just two of the multitude that are out there.

Expressing different moods

In conjunction with the division into typographic families, it is also really important, for every project in print and on the Web, to know what typefaces express and why.
It takes years of experience to understand those characteristics and the method to use them correctly; here we address only a very basic distinction to help you start. Remember that in typography and type design every curve conveys a different mood, so be patient while studying and designing.

Serifs vs Sans

Serifs, through their decorations and their widths, and across every one of their sub-families, convey antique, traditional, and serious feelings, even when more modern cuts are used; they certainly convey a more formal appearance. Sans serifs, on the other hand, are aimed at a more modern, up-to-date world, conveying technological advancement and rationality, and, usually but not always, less of a human feeling. They're more mechanical and colder than a serif, unless the author deliberately designed them to be friendlier than the standard ones.

Scripts vs scripts

As said, they are of two types, and as the names suggest, the division is straightforward. Vladimir is elegant, refined, and upper-class-looking, and expresses feelings such as respect. Arizonia, on the other hand, is not completely informal but is still a schizophrenic mess of strokes and an inconclusive expression of feeling; I'm not sure whether to feel amused or offended by its exaggerated confidentiality.

Display typefaces

Since they differ in aspect from one another, and since no general rule surrounds and defines the display family, they can express the whole range of emotions: from apathy to depression, from complete childish involvement and joy to a suited, scary, serious business feeling (the latter usually the expression of some monospaced typefaces). As with every other typeface, and more specifically here, every change in weight and style brings a new sentiment to the table: use it in bold and your content will be strong and fierce; change it to a lighter italic and it will look like it's moving, ready to exit the page. As such, display typefaces take years to master, and we advise not using them in your first web work unless you are completely sure of what you are doing. Every font communicates differently, on a conscious as well as a subconscious level, and even within the same typeface it all comes down to what we are accustomed to. As with color, what a script expresses in European culture can change drastically if the same face is used for advertising in an Asian market. Always do your research first.

Combining typefaces

Combining typefaces is a vital aspect of your projects, but it is a tool that is hard to master. Generally, it is said that you should use no more than two fonts in your design. It is a good rule, but let me explain it, or better, expand on it. While working with an informational text block, similar to the one you are reading now, stick to it: you will express enough contrast and interest while staying balanced, and the reader will not get distracted; they will follow the flow and understand the hierarchy of what they are reading. However, as a designer you are not always typesetting a pure text block: you could be working with words on packaging or on the Web. If you know enough about typography and your eyes are well trained (usually after years of visual research and of designing with attention), you can break the rules. You get energy only when mixing contrasting fonts, so why not add a third one to bring in a better balance between the two? The short sketch below shows the most common pairing in practice, before we get to the rules.
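This is a minimal sketch in plain JavaScript of a serif heading paired with a sans serif body, the timeless clash discussed next; the selectors and font stacks are illustrative assumptions, so substitute your project's typefaces.

// A minimal sketch of the classic serif/sans pairing with plain DOM APIs.
// The selectors and font stacks below are illustrative assumptions.
document.querySelectorAll('h1, h2, h3').forEach(heading => {
  heading.style.fontFamily = "Georgia, 'Times New Roman', serif";
});
document.querySelectorAll('p, li').forEach(body => {
  body.style.fontFamily = 'Helvetica, Arial, sans-serif';
});

In production you would declare these stacks once in a stylesheet rather than from script; the point is simply that the contrast between the two classifications carries the hierarchy.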
As a rule, you can combine fonts when:

They are not in the same classification. You mix fonts to add contrast and energy and to inject interest and readability into your document, which is why the clash between serif and sans has proven timeless. Working with two serifs or two sans together, by contrast, succeeds only after extensive trial and error, and you should choose two fonts that carry enough differences. You can usually combine different sub-families, for example a slab serif with a modern one, or a geometric sans with a grotesque.

If your goal is readability, find the same structure. A similar height and similar width work easily when choosing across two classifications; but if your goal is aesthetics, for small portions of text you can try completely different structures, such as a slab serif with a geometric sans. You will see that sometimes it does the job!

Go extreme! This requires more experience to balance out, but if you're working with display or script typefaces, it is almost impossible to find something similar without being boring or unreadable. Try mixing them with more simplistic typefaces if the starting point has a lot of decoration; you won't regret the trial!

Typography properties

Now that you know the families, you need to know the general rules that will make your text flow like a springtime breeze.

Kerning

Kerning is the adjustment of the space between two characters to achieve a visually balanced word through a visually even distribution of white space. The word originates from the Latin cardo, meaning hinge. When letters were made of metal on wooden blocks, parts of them were built to hang off the base, giving space for the next character to sit closer.

Tracking

Tracking, also called letter-spacing, is concerned with the entire word (not single characters, nor the whole text block); it changes the density and texture of a text and affects its readability. The word originates from the metal tracks along which the wooden blocks carrying the characters were moved horizontally. Tracking requires careful settings: add too much white space and the words won't read as single coherent blocks anymore; reduce the white space between the letters drastically and the letters themselves won't be readable. As a rule, you want your lines of text to contain 50 to 75 characters, including punctuation and spaces, to achieve better readability. Some will ask you to stop as soon as approximately 39 characters are reached, but I tend to differ.

Ligatures

Because of kerning, especially in serifs, two or three characters can clash together. Ligatures were born to avoid this; they are stylistic glyphs that combine two or three letters into one:

Standard ligatures are naturally and functionally the most common ones and are made between fi, fl, and other letters placed next to an f. They should be used, as they tend to make the text more legible.

Discretionary ligatures are not functional; they serve a purely decorative purpose. They are commonly found and designed between Th and st; as mentioned above, you should use them at your discretion.

Leading

Leading is the space between the baselines of your text, while line-height adds to that notion the height of ascenders and descenders. The name came to be because, in the days of metal type, strips of lead were used to add white space between two lines of text. There are many rules in typesetting (none of which has emerged as a clear winner), and everything changes according to the typeface you're using. Before we look at concrete values, the short sketch below shows how tracking, ligatures, and leading translate into browser settings.
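The sketch assumes a page with an element of class article-body; the selector and every value are illustrative assumptions, to be tuned per typeface.

// A minimal sketch applying the tracking, ligature, and leading guidance
// from this section with plain DOM APIs. All values are assumptions.
const copy = document.querySelector('.article-body');

copy.style.maxWidth = '65ch';        // keeps lines at roughly 50-75 characters
copy.style.letterSpacing = '0.01em'; // gentle tracking; too much breaks word shapes
copy.style.fontKerning = 'normal';   // let the font's built-in kerning pairs apply
copy.style.fontVariantLigatures = 'common-ligatures'; // standard fi/fl ligatures
copy.style.lineHeight = '1.5';       // 150% leading, inside the 120-180% range

Note the ch unit for line length: it is based on the width of the typeface's 0 glyph, so the 50-to-75-character rule follows the font itself.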
Mechanical print tends to add 2 points to the measure being used, while a basic rule for digital is to scale the line spacing to 120 percent of your type size, which is called single spacing. As a rule of thumb, scale between 120 and 180 percent and you are good to go (with the higher end used for typefaces with a large x-height). Just remember that the descenders should never touch the next line's ascenders, otherwise the eye will perceive the text as crumpled and you will have difficulty understanding where one line ends and the next starts.

Summary

The preceding text covers the basics of typography, which you should study and know in order to make the text in your assignments flow better. You now have a greater understanding of typography: what it is, what it's made of, what its characteristics are, what the brain searches for and processes in a text, the lengths it will go to in order to understand it, and the alignments, spacing, and other issues that revolve around this beautiful subject. The most important rule to remember is that text is used to express something. It may be informative reading, it may be the expression of a feeling, such as a poem, or it may be meant to make you feel something specific. Every text has a feeling, an inner tone of voice that can be expressed visually through typography; usually it is the text itself that dictates its feeling and helps you decide which mood to express and how. All the preceding rules, properties, and knowledge are means for you to express it, and there is almost as much variety available on the Web as in print, with properties for leading, kerning, tracking, and typographic hierarchy all built into your browser.

A capability model for microservices

Packt
17 Jun 2016
19 min read
In this article by Rajesh RV, the author of Spring Microservices, you will learn about the concepts of microservices. Rather than sticking to definitions, it is better to understand microservices by examining some common characteristics seen across many successful microservices implementations. Spring Boot is an ideal framework with which to implement microservices, and in this article we will examine how to do so with an example use case. Beyond the services themselves, we have to be aware of the challenges around microservices implementation, and this article will also cover some of the common ones. A successful microservices implementation has to have a set of common capabilities, so we will establish a microservices capability model that can be used as a technology-neutral framework to implement large-scale microservices.

What are microservices?

Microservices is an architecture style used by many organizations today as a game changer to achieve a high degree of agility, speed of delivery, and scale. Microservices give us a way to develop more physically separated modular applications. Microservices were not invented; many organizations, such as Netflix, Amazon, and eBay, successfully used the divide-and-conquer technique to functionally partition their monolithic applications into smaller atomic units, each performing a single function. These organizations solved a number of prevailing issues they had experienced with their monolithic applications. Following their success, many other organizations started adopting this as a common pattern to refactor their monolithic applications, and evangelists later termed the pattern microservices architecture. Microservices originated from the idea of Hexagonal Architecture, coined by Alistair Cockburn; Hexagonal Architecture is also known as the Ports and Adapters pattern.

Microservices is an architectural style, an approach to building IT systems as a set of business capabilities that are autonomous, self-contained, and loosely coupled.

The preceding diagram depicts a traditional N-tier application architecture with a presentation layer, a business layer, and a database layer. Modules A, B, and C represent three different business capabilities. The layers in the diagram represent a separation of architectural concerns: each layer holds the parts of all three business capabilities pertaining to that layer. The presentation layer has the web components of all three modules, the business layer has the business components of all three modules, and the database layer hosts the tables of all three modules. In most cases, layers are physically spreadable, whereas modules within a layer are hardwired.

Let's now examine a microservices-based architecture, as follows: as we can note in the diagram, the boundaries are inverted in the microservices architecture. Each vertical slice represents a microservice, and each microservice has its own presentation layer, business layer, and database layer. Microservices are aligned toward business capabilities; by doing so, changes to one microservice do not impact others. There is no standard communication or transport mechanism for microservices. In general, microservices communicate with each other using widely adopted lightweight protocols, such as HTTP and REST, or messaging protocols, such as JMS or AMQP. In specific cases, one might choose more optimized communication protocols, such as Thrift, ZeroMQ, Protocol Buffers, or Avro.
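To make the idea of lightweight communication concrete, here is a minimal consumer-side sketch of one service calling another over plain HTTP/JSON. The host name, path, and response shape are illustrative assumptions; the point is only that the consumer depends on the contract, never on the provider's internals.

// A hypothetical call between two services over HTTP/JSON.
// The URL and the response fields are assumptions for illustration;
// runs on Node.js 18+ or in any modern browser.
async function fetchCustomer(id) {
  const response = await fetch(`http://customer-profile.internal/api/customers/${id}`);
  if (!response.ok) {
    throw new Error(`Customer service responded with ${response.status}`);
  }
  return response.json();
}

fetchCustomer(42).then(customer => console.log(customer));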
As microservices are more aligned to business capabilities and have independently manageable lifecycles, they are the ideal choice for enterprises embarking on DevOps and cloud; DevOps and cloud are two other facets of microservices. Microservices are self-contained, independently deployable, and autonomous services that take full responsibility for a business capability and its execution. They bundle all dependencies, including library dependencies and execution environments such as web servers and containers or virtual machines that abstract physical resources. These self-contained services assume a single responsibility and are well enclosed within a bounded context.

Microservices – the honeycomb analogy

The honeycomb is an ideal analogy for the evolutionary microservices architecture. In the real world, bees build a honeycomb by aligning hexagonal wax cells. They start small, using different materials to build the cells; construction is based on what is available at the time of building. Repetitive cells form a pattern and result in a strong fabric structure. Each cell in the honeycomb is independent but also integrated with other cells, and by adding new cells, the honeycomb grows organically into a big, solid structure. The content inside each cell is abstracted and is not visible outside. Damage to one cell does not damage other cells, and bees can reconstruct cells without impacting the overall honeycomb.

Characteristics of microservices

The microservices definition discussed at the beginning of this article is arbitrary. Evangelists and practitioners have strong, but sometimes differing, opinions on microservices, and there is no single, concrete, universally accepted definition. However, all successful microservices implementations exhibit a number of common characteristics, some of which are explained as follows:

Since microservices are more or less a flavor of SOA, many of the service characteristics of SOA apply to microservices as well.

In the microservices world, services are first-class citizens. Microservices expose service endpoints as APIs and abstract all their realization details. The APIs can be synchronous or asynchronous; HTTP/REST is the popular choice.

As microservices are autonomous and abstract everything behind service APIs, it is possible to have different architectures for different microservices. The internal implementation logic, architecture, and technologies, including programming language, database, quality-of-service mechanisms, and so on, are completely hidden behind the service API.

Well-designed microservices are aligned to a single business capability, so they perform only one function. As a result, one of the common characteristics we see in most implementations is microservices with smaller footprints.

Most microservices implementations are automated to the maximum extent possible, from development to production.

Most large-scale microservices implementations have a supporting ecosystem in place. The ecosystem's capabilities include DevOps processes, centralized log management, service registries, API gateways, extensive monitoring, service routing and flow-control mechanisms, and so on.

Successful microservices implementations encapsulate logic and data within the service. This results in two unconventional situations: distributed data and logic, and decentralized governance.
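Because everything is hidden behind the service API, a consumer never needs to know what a service is written in. As a quick illustration of this polyglot freedom, here is a minimal sketch of a self-contained service endpoint in plain Node.js; the route, port, and payload are illustrative assumptions, and the Spring Boot example in the next section realizes the same idea in Java.

// A minimal, self-contained HTTP service endpoint in plain Node.js.
// The /status route, port 3000, and payload are illustrative assumptions;
// consumers see only the HTTP/JSON contract, never the implementation.
const http = require('http');

const server = http.createServer((req, res) => {
  if (req.method === 'GET' && req.url === '/status') {
    res.writeHead(200, { 'Content-Type': 'application/json' });
    res.end(JSON.stringify({ service: 'customer-profile', status: 'UP' }));
  } else {
    res.writeHead(404);
    res.end();
  }
});

server.listen(3000, () => console.log('Service listening on port 3000'));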
A microservice example

The Customer Profile microservice example explained here demonstrates the implementation of a microservice and the interaction between different microservices. In this example, two microservices, Customer Profile and Customer Notification, will be developed. As shown in the diagram, the Customer Profile microservice exposes methods to create, read, update, and delete a customer, as well as a registration service to register a customer. The registration process applies certain business logic, saves the customer profile, and sends a message to the Customer Notification microservice. The Customer Notification microservice accepts the message sent by the registration service and sends an e-mail message to the customer using an SMTP server. Asynchronous messaging is used to integrate Customer Profile with the Customer Notification service. The customer microservices' class domain model diagram is as shown here:

Implementing the Customer Profile microservice is not a big deal. The Spring Framework, together with Spring Boot, provides all the necessary capabilities to implement this microservice without much hassle. The key is CustomerController in the diagram, which exposes the REST endpoint for our microservice. It is also possible to use HATEOAS to explore the repository's REST services directly, using the @RepositoryRestResource annotation. The following code sample shows the Spring Boot main class, called Application, and the REST endpoint definition for the registration of a new customer:

@SpringBootApplication
public class Application {
  public static void main(String[] args) {
    SpringApplication.run(Application.class, args);
  }
}

@RestController
class CustomerController {
  // other code here

  @RequestMapping(path = "/register", method = RequestMethod.POST)
  Customer register(@RequestBody Customer customer) {
    return customerComponent.register(customer);
  }
}

CustomerController invokes a component class, CustomerComponent. The component class/bean handles all the business logic. CustomerRepository is a Spring Data JPA repository defined to handle the persistence of the Customer entity. The whole application will then be deployed as a Spring Boot application by building a standalone JAR rather than using the conventional WAR file. Spring Boot encapsulates the server runtime along with the fat JAR it produces; by default, it is an instance of the Tomcat server. CustomerComponent, in addition to calling the CustomerRepository class, sends a message to the RabbitMQ queue, where the CustomerNotification component is listening. This can be achieved easily in Spring using the RabbitMessagingTemplate class, as shown in the following Sender implementation:

@Component
class CustomerComponent {
  // other code here

  Customer register(Customer customer) {
    customerRepository.save(customer);
    sender.send(customer.getEmail());
    return customer;
  }
}

@Component
@Lazy
class Sender {
  RabbitMessagingTemplate template;

  @Autowired
  Sender(RabbitMessagingTemplate template) {
    this.template = template;
  }

  @Bean
  Queue queue() {
    return new Queue("CustomerQ", false);
  }

  public void send(String message) {
    template.convertAndSend("CustomerQ", message);
  }
}

The receiver on the other side consumes the message using a RabbitListener and sends out an e-mail using the JavaMailSender component.
Execute the following code:

@Component
class Receiver {

  @Autowired
  private JavaMailSender javaMailService;

  @Bean
  Queue queue() {
    return new Queue("CustomerQ", false);
  }

  // Listens on the same queue the Sender publishes to and mails the customer
  @RabbitListener(queues = "CustomerQ")
  public void processMessage(String email) {
    System.out.println(email);
    SimpleMailMessage mailMessage = new SimpleMailMessage();
    mailMessage.setTo(email);
    mailMessage.setSubject("Registration");
    mailMessage.setText("Successfully Registered");
    javaMailService.send(mailMessage);
  }
}

In this case, CustomerNotification is our second Spring Boot microservice; instead of a REST endpoint, it exposes only a message listener endpoint.

Microservices challenges

In the previous section, you learned about the right design decisions to be made and the trade-offs to be applied. In this section, we will review some of the challenges with microservices:

Data islands: Microservices abstract their own local transactional store, which is used for their own transactional purposes. The type of store and the data structure will be optimized for the services offered by the microservice. This can lead to data islands and, hence, challenges around aggregating data from different transactional stores to derive meaningful information.

Logging and monitoring: Log files are a good piece of information for analysis and debugging. As each microservice is deployed independently, they emit separate logs, maybe to a local disk, resulting in fragmented logs. When we scale services across multiple machines, each service instance produces a separate log file, which makes it extremely difficult to debug and understand the behavior of the services through log mining.

Dependency management: Dependency management is one of the key issues in large microservices deployments. How do we ensure the chattiness between services is manageable? How do we identify and reduce the impact of a change? How do we know whether all the dependent services are up and running? How will the service behave if one of the dependent services is not available?

Organization's culture: One of the biggest challenges in microservices implementation is the organization's culture. An organization following waterfall development or heavyweight release-management processes with infrequent release cycles is a challenge for microservices development. Insufficient automation is also a challenge for microservices deployments.

Governance challenges: Microservices impose decentralized governance, which is quite in contrast to traditional SOA governance. Organizations may find it hard to cope with this change, and it could negatively impact microservices development. How do we know who is consuming a service? How do we ensure service reuse? How do we define which services are available in the organization? How do we ensure that enterprise policies are enforced?

Operation overheads: Microservices deployments generally increase the number of deployable units and virtual machines (or containers). This adds significant management overhead and cost of operations. With a single application, a dedicated number of containers or virtual machines in an on-premises data center may not make much sense unless the business benefit is high. With many microservices, the number of Configurable Items (CIs) is too high, and the number of servers on which these CIs are deployed may also be unpredictable.
This makes it extremely difficult to manage data in a traditional Configuration Management Database (CMDB).

Testing microservices: Microservices also pose a challenge for the testability of services. In order to achieve full service functionality, one service may rely on another service, which in turn may rely on yet another, either synchronously or asynchronously. The issue is how to test an end-to-end service to evaluate its behavior when the dependent services may or may not be available at the time of testing.

Infrastructure provisioning: As briefly touched upon under operation overheads, manual deployment can severely challenge microservices rollouts. If a deployment has manual elements, the deployer or operational administrators must know the running topology, manually reroute traffic, and then deploy the applications one by one until all the services are upgraded. With many server instances running, this leads to significant operational overhead, and the chance of error is high in such a manual approach.

Beyond just services – the microservices capability model

Microservices are not as simple as the Customer Profile implementation we discussed earlier. This is especially true when deploying hundreds or thousands of services. In many cases, an improper microservices implementation can lead to a number of the challenges mentioned before. Any successful Internet-scale microservices deployment requires a number of additional surrounding capabilities. The following diagram depicts the microservices capability model, which is broadly classified into four areas, as follows:

Core capabilities, which are part of the microservices themselves
Supporting capabilities, which are software solutions supporting core microservice implementations
Infrastructure capabilities, which are infrastructure-level expectations for a successful microservices implementation
Governance capabilities, which are more about process, people, and reference information

Core capabilities

The core capabilities are explained here:

Service listeners (HTTP/messaging): If a microservice is enabled for HTTP-based service endpoints, the HTTP listener is embedded within the microservice itself, eliminating the need for any external application server; the listener is started at application startup. If the microservice is based on asynchronous communication, a message listener is started instead of an HTTP listener, and, optionally, other protocols can also be considered. There may be no listener at all if the microservice is a scheduled service. Spring Boot and Spring Cloud Stream provide this capability.

Storage capability: Microservices have storage mechanisms to store state or transactional data pertaining to the business capability; this is optional, depending on the capabilities implemented. The storage can be physical (an RDBMS such as MySQL, or a NoSQL store such as Hadoop, Cassandra, Neo4j, Elasticsearch, and so on), or it can be an in-memory store (a cache such as Ehcache, or a data grid such as Hazelcast, Infinispan, and so on).

Business capability definition: This is the core of a microservice, where the business logic is implemented. It can be implemented in any applicable language, such as Java, Scala, Clojure, or Erlang. All the business logic required to fulfil the function is embedded within the microservice itself.
Event sourcing: Microservices send out state changes to the external world without really worrying about the targeted consumers of these events. Events may be consumed by other microservices, by supporting services such as audit-by-replication, by external applications, and so on. This allows other microservices and applications to respond to state changes.

Service endpoints and communication protocols: These define the APIs for external consumers to consume. They can be synchronous or asynchronous. Synchronous endpoints could be based on REST/JSON or other protocols, such as Avro, Thrift, or Protocol Buffers. Asynchronous endpoints go through Spring Cloud Stream backed by RabbitMQ or another messaging server, or through other messaging-style implementations such as ZeroMQ.

The API gateway: The API gateway provides a level of indirection by either proxying service endpoints or composing multiple service endpoints. It is also useful for policy enforcement and may provide real-time load-balancing capabilities. Many API gateways are available in the market; Spring Cloud Zuul, Mashery, Apigee, and 3scale are some examples of API gateway providers.

User interfaces: Generally, user interfaces are also part of microservices, letting users interact with the business capabilities realized by the microservices. These can be implemented in any technology and are channel- and device-agnostic.

Infrastructure capabilities

Certain infrastructure capabilities are required for a successful deployment and for managing large-scale microservices. When deploying microservices at scale, the lack of proper infrastructure capabilities can be challenging and can lead to failures:

Cloud: Microservices implementation is difficult in a traditional data center environment with long lead times to provision infrastructure. Even a large amount of infrastructure dedicated per microservice may not be very cost effective, and managing it internally in a data center increases the cost of ownership and of operations. A cloud-like infrastructure is better for microservices deployment.

Containers or virtual machines: Managing large numbers of physical machines is not cost effective and is also hard; with physical machines, it is also hard to handle automatic fault tolerance. Virtualization is adopted by many organizations because of its ability to provide optimal use of physical resources along with resource isolation; it also reduces the overhead of managing large physical infrastructure components. Containers are the next generation of virtual machines. VMware, Citrix, and others provide virtual machine technologies; Docker, Drawbridge, Rocket, and LXD are some container technologies.

Cluster control and provisioning: Once we have a large number of containers or virtual machines, it is hard to manage and maintain them automatically. Cluster control tools provide a uniform operating environment on top of the containers and share the available capacity across multiple services. Apache Mesos and Kubernetes are examples of cluster control systems.

Application lifecycle management: Application lifecycle management tools help invoke applications when a new container is launched, or kill the application when the container shuts down. Application lifecycle management allows scripting of application deployments and releases; it automatically detects failure scenarios and responds to them, thereby ensuring the availability of the application.
This works in conjunction with the cluster control software. Marathon partially addresses this capability.

Supporting capabilities

Supporting capabilities are not directly linked to microservices, but they are essential for large-scale microservices development:

Software-defined load balancer: The load balancer should be smart enough to understand changes in the deployment topology and respond accordingly. This moves away from the traditional approach of configuring static IP addresses, domain aliases, or cluster addresses in the load balancer. When new servers are added to the environment, the load balancer should automatically detect this and include them in the logical cluster, avoiding any manual interaction. Similarly, if a service instance becomes unavailable, it should be taken out of the load balancer. A combination of Ribbon, Eureka, and Zuul provides this capability in Spring Cloud Netflix.

Central log management: As explored earlier in this article, a capability is required to centralize all the logs emitted by service instances, with correlation IDs. This helps in debugging, identifying performance bottlenecks, and predictive analysis; the results can feed back into the lifecycle manager to take corrective actions.

Service registry: A service registry provides a runtime environment for services to automatically publish their availability at runtime. A registry is a good source of information for understanding the service topology at any point. Eureka from Spring Cloud, ZooKeeper, and etcd are some of the service registry tools available.

Security service: A distributed microservices ecosystem requires a central server to manage service security, including service authentication and token services. OAuth2-based services are widely used for microservices security. Spring Security and Spring Security OAuth are good candidates for building this capability.

Service configuration: All service configuration should be externalized, as discussed in the Twelve-Factor App principles. A central service for all configuration can be a good choice; the Spring Cloud Config server and Archaius are out-of-the-box configuration servers.

Testing tools (anti-fragile, RUM, and so on): Netflix uses Simian Army for anti-fragile testing. Mature services need consistent challenges to prove the reliability of the services and the quality of their fallback mechanisms. Simian Army components create various error scenarios to explore the behavior of the system under failure.

Monitoring and dashboards: Microservices also require a strong monitoring mechanism, not just at the infrastructure level but also at the service level. Spring Cloud Netflix Turbine, the Hystrix dashboard, and others provide service-level information. End-to-end monitoring tools such as AppDynamics, New Relic, and Dynatrace, and other tools such as statsd, Sensu, and Spigo, can add value in microservices monitoring.

Dependency and CI management: We also need tools to discover runtime topologies, find service dependencies, and manage configurable items (CIs). A graph-based CMDB is the most obvious way to manage these scenarios.

Data lakes: As discussed earlier in this article, we need a mechanism to combine data stored in different microservices and perform near-real-time analytics. Data lakes are a good choice for achieving this. Data ingestion tools such as Spring Cloud Data Flow, Flume, and Kafka are used to consume data; HDFS, Cassandra, and others are used to store it.
Reliable messaging: If communication is asynchronous, we may need a reliable messaging infrastructure service, such as RabbitMQ or another reliable messaging provider. Cloud messaging, or messaging as a service, is a popular choice for Internet-scale message-based service endpoints.

Process and governance capabilities

The last pieces of the puzzle are the process and governance capabilities required for microservices, which are:

DevOps: Key to a successful implementation is adopting DevOps. DevOps complements microservices development by supporting agile development, high-velocity delivery, automation, and better change management.

DevOps tools: DevOps tools for agile development, continuous integration, continuous delivery, and continuous deployment are essential for the successful delivery of microservices. A lot of emphasis is required on automated functional and real-user testing, as well as synthetic, integration, release, and performance testing.

Microservices repository: A microservices repository is where the versioned binaries of microservices are placed. This could be a simple Nexus repository or a container repository such as the Docker registry.

Microservice documentation: It is important to have all microservices properly documented. Swagger and API Blueprint are helpful in achieving good microservices documentation.

Reference architecture and libraries: A reference architecture provides a blueprint at the organization level to ensure that services are developed according to certain standards and guidelines in a consistent manner. Many of these standards can then be translated into a number of reusable libraries that enforce service development philosophies.

Summary

In this article, you learned the concepts and characteristics of microservices. We used the Customer Profile and Customer Notification microservices as an example to understand the concepts better. We also examined some of the common challenges in large-scale microservice implementations. Finally, we established a microservices capability model that can be used to deliver successful Internet-scale microservices.

JavaScript async programming using Promises [Tutorial]

Pavan Ramchandani
23 Jul 2018
10 min read
JavaScript now has a native pattern for writing asynchronous code, called the Promise pattern. This new pattern removes the common code issues that the event and callback patterns had, and it makes the code look more like synchronous code. A promise (or a Promise object) represents an asynchronous operation. Existing asynchronous JavaScript APIs are usually wrapped with promises, and the new JavaScript APIs are implemented purely using promises. Promises are new in JavaScript but have long been present in many other programming languages; C# 5, C++11, Swift, and Scala are some examples that support them. In this tutorial, we will see how to use promises in JavaScript. This article is an excerpt from the book Learn ECMAScript, Second Edition, written by Mehul Mohan and Narayan Prusty.

Promise states

A promise is always in one of these states:

Fulfilled: If the resolve callback is invoked with a non-promise object as the argument, or with no argument, then we say that the promise is fulfilled
Rejected: If the reject callback is invoked, or an exception occurs in the executor scope, then we say that the promise is rejected
Pending: If the resolve or reject callback is yet to be invoked, then we say that the promise is pending
Settled: A promise is said to be settled if it is either fulfilled or rejected, but not pending

Once a promise is fulfilled or rejected, it cannot be transitioned back; an attempt to transition it will have no effect.

Promises versus callbacks

Suppose you wanted to perform three AJAX requests, one after another. Here is a dummy implementation of that in callback style:

ajaxCall('http://example.com/page1', response1 => {
  ajaxCall('http://example.com/page2' + response1, response2 => {
    ajaxCall('http://example.com/page3' + response2, response3 => {
      console.log(response3)
    })
  })
})

You can see how quickly you can fall into something known as callback hell. Multiple levels of nesting make code not only unreadable but also difficult to maintain. Furthermore, if you start processing data after every call, and each call depends on the previous call's response data, the complexity of the code becomes unmanageable. Callback hell refers to multiple asynchronous functions nested inside each other's callback functions; this makes code harder to read and maintain. Promises can be used to flatten this code. Let's take a look:

ajaxCallPromise('http://example.com/page1')
  .then( response1 => ajaxCallPromise('http://example.com/page2' + response1) )
  .then( response2 => ajaxCallPromise('http://example.com/page3' + response2) )
  .then( response3 => console.log(response3) )

You can see that the code complexity is suddenly reduced and the code looks much cleaner and more readable. Let's first see how ajaxCallPromise would have been implemented; please read the following explanation for more clarity on the preceding code snippet.

Promise constructor and (resolve, reject) methods

To convert an existing callback-style function to a promise, we have to use the Promise constructor. In the preceding example, ajaxCallPromise returns a Promise, which can be either resolved or rejected by the developer. Let's see how to implement ajaxCallPromise (the $.ajaxAsyncWithNativeAPI call stands in for whatever callback-based API you are wrapping):

const ajaxCallPromise = url => {
  return new Promise((resolve, reject) => {
    // DO YOUR ASYNC STUFF HERE
    $.ajaxAsyncWithNativeAPI(url, function(data) {
      if (data.resCode === 200) {
        resolve(data.message)
      } else {
        reject(data.error)
      }
    })
  })
}

Hang on! What just happened there? First, we returned a Promise from the ajaxCallPromise function.
That means whatever we do now will be a Promise. The Promise constructor accepts a function argument, with the function itself accepting two very special arguments, resolve and reject; resolve and reject are themselves functions. When, inside the Promise constructor's function body, you call resolve or reject, the promise acquires a resolved or rejected value that is unchangeable later on. We then make use of the native callback-based API and check whether everything is OK. If everything is indeed OK, we resolve the Promise with the value being the message sent by the server (assuming a JSON response); if there was an error in the response, we reject the promise instead.

You can return a promise in a then call. When you do that, you can flatten the code instead of chaining promises again. For example, if foo() and bar() both return a Promise, then, instead of:

foo().then( res => {
  bar().then( res2 => {
    console.log('Both done')
  })
})

we can write it as follows:

foo()
  .then( res => bar() ) // bar() returns a Promise
  .then( res => {
    console.log('Both done')
  })

This flattens the code.

The then(onFulfilled, onRejected) method

The then() method of a Promise object lets us perform a task after the promise has been fulfilled or rejected. The task can also be another event-driven or callback-based asynchronous operation. The then() method of a Promise object takes two arguments, the onFulfilled and onRejected callbacks. The onFulfilled callback is executed if the Promise object was fulfilled, and the onRejected callback is executed if the Promise was rejected. The onRejected callback is also executed if an exception is thrown in the scope of the executor; therefore, it behaves like an exception handler, that is, it catches exceptions. The onFulfilled callback takes a parameter, the fulfillment value of the promise. Similarly, the onRejected callback takes a parameter, the reason for rejection:

ajaxCallPromise('http://example.com/page1').then(
  successData => { console.log('Request was successful') },
  failData => { console.log('Request failed ' + failData) }
)

When we reject the promise inside the ajaxCallPromise definition, the second function (the failData one) will execute instead of the first.

Let's take one more example by converting setTimeout() from a callback to a promise. This is how setTimeout() looks:

setTimeout(() => {
  // code here executes after TIME_DURATION milliseconds
}, TIME_DURATION)

A promised version will look something like the following:

const PsetTimeout = duration => {
  return new Promise((resolve, reject) => {
    setTimeout(() => {
      resolve()
    }, duration);
  })
}

// usage:
PsetTimeout(1000).then(() => {
  console.log('Executes after a second')
})

Here we resolved the promise without a value. If you do that, it gets resolved with a value equal to undefined.

The catch(onRejected) method

The catch() method of a Promise object is used instead of the then() method when we use then() only to handle errors and exceptions. There is nothing special about how the catch() method works; it just makes the code much easier to read, as the word catch makes it more meaningful. The catch() method takes just one argument, the onRejected callback, which is invoked in the same way as the onRejected callback of the then() method. The catch() method always returns a promise.
Here is how a new Promise object is returned by the catch() method:

If there is no return statement in the onRejected callback, then a new fulfilled Promise is created internally and returned.
If we return a custom Promise, then a new Promise object is created internally and returned; the new Promise object resolves the custom Promise object.
If we return something other than a custom Promise in the onRejected callback, then a new Promise object is created internally and returned; the new Promise object resolves the returned value.
If we pass null instead of the onRejected callback, or omit it, then a callback is created internally and used instead. The internally created onRejected callback returns a rejected Promise object, and the reason for its rejection is the same as the reason for the rejection of the parent Promise object.
If the Promise object on which catch() is called gets fulfilled, then the catch() method simply returns a new fulfilled Promise object and ignores the onRejected callback. The fulfillment value of the new Promise object is the same as the fulfillment value of the parent Promise.

To understand the catch() method, consider this code:

ajaxPromiseCall('http://invalidURL.com').then(success => {
  console.log(success)
}, failed => {
  console.log(failed)
});

This code can be rewritten in this way using the catch() method:

ajaxPromiseCall('http://invalidURL.com')
  .then(success => console.log(success))
  .catch(failed => console.log(failed));

These two code snippets work more or less in the same way.

The Promise.resolve(value) method

The resolve() method of the Promise object takes a value and returns a Promise object that resolves the passed value. The resolve() method is basically used to convert a value to a Promise object. It is useful when you find yourself with a value that may or may not be a Promise, but you want to use it as one. For example, jQuery promises have a different interface from ES6 promises, so you can use the resolve() method to convert jQuery promises into ES6 promises. Here is an example that demonstrates how to use the resolve() method:

const p1 = Promise.resolve(4);
p1.then(function(value) {
  console.log(value);
});

// passing a promise object
Promise.resolve(p1).then(function(value) {
  console.log(value);
});

Promise.resolve({name: "Eden"}).then(function(value) {
  console.log(value.name);
});

The output is as follows:

4
4
Eden

The Promise.reject(value) method

The reject() method of the Promise object takes a value and returns a rejected Promise object with the passed value as the reason. Unlike the Promise.resolve() method, the reject() method is used for debugging purposes and not for converting values into promises. Here is an example that demonstrates how to use the reject() method:

const p1 = Promise.reject(4);
p1.then(null, function(value) {
  console.log(value);
});

Promise.reject({name: "Eden"}).then(null, function(value) {
  console.log(value.name);
});

The output is as follows:

4
Eden

The Promise.all(iterable) method

The all() method of the Promise object takes an iterable object as an argument and returns a Promise that fulfills when all of the promises in the iterable object have been fulfilled. This can be useful when we want to execute a task after several asynchronous operations have finished.
Here is a code example that demonstrates how to use the Promise.all() method:

const p1 = new Promise(function(resolve, reject) {
  setTimeout(function() {
    resolve();
  }, 1000);
});

const p2 = new Promise(function(resolve, reject) {
  setTimeout(function() {
    resolve();
  }, 2000);
});

const arr = [p1, p2];

Promise.all(arr).then(function() {
  console.log("Done"); // "Done" is logged after 2 seconds
});

If the iterable object contains a value that is not a Promise object, then it is converted to a Promise object using the Promise.resolve() method. If any of the passed promises gets rejected, then the Promise.all() method immediately returns a new rejected Promise, for the same reason as the rejected passed Promise. Here is an example to demonstrate this:

const p1 = new Promise(function(resolve, reject) {
  setTimeout(function() {
    reject("Error");
  }, 1000);
});

const p2 = new Promise(function(resolve, reject) {
  setTimeout(function() {
    resolve();
  }, 2000);
});

const arr = [p1, p2];

Promise.all(arr).then(null, function(reason) {
  console.log(reason); // "Error" is logged after 1 second
});

The Promise.race(iterable) method

The race() method of the Promise object takes an iterable object as the argument and returns a Promise that fulfills or rejects as soon as one of the promises in the iterable object is fulfilled or rejected, with the fulfillment value or reason from that Promise. As the name suggests, the race() method is used to race promises and see which one finishes first. Here is a code example that shows how to use the race() method:

var p1 = new Promise(function(resolve, reject) {
  setTimeout(function() {
    resolve("Fulfillment Value 1");
  }, 1000);
});

var p2 = new Promise(function(resolve, reject) {
  setTimeout(function() {
    resolve("Fulfillment Value 2");
  }, 2000);
});

var arr = [p1, p2];

Promise.race(arr).then(function(value) {
  console.log(value); // outputs "Fulfillment Value 1"
}, function(reason) {
  console.log(reason);
});

At this point, I assume you have a basic understanding of how promises work, what they are, and how to convert a callback-style API into a promised API. Next up is async/await, the future of asynchronous programming. If you found this article useful, do check out the book Learn ECMAScript, Second Edition, for learning the ECMAScript standards to design your web applications.

How to publish Docker and integrate with Maven

Pravin Dhandre
11 Apr 2018
6 min read
We have learned how to create Docker images and how to run them, but so far those images are stored only on our system. Now we need to publish them so that they are accessible anywhere. In this post, we will learn how to publish our Docker images, and how to integrate Maven with Docker so the same steps happen automatically for our microservices.

Understanding repositories

In our previous example, when we built a Docker image, we published it into our local repository, so that when we execute docker run, Docker is able to find it. This local repository exists only on our system, and most likely we need access to our images wherever we would like to run them. For example, we may create our Docker image in a pipeline that runs on a build machine, but the application itself may run in our pre-production or production environments, so the Docker image should be available on any system that needs it. One of the great advantages of Docker is that any developer building an image can run it on their own system exactly as it would run on any server. This minimizes the risk of having something different in each environment, or of not being able to reproduce production when you try to find the source of a problem. Docker provides a public repository, Docker Hub, that we can use to publish and pull images, but of course you can also use private Docker repositories such as Sonatype Nexus, VMware Harbor, or JFrog Artifactory. To learn how to configure additional repositories, refer to the repositories documentation.

Docker Hub registration

After registering, we need to log into our account so that we can publish our images, using docker login from the command line:

docker login
Login with your Docker ID to push and pull images from Docker Hub. If you don't have a Docker ID, head over to https://hub.docker.com to create one.
Username: mydockerhubuser
Password:
Login Succeeded

When we need to publish an image, we must always be logged into the registry that we are working with; remember to log into Docker.

Publishing a Docker image

Now we'd like to publish our Docker image to Docker Hub, but before we can, we need to build our image for our repository. When we create an account in Docker Hub, a repository with our username is created; in this example, it will be mydockerhubuser. In order to build the image for our repository, we can use this command from our microservice directory:

docker build . -t mydockerhubuser/chapter07

This should be quite a fast process, since all the different layers are cached:

Sending build context to Docker daemon 21.58MB
Step 1/3 : FROM openjdk:8-jdk-alpine
---> a2a00e606b82
Step 2/3 : ADD target/*.jar microservice.jar
---> Using cache
---> 4ae1b12e61aa
Step 3/3 : ENTRYPOINT java -jar microservice.jar
---> Using cache
---> 70d76cbf7fb2
Successfully built 70d76cbf7fb2
Successfully tagged mydockerhubuser/chapter07:latest

Now that our image is built, we can push it to Docker Hub with the following command:

docker push mydockerhubuser/chapter07

This command will take several minutes, since the whole image needs to be uploaded. With our image published, we can now run it from any Docker system with the following command:

docker run mydockerhubuser/chapter07

Or else, we can run it as a daemon with:

docker run -d mydockerhubuser/chapter07

Integrating Docker with Maven

Now that we know most of the Docker concepts, we can integrate Docker with Maven using the docker-maven-plugin created by fabric8, so we can create Docker images as part of our Maven builds.
First, we will move our Dockerfile to a different folder. In the IntelliJ Project window, right-click on the src folder and choose New | Directory; we will name it Docker. Now, drag and drop the existing Dockerfile into this new directory, and change it to the following:

FROM openjdk:8-jdk-alpine
ADD maven/*.jar microservice.jar
ENTRYPOINT ["java","-jar", "microservice.jar"]

We move the Dockerfile into our project folders to manage it better. When our image is built using the plugin, the contents of our application are placed in a folder named maven, so we change the Dockerfile to reference that folder. Now, we will modify our Maven pom.xml and add the docker-maven-plugin in the build | plugins section:

<build>
  ....
  <plugins>
    ....
    <plugin>
      <groupId>io.fabric8</groupId>
      <artifactId>docker-maven-plugin</artifactId>
      <version>0.23.0</version>
      <configuration>
        <verbose>true</verbose>
        <images>
          <image>
            <name>mydockerhubuser/chapter07</name>
            <build>
              <dockerFileDir>${project.basedir}/src/Docker</dockerFileDir>
              <assembly>
                <descriptorRef>artifact</descriptorRef>
              </assembly>
              <tags>
                <tag>latest</tag>
                <tag>${project.version}</tag>
              </tags>
            </build>
            <run>
              <ports>
                <port>8080:8080</port>
              </ports>
            </run>
          </image>
        </images>
      </configuration>
    </plugin>
  </plugins>
</build>

Here, we are specifying how to create our image, where the Dockerfile is, and even which version of the image we are building. Additionally, we specify some parameters for when our container runs, such as the port that it exposes. If we need IntelliJ to reload the Maven changes, we may need to click on the Reimport all maven projects button in the Maven Project window. To build our image using Maven, we can use the Maven Project window by running the docker:build task, or by running the following command:

mvnw docker:build

This will build the Docker image, but it requires the application to be packaged first, so we can perform the following command:

mvnw package docker:build

We can also publish our image using Maven, either with the Maven Project window to run the docker:push task, or by running the following command:

mvnw docker:push

This will push our image to Docker Hub, but if we'd like to do everything in just one command, we can use the following:

mvnw package docker:build docker:push

Finally, the plugin provides other tasks, such as docker:run, docker:start, and docker:stop, which map onto the commands we've already learned on the command line. With this, we learned how to publish Docker images manually and how to integrate the process into the Maven lifecycle. Do check out the book Hands-On Microservices with Kotlin to start simplifying the development of microservices and building a high-quality service environment.
With this, we have learned how to publish Dockers manually and how to integrate Docker into the Maven lifecycle. Do check out the book Hands-On Microservices with Kotlin to start simplifying development of microservices and building a high quality service environment.

Check out other posts:
The key differences between Kubernetes and Docker Swarm
How to publish Microservice as a service onto a Docker
Building Docker images using Dockerfiles

What are Microservices?

Packt
20 Jun 2017
12 min read
In this article written by Gaurav Kumar Aroraa, Lalit Kale, and Kanwar Manish, authors of the book Building Microservices with .NET Core, we will start with a brief introduction. Then, we will define its predecessors: the monolithic architecture and service-oriented architecture (SOA). After this, we will see how microservices fare against both SOA and the monolithic architecture, and compare the advantages and disadvantages of each of these architectural styles. This will enable us to identify the right scenario for each style. We will understand the problems that arise from having a layered monolithic architecture and discuss the solutions available to these problems in the monolithic world. At the end, we will be able to break down a monolithic application into a microservice architecture.

We will cover the following topics in this article:

Origin of microservices
Discussing microservices

(For more resources related to this topic, see here.)

Origin of microservices

The term microservices was used for the first time in mid-2011 at a workshop of software architects. In March 2012, James Lewis presented some of his ideas about microservices. By the end of 2013, various groups from the IT industry had started having discussions about microservices, and by 2014 the concept had become popular enough to be considered a serious contender for large enterprises.

There is no official definition available for microservices; the understanding of the term is purely based on the use cases and discussions held in the past. We will discuss this in detail, but before that, let's check out the definition of microservices as per Wikipedia (https://en.wikipedia.org/wiki/Microservices), which sums it up as:

Microservices is a specialization of and implementation approach for SOA used to build flexible, independently deployable software systems.

In 2014, James Lewis and Martin Fowler came together, provided a few real-world examples, and presented microservices (refer to http://martinfowler.com/microservices/) in their own words, detailing it as follows:

The microservice architectural style is an approach to developing a single application as a suite of small services, each running in its own process and communicating with lightweight mechanisms, often an HTTP resource API. These services are built around business capabilities and independently deployable by fully automated deployment machinery. There is a bare minimum of centralized management of these services, which may be written in different programming languages and use different data storage technologies.

It is very important that you note all the attributes James and Martin defined here. They defined it as an architectural style that developers can use to develop a single application with the business logic spread across a bunch of small services, each having its own persistent storage. Also, note its attributes: each service is independently deployable, runs in its own process, communicates through lightweight mechanisms, and can be written in a different programming language. We want to emphasize this specific definition since it is the crux of the whole concept, and as we move along, it will come together by the time we finish this book.

Discussing microservices

Until now, we have gone through a few definitions of microservices; now, let's discuss microservices in detail. In short, a microservice architecture removes most of the drawbacks of SOA architectures.
Slicing your application into a number of services is neither SOA nor microservices. However, combining service design and best practices from the SOA world with a few emerging practices, such as isolated deployment, semantic versioning, lightweight services, and service discovery in polyglot programming, is microservices. We implement microservices to satisfy business features with reduced time to market and greater flexibility.

Before we move on to understand the architecture, let's discuss the two important architectures that have led to its existence:

The monolithic architecture style
SOA

Most of us are aware of the scenario where, during the life cycle of an enterprise application, a suitable architectural style is decided on. Then, at various stages, the initial pattern is further improved and adapted with changes that cater to various challenges, such as deployment complexity, a large code base, and scalability issues. This is exactly how the monolithic architecture style evolved into SOA, which in turn led to microservices.

Monolithic architecture

The monolithic architectural style is a traditional architecture type that has been widely used in the industry. The term "monolithic" is not new and is borrowed from the Unix world, where most commands exist as standalone programs whose functionality is not dependent on any other program. As seen in the succeeding image, a monolithic application can contain different components, such as:

User interface: This handles all of the user interaction, while responding with HTML or JSON or any other preferred data interchange format (in the case of web services).
Business logic: All the business rules applied to the input received in the form of user input, events, and database queries live here.
Database access: This houses the complete functionality for accessing the database for the purposes of querying and persisting objects. A widely accepted rule is that it is utilized through business modules and never directly through user-facing components.

Software built using this architecture is self-contained; we can imagine a single .NET assembly that contains all of these components. As the software is self-contained, its components are interconnected and interdependent. Even a simple code change in one of the modules may break a major functionality in other modules, which results in a scenario where the whole application needs to be retested. With the business depending critically on its enterprise applications, the time this retesting takes can prove very costly.

Having all the components tightly coupled poses another challenge: whenever we execute or compile such software, all the components should be available or the build will fail. The preceding image represents a monolithic architecture as a self-contained, single .NET assembly project; however, monolithic architectures might also have multiple assemblies. This means that even though the business layer, the data access layer, and so on are separate assemblies, at run time all of them come together and run as one process. The user interface depends on other components, such as direct sales and inventory, in the same manner that all the other components depend upon each other; in this scenario, we would not be able to execute the project in the absence of any one of these components.
The process of upgrading any one of these components is also more complex, as we may have to consider other components that require code changes too. This results in more development time than the actual change requires. Deploying such an application is another challenge: during deployment, we have to make sure that each and every component is deployed properly; otherwise, we may end up facing a lot of issues in our production environments.

If we develop an application using the monolithic architecture style, as discussed previously, we might face the following challenges:

Large code base: This is a scenario where the code lines outnumber the comments by a great margin. As components are interconnected, we will have to bear with a repetitive code base.
Too many business modules: This is in regard to modules within the same system.
Code base complexity: This results in a higher chance of code breaking due to a fix required in other modules or services.
Complex code deployment: You may come across minor changes that require whole system deployment.
One module failure affecting the whole system: This is in regard to modules that depend on each other.
Scalability: This is required for the entire system and not just the modules in it.
Intermodule dependency: This is due to tight coupling.
Spiraling development time: This is due to code complexity and interdependency.
Inability to easily adapt to a new technology: In this case, the entire system would need to be upgraded.

As discussed earlier, if we want to reduce development time, ease deployment, and improve the maintainability of software for enterprise applications, we should avoid the traditional, monolithic architecture.

Service-oriented architecture

In the previous section, we discussed the monolithic architecture and its limitations, and why it does not fit our enterprise application requirements. To overcome these issues, we should go with a modular approach where we can separate the components so that they come out of the self-contained, single .NET assembly. The main difference between SOA and the monolithic architecture is not whether there is one assembly or many; because each service in SOA runs as a separate process, SOA scales better than the monolithic approach.

Let's discuss the modular architecture, that is, SOA. This is a famous architectural style in which enterprise applications are designed as a collection of services. These services may be RESTful or ASMX Web services. To understand SOA in more detail, let's discuss "service" first.

What is a service?

Service, in this case, is an essential concept of SOA. It can be a piece of code, a program, or software that provides some functionality to other system components. This piece of code can interact directly with the database, or indirectly through another service. Furthermore, it can be consumed by clients directly, where the client may be a website, a desktop app, a mobile app, or any other device app. Refer to the following diagram.

Service refers to a type of functionality exposed for consumption by other systems (generally referred to as clients or client applications). As mentioned earlier, it can be represented by a piece of code, a program, or software. Such services are exposed over the HTTP transport protocol as a general practice. However, the HTTP protocol is not a limiting factor, and a protocol can be picked as deemed fit for the scenario.
In the following image, Service – direct selling interacts directly with Database, and three different clients, namely Web, Desktop, and Mobile, consume the service. On the other hand, we have clients consuming Service – partner selling, which interacts with Service – channel partners for database access.

A product selling service is a set of services that interacts with client applications and provides database access directly or through another service, in this case, Service – channel partners. In the case of Service – direct selling, shown in the preceding example, it provides functionality to a web store, a desktop application, and a mobile application. This service further interacts with the database for various tasks, namely fetching and persisting data.

Normally, services interact with other systems via some communication channel, generally the HTTP protocol. These services may or may not be deployed on the same server. In the preceding image, we have projected an example SOA scenario. There are many fine points to note here, so let's get started. Firstly, our services can be spread across different physical machines. Here, Service – direct selling is hosted on two separate machines. It is a possible scenario that, instead of the entire business functionality, only a part of it will reside on Server 1 and the remaining part on Server 2. Similarly, Service – partner selling appears to have the same arrangement on Server 3 and Server 4. However, this doesn't stop Service – channel partners from being hosted as a complete set on both servers: Server 5 and Server 6.

A system that uses one or multiple services in the fashion depicted in the preceding figure is called an SOA. We will discuss SOA in detail in the following sections.

Let's recall the monolithic architecture. In that case, we did not gain code reusability; it is a self-contained assembly, and all the components are interconnected and interdependent, so for deployment we have to deploy the complete project. Having now selected SOA (refer to the preceding image and subsequent discussion), we gain the benefits of code reusability and easy deployment. Let's examine these in the wake of the preceding figure:

Reusability: Multiple clients can consume the service. The service can also be simultaneously consumed by other services. For example, OrderService is consumed by web and mobile clients. Now, OrderService can also be used by the Reporting Dashboard UI.
Stateless: Services do not persist any state between requests from the client; that is, the service doesn't know, nor care, whether a request has come from a client that has or hasn't made the previous request.
Contract-based: Interfaces make services technology-agnostic on both sides of implementation and consumption. This also serves to make them immune to code updates in the underlying functionality.
Scalability: A system can be scaled up, and SOA services can be individually clustered with appropriate load balancing.
Upgradability: It is very easy to roll out new functionalities or introduce new versions of existing functionality. The system doesn't stop you from keeping multiple versions of the same business functionality.
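To ground the contract-based and stateless attributes, here is a small sketch of a service contract and a stateless implementation. The names, the return values, and the choice of TypeScript are illustrative assumptions made for this post (the book itself targets .NET Core), not code from the article:

// Contract: any implementation must expose this operation,
// regardless of the technology behind it
interface OrderService {
  placeOrder(productId: string, quantity: number): string; // returns an order ID
}

// Stateless implementation: nothing is remembered between calls;
// persistence would live in a database access layer, not in memory
class DirectSellingOrderService implements OrderService {
  placeOrder(productId: string, quantity: number): string {
    return `order-${productId}-${quantity}`;
  }
}

// Reusability: web, desktop, and mobile clients can all consume the same contract
const service: OrderService = new DirectSellingOrderService();
console.log(service.placeOrder('sku-42', 2));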
We further defined the various challenges that monolithic architectures face when dealing with large systems. Scalability and reusability are some definite advantages that SOA provides over the monolithic approach. We also discussed the limitations of the monolithic architecture, including its scaling problems, as illustrated by a real-life monolithic application. The microservice architecture style resolves all of these issues by reducing code interdependency and isolating the dataset size that any one of the microservices works upon. We utilized dependency injection and database refactoring for this. We further explored automation, CI, and deployment, which easily allow the development team to let the business sponsor choose which industry trends to respond to first. This results in cost benefits, better business response, timely technology adoption, effective scaling, and the removal of human dependency.

Resources for Article:

Further resources on this subject:

Microservices and Service Oriented Architecture [article]
Breaking into Microservices Architecture [article]
Microservices – Brave New World [article]


What is functional reactive programming?

Packt
08 Feb 2017
4 min read
Reactive programming is, quite simply, a programming paradigm where you work with an asynchronous data flow. There are a lot of books and blog posts that argue about what reactive programming is exactly, but if you delve too deeply too quickly, it's easy to get confused and conclude that reactive programming isn't useful at all. Functional reactive programming takes the principles of functional programming and uses them to enhance reactive programming: you take functions, like map, filter, and reduce, and use them to better manage streams of data.

Read now: What is the Reactive manifesto?

How does imperative programming compare to reactive programming?

Imperative programming makes you describe the steps a computer must take to execute a task. In comparison, functional reactive programming gives you the constructs to propagate changes, so you have to think more about what to do than how to do it.

This can be illustrated with a simple sum of two numbers, which could be written as a = b + c in imperative programming. A single line of code expresses the sum; that's straightforward, right? However, if we later change the value of b or c, the value of a doesn't change, and you wouldn't want it to change if you were using an imperative approach. In reactive programming, by contrast, the value of a would react to the changes you make. Imagine the sum in a Microsoft Excel spreadsheet: every time you change the value of column b or c, it recalculates the value of a. This is like a very basic form of software propagation.

You probably already use an asynchronous data flow. Every time you add a listener to a mouse click or a keystroke in a web page, you pass a function to react to that user input. So, a mouse click might be seen as a stream of events which you can observe, executing a function when each event happens. But this is only one way of using event streams; you might want more sophistication and control over your streams. Reactive programming takes this to the next level. When you use it you can react to changes in anything, such as changes in:

user inputs
external sources
database changes
changes to variables and properties

This then means you can create a stream of events following on from specific actions. For example, we can see the changing value of a stock as an EventStream; we can then use it to show a user when to buy or sell that stock in real time. Facebook and Twitter are other good examples of software reacting to changes in external source streams; reactive programming is an important component in developing the really dynamic UIs that are characteristic of social media sites.

Functional reactive programming

Functional reactive programming, then, gives you the ability to do a lot with streams of data or events: you can filter, combine, map, and buffer, for example. Going back to the stock example above, you can 'listen' to different stocks and use a filter function to present only the ones worth buying to the user in real time, as in the sketch below.
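Here is a small, self-contained sketch of that idea in TypeScript. The Stream class, the StockTick shape, and the buy threshold are assumptions made purely for illustration; a real project would use a library such as RxJS instead of hand-rolling streams:

// A tick on the stream of stock prices
type StockTick = { symbol: string; price: number };

class Stream<T> {
  private listeners: Array<(value: T) => void> = [];

  // React to every value that flows through the stream
  subscribe(listener: (value: T) => void): void {
    this.listeners.push(listener);
  }

  // Push a new value into the stream
  emit(value: T): void {
    this.listeners.forEach(listener => listener(value));
  }

  // Produce a new stream that only propagates values passing the predicate
  filter(predicate: (value: T) => boolean): Stream<T> {
    const filtered = new Stream<T>();
    this.subscribe(value => {
      if (predicate(value)) {
        filtered.emit(value);
      }
    });
    return filtered;
  }
}

const ticks = new Stream<StockTick>();

// Present only the stocks worth buying (illustrative threshold), in real time
ticks
  .filter(tick => tick.price < 100)
  .subscribe(tick => console.log(`Buy ${tick.symbol} at ${tick.price}`));

ticks.emit({ symbol: 'ACME', price: 95 });  // logs: Buy ACME at 95
ticks.emit({ symbol: 'INIT', price: 140 }); // filtered out, nothing logged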
Why do I need functional reactive programming?

Functional reactive programming is especially useful for:

Graphical user interfaces
Animation
Robotics
Simulation
Computer vision

A few years ago, all a user could do in a web app was fill in a form with bits of data and post it to a server. Today, web and mobile apps are much richer for users. To go into more detail, by using reactive programming you can abstract the source of data away from the business logic of the application. What this means in practice is that you can write more concise and decoupled code. In turn, this makes code much more reusable and testable, as you can easily mock streams to exercise your business logic when testing the application.

Read more:
Introduction to JavaScript
Breaking into Microservices Architecture
JSON with JSON.Net