
How-To Tutorials - Web Development

Web Components

Packt
17 Jun 2016
12 min read
In this article by Arshak Khachatryan, the author of Getting Started with Polymer, we will discuss web components. Web technologies are currently growing rapidly, yet although most websites use them, we still come across many sites with bad, unresponsive UI design and poor performance. The main reason to think about responsive websites is that users are moving to the mobile web: around 55% of web users browse on mobile phones because it is faster and more convenient. This is why we need to deliver mobile content in the simplest way possible. Everything is moving toward minimalism, even the Web.

The new web standards are changing rapidly too. In this article, we will cover one of these new technologies, web components, and what they do. We will discuss the following web components specifications:

- Templates
- Shadow DOM

Templates

In this section, we will discuss what we can do with templates. However, let's answer a few questions first: what are templates, and why should we use them?

Templates are basically fragments of HTML, but let's call these fragments the "zombie" fragments of HTML, as they are neither alive nor dead. What is meant by "neither alive nor dead"? Let me explain this with a real-life example. Once, when I was working on the ucraft.me project (a website built with a lot of cool stuff in it), we faced a rather new challenge with templates. We had a lot of form elements, but we didn't know where to store their content. We didn't want to load the DOM of each form element, but what could we do? As always, we did some magic: we created a lot of div elements holding the form elements and hid them with CSS. However, the display: none property only prevents an element from being rendered; the browser still loads it. This was a problem, because there were a lot of form element templates, and it affected the performance of the website.

I recommended that my team work with templates. Templates can contain HTML content, but the browser neither loads nor renders it. We call template elements "dead elements" because they do not load their content until you access it with JavaScript. Let's move ahead, and let me show you some examples of how you can create templates and work with their content.

Imagine that you are working on a big project where you need to load some dynamic content. Before templates, I would have created a PHP file and fetched its content by calling the jQuery .load() function. Now, however, you can save your content inside a <template> element and get it without jQuery or AJAX, with a single line of JavaScript code.

Let's create a template. In index.html, we have <template> and some content we want to use later, as shown in the following code block:

<template class="superman">
  <div>
    <img src="assets/img/superman.png" class="animated_superman" />
  </div>
</template>

The time has now come for JavaScript! Execute the following code:

<script>
  // selecting the template element with querySelector()
  var tmpl = document.querySelector('.superman');

  // getting the <template> content
  var content = tmpl.content;

  // making some changes in the content
  content.querySelector('.animated_superman').width = 200;

  // appending the template to the body
  document.body.appendChild(content);
</script>

So, that's it! Cool, right? The content is loaded only after you append it to the document.
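Keep in mind that appendChild(content) physically moves the nodes out of the template, so the template can only be stamped once this way. If you need to reuse a template several times, you can clone its fragment first. Here is a minimal sketch, assuming the same .superman template as above; cloning with document.importNode() instead of appending the fragment directly is the only change:

<script>
  var tmpl = document.querySelector('.superman');

  // importNode(fragment, true) makes a deep copy of the template's content,
  // so the <template> keeps its own nodes and can be stamped again later.
  var firstCopy = document.importNode(tmpl.content, true);
  document.body.appendChild(firstCopy);

  var secondCopy = document.importNode(tmpl.content, true);
  document.body.appendChild(secondCopy);
</script>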
So, do you realize that templates are a part of the future web? If you are using Chrome Canary, just turn on the experimental web platform features flag and enable HTML imports and experimental JavaScript.

There are four ways to use templates:

1. Add templates as hidden elements in the document and just copy and paste the data when you need it, as follows:

<div hidden data-template="superman">
  <div>
    <p>SuperMan Head</p>
    <img src="assets/img/superman.png" class="animated_superman" />
  </div>
</div>

The problem is that the browser will load all of this content. It will load (though not render) images, video, audio, and so on.

2. Get the content of the template as a string (by requesting it with AJAX or from <script type="x-template">). However, we might have some problems working with strings; this approach can be vulnerable to XSS attacks, so we need to pay extra attention to it:

<script data-template="batman" type="x-template">
  <div>
    <p>Batman Head this time!</p>
    <img src="assets/img/superman.png" class="animated_superman" />
  </div>
</script>

3. Compiled templates such as Hogan.js (http://twitter.github.io/hogan.js/) also work with strings, so they share the same flaw as the second approach.

4. The <template> element does not have these disadvantages. We work with the DOM, not with strings, and we decide when to run the code.

In conclusion: the <template> tag is not intended to replace templating systems. There are no tricky iteration operators or data bindings. Its main feature is the ability to insert "live" content along with scripts. Lastly, it does not require any libraries.

Shadow DOM

The Shadow DOM specification is a separate standard. Part of it is used for standard DOM elements, but it is also used to create web components. In this section, you will learn what Shadow DOM is and how to use it.

Shadow DOM is an internal DOM tree that is separated from the external document. It can have its own IDs, styles, and so on. Most importantly, Shadow DOM is not visible outside of its scope without the use of special techniques. Hence, there are no conflicts with the external world; it's like an iframe.

Inside the browser

The Shadow DOM concept has been used for a long time inside browsers themselves. When the browser shows complex controls, such as an <input type="range"> slider or an <input type="date"> calendar, it constructs them internally out of the most ordinary styled <div>, <span>, and other elements. They are invisible at first glance, but they can easily be seen if the checkbox that displays the Shadow DOM is enabled in Chrome DevTools. In that view, #shadow-root is the Shadow DOM.

Getting items from the Shadow DOM can only be done using special JavaScript calls or selectors. They are not children, but a more powerful separation of content from the parent.

In the browser's Shadow DOM, you can see a useful pseudo attribute. It is nonstandard and is present solely for historical reasons. It can be styled via CSS with the help of subelements—for example, let's change the form's date inputs to red via the following code:

<style>
  input::-webkit-datetime-edit {
    background: red;
  }
</style>
<input type="date" />

Once again, make a note of the pseudo custom attribute. Speaking chronologically, browsers first started to experiment with encapsulated DOM structures internally, and then the Shadow DOM specification appeared, which allows developers to do the same. Now, let's work with the standard Shadow DOM from JavaScript.
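Because the createShadowRoot() API used in the next section was still experimental at the time of writing, it can be worth feature-checking before calling it. The following is a small illustrative sketch and not part of the original text; the webkitCreateShadowRoot fallback and the mention of the webcomponents.js polyfill are assumptions about older browser builds:

<script>
  var testHost = document.createElement('div');

  if (testHost.createShadowRoot || testHost.webkitCreateShadowRoot) {
    // The Shadow DOM (v0) API used in this article is available.
    console.log('createShadowRoot() is supported in this browser.');
  } else {
    // Fall back gracefully, or load a polyfill such as webcomponents.js.
    console.log('Shadow DOM is not supported here.');
  }
</script>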
Creating a Shadow DOM

A Shadow DOM can be created inside any element with the elem.createShadowRoot() call, as shown in the following code:

<div id="container">You know why?</div>
<script>
  var root = container.createShadowRoot();
  root.innerHTML = "Because I'm Batman!";
</script>

If you run this example, you will see that the contents of the #container element have disappeared somewhere, and only "Because I'm Batman!" is shown. This is because the element now has a Shadow DOM, and the browser renders the Shadow DOM instead of the element's previous content.

If you wish, you can show the element's ordinary content inside this Shadow DOM. To do this, you need to specify where it should go. This is done through an "insertion point", declared using the <content> tag; here's an example:

<div id="container">You know why?</div>
<script>
  var root = container.createShadowRoot();
  root.innerHTML = '<h1><content></content></h1><p>Winter is coming!</p>';
</script>

Now you will see "You know why?" in the title, followed by "Winter is coming!". You can inspect the resulting Shadow DOM in Chrome DevTools.

The following are some important details about the Shadow DOM:

- The <content> tag affects only the display; it does not move the nodes physically. The node "You know why?" remains inside div#container, and it can even be obtained using container.firstElementChild.
- Inside the <content> tag, the element's own content is rendered—in this example, the string "You know why?".
- With the select attribute of the <content> element, you can specify a selector for the content you want to distribute; for example, <content select="p"></content> will distribute only paragraphs.
- Inside the Shadow DOM, you can use the <content> tag multiple times with different values of select, thus indicating where to place each part of the original content. However, it is impossible to duplicate nodes: if a node has been shown by one <content> tag, it will be skipped by the next one. For example, if there is a <content select="h3.title"> tag and then a <content select="h3"> tag, the first <content> will show the <h3> headers with the class title, while the second will show all the others, except for the ones already shown.
- In the earlier example, the <content></content> tag is empty. If we add some fallback content inside the <content> tag, it will be shown only when no nodes match.

Check out the following code:

<div id="container">
  <h3>Once upon a time, in Westeros</h3>
  <strong>Ruled a king by name Joffrey and he's dead!</strong>
</div>
<script>
  var root = container.createShadowRoot();
  root.innerHTML =
    '<content select="h3"></content>' +
    '<content select=".writer">Jon Snow</content>' +
    '<content></content>';
</script>

When you run this code, you will see the following:

- The first <content select="h3"> tag displays the title.
- The second <content select=".writer"> tag would show the writer's name, but since there is no element with this selector, it falls back to its default content: Jon Snow.
- The third <content> tag displays the rest of the original content of the element, without the <h3> header that was already shown.

Once again, note that <content> does not move nodes in the DOM physically; the distribution is only visual.
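To convince yourself that the distribution really is only visual, you can ask an insertion point which light-DOM nodes it is currently rendering. This is a small sketch, assuming a browser that implements the Shadow DOM v0 API used in this article (getDistributedNodes() was part of that API):

<div id="container">You know why?</div>
<script>
  var root = container.createShadowRoot();
  root.innerHTML = '<h1><content></content></h1>';

  // The original text node is still physically inside #container...
  console.log(container.childNodes[0]);

  // ...while the insertion point merely reports it as a distributed node.
  var insertionPoint = root.querySelector('content');
  console.log(insertionPoint.getDistributedNodes());
</script>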
The shadowRoot root

After a root has been created in the internal DOM, the tree is available as container.shadowRoot. It is a special object that supports the basic CSS query methods and is described in detail in the ShadowRoot specification. You need to go through container.shadowRoot whenever you want to work with content in the Shadow DOM. You can also create a new Shadow DOM tree from JavaScript; here's an example:

<div id="container">Polycasts</div>
<script>
  // create a new Shadow DOM tree for the element
  var root = container.createShadowRoot();
  root.innerHTML = "<h1><content></content></h1><strong>Hey googlers! Let's code today.</strong>";
</script>
<script>
  // read data from the Shadow DOM of the element
  var root = container.shadowRoot;

  // outputs: Hey googlers! Let's code today.
  document.write('<br/><em>container: ' + root.querySelector('strong').innerHTML);

  // empty, because the <content> element has no physical child nodes
  document.write('<br/><em>content: ' + root.querySelector('content').innerHTML);
</script>

To finish up, Shadow DOM is a tool to create a separate DOM tree inside an element, which is not visible from outside without the use of special techniques:

- A lot of browser components with complex structures already have a Shadow DOM.
- You can create a Shadow DOM inside any element by calling elem.createShadowRoot(). Afterwards, it is available as elem.shadowRoot, so a script can access the Shadow DOM it created (this does not work for the shadow trees the browser creates internally for its own controls).
- Once a Shadow DOM appears in an element, the element's original content is hidden; only the Shadow DOM is rendered.
- The <content> element shows the original element's content inside the Shadow DOM only visually; the nodes remain in the same place in the DOM structure.

Detailed specifications are given at http://w3c.github.io/webcomponents/spec/shadow/.

Summary

Using web components, you can easily create your web application by splitting it into parts/components.

Drupal 8 Configuration Management

Packt
18 Mar 2015
14 min read
In this article by Stefan Borchert and Anja Schirwinski, the authors of the book Drupal 8 Configuration Management, we will learn about the inner workings of the configuration management system in Drupal 8. You will learn about config and schema files and read about the difference between simple configuration and configuration entities.

The config directory

During installation, Drupal adds a directory within sites/default/files called config_HASH, where HASH is a long random string of letters and numbers. This sequence is a random hash generated during the installation of your Drupal site. It adds some protection to your configuration files, in addition to the default restriction enforced by the .htaccess file within the subdirectories of the config directory, which prevents unauthorized users from seeing the contents of the directories. As a result, it would be really hard for someone to guess the folder's name.

Within the config directory, you will see two additional directories that are empty by default (leaving the .htaccess and README.txt files aside). One of the directories is called active. If you change the configuration system to use file storage instead of the database for the active Drupal site configuration, this directory will contain the active configuration. If you did not customize the storage mechanism of the active configuration (we will learn later how to do this), Drupal 8 uses the database to store the active configuration. The other directory is called staging. This directory is empty by default, but it can host the configuration you want to import into your Drupal site from another installation. You will learn how to use this later on in this article.

A simple configuration example

First, we want to become familiar with configuration itself. If you look into the database of your Drupal installation and open up the config table, you will find the entire active configuration of your site. Depending on your site's configuration, table names may be prefixed with a custom string, so you'll have to look for a table name that ends with config.

Don't worry about the strange-looking text in the data column; this is the serialized content of the corresponding configuration. It expands to single configuration values—for example, system.site.name, which holds the name of your site. Changing the site's name in the user interface at admin/config/system/site-information will immediately update the record in the database; put simply, the records in this table are the current state of your site's configuration.

But where does the initial configuration of your site come from? Drupal itself and the modules you install must provide some kind of default configuration that gets added to the active storage during installation.

Config and schema files – what are they and what are they used for?

In order to provide a default configuration during the installation process, Drupal (modules and profiles) comes with a bunch of files that hold the configuration needed to run your site. To make parsing of these files simple and to enhance their readability, the configuration is stored using the YAML format. YAML (http://yaml.org/) is a data-orientated serialization standard that aims for simplicity. With YAML, it is easy to map common data types such as lists, arrays, or scalar values.
Config files

Directly beneath the root directory of each module and profile that defines or overrides configuration (either core or contrib), you will find a directory named config. Within this directory, there may be two more directories (although both are optional): install and schema. Check the image module inside core/modules and take a look at its config directory.

The install directory contains all configuration values that the specific module defines or overrides, stored in files with the extension .yml (one of the default extensions for files in the YAML format). During installation, the values stored in these files are copied to the active configuration of your site. With the default configuration storage, the values are added to the config table; with file-based configuration storage mechanisms, on the other hand, the files are copied to the appropriate directories.

Looking at the filenames, you will see that they follow a simple convention: <module name>.<type of configuration>[.<machine name of configuration object>].yml (setting aside <module name>.settings.yml for now). The explanation is as follows:

- <module name>: This is the name of the module that defines the settings included in the file. For instance, the image.style.large.yml file contains settings defined by the image module.
- <type of configuration>: This can be seen as a type of group for configuration objects. The image module, for example, defines several image styles. These styles are a set of different configuration objects, so the group is defined as style. Hence, all configuration files that contain image styles defined by the image module itself are named image.style.<something>.yml. The same structure applies to blocks (block.block.*.yml), filter formats (filter.format.*.yml), menus (system.menu.*.yml), content types (node.type.*.yml), and so on.
- <machine name of configuration object>: The last part of the filename is the unique machine-readable name of the configuration object itself. In our examples from the image module, you see three different items: large, medium, and thumbnail. These are exactly the three image styles you will find at admin/config/media/image-styles after installing a fresh copy of Drupal 8.

Schema files

The primary reason schema files were introduced into Drupal 8 is multilingual support: a tool was needed to identify all translatable strings within the shipped configuration. The secondary reason is to provide actual translation forms for configuration based on your data and to expose translatable configuration pieces to external tools. Each module can have as many configuration .yml files as needed. All of these are described in one or more schema files that are shipped with the module.

As a simple example of how schema files work, let's look at the system's maintenance settings in the system.maintenance.yml file at core/modules/system/config/install. The file's contents are as follows:

message: '@site is currently under maintenance. We should be back shortly. Thank you for your patience.'
langcode: en

The system module's schema files live in core/modules/system/config/schema. These define the basic types but, for our example, the most important aspect is that they define the schema for the maintenance settings.
The corresponding schema section from the system.schema.yml file is as follows:

system.maintenance:
  type: mapping
  label: 'Maintenance mode'
  mapping:
    message:
      type: text
      label: 'Message to display when in maintenance mode'
    langcode:
      type: string
      label: 'Default language'

The first line corresponds to the filename of the .yml file, and the nested lines underneath describe the file's contents. Mapping is a basic type for key-value pairs (always the top-level type in a .yml file). The system.maintenance.yml file is labeled as label: 'Maintenance mode'. Then, the actual elements in the mapping are listed under the mapping key. As shown in the code, the file has two items, so the message and langcode keys are described. These are a text and a string value, respectively. Both values are also given a label in order to identify them in configuration forms.

Learning the difference between active and staging

By now, you know that Drupal works with the two directories active and staging. But what is the intention behind those directories? And how do we use them?

The configuration used by your site is called the active configuration, since it is the configuration that is affecting the site's behavior right now. The current (active) configuration is stored in the database, and direct changes to your site's configuration go into the corresponding tables. The reason Drupal 8 stores the active configuration in the database is that it enhances performance and security (source: https://www.drupal.org/node/2241059). However, sometimes you might not want to store the active configuration in the database and might need to use a different storage mechanism. For example, using the filesystem as configuration storage will enable you to track changes in the site's configuration using a versioning system such as Git or SVN.

Changing the active configuration storage

If you do want to switch your active configuration storage to files, here's how. Note that changing the configuration storage is only possible before installing Drupal. After installing it, there is no way to switch to another configuration storage!

To use a different configuration storage mechanism, you have to make some modifications to your settings.php file. First, you'll need to find the section named Active configuration settings. There, you have to uncomment the line that starts with $settings['bootstrap_config_storage'] to enable file-based configuration storage. Additionally, you need to copy the existing default.services.yml (next to your settings.php file) to a file named services.yml and enable the new configuration storage:

services:
  # Override configuration storage.
  config.storage:
    class: Drupal\Core\Config\CachedStorage
    arguments: ['@config.storage.active', '@cache.config']
  config.storage.active:
    # Use file storage for active configuration.
    alias: config.storage.file

This tells Drupal to override the default service used for configuration storage and use config.storage.file as the active configuration storage mechanism instead of the default database storage. After installing the site with these settings, we can take another look at the config directory in sites/default/files (assuming you didn't change the location of the active and staging directories). As you can see, the active directory now contains the entire site's configuration. The files in this directory get copied here during the website's installation process.
Whenever you make a change to your website, the change is reflected in these files. Exporting a configuration always exports a snapshot of the active configuration, regardless of the storage method.

The staging directory contains the changes you want to add to your site. Drupal compares the staging directory to the active directory and checks for changes between them. When you upload your compressed export file, it actually gets placed inside the staging directory. This means you can save yourself the trouble of using the interface to export and import the compressed file if you're comfortable enough with copy-and-pasting files to another directory. Just make sure you copy all of the files to the staging directory, even if only one of the files was changed. Any missing files are interpreted as deleted configuration, and will mess up your site.

In order to get the contents of staging into active, we simply have to use the synchronize option at admin/config/development/configuration again. This page will show us what was changed and allows us to import the changes. On importing, your active configuration will get overridden with the configuration in your staging directory. Note that the files inside the staging directory will not be removed after the synchronization is finished. The next time you want to copy-and-paste from your active directory, make sure you empty staging first. Also note that you cannot override files directly in the active directory. The changes have to be made inside staging and then synchronized.

Changing the storage location of the active and staging directories

In case you do not want Drupal to store your configuration in sites/default/files, you can set the path according to your wishes. Actually, this is recommended for security reasons, as these directories should never be accessible over the Web or by unauthorized users on your server. Additionally, it makes your life easier if you work with version control. By default, the whole files directory is usually ignored in version-controlled environments because Drupal writes to it, and having the active and staging directories located within sites/default/files would result in them being ignored too.

So how do we change the location of the configuration directories? Before installing Drupal, you will need to create and modify the settings.php file that Drupal uses to load its basic configuration data from (that is, the database connection settings). If you haven't done so yet, copy the default.settings.php file and rename the copy to settings.php. Afterwards, open the new file with the editor of your choice and search for the following line:

$config_directories = array();

Change the preceding line to the following (or simply insert your addition at the bottom of the file):

$config_directories = array(
  CONFIG_ACTIVE_DIRECTORY => './../config/active',   // folder outside the webroot
  CONFIG_STAGING_DIRECTORY => './../config/staging', // folder outside the webroot
);

The directory names can be chosen freely, but it is recommended that you at least use names similar to the default ones so that you or other developers don't get confused when looking at them later. Remember to put these directories outside your webroot, or at least protect them using an .htaccess file (if using Apache as the server). Directly after adding the paths to your settings.php file, make sure you remove write permissions from the file, as it would be a security risk if someone could change it.
Drupal will now use your custom location for its configuration files on installation.

You can also change the location of the configuration directories after installing Drupal. Open up your settings.php file and find the two lines near the end of the file that start with $config_directories. Change their paths to something like this:

$config_directories['active'] = './../config/active';
$config_directories['staging'] = './../config/staging';

This places the directories above your Drupal root. Now that you know about active and staging, let's learn more about the different types of configuration you can create on your own.

Simple configuration versus configuration entities

As soon as you want to start storing your own configuration, you need to understand the differences between simple configuration and configuration entities. Here's a short definition of the two types of configuration used in Drupal.

Simple configuration

This configuration type is easier to implement and is therefore ideal for basic configuration settings that result in Boolean values, integers, or simple strings of text being stored, as well as global variables that are used throughout your site. A good example would be the value of an on/off toggle for a specific feature in your module, or our previously used example of the site name configured by the system module:

name: 'Configuration Management in Drupal 8'

Simple configuration also includes any settings that your module requires in order to operate correctly. For example, JavaScript aggregation has to be either on or off; if this setting doesn't exist, the system module won't be able to determine the appropriate course of action.

Configuration entities

Configuration entities are much more complicated to implement but far more flexible. They are used to store information about objects that users can create and destroy without breaking the code. A good example of configuration entities is an image style provided by the image module. Take a look at the image.style.thumbnail.yml file:

uuid: fe1fba86-862c-49c2-bf00-c5e1f78a0f6c
langcode: en
status: true
dependencies: { }
name: thumbnail
label: 'Thumbnail (100×100)'
effects:
  1cfec298-8620-4749-b100-ccb6c4500779:
    uuid: 1cfec298-8620-4749-b100-ccb6c4500779
    id: image_scale
    weight: 0
    data:
      width: 100
      height: 100
      upscale: false
third_party_settings: { }

This defines a specific style for images, so the system is able to create derivatives of images that a user uploads to the site.

Configuration entities also come with a complete set of create, read, update, and delete (CRUD) hooks that are fired just like those of any other entity in Drupal, making them an ideal candidate for configuration that might need to be manipulated or responded to by other modules. As an example, the Views module uses configuration entities to allow for a scenario where, at runtime, hooks are fired that allow any other module to provide configuration (in this case, custom views) to the Views module.

Summary

In this article, you learned how configuration is stored and briefly got to know the two different types of configuration.

Implementing a WCF Service in the Real World

Packt
09 Jun 2010
18 min read
WCF is the acronym for Windows Communication Foundation. It is Microsoft's latest technology that enables applications in a distributed environment to communicate with each other. In this article by Mike Liu, the author of WCF 4.0 Multi-tier Services Development with LINQ to Entities, we will create and test a WCF service by following these steps:

- Create the project using a WCF Service Library template
- Create the project using a WCF Service Application template
- Create the service operation contracts
- Create the data contracts
- Add a Product entity project
- Add a business logic layer project
- Call the business logic layer from the service interface layer
- Test the service

In this article, we will also learn how to separate the service interface layer from the business logic layer.

Why layer a service?

An important aspect of SOA design is that service boundaries should be explicit, which means hiding all the details of the implementation behind the service boundary. This includes not revealing or dictating what particular technology was used. Furthermore, inside the implementation of a service, the code responsible for data manipulation should be separated from the code responsible for business logic. So, in the real world, it is always good practice to implement a WCF service in three or more layers: the service interface layer, the business logic layer, and the data access layer.

- Service interface layer: This layer includes the service contracts and operation contracts that define the service interfaces exposed at the service boundary. Data contracts are also defined here to pass data in and out of the service. If any exception is expected to be thrown outside of the service, fault contracts will also be defined at this layer.
- Business logic layer: This layer applies the actual business logic to the service operations. It checks the preconditions of each operation, performs business activities, and returns any necessary results to the caller of the service.
- Data access layer: This layer takes care of all the tasks needed to access the underlying databases. It uses a specific data adapter to query and update the databases, and handles connections to databases, transaction processing, and concurrency control. Neither the service interface layer nor the business logic layer needs to worry about these things.

Layering provides separation of concerns and better factoring of code, which gives you better maintainability and the ability to split layers out into separate physical tiers for scalability. The data access code should be separated into its own layer that focuses on performing translation services between the databases and the application domain. Services should be placed in a separate service layer that focuses on performing translation services between the service-oriented external world and the application domain.

The service interface layer will be compiled into a separate class assembly and hosted in a service host environment. The outside world will only know about and have access to this layer. Whenever a request is received by the service interface layer, the request will be dispatched to the business logic layer, and the business logic layer will get the actual work done. If any database support is needed by the business logic layer, it will always go through the data access layer.
Creating a new solution and project using WCF templates

We need to create a new solution for this example and add a new WCF project to it. This time, we will use the built-in Visual Studio WCF templates for the new project.

Using the C# WCF service library template

There are a few built-in WCF service templates within Visual Studio 2010; two of them are the WCF Service Library template and the WCF Service Application template. In this article, we will use the service library template. Follow these steps to create the RealNorthwind solution and the project using the service library template:

1. Start Visual Studio 2010, select the menu option File | New | Project…, and you will see the New Project dialog box. From this point onwards, we will create a completely new solution and save it in a different location.
2. In the New Project window, specify Visual C# | WCF | WCF Service Library as the project template, RealNorthwindService as the (project) name, and RealNorthwind as the solution name. Make sure that the checkbox Create directory for solution is selected.
3. Click on the OK button, and the solution is created with a WCF project inside it. The project already has an IService1.cs file to define a service interface and a Service1.cs file to implement the service. It also has an app.config file, which we will cover shortly.

Using the C# WCF service application template

Instead of using the WCF Service Library template to create our new WCF project, we can use the WCF Service Application template. Because we have already created the solution, we will add a new project using the WCF Service Application template:

1. Right-click on the solution item in Solution Explorer, select the menu option Add | New Project… from the context menu, and you will see the Add New Project dialog box.
2. In the Add New Project window, specify Visual C# | WCF Service Application as the project template, RealNorthwindService2 as the (project) name, and leave the default location of C:\SOAWithWCFandLINQ\Projects\RealNorthwind unchanged.
3. Click on the OK button and the new project will be added to the solution. The project already has an IService1.cs file to define a service interface, and Service1.svc.cs to implement the service. It also has a Service1.svc file and a web.config file, which are used to host the new WCF service, and the necessary references, such as System.ServiceModel, have already been added to the project.

You can follow these steps to test this service:

1. Change this new project, RealNorthwindService2, to be the startup project (right-click on it in Solution Explorer and select Set as Startup Project). Then run it (Ctrl + F5 or F5).
2. You will see that the ASP.NET Development Server has started, and a browser opens listing all of the files under the RealNorthwindService2 project folder. Clicking on the Service1.svc file will open the metadata page of the WCF service in this project.
3. If you pressed F5 in the previous step to run this project, you might see a warning message box asking whether you want to enable debugging for the WCF service. As we said earlier, you can choose to enable debugging or just run in non-debugging mode.

You may also have noticed that the WCF Service Host is started together with the ASP.NET Development Server. This is actually another way of hosting a WCF service in Visual Studio 2010.
It has been started at this point because, within the same solution, there is a WCF service project (RealNorthwindService) created using the WCF Service Library template.

So far, we have used two different Visual Studio WCF templates to create two projects. The first project, using the C# WCF Service Library template, is the more sophisticated one, because it is actually an application containing a WCF service, a hosting application (WcfSvcHost), and a WCF Test Client. This means that we don't need to write any other code to host it, and as soon as we have implemented a service, we can use the built-in WCF Test Client to invoke it. This makes it very convenient for WCF development. The second project, using the C# WCF Service Application template, is actually a website. This website is the hosting application of the WCF service, so you don't have to create a separate hosting application for the WCF service. As we have already covered them and you now have a solid understanding of these styles, we will not discuss them further. But keep in mind that you have this option, although in most cases it is better to keep the WCF service as clean as possible, without any hosting functionality attached to it.

To focus on the WCF service built with the WCF Service Library template, we now need to remove the RealNorthwindService2 project from the solution. In Solution Explorer, right-click on the RealNorthwindService2 project item and select Remove from the context menu. You will then see a warning message box. Click on the OK button in this message box and the RealNorthwindService2 project will be removed from the solution. Note that all the files of this project are still on your hard drive; you will need to delete them using Windows Explorer.

Creating the service interface layer

In this section, we will create the service interface layer contracts. Because two sample files have already been created for us, we will try to reuse them as much as possible, and start by customizing these two files to create the service contracts.

Creating the service interfaces

To create the service interfaces, we need to open the IService1.cs file and do the following:

1. Change its namespace from RealNorthwindService to MyWCFServices.RealNorthwindService.
2. Change the interface name from IService1 to IProductService. Don't worry if you see a warning message before the interface definition line, as we will change the app.config file in one of the following steps.
3. Change the first operation contract definition from this line:

string GetData(int value);

to this line:

Product GetProduct(int id);

4. Change the second operation contract definition from this line:

CompositeType GetDataUsingDataContract(CompositeType composite);

to this line:

bool UpdateProduct(Product product);

5. Change the filename from IService1.cs to IProductService.cs.

With these changes, we have defined two service contracts. The first one can be used to get the product details for a specific product ID, while the second one can be used to update a specific product. The product type, which we used to define these service contracts, has not been defined yet.
The content of the service interface for RealNorthwindService.ProductService should now look like this:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Runtime.Serialization;
using System.ServiceModel;
using System.Text;

namespace MyWCFServices.RealNorthwindService
{
    [ServiceContract]
    public interface IProductService
    {
        [OperationContract]
        Product GetProduct(int id);

        [OperationContract]
        bool UpdateProduct(Product product);

        // TODO: Add your service operations here
    }
}

This is not the whole content of the IProductService.cs file. The bottom part of this file should still have the CompositeType class.

Creating the data contracts

Another important aspect of SOA design is that you shouldn't assume that the consuming application supports a complex object model. One part of the service boundary definition is the data contract definition for the complex types that will be passed as operation parameters or return values. For maximum interoperability and alignment with SOA principles, you should not pass any .NET-specific types, such as DataSet or Exception, across the service boundary. You should stick to fairly simple data structure objects, such as classes with properties and backing member fields. You can pass objects that have nested complex types, such as a Customer with an Order collection. However, you shouldn't make any assumption about the consumer being able to support object-oriented constructs such as inheritance or base classes for interoperable web services.

In our example, we will create a complex data type to represent a product object. This data contract will have five properties: ProductID, ProductName, QuantityPerUnit, UnitPrice, and Discontinued. These will be used to communicate with client applications. For example, a supplier may call the web service to update the price of a particular product or to mark a product for discontinuation.

It is preferable to put data contracts in separate files within a separate assembly but, to simplify our example, we will put the data contract in the same file as the service contract. We will modify the file IProductService.cs as follows:

1. Change the DataContract name from CompositeType to Product.
2. Change the fields from the following lines:

bool boolValue = true;
string stringValue = "Hello ";

to these five lines:

int productID;
string productName;
string quantityPerUnit;
decimal unitPrice;
bool discontinued;

3. Delete the old boolValue and stringValue DataMember properties. Then, for each of the above fields, add a DataMember property. For example, for productID, we will have this DataMember property:

[DataMember]
public int ProductID
{
    get { return productID; }
    set { productID = value; }
}

A better way is to take advantage of the automatic property feature of C# and add the following ProductID DataMember without defining the productID field:

[DataMember]
public int ProductID { get; set; }

To save some space, we will use the latter format. So, we need to delete all of those field definitions and add an automatic property for each field, with the first letter capitalized.
The data contract part of the finished service contract file, IProductService.cs, should now look like this:

[DataContract]
public class Product
{
    [DataMember]
    public int ProductID { get; set; }

    [DataMember]
    public string ProductName { get; set; }

    [DataMember]
    public string QuantityPerUnit { get; set; }

    [DataMember]
    public decimal UnitPrice { get; set; }

    [DataMember]
    public bool Discontinued { get; set; }
}

Implementing the service contracts

To implement the two service interfaces that we defined, open the Service1.cs file and do the following:

1. Change its namespace from RealNorthwindService to MyWCFServices.RealNorthwindService.
2. Change the class name from Service1 to ProductService and make it inherit from the IProductService interface instead of IService1. The class definition line should look like this:

public class ProductService : IProductService

3. Delete the GetData and GetDataUsingDataContract methods.
4. Add the following method to get a product:

public Product GetProduct(int id)
{
    // TODO: call business logic layer to retrieve product
    Product product = new Product();
    product.ProductID = id;
    product.ProductName = "fake product name from service layer";
    product.UnitPrice = (decimal)10.0;
    return product;
}

In this method, we created a fake product and returned it to the client. Later, we will remove the hard-coded product from this method and call the business logic layer to get the real product.

5. Add the following method to update a product:

public bool UpdateProduct(Product product)
{
    // TODO: call business logic layer to update product
    if (product.UnitPrice <= 0)
        return false;
    else
        return true;
}

In this method, we don't actually update anything. Instead, we always return true if a valid price is passed in.

6. Change the filename from Service1.cs to ProductService.cs.

The content of the ProductService.cs file should now be like this:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Runtime.Serialization;
using System.ServiceModel;
using System.Text;

namespace MyWCFServices.RealNorthwindService
{
    public class ProductService : IProductService
    {
        public Product GetProduct(int id)
        {
            // TODO: call business logic layer to retrieve product
            Product product = new Product();
            product.ProductID = id;
            product.ProductName = "fake product name from service layer";
            product.UnitPrice = (decimal)10;
            return product;
        }

        public bool UpdateProduct(Product product)
        {
            // TODO: call business logic layer to update product
            if (product.UnitPrice <= 0)
                return false;
            else
                return true;
        }
    }
}

Modifying the app.config file

Because we have changed the service name, we have to make the appropriate changes to the configuration file. Note that if you used the refactor feature of Visual Studio when renaming the service, some of the following tasks may already have been done for you. Follow these steps to change the configuration file:

1. Open the app.config file from Solution Explorer.
2. Change all instances of the string RealNorthwindService, except the one in baseAddress, to MyWCFServices.RealNorthwindService. This is for the namespace change.
3. Change the RealNorthwindService string in baseAddress to MyWCFServices/RealNorthwindService.
4. Change all instances of the string Service1 to ProductService. This is for the actual service name change.
5. Change the service address port from 8732 to 8080. This is to prepare for the client application, which we will create soon.

You can also change Design_Time_Addresses to whatever address you want, or delete the baseAddress part from the service altogether; this can be used to test your service locally.
We will leave it unchanged for our example. The content of the app.config file should now look like this:

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <system.web>
    <compilation debug="true" />
  </system.web>
  <!-- When deploying the service library project, the content of the config file
       must be added to the host's app.config file. System.Configuration does not
       support config files for libraries. -->
  <system.serviceModel>
    <services>
      <service name="MyWCFServices.RealNorthwindService.ProductService">
        <endpoint address="" binding="wsHttpBinding"
                  contract="MyWCFServices.RealNorthwindService.IProductService">
          <identity>
            <dns value="localhost" />
          </identity>
        </endpoint>
        <endpoint address="mex" binding="mexHttpBinding"
                  contract="IMetadataExchange" />
        <host>
          <baseAddresses>
            <add baseAddress="http://localhost:8080/Design_Time_Addresses/MyWCFServices/RealNorthwindService/ProductService/" />
          </baseAddresses>
        </host>
      </service>
    </services>
    <behaviors>
      <serviceBehaviors>
        <behavior>
          <!-- To avoid disclosing metadata information, set the value below to false
               and remove the metadata endpoint above before deployment -->
          <serviceMetadata httpGetEnabled="True"/>
          <!-- To receive exception details in faults for debugging purposes, set the
               value below to true. Set to false before deployment to avoid disclosing
               exception information -->
          <serviceDebug includeExceptionDetailInFaults="False" />
        </behavior>
      </serviceBehaviors>
    </behaviors>
  </system.serviceModel>
</configuration>

Testing the service using WCF Test Client

Because we are using the WCF Service Library template in this example, we are now ready to test this web service. As we pointed out when creating this project, the service will be hosted in the Visual Studio 2010 WCF Service Host environment. To start the service, press F5 or Ctrl + F5. WcfSvcHost will be started, and the WCF Test Client will also open. This is a built-in Visual Studio 2010 test client for WCF Service Library projects.

In order to run the WCF Test Client, you have to log into your machine as a local administrator. You also have to start Visual Studio as an administrator, because we have changed the service port from 8732 to 8080 (port 8732 is pre-registered, but 8080 is not). Again, if you get an Access is denied error, make sure you run Visual Studio as an administrator (under Windows XP, you need to log on as an administrator).

Now, from the WCF Test Client, we can double-click on an operation to test it. First, let us test the GetProduct operation. The message Invoking Service… will be displayed in the status bar as the client tries to connect to the server. It may take a while for this initial connection to be made, as several things need to be done in the background. Once the connection has been established, a channel is created and the client calls the service to perform the requested operation. Once the operation has completed on the server side, the response package is sent back to the client, and the WCF Test Client displays this response in the bottom panel.

If you started the test client in debugging mode (by pressing F5), you can set a breakpoint at a line inside the GetProduct method in the ProductService.cs file, and when the Invoke button is clicked, the breakpoint will be hit so that you can debug the service as we explained earlier. However, here you don't need to attach to the WCF Service Host. Note that the response is always the same, no matter what product ID you use to retrieve the product.
Specifically, the product name is hard-coded. Moreover, from the client response panel, we can see that several properties of the Product object have been assigned default values. Also, because the product ID is an integer value, the WCF Test Client only lets you enter an integer for it. If a non-integer value is entered, when you click on the Invoke button, you will get an error message box warning you that you have entered a value of the wrong type.

Now let's test the UpdateProduct operation. The Request/Response packages are displayed in grids by default, but you have the option of displaying them in XML format. Just select the XML tab at the bottom of the right-side panel, and you will see the XML-formatted Request/Response packages. From these XML strings, you can see that they are SOAP messages.

Besides testing operations, you can also look at the configuration settings of the web service. Just double-click on Config File in the left-side panel and the configuration file will be displayed in the right-side panel. This shows you the bindings for the service, the addresses of the service, and the contract for the service. What you see here is not an exact image of the actual configuration file: it hides some information, such as the debugging mode and service behavior, and includes some additional information on reliable sessions and compression mode.

If you are satisfied with the test results, just close the WCF Test Client, and you will go back to the Visual Studio IDE. Note that as soon as you close the client, the WCF Service Host is stopped. This is different from hosting a service inside the ASP.NET Development Server, where the ASP.NET Development Server stays active even after you close the client.

Animation Effects in ASP.NET using jQuery

Packt
03 May 2011
9 min read
ASP.NET jQuery Cookbook: Over 60 practical recipes for integrating jQuery with ASP.NET

Introduction

Some useful inbuilt jQuery functions that we will explore in this article for achieving animation effects are:

- animate( properties, [ duration ], [ easing ], [ complete ] ): This method allows us to create custom animation effects on any numeric CSS property. The parameters supported by this method are:
  - properties: This is the map of CSS properties to animate, for example, width, height, fontSize, borderWidth, opacity, and so on.
  - duration: This is the duration of the animation in milliseconds. The constants slow and fast can be used to specify the duration, and they represent 600 ms and 200 ms respectively.
  - easing: This is the easing function to use. Easing indicates the speed of the animation at different points during the animation. jQuery provides the inbuilt swing and linear easing functions; plugins can be used if other easing functions are required.
  - complete: This is the callback function invoked on completion of the animation.
- fadeIn( [ duration ], [ callback ] ): This method animates the opacity of the matched elements from 0 to 1, that is, from transparent to opaque. The parameters accepted are the duration of the animation and the callback function on completion of the animation.
- fadeOut( [ duration ], [ callback ] ): This method animates the opacity of the matched elements from 1 to 0, that is, from opaque to transparent. The parameters accepted are the duration of the animation and the callback function on completion of the animation.
- slideUp( [ duration ], [ callback ] ): This method animates the height of the matched elements with an upward sliding motion. When the height of an element reaches 0, its CSS display property is set to none so that the element is hidden on the page. The parameters accepted are the duration of the animation and the callback function on completion of the animation.
- slideDown( [ duration ], [ callback ] ): This method animates the height of the matched elements from 0 to the specified maximum height, so the element appears to slide down on the page. The parameters accepted are the duration of the animation and the callback function on completion of the animation.
- slideToggle( [ duration ], [ callback ] ): This method animates the height of the matched elements. If an element is initially hidden, it will slide down and become completely visible; if it is initially visible, it will slide up and become hidden on the page. The parameters accepted are the duration of the animation and the callback function on completion of the animation.
- jQuery.fx.off: If there is a need to disable animations because of a resource constraint or due to difficulties in viewing the animations, this property can be used to turn off animation completely. This is achieved by setting all animated controls to their final state. A short sketch showing how this and stop() might be wired up follows this list.
- stop( [ clearQueue ], [ jumpToEnd ] ): This method stops the currently running animations on the page. The parameters accepted are:
  - clearQueue: This indicates whether any queued-up animations should be cleared as well. The default value is false.
  - jumpToEnd: This indicates whether the current animation should be completed immediately. The default value is false.
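Neither jQuery.fx.off nor stop() is used by the recipes below, so here is a minimal, hypothetical sketch of how they might be wired to a "disable animations" checkbox and a "stop" button. The #chkDisableAnim, #btnStopAnim, and #panel IDs are assumptions made for this example only and are not controls used in the recipes:

<script language="javascript" type="text/javascript">
    $(document).ready(function () {
        $("#chkDisableAnim").change(function () {
            // Globally turn all jQuery animations off (or back on).
            jQuery.fx.off = $(this).is(":checked");
        });

        $("#btnStopAnim").click(function (e) {
            e.preventDefault();
            // Stop the animation running on #panel, clear any queued
            // animations, and jump straight to the end state.
            $("#panel").stop(true, true);
        });
    });
</script>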
In this article, we will cover some of the animation effects that can be achieved in ASP.NET using the capabilities of jQuery.

Getting started

Let's start by creating a new ASP.NET website in Visual Studio and naming it Chapter5. Save the jQuery library in a script folder js in the project. To enable jQuery on any web form, drag and drop the library onto the page to add the following:

<script src="js/jquery-1.4.1.js" type="text/javascript"></script>

Now let's move on to the recipes, where we will see different animation techniques using jQuery.

Enlarging text on hover

In this recipe, we will animate the font size of text content on hover.

Getting ready

Add a new web form, Recipe1.aspx, to the current project. Create a CSS class for the text content that we want to animate. The font size specified in the CSS class is the original font size of the text before any animation is applied to it:

.enlarge
{
    font-size: 12.5px;
    font-family: Arial, sans-serif;
}

Add an ASP.NET Label control to the form and set its CssClass to the preceding style:

<asp:Label CssClass="enlarge" runat="server">Lorem ipsum dolor sit ...............</asp:Label>

Thus, the ASPX markup of the form is as follows:

<form id="form1" runat="server">
  <div align="center">
    Mouseover to enlarge text:<br />
    <fieldset id="content" style="width:500px;height:300px;">
      <asp:Label CssClass="enlarge" runat="server">Lorem ipsum dolor sit amet, consectetuer adipiscing elit, sed diam nonummy nibh euismod tincidunt ut laoreet dolore magna aliquam erat volutpat.</asp:Label>
    </fieldset>
  </div>
</form>

Initially, the page displays the Label control at its original size. We will now animate the font size of the Label on hover over the containing fieldset element.

How to do it…

1. In the document.ready() function of the jQuery script block, retrieve the original font size of the Label:

var origFontSize = parseFloat($(".enlarge").css('font-size'));

The parseFloat() function takes an input string and returns the first floating point value in the string, discarding any content after it. For example, if the css property returns 12.5px, the function discards the px.

2. Define the hover event of the containing fieldset element:

$("#content").hover(

3. In the mouseenter event handler of the hover method, update the cursor style to pointer:

function() {
    $(".enlarge").css("cursor", "pointer");

4. Calculate the maximum font size that we want to animate to. In this example, we will set the maximum size to three times the original:

    var newFontSize = origFontSize * 3;

5. Animate the fontSize css property of the Label over 300 ms:

    $(".enlarge").animate({ fontSize: newFontSize }, 300);
},

6. In the mouseleave event handler of the hover method, animate the fontSize back to the original value over 300 ms, as shown:

function() {
    $(".enlarge").animate({ fontSize: origFontSize }, 300);
}
);

Thus, the complete jQuery solution is as follows:

<script language="javascript" type="text/javascript">
    $(document).ready(function() {
        var origFontSize = parseFloat($(".enlarge").css('font-size'));
        $("#content").hover(
            function() {
                $(".enlarge").css("cursor", "pointer");
                var newFontSize = origFontSize * 3;
                $(".enlarge").animate({ fontSize: newFontSize }, 300);
            },
            function() {
                $(".enlarge").animate({ fontSize: origFontSize }, 300);
            }
        );
    });
</script>

How it works…

Run the web form and mouse over the fieldset area.
The text size will animate over the stated duration and change to the maximum specified font size as displayed in the following screenshot: On removing the mouse from the fieldset area, the text size will return back to the original.

Creating a fade effect on hover

In this recipe, we will create a fade effect on an ASP.NET Image control on hover. We will use the fadeIn and fadeOut methods to achieve the same.

Getting ready

Add a new web form Recipe2.aspx to the current project. Add an image control to the form:

<asp:Image src="images/Image1.jpg" ID="Image1" runat="server" />

Define the properties of the image in the css:

#Image1 {
    width:438px;
    height:336px;
}

Thus, the complete ASPX markup of the web form is as follows:

<form id="form1" runat="server">
    <div align="center">
        Mouseover on the image to view fade effect:
        <fieldset id="content" style="width:480px;height:370px;">
            <br />
            <asp:Image src="images/Image1.jpg" ID="Image1" runat="server" />
        </fieldset>
    </div>
</form>

On page load, the image is displayed as follows: We will now create a fade effect on the image on hover on the containing fieldset area.

How to do it…

In the document.ready() function of the jQuery script block, define the hover event on the containing fieldset area:

$("#content").hover(

In the mouseenter event handler of the hover method, update the cursor to pointer:

function() {
    $("#Image1").css("cursor", "pointer");

Apply the fadeOut method on the Image control with an animation duration of 1000 ms:

$("#Image1").fadeOut(1000);
},

In the mouseleave event handler of the hover method, apply the fadeIn method on the Image control with an animation duration of 1000 ms:

function() {
    $("#Image1").fadeIn(1000);
}
);

Thus, the complete jQuery solution is as follows:

<script language="javascript" type="text/javascript">
$(document).ready(function() {
    $("#content").hover(
        function() {
            $("#Image1").css("cursor", "pointer");
            $("#Image1").fadeOut(1000);
        },
        function() {
            $("#Image1").fadeIn(1000);
        }
    );
});
</script>

How it works...

Run the web page. Mouseover on the Image control on the web page. The image will slowly fade away as shown in the following screenshot: On mouseout from the containing fieldset area, the image reappears.

Sliding elements on a page

In this recipe, we will use the slideUp and slideDown methods for achieving sliding effects on an ASP.NET panel.

Getting ready

Add a new web form Recipe3.aspx in the current project. Add an ASP.NET panel to the page as follows:

<asp:Panel class="slide" runat="server">
    Sliding Panel
</asp:Panel>

The css class for the panel is defined as follows:

.slide {
    font-size:12px;
    font-family:Arial,sans-serif;
    display:none;
    height:100px;
    background-color:#9999FF;
}

Add a button control to trigger the sliding effect on the panel:

<asp:Button ID="btnSubmit" runat="server" Text="Trigger Slide" />

Thus, the complete ASPX markup of the web form is as follows:

<form id="form1" runat="server">
    <div align="center">
        <fieldset style="width:400px;height:150px;">
            <asp:Button ID="btnSubmit" runat="server" Text="Trigger Slide" />
            <br /><br />
            <asp:Panel class="slide" runat="server">
                Sliding Panel
            </asp:Panel>
        </fieldset>
    </div>
</form>

On page load, the page appears as shown in the following screenshot: We will now use jQuery to slide up and slide down the panel.
How to do it…

In the document.ready() function of the jQuery script block, define the click event of the button control:

$("#btnSubmit").click(function(e) {

Prevent default form submission:

e.preventDefault();

Check if the ASP.NET panel control is hidden:

if ($(".slide").is(":hidden"))

The jQuery selector :hidden selects matched elements that are hidden on the page.

If yes, then slide down the panel until its height reaches the maximum (100 px) defined in the css property:

$(".slide").slideDown("slow");

If the panel is initially visible then slide up so that its height slowly reduces until it becomes 0 and the panel disappears from the page:

else
    $(".slide").slideUp("slow");
});

Thus, the complete jQuery solution is as follows:

<script language="javascript" type="text/javascript">
$(document).ready(function() {
    $("#btnSubmit").click(function(e) {
        e.preventDefault();
        if ($(".slide").is(":hidden"))
            $(".slide").slideDown("slow");
        else
            $(".slide").slideUp("slow");
    });
});
</script>
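As a side note, the visibility check combined with the separate slideUp and slideDown calls in this recipe could also be collapsed into a single call using the slideToggle method listed in the introduction. The following is an equivalent sketch of the click handler for the same btnSubmit button and .slide panel:

$("#btnSubmit").click(function(e) {
    e.preventDefault();
    // slideToggle slides the hidden panel down, or slides the visible panel up
    $(".slide").slideToggle("slow");
});

The end result is the same; the main reason to keep the explicit check is if different logic needs to run in each branch.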

Extending Yii

Packt
03 Oct 2016
14 min read
Introduction

In this article by Dmitry Eliseev, the author of the book Yii Application Development Cookbook Third Edition, we will see three Yii extensions—helpers, behaviors, and components. In addition, we will learn how to make your extension reusable and useful for the community and will focus on the many things you should do in order to make your extension as efficient as possible.

(For more resources related to this topic, see here.)

Helpers

There are a lot of built-in framework helpers, like StringHelper in the yii\helpers namespace. It contains sets of helpful static methods for manipulating strings, files, arrays, and other subjects. In many cases, for additional behavior you can create your own helper and put any static functions into one. For example, we will implement a number helper in this recipe.

Getting ready

Create a new yii2-app-basic application by using composer, as described in the official guide at http://www.yiiframework.com/doc-2.0/guide-start-installation.html.

How to do it…

Create the helpers directory in your project and write the NumberHelper class:

<?php
namespace app\helpers;

class NumberHelper
{
    public static function format($value, $decimal = 2)
    {
        return number_format($value, $decimal, '.', ',');
    }
}

Add the actionNumbers method into SiteController:

<?php
...
class SiteController extends Controller
{
    ...
    public function actionNumbers()
    {
        return $this->render('numbers', ['value' => 18878334526.3]);
    }
}

Add the views/site/numbers.php view:

<?php
use app\helpers\NumberHelper;
use yii\helpers\Html;

/* @var $this yii\web\View */
/* @var $value float */

$this->title = 'Numbers';
$this->params['breadcrumbs'][] = $this->title;
?>
<div class="site-numbers">
    <h1><?= Html::encode($this->title) ?></h1>
    <p>
        Raw number:<br />
        <b><?= $value ?></b>
    </p>
    <p>
        Formatted number:<br />
        <b><?= NumberHelper::format($value) ?></b>
    </p>
</div>

Open the action and see this result: In other cases you can specify another count of decimal numbers; for example:

NumberHelper::format($value, 3)

How it works…

Any helper in Yii2 is just a set of functions implemented as static methods in corresponding classes. You can use one to implement any different format of output for manipulations with values of any variable, and for other cases.

Note: Usually, static helpers are light-weight clean functions with a small count of arguments. Avoid putting your business logic and other complicated manipulations into helpers. Use widgets or other components instead of helpers in other cases.

See also

For more information about helpers, refer to http://www.yiiframework.com/doc-2.0/guide-helper-overview.html. And for examples of built-in helpers, see sources in the helpers directory of the framework; refer to https://github.com/yiisoft/yii2/tree/master/framework/helpers.

Creating model behaviors

There are many similar solutions in today's web applications. Leading products such as Google's Gmail are defining nice UI patterns; one of these is soft delete. Instead of a permanent deletion with multiple confirmations, Gmail allows users to immediately mark messages as deleted and then easily undo it. The same behavior can be applied to any object such as blog posts, comments, and so on.

Let's create a behavior that will allow marking models as deleted, restoring models, selecting not yet deleted models, deleted models, and all models. In this recipe we'll follow a test-driven development approach to plan the behavior and test if the implementation is correct.
Getting ready

Create a new yii2-app-basic application by using composer, as described in the official guide at http://www.yiiframework.com/doc-2.0/guide-start-installation.html.

Create two databases for working and for tests. Configure Yii to use the first database in your primary application in config/db.php. Make sure the test application uses a second database in tests/codeception/config/config.php.

Create a new migration:

<?php
use yii\db\Migration;

class m160427_103115_create_post_table extends Migration
{
    public function up()
    {
        $this->createTable('{{%post}}', [
            'id' => $this->primaryKey(),
            'title' => $this->string()->notNull(),
            'content_markdown' => $this->text(),
            'content_html' => $this->text(),
        ]);
    }

    public function down()
    {
        $this->dropTable('{{%post}}');
    }
}

Apply the migration to both working and testing databases:

./yii migrate
tests/codeception/bin/yii migrate

Create a Post model:

<?php
namespace app\models;

use app\behaviors\MarkdownBehavior;
use yii\db\ActiveRecord;

/**
 * @property integer $id
 * @property string $title
 * @property string $content_markdown
 * @property string $content_html
 */
class Post extends ActiveRecord
{
    public static function tableName()
    {
        return '{{%post}}';
    }

    public function rules()
    {
        return [
            [['title'], 'required'],
            [['content_markdown'], 'string'],
            [['title'], 'string', 'max' => 255],
        ];
    }
}

How to do it…

Let's prepare a test environment, starting with defining the fixtures for the Post model. Create the tests/codeception/unit/fixtures/PostFixture.php file:

<?php
namespace app\tests\codeception\unit\fixtures;

use yii\test\ActiveFixture;

class PostFixture extends ActiveFixture
{
    public $modelClass = 'app\models\Post';
    public $dataFile = '@tests/codeception/unit/fixtures/data/post.php';
}

Add a fixture data file in tests/codeception/unit/fixtures/data/post.php:

<?php
return [
    [
        'id' => 1,
        'title' => 'Post 1',
        'content_markdown' => 'Stored *markdown* text 1',
        'content_html' => "<p>Stored <em>markdown</em> text 1</p>\n",
    ],
];

Then, we need to create a test case, tests/codeception/unit/MarkdownBehaviorTest.php:

<?php
namespace app\tests\codeception\unit;

use app\models\Post;
use app\tests\codeception\unit\fixtures\PostFixture;
use yii\codeception\DbTestCase;

class MarkdownBehaviorTest extends DbTestCase
{
    public function testNewModelSave()
    {
        $post = new Post();
        $post->title = 'Title';
        $post->content_markdown = 'New *markdown* text';

        $this->assertTrue($post->save());
        $this->assertEquals("<p>New <em>markdown</em> text</p>\n", $post->content_html);
    }

    public function testExistingModelSave()
    {
        $post = Post::findOne(1);
        $post->content_markdown = 'Other *markdown* text';

        $this->assertTrue($post->save());
        $this->assertEquals("<p>Other <em>markdown</em> text</p>\n", $post->content_html);
    }

    public function fixtures()
    {
        return [
            'posts' => [
                'class' => PostFixture::className(),
            ]
        ];
    }
}

Run unit tests:

codecept run unit MarkdownBehaviorTest

and ensure that the tests have not passed:

Codeception PHP Testing Framework v2.0.9
Powered by PHPUnit 4.8.27 by Sebastian Bergmann and contributors.

Unit Tests (2) ---------------------------------------------------------------------------
Trying to test ... MarkdownBehaviorTest::testNewModelSave        Error
Trying to test ... MarkdownBehaviorTest::testExistingModelSave   Error
---------------------------------------------------------------------------
Time: 289 ms, Memory: 16.75MB

Now we need to implement a behavior, attach it to the model, and make sure the test passes. Create a new directory, behaviors.
Under this directory, create the MarkdownBehavior class:

<?php
namespace app\behaviors;

use yii\base\Behavior;
use yii\base\Event;
use yii\base\InvalidConfigException;
use yii\db\ActiveRecord;
use yii\helpers\Markdown;

class MarkdownBehavior extends Behavior
{
    public $sourceAttribute;
    public $targetAttribute;

    public function init()
    {
        if (empty($this->sourceAttribute) || empty($this->targetAttribute)) {
            throw new InvalidConfigException('Source and target must be set.');
        }
        parent::init();
    }

    public function events()
    {
        return [
            ActiveRecord::EVENT_BEFORE_INSERT => 'onBeforeSave',
            ActiveRecord::EVENT_BEFORE_UPDATE => 'onBeforeSave',
        ];
    }

    public function onBeforeSave(Event $event)
    {
        if ($this->owner->isAttributeChanged($this->sourceAttribute)) {
            $this->processContent();
        }
    }

    private function processContent()
    {
        $model = $this->owner;
        $source = $model->{$this->sourceAttribute};
        $model->{$this->targetAttribute} = Markdown::process($source);
    }
}

Let's attach the behavior to the Post model:

class Post extends ActiveRecord
{
    ...
    public function behaviors()
    {
        return [
            'markdown' => [
                'class' => MarkdownBehavior::className(),
                'sourceAttribute' => 'content_markdown',
                'targetAttribute' => 'content_html',
            ],
        ];
    }
}

Run the test and make sure it passes:

Codeception PHP Testing Framework v2.0.9
Powered by PHPUnit 4.8.27 by Sebastian Bergmann and contributors.

Unit Tests (2) ---------------------------------------------------------------------------
Trying to test ... MarkdownBehaviorTest::testNewModelSave        Ok
Trying to test ... MarkdownBehaviorTest::testExistingModelSave   Ok
---------------------------------------------------------------------------
Time: 329 ms, Memory: 17.00MB

That's it. We've created a reusable behavior and can use it for all future projects by just connecting it to a model.

How it works…

Let's start with the test case. Since we want to use a set of models, we will define fixtures. A fixture set is put into the DB each time a test method is executed. We will prepare unit tests for specifying how the behavior works:

First, we test processing new model content. The behavior must convert Markdown text from a source attribute to HTML and store the result in the target attribute.

Second, we test updated content of an existing model. After changing Markdown content and saving the model, we must get updated HTML content.

Now let's move to the interesting implementation details. In behavior, we can add our own methods that will be mixed into the model that the behavior is attached to. We can also subscribe to our own component events. We are using it to add our own listener:

public function events()
{
    return [
        ActiveRecord::EVENT_BEFORE_INSERT => 'onBeforeSave',
        ActiveRecord::EVENT_BEFORE_UPDATE => 'onBeforeSave',
    ];
}

And now we can implement this listener:

public function onBeforeSave(Event $event)
{
    if ($this->owner->isAttributeChanged($this->sourceAttribute)) {
        $this->processContent();
    }
}

In all methods, we can use the owner property to get the object the behavior is attached to. In general we can attach any behavior to our models, controllers, application, and other components that extend the yii\base\Component class. We can also attach one behavior again and again to a model for the processing of different attributes:

class Post extends ActiveRecord
{
    ...
    public function behaviors()
    {
        return [
            [
                'class' => MarkdownBehavior::className(),
                'sourceAttribute' => 'description_markdown',
                'targetAttribute' => 'description_html',
            ],
            [
                'class' => MarkdownBehavior::className(),
                'sourceAttribute' => 'content_markdown',
                'targetAttribute' => 'content_html',
            ],
        ];
    }
}

Besides, we can also extend the yii\base\AttributeBehavior class, like yii\behaviors\TimestampBehavior, to update specified attributes for any event.

See also

To learn more about behaviors and events, refer to the following pages:

http://www.yiiframework.com/doc-2.0/guide-concept-behaviors.html
http://www.yiiframework.com/doc-2.0/guide-concept-events.html

For more information about Markdown syntax, refer to http://daringfireball.net/projects/markdown/.

Creating components

If you have some code that looks like it can be reused but you don't know if it's a behavior, widget, or something else, it's most probably a component. The component should be inherited from the yii\base\Component class. Later on, the component can be attached to the application and configured using the components section of a configuration file. That's the main benefit compared to using just a plain PHP class. We are also getting behaviors, events, getters, and setters support.

For our example, we'll implement a simple Exchange application component that will be able to get currency rates from the http://fixer.io site, attach them to the application, and use them.

Getting ready

Create a new yii2-app-basic application by using composer, as described in the official guide at http://www.yiiframework.com/doc-2.0/guide-start-installation.html.

How to do it…

To get a currency rate, our component should send an HTTP GET query to a service URL, like http://api.fixer.io/2016-05-14?base=USD. The service must return all supported rates on the nearest working day:

{
    "base":"USD",
    "date":"2016-05-13",
    "rates": {
        "AUD":1.3728,
        "BGN":1.7235,
        ...
        "ZAR":15.168,
        "EUR":0.88121
    }
}

The component should extract the needed currency from the response in a JSON format and return a target rate. Create a components directory in your application structure.
Create the component class example with the following interface:

<?php
namespace app\components;

use yii\base\Component;

class Exchange extends Component
{
    public function getRate($source, $destination, $date = null)
    {
    }
}

Implement the component functionality:

<?php
namespace app\components;

use yii\base\Component;
use yii\base\InvalidConfigException;
use yii\base\InvalidParamException;
use yii\caching\Cache;
use yii\di\Instance;
use yii\helpers\Json;

class Exchange extends Component
{
    /**
     * @var string remote host
     */
    public $host = 'http://api.fixer.io';

    /**
     * @var bool cache results or not
     */
    public $enableCaching = false;

    /**
     * @var string|Cache component ID
     */
    public $cache = 'cache';

    public function init()
    {
        if (empty($this->host)) {
            throw new InvalidConfigException('Host must be set.');
        }
        if ($this->enableCaching) {
            $this->cache = Instance::ensure($this->cache, Cache::className());
        }
        parent::init();
    }

    public function getRate($source, $destination, $date = null)
    {
        $this->validateCurrency($source);
        $this->validateCurrency($destination);
        $date = $this->validateDate($date);
        $cacheKey = $this->generateCacheKey($source, $destination, $date);
        if (!$this->enableCaching || ($result = $this->cache->get($cacheKey)) === false) {
            $result = $this->getRemoteRate($source, $destination, $date);
            if ($this->enableCaching) {
                $this->cache->set($cacheKey, $result);
            }
        }
        return $result;
    }

    private function getRemoteRate($source, $destination, $date)
    {
        $url = $this->host . '/' . $date . '?base=' . $source;
        $response = Json::decode(file_get_contents($url));
        if (!isset($response['rates'][$destination])) {
            throw new \RuntimeException('Rate not found.');
        }
        return $response['rates'][$destination];
    }

    private function validateCurrency($source)
    {
        if (!preg_match('#^[A-Z]{3}$#s', $source)) {
            throw new InvalidParamException('Invalid currency format.');
        }
    }

    private function validateDate($date)
    {
        if (!empty($date) && !preg_match('#\d{4}-\d{2}-\d{2}#s', $date)) {
            throw new InvalidParamException('Invalid date format.');
        }
        if (empty($date)) {
            $date = date('Y-m-d');
        }
        return $date;
    }

    private function generateCacheKey($source, $destination, $date)
    {
        return [__CLASS__, $source, $destination, $date];
    }
}

Attach our component in the config/console.php or config/web.php configuration files:

'components' => [
    'cache' => [
        'class' => 'yii\caching\FileCache',
    ],
    'exchange' => [
        'class' => 'app\components\Exchange',
        'enableCaching' => true,
    ],
    // ...
    'db' => $db,
],

We can now use a new component directly or via a get method:

echo Yii::$app->exchange->getRate('USD', 'EUR');
echo Yii::$app->get('exchange')->getRate('USD', 'EUR', '2014-04-12');

Create a demonstration console controller:

<?php
namespace app\commands;

use Yii;
use yii\console\Controller;

class ExchangeController extends Controller
{
    public function actionTest($currency, $date = null)
    {
        echo Yii::$app->exchange->getRate('USD', $currency, $date) . PHP_EOL;
    }
}

And try to run any commands:

$ ./yii exchange/test EUR
> 0.90196

$ ./yii exchange/test EUR 2015-11-24
> 0.93888

$ ./yii exchange/test OTHER
> Exception 'yii\base\InvalidParamException' with message 'Invalid currency format.'

$ ./yii exchange/test EUR 2015/24/11
> Exception 'yii\base\InvalidParamException' with message 'Invalid date format.'

$ ./yii exchange/test ASD
> Exception 'RuntimeException' with message 'Rate not found.'

As a result you must see rate values in success cases or specific exceptions in error ones. In addition to creating your own components, you can do more.
Overriding existing application components

Most of the time there will be no need to create your own application components, since other types of extensions, such as widgets or behaviors, cover almost all types of reusable code. However, overriding core framework components is a common practice and can be used to customize the framework's behavior for your specific needs without hacking into the core.

For example, to be able to format numbers using the Yii::$app->formatter->asNumber($value) method instead of the NumberHelper::format method from the Helpers recipe, follow the next steps:

Extend the yii\i18n\Formatter component like the following:

<?php
namespace app\components;

class Formatter extends \yii\i18n\Formatter
{
    public function asNumber($value, $decimal = 2)
    {
        return number_format($value, $decimal, '.', ',');
    }
}

Override the class of the built-in formatter component:

'components' => [
    // ...
    'formatter' => [
        'class' => 'app\components\Formatter',
    ],
    // ...
],

Right now, we can use this method directly:

echo Yii::$app->formatter->asNumber(1534635.2, 3);

or as a new format for GridView and DetailView widgets:

<?= \yii\grid\GridView::widget([
    'dataProvider' => $dataProvider,
    'columns' => [
        'id',
        'created_at:datetime',
        'title',
        'value:number',
    ],
]) ?>

You can also extend every existing component without overwriting its source code.

How it works…

To be able to attach a component to an application, it must be extended from the yii\base\Component class. Attaching is as simple as adding a new array to the components section of configuration. There, a class value specifies the component's class and all other values are set to the component through the corresponding component's public properties and setter methods.

The implementation itself is very straightforward; we are wrapping http://api.fixer.io calls into a comfortable API with validators and caching. We can access our class by its component name using Yii::$app. In our case, it will be Yii::$app->exchange.

See also

For official information about components, refer to http://www.yiiframework.com/doc-2.0/guide-concept-components.html. For the NumberHelper class sources, see the Helpers recipe.

Summary

In this article we learnt about the Yii extensions—helpers, behaviors, and components. Helpers contain sets of helpful static methods for manipulating strings, files, arrays, and other subjects. Behaviors allow you to enhance the functionality of an existing component class without needing to change the class's inheritance. Components are the main building blocks of Yii applications. A component is an instance of yii\base\Component or its derived class. Using a component mainly involves accessing its properties and raising/handling its events.

Resources for Article:

Further resources on this subject:

Creating an Extension in Yii 2 [article]
Atmosfall – Managing Game Progress with Coroutines [article]
Optimizing Games for Android [article]

Testing in Node and Hapi

Packt
09 Feb 2016
22 min read
In this article by John Brett, the author of the book Getting Started with Hapi.js, we are going to explore the topic of testing in node and hapi. We will look at what is involved in writing a simple test using hapi's test runner, lab, how to test hapi applications, techniques to make testing easier, and finally how to achieve the all-important 100% code coverage. (For more resources related to this topic, see here.) The benefits and importance of testing code Technical debt is developmental work that must be done before a particular job is complete, or else it will make future changes much harder to implement later on. A codebase without tests is a clear indication of technical debt. Let's explore this statement in more detail. Even very simple applications will generally comprise: Features, which the end user interacts with Shared services, such as authentication and authorization, that features interact with These will all generally depend on some direct persistent storage or API. Finally, to implement most of these features and services, we will use libraries, frameworks, and modules regardless of language. So, even for simpler applications, we have already arrived at a few dependencies to manage, where a change that causes a break in one place could possibly break everything up the chain. So let's take a common use case, in which a new version of one of your dependencies is released. This could be a new hapi version, a smaller library, your persistent storage engine, MySQL, MongoDB, or even an operating system or language version. SemVer, as mentioned previously, attempts to mitigate this somewhat, but you are taking someone at their word when they say that they have adhered to this correctly, and SemVer is not used everywhere. So, in the case of a break-causing change, will the current application work with this new dependency version? What will fail? What percentage of tests fail? What's the risk if we don't upgrade? Will support eventually be dropped, including security patches? Without a good automated test suite, these have to be answered by manual testing, which is a huge waste of developer time. Development progress stops here every time these tasks have to be done, meaning that these types of tasks are rarely done, building further technical debt. Apart from this, humans are proven to be poor at repetitive tasks, prone to error, and I know I personally don't enjoy testing manually, which makes me poor at it. I view repetitive manual testing like this as time wasted, as these questions could easily be answered by running a test suite against the new dependency so that developer time could be spent on something more productive. Now, let's look at a worse and even more common example: a security exploit has been identified in one of your dependencies. As mentioned previously, if it's not easy to update, you won't do it often, so you could be on an outdated version that won't receive this security update. Now you have to jump multiple versions at once and scramble to test them manually. This usually means many quick fixes, which often just cause more bugs. In my experience, code changes under pressure are what deteriorate the structure and readability in a codebase, lead to a much higher number of bugs, and are a clear sign of poor planning. A good development team will, instead of looking at what is currently available, look ahead to what is in beta and will know ahead of time if they expect to run into issues. 
The questions asked will be: Will our application break in the next version of Chrome? What about the next version of node? Hapi does this by running the full test suite against future versions of node in order to alert the node community of how planned changes will impact hapi and the node community as a whole. This is what we should all aim to do as developers. A good test suite has even bigger advantages when working in a team or when adding new developers to a team. Most development teams start out small and grow, meaning all the knowledge of the initial development needs to be passed on to new developers joining the team. So, how do tests lead to a benefit here? For one, tests are a great documentation on how parts of the application work for other members of a team. When trying to communicate a problem in an application, a failing test is a perfect illustration of what and where the problem is. When working as a team, for every code change from yourself or another member of the team, you're faced with the preceding problem of changing a dependency. Do we just test the code that was changed? What about the code that depends on the changed code? Is it going to be manual testing again? If this is the case, how much time in a week would be spent on manual testing versus development? Often, with changes, existing functionality can be broken along with new functionality, which is called regression. Having a good test suite highlights this and makes it much easier to prevent. These are the questions and topics that need to be answered when discussing the importance of tests. Writing tests can also improve code quality. For one, identifying dead code is much easier when you have a good testing suite. If you find that you can only get 90% code coverage, what does the extra 10% do? Is it used at all if it's unreachable? Does it break other parts of the application if removed? Writing tests will often improve your skills in writing easily testable code. Software applications usually grow to be complex pretty quickly—it happens, but we always need to be active in dealing with this, or software complexity will win. A good test suite is one of the best tools we have to tackle this. The preceding is not an exhaustive list on the importance or benefits of writing tests for your code, but hopefully it has convinced you of the importance of having a good testing suite. So, now that we know why we need to write good tests, let's look at hapi's test runner lab and assertion library code and how, along with some tools from hapi, they make the process of writing tests much easier and a more enjoyable experience. Introducing hapi's testing utilities The test runner in the hapi ecosystem is called lab. If you're not familiar with test runners, they are command-line interface tools for you to run your testing suite. Lab was inspired by a similar test tool called mocha, if you are familiar with it, and in fact was initially begun as a fork of the mocha codebase. But, as hapi's needs diverged from the original focus of mocha, lab was born. The assertion library commonly used in the hapi ecosystem is code. An assertion library forms the part of a test that performs the actual checks to judge whether a test case has passed or not, for example, checking that the value of a variable is true after an action has been taken. 
Let's look at our first test script; then, we can take a deeper look at lab and code, how they function under the hood, and some of the differences they have with other commonly used libraries, such as mocha and chai.

Installing lab and code

You can install lab and code the same as any other module on npm:

npm install lab code --save-dev

Note the --save-dev flag added to the install command here. Remember your package.json file, which describes an npm module? This adds the modules to the devDependencies section of your npm module. These are dependencies that are required for the development and testing of a module but are not required for using the module. The reason why these are separated is that when we run npm install in an application codebase, it only installs the dependencies and devDependencies of package.json in that directory. For all the modules installed, only their dependencies are installed, not their development dependencies. This is because we only want to download the dependencies required to run that application; we don't need to download all the development dependencies for every module.

The npm install command installs all the dependencies and devDependencies of package.json in the current working directory, and only the dependencies of the other installed modules, not devDependencies. To install the development dependencies of a particular module, navigate to the root directory of the module and run npm install.

After you have installed lab, you can then run it with the following:

./node_modules/lab/bin/lab test.js

This is quite long to type every time, but fortunately, due to a handy feature of npm called npm scripts, we can shorten it. If you look at package.json generated by npm init in the first chapter, depending on your version of npm, you may see the following (some code removed for brevity):

...
"scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
},
...

Scripts are a list of commands related to the project; they can be for testing purposes, as we will see in this example; to start an application; for build steps; and to start extra servers, among many other options. They offer huge flexibility in how these are combined to manage scripts related to a module or application, and I could spend a chapter, or even a book, on just these, but they are outside the scope of this book, so let's just focus on what is important to us here.

To get a list of available scripts for a module or application, in the module directory, simply run:

$ npm run

To then run the listed scripts, such as test, you can just use:

$ npm run test

As you can see, this gives a very clean API for scripts and the documentation for each of them in the project's package.json. From this point on in this book, all code snippets will use npm scripts to test or run any examples. We should strive to use these in our projects to simplify and document commands related to applications and modules for ourselves and others.

Let's now add the ability to run a test file to our package.json file. This just requires modifying the scripts section to be the following:

...
"scripts": {
    "test": "./node_modules/lab/bin/lab ./test/index.js"
},
...

It is common practice in node to place all tests in a project within the test directory.
A handy addition to note here is that when calling a command with npm run, the bin directory of every module in your node_modules directory is added to PATH when running these scripts, so we can actually shorten this script to:

...
"scripts": {
    "test": "lab ./test/index.js"
},
...

This type of module install is considered to be local, as the dependency is local to the application directory it is being run in. While I believe this is how we should all install our modules, it is worth pointing out that it is also possible to install a module globally. This means that when installing something like lab, it is immediately added to PATH and can be run from anywhere. We do this by adding a -g flag to the install, as follows:

$ npm install lab code -g

This may appear handier than having to add npm scripts or running commands locally outside of an npm script but should be avoided where possible. Often, installing globally requires sudo to run, meaning you are taking a script from the Internet and allowing it to have complete access to your system. Hopefully, the security concerns here are obvious. Other than that, different projects may use different versions of test runners, assertion libraries, or build tools, which can have unknown side effects and cause debugging headaches. The only time I would use globally installed modules is for command-line tools that I may use outside a particular project—for example, a node-based terminal IDE such as slap (https://www.npmjs.com/package/slap) or a process manager such as PM2 (https://www.npmjs.com/package/pm2)—but never with sudo!

Now that we are familiar with installing lab and code and the different ways of running it inside and outside of npm scripts, let's look at writing our first test script and take a more in-depth look at lab and code.

Our First Test Script

Let's take a look at what a simple test script in lab looks like using the code assertion library:

const Code = require('code');                       [1]
const Lab = require('lab');                         [1]
const lab = exports.lab = Lab.script();             [2]

lab.experiment('Testing example', () => {           [3]

    lab.test('fails here', (done) => {              [4]
        Code.expect(false).to.be.true();            [4]
        return done();                              [4]
    });                                             [4]

    lab.test('passes here', (done) => {             [4]
        Code.expect(true).to.be.true();             [4]
        return done();                              [4]
    });                                             [4]
});

This script, even though small, includes a number of new concepts, so let's go through it with reference to the numbers in the preceding code:

[1]: Here, we just include the code and lab modules, as we would any other node module.

[2]: As mentioned before, it is common convention to place all test files within the test directory of a project. However, there may be JavaScript files in there that aren't tests, and therefore should not be tested. To avoid this, we inform lab of which files are test scripts by calling Lab.script() and assigning the value to lab and exports.lab.

[3]: The lab.experiment() function (aliased lab.describe()) is just a way to group tests neatly. In test output, tests will have the experiment string prefixed to the message, for example, "Testing example fails here". This is optional, however.

[4]: These are the actual test cases. Here, we define the name of the test and pass a callback function with the parameter function done(). We see code in action here for managing our assertions. And finally, we call the done() function when finished with our test case.

Things to note here: lab tests are always asynchronous.
In every test, we have to call done() to finish the test; there is no counting of function parameters or checking whether synchronous functions have completed in order to ensure that a test is finished. Although this requires the boilerplate of calling the done() function at the end of every test, it means that all tests, synchronous or asynchronous, have a consistent structure.

In Chai, which was originally used for hapi, some of the assertions such as .ok, .true, and .false use properties instead of functions for assertions, while assertions like .equal() and .above() use functions. This type of inconsistency leads to us easily forgetting that an assertion should be a method call and hence omitting the (). This means that the assertion is never called and the test may pass as a false positive. Code's API is more consistent in that every assertion is a function call. Here is a comparison of the two:

Chai:
expect('hello').to.equal('hello');
expect(foo).to.exist;

Code:
expect('hello').to.equal('hello');
expect('foot').to.exist();

Notice the difference in the second exist() assertion. In Chai, you see the property form of the assertion, while in Code, you see the required function call. Through this, lab can make sure all assertions within a test case are called before done is complete, or it will fail the test.

So let's try running our first test script. As we already updated our package.json script, we can run our test with the following command:

$ npm run test

This will generate the following output:

There are a couple of things to note from this. Tests run are symbolized with a . or an X, depending on whether they pass or not. You can get lab to list the full test titles by adding the -v or --verbose flag to our npm test script command. There are lots of flags to customize the running and output of lab, so I recommend using the full labels for each of these, for example, --verbose and --lint instead of -v and -l, in order to save you the time spent referring back to the documentation each time.

You may have noticed the No global variable leaks detected message at the bottom. Lab assumes that the global object won't be polluted and checks that no extra properties have been added after running tests. Lab can be configured to not run this check or whitelist certain globals. Details of this are in the lab documentation available at https://github.com/hapijs/lab.

Testing approaches

This is one of the many known approaches to building a test suite, as is BDD (Behavior Driven Development), and like most test runners in node, lab is unopinionated about how you structure your tests. Details of how to structure your tests in a BDD style can again be found easily in the lab documentation.

Testing with hapi

As I mentioned before, testing is considered paramount in the hapi ecosystem, with every module in the ecosystem having to maintain 100% code coverage at all times, as with all module dependencies. Fortunately, hapi provides us with some tools to make the testing of hapi apps much easier through a module called Shot, which simulates network requests to a hapi server.
Let's take the example of a Hello World server and write a simple test for it:

const Code = require('code');
const Lab = require('lab');
const Hapi = require('hapi');

const lab = exports.lab = Lab.script();

lab.test('It will return Hello World', (done) => {

    const server = new Hapi.Server();
    server.connection();

    server.route({
        method: 'GET',
        path: '/',
        handler: function (request, reply) {

            return reply('Hello World\n');
        }
    });

    server.inject('/', (res) => {

        Code.expect(res.statusCode).to.equal(200);
        Code.expect(res.result).to.equal('Hello World\n');
        done();
    });
});

Now that we are more familiar with what a test script looks like, most of this will look familiar. However, you may have noticed we never started our hapi server. This means the server was never started and no port assigned, but thanks to the shot module (https://github.com/hapijs/shot), we can still make requests against it using the server.inject API. Not having to start a server means less setup and teardown before and after tests and means that a test suite can run quicker as less resources are required. server.inject can still be used with the same API whether the server has been started or not.

Code coverage

As I mentioned earlier in the article, having 100% code coverage is paramount in the hapi ecosystem and, in my opinion, hugely important for any application to have. Without a code coverage target, writing tests can feel like an empty or unrewarding task where we don't know how many tests are enough or how much of our application or module has been covered. With any task, we should know what our goal is; testing is no different, and this is what code coverage gives us. Even with 100% coverage, things can still go wrong, but it means that at the very least, every line of code has been considered and has at least one test covering it. I've found from working on modules for hapi that trying to achieve 100% code coverage actually gamifies the process of writing tests, making it a more enjoyable experience overall.

Fortunately, lab has code coverage integrated, so we don't need to rely on an extra module to achieve this. It's as simple as adding the --coverage or -c flag to our test script command. Under the hood, lab will then build an abstract syntax tree so it can evaluate which lines are executed, thus producing our coverage, which will be added to the console output when we run tests. The code coverage tool will also highlight which lines are not covered by tests, which is extremely useful in identifying where to focus your testing effort.

It is also possible to enforce a minimum threshold as to the percentage of code coverage required to pass a suite of tests with lab through the --threshold or -t flag followed by an integer. This is used for all the modules in the hapi ecosystem, and all thresholds are set to 100. Having a threshold of 100% for code coverage makes it much easier to manage changes to a codebase. When any update or pull request is submitted, the test suite is run against the changes, so we can know that all tests have passed and all code is covered before we even look at what has been changed in the proposed submission. There are services that even automate this process for us, such as TravisCI (https://travis-ci.org/). It's also worth knowing that the coverage report can be displayed in a number of formats; for a full list of these reporters with explanations, I suggest reading the lab documentation available at https://github.com/hapijs/lab.
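As a rough illustration, enabling coverage checks is just a matter of extending the npm test script from earlier with the flags described above. The snippet below is one possible form of the scripts section of package.json, combining the --coverage, --threshold, and --verbose flags; treat the exact combination as an example rather than a required setup:

...
"scripts": {
    "test": "lab ./test/index.js --coverage --threshold 100 --verbose"
},
...

Running npm run test will then fail the suite whenever overall coverage drops below 100%, in addition to reporting any failing test cases.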
Let's now look at what's involved in getting 100% coverage for our previous example. First of all, we'll move our server code to a separate file, which we will place in the lib folder and call index.js. It's worth noting here that it's good testing practice and also the typical module structure in the hapi ecosystem to place all module code in a folder called lib and the associated tests for each file within lib in a folder called test, preferably with a one-to-one mapping like we have done here, where all the tests for lib/index.js are in test/index.js. When trying to find out how a feature within a module works, the one-to-one mapping makes it much easier to find the associated tests and see examples of it in use.

So, having separated our server from our tests, let's look at what our two files now look like; first, ./lib/index.js:

const Hapi = require('hapi');

const server = new Hapi.Server();
server.connection();

server.route({
    method: 'GET',
    path: '/',
    handler: function (request, reply) {

        return reply('Hello World\n');
    }
});

module.exports = server;

The main change here is that we export our server at the end for another file to acquire and start it if necessary. Our test file at ./test/index.js will now look like this:

const Code = require('code');
const Lab = require('lab');
const server = require('../lib/index.js');

const lab = exports.lab = Lab.script();

lab.test('It will return Hello World', (done) => {

    server.inject('/', (res) => {

        Code.expect(res.statusCode).to.equal(200);
        Code.expect(res.result).to.equal('Hello World\n');
        done();
    });
});

Finally, for us to test our code coverage, we update our npm test script to include the coverage flag --coverage or -c. The final example of this is in the second example of the source code of Chapter 4, Adding Tests and the Importance of 100% Coverage, which is supplied with this book. If you run this, you'll find that we actually already have 100% of the code covered with this one test.

An interesting exercise here would be to find out what versions of hapi this code functions correctly with. At the time of writing, this code was written for hapi version 11.x.x on node.js version 4.0.0. Will it work if run with hapi version 9 or 10? You can test this now by installing an older version with the help of the following command:

$ npm install hapi@10

This will give you an idea of how easy it can be to check whether your codebase works with different versions of libraries. If you have some time, it would be interesting to see how this example runs on different versions of node (Hint: it breaks on any version earlier than 4.0.0).

In this example, we got 100% code coverage with one test. Unfortunately, we are rarely this fortunate; as the complexity of our codebase increases, so does the complexity of our tests, which is where knowledge of writing testable code comes in. This is something that comes with practice by writing tests while writing application or module code.

Linting

Also built into lab is linting support. Linting enforces a code style that is adhered to, which can be specified through an .eslintrc or .jshintrc file. By default, lab will enforce the hapi style guide rules. The idea of linting is that all code will have the same structure, making it much easier to spot bugs and keep code tidy. As JavaScript is a very flexible language, linters are used regularly to forbid bad practices such as global or unused variables. To enable the lab linter, simply add the linter flag to the test command, which is --lint or -L.
I generally stick with the default hapi style guide rules as they are chosen to promote easy-to-read, easily testable code and to forbid many bad practices. However, it's easy to customize the linting rules used; for this, I recommend referring to the lab documentation.

Summary

In this article, we covered testing in node and hapi and how testing and code coverage are paramount in the hapi ecosystem. We saw justification for their need in application development and where they can make us more productive developers. We also introduced the test runner and code assertion libraries lab and code in the ecosystem. We saw justification for their use and also how to use them to write simple tests and how to use the tools provided in lab and hapi to test hapi applications. We also learned about some of the extra features baked into lab, such as code coverage and linting. We looked at how to test the code coverage of an application and get it to 100% and how the hapi ecosystem applies the hapi styleguide to all modules using lab's linting integration.

Resources for Article:

Further resources on this subject:

Welcome to JavaScript in the full stack [article]
A Typical JavaScript Project [article]
An Introduction to Mastering JavaScript Promises and Its Implementation in Angular.js [article]
Downloading PyroCMS and its pre-requisites

Packt
31 Oct 2013
6 min read
(For more resources related to this topic, see here.)

Getting started

PyroCMS, like many other content management systems including WordPress, Typo3, or Drupal, comes with a pre-developed installation process. For PyroCMS, this installation process is easy to use and comes with a number of helpful hints just in case you hit a snag while installing the system. If, for example, your system files don't have the correct permissions profile (writeable versus write-protected), the PyroCMS installer will help you, along with all the other installation details, such as checking for required software and taking care of file permissions.

Before you can install PyroCMS (the version used for examples in this article is 2.2) on a server, there are a number of server requirements that need to be met. If you aren't sure if these requirements have been met, the PyroCMS installer will check to make sure they are available before installation is complete. Following are the software requirements for a server before PyroCMS can be installed:

- HTTP Web Server
- MySQL 5.x or higher
- PHP 5.2.x or higher
- GD2
- cURL

Among these requirements, web developers interested in PyroCMS will be glad to know that it is built on CodeIgniter, a popular MVC patterned PHP framework. I recommend that developers looking to use PyroCMS should also have working knowledge of CodeIgniter and the MVC programming pattern. Learn more about CodeIgniter and see their excellent system documentation online at http://ellislab.com/codeigniter.

CodeIgniter

If you haven't explored the Model-View-Controller (MVC) programming pattern, you'll want to brush up before you start developing for PyroCMS. The primary reason that CodeIgniter is a good framework for a CMS is that it is a well-documented framework that, when leveraged in the way PyroCMS has done, gives developers power over how long a project will take to build and the quality with which it is built. Add-on modules for PyroCMS, for example, follow the MVC method, a programming pattern that saves developers time and keeps their code dry and portable.

Dry and portable programming are two different concepts. Dry is an acronym for "don't repeat yourself" code. Portable code is like "plug-and-play" code—write it once so that it can be shared with other projects and used quickly.

HTTP web server

Out of the PyroCMS software requirements, it is obvious that a good HTTP web server platform will be needed. Luckily, PyroCMS can run on a variety of web server platforms, including the following:

- Abyss Web Server
- Apache 2.x
- Nginx
- Uniform Server
- Zend Community Server

If you are new to web hosting and haven't worked with web hosting software before, or this is your first time installing PyroCMS, I suggest that you use Apache as a HTTP web server. It will be the system for which you will find the most documentation and support online. If you'd prefer to avoid Apache, there is also good support for running PyroCMS on Nginx, another fairly well documented web server platform.

MySQL

Version 5 is the latest major release of MySQL, and it has been in use for quite some time. It is the primary database choice for PyroCMS and is thoroughly supported. You don't need expert level experience with MySQL to run PyroCMS, but you'll need to be familiar with writing SQL queries and building relational databases if you plan to create add-ons for the system. You can learn more about MySQL at http://www.mysql.com.
PHP

Version 5.2 of PHP is no longer the officially supported release of PHP, which is, at the time of this article, Version 5.4. Version 5.2, which has been criticized as being a low server requirement for any CMS, is allowed with PyroCMS because it is the minimum version requirement for CodeIgniter, the framework upon which PyroCMS is built. While future versions of PyroCMS may upgrade this minimum requirement to PHP 5.3 or higher, you can safely use PyroCMS with PHP 5.2. Also, many server operating systems, like SUSE and Ubuntu, install PHP 5.2 by default. You can, of course, upgrade PHP to the latest version without causing harm to your instance of PyroCMS. To help future-proof your installation of PyroCMS, it may be wise to install PHP 5.3 or above, to maximize your readiness for when PyroCMS more strictly adopts features found in PHP 5.3 and 5.4, such as namespacing.

GD2

GD2, a library used in the manipulation and creation of images, is used by PyroCMS to dynamically generate images (where needed) and to crop and resize images used in many PyroCMS modules and add-ons. The image-based support offered by this library is invaluable.

cURL

As described on the cURL project website, cURL is "a command line tool for transferring data with URL syntax" using a large number of methods, including HTTP(S) GET, POST, PUT, and so on. You can learn more about the project and how to use cURL on their website http://curl.haxx.se. If you've never used cURL with PHP, I recommend taking time to learn how to use it, especially if you are thinking about building a web-based API using PyroCMS.

Most popular web hosting companies meet the basic server requirements for PyroCMS.

Downloading PyroCMS

Getting your hands on a copy of PyroCMS is very simple. You can download the system files from one of two locations, the PyroCMS project website and GitHub.

To download PyroCMS from the project website, visit http://www.pyrocms.com and click on the green button labeled Get PyroCMS! This will take you to a download page that gives you the choice between downloading the Community version of PyroCMS and buying the Professional version. If you are new to PyroCMS, you can start with the Community version, currently at Version 2.2.3. The following screenshot shows the download screen:

To download PyroCMS from GitHub, visit https://github.com/pyrocms/pyrocms and click on the button labeled Download ZIP to get the latest Community version of PyroCMS, as shown in the following screenshot:

If you know how to use Git, you can also clone a fresh version of PyroCMS using the following command. A word of warning: cloning PyroCMS from GitHub will usually give you the latest, stable release of the system, but it could include changes not described in this article. Make sure you check out a stable release from PyroCMS's repository.

git clone https://github.com/pyrocms/pyrocms.git

As a side note, if you've never used Git, I recommend taking some time to get started using it. PyroCMS is an open source project hosted in a Git repository on GitHub, which means that the system is open to being improved by any developer looking to contribute to the well-being of the project. It is also very common for PyroCMS developers to host their own add-on projects on GitHub and other online Git repository services.

Summary

In this article, we have covered the pre-requisites for using PyroCMS, and also how to download PyroCMS.
Resources for Article : Further resources on this subject: Kentico CMS 5 Website Development: Managing Site Structure [Article] Kentico CMS 5 Website Development: Workflow Management [Article] Web CMS [Article]

The DHTMLX Grid

Packt
30 Oct 2013
7 min read
(For more resources related to this topic, see here.)

The DHTMLX grid component is one of the more widely used components of the library. It has a vast number of settings and abilities that are so robust we could probably write an entire book on them. But since we have an application to build, we will touch on some of the main methods and get into utilizing it. Some of the cool features that the grid supports are filtering, spanning rows and columns, multiple headers, dynamic scroll loading, paging, inline editing, cookie state, dragging/ordering columns, images, multi-selection, and events. By the end of this article, we will have a functional grid where we will control the editing, viewing, adding, and removing of users.

The grid methods and events

When creating a DHTMLX grid, we first create the object; second, we add all the settings; and then we call a method to initialize it. After the grid is initialized, data can then be added. The order of steps to create a grid is as follows:

- Create the grid object
- Apply settings
- Initialize
- Add data

Now we will go over initializing a grid.

Initialization choices

We can initialize a DHTMLX grid in two ways, similar to the other DHTMLX objects. The first way is to attach it to a DOM element and the second way is to attach it to an existing DHTMLX layout cell or layout. A grid can be constructed by either passing in a JavaScript object with all the settings or built through individual methods.

Initialization on a DOM element

Let's attach the grid to a DOM element. First we must clear the page and add a div element using JavaScript. Type and run the following code line in the developer tools console:

document.body.innerHTML = "<div id='myGridCont'></div>";

We just cleared all of the body tag's content and replaced it with a div tag having the id attribute value of myGridCont. Now, create a grid object on the div tag, add some settings, and initialize it. Type and run the following code in the developer tools console:

var myGrid = new dhtmlXGridObject("myGridCont");
myGrid.setImagePath(config.imagePath);
myGrid.setHeader(["Column1", "Column2", "Column3"]);
myGrid.init();

You should see the page showing just the grid header with three columns. Next, we will create a grid on an existing cell object.

Initialization on a cell object

Refresh the page and add a grid to the appLayout cell. Type and run the following code in the developer tools console:

var myGrid = appLayout.cells("a").attachGrid();
myGrid.setImagePath(config.imagePath);
myGrid.setHeader(["Column1","Column2","Column3"]);
myGrid.init();

You will now see the grid columns just below the toolbar.

Grid methods

Now let's go over some available grid methods. Then we can add rows and call events on this grid. For these exercises we will be using the global appLayout variable. Refresh the page.

attachGrid

We will begin by creating a grid on a cell. The attachGrid method creates and attaches a grid object to a cell. This is the first step in creating a grid. Type and run the following code line in the console:

var myGrid = appLayout.cells("a").attachGrid();

setImagePath

The setImagePath method allows the grid to know where we have the images placed for referencing in the design. We have the application image path set in the config object. Type and run the following code line in the console:

myGrid.setImagePath(config.imagePath);

setHeader

The setHeader method sets the column headers and determines how many headers we will have. The argument is a JavaScript array.
Type and run the following code line in the console:
myGrid.setHeader(["Column1", "Column2", "Column3"]);
setInitWidths
The setInitWidths method sets the initial width of each of the columns. An asterisk (*) is used to set the width automatically. Type and run the following code line in the console:
myGrid.setInitWidths("125,95,*");
setColAlign
The setColAlign method allows us to align each column's content. Type and run the following code line in the console:
myGrid.setColAlign("right,center,left");
init
Up until this point, we haven't seen much going on; it was all happening behind the scenes. To see these changes, the grid must be initialized. Type and run the following code line in the console:
myGrid.init();
Now you see the columns that we provided.
addRow
Now that we have a grid created, let's add a couple of rows and start interacting. The addRow method adds a row to the grid. The parameters are the row ID and the column values. Type and run the following code in the console:
myGrid.addRow(1,["test1","test2","test3"]);
myGrid.addRow(2,["test1","test2","test3"]);
We just created two rows inside the grid.
setColTypes
The setColTypes method sets what type of data each column will contain. The available type options are:
ro (read-only)
ed (editor)
txt (textarea)
ch (checkbox)
ra (radio button)
co (combobox)
Currently, the grid allows inline editing if you double-click on a grid cell. We do not want this for the application, so we will set the column types to read-only. Type and run the following code in the console:
myGrid.setColTypes("ro,ro,ro");
Now the cells are no longer editable inside the grid.
getSelectedRowId
The getSelectedRowId method returns the ID of the selected row. If nothing is selected, it returns null. Type and run the following code line in the console:
myGrid.getSelectedRowId();
clearSelection
The clearSelection method clears all selections in the grid. Type and run the following code line in the console:
myGrid.clearSelection();
Now any previous selections are cleared.
clearAll
The clearAll method removes all the grid rows. Prior to adding more data to the grid, we first must clear it; if not, we will have duplicated data. Type and run the following code line in the console:
myGrid.clearAll();
Now the grid is empty.
parse
The parse method allows data to be loaded into a grid in the form of an XML string, CSV string, XML island, XML object, JSON object, or JavaScript array. We will use the parse method with a JSON object while creating the grid for the application. Here is what the parse method syntax looks like (do not run this in the console):
myGrid.parse(data, "json");
Grid events
The DHTMLX grid component has a vast number of events. You can view them in their entirety in the documentation. We will cover the onRowDblClicked and onRowSelect events.
onRowDblClicked
The onRowDblClicked event is triggered when a grid row is double-clicked. The handler receives the ID of the row that was double-clicked. Type and run the following code in the console:
myGrid.attachEvent("onRowDblClicked", function(rowId){
  console.log(rowId);
});
Double-click one of the rows and the console will log the ID of that row.
onRowSelect
The onRowSelect event triggers upon selection of a row. Type and run the following code in the console:
myGrid.attachEvent("onRowSelect", function(rowId){
  console.log(rowId);
});
Now, when you select a row, the console will log the ID of that row. This can be perceived as a single click.
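Putting these methods together, here is one way the full create–configure–initialize–load flow could look for a simple three-column user grid. This is only a sketch: it assumes the appLayout and config objects from earlier are available, the column names and sample records are invented, and the JSON payload uses the rows/data layout that the grid's parse method commonly expects, so adjust it to your DHTMLX version and real data.
// Create and configure a grid on the layout cell, then load sample data
var userGrid = appLayout.cells("a").attachGrid();
userGrid.setImagePath(config.imagePath);
userGrid.setHeader(["Name", "Email", "Role"]);
userGrid.setInitWidths("150,200,*");
userGrid.setColAlign("left,left,left");
userGrid.setColTypes("ro,ro,ro");
userGrid.init();

// Hypothetical sample data in the JSON layout expected by parse
var sampleUsers = {
  rows: [
    { id: 1, data: ["Jane Doe", "jane@example.com", "Admin"] },
    { id: 2, data: ["John Smith", "john@example.com", "Editor"] }
  ]
};
userGrid.parse(sampleUsers, "json");

// Log the row ID on selection and on double-click
userGrid.attachEvent("onRowSelect", function (rowId) {
  console.log("selected:", rowId);
});
userGrid.attachEvent("onRowDblClicked", function (rowId) {
  console.log("double-clicked:", rowId);
});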
Summary
In this article, we learned about the DHTMLX grid component. We also added the user grid to the application and tested it with the storage and callback methods.
Resources for Article: Further resources on this subject: HTML5 Presentations - creating our initial presentation [Article] HTML5: Generic Containers [Article] HTML5 Canvas [Article]

Creating a Video Streaming Site

Packt
16 Sep 2015
16 min read
 In this article by Rachel McCollin, the author of WordPress 4.0 Site Blueprints Second Edition, you'll learn how to stream video from YouTube to your own video sharing site, meaning that you can add more than just the videos to your site and have complete control over how your videos are shown. We'll create a channel on YouTube and then set up a WordPress site with a theme and plugin to help us stream video from that channel WordPress is the world's most popular Content Management System (CMS) and you can use it to create any kind of site you or your clients need. Using free plugins and themes for WordPress, you can create a store, a social media site, a review site, a video site, a network of sites or a community site, and more. WordPress makes it easy for you to create a site that you can update and add to over time, letting you add posts, pages, and more without having to write code. WordPress makes your job of creating your own website simple and hassle-free! (For more resources related to this topic, see here.) Planning your video streaming site The first step is to plan how you want to use your video site. Ask yourself a few questions: Will I be streaming all my video from YouTube? Will I be uploading any video manually? Will I be streaming from multiple sources? What kind of design do I want? Will I include any other types of content on my site? How will I record and upload my videos? Who is my target audience and how will I reach them? Do I want to make money from my videos? How often will I create videos and what will my recording and editing process be? What software and hardware will I need for recording and editing videos? It's beyond the scope of this article to answer all of these questions, but it's worth taking some time before you start to consider how you're going to be using your video site, what you'll be adding to it, and what your objectives are. Streaming from YouTube or uploading videos direct? WordPress lets you upload your videos directly to your site using the Add Media button, the same button you use to insert images. This can seem like the simplest way of doing things as you only need to work in one place. However, I would strongly recommend using a third-party video service instead, for the following reasons: It saves on storage space in your site. It ensures your videos will play on any device people choose to view your site from. It keeps the formats your video is played in up to date so that you don't have to re-upload them when things change. It can have massive SEO benefits socially if you use YouTube. YouTube is owned by Google and has excellent search engine rankings. You'll find that videos streamed via YouTube get better Google rankings than any videos you upload directly to your site. In this article, the focus will be on creating a YouTube channel and streaming video from it to your website. We'll set things up so that when you add new videos to your channel, they'll be automatically streamed to your site. To do that, we'll use a plugin. Understanding copyright considerations Before you start uploading video to YouTube, you need to understand what you're allowed to add, and how copyright affects your videos. You can find plenty of information on YouTube's copyright rules and processes at https://www.youtube.com/yt/copyright/, but it can quite easily be summarized as this: if you created the video, or it was created by someone who has given you explicit permission to use it and publish it online, then you can upload it. 
If you've recorded a video from the TV or the Web that you didn't make and don't have permission to reproduce (or if you've added copyrighted music to your own videos without permission), then you can't upload it. It may seem tempting to ignore copyright and upload anything you're able to find and record (and you'll find plenty of examples of people who've done just that), but you are running a risk of being prosecuted for copyright infringement and being forced to pay a huge fine. I'd also suggest that if you can create and publish original video content rather than copying someone else's, you'll find an audience of fans for that content, and it will be a much more enjoyable process. If your videos involve screen capture of you using software or playing games, you'll need to check the license for that software or game to be sure that you're entitled to publish video of you interacting with it. Most software and games developers have no problem with this as it provides free advertising for them, but you should check with the software provider and the YouTube copyright advice. Movies and music have stricter rules than games generally do however. If you upload videos containing someone else's video or music content that's copyrighted and you haven't got permission to reproduce, then you will find yourself in violation of YouTube's rules and possibly in legal trouble too. Creating a YouTube channel and uploading videos So, you've planned your channel and you have some videos you want to share with the world. You'll need a YouTube channel so you can upload your videos. Creating your YouTube channel You'll need a YouTube channel in order to do this. Let's create a YouTube channel by following these steps: If you don't already have one, create a Google account for yourself at https://accounts.google.com/SignUp. Head over to YouTube at https://www.youtube.com and sign in. You'll have an account with YouTube because it's part of Google, but you won't have a channel yet. Go to https://www.youtube.com/channel_switcher. Click on the Create a new channel button. Follow the instructions onscreen to create your channel. Customize your channel, uploading images to your profile photo or channel art and adding a description using the About tab. Here's my channel: It can take a while for artwork from Google+ to show up on your channel, so don't worry if you don't see it straight away. Uploading videos The next step is to upload some videos. YouTube accepts videos in the following formats: .MOV .MPEG4 .AVI .WMV .MPEGPS .FLV 3GPP WebM Depending on the video software you've used to record, your video may already be in one of these formats or you may need to export it to one of these and save it before you can upload it. If you're not sure how to convert your file to one of the supported formats, you'll find advice at https://support.google.com/youtube/troubleshooter/2888402 to help you do it. You can also upload videos to YouTube directly from your phone or tablet. On an Android device, you'll need to use the YouTube app, while on an iOS device you can log in to YouTube on the device and upload from the camera app. For detailed instructions and advice for other devices, refer to https://support.google.com/youtube/answer/57407. If you're uploading directly to the YouTube website, simply click on the Upload a video button when viewing your channel and follow the onscreen instructions. 
Make sure you add your video to a playlist by clicking on the +Add to playlist button on the right-hand side while you're setting up the video as this will help you categorize the videos in your site later. Now when you open your channel page and click on the Videos tab, you'll see all the videos you uploaded: When you click on the Playlists tab, you'll see your new playlist: So you now have some videos and a playlist set up in YouTube. It's time to set up your WordPress site for streaming those videos. Installing and configuring the YouTube plugin Now that you have your videos and playlists set up, it's time to add a plugin to your site that will automatically add new videos to your site when you upload them to YouTube. Because I've created a playlist, I'm going to use a category in my site for the playlist and automatically add new videos to that category as posts. If you prefer you can use different channels for each category or you can just use one video category and link your channel to that. The latter is useful if your site will contain other content as well, such as photos or blog posts. Note that you don't need a plugin to stream YouTube videos to your site. You can simply paste the URL for a video into the editing pane when you're creating a post or page in your site, and WordPress will automatically stream the video. You don't even need to add an embed code, just add the YRL. But if you don't want to automate the process of streaming all of the videos in your channel to your site, this plugin will make that process easy. Installing the Automatic YouTube Video Posts plugin The Automatic YouTube Video Posts plugin lets you link your site to any YouTube channel or playlist and automatically adds each new video to your site as a post. Let's start by installing it. I'm working with a fresh WordPress installation but you can also do this on your existing site if that's what you're working with. Follow these steps: In the WordPress admin, go to Plugins | Add New. In the Search box, type Automatic Youtube. The plugins that meet the search criteria will be displayed. Select the Automatic YouTube Video Posts plugin and then install and activate it. For the plugin to work, you'll need to configure its settings and add one or more channels or playlists. Configuring the plugin settings Let's start with the plugin settings screen. You do this via the Youtube Posts menu, which the plugin has added to your admin menu: Go to Youtube Posts | Settings. Edit the settings as follows:     Automatically publish posts: Set this to Yes     Display YouTube video meta: Set this to Yes     Number of words and Video dimensions: Leave these at the default values     Display related videos: Set this to No     Display videos in post lists: Set this to Yes    Import the latest videos every: Set this to 1 hours (note that the updates will happen every hour if someone visits the site, but not if the site isn't visited) Click on the Save changes button. The settings screen will look similar to the following screenshot: Adding a YouTube channel or playlist The next step is to add a YouTube channel and/or playlist so that the plugin will create posts from your videos. I'm going to add the "Dizzy" playlist I created earlier on. But first, I'll create a category for all my videos from that playlist. Creating a category for a playlist Create a category for your playlist in the normal way: In the WordPress admin, go to Posts | Categories. 
Add the category name and slug or description if you want to (if you don't, WordPress will automatically create a slug). Click on the Add New Category button. Adding your channel or playlist to the plugin Now you need to configure the plugin so that it creates posts in the category you've just created. In the WordPress admin, go to Youtube Posts | Channels/Playlists. Click on the Add New button. Add the details of your channel or playlist, as shown in the next screenshot. In my case, the details are as follows:     Name: Dizzy     Channel/playlist: This is the ID of my playlist. To find this, open the playlist in YouTube and then copy the last part of its URL from your browser. The URL for my playlist is   https://www.youtube.com/watch?v=vd128vVQc6Y&list=PLG9W2ELAaa-Wh6sVbQAIB9RtN_1UV49Uv and the playlist ID is after the &list= text, so it's PLG9W2ELAaa-Wh6sVbQAIB9RtN_1UV49Uv. If you want to add a channel, add its unique name.      Type: Select Channel or Playlist; I'm selecting Playlist.      Add videos from this channel/playlist to the following categories: Select the category you just created.      Attribute videos from this channel to what author: Select the author you want to attribute videos to, if your site has more than one author. Finally, click on the Add Channel button. Adding a YouTube playlist Once you click on the Add Channel button, you'll be taken back to the Channels/Playlists screen, where you'll see your playlist or channel added: The newly added playlist If you like, you can add more channels or playlists and more categories. Now go to the Posts listing screen in your WordPress admin, and you'll see that the plugin has created posts for each of the videos in your playlist: Automatically added posts Installing and configuring a suitable theme You'll need a suitable theme in your site to make your videos stand out. I'm going to use the Keratin theme which is grid-based with a right-hand sidebar. A grid-based theme works well as people can see your videos on your home page and category pages. Installing the theme Let's install the theme: Go to Appearance | Themes. Click on the Add New button. In the search box, type Keratin. The theme will be listed. Click on the Install button. When prompted, click on the Activate button. The theme will now be displayed in your admin screen as active: The installed and activated theme Creating a navigation menu Now that you've activated a new theme, you'll need to make sure your navigation menu is configured so that it's in the theme's primary menu slot, or if you haven't created a menu yet, you'll need to create one. Follow these steps: Go to Appearance | Menus. If you don't already have a menu, click on the Create Menu button and name your new menu. Add your home page to the menu along with any category pages you've created by clicking on the Categories metabox on the left-hand side. Once everything is in the right place in your menu, click on the Save Menu button. Your Menus screen will look something similar to this: Now that you have a menu, let's take a look at the site: The live site That's looking good, but I'd like to add some text in the sidebar instead of the default content. Adding a text widget to the sidebar Let's add a text widget with some information about the site: In the WordPress admin, go to Appearance | Widgets. Find the text widget on the left-hand side and drag it into the widget area for the main sidebar. Give the widget a title. Type the following text into the widget's contents: Welcome to this video site. 
To see my videos on YouTube, visit <a href="https://www.youtube.com/channel/UC5NPnKZOjCxhPBLZn_DHOMw">my channel</a>. Replace the link I've added here with a link to your own channel: The Widgets screen with a text widget added Text widgets accept text and HTML. Here we've used HTML to create a link. For more on HTML links, visit http://www.w3schools.com/html/html_links.asp. Alternatively if you'd rather create a widget that gives you an editing pane like the one you use for creating posts, you can install the TinyMCE Widget plugin from https://wordpress.org/plugins/black-studio-tinymce-widget/screenshots/. This gives you a widget that lets you create links and format your text just as you would when creating a post. Now go back to your live site to see how things are looking:The live site with a text widget added It's looking much better! If you click on one of these videos, you're taken to the post for that video: A single post with a video automatically added Your site is now ready. Managing and updating your videos The great thing about using this plugin is that once you've set it up you'll never have to do anything in your website to add new videos. All you need to do is upload them to YouTube and add them to the playlist you've linked to, and they'll automatically be added to your site. If you want to add extra content to the posts holding your videos you can do so. Just edit the posts in the normal way, adding text, images, and anything you want. These will be displayed as well as the videos. If you want to create new playlists in future, you just do this in YouTube and then create a new category on your site and add the playlist in the settings for the plugin, assigning the new channel to the relevant category. You can upload your videos to YouTube in a variety of ways—via the YouTube website or directly from the device or software you use to record and/or edit them. Most phones allow you to sign in to your YouTube account via the video or YouTube app and directly upload videos, and video editing software will often let you do the same. Good luck with your video site, I hope it gets you lots of views! Summary In this article, you learned how to create a WordPress site for streaming video from YouTube. You created a YouTube channel and added videos and playlists to it and then you set up your site to automatically create a new post each time you add a new video, using a plugin. Finally, you installed a suitable theme and configured it, creating categories for your channels and adding these to your navigation menu. Resources for Article: Further resources on this subject: Adding Geographic Capabilities via the GeoPlaces Theme[article] Adding Flash to your WordPress Theme[article] Adding Geographic Capabilities via the GeoPlaces Theme [article]

Breaking into Microservices Architecture

Packt
08 Nov 2016
15 min read
In this article by Narayan Prusty, the author of the book Modern JavaScript Applications, we will see the architecture of server side application development for complex and large applications (applications with huge number of users and large volume of data) shouldn't just involve faster response and providing web services for wide variety of platforms. It should be easy to scale, upgrade, update, test, and deploy. It should also be highly available, allowing the developers write components of the server side application in different programming languages and use different databases. Therefore, this leads the developers who build large and complex applications to switch from the common monolithic architecture to microservices architecture that allows us to do all this easily. As microservices architecture is being widely used in enterprises that build large and complex applications, it's really important to learn how to design and create server side applications using this architecture. In this chapter, we will discuss how to create applications based on microservices architecture with Node.js using the Seneca toolkit. (For more resources related to this topic, see here.) What is monolithic architecture? To understand microservices architecture, it's important to first understand monolithic architecture, which is its opposite. In monolithic architecture, different functional components of the server side application, such as payment processing, account management, push notifications, and other components, all blend together in a single unit. For example, applications are usually divided into three parts. The parts are HTML pages or native UI that run on the user's machine, server side application that runs on the server, and database that also runs on the server. The server side application is responsible for handling HTTP requests, retrieving and storing data in a database, executing algorithms, and so on. If the server side application is a single executable (that is running is a single process) that does all these task, than we say that the server side application is monolithic. This is a common way of building server side applications. Almost every major CMS, web servers, server side frameworks, and so on are built using monolithic architecture. This architecture may seem successful, but problems are likely to arise when your application is large and complex. Demerits of monolithic architecture The following are some of the issues caused by server side applications built using the monolithic architecture. Scaling monolithic architecture As traffic to your server side application increases, you will need to scale your server side application to handle the traffic. In case of monolithic architecture, you can scale the server side application by running the same executable on multiple servers and place the servers behind a load balancer or you can use round robin DNS to distribute the traffic among the servers: In the preceding diagram, all the servers will be running the same server side application. Although scaling is easy, scaling monolithic server side application ends up with scaling all the components rather than the components that require greater resource. Thus, causing unbalanced utilization of resources sometimes, depending on the quantity and types of resources the components need. 
Let's consider some examples to understand the issues caused while scaling monolithic server side applications: Suppose there is a component of server side application that requires a more powerful or special kind of hardware, we cannot simply scale this particular component as all the components are packed together, therefore everything needs to be scaled together. So, to make sure that the component gets enough resources, you need to run the server side application on some more servers with powerful or special hardware, leading to consumption of more resources than actually required. Suppose we have a component that requires to be executed on a specific server operating system that is not free of charge, we cannot simply run this particular component in a non-free operating system as all the components are packed together and therefore, just to execute this specific component, we need to install the non-free operating system in all servers, increasing the cost largely. These are just some examples. There are many more issues that you are likely to come across while scaling a monolithic server side application. So, when we scale monolithic server side applications, the components that don't need more powerful or special kind of resource starts receiving them, therefore deceasing resources for the component that needs them. We can say that scaling monolithic server side application involves scaling all components that are forcing to duplicate everything in the new servers. Writing monolithic server side applications Monolithic server side applications are written in a particular programming language using a particular framework. Enterprises usually have developers who are experts in different programming languages and frameworks to build server side applications; therefore, if they are asked to build a monolithic server side application, then it will be difficult for them to work together. The components of a monolithic server side application can be reused only in the same framework using, which it's built. So, you cannot reuse them for some other kind of project that's built using different technologies. Other issues of monolithic architecture Here are some other issues that developers might face. Depending on the technology that is used to build the monolithic server side application: It may need to be completely rebuild and redeployed for every small change made to it. This is a time-consuming task and makes your application inaccessible for a long time. It may completely fail if any one of the component fails. It's difficult to build a monolithic application to handle failure of specific components and degrade application features accordingly. It may be difficult to find how much resources are each components consuming. It may be difficult to test and debug individual components separately. Microservices architecture to the rescue We saw the problems caused by monolithic architecture. These problems lead developers to switch from monolithic architecture to microservices architecture. In microservices architecture, the server side application is divided into services. A service (or microservice) is a small and independent process that constitutes a particular functionality of the complete server side application. For example, you can have a service for payment processing, another service for account management, and so on; the services need to communicate with each other via network. What do you mean by "small" service? 
You must be wondering how small a service needs to be and how to tell whether a service is small or not? Well, it actually depends on many factors such as the type of application, team management, availability of resources, size of application, and how small you think is small? However, a small service doesn't have to be the one that is written is less lines of code or provides a very basic functionality. A small service can be the one on which a team of developers can work independently, which can be scaled independently to other services, scaling it doesn't cause unbalanced utilization of recourses, and overall they are highly decoupled (independent and unaware) of other services. You don't have to run each service in a different server, that is, you can run multiple services in a single computer. The ratio of server to services depends on different factors. A common factor is the amount and type of resources and technologies required. For example, if a service needs a lot of RAM and CPU time, then it would be better to run it individually on a server. If there are some services that don't need much resources, then you can run them all in a single server together. The following diagram shows an example of the microservices architecture: Here, you can think of Service 1 as the web server with which a browser communicates and other services providing APIs for various functionalities. The web services communicate with other services to get data. Merits of microservices architecture Due to the fact that services are small and independent and communicate via network, it solves many problems that monolithic architecture had. Here are some of the benefits of microservices architecture: As the services communicate via network, they can be written in different programming languages using different frameworks Making a change to a service only requires that particular service to be redeployed instead of all the services, which is a faster procedure It becomes easier to measure how much resources are consumed by each service as each service runs in a different process It becomes easier to test and debug, as you can analyze each service separately Services can be reused by other applications as they interact via network calls Scaling services Apart from the preceding benefits, one of the major benefits of microservices architecture is that you can scale individual services that require scaling instead of all the services, therefore preventing duplication of resources and unbalanced utilization of resources. Suppose we want to scale Service 1 in the preceding diagram. Here is a diagram that shows how it can be scaled: Here, we are running two instances of Service 1 on two different servers kept behind a load balancer, which distributes the traffic between them. All other services run the same way as scaling them wasn't required. If you wanted to scale Service 3, then you can run multiple instances of Service 3 on multiple servers and place them behind a load balancer. Demerits of microservices architecture Although there are a lot of merits of using microservices architecture compared to monolithic architecture, there are some demerits of microservices architecture as well: As the server side application is divided into services, deploying, and optionally, configuring each service separately is cumbersome and a time-consuming task. 
Note that developers often use some sort of automation technology (such as AWS, Docker, and so on) to make deployment somewhat easier; however, to use it, you still need a good level of experience and expertise with that technology. Communication between services is likely to lag as it's done via the network. This sort of server side application is more prone to network security vulnerabilities as services communicate via the network. Writing code for communicating with other services can be harder, that is, you need to make network calls and then parse the data to read it. This also requires more processing. Note that although there are frameworks to build server side applications using microservices that make fetching and parsing of data easier, it still doesn't remove the processing and network wait time. You will surely need some sort of monitoring tool to monitor services as they may go down due to network, hardware, or software failure. Although you may use the monitoring tool only when your application suddenly stops, building the monitoring software or using some sort of monitoring service needs some level of extra experience and expertise. Microservices-based server side applications are slower than monolithic-based server side applications as communication via networks is slower compared to memory. When to use microservices architecture? It may seem like it's difficult to choose between monolithic and microservices architecture, but it's actually not so hard to decide between them. If you are building a server side application using monolithic architecture and you feel that you are unlikely to face any of the monolithic issues that we discussed earlier, then you can stick to monolithic architecture. In the future, if you are facing issues that can be solved using microservices architecture, then you should switch to microservices architecture. If you are switching from a monolithic architecture to microservices architecture, then you don't have to rewrite the complete application; instead, you can convert only the components that are causing issues to services by doing some code refactoring. This sort of server side application, where the main application logic is monolithic but some specific functionality is exposed via services, is called microservices architecture with monolithic core. As issues increase further, you can start converting more components of the monolithic core to services. If you are building a server side application using monolithic architecture and you feel that you are likely to face any of the monolithic issues that we discussed earlier, then you should immediately switch to microservices architecture or microservices architecture with monolithic core, depending on what suits you best. Data management In microservices architecture, each service can have its own database to store data and can also use a centralized database. Some developers don't use a centralized database at all; instead, all services have their own database to store the data. To synchronize the data between the services, the services emit events when their data is changed and other services subscribe to those events and update their data. The problem with this mechanism is that if a service is down, then it may miss some events. There is also going to be a lot of duplicate data, and finally, it is difficult to code this kind of system.
Therefore, it's a good idea to have a centralized database and also let each service maintain its own database if it wants to store something that it doesn't want to share with others. Services should not connect to the centralized database directly; instead, there should be another service, called the database service, that provides APIs to work with the centralized database. This extra layer has many advantages: the underlying schema can be changed without updating and redeploying all the services that depend on the schema, we can add a caching layer without making changes to the services, we can change the type of database without making any changes to the services, and there are many other benefits. We can also have multiple database services if there are multiple schemas, or if there are different types of databases, or for some other reason that benefits the overall architecture and decouples the services. Implementing microservices using Seneca Seneca is a Node.js framework for creating server side applications using microservices architecture with monolithic core. Earlier, we discussed that in microservices architecture, we create a separate service for every component, so you must be wondering what's the point of using a framework for creating services when that can be done by simply writing some code to listen to a port and reply to requests. Well, writing code to make requests, send responses, and parse data requires a lot of time and work, but a framework like Seneca makes all this easy. Also, converting components of the monolithic core to services is a cumbersome task as it requires a lot of code refactoring, but Seneca makes it easy by introducing the concepts of actions and plugins. Finally, services written in any other programming language or framework will be able to communicate with Seneca services. In Seneca, an action represents a particular operation. An action is a function that's identified by an object literal or JSON string called the action's pattern. In Seneca, the operations of a component of the monolithic core are written using actions, which we may later want to move from the monolithic core to a service and expose to other services and the monolithic core via the network. Why actions? You might be wondering what the benefit of using actions instead of functions to write operations is, and how actions make it easy to convert components of the monolithic core to services. Suppose you want to move an operation of the monolithic core that is written using a function to a separate service and expose the function via the network; then you cannot simply copy and paste the function to the new service, instead you need to define a route (if you are using Express). To call the function inside the monolithic core, you will need to write code to make an HTTP request to the service. To call this operation inside the service, you can simply call a function, so there are two different code snippets depending on where you are executing the operation. Therefore, moving operations requires a lot of code refactoring. However, if you had written the preceding operation using a Seneca action, then it would have been really easy to move the operation to a separate service. If the operation is written using an action, and you want to move the operation to a separate service and expose the operation via the network, then you can simply copy and paste the action to the new service. That's it.
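To make this concrete, here is a minimal sketch of how an action could be defined and called, first locally and then moved into its own service. This is only an illustration: the pattern keys and payload (role, cmd, name) and the port number are arbitrary choices for the sketch, not something prescribed by Seneca.
// Inside the monolithic core: define an action and call it locally
var seneca = require('seneca')();

seneca.add({ role: 'user', cmd: 'register' }, function (msg, respond) {
  // Hypothetical operation: pretend to register a user
  respond(null, { ok: true, user: msg.name });
});

seneca.act({ role: 'user', cmd: 'register', name: 'Jane' }, function (err, result) {
  if (err) return console.error(err);
  console.log(result); // { ok: true, user: 'Jane' }
});
To move the operation out, the same seneca.add call is copied into a separate process that exposes it over the network, and the monolithic core points a client at it; the seneca.act call in the core stays exactly the same:
// user-service.js (a separate process)
require('seneca')()
  .add({ role: 'user', cmd: 'register' }, function (msg, respond) {
    respond(null, { ok: true, user: msg.name });
  })
  .listen(10101);

// Back in the monolithic core
var seneca = require('seneca')().client(10101);
seneca.act({ role: 'user', cmd: 'register', name: 'Jane' }, function (err, result) {
  console.log(result);
});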
Obviously, we also need to tell the service to expose the action via the network and tell the monolithic core where to find the action, but all of this requires just a couple of lines of code. A Seneca service exposes actions to other services and to the monolithic core. While making a request to a service, we need to provide a pattern matching the pattern of the action to be called in the service. Why patterns? Patterns make it easy to map a URL to an action, and patterns can overwrite other patterns for specific conditions, which prevents editing of the existing code; editing existing code on a production site is not safe and has many other disadvantages. Seneca also has a concept of plugins. A Seneca plugin is actually a set of actions that can be easily distributed and plugged in to a service or the monolithic core. As our monolithic core becomes larger and more complex, we can convert components to services, that is, move the actions of certain components to services. Summary In this chapter, we saw the difference between monolithic and microservices architecture. Then we discussed what microservices architecture with monolithic core means and its benefits. Finally, we jumped into the Seneca framework for implementing microservices architecture with monolithic core and discussed how to create basic login and registration functionality to demonstrate various features of the Seneca framework and how to use it. In the next chapter, we will create a fully functional e-commerce website using the Seneca and Express frameworks. Resources for Article: Further resources on this subject: Microservices – Brave New World [article] Patterns for Data Processing [article] Domain-Driven Design [article]

App Development Using React Native vs. Android/iOS

Manuel Nakamurakare
03 Mar 2016
6 min read
Until two years ago, I had exclusively done Android native development. I had never developed iOS apps, but that changed last year, when my company decided that I had to learn iOS development. I was super excited at first, but all that excitement started to fade away as I started developing our iOS app and I quickly saw how my productivity was declining. I realized I had to basically re-learn everything I learnt in Android: the framework, the tools, the IDE, etc. I am a person who likes going to meetups, so suddenly I started going to both Android and iOS meetups. I needed to keep up-to-date with the latest features in both platforms. It was very time-consuming and at the same time somewhat frustrating since I was feeling my learning pace was not fast enough. Then, React Native for iOS came out. We didn’t start using it until mid 2015. We started playing around with it and we really liked it. What is React Native? React Native is a technology created by Facebook. It allows developers to use JavaScript in order to create mobile apps in both Android and iOS that look, feel, and are native. A good way to explain how it works is to think of it as a wrapper of native code. There are many components that have been created that are basically wrapping the native iOS or Android functionality. React Native has been gaining a lot of traction since it was released because it has basically changed the game in many ways. Two Ecosystems One reason why mobile development is so difficult and time consuming is the fact that two entirely different ecosystems need to be learned. If you want to develop an iOS app, then you need to learn Swift or Objective-C and Cocoa Touch. If you want to develop Android apps, you need to learn Java and the Android SDK. I have written code in the three languages, Swift, Objective C, and Java. I don’t really want to get into the argument of comparing which of these is better. However, what I can say is that they are different and learning each of them takes a considerable amount of time. A similar thing happens with the frameworks: Cocoa Touch and the Android SDK. Of course, with each of these frameworks, there is also a big bag of other tools such as testing tools, libraries, packages, etc. And we are not even considering that developers need to stay up-to-date with the latest features each ecosystem offers. On the other hand, if you choose to develop on React Native, you will, most of the time, only need to learn one set of tools. It is true that there are many things that you will need to get familiar with: JavaScript, Node, React Native, etc. However, it is only one set of tools to learn. Reusability Reusability is a big thing in software development. Whenever you are able to reuse code that is a good thing. React Native is not meant to be a write once, run everywhere platform. Whenever you want to build an app for them, you have to build a UI that looks and feels native. For this reason, some of the UI code needs to be written according to the platform's best practices and standards. However, there will always be some common UI code that can be shared together with all the logic. Being able to share code has many advantages: better use of human resources, less code to maintain, less chance of bugs, features in both platforms are more likely to be on parity, etc. Learn Once, Write Everywhere As I mentioned before, React Native is not meant to be a write once, run everywhere platform. 
As the Facebook team that created React Native says, the goal is to be a learn once, write everywhere platform. And this totally makes sense. Since all of the code for Android and iOS is written using the same set of tools, it is very easy to imagine having a team of developers building the app for both platforms. This is not something that will usually happen when doing native Android and iOS development because there are very few developers that do both. I can even go farther and say that a team that is developing a web app using React.js will not have a very hard time learning React Native development and start developing mobile apps. Declarative API When you build applications using React Native, your UI is more predictable and easier to understand since it has a declarative API as opposed to an imperative one. The difference between these approaches is that when you have an application that has different states, you usually need to keep track of all the changes in the UI and modify them. This can become a complex and very unpredictable task as your application grows. This is called imperative programming. If you use React Native, which has declarative APIs, you just need to worry about what the current UI state looks like without having to keep track of the older ones. Hot Reloading The usual developer routine when coding is to test changes every time some code has been written. For this to happen, the application needs to be compiled and then installed in either a simulator or a real device. In case of React Native, you don’t, most of the time, need to recompile the app every time you make a change. You just need to refresh the app in the simulator, emulator, or device and that’s it. There is even a feature called Live Reload that will refresh the app automatically every time it detects a change in the code. Isn’t that cool? Open Source React Native is still a very new technology; it was made open source less than a year ago. It is not perfect yet. It still has some bugs, but, overall, I think it is ready to be used in production for most mobile apps. There are still some features that are available in the native frameworks that have not been exposed to React Native but that is not really a big deal. I can tell from experience that it is somewhat easy to do in case you are familiar with native development. Also, since React Native is open source, there is a big community of developers helping to implement more features, fix bugs, and help people. Most of the time, if you are trying to build something that is common in mobile apps, it is very likely that it has already been built. As you can see, I am really bullish on React Native. I miss native Android and iOS development, but I really feel excited to be using React Native these days. I really think React Native is a game-changer in mobile development and I cannot wait until it becomes the to-go platform for mobile development!
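Circling back to the declarative API point above, here is a minimal, hypothetical component sketch; the component name, state shape, and labels are invented for illustration. Rather than imperatively mutating views when something changes, we simply describe what the UI should look like for the current state and let React Native update the native views. (On older React Native versions, React itself was imported from 'react-native' rather than 'react'.)
import React, { Component } from 'react';
import { Text, TouchableOpacity } from 'react-native';

export default class LikeButton extends Component {
  constructor(props) {
    super(props);
    this.state = { liked: false };
  }

  render() {
    // The UI is a function of this.state: no manual view mutation is needed.
    return (
      <TouchableOpacity onPress={() => this.setState({ liked: !this.state.liked })}>
        <Text>{this.state.liked ? 'Liked!' : 'Tap to like'}</Text>
      </TouchableOpacity>
    );
  }
}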

React Dashboard and Visualizing Data

Xavier Bruhiere
26 Nov 2015
8 min read
I spent the last six months working on data analytics and machine learning to feed my curiosity and prepare for my new job. It is a challenging mission and I chose to give up for a while on my current web projects to stay focused. Back then, I was coding a dashboard for an automated trading system, powered by an exciting new framework from Facebook : React. In my opinion, Web Components was the way to go and React seemed gentler with my brain than, say, Polymer. One just needed to carefully design components boundaries, properties and states and bam, you got a reusable piece of web to plug anywhere. Beautiful. This is quite a naive way to put it of course but, for an MVP, it actually kind of worked. Fast forward to last week, I was needing a new dashboard to monitor various metrics from my shiny new infrastructure. Specialized requirements kept me away from a full-fledged solution like InfluxDB and Grafana combo, so I naturally starred at my old code. Well, it turned out I did not reuse a single line of code. Since the last time I spent in web development, new tools, frameworks and methodologies had taken over the world : es6 (and transpilers), isomorphic applications, one-way data flow, hot reloading, module bundler, ... Even starter kits are remarkably complex (at least for me) and I got overwhelmed. But those new toys are also truly empowering and I persevered. In this post, we will learn to leverage them, build the simplest dashboard possible and pave the way toward modern, real-time metrics monitoring. Tooling & Motivations I think the points of so much tooling are productivity and complexity management. New single page applications usually involve a significant number of moving parts : front and backend development, data management, scaling, appealing UX, ... Isomorphic webapps with nodejs and es6 try to harmonize this workflow sharing one readable language across the stack. Node already sells the "javascript everywhere" argument but here, it goes even further, with code that can be executed both on the server and in the browser, indifferently. Team work and reusability are improved, as well as SEO (Search Engine optimization) when rendering HTML on server-side. Yet, applications' codebase can turn into a massive mess and that's where Web Components come handy. Providing clear contracts between modules, a developer is able to focus on subpart of the UI with an explicit definition of its parameters and states. This level of abstraction makes the application much more easy to navigate, maintain and reuse. Working with React gives a sense of clarity with components as Javascript objects. Lifecycle and behavior are explicitly detailed by pre-defined hooks, while properties and states are distinct attributes. We still need to glue all of those components and their dependencies together. That's where npm, Webpack and Gulp join the party. Npm is the de facto package manager for nodejs, and more and more for frontend development. What's more, it can run for you scripts and spare you from using a task runner like Gulp. Webpack, meanwhile, bundles pretty much anything thanks to its loaders. Feed it an entrypoint which require your js, jsx, css, whatever ... and it will transform and package them for the browser. Given the steep learning curve of modern full-stack development, I hope you can see the mean of those tools. Last pieces I would like to introduce for our little project are metrics-graphics and react-sparklines (that I won't actually describe but worth noting for our purpose). 
Both are neat frameworks to visualize data and play nicely with React, as we are going to see now. Graph Component When building components-based interfaces, first things to define are what subpart of the UI those components are. Since we start a spartiate implementation, we are only going to define a Graph. // Graph.jsx // new es6 import syntax import React from 'react'; // graph renderer import MG from 'metrics-graphics'; export default class Graph extends React.Component { // called after the `render` method below componentDidMount () { // use d3 to load data from metrics-graphics samples d3.json('node_modules/metrics-graphics/examples/data/confidence_band.json', function(data) { data = MG.convert.date(data, 'date'); MG.data_graphic({ title: {this.props.title}, data: data, format: 'percentage', width: 600, height: 200, right: 40, target: '#confidence', show_secondary_x_label: false, show_confidence_band: ['l', 'u'], x_extended_ticks: true }); }); } render () { // render the element targeted by the graph return <div id="confidence"></div>; } } This code, a trendy combination of es6 and jsx, defines in the DOM a standalone graph from the json data in confidence_band.json I stole on Mozilla official examples. Now let's actually mount and render the DOM in the main entrypoint of the application (I mentioned above with Webpack). // main.jsx // tell webpack to bundle style along with the javascript import 'metrics-graphics/dist/metricsgraphics.css'; import 'metrics-graphics/examples/css/metricsgraphics-demo.css'; import 'metrics-graphics/examples/css/highlightjs-default.css'; import React from 'react'; import Graph from './components/Graph'; function main() { // it is recommended to not directly render on body var app = document.createElement('div'); document.body.appendChild(app); // key/value pairs are available under `this.props` hash within the component React.render(<Graph title={Keep calm and build a dashboard}/>, app); } main(); Now that we defined in plain javascript the web page, it's time for our tools to take over and actually build it. Build workflow This is mostly a matter of configuration. First, create the following structure. $ tree . ├── app │ ├── components │ │ ├── Graph.jsx │ ├── main.jsx ├── build └── package.json Where package.json is defined like below. { "name": "react-dashboard", "scripts": { "build": "TARGET=build webpack", "dev": "TARGET=dev webpack-dev-server --host 0.0.0.0 --devtool eval-source --progress --colors --hot --inline --history-api-fallback" }, "devDependencies": { "babel-core": "^5.6.18", "babel-loader": "^5.3.2", "css-loader": "^0.15.1", "html-webpack-plugin": "^1.5.2", "node-libs-browser": "^0.5.2", "react-hot-loader": "^1.2.7", "style-loader": "^0.12.3", "webpack": "^1.10.1", "webpack-dev-server": "^1.10.1", "webpack-merge": "^0.1.2" }, "dependencies": { "metrics-graphics": "^2.6.0", "react": "^0.13.3" } } A quick npm install will download every package we need for development and production. Two scripts are even defined to build a static version of the site, or serve a dynamic one that will be updated on file changes detection. This formidable feature becomes essential once tasted. But we have yet to configure Webpack to enjoy it. 
var path = require('path'); var HtmlWebpackPlugin = require('html-webpack-plugin'); var webpack = require('webpack'); var merge = require('webpack-merge'); // discern development server from static build var TARGET = process.env.TARGET; // webpack prefers abolute path var ROOT_PATH = path.resolve(__dirname); // common environments configuration var common = { // input main.js we wrote earlier entry: [path.resolve(ROOT_PATH, 'app/main')], // import requirements with following extensions resolve: { extensions: ['', '.js', '.jsx'] }, // define the single bundle file output by the build output: { path: path.resolve(ROOT_PATH, 'build'), filename: 'bundle.js' }, module: { // also support css loading from main.js loaders: [ { test: /.css$/, loaders: ['style', 'css'] } ] }, plugins: [ // automatically generate a standard index.html to attach on the React app new HtmlWebpackPlugin({ title: 'React Dashboard' }) ] }; // production specific configuration if(TARGET === 'build') { module.exports = merge(common, { module: { // compile es6 jsx to standard es5 loaders: [ { test: /.jsx?$/, loader: 'babel?stage=1', include: path.resolve(ROOT_PATH, 'app') } ] }, // optimize output size plugins: [ new webpack.DefinePlugin({ 'process.env': { // This has effect on the react lib size 'NODE_ENV': JSON.stringify('production') } }), new webpack.optimize.UglifyJsPlugin({ compress: { warnings: false } }) ] }); } // development specific configuration if(TARGET === 'dev') { module.exports = merge(common, { module: { // also transpile javascript, but also use react-hot-loader, to automagically update web page on changes loaders: [ { test: /.jsx?$/, loaders: ['react-hot', 'babel?stage=1'], include: path.resolve(ROOT_PATH, 'app'), }, ], }, }); } Webpack configuration can be hard to swallow at first but, given the huge amount of transformations to operate, this style scales very well. Plus, once setup, the development environment becomes remarkably productive. To convince yourself, run webpack-dev-server and reach localhost:8080/assets/bundle.js in your browser. Tweak the title argument in main.jsx, save the file and watch the browser update itself. We are ready to build new components and extend our modular dashboard. Conclusion We condensed in a few paragraphs a lot of what makes the current web ecosystem effervescent. I strongly encourage the reader to deepen its knowledge on those matters and consider this post as it is : an introduction. Web components, like micro-services, are fun, powerful and bleeding edges. But also complex, fast-moving and unstable. The tooling, especially, is impressive. Spend a hard time to master them and craft something cool ! About the Author Xavier Bruhiere is a Lead Developer at AppTurbo in Paris, where he develops innovative prototypes to support company growth. He is addicted to learning, hacking on intriguing hot techs (both soft and hard), and practicing high intensity sports.

Using React.js without JSX

Richard Feldman
30 Jun 2014
6 min read
React.js was clearly designed with JSX in mind; however, there are plenty of good reasons to use React without it. Using React as a standalone library lets you evaluate the technology without having to spend time learning a new syntax. Some teams—including my own—prefer to have their entire frontend code base in one compile-to-JavaScript language, such as CoffeeScript or TypeScript. Others might find that adding another JavaScript library to their dependencies is no big deal, but adding a compilation step to the build chain is a deal-breaker.

There are two primary drawbacks to eschewing JSX. One is that it makes using React significantly more verbose. The other is that the React docs use JSX everywhere; examples demonstrating vanilla JavaScript are few and far between. Fortunately, both drawbacks are easy to work around.

Translating documentation

The first code sample you see in the React Documentation includes this JSX snippet:

/** @jsx React.DOM */
React.renderComponent(
  <h1>Hello, world!</h1>,
  document.getElementById('example')
);

Suppose we want to see the vanilla JS equivalent. Although the code samples on the React homepage include a helpful Compiled JS tab, the samples in the docs—not to mention React examples you find elsewhere on the Web—will not. Fortunately, React's Live JSX Compiler can help. To translate the above JSX into vanilla JS, simply copy and paste it into the left side of the Live JSX Compiler. The output on the right should look like this:

/** @jsx React.DOM */
React.renderComponent(
  React.DOM.h1(null, "Hello, world!"),
  document.getElementById('example')
);

Pretty similar, right? We can discard the comment, as it only represents a necessary directive in JSX. When writing React in vanilla JS, it's just another comment that will be disregarded as usual.

Take a look at the call to React.renderComponent. Here we have a plain old two-argument function, which takes a React DOM element (in this case, the one returned by React.DOM.h1) as its first argument, and a regular DOM element (in this case, the one returned by document.getElementById('example')) as its second. jQuery users should note that the second argument will not accept jQuery objects, so you will have to extract the underlying DOM element with $("#example")[0] or something similar.

The React.DOM object has a method for every supported tag. In this case we're using h1, but we could just as easily have used h2, div, span, input, a, p, or any other supported tag. The first argument to these methods is optional; it can either be null (as in this case), or an object specifying the element's attributes. This argument is how you specify things like class, ID, and so on. The second argument is either a string, in which case it specifies the object's text content, or a list of child React DOM elements.

Let's put this together with a more advanced example, starting with the vanilla JS:

React.DOM.form({className:"commentForm"},
  React.DOM.input({type:"text", placeholder:"Your name"}),
  React.DOM.input({type:"text", placeholder:"Say something..."}),
  React.DOM.input({type:"submit", value:"Post"})
)

For the most part, the attributes translate as you would expect: type, value, and placeholder do exactly what they would do if used in HTML. The one exception is className, which you use in place of the usual class. The above is equivalent to the following JSX:

/** @jsx React.DOM */
<form className="commentForm">
  <input type="text" placeholder="Your name" />
  <input type="text" placeholder="Say something..." />
  <input type="submit" value="Post" />
</form>

This JSX is a snippet found elsewhere in the React docs, and again you can view its vanilla JS equivalent by pasting it into the Live JSX Compiler. Note that you can include pure JSX here without any surrounding JavaScript code (unlike the JSX playground), but you do need the /** @jsx React.DOM */ comment at the top of the JSX side. Without the comment, the compiler will simply output the JSX you put in.

Simple DSLs to make things concise

Although these two implementations are functionally identical, clearly the JSX version is more concise. How can we make the vanilla JS version less verbose? A very quick improvement is to alias the React.DOM object:

var R = React.DOM;
R.form({className:"commentForm"},
  R.input({type:"text", placeholder:"Your name"}),
  R.input({type:"text", placeholder:"Say something..."}),
  R.input({type:"submit", value:"Post"}))

You can take it even further with a tiny bit of DSL:

var R = React.DOM;
var form = R.form;
var input = R.input;

form({className:"commentForm"},
  input({type:"text", placeholder:"Your name"}),
  input({type:"text", placeholder:"Say something..."}),
  input({type:"submit", value:"Post"})
)

This is more verbose in terms of lines of code, but if you have a large DOM to set up, the extra up-front declarations can make the rest of the file much nicer to read. In CoffeeScript, a DSL like this can tidy things up even further:

{form, input} = React.DOM

form {className:"commentForm"}, [
  input type: "text", placeholder:"Your name"
  input type:"text", placeholder:"Say something..."
  input type:"submit", value:"Post"
]

Note that in this example, the form's children are passed as an array rather than as a list of extra arguments (which, in CoffeeScript, allows you to omit commas after each line). React DOM element constructors support either approach. (Also note that CoffeeScript coders who don't mind mixing languages can use the coffee-react compiler or set up a custom build chain that allows for inline JSX in CoffeeScript sources instead.)

Takeaways

No matter your particular use case, there are plenty of ways to effectively use React without JSX. Thanks to the Live JSX Compiler's ability to quickly translate documentation code samples, and the ease with which you can set up a simple DSL to reduce verbosity, there really is very little overhead to using React as a JavaScript library like any other.

About the author

Richard Feldman is a functional programmer who specializes in pushing the limits of browser-based UIs. He's built a framework that performantly renders hundreds of thousands of shapes in the HTML5 canvas, a writing web app that functions like a desktop app in the absence of an Internet connection, and much more in between.

URL Shorteners – Designing the TinyURL Clone with Ruby

Packt
16 Aug 2010
12 min read
(For more resources on Ruby, see here.)

We start off with an easy application: a simple yet very useful Internet application, the URL shortener. We will take a quick tour of URL shorteners before jumping into the design of a simple URL shortener, followed by an in-depth discussion of how we clone our own URL shortener, Tinyclone.

All about URL shorteners

Internet applications don't always need to be full of features or cover all aspects of your Internet life to be successful. Sometimes it's OK to be simple and just focus on providing a single feature. It doesn't even need to be earth-shatteringly important—it should be just useful enough for its target users. The archetypical and probably most extreme example of this is the URL shortening application, or URL shortener. This service offers a very simple but surprisingly useful feature: it provides a shorter URL that represents a normally longer URL. When a user goes to the short URL, he will be redirected to the original URL.

For this simple feature, the top three most popular URL shortening services (TinyURL, bit.ly, and is.gd) collectively had about 11 million unique visitors, 110 million page views and a reach of about one percent of the Internet in June 2009. In 2008, the most popular URL shortener at that time, TinyURL, was made one of Time Magazine's Top 50 Best Websites.

The idea to shorten long and unwieldy URLs into shorter, more manageable ones has been around for some time. One of the earlier attempts to make it a public service is Make A Shorter Link (MASL), which appeared around July 2001. MASL did just that, though the usefulness was debatable as the domain name was long and the shortened URL could potentially be longer than the original. However, the pioneering site that popularized this concept (and subsequently bought over MASL and a few other similar sites) is TinyURL. TinyURL was launched in January 2002 by Kevin Gilbertson to help him link directly to newsgroup postings, which frequently had long URLs. It rapidly became one of the most popular URL shorteners around. In 2008, an estimated 100 similar services came into existence in various forms.

URLs, or Uniform Resource Locators, are resource identifiers that specify where identified resources are available and how they can be retrieved. A popular term for a URL is a Web address. Every URL is made up of the following:

<resource type>://<username>:<password>@<domain>:<port>/<file path name>?<query string>#<anchor>

Not all parts of the URL are required by a browser: if the resource type is missing, it is normally assumed to be http, and if the port is missing, it is normally assumed to be 80 (for http). The username, password, query string and anchor components are optional.

Initially, TinyURL and similar types of URL shorteners focused on simply providing a short representative URL to their users. Naturally, the competitive breadth for shortening URLs was, well, rather short. Many chose TinyURL over MASL because TinyURL had a shorter and easier-to-remember domain name (http://tinyurl.com over http://makeashorterlink.com). Subsequent competition over this space intensified and extended to providing various other features, including custom short URLs (TinyURL, bit.ly), analysis of click-through statistics (bit.ly), advertisements (Adjix, Linkbee), preview pages (TinyURL, is.gd) and so on. The explosive growth of Twitter (from June 2008 to June 2009, Twitter grew 1,164%) opened a new chapter for URL shorteners.
Twitter chose a limit of 140 characters for each tweet to accommodate the 160 characters in an SMS message (Twitter was invented as a service for people to use SMS to tell small groups what they are doing). With Twitter's popularity skyrocketing came the need for users to shorten URLs to fit into the 140-character limit. Originally Twitter used TinyURL as its default URL shortener, and this triggered a steep climb in the usage of TinyURL during the early days of Twitter. However, in May 2009, bit.ly replaced TinyURL as Twitter's default URL shortener and the impact was immediate. For the first time in that period, TinyURL recorded a drop in the number of users in May 2009, dropping from 6.1 million to 5.3 million unique users, while bit.ly jumped from 1.8 million to 2.9 million almost overnight.

That's not the end of the story though. In April 2010, during Twitter's Chirp conference, Twitter announced its own URL shortener (twt.tl). As of writing, it is still unclear how the market share will pan out, but it's clear that URL shorteners have good value and everyone is jumping into this market. In December 2009, Google came up with two URL shorteners of its own, goo.gl and youtu.be. Amazon.com (amzn.to), Facebook (fb.me) and Wordpress (wp.me) all have their own URL shorteners as well.

Next, let's do a quick review of why URL shorteners are so popular and why they attract criticism as well. Here's a quick summary of the benefits:

- Create short and easy-to-remember URLs
- Allow passing of links in character-limited services such as Twitter
- Create vanity URLs for marketing purposes
- Allow URLs to be passed along verbally

The most obvious benefit of having a shortened URL is that it's, well, short. A typical example of a URL gone bad is a link to a location in Google Maps:

http://maps.google.com/maps?f=q&source=s_q&hl=en&geocode=&q=singapore+flyer&vps=1&jsv=169c&sll=1.352083,103.819836&sspn=0.68645,1.382904&g=singapore&ie=UTF8&latlng=8354962237652576151&ei=Shh3SsSRDpb4vAPsxLS3BQ&cd=1&usq=Singapore+Flyer

Such URLs are meant to be clicked on, as it is virtually impossible to pass them around verbally. It might be justifiable if the URL is cut and pasted into documents, but sometimes certain applications will truncate parts of the URL while processing it. This makes a long URL difficult to click on and even produces erroneous links. In fact, this was the main motivation in creating most of the earlier URL shorteners—older email clients tend to truncate URLs when they are more than 80 characters long.

Short links are of course crucial in character-limited message passing systems like Twitter, Plurk, and SMS. Passing long URLs is impossible without URL shorteners.

Short URLs are very useful in the case of vanity URLs where, for example, the Google Maps link above could be shortened to http://tinyurl.com/singapore-flyer. Such vanity URLs are useful when passed from one person to another, or even when used in mass marketing. Sticking to the maps theme in our examples, if you want to give a Google Maps link to your restaurant and put it up in catalogs and brochures, you will not want to give the long URL. Instead you would want a nice, descriptive and short URL.

Short URLs are also useful for accessibility. For example, reading out the Google Maps link above is almost impossible, but reading out the TinyURL link (vanity or otherwise) is much easier in comparison. Many popular URL shorteners also provide some form of statistics and analytics on the usage of the links.
This feature allows you to track your short URLs to see how many clicks they received and what kind of patterns can be derived from the clicks. Although the metrics are usually not advanced, they do provide basic usefulness.

On the other hand, URL shorteners have their fair share of criticisms as well. Here is a summary of the bad side of URL shorteners:

- They provide an opportunity for spammers, because they hide the original URLs
- They could be unreliable if you depend on them for redirection
- Undesirable or vulgar short URLs can be created

URL shorteners have security issues. When a URL shortener creates a short URL, it effectively hides the original link, and this provides an opportunity for spammers or other abusers to redirect users to their sites. One relatively mild form of such an attack is 'rickrolling'. Rickrolling uses a classic bait-and-switch trick to redirect users to the Rick Astley music video of Never Gonna Give You Up. For example, you might feel that the URL http://tinyurl.com/singapore-flyer goes to Google Maps, but when you click on it, you might be rickrolled and redirected to that Rick Astley music video instead.

Also, because most short URLs are not customized, it is quite difficult to see if the link is genuine or not just from the URL. Many prominent websites and applications have such concerns, including MySpace, Flickr and even Microsoft Live Messenger, and have at one time or another banned or restricted usage of TinyURL because of this problem. To combat spammers and fraud, URL shortening services have come up with the idea of link previews, which allows users to preview a short URL before it redirects the user to the long URL. For example, TinyURL will show the user the long URL on a preview page and requires the user to explicitly go to the long URL.

Another problem is performance and reliability. When you access a website, your browser goes to a few DNS servers to resolve the address, but the URL shortener adds another layer of indirection. While DNS servers have redundancy and failsafe measures, there is no such assurance from URL shorteners. If the traffic to a particular link becomes too high, will the shortening service provider be able to add more servers to improve performance, or even prevent a meltdown altogether? The problem, of course, lies in over-dependency on the shortening service.

Finally, a negative side effect of random or even customized short URLs is that undesirable, vulgar or embarrassing short URLs can be created. Earlier on, TinyURL's short URLs were predictable and this was exploited, such as embarrassing short URLs that were made to redirect to the White House websites of then U.S. Vice President Dick Cheney and Second Lady Lynne Cheney.

We have just covered significant ground on URL shorteners. If you are a programmer you might be wondering, "Why do I need to know such information? I am really interested in the programming bits; the rest is just fluff to me." Background information on the application we want to clone is very important. It tells us why that application exists in the first place and gives us an idea of its main features (what makes it popular). It also tells us what problems it faces, so that we are aware of them while programming, or can even avoid them altogether. This is important when we come to the design of the application. Finally, it gives us a better appreciation of the application and the motivations and issues faced by the product and technical people behind the application we wish to clone.
Main features

Next, let's list the features of a URL shortener. The intention in this section is to distill the basic features of the application, the features that define the service. The features listed here are the ones that make the application what it is. However, as much as possible we also want to explore some additional features that extend the application and are provided by many of its competitors. Most importantly, the features here are mostly features of the most popular and definitive web application in the category; in this article, this will be TinyURL.

These are the main features of a URL shortener:

- Users can create a short URL that represents a long URL
- Users who visit the short URL will be redirected to the long URL
- Users can preview a short URL to enable them to see what the long URL is
- Users can provide a custom URL to represent the long URL
- Undesirable words are not allowed in the short URL
- Users are able to view various statistics involving the short URL, including the number of clicks and where the clicks come from (optional, not in TinyURL)

URL shorteners are simple web applications, and the one that we will design and build will also be simple.

Designing the clone

Cloning TinyURL is relatively simple, but there is some thought behind the design of the application. We will be building a clone of TinyURL called Tinyclone, which will be hosted at the domain http://tinyclone.saush.com.

Creating a short URL for each long URL

The domain of the short URL is fixed. What's left is the file path name. We need to represent the long URL with a unique file path name (a key), one for each long URL. This means we need to persist the relationship between the key and the URL.

One of the ways we can associate the long URL with a unique key is to hash the long URL and use the resulting hash as the unique key. However, the resulting hash might be long, and hashing functions could be slow.

The faster and easier way is to use a relational database's auto-incremented row ID as the unique key. The database will help ensure the uniqueness of the ID. However, the running row ID number is base 10. Representing a million URLs would already require 7 characters, and representing 1 billion would take up 10 characters. In order to keep the number of characters small, we will need a larger base numbering system. In this clone we will use base 36, which uses the 26 characters of the alphabet (case insensitive) and the 10 digits. Using this system, we will need only four characters to represent 1 million URLs:

1,000,000 base 36 = lfls

And 1 billion URLs can be represented in just six characters:

1,000,000,000 base 36 = gjdgxs
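Ruby makes this base 36 conversion trivial, because Integer#to_s and String#to_i both accept a base argument. The following is a minimal sketch of the idea; the method names are illustrative only and are not taken from the Tinyclone source:

# Convert an auto-incremented row ID to a short base 36 key and back.
def encode(row_id)
  row_id.to_s(36)    # Integer#to_s(36) yields a lowercase base 36 string
end

def decode(key)
  key.to_i(36)       # String#to_i(36) parses the base 36 string back to an integer
end

puts encode(1_000_000)        # => "lfls"
puts encode(1_000_000_000)    # => "gjdgxs"
puts decode("lfls")           # => 1000000

Because the database guarantees a unique row ID for every long URL it stores, the derived base 36 key is automatically unique as well, with no extra bookkeeping needed.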

Displaying MySQL data on an ASP.NET Web Page

Packt
05 Oct 2009
5 min read
Web enabling business data is one of the key means used to advertise and market products. This can be done with various technologies such as VB, ASP, JSP, ASP.NET and many others. This article shows how you may view data from a table on a MySQL database server on a web page using ASP.NET. The table used in this tutorial is the one described in the first article in this series, on Exporting data from MS Access 2003 to MySQL. This article by Dr. Jay Krishnaswamy explains how to populate a GridView on an ASP.NET web page with data retrieved from a MySQL server. MySql.Data.MySqlClient is a connector (provider) from MySQL that you can use with .NET Framework applications; its details may be reviewed here. MySQL is well integrated with Visual Studio (MySQL Visual Studio Tools: MySQL.VisualStudio.dll).

Overview

We first create an ASP.NET 3.5 Web Site project (even .NET Framework 2.0 should be OK) in Visual Studio 2008. We then drag and drop a GridView ASP.NET control onto the Default.aspx page. We will then use the smart task on the GridView and follow it up to bring data to the GridView. Then we build the web site project and display the data in the GridView on the Default.aspx page.

Create an ASP.NET 3.5 Web Site Project

Launch Visual Studio 2008. Click File | New | Web Site... to open the New Web Site window as shown. Change the default name of the web site to something suitable. Herein it is named WebMySQL as shown.

Drag and drop a GridView Control

From the Toolbox, under Data, find the GridView control. Drag and drop this control onto the Default.aspx page as shown. The GridView is 'unbound' when it is dropped and has a few template columns and the smart tasks menu. The menu is shown in its drop-down state and displays the menu items under 'Choose Data Source'.

Click on the <New Data Source...> item in Choose Data Source. This will bring up the Data Source Configuration wizard as shown. Herein you need to choose a source for the data you are trying to bring into the application to be bound to the GridView control. You have several options here, and for the present article we will be using data from a database. Click on the Database icon as shown in the previous figure. With this you will be specifying an instance of SQLDataSource1 as your source of data. Click OK.

This will take you to the next window shown here. Herein you will try to establish a connection to the data source. In the combo box shown you may see some of the existing connections you have previously established, one of which may initially show up. Herein we will be making a new connection. Click the New Connection... button. This brings up the Add Connection window, which gets displayed with the default data source, Microsoft SQL Server Compact 3.5, as shown.

Connecting to MySQL

Before establishing the connection, make sure that your MySQL server is running. If you have not started it, you may do so as described in the article mentioned earlier (the first article). You can start the server from the command line as shown in the next figure.

Click the Change... button to open the Change Data Source window as shown in the next figure. This window shows a number of data sources, one of which is MySQL Database. Scroll down and highlight MySQL Database as shown and click OK. This will bring you back to the Add Connection window with form controls appropriate for making a connection to a MySQL database.
The server name, user name, and password shown are appropriate to the MySQL server on the local computer; you should enter those appropriate for your installation. You may also test the connection as shown. Click OK after the connection is successful. This adds the connection information to the Configure Data Source wizard. You may expand the connection string item to review the connection string created by your entries. Click Next.

Here you have the option to save the connection string to the application configuration file. This is a recommended practice and hence is shown checked (a sketch of the resulting Web.config entry appears at the end of this article). Click Next. Here you will be selecting the set of columns that you want to bring into your application. It has already chosen the 'employees' table on the MySQL database Testmove. Choose several columns from the list of columns. The SELECT statement is shown at the bottom of the above figure.

If you were to click Next, you would probably face a page which throws an exception: the square brackets [ ] placed around each of the columns are not acceptable to the server. Click on the first option, "Specify a custom SQL Statement or stored procedure", and then click Next. This opens the "Define Custom Statements or Stored Procedures" page with a Query Builder... button. Here you can not only select columns but also specify other data modification operations such as Update, Insert and Delete. For now we will be doing just a selection.
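If you chose to save the connection string earlier, the wizard writes an entry into the site's Web.config file. The entry will look roughly like the sketch below; the connection string name and the credential values are illustrative placeholders, and the exact keys may vary with your version of MySQL Connector/NET:

<connectionStrings>
  <add name="TestmoveConnectionString"
       connectionString="server=localhost;user id=root;password=******;database=testmove"
       providerName="MySql.Data.MySqlClient" />
</connectionStrings>

Keeping the string in Web.config lets the data source control reference it by name rather than embedding the credentials in the page markup, and it gives you a single place to change the server or credentials later.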