
How-To Tutorials - Web Development


Creating attention-grabbing pricing tables

Packt
17 Feb 2014
6 min read
(For more resources related to this topic, see here.)

Let's revisit the mockup of how our client would like the pricing tables to look on desktop-sized screens. Let's see how close we can get to the desired result, and what we can work out for other viewport sizes.

Setting up the variables, files, and markup

As shown in the preceding screenshot, there are a few tables in this design. We can begin by adjusting a few fundamental variables for all tables. These are found in _variables.less. Search for the tables section and adjust the variables for background, accented rows, and borders as desired. I've made these adjustments as shown in the following lines of code:

```less
// Tables
// -------------------------
...
@table-bg:           transparent; // overall background-color
@table-bg-accent:    hsla(0,0,1%,.1); // for striping
@table-bg-hover:     hsla(0,0,1%,.2);
@table-bg-active:    @table-bg-hover;
@table-border-color: #ccc; // table and cell border
```

Save the file, compile it to CSS, and refresh to see the result as shown in the following screenshot.

That's a start. Now we need to write the more specific styles. The _page-contents.less file is now growing long, and the task before us is extensive and highly focused on table styles. To carry the custom styles, let's create a new LESS file for these pricing tables:

1. Create _pricing-tables.less in the main less folder.
2. Import it into _main.less just after _page-contents.less, as shown in the following line:

```less
@import "_pricing-tables.less";
```

3. Open _pricing-tables.less in your editor and begin writing your new styles.

But before we begin writing styles, let's review the markup that we'll be working with. We have the following special classes already provided in the markup on the parent element of each respective table:

- package package-basic
- package package-premium
- package package-pro

Thus, for the first table, you'll see the following markup on its parent div:

```html
<div class="package package-basic col-lg-4">
  <table class="table table-striped">
  ...
```

Similarly, we'll use package package-premium and package package-pro for the second and third table, respectively. These parent containers also provide basic layout instructions, using the col-lg-4 class to set up a three-column layout in large viewports.

Next, we will observe the markup for each table. We see that the basic table and table-striped classes have been applied:

```html
<table class="table table-striped">
```

The table uses the <thead> element for its topmost block. Within this, there is a <th> spanning two columns, with an <h2> heading for the package name and <div class="price"> to mark up the dollar amount:

```html
<thead>
  <tr>
    <th colspan="2">
      <h2>Basic Plan</h2>
      <div class="price">$19</div>
    </th>
  </tr>
</thead>
```

Next is the tfoot tag with the Sign up now! button:

```html
<tfoot>
  <tr>
    <td colspan="2"><a href="#" class="btn">Sign up now!</a></td>
  </tr>
</tfoot>
```

Then comes the tbody tag, with the list of features laid out in straightforward rows of two columns:

```html
<tbody>
  <tr><td>Feature</td><td>Name</td></tr>
  <tr><td>Feature</td><td>Name</td></tr>
  <tr><td>Feature</td><td>Name</td></tr>
  <tr><td>Feature</td><td>Name</td></tr>
  <tr><td>Feature</td><td>Name</td></tr>
</tbody>
```

And finally, of course, the closing tags for the table and parent div:

```html
</table>
</div><!-- /.package .package-basic -->
```

Each table repeats this basic structure. This gives us what we need to start work!
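Putting the snippets together, the full skeleton of the first table would look roughly like the following. This is my assembly of the pieces above, with the repeated feature rows abbreviated:

```html
<!-- Basic-plan table assembled from the snippets above (feature rows abbreviated) -->
<div class="package package-basic col-lg-4">
  <table class="table table-striped">
    <thead>
      <tr>
        <th colspan="2">
          <h2>Basic Plan</h2>
          <div class="price">$19</div>
        </th>
      </tr>
    </thead>
    <tfoot>
      <tr>
        <td colspan="2"><a href="#" class="btn">Sign up now!</a></td>
      </tr>
    </tfoot>
    <tbody>
      <tr><td>Feature</td><td>Name</td></tr>
      <!-- ...more feature rows... -->
    </tbody>
  </table>
</div><!-- /.package .package-basic -->
```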
Beautifying the table head

To beautify the thead element of all of our tables, we'll do the following:

- Align the text at the center
- Add a background color—for now, a gray that is approximately a midtone similar to the colors we'll apply in the final version
- Turn the font color white
- Convert the h2 heading to uppercase
- Increase the size of the price
- Add the necessary padding all around

We can apply many of these touches with the following lines of code. We'll specify the #signup section as the context for these special table styles:

```less
#signup {
  table {
    border: 1px solid @table-border-color;
    thead th {
      text-align: center;
      background-color: @gray-light;
      color: #fff;
      padding-top: 12px;
      padding-bottom: 32px;
      h2 {
        text-transform: uppercase;
      }
    }
  }
}
```

In short, we've accomplished everything except increasing the size of the price. We can get started on this by adding the following lines of code, still nested within our #signup table selector:

```less
.price {
  font-size: 7em;
  line-height: 1;
}
```

This yields a result close to what we want, but we need to decrease the size of the dollar sign. To give ourselves control over that character, let's go to the markup and wrap a span tag around it:

```html
<div class="price"><span>$</span>19</div>
```

Remember to do the same for the other two tables. With this new bit of markup in place, we can nest a rule for the span within our styles for .price:

```less
.price {
  ...
  span {
    font-size: .5em;
    vertical-align: super;
  }
}
```

These lines reduce the dollar sign to half its size and align it at the top. Now, to recenter the result, we need to add a bit of negative margin to the parent .price selector:

```less
.price {
  margin-left: -0.25em;
  ...
}
```

The following screenshot shows the result.

Styling the table body and foot

Continuing with the styles that apply to all three pricing tables, let's make the following adjustments:

- Add left and right padding to the list of features
- Stretch the button to full width
- Increase the button size

We can accomplish this by adding the following rules:

```less
#signup {
  table {
    ...
    tbody {
      td {
        padding-left: 16px;
        padding-right: 16px;
      }
    }
    a.btn {
      .btn-lg;
      display: block;
      width: 100%;
      background-color: @gray-light;
      color: #fff;
    }
  }
}
```

Save the file, compile it to CSS, and refresh the browser. You should see the result shown in the following screenshot. We're now ready to add styles to differentiate our three packages.
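As a preview of that differentiation step, one plausible approach—a sketch of mine, not the article's code—is to hang color overrides off the package-* parent classes reviewed earlier. The use of Bootstrap's @brand-primary variable and LESS's darken() function here is an illustrative assumption:

```less
// Illustrative only: re-color each plan's head and button via its parent class.
#signup {
  .package-basic thead th,
  .package-basic a.btn   { background-color: @brand-primary; }
  .package-premium thead th,
  .package-premium a.btn { background-color: darken(@brand-primary, 10%); }
  .package-pro thead th,
  .package-pro a.btn     { background-color: darken(@brand-primary, 20%); }
}
```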

Magento 2 – the New E-commerce Era

Packt
08 Mar 2016
17 min read
In this article by Ray Bogman and Vladimir Kerkhoff, the authors of the book Magento 2 Cookbook, we will cover the basic tasks related to creating a catalog and products in Magento 2. You will learn the following recipes:

- Creating a root catalog
- Creating subcategories
- Managing an attribute set

(For more resources related to this topic, see here.)

Introduction

This article explains how to set up a vanilla Magento 2 store. If Magento 2 is totally new to you, many of the basics are pointed out along the way. If you are currently working with Magento 1, much will feel familiar. The new backend of Magento 2 is the biggest improvement of all: the design is built responsively and has a great user experience. Compared to Magento 1, this is a great improvement. The menu is located vertically on the left of the screen and works great in desktop and mobile environments.

In this article, we will see how to set up a website with multiple domains using different catalogs. Depending on the website, store, and store view setup, we can create different subcategories, URLs, and products per domain name. There are a number of different ways customers can browse your store, but one of the most effective is layered navigation. Layered navigation is located in your catalog and holds product features to sort or filter. Every website benefits from great Search Engine Optimization (SEO); you will learn how to define catalog URLs per catalog.

Throughout this article, we will cover the basics of setting up a multidomain configuration. Additional tasks required to complete a production-like setup are out of the scope of this article.

Creating a root catalog

The first thing that we need to start with when setting up a vanilla Magento 2 website is defining our website, store, and store view structure. So what is the difference between a website, a store, and a store view, and why is it important?

- A website is the top-level container and the most important of the three. It is the parent level of the entire store and is used, for example, to define domain names, different shipping methods, payment options, customers, orders, and so on.
- A store can be used to define, for example, different store views with the same information. A store is always connected to a root catalog that holds all the categories and subcategories. One website can manage multiple stores, and every store has a different root catalog. When using multiple stores, it is not possible to share one basket. The main reason for this has to do with the configuration setup, where shipping, catalog, customer, inventory, tax, and payment settings are not sharable between different sites.
- A store view is the lowest level and is mostly used to handle different localizations; every store view can be set with a different language. Besides using store views just for localization, they can also be used for Business to Business (B2B), hidden private sales pages (with noindex and nofollow), and so on. The option where we use the base link URL, for example (yourdomain.com/myhiddenpage), is easy to set up.

The website, store, and store view structure is shown in the preceding image.

Getting ready

For this recipe, we will use a Droplet created at DigitalOcean, https://www.digitalocean.com/. We will be using NGINX, PHP-FPM, and a Composer-based setup with Magento 2 preinstalled. No other prerequisites are required.

How to do it...
For the purpose of this recipe, let's assume that we need to create a multi-website setup including three domains (yourdomain.com, yourdomain.de, and yourdomain.fr) and separate root catalogs. The following steps will guide you through this:

1. First, we need to update our NGINX and configure the additional domains before we can connect them to Magento. Make sure that all domain names are connected to your server and DNS is configured correctly. Go to /etc/nginx/conf.d, open the default.conf file, and include the following content at the top of your file:

```nginx
map $http_host $magecode {
    hostnames;
    default base;
    yourdomain.de de;
    yourdomain.fr fr;
}
```

2. Your configuration should look like this now:

```nginx
map $http_host $magecode {
    hostnames;
    default base;
    yourdomain.de de;
    yourdomain.fr fr;
}

upstream fastcgi_backend {
    server 127.0.0.1:9000;
}

server {
    listen 80;
    listen 443 ssl http2;
    server_name yourdomain.com;
    set $MAGE_ROOT /var/www/html;
    set $MAGE_MODE developer;
    ssl_certificate /etc/ssl/yourdomain-com.cert;
    ssl_certificate_key /etc/ssl/yourdomain-com.key;
    include /var/www/html/nginx.conf.sample;
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;
    location ~ /\.ht {
        deny all;
    }
}
```

3. Now let's go to the Magento 2 configuration file in /var/www/html/ and open the nginx.conf.sample file. Go to the bottom and look for the following:

```nginx
location ~ (index|get|static|report|404|503)\.php$
```

4. Add the following lines to the file under fastcgi_pass fastcgi_backend;:

```nginx
fastcgi_param MAGE_RUN_TYPE website;
fastcgi_param MAGE_RUN_CODE $magecode;
```

Your configuration should look like this now (this is only a small section of the bottom part of the file):

```nginx
location ~ (index|get|static|report|404|503)\.php$ {
    try_files $uri =404;
    fastcgi_pass fastcgi_backend;
    fastcgi_param MAGE_RUN_TYPE website;
    fastcgi_param MAGE_RUN_CODE $magecode;
    fastcgi_param PHP_FLAG "session.auto_start=off \n suhosin.session.cryptua=off";
    fastcgi_param PHP_VALUE "memory_limit=256M \n max_execution_time=600";
    fastcgi_read_timeout 600s;
    fastcgi_connect_timeout 600s;
    fastcgi_param MAGE_MODE $MAGE_MODE;
    fastcgi_index index.php;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    include fastcgi_params;
}
```

The current setup uses the MAGE_RUN_TYPE website variable. You may change website to store depending on your setup preferences. When changing the variable, you need to update your default.conf mapping codes as well.

5. Now, all you have to do is restart NGINX and PHP-FPM to use your new settings. Run the following command:

```
service nginx restart && service php-fpm restart
```

6. Before we continue, we need to check whether our web server is serving the correct codes. Create a small test script in the Magento 2 web directory /var/www/html/pub:

```
echo '<?php header("Content-type: text/plain"); print_r($_SERVER); ?>' > magecode.php
```

Don't forget to update your nginx.conf.sample file with the new magecode name. The relevant line is located at the bottom of your file and should look like this:

```nginx
location ~ (index|get|static|report|404|503|magecode)\.php$ {
```

Restart NGINX and open the file in your browser. As the output will show, the created MAGE_RUN variables are available. Congratulations, you just finished configuring NGINX with additional domains. Now let's continue connecting them in Magento 2.
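As a quick sanity check of the mapping (my own illustrative commands, not part of the recipe), each domain should now report its own MAGE_RUN_CODE, assuming DNS or your hosts file already points the domains at this server:

```
curl -s http://yourdomain.de/magecode.php | grep MAGE_RUN
# The print_r() output should contain something like:
#     [MAGE_RUN_TYPE] => website
#     [MAGE_RUN_CODE] => de
```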
7. Now log in to the backend and navigate to Stores | All Stores. By default, Magento 2 has one Website, Store, and Store View set up. Click on Create Website and commit the following details:

   - Name: My German Website
   - Code: de

8. Next, click on Create Store and commit the following details:

   - Web site: My German Website
   - Name: My German Website
   - Root Category: Default Category (we will change this later)

9. Next, click on Create Store View and commit the following details:

   - Store: My German Website
   - Name: German
   - Code: de
   - Status: Enabled

   Continue with the same steps for the French domain. Make sure that the Code in Website and Store View is fr. The next important step is connecting the websites with the domain names. Navigate to Stores | Configuration | Web | Base URLs. Change the Store View scope at the top to My German Website. You will be prompted when switching; press OK to continue. Now clear the checkbox called Use Default on Base URL and Base Link URL and commit your domain name. Save, and continue with the same procedure for the other website. The output should look like the following screenshot. Save your entire configuration and clear your cache.

10. Now go to Products | Categories and click on Add Root Category with the following data:

    - Name: Root German
    - Is Active: Yes
    - Page Title: My German Website

    Continue with the same step for the French domain. You may add additional information here, but it is not needed. Changing the current Root Category called Default Category to Root English is also optional but advised.

11. Save your configuration, go to Stores | All Stores, and change all of the stores to the appropriate root catalog that we just created. Every store should now have a dedicated root catalog.

Congratulations, you just finished configuring Magento 2 with additional domains and dedicated root categories. Now let's open a browser and surf to your created domain names: yourdomain.com, yourdomain.de, and yourdomain.fr.

How it works…

Let's recap and find out what we did throughout this recipe. In steps 1 through 11, we created a multistore setup for the .com, .de, and .fr domains using separate root catalogs. In steps 1 through 4, we configured the domain mapping in the NGINX default.conf file and added the fastcgi_param MAGE_RUN code to the nginx.conf.sample file, which manages which website or store view is requested within Magento. In step 6, we used an easy test method to check whether all domains run the correct MAGE_RUN code. In steps 7 through 9, we configured the website, store, and store view name and code for the given domain names. In step 10, we created additional root catalogs for the German and French stores; they were then connected to the previously created store configuration. All stores now have their own root catalog.

There's more…

Unable to buy additional domain names but keen to try setting up a multistore? Here are some tips for creating one. Depending on whether you are using Windows, Mac OS, or Linux, the following options apply:

Windows: Go to C:\Windows\System32\drivers\etc, open the hosts file as an administrator, and add the following (change the IP and domain names accordingly):

```
123.456.789.0 yourdomain.de
123.456.789.0 yourdomain.fr
123.456.789.0 www.yourdomain.de
123.456.789.0 www.yourdomain.fr
```

Save the file, click on the Start button, search for cmd.exe, and commit the following:

```
ipconfig /flushdns
```

Mac OS: Go to the /etc/ directory, open the hosts file as a super user, and add the following (change the IP and domain names accordingly):
```
123.456.789.0 yourdomain.de
123.456.789.0 yourdomain.fr
123.456.789.0 www.yourdomain.de
123.456.789.0 www.yourdomain.fr
```

Save the file and run the following command in the shell:

```
dscacheutil -flushcache
```

Depending on your Mac version, check out the different commands here: http://www.hongkiat.com/blog/how-to-clear-flush-dns-cache-in-os-x-yosemite/

Linux: Go to the /etc/ directory, open the hosts file as the root user, and add the following (change the IP and domain names accordingly):

```
123.456.789.0 yourdomain.de
123.456.789.0 yourdomain.fr
123.456.789.0 www.yourdomain.de
123.456.789.0 www.yourdomain.fr
```

Save the file and run the following command in the shell:

```
service nscd restart
```

Depending on your Linux version, check out the different commands here: http://www.cyberciti.biz/faq/rhel-debian-ubuntu-flush-clear-dns-cache/

Open your browser and surf to the custom-made domains. These domains work only on your PC. You can copy these IP and domain name entries to as many PCs as you prefer. This method also works great when you are developing or testing and your production domain is not available in your development environment.

Creating subcategories

After creating the foundation of the website, we need to set up a catalog structure. Setting up a catalog structure is not difficult, but it needs to be thought out well. Some websites have an easy setup using two levels, while others use five or more levels of subcategories. Always keep the user experience in mind; your customers need to crawl the pages easily. Keep it simple!

Getting ready

For this recipe, we will use a Droplet created at DigitalOcean, https://www.digitalocean.com/. We will be using NGINX, PHP-FPM, and a Composer-based setup with Magento 2 preinstalled. No other prerequisites are required.

How to do it...

For the purpose of this recipe, let's assume that we need to set up a catalog including subcategories. The following steps will guide you through this:

1. First, log in to the backend of Magento 2 and go to Products | Categories.

2. As we have already created root catalogs, we start with the Root English catalog. Click on the Root English catalog on the left and then select the Add Subcategory button above the menu.

3. Now commit the following, and repeat the step for the other root catalogs:

   - Name: Shoes (Schuhe) (Chaussures); Is Active: Yes; Page Title: Shoes (Schuhe) (Chaussures)
   - Name: Clothes (Kleider) (Vêtements); Is Active: Yes; Page Title: Clothes (Kleider) (Vêtements)

4. As we have created the first level of our catalog, we can continue with the second level. Click on the first-level category that you need to extend with a subcategory and select the Add Subcategory button. Now commit the following, and repeat the step for the other root catalogs:

   - Name: Men (Männer) (Hommes); Is Active: Yes; Page Title: Men (Männer) (Hommes)
   - Name: Women (Frau) (Femmes); Is Active: Yes; Page Title: Women (Frau) (Femmes)

Congratulations, you just finished configuring subcategories in Magento 2. Now let's open a browser and surf to your created domain names: yourdomain.com, yourdomain.de, and yourdomain.fr. Your categories should now look as shown in the following screenshot.

How it works…

Let's recap and find out what we did throughout this recipe. In steps 1 through 4, we created subcategories for the English, German, and French stores. In this recipe, we created a dedicated root catalog for every website. This way, every store can be configured using its own tax and shipping rules.

There's more…

In our example, we only submitted Name, Is Active, and Page Title.
You may continue to commit the Description, Image, Meta Keywords, and Meta Description fields. By default, the URL key is the same as the Name field; you can change this depending on your SEO needs. Every category or subcategory has a default page layout defined by the theme, and you may need to override this. Go to the Custom Design tab and click the drop-down menu of Page Layout. We can choose from the following options: 1 column, 2 columns with left bar, 2 columns with right bar, 3 columns, or Empty.

Managing an attribute set

Every product has a unique DNA; some products such as shoes could have different colors, brands, and sizes, while a snowboard could have weight, length, torsion, manufacturer, and style. Setting up a website with every possible attribute does not make sense. Depending on the products that you sell, you should create the attributes that apply per website. When creating products for your website, attributes are the key elements and need to be thought through. What attributes do I need, and how many? How many values does each one need? These are all questions that could have a great impact on your website and, not to forget, its performance. Creating an attribute such as color and having 100 K different key values stored will not improve your overall speed and user experience. Always think things through. After creating the attributes, we combine them in attribute sets that can be picked when starting to create a product. Some attributes can be used more than once, while others are unique to one product of an attribute set.

Getting ready

For this recipe, we will use a Droplet created at DigitalOcean, https://www.digitalocean.com/. We will be using NGINX, PHP-FPM, and a Composer-based setup with Magento 2 preinstalled. No other prerequisites are required.

How to do it...

For the purpose of this recipe, let's assume that we need to create product attributes and sets. The following steps will guide you through this:

1. First, log in to the backend of Magento 2 and go to Stores | Products. As we are using a vanilla setup, only system attributes and one attribute set are installed.

2. Now click on Add New Attribute and commit the following data in the Properties tab.

   Attribute Properties:

   - Default label: shoe_size
   - Catalog Input Type for Store Owners: Dropdown
   - Values Required: No

   Manage Options (the values of your attribute):

   | English | Admin | French | German |
   |---------|-------|--------|--------|
   | 4       | 4     | 35     | 35     |
   | 4.5     | 4.5   | 35     | 35     |
   | 5       | 5     | 35-36  | 35-36  |
   | 5.5     | 5.5   | 36     | 36     |
   | 6       | 6     | 36-37  | 36-37  |
   | 6.5     | 6.5   | 37     | 37     |
   | 7       | 7     | 37-38  | 37-38  |
   | 7.5     | 7.5   | 38     | 38     |
   | 8       | 8     | 38-39  | 38-39  |
   | 8.5     | 8.5   | 39     | 39     |

   Advanced Attribute Properties:

   - Scope: Global
   - Unique Value: No
   - Add to Column Options: Yes
   - Use in Filter Options: Yes

3. As we have already set up a multi-website that sells shoes and clothes, we stick with this. The attributes that we need to sell shoes are: shoe_size, shoe_type, width, color, gender, and occasion. Continue with the rest of the size chart accordingly (http://www.shoesizingcharts.com).

4. Click on Save and Continue Edit now, and continue on the Manage Labels tab with the following information.

   Manage Titles (Size, Color, etc.):
   | English | French | German |
   |---------|--------|--------|
   | Size    | Taille | Größe  |

5. Click on Save and Continue Edit now, and continue on the Storefront Properties tab with the following information.

   Storefront Properties:

   - Use in Search: No
   - Comparable in Storefront: No
   - Use in Layered Navigation: Filterable (with results)
   - Use in Search Results Layered Navigation: No
   - Position: 0
   - Use for Promo Rule Conditions: No
   - Allow HTML Tags on Storefront: Yes
   - Visible on Catalog Pages on Storefront: Yes
   - Used in Product Listing: No
   - Used for Sorting in Product Listing: No

6. Click on Save Attribute now and clear the cache. If you have set up index management to run through the Magento 2 cron job, the newly created attribute will be updated automatically.

7. The additional shoe_type, width, color, gender, and occasion attribute configurations can be downloaded at https://github.com/mage2cookbook/chapter4.

8. After creating all of the attributes, we combine them in an attribute set called Shoes. Go to Stores | Attribute Set, click on Add Attribute Set, and commit the following data:

   - Name: Shoes
   - Based On: Default

9. Now click on the Add New button in the Groups section and commit the group name Shoes. The newly created group is located at the bottom of the list; you may need to scroll down before you see it. It is possible to drag and drop the group higher up the list.

10. Now drag and drop the created attributes shoe_size, shoe_type, width, color, gender, and occasion to the group, and save the configuration. The cron job notice is automatically updated depending on your settings.

Congratulations, you just finished creating attributes and attribute sets in Magento 2. The result can be seen in the following screenshot.

How it works…

Let's recap and find out what we did throughout this recipe. In steps 1 through 10, we created attributes that will be used in an attribute set. Attributes and attribute sets are fundamental to every website. In steps 1 through 6, we created multiple attributes to define all the details about the shoes and clothes that we would like to sell. Some attributes are later used as configurable values on the frontend, while others only indicate the gender or occasion. In steps 7 through 10, we connected the attributes to the related attribute set so that, when creating a product, all the correct elements are available.

There's more…

After creating the attribute set for Shoes, we continue by creating an attribute set for Clothes. Use the following attributes to create the set: color, occasion, apparel_type, sleeve_length, fit, size, length, and gender. Follow the same steps as before to create the new attribute set; you may reuse the attributes color, occasion, and gender. All the detailed attributes can be found at https://github.com/mage2cookbook/chapter4#clothes-set. The following screenshot shows the Clothes attribute set.

Summary

In this article, you learned how to create a root catalog and subcategories, and how to manage attribute sets. For more information on Magento 2, refer to the following books by Packt Publishing:

- Magento 2 Development Cookbook (https://www.packtpub.com/web-development/magento-2-development-cookbook)
- Magento 2 Developer's Guide (https://www.packtpub.com/web-development/magento-2-developers-guide)

Further resources on this subject:

- Social Media in Magento
- Upgrading from Magento 1
- Social Media and Magento

Introduction to ASP.NET Core Web API

Packt
07 Mar 2018
13 min read
In this article by Mithun Pattankar and Malendra Hurbuns, the authors of the book Mastering ASP.NET Web API, we will start with a quick recap of MVC. We will be looking at the following topics:

- Quick recap of the MVC framework
- Why Web APIs were incepted and how they evolved
- Introduction to .NET Core
- Overview of the ASP.NET Core architecture

(For more resources related to this topic, see here.)

Quick recap of the MVC framework

Model-View-Controller (MVC) is a powerful and elegant way of separating concerns within an application, and it applies itself extremely well to web applications. With ASP.NET MVC, it translates roughly as follows:

- Models (M): These are the classes that represent the domain you are interested in. These domain objects often encapsulate data stored in a database, as well as code that manipulates the data and enforces domain-specific business logic. With ASP.NET MVC, this is most likely a data access layer of some kind, using a tool like Entity Framework or NHibernate, or classic ADO.NET.
- View (V): This is a template to dynamically generate HTML.
- Controller (C): This is a special class that manages the relationship between the View and the Model. It responds to user input, talks to the Model, and decides which view to render (if any). In ASP.NET MVC, this class is conventionally denoted by the suffix Controller.

Why Web APIs were incepted and how they evolved

Looking back to the days when the ASP.NET ASMX-based XML web service was widely used for building service-oriented applications, it was the easiest way to create a SOAP-based service that could be used by both .NET and non-.NET applications. It was available only over HTTP. Around 2006, Microsoft released Windows Communication Foundation (WCF). WCF was, and still is, a powerful technology for building SOA-based applications; it was a giant leap in the Microsoft .NET world. WCF was flexible enough to be configured as an HTTP service, a Remoting service, a TCP service, and so on. Using WCF Contracts, we could keep the entire business logic code base the same and expose the service as HTTP-based or non-HTTP-based, via SOAP or non-SOAP.

Until 2010, ASMX-based XML web services and WCF services were widely used in client-server applications; in fact, everything was running smoothly. But developers in the .NET and non-.NET communities started to feel the need for a completely new SOA technology for client-server applications. Some of the reasons behind this were as follows:

- With applications in production, the amount of data exchanged during communication started to explode, and transferring it over the network was bandwidth-consuming. SOAP, though lightweight to some extent, started to show signs of payload increase: SOAP packets of a few KB were becoming a few MB of data transfer.
- Consuming SOAP services led to huge application sizes because of WSDL and proxy generation. This was even worse when they were used in web applications.
- Any change to a SOAP service meant consumers had to regenerate their proxies. This wasn't an easy task for any developer.
- JavaScript-based web frameworks were being released and gaining ground as a much simpler way of doing web development, and consuming SOAP-based services from them was not optimal.
- Hand-held devices such as tablets and smartphones were becoming popular. They ran more focused applications and needed a very lightweight service-oriented approach.
- Browser-based Single Page Applications (SPA) were gaining ground very rapidly, and SOAP-based services were quite heavy for SPAs.
- Microsoft released REST-based WCF components that could be configured to respond in JSON or XML, but even then it was WCF, a heavy technology to use.
- Applications were no longer just large enterprise services; there was a need for more focused, lightweight services that could be up and running in a few days and much easier to use.

Any developer who had seen the evolving nature of SOA-based technologies like ASMX, WCF, or anything SOAP-based felt the need for much lighter, HTTP-based services. HTTP-only, JSON-compatible, POCO-based lightweight services were the need of the hour, and the concept of the Web API started gaining momentum.

What is a Web API?

A Web API is a programmatic interface to a system that is accessed via standard HTTP methods and headers. A Web API can be accessed by a variety of HTTP clients, including browsers and mobile devices. For a Web API to be a successful HTTP-based service, it needed strong web infrastructure: hosting, caching, concurrency, logging, security, and so on. One of the best web infrastructures was none other than ASP.NET. ASP.NET, whether in the form of Web Forms or MVC, was widely adopted, so this solid, mature web infrastructure could be extended to serve Web APIs.

Microsoft responded to community needs by creating ASP.NET Web API: a super-simple yet very powerful framework for building HTTP-only, JSON-by-default web services without all the fuss of WCF. ASP.NET Web API can be used to build REST-based services in a matter of minutes, and these can easily be consumed with any frontend technology. Because it used IIS (mostly) for hosting, caching, concurrency, and other features, it became quite popular. It was launched in 2012 with the most basic needs of HTTP-based services, like convention-based routing and HTTP request and response messages.

Later, Microsoft released the much bigger and better ASP.NET Web API 2 along with ASP.NET MVC 5 in Visual Studio 2013. ASP.NET Web API 2 evolved at a much faster pace with the following features.

Installed via NuGet

Installing Web API 2 was made simpler by using NuGet: create an empty ASP.NET or MVC project and then run the following command in the NuGet Package Manager Console:

```
Install-Package Microsoft.AspNet.WebApi
```

Attribute Routing

The initial release of Web API was based on convention-based routing, meaning we define one or more route templates and work around them. It's simple and without much fuss, as the routing logic lives in a single place and is applied across all controllers. Real-world applications are more complicated, however, with resources (controllers/actions) having child resources: customers having orders, books having authors, and so on. In such cases, convention-based routing is not scalable. Web API 2 introduced the new concept of Attribute Routing, which uses attributes in the programming language to define routes. One straightforward advantage is that the developer has full control over how the URIs for the Web API are formed. Here is a quick snippet of Attribute Routing:

```csharp
[Route("customers/{customerId}/orders")]
public IEnumerable<Order> GetOrdersByCustomer(int customerId) { ... }
```

For more on this, read Attribute Routing in ASP.NET Web API 2 (https://www.asp.net/web-api/overview/web-api-routing-and-actions/attribute-routing-in-web-api-2).

OWIN self-host

ASP.NET Web API lives on the ASP.NET framework, which may lead you to think that it can be hosted on IIS only. With Web API 2 came a new hosting package:

```
Microsoft.AspNet.WebApi.OwinSelfHost
```

With this package, it can be self-hosted outside IIS using OWIN/Katana.
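To make that concrete, here is a minimal self-host sketch (my illustration, not from the article); the port and controller name are arbitrary choices, and it assumes the Microsoft.AspNet.WebApi.OwinSelfHost package is installed in a console project:

```csharp
using System;
using System.Web.Http;
using Microsoft.Owin.Hosting;
using Owin;

public class Startup
{
    // Katana calls Configuration() to assemble the OWIN pipeline.
    public void Configuration(IAppBuilder app)
    {
        var config = new HttpConfiguration();
        config.MapHttpAttributeRoutes(); // enable [Route] attributes
        app.UseWebApi(config);
    }
}

public class HelloController : ApiController
{
    [Route("api/hello")]
    public IHttpActionResult Get()
    {
        return Ok("Hello from a self-hosted Web API");
    }
}

public class Program
{
    public static void Main()
    {
        // Host the API in a plain console process; no IIS involved.
        using (WebApp.Start<Startup>("http://localhost:9000/"))
        {
            Console.WriteLine("Listening on http://localhost:9000/");
            Console.ReadLine();
        }
    }
}
```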
CORS (Cross-Origin Resource Sharing)

If a Web API, developed with either .NET or non-.NET technologies, is meant to be used across different web frameworks, then enabling CORS is a must. A must-read on CORS and ASP.NET Web API 2: https://www.asp.net/web-api/overview/security/enabling-cross-origin-requests-in-web-api.

IHttpActionResult and Web API OData improvements are a few other notable features that helped Web API 2 evolve into a strong technology for developing HTTP-based services. ASP.NET Web API 2 has become more powerful over the years with C# language improvements such as asynchronous programming using async/await, LINQ, Entity Framework integration, dependency injection with DI frameworks, and so on.

ASP.NET in the open source world

Every technology has to evolve with growing needs and advancements in the hardware, network, and software industries, and ASP.NET Web API is no exception. Some of the evolution that ASP.NET Web API needed to undergo, from the perspectives of the developer community, enterprises, and end users, is as follows:

- ASP.NET MVC and Web API are both part of the ASP.NET stack, but their implementations and code bases are different. A unified code base reduces the burden of maintaining them.
- Web APIs are consumed by various clients: web applications, native apps, hybrid apps, and desktop applications using different technologies (.NET or non-.NET). But what about developing Web APIs in a cross-platform way, without always relying on the Windows OS and the Visual Studio IDE?
- Open-sourcing the ASP.NET stack so that it is adopted on a much bigger scale; end users benefit from open source innovations.

We saw why Web APIs were incepted, how they evolved into a powerful HTTP-based service, and some of the evolution still required. With these thoughts, Microsoft made an entry into the world of open source by launching .NET Core and ASP.NET Core 1.0.

What is .NET Core?

.NET Core is a cross-platform, free, and open-source managed software framework similar to the .NET Framework. It consists of CoreCLR, a complete cross-platform runtime implementation of the CLR. .NET Core 1.0 was released on 27 June 2016 along with Visual Studio 2015 Update 3, which enables .NET Core development. In much simpler terms, .NET Core applications can be developed, tested, and deployed on cross platforms such as Windows, Linux flavors, and macOS systems. With .NET Core, we don't really need the Windows OS, or in particular the Visual Studio IDE, to develop ASP.NET web applications, command-line apps, libraries, and UWP apps.

In short, let's understand the .NET Core components:

- CoreCLR: This is a virtual machine that manages the execution of .NET programs. CoreCLR stands for Core Common Language Runtime; it includes the garbage collector, the JIT compiler, base .NET data types, and many low-level classes.
- CoreFX: These are the .NET Core foundational libraries: classes for collections, file systems, console, XML, async, and many others.
- CoreRT: This is the .NET Core runtime optimized for AOT (ahead-of-time compilation) scenarios, with the accompanying .NET Native compiler toolchain. Its main responsibility is the native compilation of code written in any of our favorite .NET programming languages.

.NET Core shares a subset of the original .NET Framework, plus it comes with its own set of APIs that are not part of the .NET Framework. This results in some shared APIs that can be used by both .NET Core and the .NET Framework. A .NET Core application can easily work on the existing .NET Framework, but not vice versa.
.NET Core provides a CLI (command-line interface) as an execution entry point for operating systems and provides developer services like compilation and package management. The following are some interesting points to know about .NET Core:

- .NET Core can be installed cross-platform on Windows, Linux, and macOS. It can be used in device, cloud, and embedded/IoT scenarios.
- The Visual Studio IDE is not mandatory for working with .NET Core, but when working on Windows we can leverage our existing IDE knowledge.
- .NET Core is modular, meaning that instead of assemblies, developers deal with NuGet packages.
- .NET Core relies on its package manager to receive updates, because a cross-platform technology can't rely on Windows Update.
- To learn .NET Core, we just need a shell, a text editor, and its runtime installed.
- .NET Core comes with flexible deployment: it can be included in your app or installed side by side, user- or machine-wide.
- .NET Core apps can also be self-hosted/run as standalone apps.

.NET Core supports four cross-platform scenarios: ASP.NET Core web apps, command-line apps, libraries, and Universal Windows Platform apps. It does not implement Windows Forms or WPF, which render the standard GUI for desktop software on Windows. At present, only the C# programming language can be used to write .NET Core apps; F# and VB support are on the way. We will primarily focus on ASP.NET Core web apps, which include MVC and Web API. CLI apps and libraries will be covered briefly.

What is ASP.NET Core?

ASP.NET Core is a new open-source and cross-platform framework for building modern cloud-based web applications using .NET. ASP.NET Core is completely open source; you can download it from GitHub. It's cross-platform, meaning you can develop ASP.NET Core apps on Linux/macOS and, of course, on Windows. ASP.NET was first released almost 15 years ago with the .NET Framework. Since then, it has been adopted by millions of developers for large and small applications, and it has evolved with many capabilities. With the cross-platform .NET Core, ASP.NET took a huge leap beyond the boundaries of the Windows OS environment for the development and deployment of web applications.

ASP.NET Core architecture overview

A high-level overview of ASP.NET Core provides the following insights:

- ASP.NET Core runs on both the full .NET Framework and .NET Core.
- ASP.NET Core applications targeting the full .NET Framework can only be developed and deployed on Windows OS/Server.
- When using .NET Core, applications can be developed and deployed on the platform of choice; the logos of Windows, Linux, and macOS indicate where you can work with ASP.NET Core.
- When running on a non-Windows machine, ASP.NET Core uses the .NET Core libraries to run applications. Obviously, you won't have all of the full .NET libraries, but most of them are available.
- Developers working on ASP.NET Core can easily switch to working on any machine; they are not confined to the Visual Studio 2015 IDE.
- ASP.NET Core can run with different versions of .NET Core.

ASP.NET Core has many more foundational improvements apart from being cross-platform. We gain the following advantages by using ASP.NET Core:

- Totally modular: ASP.NET Core takes a totally modular approach to application development; every component needed to build an application is well factored into NuGet packages. Only add the required packages through NuGet to keep the overall application lightweight.
- ASP.NET Core is no longer based on System.Web.dll.
- Choose your editors and tools: The Visual Studio IDE was used to develop ASP.NET applications on Windows, but now that we have moved beyond the Windows world, we need IDEs, editors, and tools for developing ASP.NET applications on Linux/macOS. Microsoft developed a powerful, lightweight code editor for almost any type of web application, called Visual Studio Code.
- ASP.NET Core is a framework for which we don't even need the Visual Studio IDE or VS Code to develop applications; we can also use code editors like Sublime and Vim. To work with C# code in these editors, install and use the OmniSharp plugin.
- OmniSharp is a set of tooling, editor integrations, and libraries that together create an ecosystem that allows you to have a great programming experience no matter what your editor and operating system of choice may be.
- Integration with modern web frameworks: ASP.NET Core has powerful, seamless integration with modern web frameworks like Angular, Ember, NodeJS, and Bootstrap. Using Bower and NPM, we can work with these modern web frameworks.
- Cloud-ready: ASP.NET Core apps are cloud-ready thanks to the configuration system; an app seamlessly transitions from on-premises to the cloud.
- Built-in dependency injection.
- It can be hosted on IIS, self-hosted in your own process, or hosted on nginx.
- A new lightweight and modular HTTP request pipeline.
- A unified code base for the Web UI and Web APIs. We will see more on this when we explore the anatomy of an ASP.NET Core application.

Summary

In this article, we covered the MVC framework and introduced .NET Core and its architecture.

Getting Started with FreeRADIUS

Packt
07 Sep 2011
6 min read
After FreeRADIUS has been installed, it needs to be configured for our requirements. This article by Dirk van der Walt, author of FreeRADIUS Beginner's Guide, will help you to get familiar with FreeRADIUS. It assumes that you already know the basics of the RADIUS protocol.

In this article we shall:

- Perform a basic configuration of FreeRADIUS and test it
- Discover ways of getting help
- Learn the recommended way to configure and test FreeRADIUS
- See how everything fits together with FreeRADIUS

(For more resources on this subject, see here.)

Before you start

This article assumes that you have a clean installation of FreeRADIUS. You will need root access to edit the FreeRADIUS configuration files for the basic configuration and testing.

A simple setup

We start this article by creating a simple setup of FreeRADIUS with the following:

- The localhost defined as an NAS device (RADIUS client)
- Alice defined as a test user

After we have defined the client and the test user, we will use the radtest program to fill the role of a RADIUS client and test the authentication of Alice.

Time for action – configuring FreeRADIUS

FreeRADIUS is set up by modifying configuration files. The location of these files depends on how FreeRADIUS was installed:

- If you have installed the standard FreeRADIUS packages that are provided with the distribution, it will be under /etc/raddb on CentOS and SLES. On Ubuntu it will be under /etc/freeradius.
- If you have built and installed FreeRADIUS from source using the distribution's package management system, it will also be under /etc/raddb on CentOS and SLES. On Ubuntu it will be under /etc/freeradius.
- If you have compiled and installed FreeRADIUS using configure, make, make install, it will be under /usr/local/etc/raddb.

The following instructions assume that the FreeRADIUS configuration directory is your current working directory:

1. Ensure that you are root in order to be able to edit the configuration files.

2. FreeRADIUS includes a default client called localhost. This client can be used by RADIUS client programs on the localhost to help with troubleshooting and testing. Confirm that the following entry exists in the clients.conf file:

```
client localhost {
    ipaddr = 127.0.0.1
    secret = testing123
    require_message_authenticator = no
    nastype = other
}
```

3. Define Alice as a FreeRADIUS test user. Add the following lines at the top of the users file. Make sure the second and third lines are indented by a single tab character:

```
"alice" Cleartext-Password := "passme"
	Framed-IP-Address = 192.168.1.65,
	Reply-Message = "Hello, %{User-Name}"
```

4. Start the FreeRADIUS server in debug mode. Make sure that there is no other instance running by shutting it down through the startup script. We assume Ubuntu in this case:

```
$> sudo su
#> /etc/init.d/freeradius stop
#> freeradius -X
```

You can also use the more brutal method of kill -9 $(pidof freeradius) or killall freeradius on Ubuntu, and kill -9 $(pidof radiusd) or killall radiusd on CentOS and SLES, if the startup script does not stop FreeRADIUS.

5. Ensure FreeRADIUS has started correctly by confirming that the last line on your screen says the following:

```
Ready to process requests.
```

If this did not happen, read through the output of the FreeRADIUS server started in debug mode to see what problem was identified and its possible location.

6. Authenticate Alice using the following command:

```
$> radtest alice passme 127.0.0.1 100 testing123
```

The debug output of FreeRADIUS will show how the Access-Request packet arrives and how the FreeRADIUS server responds to this request.
Radtest will also show the response of the FreeRADIUS server:

```
Sending Access-Request of id 17 to 127.0.0.1 port 1812
	User-Name = "alice"
	User-Password = "passme"
	NAS-IP-Address = 127.0.1.1
	NAS-Port = 100
rad_recv: Access-Accept packet from host 127.0.0.1 port 1812, id=147, length=40
	Framed-IP-Address = 192.168.1.65
	Reply-Message = "Hello, alice"
```

What just happened?

We have created a test user on the FreeRADIUS server. We have also used the radtest command as a client to the FreeRADIUS server to test authentication. Let's elaborate on some interesting and important points.

Configuring FreeRADIUS

Configuration of the FreeRADIUS server is logically divided into different files. These files are modified to configure a certain function, component, or module of FreeRADIUS. There is, however, a main configuration file that sources the various sub-files. This file is called radiusd.conf. The default configuration is suitable for most installations; very few changes are required to make FreeRADIUS useful in your environment.

Clients

Although there are many files inside the FreeRADIUS server configuration directory, only a few require further changes. The clients.conf file is used to define clients to the FreeRADIUS server. Before an NAS can use the FreeRADIUS server, it has to be defined as a client on the FreeRADIUS server. Let's look at some points about client definitions.

Sections

A client is defined by a client section. FreeRADIUS uses sections to group and define various things. A section starts with a keyword indicating the section name, followed by enclosing brackets. Inside the enclosing brackets are various settings specific to that section. Sections can also be nested. Sometimes the section's keyword is followed by a single word to differentiate between sections of the same type. This allows us to have different client entries in clients.conf; each client has a short name to distinguish it from the others. The clients.conf file is not the only file where client sections can be defined, although it is the usual and most logical place. Nested client definitions can also appear inside a server section, as the (omitted) image accompanying this article shows.

Client identification

The FreeRADIUS server identifies a client by its IP address. If an unknown client sends a request to the server, the request will be silently ignored.

Shared secret

The client and server are also required to have a shared secret, which is used to encrypt and decrypt certain AVPs. The value of the User-Password AVP is encrypted using this shared secret. When the shared secret differs between the client and server, the FreeRADIUS server will detect it and warn you when running in debug mode:

```
Failed to authenticate the user.
WARNING: Unprintable characters in the password.
	Double-check the shared secret on the server and the NAS!
```

Message-Authenticator

When defining a client, you can enforce the presence of the Message-Authenticator AVP in all the requests. Since we will be using the radtest program, which does not include it, we disable it for localhost by setting require_message_authenticator to no.

Nastype

The nastype is set to other. The value of nastype determines how the checkrad Perl script will behave. Checkrad is used to determine whether a user is already using resources on an NAS. Since localhost has neither this function nor this need, we define it as other.
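For comparison with the localhost entry, a definition for a real NAS would follow the same section layout. This is a sketch with made-up values—the short name, IP address, and secret are placeholders, and nastype would depend on your actual equipment:

```
# Hypothetical access point defined as a RADIUS client in clients.conf
client wifi-ap1 {
    ipaddr = 192.168.1.10
    secret = SuperSecretKey
    nastype = other
}
```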
Common errors

If the server is down, or the packets from radtest cannot reach the server because of a firewall between them, radtest will try three times and then give up with the following message:

```
radclient: no response from server for ID 133 socket 3
```

If you run radtest as a normal user, it may complain about not having access to the FreeRADIUS dictionary file, which is required for normal operations. The way to solve this is either to change the permissions on the reported file or to run radtest as root.

Scalability, Limitations, and Effects

Packt
23 Aug 2013
26 min read
(For more resources related to this topic, see here.)

HTML5 limitations

If you haven't noticed by now, many of the HTML5 features you will use either have failsafes, multiple versions, or special syntax to enable your code to cover the entire spectrum of browsers and the HTML5 feature sets they support. As time passes and standards become solidified, one can assume that many of these failsafes and other content-display measures will mature into a single standard that all browsers share. In reality, however, this process may take a while, and even at best, developers may still have to utilize many of these failsafe features indefinitely. Therefore, a solid understanding of when, where, and why to use these failsafe measures will enable you to develop your HTML5 web pages in a way that can be viewed as intended in all modern browsers.

To aid developers in overcoming these issues, many frameworks and external scripts have been created and open sourced, allowing for a more universal development environment and saving developers countless hours when starting each new project. Modernizr (http://modernizr.com) has quickly become a must-have addition for many HTML5 developers, as it contains many of the conditions and verifications needed to allow developers to write less code and cover more browsers. Modernizr does all this by checking the client's browser for a large majority (more than 40) of the new features available in HTML5 and reporting back whether they are available, all in a matter of milliseconds. This allows you, as the developer, to determine whether you should display an alternate version of your content or a warning to the user.

Getting your web content to display properly in all browsers is, and always has been, the biggest challenge for any web developer, and when it comes to creating cutting-edge, interesting content, the challenge usually becomes harder. To allow you to better understand how these features look without the use of third-party integration, we will avoid using external libraries for the time being. It is worth noting how each of these features and others look in all browsers, so make sure to test the examples, as well as your own work, not just in your favorite browser but in many of the other popular choices as well.

Object manipulation with CSS3

Prior to the advent of CSS3, web developers used a laundry list of content manipulation, asset preparation, and asset presentation techniques in order to get their web page layout the way they wanted in every browser. Most of these techniques would be considered "hacks," as they were essentially workarounds that enabled the browser to do something it normally wouldn't. Features such as rounded corners, drop shadows, and transforms were all absent from a web developer's arsenal, and the process of getting things the way you want could get mind-numbing. Understandably, the excitement surrounding CSS3 is very high among web developers, as it enables them to perform more content manipulation techniques than ever before without the need for prior preparation or special browser hacks. Although the list of available properties in CSS3 is massive, let's cover some of the newest and most exciting of the lot.

box-shadow

It's true that some designers and developers say drop shadows are a thing of the past, but shadowing HTML elements is still a popular design choice for many.
In the past, web developers needed to perform tricks such as stretching small gradient images or drawing the shadow directly into their background image to achieve this effect in their HTML documents. CSS3 has solved this issue with the box-shadow property, which allows for drop-shadow-like effects on your HTML elements. To remind us how this effect was accomplished in ActionScript 3, let's review this code snippet:

```actionscript
var dropShadow:DropShadowFilter = new DropShadowFilter();
dropShadow.distance = 0;
dropShadow.angle = 45;
dropShadow.color = 0x333333;
dropShadow.alpha = 1;
dropShadow.blurX = 10;
dropShadow.blurY = 10;
dropShadow.strength = 1;
dropShadow.quality = 15;
dropShadow.inner = false;
var mySprite:Sprite = new Sprite();
mySprite.filters = new Array(dropShadow);
```

As mentioned before, the new box-shadow property in CSS3 allows you to append these shadowing effects with relative ease and many of the same configuration properties:

```css
.box-shadow-example {
  box-shadow: 3px 3px 5px 6px #000000;
}
```

Despite the lack of property names on each of the values applied to this style, you can see that many of the value types coincide with what was appended to the drop shadow we created in ActionScript 3. This box-shadow property is assigned to the .box-shadow-example class and therefore will be applied to any element that has that class name appended to it. By creating a div element with the box-shadow-example class, we can alter our content to look something like the following:

```html
<div class="box-shadow-example">CSS3 box-shadow Property</div>
```

As straightforward as this CSS property is to add to your project, it declares a lot of values in a single line. Let's review each of these values so we understand them better for future usage. To simplify the identification of each of the variables in the property, each has been given a different value:

```css
box-shadow: 1px 2px 3px 4px #000000;
```

These variables are explained as follows:

- The initial value (1px) is the shadow's horizontal offset, that is, whether the shadow goes to the left or to the right. A positive value places the shadow on the right of the element; a negative offset puts the shadow on the left.
- The second value (2px) is the vertical offset. Just like the horizontal offset, a negative number generates a shadow going up, and a positive value generates a shadow going down.
- The third value (3px) is the blur radius, which controls how much blur is added to the shadow. Declaring a value of 0, for example, creates no blur and displays a very sharp-looking shadow. Negative values placed in the blur radius are ignored and render no differently than 0.
- The fourth value (4px), the last of the numerical properties, is the spread radius. The spread radius controls how far the drop-shadow blur spreads past the initial shadow-size declaration. If a value of 0 is used, the shadow displays with the default blur radius and applies no changes. Positive values yield a shadow that blurs further, and negative values make the shadow blur smaller.
- The final value is the hexadecimal color value, which states the color of the shadow.

Alternatively, you could use box-shadow to apply the shadow effect to the interior of your element rather than the exterior. With ActionScript 3, this was accomplished by appending dropShadow.inner = true; to the list of parameters in your DropShadowFilter object.
The CSS3 syntax for applying box-shadow properties in this manner is very similar: all that is required is the addition of the inset keyword. Consider the following code snippet, for example:

.box-shadow-example {
  box-shadow: 3px 3px 5px 6px #666666 inset;
}

This would produce a shadow that looks like the following screenshot: text-shadow Just like the box-shadow property, text-shadow lives up to its name by creating the same drop-shadow effect, specifically for text:

text-shadow: 2px 2px 6px #ff0000;

As with box-shadow, the initial two values for text-shadow are the horizontal and vertical offsets for the shadow placement. The third value, which is optional, is the blur size, and the fourth value is the hexadecimal color. border-radius Just like element and text shadowing, adding rounded corners to your elements prior to CSS3 was a chore. Developers would usually append separate images or use other object manipulation techniques to achieve this effect on typically square or rectangular elements. With the addition of the border-radius setting in CSS3, developers can easily and dynamically set the roundness of element corners with only a couple of lines of CSS, all without anything like the 9-slice scaling used in Flash. Since HTML elements have four corners, when appending the border-radius styling we can either target each corner individually or all the corners at once. To append a border-radius setting to all the corners at once, we would write our CSS as follows:

#example {
  background-color: #ff0000; /* Red background */
  width: 200px;
  height: 200px;
  border-radius: 10px;
}

The preceding CSS not only appends a 10px border radius to all of the corners of the #example element; because border-radius is now understood by the modern browsers, we can be assured that the effect will be visible to all users attempting to view this content: As mentioned above, each of the individual corners of the element can be targeted to apply a radius to only a specific part of the element:

#example {
  border-top-left-radius: 0px; /* This is doing nothing */
  border-top-right-radius: 5px;
  border-bottom-right-radius: 20px;
  border-bottom-left-radius: 100px;
}

The preceding CSS removes our #example element's top-left border radius by setting it to 0px and sets a specific radius for each of the other corners. It's worth noting here that setting a border radius equal to 0 is no different than leaving that property out of the CSS styles entirely: Fonts Dealing with customized fonts in Flash has had its ups and downs over the years. Any Flash developer who has needed to incorporate customized fonts in a Flash application probably knows the pain of choosing a font embedding method and making sure it works properly for users who don't have the font installed on the computer viewing the application. CSS3 font embedding provides a "no fuss" way to include custom fonts in your HTML5 documents with the addition of the @font-face declaration:

@font-face {
  font-family: ClickerScript;
  src: url('ClickerScript-Regular.ttf'),
       url('ClickerScript-Regular.otf'),
       url('ClickerScript-Regular.eot');
}

CSS can now directly reference your TTF, OTF, or EOT font, which can be placed on your web server for accessibility.
With the font source declared in our CSS document and a unique font-family identifier applied to it, we can start using it on specific elements by using the font-family property:

#example {
  font-family: ClickerScript;
}

Since we declared a specific font family name in the @font-face declaration, we can use that custom name on pretty much any element henceforth. Custom fonts can be applied to almost anything that contains text in your HTML document. Form elements such as button labels and text inputs can also be styled to use your custom fonts. You can even remake assets such as website logos in pure HTML and CSS with the same custom fonts used in the original asset creation. Acceptable font formats Like many of the other embedding methods for assets online, fonts need to be converted into multiple formats to enable all common modern browsers to display them properly. Almost all of the available browsers can handle the common TrueType fonts (.ttf file types) or OpenType fonts (.otf file types), so embedding one of those two formats is usually all that is needed. Unfortunately, Internet Explorer 9 does not have built-in support for either of those two popular formats and requires fonts to be saved in the EOT file format. External font libraries Many great services have appeared online in the last couple of years that let web developers painlessly prepare and embed fonts into their websites. Google's Web Fonts archive, available at http://www.google.com/webfonts, hosts a large set of open source fonts that can be added to your project without the need to worry about licensing or payment issues. Simply add a couple of extra lines of code to your HTML document and you are ready to go. Another great site worth checking out is Font Squirrel, which can be found at http://www.fontsquirrel.com. Like Google Web Fonts, Font Squirrel hosts a large archive of web-ready fonts with copy-and-paste-ready code snippets for adding them to your document. Another great feature on this site is the @font-face generator, which gives you the ability to convert your preexisting fonts into all the web-compatible formats. Before getting carried away and converting all your favorite fonts into web-ready formats and integrating them into your work, it is worth checking the End User License Agreement, or EULA, that came with the font in the first place. Converting many available fonts for use on the web will break license agreements and could cause legal issues for you down the road. Opacity More commonly known as "alpha" to the Flash developer, setting the opacity of an element not only allows you to change the look and feel of your designs, but also enables features such as content that fades in and out. As simple as this concept seems, it is relatively new to the list of CSS properties available to web developers. Setting the opacity of an element is extremely easy and looks something like the following:

#example {
  opacity: 0.5;
}

As you can see from the preceding example, as in ActionScript 3, the opacity value is a numerical value between 0 and 1. The preceding example would display an element at 50 percent transparency. The opacity property in CSS3 is now supported in all the major browsers, so there is no need to worry about alternative property syntax when declaring it. RGB and RGBA coloring When dealing with color values in CSS, many developers typically use hexadecimal values, which resemble something like #000000 to declare the use of the color black.
Colors can also be written in their RGB representation in CSS by using the rgb() or rgba() notation in place of a hexadecimal value. As the name suggests, the rgba color notation requires a fourth parameter, which declares the color's alpha transparency, or opacity amount. Using RGBA in CSS3 rather than hexadecimal colors can be beneficial for a couple of reasons. Suppose you have just created a div element that will be displayed on top of existing content within your web page layout. If you want to give the div a specific background color but wish for only that background to be semi-transparent, and not the interior content, the RGBA color declaration lets you do this easily, as you can set the color's transparency:

#example {
  /* Background opacity */
  background: rgba(0, 0, 0, 0.5); /* Black, 50% opacity */
  /* Box-shadow */
  box-shadow: 1px 2px 3px 4px rgba(255, 255, 255, 0.8); /* White, 80% opacity */
  /* Text opacity */
  color: rgba(255, 255, 255, 1); /* White, no transparency */
  color: rgb(255, 255, 255); /* This would accomplish the same styling */
  /* Text drop shadows (with opacity) */
  text-shadow: 5px 5px 3px rgba(135, 100, 240, 0.5);
}

As you can see in the preceding example, you can freely use RGB and RGBA values rather than hexadecimal anywhere color values are required in CSS syntax. Element transforms Personally, I find CSS3 transforms to be one of the most exciting and fun new features in CSS. Transforming assets in the Flash IDE, as well as with ActionScript, has always been easily accessible and easy to implement. Transforming HTML elements is a relatively new feature in CSS and is still gaining full support in all the modern browsers. Transforming an element allows you to manipulate its shape and size, opening up a ton of possibilities for animations and visual effects without the need to prepare the source assets beforehand. When we refer to "transforming an element", we are actually describing a number of properties that can be applied to the transformation to give it different characteristics. If you have transformed objects in Flash, or possibly in Photoshop, these properties may be familiar to you. Translate For a Flash developer used to dealing primarily with X and Y coordinates when positioning elements, the CSS3 translate transform is a very handy way of placing elements, and it works on the same principle. The translate property takes two parameters, the X and Y values by which to translate, or effectively move, the element:

transform: translate(-25px, -25px);

Unfortunately, to get your transforms to work in all browsers, you need to target each of them when appending transform styles. Therefore, the standard transform style and property now look something like this:

transform: translate(-25px, -25px);
-ms-transform: translate(-25px, -25px); /* IE 9 */
-moz-transform: translate(-25px, -25px); /* Firefox */
-webkit-transform: translate(-25px, -25px); /* Safari and Chrome */
-o-transform: translate(-25px, -25px); /* Opera */

Rotate Rotation is pretty self-explanatory and extremely easy to implement.
The rotate property takes a single parameter specifying the amount of rotation, in degrees, to apply to the given element:

transform: rotate(45deg);
-ms-transform: rotate(45deg); /* IE 9 */
-moz-transform: rotate(45deg); /* Firefox */
-webkit-transform: rotate(45deg); /* Safari and Chrome */
-o-transform: rotate(45deg); /* Opera */

It is worth noting that even though the supplied value is always intended to be in degrees, it must always have deg appended for the value to be properly recognized. Scale Just like rotate transforms, scaling is pretty straightforward. The scale property requires two parameters, which declare the scale amounts for X and Y:

transform: scale(0.5, 2);
-ms-transform: scale(0.5, 2); /* IE 9 */
-moz-transform: scale(0.5, 2); /* Firefox */
-webkit-transform: scale(0.5, 2); /* Safari and Chrome */
-o-transform: scale(0.5, 2); /* Opera */

Skew Skewing an element results in the angling of the X and Y axes:

transform: skew(10deg, 20deg);
-ms-transform: skew(10deg, 20deg); /* IE 9 */
-moz-transform: skew(10deg, 20deg); /* Firefox */
-webkit-transform: skew(10deg, 20deg); /* Safari and Chrome */
-o-transform: skew(10deg, 20deg); /* Opera */

The following illustration is a representation of skewing an image with the preceding properties: Matrix The matrix property combines all of the preceding transforms into a single property and can eliminate many extra lines of CSS in your source:

transform: matrix(0.586, 0.8, -0.8, 0.586, 40, 20);
-ms-transform: matrix(0.586, 0.8, -0.8, 0.586, 40, 20); /* IE 9 */
-moz-transform: matrix(0.586, 0.8, -0.8, 0.586, 40, 20); /* Firefox */
-webkit-transform: matrix(0.586, 0.8, -0.8, 0.586, 40, 20); /* Safari and Chrome */
-o-transform: matrix(0.586, 0.8, -0.8, 0.586, 40, 20); /* Opera */

The preceding example uses the CSS transform matrix property to apply multiple transform styles in a single call. The matrix property requires six parameters to rotate, scale, move, and skew the element. Using the matrix property is only really useful when you actually need to implement all of the transform properties at once; if you only need one aspect of element transforms, you will be better off using just that CSS style property. 3D transforms Up until now, all of the transform properties we have reviewed have been two-dimensional transformations. CSS3 now supports 3D as well as 2D transforms. One of the best parts of CSS3 3D transforms is the fact that many devices and browsers support hardware acceleration, allowing this complex graphical processing to be done on your video card's GPU. At the time of writing this, only Chrome, Safari, and Firefox have support for CSS 3D transforms. Interested in what browsers will support all these great HTML5 features before you start developing? Check out http://caniuse.com to see what popular browsers support in a simple, easy-to-use website. When dealing with elements in a 3D world, we make use of the Z coordinate, which enables some new transform properties:

transform: rotateX(angle)
transform: rotateY(angle)
transform: rotateZ(angle)
transform: translateZ(px)
transform: scaleZ(px)

Let's create a 3D cube from HTML elements to put all of these properties into a working example.
To start creating our 3D cube, we begin by writing the HTML elements that will contain the cube, as well as the elements making up the cube itself:

<body>
  <div class="container">
    <div id="cube">
      <div class="front"></div>
      <div class="back"></div>
      <div class="right"></div>
      <div class="left"></div>
      <div class="top"></div>
      <div class="bottom"></div>
    </div>
  </div>
</body>

This HTML creates a simple layout for our cube by creating not only each of the six sides that make up a cube, with specific class names, but also the container for the entire cube, as well as the main container to display all of our page content. Of course, since there is no internal content in these containers and no styling yet, opening this HTML file in your browser would yield an empty page. So let's start writing our CSS to make all of these elements visible and position each to form our three-dimensional cube. We will start by setting up our main containers, which position our content and contain the cube's sides:

.container {
  width: 640px;
  height: 360px;
  position: relative;
  margin: 200px auto;
  /* Currently only supported by WebKit browsers. */
  -webkit-perspective: 1000px;
  perspective: 1000px;
}
#cube {
  width: 640px;
  height: 320px;
  position: absolute;
  /* Let the transformed child elements preserve the 3D transformations: */
  transform-style: preserve-3d;
  -webkit-transform-style: preserve-3d;
  -moz-transform-style: preserve-3d;
}

The container class is our main element, which contains all of the other elements in this example. After appending a width and height, we set the top margin to 200px to push the display down the page a bit for better viewing, and the left and right margins to auto, which aligns the element in the center of the page:

#cube div {
  display: block;
  position: absolute;
  border: 1px solid #000000;
  width: 640px;
  height: 320px;
  opacity: 0.8;
}

By defining properties on #cube div, we apply these styles to every div element within the #cube element. We are also cheating the cube a little by setting the width and height to rectangular proportions, as the intention is to add videos to each of the cube's sides once we have structured and positioned it. With the basic cube-side styles appended, it's time to start transforming each of the sides to form the three-dimensional cube. We will start with the front of the cube by translating it on the Z axis, bringing it closer to the perspective:

#cube .front {
  -webkit-transform: translateZ(320px);
  -moz-transform: translateZ(320px);
  transform: translateZ(320px);
}

To append this style to our element in all modern browsers, we need to specify the property in multiple syntaxes, one for each browser that doesn't support the default transform property: The preceding screenshot shows what has happened to the .front div after appending a Z translation of 320px. The larger rectangle is the .front div, which is now 320px closer to our perspective. For simplicity's sake, let's do the same to the .back div and push it 320px away from the perspective:

#cube .back {
  -webkit-transform: rotateX(-180deg) rotate(-180deg) translateZ(320px);
  -moz-transform: rotateX(-180deg) rotate(-180deg) translateZ(320px);
  transform: rotateX(-180deg) rotate(-180deg) translateZ(320px);
}

As you can see from the preceding code, to move the .back element into place without placing it upside down, we flip the element by 180 degrees on the X axis and then translate Z by 320px, just as we did for .front.
Note that we didn't need to set a negative value on the Z translation because the element was flipped. With the .back CSS styles in place, our cube should look like the following: Now the smallest rectangle visible is the element with the class name .back, the largest is our .front element, and the middle rectangles are the remaining elements still to be transformed. To position the sides of our cube, we need to rotate the side elements on the Y axis to get them facing the proper direction. Once they are rotated into place, we can translate the position on the Z axis to push each out from the center, as we did with the front and back faces:

#cube .right {
  -webkit-transform: rotateY(90deg) translateZ( 320px );
  -moz-transform: rotateY(90deg) translateZ( 320px );
  transform: rotateY(90deg) translateZ( 320px );
}

With the right side in place, we can do the same for the left side, rotating it in the opposite direction to get it facing the other way:

#cube .left {
  -webkit-transform: rotateY(-90deg) translateZ( 320px );
  -moz-transform: rotateY(-90deg) translateZ( 320px );
  transform: rotateY(-90deg) translateZ( 320px );
}

Now that all four sides of our cube are aligned properly, we can finalize the cube positioning by aligning the top and bottom sides. To size the top and bottom properly, we set their own width and height to override the initial values set in the #cube div styles:

#cube .top {
  width: 640px;
  height: 640px;
  -webkit-transform: rotateX(90deg) translateZ( 320px );
  -moz-transform: rotateX(90deg) translateZ( 320px );
  transform: rotateX(90deg) translateZ( 320px );
}
#cube .bottom {
  width: 640px;
  height: 640px;
  -webkit-transform: rotateX(-90deg) translateZ( 0px );
  -moz-transform: rotateX(-90deg) translateZ( 0px );
  transform: rotateX(-90deg) translateZ( 0px );
}

To position the top and bottom sides properly, we rotate the .top and .bottom elements ±90 degrees on the X axis to get them facing up and down, and we only need to translate the top on the Z axis to raise it to the proper height to connect with all of the other sides. With all of those transforms appended to our layout, the resulting cube should look like the following: Although it looks 3D, since there is nothing in the containers, the perspective isn't really showing off our cube very well. So let's add some content, such as a video on each of the sides of the cube, to get a better visualization of our work. Within each of the sides, let's add the same HTML5 video element code:

<video width="640" height="320" autoplay="true" loop="true">
  <source src="cube-video.mp4" type="video/mp4">
  <source src="cube-video.webm" type="video/webm">
  Your browser does not support the video tag.
</video>

Since we have not added the element playback controls, in order to keep more of the cube visible, our video element is set to autoplay the video as well as loop the playback on completion. Now we get a result that properly demonstrates what 3D transforms can do and is a little more visually appealing: Since we set the opacity of each of the cube's sides, we can now see the videos playing on every visible side; pretty cool! Since we are already here, why not kick it up one more notch and add user interaction to this cube, so we can spin it around and see the video on each side. To perform this user interaction, we need to use JavaScript to translate the mouse coordinates on the page into the X and Y 3D rotation of our cube.
So let's start by creating the JavaScript to listen for mouse events:

window.addEventListener("load", init, false);

function init() {
  // Listen for mouse movement
  window.addEventListener('mousemove', onMouseMove, false);
}

function onMouseMove(e) {
  var mouseX = 0;
  var mouseY = 0;
  // Get the mouse position
  if (e.pageX || e.pageY) {
    mouseX = e.pageX;
    mouseY = e.pageY;
  } else if (e.clientX || e.clientY) {
    mouseX = e.clientX + document.body.scrollLeft + document.documentElement.scrollLeft;
    mouseY = e.clientY + document.body.scrollTop + document.documentElement.scrollTop;
  }
  console.log("Mouse Position: x:" + mouseX + " y:" + mouseY);
}

As the preceding example shows, when the mousemove event fires and calls the onMouseMove function, we run some conditionals to retrieve the mouse position reliably. Since, like so many other parts of web development, retrieving the mouse coordinates differs from browser to browser, we have added a simple condition that attempts to gather the mouse X and Y in a couple of different ways. With the mouse position ready to be translated into the transform rotation of our cube, there is one final bit of preparation to complete before applying the CSS style updates. Since different browsers expose the CSS transform style under different property names, we need to figure out, in JavaScript and at runtime, which name to use so that our script runs in all browsers. The following code does just that: by declaring an array of the possible property names and checking the type of each as an element style property, we can find which one is not undefined and therefore know it can be used for CSS transform styles:

// Get the supported transform property
var availableProperties = ['transform', 'MozTransform', 'WebkitTransform', 'msTransform', 'OTransform'];
// Loop over each of the properties
for (var i = 0; i < availableProperties.length; i++) {
  // Check if the type of the property style is a string (that is, valid)
  if (typeof document.documentElement.style[availableProperties[i]] == 'string') {
    // If we found the supported property, assign it to a variable for later use.
    var supportedTransformProperty = availableProperties[i];
  }
}

Now that we have the user's mouse position and the proper property name for CSS transform updates, we can put it all together and finally have 3D rotational control of our video cube:

<script>
var supportedTransformProperty;

window.addEventListener("load", init, false);

function init() {
  // Get the supported transform property
  var availableProperties = ['transform', 'MozTransform', 'WebkitTransform', 'msTransform', 'OTransform'];
  for (var i = 0; i < availableProperties.length; i++) {
    if (typeof document.documentElement.style[availableProperties[i]] == 'string') {
      supportedTransformProperty = availableProperties[i];
    }
  }
  // Listen for mouse movement
  window.addEventListener('mousemove', onMouseMove, false);
}

function onMouseMove(e) {
  // Get the mouse position
  if (e.pageX || e.pageY) {
    mouseX = e.pageX;
    mouseY = e.pageY;
  } else if (e.clientX || e.clientY) {
    mouseX = e.clientX + document.body.scrollLeft + document.documentElement.scrollLeft;
    mouseY = e.clientY + document.body.scrollTop + document.documentElement.scrollTop;
  }
  // Update the cube rotation
  rotateCube(mouseX, mouseY);
}

function rotateCube(posX, posY) {
  // Update the CSS transform styles
  document.getElementById("cube").style[supportedTransformProperty] = 'rotateY(' + posX + 'deg) rotateX(' + (posY * -1) + 'deg)';
}
</script>

Even though we have attempted to allow for multi-browser use of this example, it is worth opening it up in each browser to see how something like 3D transforms with heavy internal content runs. At the time of writing this, the WebKit browsers were the easy choice for viewing content like this, as browsers such as Firefox and Internet Explorer rendered this example at a much slower speed and lower quality: Transitions With CSS3, we can add an effect when changing from one style to another, without using Flash animations or JavaScript:

div {
  transition: width 2s;
  -moz-transition: width 2s; /* Firefox 4 */
  -webkit-transition: width 2s; /* Safari and Chrome */
  -o-transition: width 2s; /* Opera */
}

If the duration is not specified, the transition will have no effect, because the default value is 0. Multiple properties can also be transitioned at once:

div {
  transition: width 2s, height 2s, transform 2s;
  -moz-transition: width 2s, height 2s, -moz-transform 2s;
  -webkit-transition: width 2s, height 2s, -webkit-transform 2s;
  -o-transition: width 2s, height 2s, -o-transform 2s;
}

It is worth noting that Internet Explorer currently does not have support for CSS3 transitions. Browser compatibility If you haven't noticed yet, the battle of browser compatibility is one of the biggest aspects of a web developer's job. Over time, many great services and applications have been created to help developers overcome these hurdles in a much simpler manner than trial-and-error techniques. Websites such as http://css3test.com, http://caniuse.com, and http://html5readiness.com are all great resources for keeping on top of HTML5 specification development and browser support for all the features within.
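To tie this back to the Modernizr discussion at the start of this article, feature checks in script are usually one-liners. The following is a minimal sketch; it assumes Modernizr is loaded on the page and that the boxshadow and csstransforms3d detects are included in your Modernizr build (detect names can vary between versions, so verify them against your copy):

if (Modernizr.boxshadow) {
  // The browser renders box-shadow natively; no fallback image is needed.
  document.body.className += ' has-boxshadow';
} else {
  // Fall back to a pre-rendered shadow image for older browsers.
  document.body.className += ' no-boxshadow';
}

if (!Modernizr.csstransforms3d) {
  // Replace the 3D video cube with static content for browsers without 3D support.
  document.getElementById('cube').style.display = 'none';
}

Your stylesheet can then branch on whichever class ends up on the body element to serve either the rich or the fallback presentation.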
Building Form Applications in Joomla! using CK Forms

Packt
05 Jan 2010
4 min read
(For more resources on Joomla!, see here.) Add a quick survey form to your Joomla! site A dynamic web application often means a database at the back-end. It not only takes data from the database and shows it, but also collects data from visitors. For example, say you want to add a quick survey to your Joomla! site. Instead of searching for different extensions for survey forms, you can quickly build a survey form using an extension named CK Forms. This extension is freely available for download at http://ckforms.cookex.eu/download/download.php. For building a quick survey, follow the steps below: Download the CK Forms extension from http://ckforms.cookex.eu/download/download.php and install it from the Extensions | Install/Uninstall screen in the Joomla! administration area. Once it is installed successfully, you see the component CK Forms in the Components menu. Select Components | CK Forms and you get the CK Forms screen as shown below: For creating a new form, click on the New icon in the toolbar. That opens up the CK Forms: [New] screen. In the General tab, type the name and title of the form. The name should be without spaces, but the title can be fairly long and contain spaces. Then select Yes in the Published field if you want to show the form now. In the Description field, type a brief description of the form, which will be displayed before the form. In the Results tab, select Yes in the Save result field. This will store the data submitted through the form in the database and allow you to see the data later. In the Text result field, type the text that you want to display just after submission of the form. In the Redirect URL field, type the URL of a page where the user will be redirected after submitting the form. In the E-mail tab, select Yes in the Email result field if you want the results of the form to be e-mailed to you. If you select Yes in this field, provide the mail-to, mail CC, and mail BCC addresses. Also specify the subject of the mail in the Mail subject field. Select Yes in the Include fileupload file field if you want the uploaded file to be mailed too. If an e-mail field is present in the form and the visitor submits his/her e-mail address, you can send an acknowledgement of receiving his/her data through e-mail. For sending such receipt messages, select Yes in the E-mail receipt field. Then specify the e-mail receipt subject and e-mail receipt text. You can also include the data and file submitted via the form with this receipt e-mail. For doing so, select Yes in both the Include data and Include fileupload file fields. In the Advanced tab, you can specify whether you want to use a captcha or not. Select Yes in the Use Captcha field. Then type some tip text in the Captcha tips text field, for example, 'Please type the text shown in the image'. You can also specify error text in the Captcha custom error text field. In the Uploaded files path field, you can specify where the files uploaded by users will be stored. The directory mentioned here must be writable. You can also specify the maximum size of an uploaded file in the File uploaded maximum size field. To display the Powered by CK Forms text below the form, select Yes in the Display "Powered by" text field. In the Front-end display tab, you can choose to display the IP address of the visitor. You can view help on working with CK Forms by clicking on the Help tab. After filling in all the fields, click on the Save icon in the toolbar. That will save the form and take you to the CK Forms screen. Now you will see the newly created form listed on this screen.
Creating a Shipping Module

Packt
17 Feb 2014
12 min read
(For more resources related to this topic, see here.) Shipping ordered products to customers is one of the key parts of the e-commerce flow. In most cases, a shop owner has a contract with a shipping handler, and each handler has its own business rules. In a standard Magento, the following shipping handlers are supported: DHL FedEx UPS USPS If your handler is not on the list, have a look at whether a module is available on Magento Connect. If not, you can configure a standard shipping method, or you can create your own, which is what we will do in this article. Initializing module configurations In this recipe, we will create the necessary files for a shipping module, which we will extend with more features using the recipes of this article. Getting ready Open your code editor with the Magento project. Also, get access to the backend, where we will check some things. How to do it... The following steps describe how we can create the configuration for a shipping module: Create the following folders:

app/code/local/Packt/
app/code/local/Packt/Shipme/
app/code/local/Packt/Shipme/etc/
app/code/local/Packt/Shipme/Model/
app/code/local/Packt/Shipme/Model/Carrier

Create the module file named Packt_Shipme.xml in the folder app/etc/modules with the following content:

<?xml version="1.0"?>
<config>
  <modules>
    <Packt_Shipme>
      <active>true</active>
      <codePool>local</codePool>
      <depends>
        <Mage_Shipping />
      </depends>
    </Packt_Shipme>
  </modules>
</config>

Create a config.xml file in the folder app/code/local/Packt/Shipme/etc/ with the following content:

<?xml version="1.0" encoding="UTF-8"?>
<config>
  <modules>
    <Packt_Shipme>
      <version>0.0.1</version>
    </Packt_Shipme>
  </modules>
  <global>
    <models>
      <shipme>
        <class>Packt_Shipme_Model</class>
      </shipme>
    </models>
  </global>
  <default>
    <carriers>
      <shipme>
        <active>1</active>
        <model>shipme/carrier_shipme</model>
        <title>Shipme shipping</title>
        <express_enabled>1</express_enabled>
        <express_title>Express delivery</express_title>
        <express_price>4</express_price>
        <business_enabled>1</business_enabled>
        <business_title>Business delivery</business_title>
        <business_price>5</business_price>
      </shipme>
    </carriers>
  </default>
</config>

Clear the cache and navigate in the backend to System | Configuration | Advanced | Disable Modules Output. Observe that the Packt_Shipme module is on the list. At this point, the module is initialized and working. Now, we have to create a system.xml file where we will put the configuration parameters for our shipping module. Create the file app/code/local/Packt/Shipme/etc/system.xml. In this file, we will create the configuration parameters for our shipping module. When you paste the following code in the file, you will create an extra group in the shipping method's configuration.
In this group, we can set the settings for the new shipping method: <?xml version="1.0" encoding="UTF-8"?> <config> <sections> <carriers> <groups> <shipme translate="label" module="shipping"> <label>Shipme</label> <sort_order>15</sort_order> <show_in_default>1</show_in_default> <show_in_website>1</show_in_website> <show_in_store>1</show_in_store> <fields> <!-- Define configuration fields below --> <active translate="label"> <label>Enabled</label> <frontend_type>select</frontend_type> <source_model>adminhtml/ system_config_source_yesno</source_model> <sort_order>10</sort_order> <show_in_default>1</show_in_default> <show_in_website>1</show_in_website> <show_in_store>1</show_in_store> </active> <title translate="label"> <label>Title</label> <frontend_type>text</frontend_type> <sort_order>20</sort_order> <show_in_default>1</show_in_default> <show_in_website>1</show_in_website> <show_in_store>1</show_in_store> </title> </fields> </shipme> </groups> </carriers> </sections> </config> Clear the cache and navigate in the backend to the shipping method configuration page. To do that, navigate to System | Configuration | Sales | Shipping methods. You will see that an extra group is added as shown in the following screenshot: You will see that there is a new shipping method called Shipme. We will extend this configuration with some values. Add the following code under the <fields> tag of the module: <active translate="label"> <label>Enabled</label> <frontend_type>select</frontend_type> <source_model>adminhtml/system_config_source_yesno</source_ model> <sort_order>10</sort_order> <show_in_default>1</show_in_default> <show_in_website>1</show_in_website> <show_in_store>1</show_in_store> </active> <title translate="label"> <label>Title</label> <frontend_type>text</frontend_type> <sort_order>20</sort_order> <show_in_default>1</show_in_default> <show_in_website>1</show_in_website> <show_in_store>1</show_in_store> </title> <express_enabled translate="label"> <label>Enable express</label> <frontend_type>select</frontend_type> <source_model>adminhtml/system_config_source_yesno</source_ model> <sort_order>30</sort_order> <show_in_default>1</show_in_default> <show_in_website>1</show_in_website> <show_in_store>1</show_in_store> </express_enabled> <express_title translate="label"> <label>Title express</label> <frontend_type>text</frontend_type> <sort_order>40</sort_order> <show_in_default>1</show_in_default> <show_in_website>1</show_in_website> <show_in_store>1</show_in_store> </express_title> <express_price translate="label"> <label>Price express</label> <frontend_type>text</frontend_type> <sort_order>50</sort_order> <show_in_default>1</show_in_default> <show_in_website>1</show_in_website> <show_in_store>1</show_in_store> </express_price> <business_enabled translate="label"> <label>Enable business</label> <frontend_type>select</frontend_type> <source_model>adminhtml/system_config_source_yesno</source_ model> <sort_order>60</sort_order> <show_in_default>1</show_in_default> <show_in_website>1</show_in_website> <show_in_store>1</show_in_store> </business_enabled> <business_title translate="label"> <label>Title business</label> <frontend_type>text</frontend_type> <sort_order>70</sort_order> <show_in_default>1</show_in_default> <show_in_website>1</show_in_website> <show_in_store>1</show_in_store> </business_title> <business_price translate="label"> <label>Price business</label> <frontend_type>text</frontend_type> <sort_order>80</sort_order> <show_in_default>1</show_in_default> <show_in_website>1</show_in_website> 
<show_in_store>1</show_in_store> </business_price> Clear the cache and reload the backend. You will now see the other configuration fields under the Shipme shipping method, as shown in the following screenshot: How it works... The first thing we did was create the necessary files to initialize the module. The following files are required to initialize a module: app/etc/modules/Packt_Shipme.xml app/code/local/Packt/Shipme/etc/config.xml In the first file, we activate the module with the <active> tag. The <codePool> tag describes that the module is located in the local code pool, which represents the folder app/code/local/. In this file, there is also the <depends> tag. First, this checks whether the Mage_Shipping module is installed. If not, Magento will throw an exception. If the module is available, the dependency will load this module after the Mage_Shipping module. This makes it possible to rewrite some values from the Mage_Shipping module. In the second file, config.xml, we configured everything that we will need in this module, namely the following: The version number (0.0.1) The models Some default values for the configuration values The last thing we did was create a system.xml file so that we can build a custom configuration for the shipping module. The configuration in the system.xml file adds some extra fields to the shipping method configuration, which is available in the backend under the menu System | Configuration | Sales | Shipping methods. In this module, we created a new shipping handler called Shipme. Within this handler, you can configure two shipping options: express and business. In the system.xml file, we created the fields to configure the visibility, name, and price of each option. See also In this recipe, we used the system.xml file of the module to create the configuration values. Writing an adapter model A new shipping module was initialized in the previous recipe; that was the preparation for the business logic we will add now. In this recipe, we will add a model with the business logic for the shipping method. The model is called an adapter class because Magento requires an adapter class for each shipping method. This class extends the Mage_Shipping_Model_Carrier_Abstract class and will be used for the following things: Making the shipping method available Calculating the shipping costs Setting the title of the shipping method in the frontend How to do it... Perform the following steps to create the adapter class for the shipping method: Create the folder app/code/local/Packt/Shipme/Model/Carrier if it doesn't already exist.
In this folder, create a file named Shipme.php and add the following content to it:

<?php
class Packt_Shipme_Model_Carrier_Shipme
    extends Mage_Shipping_Model_Carrier_Abstract
    implements Mage_Shipping_Model_Carrier_Interface
{
    protected $_code = 'shipme';

    public function collectRates(Mage_Shipping_Model_Rate_Request $request)
    {
        $result = Mage::getModel('shipping/rate_result');

        // Check if the express method is enabled
        if ($this->getConfigData('express_enabled')) {
            $method = Mage::getModel('shipping/rate_result_method');
            $method->setCarrier($this->_code);
            $method->setCarrierTitle($this->getConfigData('title'));
            $method->setMethod('express');
            $method->setMethodTitle($this->getConfigData('express_title'));
            $method->setCost($this->getConfigData('express_price'));
            $method->setPrice($this->getConfigData('express_price'));
            $result->append($method);
        }

        // Check if the business method is enabled
        if ($this->getConfigData('business_enabled')) {
            $method = Mage::getModel('shipping/rate_result_method');
            $method->setCarrier($this->_code);
            $method->setCarrierTitle($this->getConfigData('title'));
            $method->setMethod('business');
            $method->setMethodTitle($this->getConfigData('business_title'));
            $method->setCost($this->getConfigData('business_price'));
            $method->setPrice($this->getConfigData('business_price'));
            $result->append($method);
        }

        return $result;
    }

    public function isActive()
    {
        $active = $this->getConfigData('active');
        return $active == 1 || $active == 'true';
    }

    public function getAllowedMethods()
    {
        return array('shipme' => $this->getConfigData('name'));
    }
}

Save the file and clear the cache; your adapter model is now created. How it works... The class we just created handles all the business logic that is needed for the shipping method. Because this adapter class is an extension of the Mage_Shipping_Model_Carrier_Abstract class, we can overwrite some methods to customize the standard business logic. The first method we overwrite is the isActive() function. In this function, we have to return true or false to say whether the module is active. In our code, we activate the module based on the system configuration field active. The second method is the collectRates() function. This function is used to set the right parameters for every shipping method. For every shipping method, we can set the title and price. The class implements the interface Mage_Shipping_Model_Carrier_Interface. In this interface, two functions are declared: the isTrackingAvailable() and getAllowedMethods() functions. We created the getAllowedMethods() function in the adapter class; the isTrackingAvailable() function is declared in the parent class Mage_Shipping_Model_Carrier_Abstract. We configured two options under the Shipme shipping method, called Express delivery and Business delivery. We check whether each is enabled in the configuration and set the configured title and price for each option. The last thing to do is return the right values. We have to return an instance of the class Mage_Shipping_Model_Rate_Result. We created an empty instance of the class, to which we append the methods when they are available. To add a method, we use the function append($method), which requires an instance of the class Mage_Shipping_Model_Rate_Result_Method, as created in the two if statements.
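Before testing through the checkout, you can quickly verify that Magento resolves the new carrier. The following is a minimal sketch, not part of the recipe itself; it assumes a small hypothetical test script placed in the Magento root that bootstraps the framework via app/Mage.php, and it uses the shipme/carrier_shipme alias defined in our config.xml:

<?php
// Hypothetical smoke test, run from the Magento root.
require_once 'app/Mage.php';
Mage::app();

// Resolve the adapter through the alias from config.xml.
$carrier = Mage::getModel('shipme/carrier_shipme');

// Should print true when the 'active' configuration flag is set.
var_dump($carrier->isActive());

// Should list the methods the Shipme handler exposes.
print_r($carrier->getAllowedMethods());

If both calls return the expected values, the configuration files and the adapter class are wired together correctly.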
Data Tables and DataTables Plugin in jQuery 1.3 with PHP

Packt
19 Nov 2009
10 min read
In this article by Kae Verens, we will look at: How to install and use the DataTables plugin How to load data pages on request from the server Searching and ordering the data From time to time, you will want to show data in your website and allow the data to be sorted and searched. It always impresses me that whenever I need to do anything with jQuery, there are usually plugins available which are exactly, or close to, what I need. The DataTables plugin allows sorting, filtering, and pagination on your data. Here's an example screen from the project we will build in this article. The data is from a database of cities of the world, filtered to find out if there is any place called nowhere in the world: Get your copy of DataTables from http://www.datatables.net/, and extract it into the directory datatables, which is in the same directory as the jquery.min.js file. What the DataTables plugin does is take a large table, paginate it, and allow the columns to be ordered and the cells to be filtered. Setting up DataTables Setting up DataTables involves setting up a table so that it has distinct <thead> and <tbody> sections, and then simply running dataTable() on it. As a reminder, tables in HTML have a header and a body. The HTML elements <thead> and <tbody> are optional according to the specifications, but the DataTables plugin requires that you put them in, so that it knows what to work with. These elements may not be familiar to you, as they are usually not necessary when you are writing your web pages and most people leave them out, but DataTables needs to know which area of the table to turn into a navigation bar and which area will contain the data, so you need to include them. Client-side code The first example in this article is purely a client-side one. We will provide the data in the same page that is demonstrating the table. Copy the following code into a file in a new demo directory and name it tables.html:

<html>
<head>
<script src="../jquery.min.js"></script>
<script src="../datatables/media/js/jquery.dataTables.js"></script>
<style type="text/css">
  @import "../datatables/media/css/demo_table.css";
</style>
<script>
  $(document).ready(function(){
    $('#the_table').dataTable();
  });
</script>
</head>
<body>
<div style="width:500px">
  <table id="the_table">
    <thead>
      <tr>
        <th>Artist / Band</th><th>Album</th><th>Song</th>
      </tr>
    </thead>
    <tbody>
      <tr><td>Muse</td><td>Absolution</td><td>Sing for Absolution</td></tr>
      <tr><td>Primus</td><td>Sailing The Seas Of Cheese</td><td>Tommy the Cat</td></tr>
      <tr><td>Nine Inch Nails</td><td>Pretty Hate Machine</td><td>Something I Can Never Have</td></tr>
      <tr><td>Horslips</td><td>The Táin</td><td>Dearg Doom</td></tr>
      <tr><td>Muse</td><td>Absolution</td><td>Hysteria</td></tr>
      <tr><td>Alice In Chains</td><td>Dirt</td><td>Rain When I Die</td></tr>
      <!-- PLACE MORE SONGS HERE -->
    </tbody>
  </table>
</div>
</body>
</html>

When this is viewed in the browser, we immediately have a working data table: Note that the rows are in alphabetical order according to Artist / Band. DataTables automatically sorts your data initially based on the first column. The HTML provided has a <div> wrapper around the table, set to a fixed width. The reason for this is that the Search box at the top and the pagination buttons at the bottom are floated to the right, outside the HTML table. The <div> wrapper is provided to try to keep them at the same width as the table. There are 14 entries in the HTML, but only 10 of them are shown here.
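The defaults you see here (ten rows per page, sorting on the first column) can be changed at initialization time. The following is a small sketch using two of the plugin's options as they were named in this generation of DataTables: iDisplayLength sets the page size and aaSorting sets the initial sort; check the option names against the documentation of the version you downloaded, as they were renamed in later releases:

$(document).ready(function(){
  $('#the_table').dataTable({
    'iDisplayLength': 25, // show 25 rows per page instead of the default 10
    'aaSorting': [[ 2, 'desc' ]] // initially sort by the third column (Song), descending
  });
});

Everything that follows assumes the default settings.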
Clicking the arrow on the right side at the bottom-right pagination area loads up the next page: And finally, we also have the ability to sort by column and search all data: In this screenshot, we have the data filtered by the word horslips, and have ordered Song in descending order by clicking the header twice. With just this example, you can probably manage quite a few of your lower-bandwidth information tables. By this, I mean that you could run the DataTables plugin on complete tables of a few hundred rows. Beyond that, the bandwidth and memory usage would start affecting your reader's experience. In that case, it's time to go on to the next section and learn how to serve the data on demand using jQuery and Ajax. As an example of usage, a user list might reasonably be printed entirely to the page and then converted using the DataTable plugin because, for smaller sites, the user list might only be a few tens of rows and thus, serving it over Ajax may be overkill. It is more likely, though, that the kind of information that you would really want this applied to is part of a much larger data set, which is where the rest of the article comes in! Getting data from the server The rest of the article will build up a sample application, which is a search application for cities of the world. This example will need a database, and a large data set. I chose a list of city names and their spelling variants as my data set. You can get a list of this type online by searching. The exact point at which you decide a data set is large enough to require it to be converted to serve over Ajax, instead of being printed fully to the HTML source, depends on a few factors, which are mostly subjective. A quick test is: if you only ever need to read a few pages of the data, yet there are many pages in the source and the HTML is slow to load, then it's time to convert. The database I'm using in the example is MySQL (http://www.mysql.com/). It is trivial to convert the example to use any other database, such as PostgreSQL or SQLite. For your use, here is a short list of large data sets: http://wordlist.sourceforge.net/—Links to collections of words. http://www.gutenberg.org/wiki/Gutenberg:Offline_Catalogs—A list of books placed online by Project Gutenburg. http://www.world-gazetteer.com/wg.php?men=stdl—A list of all the cities in the world, including populations. The reason I chose a city name list is that I wanted to provide a realistic large example of when you would use this. In your own applications, you might also use the DataTables plugin to manage large lists of products, objects such as pages or images, and anything else that can be listed in tabular form and might be very large. The city list I found has over two million variants in it, so it is an extreme example of how to set up a searchable table. It's also a perfect example of why the Ajax capabilities of the DataTables project are important. Just to see the result, I exported all the entries into an HTML table, and the file size was 179 MB. Obviously, too large for a web page. So, let's find out how to break the information into chunks and load it only as needed. Client-side code On the client side, we do not need to provide placeholder data. Simply print out the table, leaving the < tbody > section blank, and let DataTables retrieve the data from the server. 
We're starting a new project here, so create a new directory in your demos section and save the following into it as tables.html: <html> <head> <script src="../jquery.min.js"></script> <script src="../datatables/media/js/jquery.dataTables.js"> </script> <style type="text/css"> @import "../datatables/media/css/demo_table.css"; table{width:100%} </style> <script> $(document).ready(function(){ $('#the_table').dataTable({ 'sAjaxSource':'get_data.php' }); }); </script> </head> <body> <div style="width:500px"> <table id="the_table"> <thead> <tr> <th>Country</th> <th>City</th> <th>Latitude</th> <th>Longitude</th> </tr> </thead> <tbody> </tbody> </table> </div> </body> </html> In this example, we've added a parameter to the .dataTable call, sAjaxSource, which is the URL of the script that will provide the data (the file will be named get_data.php). Server-side code On the server side, we will start off by providing the first ten rows from the database. DataTables expects the data to be returned as a two-dimensional array named aaData. In my own database, I've created a table like this: CREATE TABLE `cities` ( `ccode` char(2) DEFAULT NULL, `city` varchar(87) DEFAULT NULL, `longitude` float DEFAULT NULL, `latitude` float DEFAULT NULL, KEY `city` (`city`(5)) ) ENGINE=MyISAM DEFAULT CHARSET=utf8 Most of the searching will be done on city names, so I've indexed city. Initially, let's just extract the first page of information. Create a file called get_data.php and save it in the same directory as tables.html: <?php // { initialise variables $amt=10; $start=0; // } // { connect to database function dbRow($sql){ $q=mysql_query($sql); $r=mysql_fetch_array($q); return $r; } function dbAll($sql){ $q=mysql_query($sql); while($r=mysql_fetch_array($q))$rs[]=$r; return $rs; } mysql_connect('localhost','username','password'); mysql_select_db('phpandjquery'); // } // { count existing records $r=dbRow('select count(ccode) as c from cities'); $total_records=$r['c']; // } // { start displaying records echo '{"iTotalRecords":'.$total_records.', "iTotalDisplayRecords":'.$total_records.', "aaData":['; $rs=dbAll("select ccode,city,longitude,latitude from cities order by ccode,city limit $start,$amt"); $f=0; foreach($rs as $r){ if($f++) echo ','; echo '["',$r['ccode'],'", "',addslashes($r['city']),'", "',$r['longitude'],'", "',$r['latitude'],'"]'; } echo ']}'; // } In a nutshell, what happens is that the script counts how many cities are there in total, and then returns that count along with the first ten entries to the client browser using JSON as the transport.
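To make the contract between get_data.php and the plugin concrete, the following is the shape of the JSON that the script outputs for the first page. The row values here are invented for illustration; the real output depends on your cities table:

{
  "iTotalRecords": 2000000,
  "iTotalDisplayRecords": 2000000,
  "aaData": [
    ["AF", "Kabul", "69.17", "34.53"],
    ["AL", "Tirana", "19.82", "41.33"]
  ]
}

Each inner array matches the column order of the table in tables.html: country code, city, longitude, and latitude.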
Hello World Program

Packt
20 Apr 2016
12 min read
In this article by Manoj Kumar, author of the book Learning Sinatra, we will write an application. Make sure that you have Ruby installed. We will get a basic skeleton app up and running and see how to structure the application. (For more resources related to this topic, see here.) In this article, we will discuss the following topics: A project that will be used to understand Sinatra Bundler gem File structure of the application Responsibilities of each file Before we begin writing our application, let's write the Hello World application. Getting started The Hello World program is as follows:

1 require 'sinatra'
2
3 get '/' do
4   return 'Hello World!'
5 end

The following is how we run the code:

ruby helloworld.rb

Executing this from the command line will run the application, and the server will listen on port 4567. If we point our browser to http://localhost:4567/, the output will be as shown in the following screenshot: The application To understand how to write a Sinatra application, we will take a small project and discuss every part of the program in detail. The idea We will make a ToDo app and use Sinatra along with a lot of other libraries. The features of the app will be as follows: Each user can have multiple to-do lists Each to-do list will have multiple items To-do lists can be private, public, or shared with a group Items in each to-do list can be assigned to a user or group The modules that we build are as follows: Users: This will manage the users and groups List: This will manage the to-do lists Items: This will manage the items for all the to-do lists Before we start writing the code, let's see what the file structure will be like, understand why each file is required, and learn about some new files. The file structure It is always better to keep certain files in certain folders for better readability. We could dump all the files in the home folder; however, that would make it difficult for us to manage the code: The app.rb file This file is the base file that loads all the other files (such as models, libs, and so on) and starts the application. We can configure various settings of Sinatra here according to the various deployment environments. The config.ru file The config.ru file is generally used when we need to deploy our application with different application servers, such as Passenger, Unicorn, or Heroku. It also makes it easy to maintain the different deployment environments. Gemfile This is one of the interesting things we can do with Ruby applications. As we know, we can use a variety of gems for different purposes. The gems are just pieces of code and are constantly updated. Therefore, sometimes, we need to use specific versions of gems to maintain the stability of our application. We list all the gems that we are going to use for our application, with their versions. Before we discuss how to use this Gemfile, we will talk about the gem bundler. Bundler The gem bundler manages the installation of all the gems and their dependencies. Of course, we need to install the gem bundler manually:

gem install bundler

This will install the latest stable version of the bundler gem. Once we are done with this, we need to create a new file with the name Gemfile (yes, with a capital G) and add the gems that we will use. It is not necessary to add all the gems to Gemfile before starting to write the application.
We can add and remove gems as we require; however, after every change, we need to run the following:

bundle install

This will make sure that all the required gems and their dependencies are installed. It will also create a 'Gemfile.lock' file. Make sure that we do not edit this file; it contains all the gems and their dependency information. Therefore, we now know why we should use a Gemfile. Next is lib/routes.rb, the path of the file that will contain the routes. What is a route? A route is the URL path for which the application serves a web page when requested. For example, when we type http://www.example.com/, the URL path is / and when we type http://www.example.com/something/, /something/ is the URL path. Now, we need to explicitly define all the routes for which we will be serving requests so that our application knows what to return. It is not important to have this file in the lib folder, or to even have it at all. We can also write the routes in the app.rb file. Consider the following examples:

get '/' do
  # code
end

post '/something' do
  # code
end

Both of the preceding routes are valid. The get and post methods are the HTTP methods. The first code block will be executed when a GET request is made on /, and the second one will be executed when a POST request is made on /something. The only reason we are writing the routes in a separate file is to maintain clean code. The responsibilities of the remaining folders are as follows: models/: This folder contains all the files that define the models of the application. When we write the models for our application, we will save them in this folder. public/: This folder contains all our CSS, JavaScript, and image files. views/: This folder will contain all the files that define the views, such as HTML, HAML, and ERB files. The code Now, we know what we want to build. You also have a rough idea about what our file structure will be. When we run the application, the rackup file that we load will be config.ru. This file tells the server what environment to use and which file is the main application to load. Before running the server, we need to write a minimum amount of code. It includes writing three files, as follows: app.rb config.ru Gemfile We can, of course, write these files in any order we want; however, we need to make sure that all three files have sufficient code for the application to work. Let's start with the app.rb file. The app.rb file This is the file that config.ru loads when the application is executed. This file, in turn, loads all the other files that help it to understand the available routes and the underlying model:

1 require 'sinatra'
2
3 class Todo < Sinatra::Base
4   set :environment, ENV['RACK_ENV']
5
6   configure do
7   end
8
9   Dir[File.join(File.dirname(__FILE__),'models','*.rb')].each { |model| require model }
10  Dir[File.join(File.dirname(__FILE__),'lib','*.rb')].each { |lib| load lib }
11
12 end

What does this code do? Let's walk through it:

1 require 'sinatra'

This loads the sinatra gem into memory.

3 class Todo < Sinatra::Base
4   set :environment, ENV['RACK_ENV']
5
6   configure do
7   end
8
9   Dir[File.join(File.dirname(__FILE__),'models','*.rb')].each { |model| require model }
10  Dir[File.join(File.dirname(__FILE__),'lib','*.rb')].each { |lib| load lib }
11
12 end

This defines our main application's class. This skeleton is enough to start the basic application. We inherit the Base class of the Sinatra module.
Before starting the application, we may want to change some basic configuration settings, such as logging, error display, user sessions, and so on. We handle all these configurations through configure blocks. Also, we might need different configurations for different environments. For example, in development mode we might want to see all the errors, whereas in production we don't want the end user to see the error dump. Therefore, we can define separate configurations for different environments. The first step is to set the application environment, as follows:

4 set :environment, ENV['RACK_ENV']

We will see later that we can have multiple configure blocks for multiple environments. This line reads the RACK_ENV system environment variable and sets the application environment accordingly. When we discuss config.ru, we will see how RACK_ENV is set in the first place:

6 configure do
7 end

This is how we define a configure block. Note that we have not told the application which environment this configuration applies to. In such cases, it becomes the generic configuration for all environments, and it is generally the last configuration block. All the environment-specific configurations should be written before this block in order to avoid being overridden:

9 Dir[File.join(File.dirname(__FILE__),'models','*.rb')].each { |model| require model }

If we look at the file structure discussed earlier, we can see that models/ is the directory that contains the model files. We need to import all these files into the application, and we have kept all our model files in the models/ folder:

Dir[File.join(File.dirname(__FILE__),'models','*.rb')]

This returns an array of the files with the .rb extension in the models folder. Doing this avoids writing one require line for each file and having to modify this file again whenever a model is added:

10 Dir[File.join(File.dirname(__FILE__),'lib','*.rb')].each { |lib| load lib }

Similarly, we import all the files in the lib/ folder. In short, app.rb configures our application according to the deployment environment and imports the model files and the other library files before starting the application. Now, let's proceed to write our next file.

The config.ru file

The config.ru file is the rackup file of the application. It loads all the gems and app.rb. We generally pass this file as a parameter to the server, as follows:

1 require 'sinatra'
2 require 'bundler/setup'
3 Bundler.require
4
5 ENV["RACK_ENV"] = "development"
6
7 require File.join(File.dirname(__FILE__), 'app.rb')
8
9 Todo.start!

Working of the code

Let's go through each of the lines, as follows:

1 require 'sinatra'
2 require 'bundler/setup'

The first two lines import the gems, exactly as we would do in other languages. Requiring sinatra loads all the Sinatra classes and helps in listening to requests, while bundler/setup lets bundler manage all the other gems. As we have discussed earlier, we will always use bundler to manage our gems.

3 Bundler.require

This line checks the Gemfile and makes sure that all the available gems match their specified versions and that all the dependencies are met. It does not import all the gems, as not all gems need to be in memory at all times:

5 ENV["RACK_ENV"] = "development"

This line sets the RACK_ENV system environment variable to development. This helps the server know which configuration it needs to use.
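To give a flavor of what we will cover later, environment-specific configure blocks look like this (a sketch using Sinatra's show_exceptions setting, not the final configuration of our app):

configure :development do
  set :show_exceptions, true   # full error dumps while developing
end

configure :production do
  set :show_exceptions, false  # hide error details from end users
end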
We will later see how to manage a single configuration file with different settings for different environments, and how to use one particular set of configurations for a given environment. Note that config.ru is typically not version controlled: it has to be customized depending on whether our environment is development, staging, testing, or production. We may version control a sample config.ru instead. We will discuss this when we talk about deploying our application. Next, we require the main application file, as follows:

7 require File.join(File.dirname(__FILE__), 'app.rb')

We see here that we have used the File class to locate app.rb:

File.dirname(__FILE__)

It is a convention to keep config.ru and app.rb in the same folder, and it is good practice to give the complete file path whenever we require a file, in order to avoid breaking the code. This part of the code returns the path of the folder containing config.ru. Since our main application file is in the same folder as config.ru, we do the following:

File.join(File.dirname(__FILE__), 'app.rb')

This returns the complete file path of app.rb, so line 7 loads the main application file into memory. Now, all we need to do is start the application, as follows:

9 Todo.start!

Note that the start! method is not defined by us in the Todo class in app.rb; it is inherited from the Sinatra::Base class. It starts the application and listens for incoming requests. In short, config.ru checks the availability of all the gems and their dependencies, sets the environment variables, and starts the application.

The easiest file to write is the Gemfile. It has no complex code or logic; it just contains a list of gems and their version details.

Gemfile

In the Gemfile, we need to specify the source from which the gems will be downloaded, followed by the list of gems. So, let's write a Gemfile with the following lines:

1 source 'https://rubygems.org'
2 gem 'bundler', '1.6.0'
3 gem 'sinatra', '1.4.4'

The first line specifies the source. The https://rubygems.org website is a trusted place to download gems, and it hosts a large collection of them. We can browse the site, search for the gems we want to use, read their documentation, and select the exact version for our application. Generally, the latest stable version of bundler is used, so we search the site for bundler and find out its version, and we do the same for the Sinatra gem.

Summary

In this article, you learned how to build a Hello World program with Sinatra, and how to structure a larger application around app.rb, config.ru, and a Gemfile.

Resources for Article:

Further resources on this subject:

- Getting Ready for RubyMotion [article]
- Quick start - your first Sinatra application [article]
- Building tiny Web-applications in Ruby using Sinatra [article]

Introducing Web Application Development in Rails

Packt
25 Feb 2015
8 min read
In this article by Syed Fazle Rahman, author of the book Bootstrap for Rails, we will learn how to present your application in the best possible way, something that has always been a top priority for every web developer. In this mobile-first generation, we have to go with the wind and make our applications compatible with mobiles, tablets, PCs, and every possible display on Earth. Bootstrap is the one-stop solution for all the woes developers have been facing. It creates beautiful responsive designs without any extra effort and without any advanced CSS knowledge. It is a true boon for every developer.

We will be focusing on how to beautify our Rails applications with the help of Bootstrap. We will create a basic Todo application with Rails. We will explore the folder structure of a Rails application and analyze which folders are important for templating a Rails application. This will be helpful if you want to quickly revisit Rails concepts. We will also see how to create views, link them, and style them. The styling in this article will be done traditionally, through the application's default CSS files. Finally, we will discuss how we can speed up the designing process using Bootstrap. In short, we will cover the following topics:

- Why Bootstrap with Rails?
- Setting up a Todo application in Rails
- Analyzing the folder structure of a Rails application
- Creating views
- Styling views using CSS
- Challenges in traditionally styling a Rails application

(For more resources related to this topic, see here.)

Why Bootstrap with Rails?

Rails is one of the most popular Ruby frameworks, currently at its peak in terms of both demand and technology trends. With more than 3,100 members contributing to its development, and tens of thousands of applications already built with it, Rails has set a standard for every other web framework today. Rails was initially developed by David Heinemeier Hansson in 2003 to ease his own development process in Ruby. Later, he was generous enough to release Rails to the open source community, and today it is popularly known as Ruby on Rails. Rails shortens the development life cycle by moving the focus from reinventing the wheel to innovating new features. It is based on the convention over configuration principle, which means that if you follow the Rails conventions, you end up writing much less code than you would otherwise.

Bootstrap, on the other hand, is one of the most popular frontend development frameworks. It was initially developed at Twitter for some of its internal projects. It makes the life of a novice web developer easier by providing most of the reusable components already built and ready to use. Bootstrap can be easily integrated with a Rails development environment through various methods: we can directly use the .css files provided by the framework, or we can extend it through its Sass version and let Rails compile it. Sass is a CSS preprocessor that brings logic and functionality into CSS. It includes features such as variables, functions, mixins, and others. Using the Sass version of Bootstrap is the recommended method in Rails, as it gives various options to customize Bootstrap's default styles easily. Bootstrap also provides various JavaScript components that can be used even by those who don't have any real JavaScript knowledge. These components are required in almost every modern website being built today. Bootstrap with Rails is a deadly combination: you can build applications faster and invest more time in thinking about functionality, rather than rewriting code.
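Since Sass will come up again when we integrate Bootstrap, here is a quick taste of its variables and mixins (an illustrative sketch in SCSS syntax; the names are made up and not part of our Todo app):

// A variable and a mixin
$brand-color: #2c3e50;

@mixin rounded($radius: 4px) {
  border-radius: $radius;
}

.btn-custom {
  @include rounded(6px);
  background-color: $brand-color;
}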
Setting up a Todo application in Rails

I assume that you already have basic knowledge of Rails development. You should also have Rails and Ruby installed on your machine to start with. Let's first understand what this Todo application will do. Our application will allow us to create, update, and delete items from the Todo list. We will first analyze the folders that are created while scaffolding this application, and see which of them are necessary for templating the application. So, let's dip our feet into the water:

1. First, we need to select our workspace, which can be any folder in your system. Let's create a folder named Bootstrap_Rails_Project.
2. Now, open the terminal and navigate to this folder. It's time to create our Todo application. Write the following command to create a Rails application named TODO:

rails new TODO

This command executes a series of other commands that are necessary to create a Rails application, so wait for it to finish. If you are using a newer version of Rails, this command will also execute the bundle install command at the end; bundle install is used to install the remaining dependencies. Once it completes, you should have a new folder named TODO inside Bootstrap_Rails_Project, created by the preceding command.

Analyzing the folder structure of a Rails application

Let's navigate to the TODO folder to see what our application's folder structure looks like. Let me explain some of the important folders here:

- The first one is the app folder. The assets folder inside the app folder is the location for storing static files such as JavaScript, CSS, and images. You can take a sneak peek inside them to look at the various files.
- The controllers folder contains the controllers, which handle the various requests and responses of the browser.
- The helpers folder contains various helper methods, both for the views and the controllers.
- The next folder, mailers, contains all the files necessary to send an e-mail.
- The models folder contains the files that interact with the database.
- Finally, we have the views folder, which contains all the .erb files that will be compiled to HTML files.

So, let's start the Rails server and check out our application in the browser:

1. Navigate to the TODO folder in the terminal and then type the following command to start a server:

rails server

You can also use the shorter form:

rails s

2. You will see that the server is listening on port 3000. So, type the following URL to view the application: http://localhost:3000. You can also use http://0.0.0.0:3000. If your application is properly set up, you should see the default Rails page in the browser.

Creating views

We will be using Rails' scaffold method to create the models, views, and other necessary files that Rails needs to make our application live. Here's the set of tasks that our application should perform:

- It should list out the pending items
- Every task should be clickable, and the details related to that item should be shown in a new view
- We can edit an item's description and some other details
- We can delete an item

The task list looks pretty lengthy, but any Rails developer would know how easy it is to do. We don't actually have to do anything to achieve it. We just have to pass a single scaffold command, and the rest will be taken care of.
Close the Rails server using Ctrl + C and then proceed as follows:

1. First, navigate to the project folder in the terminal.
2. Then, pass the following command:

rails g scaffold todo title:string description:text completed:boolean

This will create a new model called todo that has various fields, such as title, description, and completed. Each field has a type associated with it. Since we have created a new model, the change has to be reflected in the database, so let's migrate it:

rake db:create db:migrate

The preceding command creates a new table, with the associated fields, inside a new database.

Let's analyze what we have done. The scaffold command has created the many HTML pages, or views, that are needed for managing the todo model. So, let's check out our application. We need to start our server again:

rails s

Go to http://localhost:3000. You should still see the default Rails page. Now, type the URL http://localhost:3000/todos, and you should see the application, as shown in the following screenshot:

Click on New Todo, and you will be taken to a form that allows you to fill out the various fields that we created earlier. Let's create our first todo and click on submit. It will then be shown on the listing page.

It was easy, wasn't it? We haven't done anything yet. That's the power of Rails, which people are crazy about.

Summary

This article briefed you on how to develop and design a simple Rails application without the help of any CSS frontend framework. We manually styled the application by creating an external CSS file, styles.css, and importing it into the application through another CSS file, application.css. We also discussed the complexities that a novice web designer might face when directly styling an application.

Resources for Article:

Further resources on this subject:

- Deep Customization of Bootstrap [article]
- The Bootstrap grid system [article]
- Getting Started with Bootstrap [article]
Getting Started with Aurelia

Packt
03 Jan 2017
28 min read
In this article by Manuel Guilbault, the author of the book Learning Aurelia, we will see what makes Aurelia such a modern framework. The brainchild of Rob Eisenberg, father of Durandal, it is based on cutting-edge Web standards and built on modern software architecture concepts and ideas, offering a powerful toolset and an awesome developer experience.

(For more resources related to this topic, see here.)

This article will teach you how Aurelia works, and how you can use it to build real-world applications from A to Z. In fact, while reading the article and following the examples, that's exactly what you will do. You will start by setting up your development environment and creating the project, then I will walk you through concepts such as routing, templating, data-binding, automated testing, internationalization, and bundling. We will discuss application design, communication between components, and integration of third parties. We will cover every topic most modern, real-world single-page applications require.

In this first article, we will start by defining some terms that will be used throughout the article. We will quickly cover some core Aurelia concepts. Then we will take a look at the core Aurelia libraries and see how they interact with each other to form a complete, full-featured framework. We will see also what tools are needed to develop an Aurelia application and how to install them. Finally, we will start creating our application and explore its global structure.

Terminology

As this article is about a JavaScript framework, JavaScript plays a central role in it. If you are not completely up to date with the terminology, which has changed a lot in the last few years, let me clear things up. JavaScript (or JS) is a dialect, or implementation, of the ECMAScript (ES) standard. It is not the only implementation, but it definitely is the most popular. In this article, I will use the JS acronym to talk about actual JavaScript code or code files, and the ES acronym when talking about an actual version of the ECMAScript standard.

Like everything in computer programming, the ECMAScript standard evolves over time. At the moment of writing, the latest version is ES2016, published in June 2016. It was originally called ES7, but TC39, the committee drafting the specification, decided to change its approval and naming model, hence the new name. The previous version, named ES2015 (ES6) before the naming model changed, was published in June 2015 and was a big step forward compared to the version before it. This older version, named ES5, was published in 2009 and was the most recent version for six years, so it is now widely supported by all modern browsers. If you have been writing JavaScript in the last five years, you should be familiar with ES5.

When they decided to change the ES naming model, the TC39 committee also chose to change the specification's approval model. This decision was made in an effort to publish new versions of the language at a quicker pace. As such, new features are being drafted and discussed by the community, and must pass through an approval process. Each year, a new version of the specification will be released, comprising the features that were approved during the year. Those upcoming features are often referred to as ESNext. This term encompasses language features that are approved, or at least pretty close to approval, but not yet published. It can be reasonable to expect that most or at least some of those features will be published in the next language version.
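For instance, ES2016 itself was a small release; its two headline additions, Array.prototype.includes and the exponentiation operator, fit in a couple of lines:

// ES2016 in a nutshell
[1, 2, 3].includes(2);  // true
2 ** 10;                // 1024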
As ES2015 and ES2016 are still recent, they are not fully supported by most browsers. Moreover, ESNext features typically have no browser support at all. Those multiple names can be pretty confusing. To make things simpler, I will stick with the official names: ES5 for the older version, ES2016 for the current version, and ESNext for upcoming features.

Before going any further, you should make yourself familiar with the features introduced by ES2016 and with ESNext decorators, if you are not already. We will use these features throughout the article. If you don't know where to start with ES2015 and ES2016, you can find a great overview of the new features on Babel's website:

https://babeljs.io/docs/learn-es2015/

As for ESNext decorators, Addy Osmani, a Google engineer, explained them pretty well:

https://medium.com/google-developers/exploring-es7-decorators-76ecb65fb841

For further reading, you can take a look at the feature proposals (decorators, class property declarations, async functions, and so on) for future ES versions:

https://github.com/tc39/proposals

Core concepts

Before we start getting our hands dirty, there are a couple of core concepts that need to be explained.

Conventions

First, Aurelia relies a lot on conventions. Most of those conventions are configurable and can be changed if they don't suit your needs. Each time we encounter a convention throughout the article, we will see how to change it whenever possible.

Components

Components are a first-class citizen of Aurelia. What is an Aurelia component? It is a pair made of an HTML template, called the view, and a JavaScript class, called the view-model. The view is responsible for displaying the component, while the view-model controls its data and behavior. Typically, the view sits in an .html file and the view-model in a .js file. By convention, those two files are bound through a naming rule: they must be in the same directory and have the same name (except for their extension, of course). Here's an example of an empty component with no data, no behavior, and a static template:

component.js
export class MyComponent {}

component.html
<template>
  <p>My component</p>
</template>

A component must comply with two constraints: a view's root HTML element must be the template element, and the view-model class must be exported from the .js file. As a rule of thumb, the only function that should be exported by a component's JS file is the view-model class. If multiple classes or functions are exported, Aurelia will iterate over the file's exported functions and classes and will use the first it finds as the view-model. However, since the enumeration order of an object's keys is not deterministic as per the ES specification, nothing guarantees that the exports will be iterated in the same order they were declared, so Aurelia may pick the wrong class as the component's view-model.

The only exception to that rule is some view resources. In addition to its view-model class, a component's JS file can export things like value converters, binding behaviors, and custom attributes: basically, any view resource that can't have a view, which excludes custom elements.

Components are the main building blocks of an Aurelia application. Components can use other components; they can be composed to form bigger or more complex components. Thanks to the slot mechanism, you can design a component's template so parts of it can be replaced or customized.
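To make this a little more concrete, here is a sketch of a component with some state and behavior (the names are illustrative and not part of the application we will build):

counter.js
export class Counter {
  constructor() {
    this.count = 0;
  }

  increment() {
    this.count++;
  }
}

counter.html
<template>
  <button click.delegate="increment()">Clicked ${count} times</button>
</template>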
Architecture

Aurelia is not your average monolithic framework. It is a set of loosely coupled libraries with well-defined abstractions. Each of its core libraries solves a specific and well-defined problem common to single-page applications. Aurelia leverages dependency injection and a plugin architecture, so you can discard parts of the framework and replace them with third-party or even your own implementations. Or you can just throw away features you don't need so your application is lighter and faster to load. The core Aurelia libraries can be divided into multiple categories. Let's have a quick glance.

Core features

The following libraries are mostly independent and can be used by themselves if needed. They each provide a focused set of features and are at the core of Aurelia:

- aurelia-dependency-injection: A lightweight yet powerful dependency injection container. It supports multiple lifetime management strategies and child containers.
- aurelia-logging: A simple logger, supporting log levels and pluggable consumers.
- aurelia-event-aggregator: A lightweight message bus, used for decoupled communication.
- aurelia-router: A client-side router, supporting static, parameterized, or wildcard routes, and child routers.
- aurelia-binding: An adaptive and pluggable data-binding library.
- aurelia-templating: An extensible HTML templating engine.

Abstraction layers

The following libraries mostly define interfaces and abstractions in order to decouple concerns and enable extensibility and pluggable behaviors. This does not mean that some of the libraries in the previous section do not expose their own abstractions besides their features; some of them do. But the libraries described in the current section have almost no other purpose than defining abstractions:

- aurelia-loader: An abstraction defining an interface for loading JS modules, views, and other resources.
- aurelia-history: An abstraction defining an interface for history management used by routing.
- aurelia-pal: An abstraction for platform-specific capabilities. It is used to abstract away the platform on which the code is running, such as a browser or Node.js. Indeed, this means that some Aurelia libraries can be used on the server side.

Default implementations

The following libraries are the default implementations of abstractions exposed by libraries from the two previous sections:

- aurelia-loader-default: An implementation of the aurelia-loader abstraction for SystemJS and require-based loaders.
- aurelia-history-browser: An implementation of the aurelia-history abstraction based on standard browser hash change and push state mechanisms.
- aurelia-pal-browser: An implementation of the aurelia-pal abstraction for the browser.
- aurelia-logging-console: An implementation of the aurelia-logging abstraction for the browser console.

Integration layers

The following libraries' purpose is to integrate some of the core libraries together. They provide interface implementations and adapters, along with default configuration or behaviors:

- aurelia-templating-router: An integration layer between the aurelia-router and the aurelia-templating libraries.
- aurelia-templating-binding: An integration layer between the aurelia-templating and the aurelia-binding libraries.
- aurelia-framework: An integration layer that brings together all of the core Aurelia libraries into a full-featured framework.
- aurelia-bootstrapper: An integration layer that brings default configuration for aurelia-framework and handles application starting.
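Because these libraries are loosely coupled, some of them can even be used on their own. As a quick, illustrative sketch, the event aggregator works standalone (assuming the package is installed and a module loader is available):

import {EventAggregator} from 'aurelia-event-aggregator';

const ea = new EventAggregator();

// Decoupled communication: the subscriber knows nothing about the publisher
ea.subscribe('order:created', order => console.log('new order', order.id));
ea.publish('order:created', { id: 42 });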
Additional tools and plugins

If you take a look at Aurelia's organization page on GitHub at https://github.com/aurelia, you will see many more repositories. The libraries listed in the previous sections are just the core of Aurelia, the tip of the iceberg, if I may. Many other libraries exposing additional features or integrating third-party libraries are available on GitHub, some of them developed and maintained by the Aurelia team, many others by the community. I strongly suggest that you explore the Aurelia ecosystem by yourself after reading this article, as it is rapidly growing, and the Aurelia community is doing some very exciting things.

Tooling

In the following section, we will go over the tools needed to develop our Aurelia application.

Node.js and NPM

Aurelia being a JavaScript framework, it just makes sense that its development tools are also in JavaScript. This means that the first thing you need to do when getting started with Aurelia is to install Node.js and NPM in your development environment.

Node.js is a server-side runtime environment based on Google's V8 JavaScript engine. It can be used to build complete websites or web APIs, but it is also used by a lot of frontend projects to perform development and build tasks, such as transpiling, linting, and minimizing. NPM is the de facto package manager for Node.js. It uses http://www.npmjs.com as its main repository, where all available packages are stored. It is bundled with Node.js, so if you install Node.js on your computer, NPM will also be installed.

To install Node.js and NPM in your development environment, you simply need to go to https://nodejs.org/ and download the proper installer for your environment.

If Node.js and NPM are already installed, I strongly recommend that you make sure to use at least version 3 of NPM, as older versions may have issues collaborating with some of the other tools we'll use. If you are not sure which version you have, you can check it by running the following command in a console:

> npm -v

If Node.js and NPM are already installed but you need to upgrade NPM, you can do so by running the following command:

> npm install npm -g

The Aurelia CLI

Even though an Aurelia application can be built using any package manager, build system, or bundler you want, the preferred tool for managing an Aurelia project is the command line interface, a.k.a. the CLI. At the moment of writing, the CLI only supports NPM as its package manager and requirejs as its module loader and bundler, probably because they are both the most mature and stable. It also uses Gulp 4 behind the scenes as its build system.

CLI-based applications are always bundled when running, even in development environments. This means that the performance of an application during development will be very close to what it should be like in production. This also means that bundling is a recurring concern, as new external libraries must be added to some bundle in order to be available at runtime. In this article, we'll stick with the preferred solution and use the CLI. There are, however, two appendices at the end of the article covering alternatives: the first for Webpack, and the second for SystemJS with JSPM.

Installing the CLI

The CLI being a command line tool, it should be installed globally, by opening a console and executing the following command:

> npm install -g aurelia-cli

You may have to run this command with administrator privileges, depending on your environment.
If you already have it installed, make sure you have the latest version by running the following command:

> au -v

You can then compare the version this command outputs with the latest version number tagged on GitHub, at https://github.com/aurelia/cli/releases/latest. If you don't have the latest version, you can simply update it by running the following command:

> npm install -g aurelia-cli

If for some reason the command to update the CLI fails, simply uninstall and then reinstall it:

> npm uninstall aurelia-cli -g
> npm install aurelia-cli -g

This should reinstall the latest version.

The project skeletons

As an alternative to the CLI, project skeletons are available at https://github.com/aurelia/skeleton-navigation. This repository contains multiple sample projects, sitting on different technologies such as SystemJS with JSPM, Webpack, ASP.NET Core, or TypeScript. Setting up a skeleton is easy: you simply need to download and unzip the archive from GitHub, or clone the repository locally. Each directory contains a distinct skeleton. Depending on which one you choose, you'll need to install different tools and run setup commands. Generally, the instructions in the skeleton's README.md file are pretty clear.

Our application

Creating an Aurelia application using the CLI is extremely simple. You just need to open a console in the directory where you want to create your project and run the following command:

> au new

The CLI's project creation process will start, and you should see something like this:

The first thing the CLI will ask for is the name you want to give to your project. This name will be used both to create the directory in which the project will live and to set some values, such as the name property in the package.json file it will create. Let's name our application learning-aurelia.

Next, the CLI asks what technologies we want to use to develop our application. Here, you can select a custom transpiler such as TypeScript and a CSS preprocessor such as LESS or SASS.

Transpiler: Little cousin of the compiler, it translates one programming language into another. In our case, it will be used to transform ESNext code, which may not be supported by all browsers, into ES5, which is understood by all modern browsers.

The default choice is to use ESNext and plain CSS, and this is what we will choose.

The following steps simply recap the choices we made and ask for confirmation to create the project, then ask whether we want to install our project's dependencies, which it does by default. At this point, the CLI will create the project and run an npm install behind the scenes. Once it completes, our application is ready to roll.

At this point, the directory you ran au new in will contain a new directory named learning-aurelia. This subdirectory will contain the Aurelia project. We'll explore it a bit in the following section.

The CLI is likely to change and offer more options in the future, as there are plans to support additional tools and technologies. Don't be surprised if you see different or new options when you run it.

The path we followed to create our project uses Visual Studio Code as the default code editor. If you want to use another editor such as Atom, Sublime, or WebStorm, which are the other supported options at the moment of writing, you simply need to select option #3, custom transpilers, CSS pre-processors and more, at the beginning of the creation process, then select the default answer for each question until asked to select your default code editor.
The rest of the creation process should stay pretty much the same. Note that if you select a different code editor, your own experience may differ from the examples and screenshots you'll find in this article, as Visual Studio Code is the editor that was used during writing.

If you are a TypeScript developer, you may want to create a TypeScript project. I, however, recommend that you stick with plain ESNext, as every example and code sample in this article has been written in JS. Trying to follow along with TypeScript may prove cumbersome, although you can try if you like the challenge.

The structure of a CLI-based project

If you open the newly created project in a code editor, you should see the following file structure:

- node_modules: The standard NPM directory containing the project's dependencies.
- src: The directory containing the application's source code.
- test: The directory containing the application's automated test suites.
- .babelrc: The configuration file for Babel, which is used by the CLI to transpile our application's ESNext code into ES5 so most browsers can run it.
- index.html: The HTML page that loads and launches the application.
- karma.conf.js: The configuration file for Karma, which is used by the CLI to run unit tests.
- package.json: The standard Node.js project file.

The directory contains other files, such as .editorconfig, .eslintrc.json, and .gitignore, that are of little interest for learning Aurelia, so we won't cover them.

In addition to all of this, you should see a directory named aurelia_project. This directory contains things related to the building and bundling of the application using the CLI. Let's see what it's made of.

The aurelia.json file

The first thing of importance in this directory is a file named aurelia.json. This file contains the configuration used by the CLI to test, build, and bundle the application. This file can change drastically depending on the choices you make during the project creation process. There are very few scenarios where this file needs to be modified by hand; adding an external library to the application is one such scenario. Apart from this, this file should mostly never be updated manually.

The first interesting section in this file is the platform:

"platform": {
  "id": "web",
  "displayName": "Web",
  "output": "scripts",
  "index": "index.html"
},

This section tells the CLI that the output directory where the bundles are written is named scripts. It also indicates that the HTML index page, which will load and launch the application, is the index.html file.

The next interesting part is the transpiler section:

"transpiler": {
  "id": "babel",
  "displayName": "Babel",
  "fileExtension": ".js",
  "options": {
    "plugins": [
      "transform-es2015-modules-amd"
    ]
  },
  "source": "src/**/*.js"
},

This section tells the CLI to transpile the application's source code using Babel. It also defines additional plugins, as some are already configured in .babelrc, to be used when transpiling the source code. In this case, it adds a plugin that will output transpiled files as AMD-compliant modules, for requirejs compatibility.

Tasks

The aurelia_project directory contains a subdirectory named tasks. This subdirectory contains various Gulp tasks to build, run, and test the application. These tasks can be executed using the CLI. The first thing you can try is to run au without any argument:

> au

This will list all available commands, along with their available arguments.
This list includes built-in commands, such as new, which we used already, and generate, which we'll see in the next section, along with the Gulp tasks declared in the tasks directory. To run one of those tasks, simply execute au with the name of the task as its first argument:

> au build

This command will run the build task, which is defined in aurelia_project/tasks/build.js. This task transpiles the application code using Babel, executes the CSS and markup preprocessors if any, and bundles the code in the scripts directory. After running it, you should see two new files in scripts: app-bundle.js and vendor-bundle.js. Those are the actual files that will be loaded by index.html when the application is launched. The former contains all the application code, both JS files and templates, while the latter contains all the external libraries used by the application, including the Aurelia libraries.

You may have noticed a command named run in the list of available commands. This task is defined in aurelia_project/tasks/run.js, and executes the build task internally before spawning a local HTTP server to serve the application:

> au run

By default, the HTTP server will listen for requests on port 9000, so you can open your favorite browser and go to http://localhost:9000/ to see the default, demo application in action.

If you ever need to change the port number on which the development HTTP server runs, you just need to open aurelia_project/tasks/run.js and locate the call to the browserSync function. The object passed to this function contains a property named port; you can change its value accordingly.

The run task can accept a --watch switch:

> au run --watch

If this switch is present, the task will keep monitoring the source code and, when any code file changes, will rebuild the application and automatically refresh the browser. This can be pretty useful during development.

Generators

The CLI also offers a way to generate code, using classes defined in the aurelia_project/generators directory. At the moment of writing, there are generators to create custom attributes, custom elements, binding behaviors, value converters, and even tasks and generators (yes, there is a generator to generate generators). If you are not familiar with Aurelia at all, most of those concepts, such as value converters, binding behaviors, and custom attributes and elements, probably mean nothing to you. Don't worry.

A generator can be executed using the built-in generate command:

> au generate attribute

This command will run the custom attribute generator. It will ask for the name of the attribute to generate, then create it in the src/resources/attributes directory.

If you take a look at this generator, which is found in aurelia_project/generators/attribute.js, you'll see that the file exports a single class named AttributeGenerator. This class uses the @inject decorator to declare various classes from the aurelia-cli library as dependencies and have instances of them injected into its constructor. It also defines an execute method, which is called by the CLI when running the generator. This method leverages the services provided by aurelia-cli to interact with the user and generate code files. The exact generator names available by default are attribute, element, binding-behavior, value-converter, task, and generator.

Environments

CLI-based applications support environment-specific configuration values. By default, the CLI supports three environments: development, staging, and production.
The configuration object for each of these environments can be found in the files dev.js, stage.js, and prod.js, located in the aurelia_project/environments directory. A typical environment file looks like this:

aurelia_project/environments/dev.js
export default {
  debug: true,
  testing: true
};

By default, the environment files are used to enable debug logging and test-only templating features in the Aurelia framework, depending on the environment; we'll see this in a later section. The environment objects can, however, be enhanced with whatever properties you may need. Typically, they could be used to configure different backend URLs depending on the environment.

Adding a new environment is simply a matter of adding a file for it in the aurelia_project/environments directory. For example, you can add a local environment by creating a local.js file in the directory.

Many tasks, basically build and all the other tasks using it, such as run and test, expect an environment to be specified using the env argument:

> au build --env prod

Here, the application will be built using the prod.js environment file. If no env argument is provided, dev is used by default.

When executed, the build task just copies the proper environment file to src/environment.js before running the transpiler and bundling the output. This means that src/environment.js should never be modified by hand, as it will be automatically overwritten by the build task.

The structure of an Aurelia application

The previous section described the files and folders that are specific to a CLI-based project. However, some parts of the project are pretty much the same whatever the build system and package manager are. These are the more global topics we will see in this section.

The hosting page

The first entry point of an Aurelia application is the HTML page loading and hosting it. By default, this page is named index.html and is located at the root of the project. The default hosting page looks like this:

index.html
<!DOCTYPE html>
<html>
  <head>
    <meta charset="utf-8">
    <title>Aurelia</title>
  </head>
  <body aurelia-app="main">
    <script src="scripts/vendor-bundle.js" data-main="aurelia-bootstrapper"></script>
  </body>
</html>

When this page loads, the script element inside the body element loads the scripts/vendor-bundle.js file, which contains requirejs itself along with definitions for all external libraries and references to app-bundle.js. When loading, requirejs checks the data-main attribute and uses its value as the entry point module. Here, aurelia-bootstrapper kicks in.

The bootstrapper first looks in the DOM for elements with the aurelia-app attribute; we can find such an attribute on the body element in the default index.html file. This attribute identifies elements acting as application viewports. The bootstrapper uses the attribute's value as the name of the application's main module, then locates the module, loads it, and renders the resulting DOM inside the element, overwriting any previous content. The application is now running.

Even though the default application doesn't illustrate this scenario, it is possible for an HTML file to host multiple Aurelia applications. It just needs to contain multiple elements with an aurelia-app attribute, each element referring to its own main module.

The main module

By convention, the main module referred to by the aurelia-app attribute is named main, and as such is located under src/main.js.
This file is expected to export a configure function, which will be called by the Aurelia bootstrapping process and will be passed a configuration object used to configure and boot the framework. By default, the main configure function looks like this:

src/main.js
import environment from './environment';

export function configure(aurelia) {
  aurelia.use
    .standardConfiguration()
    .feature('resources');

  if (environment.debug) {
    aurelia.use.developmentLogging();
  }

  if (environment.testing) {
    aurelia.use.plugin('aurelia-testing');
  }

  aurelia.start().then(() => aurelia.setRoot());
}

The configure function starts by telling Aurelia to use its default configuration and to load the resources feature. It also conditionally loads the development logging plugin based on the environment's debug property, and the testing plugin based on the environment's testing property. This means that, by default, both plugins will be loaded in development, while neither will be loaded in production. Lastly, the function starts the framework and then attaches the root component to the DOM.

The start method returns a Promise, whose resolution triggers the call to setRoot. If you are not familiar with Promises in JavaScript, I strongly suggest that you look them up before going any further, as they are a core concept in Aurelia.

The root component

At the root of any Aurelia application is a single component, which contains everything within the application. By convention, this root component is named app. It is composed of two files: app.html, which contains the template used to render the component, and app.js, which contains its view-model class. In the default application, the template is extremely simple:

src/app.html
<template>
  <h1>${message}</h1>
</template>

This template is made of a single h1 element, which will contain the value of the view-model's message property as text, thanks to string interpolation. The app view-model looks like this:

src/app.js
export class App {
  constructor() {
    this.message = 'Hello World!';
  }
}

This file simply exports a class having a message property containing the string "Hello World!". This component will be rendered when the application starts. If you run the application and navigate to it in your favorite browser, you'll see an h1 element containing "Hello World!".

You may notice that there is no reference to Aurelia in this component's code. In fact, the view-model is just plain ESNext, and it can be used by Aurelia as is. Of course, we're going to leverage many Aurelia features in many of our view-models later on, so most of our view-models will in fact have dependencies on Aurelia libraries, but the key point here is that you don't have to use any Aurelia library in your view-models if you don't want to, because Aurelia is designed to be as unintrusive as possible.

Conventional bootstrapping

It is possible to leave the aurelia-app attribute empty in the hosting page:

<body aurelia-app>

In such a case, the bootstrapping process is much simpler. Instead of loading a main module containing a configure function, the bootstrapper will simply use the framework's default configuration and load the app component as the application root. This can be a simpler way to get started for a very simple application, as it negates the need for the src/main.js file (you can simply delete it). However, it means that you are stuck with the default framework configuration. You cannot load features or plugins.
For most real-life applications, you'll need to keep the main module, which means specifying it as the aurelia-app attribute's value.

Customizing Aurelia configuration

The configure function of the main module receives a configuration object, which is used to configure the framework:

src/main.js
//Omitted snippet…
aurelia.use
  .standardConfiguration()
  .feature('resources');

if (environment.debug) {
  aurelia.use.developmentLogging();
}

if (environment.testing) {
  aurelia.use.plugin('aurelia-testing');
}
//Omitted snippet…

Here, the standardConfiguration() method is a simple helper that encapsulates the following:

aurelia.use
  .defaultBindingLanguage()
  .defaultResources()
  .history()
  .router()
  .eventAggregator();

This is the default Aurelia configuration. It loads the default binding language, the default templating resources, the browser history plugin, the router plugin, and the event aggregator. This is the default set of features that a typical Aurelia application uses. All those plugins will be covered at one point or another throughout this article. All of them are optional except the binding language, which is needed by the templating engine. If you don't need one, just don't load it.

In addition to the standard configuration, some plugins are loaded depending on the environment's settings. When the environment's debug property is true, Aurelia's console logger is loaded using the developmentLogging() method, so traces and errors can be seen in the browser console. When the environment's testing property is true, the aurelia-testing plugin is loaded using the plugin method. This plugin registers some resources that are useful when debugging components.

The last line in the configure function starts the application and displays its root component, which is named app by convention. You may, however, bypass the convention and pass the name of your root component as the first argument to setRoot, if you named it otherwise:

aurelia.start().then(() => aurelia.setRoot('root'));

Here, the root component is expected to sit in the src/root.html and src/root.js files.

Summary

Getting started with Aurelia is very easy, thanks to the CLI. Installing the tooling and creating an empty project is simply a matter of running a couple of commands, and it typically takes more time waiting for the initial npm install to complete than doing the actual setup. We'll go over dependency injection and logging, and we'll start building our application by adding components and configuring routes to navigate between them.

Resources for Article:

Further resources on this subject:

- Introduction to JavaScript [article]
- Breaking into Microservices Architecture [article]
- Create Your First React Element [article]

Bootstrap in a Box

Packt
07 Aug 2015
6 min read
In this article written by Snig Bhaumik, author of the book Bootstrap Essentials, we explain the concept of Bootstrap, responsive design patterns, navigation patterns, and the different components that are included in Bootstrap.

(For more resources related to this topic, see here.)

Responsive design patterns

Here are a few established and well-adopted patterns in Responsive Web Design:

- Fluid design: This is the most popular and easiest option for responsive design. In this pattern, a multiple-column layout on a larger screen renders as a single column on a smaller screen, in exactly the same sequence.
- Column drop: In this pattern, the page also gets rendered in a single column; however, the order of the blocks gets altered. That means a content block that is visible first on a larger screen might be rendered second or third on a smaller screen.
- Layout shifter: This is a complex but powerful pattern, where the whole layout of the screen contents gets altered for smaller screens. This means that you need to develop different page layouts for large, medium, and small screens.

Navigation patterns

You should take care of the following things while designing a responsive web page. These are essentially the major navigational elements that you would concentrate on while developing a mobile-friendly and responsive website:

- Menu bar
- Navigation/app bar
- Footer
- Main container shell
- Images
- Tabs
- HTML forms and elements
- Alerts and popups
- Embedded audios and videos, and so on

You can see that there are lots of elements and aspects you need to take care of to create a fully responsive design. While all of these can be achieved using various features and technologies in CSS3, it is of course not an easy problem to solve without a framework that could help you do so. Precisely, you need a frontend framework that takes care of all the pains of technical responsive design implementation and frees you to focus on your brand and application design.

Now, we introduce Bootstrap, which helps you design and develop responsive websites in a much more optimized and efficient way.

Introducing Bootstrap

Simply put, Bootstrap is a frontend framework for faster and easier web development in the new standard of the mobile-first philosophy. It uses HTML, CSS, and JavaScript. In August 2010, Twitter released Bootstrap as open source.

There are quite a few similar frontend frameworks available in the industry, but Bootstrap is arguably the most popular framework of the lot. This is evident from the fact that Bootstrap has been the most starred project on GitHub since 2012.

By now, you should be in a position to fathom why and where we need to use Bootstrap for web development; however, just to recap, here are the points in short:

- The mobile-first approach
- A responsive design
- Automatic browser support and handling
- Easy to adapt and get going

What Bootstrap includes

The following diagram demonstrates the overall structure of Bootstrap:

CSS

Bootstrap comes with fundamental HTML elements styled, global CSS classes, classes for advanced grid patterns, and lots of enhanced and extended CSS classes.
For example, this is how the HTML global element is configured in Bootstrap CSS:

html {
  font-family: sans-serif;
  -webkit-text-size-adjust: 100%;
  -ms-text-size-adjust: 100%;
}

This is how a standard HR HTML element is styled:

hr {
  height: 0;
  -webkit-box-sizing: content-box;
  -moz-box-sizing: content-box;
  box-sizing: content-box;
}

Here is an example of the new classes introduced in Bootstrap:

.glyphicon {
  position: relative;
  top: 1px;
  display: inline-block;
  font-family: 'Glyphicons Halflings';
  font-style: normal;
  font-weight: normal;
  line-height: 1;
  -webkit-font-smoothing: antialiased;
  -moz-osx-font-smoothing: grayscale;
}

Components

Bootstrap offers a rich set of reusable, built-in components, such as breadcrumbs, progress bars, alerts, and navigation bars. The components are technically custom CSS classes specially crafted for a specific purpose. For example, if you want to create a breadcrumb in your page, you simply add a DIV tag in your HTML using Bootstrap's breadcrumb class:

<ol class="breadcrumb">
  <li><a href="#">Home</a></li>
  <li><a href="#">The Store</a></li>
  <li class="active">Offer Zone</li>
</ol>

In the background (stylesheet), this Bootstrap class is used to create your breadcrumb:

.breadcrumb {
  padding: 8px 15px;
  margin-bottom: 20px;
  list-style: none;
  background-color: #f5f5f5;
  border-radius: 4px;
}
.breadcrumb > li {
  display: inline-block;
}
.breadcrumb > li + li:before {
  padding: 0 5px;
  color: #ccc;
  content: "/\00a0";
}
.breadcrumb > .active {
  color: #777;
}

Please note that these code blocks are simply snippets.

JavaScript

The Bootstrap framework comes with a number of ready-to-use JavaScript plugins. Thus, when you need to create popup windows, tabs, carousels, tooltips, and so on, you just use one of the prepackaged JavaScript plugins. For example, if you need to create a tab control in your page, you use this:

<div role="tabpanel">
  <ul class="nav nav-tabs" role="tablist">
    <li role="presentation" class="active"><a href="#recent" aria-controls="recent" role="tab" data-toggle="tab">Recent Orders</a></li>
    <li role="presentation"><a href="#all" aria-controls="all" role="tab" data-toggle="tab">All Orders</a></li>
    <li role="presentation"><a href="#redeem" aria-controls="redeem" role="tab" data-toggle="tab">Redemptions</a></li>
  </ul>

  <div class="tab-content">
    <div role="tabpanel" class="tab-pane active" id="recent">Recent Orders</div>
    <div role="tabpanel" class="tab-pane" id="all">All Orders</div>
    <div role="tabpanel" class="tab-pane" id="redeem">Redemption History</div>
  </div>
</div>

To activate (open) a tab, you write this JavaScript code:

$('#profileTab li:eq(1) a').tab('show');

As you can guess from the syntax of this JavaScript line, the Bootstrap JS plugins are built on top of jQuery. Thus, the JS code you write for Bootstrap is also all based on jQuery.

Customization

Even though Bootstrap offers most (if not all) standard features and functionalities for Responsive Web Design, there might be several cases where you would want to customize and extend the framework. One of the most basic requirements for customization is deploying your own branding and color combinations (themes) instead of the Bootstrap defaults. There can be several such use cases where you would want to change the default behavior of the framework. Bootstrap offers very easy and stable ways to customize the platform.
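For example, one of the simplest customizations is overriding Bootstrap's default LESS variables before compilation (a sketch; these two variable names come from Bootstrap 3's variables.less, and the values are made up):

// Rebrand the primary color and the base font
@brand-primary: #27ae60;
@font-family-base: "Open Sans", Helvetica, Arial, sans-serif;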
When you use the Bootstrap CSS, all the global and fundamental HTML elements automatically become responsive and behave properly according to the client device on which the web page is browsed. The built-in components are also designed to be responsive, so as the developer, you shouldn't have to worry about how these advanced components behave across different devices and client agents.

Summary

In this article, we discussed the basics of Bootstrap, along with a brief explanation of responsive design patterns and navigation patterns.

Resources for Article:

Further resources on this subject:

- Deep Customization of Bootstrap [article]
- The Bootstrap grid system [article]
- Creating a Responsive Magento Theme with Bootstrap 3 [article]
Simple ToDo list web application with node.js, Express, and Riot

Pedro Narciso García Revington
07 Nov 2016
The frontend space is indeed crowded, but none of the more popular solutions are really convincing to me. I feel Angular is bloated, and the double binding is not for me. I also do not like React and its syntax. Riot is, as stated by its creators, "A React-like user interface micro-library" with simpler syntax that is five times smaller than React.

What we are going to learn
We are going to build a simple Riot application backed by Express, using Jade as our template language. The backend will expose a simple REST API, which we will consume from the UI. We are not going to use any other dependency like jQuery, so this is also a good chance to try XMLHttpRequest2. I deliberately omitted the inclusion of a client package manager like webpack or jspm because I want to focus on Express + Riot. For the same reason, the application data is persisted in memory.

Requirements
You just need any recent version of node.js (4+), a text editor of your choice, and some JS, Express, and website development knowledge.

Project layout
Under my project directory we are going to have 3 directories:

* public: For assets like the riot.js library itself.
* views: Common in most Express setups, this is where we put the markup.
* client: This directory will host the Riot tags (we will see more of that later).

We will also have package.json, our project manifesto, and an app.js file containing the Express application. Our Express server exposes a REST API; its code can be found in api.js. Here is how the layout of the final project looks:

├── api.js
├── app.js
├── client
│   ├── tag-todo.jade
│   └── tag-todo.js
├── package.json
├── node_modules
├── public
│   └── js
│       ├── client.js
│       └── riot.js
└── views
    └── index.jade

Project setup
Create your project directory and from there run the following to install the node.js dependencies:

$ npm init -y
$ npm install --save body-parser express jade

And the application directories:

$ mkdir -p views public/js client

Start!
Let's start by creating the Express application file, app.js:

'use strict';

const express = require('express'),
    app = express(),
    bodyParser = require('body-parser');

// Set the views directory and template engine
app.set('views', __dirname + '/views');
app.set('view engine', 'jade');

// Set our static directory for public assets like client scripts
app.use(express.static('public'));

// Parses the body on incoming requests
app.use(bodyParser.json());

// Pretty prints HTML output
app.locals.pretty = true;

// Define our main route, HTTP "GET /" which will print "hello"
app.get('/', function (req, res) {
  res.send('hello');
});

// Start listening for connections
app.listen(3000, function (err) {
  if (err) {
    console.error('Cannot listen at port 3000', err);
  }
  console.log('Todo app listening at port 3000');
});

The app object we just created is just a plain Express application. After setting up the application, we call the listen function, which creates an HTTP server listening at port 3000. To test our application setup, open another terminal, cd to our project directory, and run $ node app.js. Open a web browser and load http://localhost:3000; can you read "hello"?

Node.js will not reload the site if you change the files, so I recommend you install nodemon. Nodemon monitors your code and reloads the site on every change you make to the JS source code. The command $ npm install -g nodemon installs the program on your computer globally, so you can run it from any directory.
Okay, kill our previously created server and start a new one with $ nodemon app.js.

Our first Riot tag
Riot allows you to encapsulate your UI logic in "custom tags". Tag syntax is pretty straightforward. Judge for yourself:

<employee>
  <span>{ name }</span>
</employee>

Custom tags can contain code and can be nested, as shown in the next code snippet:

<employeeList>
  <employee each="{ items }" onclick={ gotoEmployee } />
  <script>
    gotoEmployee (e) {
      var item = e.item;
      // do something
    }
  </script>
</employeeList>

This mechanism enables you to build complex functionality from simple units. Of course, you can find more information in their documentation. In the next steps we will create our first tag: ./client/tag-todo.jade. Oh, we have not yet downloaded Riot! Here is the non-minified Riot + compiler download. Download it to ./public/js/riot.js.

The next step is to create our index view and tell our app to serve it. Locate the / route handler, remove the res.send('hello'), and update it to:

// Define our main route, HTTP "GET /" which will render the index view
app.get('/', function (req, res) {
  res.render('index');
});

Now, create the ./views/index.jade file:

doctype html
html
  head
    script(src="/js/riot.js")
  body
    h1 ToDo App
    todo

Go to your browser and reload the page. You can read the big "ToDo App", but nothing else. There is a <todo></todo> tag there, but since the browser does not understand it, this tag is not rendered. Let's tell Riot to mount the tag. Mount means Riot will use <todo></todo> as a placeholder for our—not yet there—todo tag.

doctype html
html
  head
    script(src="/js/riot.js")
  body
    h1 ToDo App
    script(type="riot/tag" src="/tags/todo.tag")
    todo
    script.
      riot.mount('todo');

Open your browser's dev console and reload the page. riot.mount failed because there was no todo.tag. Tags can be served in many ways, but I chose to serve them as regular Express templates. Of course, you can serve them as static assets or bundled. Just below the / route handler, add the /tags/:name.tag handler:

// "/" route handler
app.get('/', function (req, res) {
  res.render('index');
});

// tag route handler
app.get('/tags/:name.tag', function (req, res) {
  var name = 'tag-' + req.params.name;
  res.render('../client/' + name);
});

Now create the tag in ./client/tag-todo.jade:

todo
  form(onsubmit="{ add }")
    input(type="text", placeholder="Needs to be done", name="todo")

And reload the browser again. The errors are gone, and there is a new form in your browser. onsubmit="{ add }" is part of Riot's syntax and means "on submit, call the add function". You can mix implementation with the markup, but I rather prefer to split markup from code. In Jade (and any other template language), it is trivial to include other files, which is exactly what we are going to do. Update the file as:

todo
  form(onsubmit="{ add }")
    input(type="text", placeholder="Needs to be done", name="todo")
  script
    include tag-todo.js

And create ./client/tag-todo.js with this snippet:

'use strict';
var self = this;
var api = self.opts;

When the tag gets mounted by Riot, it gets a context. That is the reason for var self = this;. That context can include the opts object. The opts object can be anything of your choice, defined at the time you mount the tag. Let's say we have an API object and we pass it to riot.mount as the second argument at the time we mount the tag, that is, riot.mount('todo', api). Then, at the time the tag is rendered, this.opts will point to the api object. This is the mechanism we are going to use to expose our client api to the todo tag.
Our form is still waiting for the add function, so edit tag-todo.js again and append the following:

self.add = function (e) {
  var title = self.todo.value;
  console.log('New ToDo', title);
};

Reload the page, type something in the text field, and hit enter. The expected message should appear in your browser's dev console.

Implementing our REST API
We are ready to implement our REST API on the Express side. Create the ./api.js file and add:

'use strict';

const express = require('express');
var app = module.exports = express();

// simple in memory DB
var db = [];

// handle ToDo creation
app.post('/', function (req, res) {
  db.push({
    title: req.body.title,
    done: false
  });
  let todoID = db.length - 1;
  // mountpath = /api/todos/
  res.location(app.mountpath + todoID);
  res.status(201).end();
});

// handle ToDo updates
app.put('/', function (req, res) {
  db[req.body.id] = req.body;
  res.location('/' + req.body.id);
  res.status(204).end();
});

Our API supports ToDo creation and update, and it is architected as an Express sub-application. To mount it, we just need to update app.js for the last time. Update the require block at the top of app.js to:

const express = require('express'),
    api = require('./api'),
    app = express(),
    bodyParser = require('body-parser');

And mount the api sub-application just before app.listen:

// Mount the api sub application
app.use('/api/todos/', api);

We said we would implement a client for our API. It should expose two functions, create and update, and it is located at ./public/js/client.js. Here is its source:

'use strict';

(function (api) {
  var url = '/api/todos/';

  function extractIDFromResponse(xhr) {
    var location = xhr.getResponseHeader('location');
    var result = +location.slice(url.length);
    return result;
  }

  api.create = function createToDo(title, callback) {
    var xhr = new XMLHttpRequest();
    var todo = {
      title: title,
      done: false
    };
    xhr.open('POST', url);
    xhr.setRequestHeader('Content-Type', 'application/json');
    xhr.onload = function () {
      if (xhr.status === 201) {
        todo.id = extractIDFromResponse(xhr);
      }
      return callback(null, xhr, todo);
    };
    xhr.send(JSON.stringify(todo));
  };

  api.update = function updateToDo(todo, callback) {
    var xhr = new XMLHttpRequest();
    xhr.open('PUT', url);
    xhr.setRequestHeader('Content-Type', 'application/json');
    xhr.onload = function () {
      if (xhr.status === 200) {
        console.log('200');
      }
      return callback(null, xhr, todo);
    };
    xhr.send(JSON.stringify(todo));
  };
})(this.todoAPI = {});

Okay, time to load the API client into the UI and share it with our tag. Modify the index view to include it as a dependency:

doctype html
html
  head
    script(src="/js/riot.js")
  body
    h1 ToDo App
    script(type="riot/tag" src="/tags/todo.tag")
    script(src="/js/client.js")
    todo
    script.
      riot.mount('todo', todoAPI);

We are now loading the API client and passing it as a reference to the todo tag. Our last change today is to update the add function to consume the API. Reload the browser again, type something into the textbox, and hit enter. Nothing new happens, because our add function is not yet using the API. We need to update ./client/tag-todo.js as:

'use strict';
var self = this;
var api = self.opts;
self.items = [];

self.add = function (e) {
  var title = self.todo.value;
  api.create(title, function (err, xhr, todo) {
    if (xhr.status === 201) {
      self.todo.value = '';
      self.items.push(todo);
      self.update();
    }
  });
};

We have augmented self with an array of items. Every time we create a new ToDo task (after we get the 201 code from the server), we push the new ToDo object into the array, because we are going to print that list of items. In Riot, we can loop over the items by adding the each attribute to any tag. Last, update ./client/tag-todo.jade:

todo
  form(onsubmit="{ add }")
    input(type="text", placeholder="Needs to be done", name="todo")
  ul
    li(each="{items}")
      span {title}
  script
    include tag-todo.js

Finally! Reload the page and create a ToDo!

Next steps
You can find the complete source code for this article here. The final version of the code also implements a done/undone button, which you can try to implement by yourself; a possible starting point is sketched below.
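Here is a minimal sketch of one way such a toggle could work. This is my own suggestion rather than the author's final code: the toggle handler name and the button markup are assumptions, and only api.update comes from the client we wrote earlier. In ./client/tag-todo.js, append a handler that flips the done flag and persists it:

self.toggle = function (e) {
  // in a Riot loop, e.item is the ToDo object for the clicked row
  var item = e.item;
  item.done = !item.done;
  // persist the change through our REST API client (PUT /api/todos/)
  api.update(item, function (err, xhr, todo) {
    // re-render the tag so the button label reflects the new state
    self.update();
  });
};

Then, in ./client/tag-todo.jade, add a button to each list item; inside an each loop, parent refers to the enclosing todo tag:

todo
  form(onsubmit="{ add }")
    input(type="text", placeholder="Needs to be done", name="todo")
  ul
    li(each="{items}")
      span {title}
      button(onclick="{parent.toggle}") { done ? 'undo' : 'done' }
  script
    include tag-todo.js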
About the author
Pedro Narciso García Revington is a Senior Full Stack Developer with 10+ years of experience in high scalability and availability, microservices, automated deployments, data processing, CI, (T,B,D)DD, and polyglot persistence.
Joomla! Template System

Packt
12 Dec 2013
(For more resources related to this topic, see here.)

Every website has some content, and all kinds of information is provided on websites; not just text, but pictures, animations, and video clips—anything that communicates a site's body of knowledge. Visual design, however, is the appearance of the site. A good visual design is one that is high quality, appropriate, and relevant to the audience and the message it supports. As a large number of companies feel the need to redesign their site every few years, they need someone who can stand back and figure out what all that content should communicate. This could be you.

The basic principle of Joomla! (and other content management systems) is to separate the content from its visual form. Although this separation is not absolute, it is distinct enough to facilitate quick and efficient customization and deployment of websites. Changing the appearance of web pages built on a CMS comes down to installing and configuring a new template. A template is a set of files that determine the look and feel of your Joomla!-powered website. Templates include information about the general layout of the site and other content, such as graphics, colors, background images, headers, logos, typography, and footers. Each template is different, offering many choices for site owners to almost instantly change the look of their website. You can see the result of this separation of content from presentation by changing the default template (preinstalled in Joomla!).

For web designers, learning how to develop templates for content management systems such as Joomla! opens up lots of opportunities. Joomla! gives you big opportunities to build websites. Taking into account the evolution of web browsers, you are only limited by your imagination and skill set, thanks to a powerful and flexible CMS infrastructure. The ability to change or modify the content and appearance of web pages is important in today's online landscape.

What is a Joomla! template?
As in the case of traditional HTML templates, a Joomla! template is a collection of files (PHP, CSS, and JavaScript) that define the visual appearance of the site. Each template has variations on these files, and each template's files are different, but they have a common purpose: they control the placement of the elements on the screen and affect both the presentation of the contents and the usability of the functionality. In general, a template does not have any content, but it can include logo and background images. The Joomla! template controls the way all information is shown on each page of the website. A template contains the stylesheets, locations, and layout information for the web content being displayed. Each installed component can also have its own template to present content, which can overwrite the default template's CSS styles.

A template alone cannot be called a website. Generally, people think of the template as the appearance of their site. But a template is only a structure (usually painted and colored) with active fields. It determines the appearance of individual elements (for example, font size, color, backgrounds, style, and spacing) and the arrangement of individual elements (including modules). In Joomla!, a single page view is generated by the HTML output of one component, selected modules, and the template.
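To make that concrete, the following is a minimal sketch of how a template's index.php typically assembles such a page view with jdoc:include statements. This is an illustration rather than a template from a real site, and the module position name (position-7) is just an example, since positions are defined by each template:

<?php defined('_JEXEC') or die; ?>
<!DOCTYPE html>
<html lang="<?php echo $this->language; ?>">
<head>
  <!-- head data (styles, scripts, meta) collected by Joomla! -->
  <jdoc:include type="head" />
</head>
<body>
  <!-- all modules assigned to the example position "position-7" -->
  <jdoc:include type="modules" name="position-7" style="xhtml" />
  <!-- the HTML output of the single component for this page view -->
  <jdoc:include type="component" />
</body>
</html>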
Unlike typical websites, where different components of the template are duplicated throughout the website pages, in the case of Joomla! there is just one assigned template responsible for displaying content for the entire site. Most CMSs, Joomla! included, have a modular structure that allows easy improvement of the site's appearance and functionality by installing and publishing modules in appropriate areas.

Search engines don't care about design, but people do. How well a template is designed and implemented is, therefore, largely responsible for the first impression made by a website, which later translates into the perception that people have of the entire website.

Joomla! released Version 3.0.0 on September 27, 2012 with significant updates and major developments. With the adoption of the Twitter Bootstrap framework, Joomla! became the first major CMS to be mobile ready in both the visitor and administrator areas. Bootstrap (http://twitter.github.com/bootstrap) is an open source JavaScript framework developed by the team at Twitter. It is a combination of HTML, CSS, and JavaScript code designed to help build user interface components. Bootstrap was also programmed to support both HTML5 and CSS3. As a result, page layout uses a 1152 px / 1132 px / 1116 px / 1104 px grid, whereas previous versions of Joomla! templates used a 940 px wide layout.

Default template stylesheets in Joomla! 3.x are written with LESS, which is then compiled to generate the CSS files. Because of the use of Bootstrap, Joomla! 3.x will slowly begin to migrate toward jQuery in the core (instead of MooTools); MooTools is no longer the primary JavaScript library interface. Joomla! 3.x templates are not compatible with previous versions of Joomla! and have been developed as a separate product.
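As a quick illustration of that LESS-to-CSS compilation step (a generic example of my own, not taken from the actual Joomla! template sources), variables and nested rules such as these:

// template.less -- a generic illustration, not the Joomla! sources
@brand-color: #2a6496;
@gutter: 15px;

.site-header {
  padding: @gutter;
  a {
    color: @brand-color;
  }
}

are compiled into plain CSS like this:

.site-header {
  padding: 15px;
}
.site-header a {
  color: #2a6496;
}

Changing a single variable and recompiling is enough to restyle every rule that uses it, which is why LESS makes template color schemes much easier to maintain.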
Templates – download for free, buy, or build your own
I also want to show you the sites where you can download templates for free or buy them, even though this book is supposed to teach you how to create your own. There are a number of reasons for this.

First, you might not have the time or the ability to design a template or create it from scratch for the customer. You can set up your website within minutes, because all you have to do is install or upload your template and begin adding content. By swapping the header image and changing the background color or image, you can transform a template with very little additional work.

Second, as you read the book you will get acquainted with the basic principles of modifying templates, and thus you will learn how to adapt ready-made solutions to the specific needs of your project. In general, you don't need to know much about PHP to use or tweak prebuilt templates. Templates can be customized by anyone with basic HTML/CSS knowledge; you can adjust template elements to suit your needs or those of your client using a simple CSS editor, and your template can be configured by template parameters.

Third, learn from other template developers. Follow every move of your competitors. When they release an interesting, functional, and popular template, follow (but do not copy) them. We can all learn from others; projects by other people are probably one of the most obvious sources of inspiration. The following screenshot presents a few commercial templates for Joomla! 3.x built in 2013 by popular developers; bear in mind, however, that the line between inspiration and plagiarism is often very thin:

Free templates
Premade free templates are a great solution for those who have a limited budget. It is good experience to use the work of different developers, and it is also a great way to test a new web concept without investing much apart from your time. There are some decent free templates out there that may even be suitable for a small or medium production website. If you don't like a certain template after using it for a bit, ditching it doesn't mean any loss in investment.

Unfortunately, there are also some disadvantages to using free templates. These templates are not unique: several thousand web designers from around the world may already have downloaded and used the template you have chosen, so if you don't change the colors or layout a bit, your site will look like a clone, which would be quite unprofessional. Generally, free Joomla! templates don't have any important or useful features, such as color variants, Google fonts, advanced typography, CSS compression options, or even a responsive layout.

On the downside of free templates, you also have the obvious quality issues. The majority of free templates are very basic and sometimes even buggy, and the support for free templates is almost always lacking. While there are a few free templates that are supported by their creators, they are under no obligation to provide full support if you need help adjusting the layout or fixing a problem due to an error. Realize that developers often use free templates to advertise their cost structures, expansion versions, or club subscriptions; that's why some developers require you to leave a link to their website at the bottom of your page if you use their free templates.

What was surprising to me was that not all the free templates for Joomla! 3.x are mobile friendly, despite the fact that even the CMS's built-in templates use Responsive Web Design (RWD). In most cases this appears to be intended by the creators, as with JoomlaShine or Globbersthemes. The following is a list of resources from where you can download different kinds of free templates:

www.joomla24.com
www.joomlaos.de
www.siteground.com/joomla-templates.htm
www.bestofjoomla.com

Quite often, popular developers publish free templates on their websites; in this way they promote their brand and other products, such as modules or commercial versions of templates. Those templates always have better quality and features than others. I suggest that you download free templates only from reliable sources. Approach templates shared on discussion forums or blogs with a great deal of care, because there is a high probability that the template code has been deliberately modified; a huge proportion of templates available for free are in fact packaged with malicious code.

Summary
Hopefully, after reading this article you will have a better understanding of the features of the Joomla! Template Manager and the types of problems it is able to solve.

Resources for Article:

Further resources on this subject:
Installing and Configuring Joomla! on Local and Remote Servers [Article]
Joomla! 1.5: Installing, Creating, and Managing Modules [Article]
Tips and Tricks for Joomla! Multimedia [Article]

Introduction to RWD frameworks

Packt
29 Mar 2013
(For more resources related to this topic, see here.)

Certainly, whether you are a beginner designer or an expert, creating a responsive website from the ground up can be convoluted. This is probably because of some indispensable technical issues in RWD, such as determining the proper number of columns in the grid, calculating the percentage of the width for each column, determining the correct breakpoints, and other technicalities that usually appear in the development stage. Many threads regarding the issues of creating responsive websites are open on StackOverflow:

CSS Responsive grid 1px gap issue (http://stackoverflow.com/questions/12797183/cssresponsive-grid-1px-gap-issue)
@media queries - one rule overrides another? (http://stackoverflow.com/questions/12822984/media-queriesone-rule-overrides-another)

Why use frameworks?
Following are a few reasons why using a framework is considered a good option:

Time saver: If done right, using a framework can obviously save a lot of time. A framework generally comes with predefined styles and rules, such as the width of the grid, the button styles, font sizes, form styles, CSS reset, and other aspects of building a website. So, we don't have to repeat the same process from the beginning, but simply follow the instructions to apply the styles and structure the markup. Bootstrap, for example, is equipped with grid styles (http://twitter.github.com/bootstrap/scaffolding.html), basic styles (http://twitter.github.com/bootstrap/base-css.html), and user interface styles (http://twitter.github.com/bootstrap/components.html).

Community and extension: A popular framework will most likely have an active community that extends the framework's functionality. jQuery UI Bootstrap is perhaps a good example in this case; it is a theme for jQuery UI that matches the look and feel of the original Bootstrap theme. Also, Skeleton has been extended to a WordPress theme (http://themes.simplethemes.com/skeleton/) and to Drupal (http://demo.drupalizing.com/?theme=skeleton).

Cross browser compatibility: The task of ensuring that a web page displays properly on different browsers is a really painful one. With a framework, we can minimize this hurdle, since the developers have most likely done this job before the framework was released publicly. Foundation is a good example in this case; it has been tested in the iOS, Android, and Windows Phone 7 browsers (http://foundation.zurb.com/docs/support.html).

Documentation: A good framework also comes with documentation. The documentation will be very helpful when we are working with a team, to get members on the same page and make them follow the standard code-writing convention. Bootstrap (http://twitter.github.com/bootstrap/getting-started.html) and Foundation (http://foundation.zurb.com/docs/index.php), for example, have provided detailed documentation on how to use the framework.

There are actually many responsive frameworks to choose from, such as Skeleton, Bootstrap, and Foundation. Let's take a look.

Skeleton
Skeleton (http://www.getskeleton.com/) is a minimal responsive framework; if you have been working with the 960.gs framework (http://960.gs/), Skeleton should immediately look familiar. Skeleton is 960 pixels wide with 16 columns in its basic grid; the only difference is that the grid is now responsive through the integration of CSS3 media queries.
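To give a feel for the syntax, here is a minimal sketch of a two-column layout on Skeleton's 16-column grid. The class names (container, eleven columns, five columns) are from Skeleton's version 1 documentation, while the particular column split is just my own example:

<!-- columns inside a row should add up to sixteen -->
<div class="container">
  <div class="eleven columns">
    <p>Main content</p>
  </div>
  <div class="five columns">
    <p>Sidebar</p>
  </div>
</div>

Below Skeleton's tablet breakpoint, these columns stretch to the full width and stack vertically, which is what makes the grid responsive.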
In case this is the first time you have heard about 960.gs or grid systems, you can follow the screencast tutorial by Jeffrey Way available at http://learncss.tutsplus.com/lesson/css-frameworks/. In this screencast, he shows how the grid system works and also guides you through creating a website with 960.gs. It is a good place to start with grid systems.

Bootstrap
Bootstrap (http://twitter.github.com/bootstrap/) was originally built by Mark Otto (http://markdotto.com) and intended only for internal use at Twitter. Short story: Bootstrap was then released to the public as free software. In its early development, the responsive feature was not yet included; it was added in Version 2 in response to the increasing demand for RWD.

Bootstrap has many more features than Skeleton. It is packed with styled user interface components for the interfaces commonly used on a website, such as buttons, navigation, pagination, and forms. Beyond that, Bootstrap is also powered by some custom jQuery plugins, such as a tab, carousel, popover, and modal box.

To get started with Bootstrap, you can follow the tutorial series (http://www.youtube.com/playlist?list=PLA615C8C2E86B555E) by David Cochran (https://twitter.com/davidcochran). He has thoroughly explained everything from the basics to utilizing the plugins in this series.

Bootstrap has been associated with Twitter so far, but since the author has departed from Twitter and Bootstrap itself has grown beyond expectation, Bootstrap is likely to be separated from the Twitter brand as well (http://blog.getbootstrap.com/2012/09/29/onward/).

Foundation
Foundation (http://foundation.zurb.com) was built by a team at ZURB (http://www.zurb.com/about/), a product design agency based in California. Similar to Bootstrap, Foundation is more than just a responsive CSS framework; it is equipped with predefined styles for common web user interface elements, such as buttons (http://foundation.zurb.com/docs/components/buttons.html), navigation (http://foundation.zurb.com/docs/components/top-bar.html), and forms. In addition, it is powered up with some jQuery plugins. A few high-profile brands, such as Pixar (http://projection.pixar.com/) and National Geographic Channel (http://globalcloset.education.nationalgeographic.com/), have built their websites on top of this framework.

Who is using these frameworks?
Now, apart from the two high-profile names we have mentioned in the preceding section, it will be nice to see what other brands and websites have been doing with these frameworks, to get inspired. Let's take a look.

Hivemind
Hivemind is a design firm based in Wisconsin. Their website (www.ourhivemind.com) has been built using Skeleton. As befits the Skeleton framework, their website is very neat, simple, and well structured. The following screenshot shows how it responds in different viewport sizes:

Living.is
Living.is (http://living.is) is a social sharing website for living room stuff, ideas, and inspiration, such as sofas, chairs, and shelves. Their website has been built using Bootstrap. If you have examined the Bootstrap UI components yourself, you will immediately recognize this from the button styles. The following screenshot shows how the Living.is page is displayed in the large viewport size:

When viewed in a smaller viewport, the menu navigation is collapsed into a navigation button with three stripes, as shown in the following screenshot.
This approach now seems to be a popular practice, and this type of button is generally recognized as a navigation button; the new Google Chrome website has also applied this button approach in its new release. When we click or tap on this button, it expands the navigation downward, as shown in the following screenshot:

To get more inspiration from websites that are built with Bootstrap, you can visit http://builtwithbootstrap.com/. However, the websites listed are not all responsive.

Swizzle
Swizzle (www.getswizzle.com) is an online service and design studio based in Canada. Their website is built on Foundation. The following screenshot shows how it is displayed in the large viewport size:

Swizzle used a different way to deliver their navigation in a smaller viewport. Rather than expanding the menu as Bootstrap does, Swizzle replaces the menu navigation with a MENU link that refers to the navigation at the footer.

The cons
Using a framework also comes with its own problems. The most common problems found when adopting a framework are as follows:

Excessive code: Since a framework is likely to be used widely, it needs to cover every design scenario, and so it also comes with extra styles that you might not need for your website. Surely, you can sort through the styles and remove them, but this process, depending on the framework, could take a lot of time and could also be a painful task.

Learning curve: The first time, it is likely that you will need to spend some time learning how the framework works, including examining the CSS classes, the IDs, and the names, and structuring the HTML properly. But this will probably only happen on your first try and won't be an issue once you are familiar with the framework.

Less flexibility: A framework comes with almost everything set up, including the grid width, button styles, and border radius, and follows the standards of its developers. If things don't work the way we want them to, changing them could take a lot of time, and if it is not done properly, it could ruin the rest of the code structure.

Other designers may also have particular issues regarding the use of a framework; you can follow the discussion on this matter further at http://stackoverflow.com/questions/203069/what-is-the-best-css-framework-and-are-they-worth-the-effort. The CSS-Tricks forum has also opened a similar thread on this topic at http://css-tricks.com/forums/discussion/11904/css-frameworks-the-pros-and-cons/p1.

Summary
In this article we discussed some basics of Responsive Web Design frameworks.

Resources for Article:

Further resources on this subject:
Creating mobile friendly themes [Article]
Tips and Tricks for Getting Started with OpenGL and GLSL 4.0 [Article]
Debugging REST Web Services [Article]