
How-To Tutorials - Web Development


How to create a Tax Rule in Magento

Packt
27 Oct 2009
11 min read
In this article by William Rice, we will see how to create Tax Rules in Magento. In the real world, the tax rate that you pay is based on three things: location, product type, and purchaser type. In Magento, we can create Tax Rules that determine the amount of tax that a customer pays, based upon the shipping address, product class, and customer class.

When you buy a product, you sometimes pay sales tax on that product. The sales tax that you pay is based on:

- Where you purchased the product from. Tax rules vary in different cities, states, and countries.
- The type of product that you purchased. For example, many places don't tax clothing purchases. And, some places tax only some kinds of clothing. This means that you must be able to apply different tax rates to different kinds of products.
- The type of purchaser you are. For example, if you buy a laser printer for your home, it is likely that you will pay sales tax. This is because you are a retail customer. If you buy the same printer for your business, in most places you will not pay sales tax. This is because you are a business customer.
- The amount of the purchase. For example, some places tax clothing purchases only above a specific amount.

Anatomy of a Tax Rule

A Tax Rule is a combination of the tax rate, shipping address, product class, customer class, and amount of purchase. A Tax Rule states that you pay this amount of tax if you are this class of purchaser, and you bought this class of product for this amount, and are shipping it to this place. The components of a Tax Rule are shown in the following screenshot. This screen is found under Sales | Tax | Manage Tax Rules | Add New Tax Rule. You will see the Name of the Tax Rule while working in the backend.

Customer Tax Class

Customer Tax Class is the type of customer that is making a purchase. Before creating a Tax Rule, you will need to have at least one Customer Tax Class. Magento provides you with a Customer Tax Class called Retail Customer. If you serve different types of customers—retail, business, and nonprofit—you will need to create different Customer Tax Classes.

Product Tax Class

Product Tax Class is the type of Product that is being purchased. When you create a Product, you will assign a Product Tax Class to that Product. Magento comes with two Product Tax Classes: Taxable Goods and Shipping. The class Shipping is applied to shipping charges because some places charge sales tax on shipping. If your customers' sales tax is different for different types of Products, then you will need to create a Product Tax Class for each type of Product.

Tax Rate

Tax Rate is a combination of place, or tax zone, and percentage. A zone can be a country, state, or zip code. Each zone that you specify can have up to five sales tax percentages. For example, in the default installation of Magento, there is one tax rate for the zone New York. This is 8.3750 percent, and applies to retail customers. The following window can be found at Sales | Tax | Manage Tax Zones & Rates and then clicking on US-NY-*-Rate 1.

So in the screenshot of our Tax Rule, the Tax Rate US-NY-*-Rate 1 doesn't mean "a sales tax of 1 percent." It means "Tax rate number 1 for New York, which is 8.3750 percent." In this scenario, New York charges 8.3750 percent sales tax on retail sales.
If New York does not charge sales tax for wholesale customers, and you sell to wholesale customers, then you will need to create another Tax Rate for New York. Whenever a zone has different sales taxes for different types of products or customers, you will need to create different Tax Rates for that zone.

Priority

If several Tax Rules try to apply several Tax Rates at the same time, how should Magento handle them? Should it add them all together? Or, should it apply one rate, calculate the total, and then apply the second rate to that total? That is, should Magento add them or compound them?

For example, suppose you sell a product in Philadelphia, Pennsylvania. Further suppose that according to the Tax Rule for Pennsylvania, the sales tax for that item is 6 percent, and that the Tax Rule for Philadelphia adds another 1 percent. In this case, you want Magento to add the two sales taxes. So, you would give the two Tax Rates the same Priority. By contrast, Tax Rates that belong to Tax Rules with different Priorities are compounded. The Tax Rate with the higher Priority (the lower number) is applied, and the next higher Priority is applied to that total, and so on.

Sort Order

Sort Order determines the Tax Rules' position in the list of Tax Rules.

Why create Tax Rules now?

Why create a Tax Rule now, before adding our first Product? When you add a Product to your store, you put that Product into a Category, assign an Attribute Set, and select a Tax Class for that Product. By default, Magento comes with two Product Tax Classes and one Tax Rule already created. The Product Tax Classes are Taxable Goods and Shipping. The Tax Rule is Retail Customer-Taxable Goods-Rate 1. If you sell anything other than taxable goods, or sell to anyone other than retail customers, you will need to create a new Tax Rule to cover that situation.

Creating a Tax Rule

The process for creating a Tax Rule is:

1. Create the Customer Tax Classes that you need, or confirm that you have them.
2. Create the Product Tax Classes that you need, or confirm that you have them.
3. Create the Tax Rates that you need, or confirm that you have them and that they apply to the zones that you need.
4. Create and name the Tax Rule: assign the Customer Tax Class, Product Tax Class, and Tax Rates to the Rule. Use the Priority to determine whether the Rule is added, or compounded, with other Rules. Determine the Sort Order of the Rule and save it.

Each of these steps is covered in the subsections that follow.

Time for action: Creating a Customer Tax Class

From the Admin Panel, select Sales | Tax | Customer Tax Classes. The Customer Tax Classes page is displayed. If this is a new installation, only one Class is listed, Retail Customer, as shown in the following screenshot. Click on Add New. A Customer Tax Class Information page is displayed. Enter a name for the Customer Tax Class. In our demo store, we are going to create Customer Tax Classes for Business and Nonprofit customers. Click on Save Class. Repeat these steps until all of the Customer Tax Classes that you need have been created.

What just happened?

A Tax Rule is composed of a Customer Class, Product Class, Tax Rate, and the location of the purchaser. You have just created the first part of that formula: the Customer Class.

Time for action: Creating a Product Tax Class

From the Admin Panel, select Sales | Tax | Product Tax Classes. The Product Tax Classes page is displayed. If this is a new installation, only two Classes are listed: Shipping and Taxable Goods. Click on Add New.
The Product Tax Class Information page gets displayed. Enter a name for the Product Tax Class. In our demo store, we are going to create Product Tax Classes for Food and Nonfood products. We will apply the Food class to the coffee that we sell. We will apply the Nonfood class to the mugs, coffee presses, and other coffee accessories that we sell. Click on Save Class. Repeat these steps until all of the Product Tax Classes that you need have been created.

What just happened?

A Tax Rule is composed of a Customer Class, Product Class, Tax Rate, and the location of the purchaser. You have just created the second part of that formula: the Product Class.

Creating Tax Rates

In Magento, you can create Tax Rates one at a time. You can also import Tax Rates in bulk. Each method is covered in the next section.

Time for action: Creating a Tax Rate in Magento

From the Admin Panel, select Sales | Tax | Manage Tax Zones & Rates. The Manage Tax Rates page is displayed. If this is a new installation, only two Tax Rates are listed: US-CA-*-Rate 1 and US-NY-*-Rate 1. Click on Add New Tax Rate. The Add New Tax Rate page gets displayed.

Tax Identifier is the name that you give this Tax Rate. You will see this name when you select this Tax Rate. The example that we saw is named US-CA-*-Rate 1. Notice how this name tells you the Country, State, and Zip/Post code for the Tax Rate. (The asterisk indicates that it applies to all zip codes in California.) It also tells which rate applies. Notice that the name doesn't give the actual percentage, which is 8.25%. Instead, it says Rate 1. This is because the percentage can change when California changes its tax rate. If you include the actual rate in the name, you would need to rename this Tax Rate when California changes the rate. Another way this rate could have been named is US-CA-All-Retail. Before creating new Tax Rates, you should develop a naming scheme that works for you and your business.

Country, State, and Zip/Post Code determine the zone to which this Tax Rate applies. Magento calculates sales tax based upon the billing address, and not the shipping address. Country and State are drop-down lists. You must select from the options given to you. Zip/Post Code accepts both numbers and letters. You can enter an asterisk in this field and it will be a wild card. That is, the rate will apply to all zip/post codes in the selected country and state. You can enter a zip/post code without entering a country or state. If you do this, you should be sure that zip/post code is unique in the entire world.

Suppose you have one tax rate for all zip codes in a country/state, such as 6% for United States/Pennsylvania. Also, suppose that you want to have a different tax rate for a few zip codes in that state. In this case, you would create separate tax rates for those few zip codes. The rates for the specific zip codes would override the rates for the wild card. So in a Tax Rate, a wild card means, "All zones unless this is overridden by a specific zone."

In our demo store, we are going to create a Tax Rate for retail customers who live in the state of Pennsylvania, but not in the city of Philadelphia, as shown. Click on Save Rate. You are taken back to the Manage Tax Rates page. The Tax Rate that you just added should be listed on the page.

This procedure is useful for adding Tax Rates one at a time. However, if you need to add many Tax Rates at once, you will probably want to use the Import Tax Rates feature. This enables you to import a .csv, or a text-only file.
You usually create the file in a spreadsheet such as OpenOffice Calc or Excel. The next section covers importing Tax Rates.

What just happened?

A Tax Rule is composed of a Customer Class, Product Class, Tax Rate, and the location of the purchaser. You have just created the third part of that formula: the Tax Rate. The Tax Rate included the location and the percentage of tax. You created the Tax Rate by manually entering the information into the system, which is suitable if you don't have too many Tax Rates to type.

Time for action: Exporting and importing Tax Rates

In my demo store, I have created a Tax Rate for the state of Pennsylvania. The Tax Rate for the city of Philadelphia is different. However, Magento doesn't enable me to choose a separate Tax Rate based on the city. So I must create a Tax Rate for each zip code in the city of Philadelphia. At this time there are 84 zip codes, which are shown here:

19019 19092 19093 19099 19101 19102 19103 19104 19105 19106 19107 19108 19109 19110 19111 19112 19113 19114 19115 19116 19118 19119 19120 19121 19122 19123 19124 19125 19126 19127 19128 19129 19130 19131 19132 19133 19134 19135 19136 19137 19138 19139 19140 19141 19142 19143 19144 19145 19146 19147 19148 19149 19150 19151 19152 19153 19154 19155 19160 19161 19162 19170 19171 19172 19173 19175 19177 19178 19179 19181 19182 19183 19184 19185 19187 19188 19191 19192 19193 19194 19196 19197 19244 19255
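As a rough illustration only: the exact column headings for an import file come from exporting your store's existing rates, so every heading and value below is an assumption rather than Magento's documented format. A spreadsheet for the first few Philadelphia zip codes might be laid out something like this before being saved as a .csv file:

    Code,Country,State,Zip/Post Code,Rate
    US-PA-19019-Rate 1,US,PA,19019,7.0000
    US-PA-19092-Rate 1,US,PA,19092,7.0000
    US-PA-19093-Rate 1,US,PA,19093,7.0000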


Getting Started with Ansible

Packt
18 Nov 2013
8 min read
(For more resources related to this topic, see here.)

First steps with Ansible

Ansible modules take arguments in key-value pairs that look like key=value, perform a job on the remote server, and return information about the job as JSON. The key-value pairs allow the module to know what to do when requested. The data returned from the module lets Ansible know if anything changed or if any variables should be changed or set afterwards. Modules are usually run within playbooks, as this lets you chain many together, but they can also be used on the command line.

Previously, we used the ping command to check that Ansible had been correctly set up and was able to access the configured node. The ping module only checks that the core of Ansible is able to run on the remote machine but effectively does nothing. A slightly more useful module is called setup. This module connects to the configured node, gathers data about the system, and then returns those values. This isn't particularly handy for us while running from the command line; however, in a playbook you can use the gathered values later in other modules.

To run Ansible from the command line, you need to pass two things, though usually three. First is a host pattern to match the machine that you want to apply the module to. Second, you need to provide the name of the module that you wish to run and optionally any arguments that you wish to give to the module. For the host pattern, you can use a group name, a machine name, a glob, or a tilde (~) followed by a regular expression matching hostnames; or, to symbolize all of these, you can either use the word all or simply *.

To run the setup module on one of your nodes, you need the following command line:

    $ ansible machinename -u root -k -m setup

The setup module will then connect to the machine and give you a number of useful facts back. All the facts provided by the setup module itself are prepended with ansible_ to differentiate them from variables. The following is a table of the most common values you will use, example values, and a short description of the fields:

    Field                          Example                   Description
    ansible_architecture           x86_64                    The architecture of the managed machine
    ansible_distribution           CentOS                    The Linux or Unix distribution on the managed machine
    ansible_distribution_version   6.3                       The version of the preceding distribution
    ansible_domain                 example.com               The domain name part of the server's hostname
    ansible_fqdn                   machinename.example.com   The fully qualified domain name of the managed machine
    ansible_interfaces             ["lo", "eth0"]            A list of all the interfaces the machine has, including the loopback interface
    ansible_kernel                 2.6.32-279.el6.x86_64     The kernel version installed on the managed machine
    ansible_memtotal_mb            996                       The total memory in megabytes available on the managed machine
    ansible_processor_count        1                         The total CPUs available on the managed machine
    ansible_virtualization_role    guest                     Whether the machine is a guest or a host machine
    ansible_virtualization_type    kvm                       The type of virtualization setup on the managed machine

These variables are gathered using Python from the host system; if you have facter or ohai installed on the remote node, the setup module will execute them and return their data as well. As with other facts, ohai facts are prepended with ohai_ and facter facts with facter_. While the setup module doesn't appear to be too useful on the command line, once you start writing playbooks, it will come into its own.
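As a brief illustration of that last point, here is a minimal playbook sketch, not taken from the article, that reuses one of the gathered facts; the webservers group name and the site.yml filename are assumptions made for the example:

    ---
    # site.yml - minimal sketch: reuse facts gathered automatically by the setup
    # module at the start of a play. The "webservers" group is assumed to exist
    # in your inventory file.
    - hosts: webservers
      tasks:
        - name: report the distribution running on each managed node
          debug: msg="This node runs {{ ansible_distribution }} {{ ansible_distribution_version }}"

You could run it with ansible-playbook site.yml -u root -k, mirroring the options used for the ad hoc commands above.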
If all the modules in Ansible did as little as the setup and the ping modules, we would not be able to change anything on the remote machine. Almost all of the other modules that Ansible provides, such as the file module, allow us to actually configure the remote machine.

The file module can be called with a single path argument; this will cause it to return information about the file in question. If you give it more arguments, it will try to alter the file's attributes and tell you if it has changed anything. Ansible modules will almost always tell you if they have changed anything, which becomes more important when you are writing playbooks.

You can call the file module, as shown in the following command, to see details about /etc/fstab:

    $ ansible machinename -u root -k -m file -a 'path=/etc/fstab'

The preceding command should elicit a response like the following:

    machinename | success >> {
        "changed": false,
        "group": "root",
        "mode": "0644",
        "owner": "root",
        "path": "/etc/fstab",
        "size": 779,
        "state": "file"
    }

Or use a command like the following to create a new test directory in /tmp:

    $ ansible machinename -u root -k -m file -a 'path=/tmp/test state=directory mode=0700 owner=root'

The preceding command should return something like the following:

    machinename | success >> {
        "changed": true,
        "group": "root",
        "mode": "0700",
        "owner": "root",
        "path": "/tmp/test",
        "size": 4096,
        "state": "directory"
    }

The second command will have the changed variable set to true if the directory doesn't exist or has different attributes. When run a second time, the value of changed should be false, indicating that no changes were required.

There are several modules that accept similar arguments to the file module, and one such example is the copy module. The copy module takes a file on the controller machine, copies it to the managed machine, and sets the attributes as required. For example, to copy the /etc/fstab file to /tmp on the managed machine, you will use the following command:

    $ ansible machinename -u root -k -m copy -a 'src=/etc/fstab dest=/tmp/fstab mode=0700 owner=root'

The preceding command, when run the first time, should return something like the following:

    machinename | success >> {
        "changed": true,
        "dest": "/tmp/fstab",
        "group": "root",
        "md5sum": "fe9304aa7b683f58609ec7d3ee9eea2f",
        "mode": "0700",
        "owner": "root",
        "size": 637,
        "src": "/root/.ansible/tmp/ansible-1374060150.96-77605185106940/source",
        "state": "file"
    }

There is also a module called command that will run any arbitrary command on the managed machine. This lets you configure it with any arbitrary command, such as a preprovided installer or a self-written script; it is also useful for rebooting machines. Please note that this module does not run the command within the shell, so you cannot perform redirection, use pipes, expand shell variables, or background commands.

Ansible modules strive to prevent changes being made when they are not required. This is referred to as idempotency and can make running commands against multiple servers much faster. Unfortunately, Ansible cannot know if your command has changed anything or not, so to help it be more idempotent you have to give it some help. It can do this via either the creates or the removes argument. If you give a creates argument, the command will not be run if the filename argument exists. The opposite is true of the removes argument; if the filename exists, the command will be run.
You run the command as follows:

    $ ansible machinename -m command -a 'rm -rf /tmp/testing removes=/tmp/testing'

If there is no file or directory named /tmp/testing, the command output will indicate that it was skipped, as follows:

    machinename | skipped

Otherwise, if the file did exist, it will look as follows:

    ansibletest | success | rc=0 >>

Often it is better to use another module in place of the command module. Other modules offer more options and can better capture the problem domain they work in. For example, it would be much less work for Ansible, and also for the person writing the configurations, to use the file module in this instance, since the file module will recursively delete something if the state is set to absent. So, this command would be equivalent to the following command:

    $ ansible machinename -m file -a 'path=/tmp/testing state=absent'

If you need to use features usually available in a shell while running your command, you will need the shell module. This way you can use redirection, pipes, or job backgrounding. You can pick which shell to use with the executable argument. The shell module also supports the creates argument, but does not support the removes argument. You can use the shell module as follows:

    $ ansible machinename -m shell -a '/opt/fancyapp/bin/installer.sh > /var/log/fancyappinstall.log creates=/var/log/fancyappinstall.log'

Summary

In this article, we have covered which installation type to choose, installing Ansible, and how to build an inventory file to reflect your environment. After this, we saw how to use Ansible modules in an ad hoc style for simple tasks. Finally, we discussed how to learn which modules are available on your system and how to use the command line to get instructions for using a module.

Resources for Article:

Further resources on this subject:

- Configuring Manage Out to DirectAccess Clients [Article]
- Creating and configuring a basic mobile application [Article]
- Deploying Applications and Software Updates on Microsoft System Center 2012 Configuration Manager [Article]


What is Drupal?

Packt
30 Oct 2013
3 min read
(For more resources related to this topic, see here.)

Currently, Drupal is being used as a CMS in the domains listed below:

- Arts
- Banking and Financial
- Beauty and Fashion
- Blogging
- Community
- E-Commerce
- Education
- Entertainment
- Government
- Health Care
- Legal Industry
- Manufacturing and Energy
- Media
- Music
- Non-Profit
- Publishing
- Social Networking
- Small Business

The diversity that Drupal offers is the reason for its growing popularity. Drupal is written in PHP. PHP is an open source, server-side scripting language, and it has changed the technological landscape to a great extent. The Economist, Examiner.com, and The White House websites have been developed in Drupal.

System requirements

Disk space: A minimum installation requires 15 megabytes. 60 MB is needed for a website with many contributed modules and themes installed. Keep in mind that you need much more for the database, files uploaded by users, media, backups, and other files.

Web server: Apache, Nginx, or Microsoft IIS.

Database: Drupal 6: MySQL 4.1 or higher, or PostgreSQL 7.1. Drupal 7: MySQL 5.0.15 or higher with PDO, PostgreSQL 8.3 or higher with PDO, or SQLite 3.3.7 or higher. Microsoft SQL Server and Oracle are supported by additional modules.

PHP: Drupal 6: PHP 4.4.0 or higher (5.2 recommended). Drupal 7: PHP 5.2.5 or higher (5.3 recommended). Drupal 8: PHP 5.3.10 or higher.

How to create multiple websites using Drupal

Multi-site allows you to share a single Drupal installation (including core code, contributed modules, and themes) among several sites. One of the greatest features of Drupal is the multi-site feature: a single Drupal installation can be used for various websites. The multi-site feature is also helpful in managing code during upgrades. Each site will have its own content, settings, enabled modules, and enabled theme.

When to use the multi-site feature? If the sites are similar in functionality (they use the same modules or the same Drupal distribution), you should use the multi-site feature. If the functionality is different, don't use multi-site.

To create a new site using a shared Drupal code base, you must complete the following steps (a command-line sketch of these steps appears at the end of this article):

1. Create a new database for the site (if there is already an existing database, you can also use it by defining a prefix in the installation procedure).
2. Create a new subdirectory of the 'sites' directory with the name of your new site (see below for information on how to name the subdirectory).
3. Copy the file sites/default/default.settings.php into the subdirectory you created in the previous step.
4. Rename the new file to settings.php.
5. Adjust the permissions of the new site directory.
6. Make symbolic links if you are using a subdirectory such as packtpub.com/subdir and not a subdomain such as subd.example.com.
7. In a Web browser, navigate to the URL of the new site and continue with the standard Drupal installation procedure.

Summary

This article discussed, in brief, the Drupal platform and the requirements for installing it.

Resources for Article:

Further resources on this subject:

- Drupal Web Services: Twitter and Drupal [Article]
- Drupal and Ubercart 2.x: Install a Ready-made Drupal Theme [Article]
- Drupal 7 Module Development: Drupal's Theme Layer [Article]
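As a supplement to the multi-site steps listed above, here is a rough command-line sketch. It assumes a Drupal root at /var/www/drupal and a new site reached at sub.example.com; the paths, hostname, and permissions are illustrative only and should be adapted to your own server.

    # Hedged sketch of the multi-site steps; adjust paths and permissions as needed.
    cd /var/www/drupal
    mkdir sites/sub.example.com
    # Copy the default settings file into the new site directory and rename it.
    cp sites/default/default.settings.php sites/sub.example.com/settings.php
    # Let the installer write to the file (tighten the permissions again afterwards).
    chmod 666 sites/sub.example.com/settings.php
    # If the site lives under a path such as example.com/subdir rather than a
    # subdomain, create a symbolic link back to the Drupal root:
    # ln -s /var/www/drupal /var/www/drupal/subdir
    # Finally, browse to http://sub.example.com and run the standard installer.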


Eloquent… without Laravel!

Packt
23 Jul 2015
9 min read
In this article by Francesco Malatesta, author of the book Learning Laravel's Eloquent, we will learn everything about Eloquent, starting from the very basics and going through models, relationships, and other topics.

You probably started to like it and think about implementing it in your next project. In fact, creating an application without a single SQL query is tempting. Maybe you also showed it to your boss and convinced him/her to use it in your next production project. However, there is a little problem. Yeah, the next project isn't so new. It already exists, and, despite everything, it doesn't use Laravel! You start to shiver.

This is so sad because you spent the last week studying this new ORM, a really cool one, and then moving forward. There is always a solution! You are a developer! Also, the solution is not so hard to find. If you want, you can use Eloquent without Laravel. Actually, Laravel is not a monolithic framework. It is made up of several separate parts, which are combined together to build something greater. However, nothing prevents you from using only selected packages in another application.

(For more resources related to this topic, see here.)

So, what are we going to see in this article? First of all, we will explore the structure of the database package and see what is inside it. Then, you will learn how to install the illuminate/database package separately for your project and how to configure it for the first use. Then, you will encounter some examples. First of all, we will look at the Eloquent ORM. You will learn how to define models and use them. Having done this, as a little extra, I will show you how to use the Query Builder (remember that the "illuminate/database" package isn't just Eloquent). Maybe you would also enjoy the Schema Builder class. I will cover it, don't worry! We will cover the following:

- Exploring the directory structure
- Installing and configuring the database package
- Using the ORM
- Using the Query and Schema Builders
- Summary

Exploring the directory structure

As I mentioned before, the key step in order to use Eloquent in your application without Laravel is to use the "illuminate/database" package. So, before we install it, let's examine it a little. You can see the package contents here: https://github.com/illuminate/database. This is what you will probably see:

- Capsule: The capsule manager is a fundamental component. It instantiates the service container and loads some dependencies.
- Connectors: The database package can communicate with various DB systems, for instance, SQLite, MySQL, or PostgreSQL. Every type of database has its own connector. This is the folder in which you will find them.
- Console: The database package isn't just Eloquent with a bunch of connectors. In this specific folder, you will find everything related to console commands, such as artisan db:seed or artisan migrate.
- Eloquent: Every single Eloquent class is placed here.
- Migrations: Don't confuse this with the Console folder. Every class related to migrations is stored here. When you type artisan migrate in your terminal, you are calling a class that is placed here.
- Query: The Query Builder is placed here.
- Schema: Everything related to the Schema Builder is placed here.

In the main folder, you will also find some other files. However, don't worry, as you don't need to know what they are.
If you open the composer.json file, take a look at the following "require" section:

    "require": {
        "php": ">=5.4.0",
        "illuminate/container": "5.1.*",
        "illuminate/contracts": "5.1.*",
        "illuminate/support": "5.1.*",
        "nesbot/carbon": "~1.0"
    },

As you can see, the database package has some prerequisites that you can't avoid. However, the container is quite small, and it is the same for contracts (just some interfaces) and "illuminate/support". Eloquent uses Carbon (https://github.com/briannesbitt/Carbon) to deal with dates in a smarter way. So, if you are seeing this for the first time and you are confused, don't worry! Everything is all right. Now that you know what you can find in this package, let's see how to install it and configure it for the first time.

Installing and configuring the database package

Let's start with the setup. First of all, we will install the package using composer as usual. After that, we will configure the capsule manager in order to get started.

Installing the package

Installing the "illuminate/database" package is really easy. All you have to do is to add "illuminate/database" to the "require" section of your composer.json file, like this:

    "require": {
        "illuminate/database": "5.0.*"
    },

Then type composer update in your terminal, and wait for a few seconds. Another way is to include it with the shortcut in your project folder, obviously from the terminal:

    composer require illuminate/database

No matter which method you chose, you just installed the package.

Configuring the package

Time to use the capsule manager! In your project, you will use something like this to get started:

    use Illuminate\Database\Capsule\Manager as Capsule;

    $capsule = new Capsule;

    $capsule->addConnection([
        'driver'    => 'mysql',
        'host'      => 'localhost',
        'database'  => 'database',
        'username'  => 'root',
        'password'  => 'password',
        'charset'   => 'utf8',
        'collation' => 'utf8_unicode_ci',
        'prefix'    => '',
    ]);

    // Set the event dispatcher used by Eloquent models... (optional)
    use Illuminate\Events\Dispatcher;
    use Illuminate\Container\Container;
    $capsule->setEventDispatcher(new Dispatcher(new Container));

The config syntax I used is exactly the same as you can find in the config/database.php configuration file. The only difference is that this time you are explicitly using an instance of the capsule manager in order to do everything.

In the second part of the code, I am setting up the event dispatcher. You must do this if events are required by your project. However, events are not included by default in this package, so you will have to manually add the "illuminate/events" dependency to your composer.json file.

Now, the final step! Add this code to your setup file:

    // Make this Capsule instance available globally via static methods... (optional)
    $capsule->setAsGlobal();

    // Setup the Eloquent ORM... (optional; unless you've used setEventDispatcher())
    $capsule->bootEloquent();

With setAsGlobal() called on the capsule manager, you can set it as a global component in order to use it with static methods. You may like it or not; the choice is yours. The final line starts up Eloquent, so you will need it. However, this is also an optional instruction. In some situations, you may need the Query Builder only. Then there is nothing else to do! Your application is now configured with the database package (and Eloquent)!

Using the ORM

Using the Eloquent ORM in a non-Laravel application is not a big change. All you have to do is to declare your model as you are used to doing.
Then, you need to call it and use it as you are used to. Here is a perfect example of what I am talking about:

    use Illuminate\Database\Eloquent\Model;

    class Book extends Model {

        ...

        // some attributes here...
        protected $table = 'my_books_table';

        // some scopes here...
        public function scopeNewest()
        {
            // query here...
        }

        ...

    }

Exactly as you did with Laravel, the package you are using is the same. So, no worries! If you want to use the model you just created, then use the following:

    $books = Book::newest()->take(5)->get();

This also applies for relationships, observers, and so on. Everything is the same (a short relationship sketch appears at the end of this article). In order to use the database package and the ORM, you do exactly the same things you did in Laravel; just remember to set up the project structure in a way that follows the PSR-4 autoloading convention.

Using the Query and Schema Builders

It's not just about the ORM; with the database package, you can also use the Query and the Schema Builders. Let's discover how!

The Query Builder

The Query Builder is also very easy to use. The only difference, this time, is that you are passing through the capsule manager object, like this:

    $books = Capsule::table('books')
                 ->where('title', '=', "Michael Strogoff")
                 ->first();

However, the result is still the same. Also, if you like the DB facade in Laravel, you can use the capsule manager class in the same way:

    $book = Capsule::select('select title, pages_count from books where id = ?', array(12));

The Schema Builder

Now, without Laravel, you don't have migrations. However, you can still use the Schema Builder without Laravel. Like this:

    Capsule::schema()->create('books', function($table){
        $table->increments('id');
        $table->string('title', 30);
        $table->integer('pages_count');
        $table->decimal('price', 5, 2);
        $table->text('description');
        $table->timestamps();
    });

Previously, you used to call the create() method of the Schema facade. This time it is a little different: you use the create() method chained to the schema() method of the Capsule class. Obviously, you can use any Schema class method in this way. For instance, you could call something like the following:

    Capsule::schema()->table('books', function($table){
        $table->string('title', 50)->change();
        $table->decimal('special_price', 5, 2);
    });

And you are good to go! Remember that if you want to unlock some Schema Builder-specific features, you will need to install other dependencies. For example, do you want to rename a column? You will need the doctrine/dbal dependency package.

Summary

I decided to add this article because many people ask me how to use Eloquent without Laravel, mostly because they like the framework but they can't migrate an already started project in its entirety. Also, I think that it's cool to know, in a certain sense, what you can find under the hood. It is always just about curiosity. Curiosity opens new paths, and you have a choice to solve a problem in a new and more elegant way. In these few pages, I just scratched the surface. I want to give you some advice: explore the code. The best way to write good code is to read good code.

Resources for Article:

Further resources on this subject:

- Your First Application [article]
- Building a To-do List with Ajax [article]
- Laravel 4 - Creating a Simple CRUD Application in Hours [article]
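As a small addition to the Using the ORM section, the following is a hedged sketch of how a relationship might look with the standalone package; the Author model and the authors table are assumptions made for illustration and do not appear in the original article:

    use Illuminate\Database\Eloquent\Model;

    class Author extends Model
    {
        protected $table = 'authors';

        public function books()
        {
            // One author has many books; the syntax is identical to Laravel's.
            return $this->hasMany('Book');
        }
    }

    // Usage: fetch the five newest books by author 1, reusing the scopeNewest()
    // scope defined on the Book model earlier in the article.
    $books = Author::find(1)->books()->newest()->take(5)->get();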


Introduction to JavaScript

Packt
10 Nov 2016
15 min read
In this article by Simon Timms, author of the book Mastering JavaScript Design Patterns - Second Edition, we will explore the history of JavaScript and how it came to be the important language that it is today.

(For more resources related to this topic, see here.)

JavaScript is an evolving language that has come a long way from its inception. Possibly more than any other programming language, it has grown and changed with the growth of the World Wide Web. As JavaScript has evolved and grown in importance, the need to apply rigorous methods to its construction has also grown.

The road to JavaScript

We'll never know how language first came into being. Did it slowly evolve from a series of grunts and guttural sounds made during grooming rituals? Perhaps it developed to allow mothers and their offspring to communicate. Both of these are theories, all but impossible to prove. Nobody was around to observe our ancestors during that important period. In fact, the general lack of empirical evidence led the Linguistic Society of Paris to ban further discussions on the topic, seeing it as unsuitable for serious study.

The early days

Fortunately, programming languages have developed in recent history and we've been able to watch them grow and change. JavaScript has one of the more interesting histories of modern programming languages. During what must have been an absolutely frantic 10 days in May of 1995, a programmer at Netscape wrote the foundation for what would grow up to be modern JavaScript.

At the time, Netscape was involved in the first of the browser wars with Microsoft. The vision for Netscape was far grander than simply developing a browser. They wanted to create an entire distributed operating system making use of Sun Microsystems' recently-released Java programming language. Java was a much more modern alternative to the C++ Microsoft was pushing. However, Netscape didn't have an answer to Visual Basic. Visual Basic was an easier-to-use programming language, which was targeted at developers with less experience. It avoided some of the difficulties around memory management that make C and C++ notoriously difficult to program. Visual Basic also avoided strict typing and overall allowed more leeway.

Brendan Eich was tasked with developing Netscape's repartee to VB. The project was initially codenamed Mocha, but was renamed LiveScript before the Netscape 2.0 beta was released. By the time the full release was available, Mocha/LiveScript had been renamed JavaScript to tie it into the Java applet integration. Java applets were small applications which ran in the browser. They had a different security model from the browser itself and so were limited in how they could interact with both the browser and the local system. It is quite rare to see applets these days, as much of their functionality has become part of the browser. Java was riding a popular wave at the time and any relationship to it was played up.

The name has caused much confusion over the years. JavaScript is a very different language from Java. JavaScript is an interpreted language with loose typing, which runs primarily on the browser. Java is a language that is compiled to bytecode, which is then executed on the Java Virtual Machine. It has applicability in numerous scenarios, from the browser (through the use of Java applets), to the server (Tomcat, JBoss, and so on), to full desktop applications (Eclipse, OpenOffice, and so on). In most laypersons' minds, the confusion remains.
JavaScript turned out to be really quite useful for interacting with the web browser. It was not long until Microsoft had also adopted JavaScript into their Internet Explorer to complement VBScript. The Microsoft implementation was known as JScript.

By late 1996, it was clear that JavaScript was going to be the winning web language for the near future. In order to limit the amount of language deviation between implementations, Sun and Netscape began working with the European Computer Manufacturers Association (ECMA) to develop a standard to which future versions of JavaScript would need to comply. The standard was released very quickly (very quickly in terms of how rapidly standards organizations move), in July of 1997. On the off chance that you have not seen enough names yet for JavaScript, the standard version was called ECMAScript, a name which still persists in some circles.

Unfortunately, the standard only specified the very core parts of JavaScript. With the browser wars raging, it was apparent that any vendor that stuck with only the basic implementation of JavaScript would quickly be left behind. At the same time, there was much work going on to establish a standard Document Object Model (DOM) for browsers. The DOM was, in effect, an API for a web page that could be manipulated using JavaScript.

For many years, every JavaScript script would start by attempting to determine the browser on which it was running. This would dictate how to address elements in the DOM, as there were dramatic deviations between each browser. The spaghetti of code that was required to perform simple actions was legendary. I remember reading a year-long 20-part series on developing a Dynamic HTML (DHTML) drop-down menu such that it would work on both Internet Explorer and Netscape Navigator. The same functionality can now be achieved with pure CSS without even having to resort to JavaScript.

DHTML was a popular term in the late 1990s and early 2000s. It really referred to any web page that had some sort of dynamic content that was executed on the client side. It has fallen out of use, as the popularity of JavaScript has made almost every page a dynamic one.

Fortunately, the efforts to standardize JavaScript continued behind the scenes. Versions 2 and 3 of ECMAScript were released in 1998 and 1999. It looked like there might finally be some agreement between the various parties interested in JavaScript. Work began in early 2000 on ECMAScript 4, which was to be a major new release.

A pause

Then, disaster struck. The various groups involved in the ECMAScript effort had major disagreements about the direction JavaScript was to take. Microsoft seemed to have lost interest in the standardization effort. It was somewhat understandable, as it was around that time that Netscape self-destructed and Internet Explorer became the de-facto standard. Microsoft implemented parts of ECMAScript 4 but not all of it. Others implemented more fully-featured support, but without the market leader on board, developers didn't bother using them. Years passed without consensus and without a new release of ECMAScript.

However, as frequently happens, the evolution of the Internet could not be stopped by a lack of agreement between major players. Libraries such as jQuery, Prototype, Dojo, and Mootools papered over the major differences in browsers, making cross-browser development far easier. At the same time, the amount of JavaScript used in applications increased dramatically.
The way of GMail

The turning point was, perhaps, the release of Google's GMail application in 2004. Although XMLHTTPRequest, the technology behind Asynchronous JavaScript and XML (AJAX), had been around for about five years when GMail was released, it had not been well used. When GMail was released, I was totally knocked off my feet by how smooth it was. We've grown used to applications that avoid full reloads, but at the time, it was a revolution. To make applications like that work, a great deal of JavaScript is needed.

AJAX is a method by which small chunks of data are retrieved from the server by a client instead of refreshing the entire page. The technology allows for more interactive pages that avoid the jolt of full page reloads.

The popularity of GMail was the trigger for a change that had been brewing for a while. Increasing JavaScript acceptance and standardization pushed us past the tipping point for the acceptance of JavaScript as a proper language. Up until that point, much of the use of JavaScript was for performing minor changes to the page and for validating form input. I joke with people that in the early days of JavaScript, the only function name which was used was Validate().

Applications such as GMail that have a heavy reliance on AJAX and avoid full page reloads are known as Single Page Applications or SPAs. By minimizing the changes to the page contents, users have a more fluid experience. By transferring only a JavaScript Object Notation (JSON) payload instead of HTML, the amount of bandwidth required is also minimized. This makes applications appear to be snappier. In recent years, there have been great advances in frameworks that ease the creation of SPAs. AngularJS, backbone.js, and ember are all Model View Controller style frameworks. They have gained great popularity in the past two to three years and provide some interesting use of patterns. These frameworks are the evolution of years of experimentation with JavaScript best practices by some very smart people.

JSON is a human-readable serialization format for JavaScript. It has become very popular in recent years, as it is easier and less cumbersome than previously popular formats such as XML. It lacks many of the companion technologies and strict grammatical rules of XML, but makes up for it in simplicity.

At the same time as the frameworks using JavaScript are evolving, the language is too. 2015 saw the release of a much-vaunted new version of JavaScript that had been under development for some years. Initially called ECMAScript 6, the final name ended up being ECMAScript-2015. It brought with it some great improvements to the ecosystem. Browser vendors are rushing to adopt the standard. Because of the complexity of adding new language features to the code base, coupled with the fact that not everybody is on the cutting edge of browsers, a number of other languages that transcompile to JavaScript are gaining popularity. CoffeeScript is a Python-like language that strives to improve the readability and brevity of JavaScript. Developed by Google, Dart is being pushed by Google as an eventual replacement for JavaScript. Its construction addresses some of the optimizations that are impossible in traditional JavaScript. Until a Dart runtime is sufficiently popular, Google provides a Dart to JavaScript transcompiler. TypeScript is a Microsoft project that adds some ECMAScript-2015 and even some ECMAScript-201X syntax, as well as an interesting typing system, to JavaScript.
It aims to address some of the issues that large JavaScript projects present.

The point of this discussion about the history of JavaScript is twofold: first, it is important to remember that languages do not develop in a vacuum. Both human languages and computer programming languages mutate based on the environments in which they are used. It is a popularly held belief that the Inuit people have a great number of words for "snow", as it was so prevalent in their environment. This may or may not be true, depending on your definition for the word and exactly who makes up the Inuit people. There are, however, a great number of examples of domain-specific lexicons evolving to meet the requirements for exact definitions in narrow fields. One need look no further than a specialty cooking store to see the great number of variants of items which a layperson such as myself would call a pan.

The Sapir–Whorf hypothesis is a hypothesis within the linguistics domain, which suggests that not only is language influenced by the environment in which it is used, but also that language influences its environment. Also known as linguistic relativity, the theory is that one's cognitive processes differ based on how the language is constructed. Cognitive psychologist Keith Chen has proposed a fascinating example of this. In a very highly-viewed TED talk, Dr. Chen suggested that there is a strong positive correlation between languages that lack a future tense and those that have high savings rates (https://www.ted.com/talks/keith_chen_could_your_language_affect_your_ability_to_save_money/transcript). The hypothesis at which Dr. Chen arrived is that when your language does not have a strong sense of connection between the present and the future, this leads to more reckless behavior in the present.

Thus, understanding the history of JavaScript puts one in a better position to understand how and where to make use of JavaScript. The second reason I explored the history of JavaScript is because it is absolutely fascinating to see how quickly such a popular tool has evolved. At the time of writing, it has been about 20 years since JavaScript was first built and its rise to popularity has been explosive. What more exciting thing is there than to work in an ever-evolving language?

JavaScript everywhere

Since the GMail revolution, JavaScript has grown immensely. The renewed browser wars, which pit Internet Explorer and Edge against Chrome and against Firefox, have led to building a number of very fast JavaScript interpreters. Brand new optimization techniques have been deployed and it is not unusual to see JavaScript compiled to machine-native code for the added performance it gains. However, as the speed of JavaScript has increased, so has the complexity of the applications built using it.

JavaScript is no longer simply a language for manipulating the browser, either. The JavaScript engine behind the popular Chrome browser has been extracted and is now at the heart of a number of interesting projects such as Node.js. Node.js started off as a highly asynchronous method of writing server-side applications. It has grown greatly and has a very active community supporting it. A wide variety of applications have been built using the Node.js runtime. Everything from build tools to editors have been built on the base of Node.js. Recently, the JavaScript engine for Microsoft Edge, ChakraCore, was also open sourced and can be embedded in NodeJS as an alternative to Google's V8.
SpiderMonkey, the Firefox equivalent, is also open source and is making its way into more tools.

JavaScript can even be used to control microcontrollers. The Johnny-Five framework is a programming framework for the very popular Arduino. It brings a much simpler approach to programming these devices than the traditional low-level languages used for them. Using JavaScript and Arduino opens up a world of possibilities, from building robots to interacting with real-world sensors.

All of the major smartphone platforms (iOS, Android, and Windows Phone) have an option to build applications using JavaScript. The tablet space is much the same, with tablets supporting programming using JavaScript. Even the latest version of Windows provides a mechanism for building applications using JavaScript. JavaScript is becoming one of the most important languages in the world. Although language usage statistics are notoriously difficult to calculate, every single source which attempts to develop a ranking puts JavaScript in the top 10:

    Language index      Rank of JavaScript
    Langpop.com         4
    Statisticbrain.com  4
    Codeval.com         6
    TIOBE               8

What is more interesting is that most of these rankings suggest that the usage of JavaScript is on the rise.

The long and short of it is that JavaScript is going to be a major language in the next few years. More and more applications are being written in JavaScript and it is the lingua franca for any sort of web development. Jeff Atwood, developer of the popular Stack Overflow website, created Atwood's Law regarding the wide adoption of JavaScript:

"Any application that can be written in JavaScript, will eventually be written in JavaScript" – Atwood's Law, Jeff Atwood

This insight has been proven to be correct time and time again. There are now compilers, spreadsheets, word processors—you name it—all written in JavaScript.

As the applications which make use of JavaScript increase in complexity, the developer may stumble upon many of the same issues as have been encountered in traditional programming languages: how can we write this application to be adaptable to change? This brings us to the need for properly designing applications. No longer can we simply throw a bunch of JavaScript into a file and hope that it works properly. Nor can we rely on libraries such as jQuery to save ourselves. Libraries can only provide additional functionality and contribute nothing to the structure of an application. At least some attention must now be paid to how to construct the application to be extensible and adaptable. The real world is ever-changing, and any application that is unable to change to suit the changing world is likely to be left in the dust. Design patterns provide some guidance in building adaptable applications, which can shift with changing business needs.

Summary

JavaScript has an interesting history and is really coming of age. With server-side JavaScript taking off and large JavaScript applications becoming common, there is a need for more diligence in building JavaScript applications.
For more information on JavaScript, you can check other books by Packt, mentioned as follows:

- Mastering JavaScript Promises: https://www.packtpub.com/application-development/mastering-javascript-promises
- Mastering JavaScript High Performance: https://www.packtpub.com/web-development/mastering-javascript-high-performance
- JavaScript: Functional Programming for JavaScript Developers: https://www.packtpub.com/web-development/javascript-functional-programming-javascript-developers

Resources for Article:

Further resources on this subject:

- API with MongoDB and Node.js [article]
- Tips & Tricks for Ext JS 3.x [article]
- Saying Hello! [article]


Working with Tooltips

Packt
23 Dec 2013
6 min read
(For more resources related to this topic, see here.)

The jQuery team introduced their version of the tooltip as part of the changes to Version 1.9 of the library; it was designed to act as a direct replacement for the standard tooltip used in all browsers. The difference here, though, was that whilst you can't style the standard tooltip, jQuery UI's replacement is intended to be accessible, themeable, and completely customizable. It has been set to display not only when a control receives focus, but also when you hover over that control, which makes it easier to use for keyboard users.

Implementing a default tooltip

Tooltips were built to act as direct replacements for the browser's native tooltips. They will recognize the default markup of the title attribute in a tag, and use it to automatically add the additional markup required for the widget. The target selector can be customized, though, using tooltip's items and content options. Let's first have a look at the basic structure required for implementing tooltips.

In a new file in your text editor, create the following page:

    <!DOCTYPE HTML>
    <html>
    <head>
    <meta charset="utf-8">
    <title>Tooltip</title>
    <link rel="stylesheet" href="development-bundle/themes/redmond/jquery.ui.all.css">
    <style>
    p { font-family: Verdana, sans-serif; }
    </style>
    <script src="js/jquery-2.0.3.js"></script>
    <script src="development-bundle/ui/jquery.ui.core.js"></script>
    <script src="development-bundle/ui/jquery.ui.widget.js"></script>
    <script src="development-bundle/ui/jquery.ui.position.js"></script>
    <script src="development-bundle/ui/jquery.ui.tooltip.js"></script>
    <script>
    $(document).ready(function($){
      $(document).tooltip();
    });
    </script>
    </head>
    <body>
    <p>Lorem ipsum dolor sit amet, consectetur adipiscing elit. Nulla blandit mi quis imperdiet semper. Fusce vulputate venenatis fringilla. Donec vitae facilisis tortor. Mauris dignissim nibh ac justo ultricies, nec vehicula ipsum ultricies. Mauris molestie felis ligula, id tincidunt urna consectetur at. Praesent <a href="http://www.ipsum.com" title="This was generated from www.ipsum.com">blandit</a> faucibus ante ut semper. Pellentesque non tristique nisi. Ut hendrerit tempus nulla, sit amet venenatis felis lobortis feugiat. Nam ac facilisis magna. Praesent consequat, risus in semper imperdiet, nulla lorem aliquet nisi, a laoreet nisl leo rutrum mauris.</p>
    </body>
    </html>

Save the code as tooltip1.html in your jqueryui working folder. Let's review what was used. The following script and CSS resources are needed for the default tooltip widget configuration:

- jquery.ui.all.css
- jquery-2.0.3.js
- jquery.ui.core.js
- jquery.ui.widget.js
- jquery.ui.tooltip.js

The script required to create a tooltip, when using the title element in the underlying HTML, can be as simple as this, which should be added after the last <script> element in your code, as shown in the previous example:

    <script>
    $(document).ready(function($){
      $(document).tooltip();
    });
    </script>

In this example, when hovering over the link, the library adds in the requisite aria-describedby code for screen readers into the HTML link. The widget then dynamically generates the markup for the tooltip, and appends it to the document, just before the closing </body> tag. This is automatically removed as soon as the target element loses focus.

ARIA, or Accessible Rich Internet Applications, provides a way to make content more accessible to people with disabilities.
You can learn more about this initiative at https://developer.mozilla.org/en-US/docs/Accessibility/ARIA.

It is not necessary to only use the $(document) element when adding tooltips. Tooltips will work equally well with classes or selector IDs; using a selector ID will give a finer degree of control.

Overriding the default styles

When styling the Tooltip widget, we are not limited to merely using the prebuilt themes on offer; we can always elect to override existing styles with our own. In our next example, we'll see how easy this is to accomplish by making some minor changes to the example from tooltip1.html.

In a new document, add the following styles, and save it as tooltipOverride.css within the css folder:

p { font-family: Verdana, sans-serif; }
.ui-tooltip {
  background: #637887;
  color: #fff;
}

Don't forget to link to the new style sheet from the <head> of your document:

<link rel="stylesheet" href="css/tooltipOverride.css">

Before we continue, it is worth mentioning a great trick for styling tooltips before committing the results to code. If you are using Firefox, you can download and install the Toggle JS add-on, which is available from https://addons.mozilla.org/en-US/firefox/addon/toggle-js/. This allows us to switch off JavaScript on a per-page basis; we can then hover over the link to create the tooltip, before expanding the markup in Firebug and styling it at our leisure.

Save your HTML document as tooltip2.html. When you run the page in a browser, you should see the modified tooltip appear when hovering over the link in the text.

Using prebuilt themes

If creating completely new styles by hand is overkill for your needs, you can always elect to use one of the prebuilt themes that are available for download from the jQuery UI site. This is a really easy change to make. We first need to download a copy of the replacement theme; in our example, we're going to use one called Excite Bike. Start by browsing to http://jqueryui.com/download/ and deselecting the Toggle All option. We don't need to download the whole library, just the theme at the bottom; change the theme option to display Excite Bike and then select Download.

Next, open a copy of tooltip2.html and look for this line:

<link rel="stylesheet" href="development-bundle/themes/redmond/jquery.ui.all.css">

The highlighted word, redmond, is the name of the existing theme. Change this to excite-bike, save the document as tooltip3.html, and remove the tooltipOverride.css link, and you're all set. The result is our replacement theme in action.

With a single change of word, we can switch between any of the prebuilt themes available for use with jQuery UI (or indeed any of the custom ones that others have made available online), as long as you have downloaded and copied the theme into the appropriate folder. There may be occasions, though, where we need to tweak the settings. This gives us the best of both worlds, where we only need to concentrate on making the required changes. Let's take a look at how we can alter an existing theme using ThemeRoller.
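As noted earlier, the tooltip's items and content options let you control which elements receive tooltips and where their text comes from. The following is a minimal sketch of that idea; the data-hint attribute and the selector used here are assumptions for illustration only, not part of the book's examples:

<script>
  $(document).ready(function($){
    // Pick up elements that carry either a data-hint or a title attribute,
    // and build the tooltip text from whichever one is present.
    $(document).tooltip({
      items: "[data-hint], [title]",
      content: function() {
        var element = $(this);
        if (element.is("[data-hint]")) {
          return element.attr("data-hint");
        }
        return element.attr("title");
      }
    });
  });
</script>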
Packt
23 May 2014
10 min read

3D Websites

Creating engaging scenes

There is no adopted style for a 3D website. No metaphor can best describe the process of designing the 3D web. Perhaps what we know the most is what does not work. Often, our initial concept is to model the real world. An early design from years ago involved a university that wanted to use its campus map to navigate through its website. One found oneself dragging the mouse repeatedly, as fast as one could, just to get to the other side of campus. A better design would have been a bookshelf where everything was in front of you. To view the chemistry department, just grab the chemistry book and click on the virtual pages to view the faculty, curriculum, and other department information. If you needed to cross-reference this with the math department's upcoming schedule, you could just grab the math book. Each attempt adds to our knowledge and gets us closer to something better.

What we know is what most other applications of computer graphics learned: reality might be a starting point, but we should not let it interfere with creativity. 3D for the sake of recreating the real world limits our innovative potential. Following this starting point, strip out the parts bound by physics, such as support beams or poles that serve no purpose in a virtual world. Such items make the rendering slower just by existing. Once we break these bounds, the creative process takes over: perhaps a whimsical version, a parody, something dark and scary, or a world-emphasizing story. Characters in video games and animated movies take on stylized features; the characters are purposely unrealistic or exaggerated. Some of the best animations to exhibit this are Chris Landreth's The Spine and Ryan (Academy Award for best animated short film in 2004), along with his earlier work in psychologically driven animation, where the characters break apart under the ravages of personal failure (https://www.nfb.ca/film/ryan).

This demonstration will describe some of the more difficult technical issues involved with lighting, normal maps, and the efficient sharing of 3D models. The following scene uses 3D models and texture maps from previous demonstrations, but with techniques that are more complex.

Engage thrusters

This scene has two lampposts and three brick walls, yet we only read in the texture map and 3D mesh for one of each and then reuse the same models several times. This has the obvious advantage that we do not need to read in the same 3D models several times, saving download time and using less memory. A new function, copyObject(), was created; it currently sits inside the main WebGL file, although it can be moved to mesh3dObject.js. In webGLStart(), after the original objects were created, we call copyObject(), passing along the original object with a unique name, location, rotation, and scale.
In the following code, we copy the original streetLight0Object into a new streetLight1Object:

streetLight1Object = copyObject( streetLight0Object, "streetLight1",
    streetLight1Location, [1, 1, 1], [0, 0, 0] );

Inside copyObject(), we first create the new mesh and then set the unique name, location (translation), rotation, and scale:

function copyObject(original, name, translation, scale, rotation) {
    meshObjectArray[ totalMeshObjects ] = new meshObject();
    newObject = meshObjectArray[ totalMeshObjects ];
    newObject.name = name;
    newObject.translation = translation;
    newObject.scale = scale;
    newObject.rotation = rotation;

The object to be copied is named original. We will not need to set up new buffers, since the new 3D mesh can point to the same buffers as the original object:

    newObject.vertexBuffer = original.vertexBuffer;
    newObject.indexedFaceSetBuffer = original.indexedFaceSetBuffer;
    newObject.normalsBuffer = original.normalsBuffer;
    newObject.textureCoordBuffer = original.textureCoordBuffer;
    newObject.boundingBoxBuffer = original.boundingBoxBuffer;
    newObject.boundingBoxIndexBuffer = original.boundingBoxIndexBuffer;
    newObject.vertices = original.vertices;
    newObject.textureMap = original.textureMap;

We do need to create a new bounding box matrix, since it is based on the new object's unique location, rotation, and scale. In addition, meshLoaded is set to false. At this stage, we cannot determine whether the original mesh and texture map have been loaded, since that is done in the background:

    newObject.boundingBoxMatrix = mat4.create();
    newObject.meshLoaded = false;
    totalMeshObjects++;
    return newObject;
}

There is just one more inclusion, inside drawScene(), to inform us that the original 3D mesh and texture map(s) have been loaded:

streetLightCover1Object.meshLoaded = streetLightCover0Object.meshLoaded;
streetLightCover1Object.textureMap = streetLightCover0Object.textureMap;

This is set each time a frame is drawn, and thus is redundant once the mesh and texture map have been loaded, but the additional code is a very small hit in performance. Similar steps are performed for the original brick wall and its two copies.

Most of the scene is programmed using fragment shaders. There are four lights: the two streetlights, the neon Products sign, and the moon, which sets and rises. The brick wall uses normal maps, but it is more complex here with the use of spotlights and light attenuation, where the light fades over a distance. The faint moonlight, however, does not fade over a distance.

Opening scene with four light sources: two streetlights, the Products neon sign, and the moon

This program has only three shaders: LightsTextureMap, used by the brick wall with a texture normal map; Lights, used for any object that is illuminated by one or more lights; and Illuminated, used by the light sources such as the moon, neon sign, and streetlight covers. The simplest of these fragment shaders is Illuminated. It consists of a texture map and the illuminated color, uLightColor. For many objects, the texture map would simply be a white placeholder. However, the moon uses a texture map, available for free from NASA, that must be merged with its color:

vec4 fragmentColor = texture2D(uSampler, vec2(vTextureCoord.s, vTextureCoord.t));
gl_FragColor = vec4(fragmentColor.rgb * uLightColor, 1.0);

The light color also serves another purpose, as it will be passed on to the other two fragment shaders, since each light adds its own individual color: off-white for the streetlights, gray for the moon, and pink for the neon sign.
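To make the role of uLightColor a little more concrete, here is a rough sketch of how such a uniform might be fed from the JavaScript side before each illuminated object is drawn. Only the uniform name comes from the shader above; the gl context, the shaderProgram variable, the color values, and the drawObject() calls are assumptions for illustration:

// Sketch only: set uLightColor per light source before drawing it.
var uLightColorLoc = gl.getUniformLocation(shaderProgram, "uLightColor");

gl.uniform3fv(uLightColorLoc, [1.0, 0.97, 0.9]);  // off-white streetlight covers
// drawObject(streetLightCover0Object);

gl.uniform3fv(uLightColorLoc, [0.6, 0.6, 0.6]);   // gray moon
// drawObject(moonObject);

gl.uniform3fv(uLightColorLoc, [1.0, 0.4, 0.7]);   // pink neon Products sign
// drawObject(productsSignObject);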
The next step is to use the shaderLights fragment shader. We begin by setting the ambient light, which is a dim light added to every pixel, usually about 0.1, so nothing is pitch black. Then, we make a call to the calculateLightContribution() function for each of our four light sources (two streetlights, the moon, and the neon sign):

void main(void) {
    vec3 lightWeighting = vec3(uAmbientLight, uAmbientLight, uAmbientLight);
    lightWeighting += uStreetLightColor *
        calculateLightContribution(uSpotLight0Loc, uSpotLightDir, false);
    lightWeighting += uStreetLightColor *
        calculateLightContribution(uSpotLight1Loc, uSpotLightDir, false);
    lightWeighting += uMoonLightColor *
        calculateLightContribution(uMoonLightPos, vec3(0.0, 0.0, 0.0), true);
    lightWeighting += uProductTextColor *
        calculateLightContribution(uProductTextLoc, vec3(0.0, 0.0, 0.0), true);

All four calls to calculateLightContribution() are multiplied by the light's color (white for the streetlights, gray for the moon, and pink for the neon sign). The parameters in the call to calculateLightContribution(vec3, vec3, vec3, bool) are the location of the light, its direction, the pixel's normal, and a flag indicating a point light. The flag is true for a point light that illuminates in all directions, or false for a spotlight that points in a specific direction. Since point lights such as the moon or neon sign have no direction, their direction parameter is not used and is therefore set to a default, vec3(0.0, 0.0, 0.0).

The vec3 lightWeighting value accumulates the red, green, and blue light colors at each pixel. However, these values cannot exceed the maximum of 1.0 for red, green, and blue. Colors greater than 1.0 are unpredictable, depending on the graphics card, so the red, green, and blue light colors must be capped at 1.0:

    if ( lightWeighting.r > 1.0 ) lightWeighting.r = 1.0;
    if ( lightWeighting.g > 1.0 ) lightWeighting.g = 1.0;
    if ( lightWeighting.b > 1.0 ) lightWeighting.b = 1.0;

Finally, we calculate the pixels based on the texture map. Only the street and streetlight posts use this shader, and neither has any tiling, but the multiplication by uTextureMapTiling was included in case there was tiling. The fragmentColor based on the texture map is multiplied by lightWeighting, the accumulation of our four light sources, for the final color of each pixel:

    vec4 fragmentColor = texture2D(uSampler,
        vec2(vTextureCoord.s * uTextureMapTiling.s,
             vTextureCoord.t * uTextureMapTiling.t));
    gl_FragColor = vec4(fragmentColor.rgb * lightWeighting.rgb, 1.0);
}

In the calculateLightContribution() function, we begin by determining the angle between the light's direction and the pixel's normal. The dot product is the cosine between the light's direction to the pixel and the pixel's normal, which is also known as Lambert's cosine law (http://en.wikipedia.org/wiki/Lambertian_reflectance):

vec3 distanceLightToPixel = vec3(vPosition.xyz - lightLoc);
vec3 vectorLightPosToPixel = normalize(distanceLightToPixel);
vec3 lightDirNormalized = normalize(lightDir);
float angleBetweenLightNormal = dot( -vectorLightPosToPixel, vTransformedNormal );

A point light shines in all directions, but a spotlight has a direction and an expanding cone of light surrounding this direction. For a pixel to be lit by a spotlight, that pixel must be in this cone of light.
This is the beam width area where the pixel receives the full amount of light, which fades out towards the cut-off angle, that is, the angle beyond which there is no more light coming from this spotlight.

With texture maps removed, we reveal the value of the dot product between the pixel normal and the direction of the light

if ( pointLight ) {
    lightAmt = 1.0;
}
else {
    // spotlight
    float angleLightToPixel = dot( vectorLightPosToPixel, lightDirNormalized );
    // note, uStreetLightBeamWidth and uStreetLightCutOffAngle
    // are the cosines of the angles, not actual angles
    if ( angleLightToPixel >= uStreetLightBeamWidth ) {
        lightAmt = 1.0;
    }
    else if ( angleLightToPixel > uStreetLightCutOffAngle ) {
        lightAmt = (angleLightToPixel - uStreetLightCutOffAngle) /
                   (uStreetLightBeamWidth - uStreetLightCutOffAngle);
    }
}

After determining the amount of light at that pixel, we calculate attenuation, which is the fall-off of light over a distance. Without attenuation, the light is constant. The moon has no light attenuation since it is dim already, but the other three lights fade out towards a maximum distance. The float maxDist = 15.0; code snippet says that after 15 units, there is no more contribution from this light. If we are less than 15 units away from the light, the amount of light is reduced proportionately. For example, a pixel 10 units away from the light source receives (15-10)/15 or 1/3 the amount of light:

attenuation = 1.0;
if ( uUseAttenuation ) {
    if ( length(distanceLightToPixel) < maxDist ) {
        attenuation = (maxDist - length(distanceLightToPixel)) / maxDist;
    }
    else attenuation = 0.0;
}

Finally, we multiply the values that make up the light contribution and we are done:

lightAmt *= angleBetweenLightNormal * attenuation;
return lightAmt;

Next, we must account for the brick wall's normal map using the shaderLightsNormalMap-fs fragment shader. The normal is equal to rgb * 2 - 1. For example, rgb (1.0, 0.5, 0.0), which is orange, would become the normal (1.0, 0.0, -1.0). This normal is then converted to a unit value, or normalized, to (0.707, 0, -0.707):

vec4 textureMapNormal = vec4(
    (texture2D(uSamplerNormalMap,
        vec2(vTextureCoord.s * uTextureMapTiling.s,
             vTextureCoord.t * uTextureMapTiling.t)) * 2.0) - 1.0 );
vec3 pixelNormal = normalize(uNMatrix * normalize(textureMapNormal.rgb) );

A normal mapped brick (without the red brick texture image) reveals how changing the pixel normal alters the shading with various light sources

We call the same calculateLightContribution() function, but we now pass along pixelNormal, calculated using the normal texture map:

calculateLightContribution(uSpotLight0Loc, uSpotLightDir, pixelNormal, false);

From here, much of the code is the same, except that we use pixelNormal in the dot product to determine the angle between the normal and the light sources:

float angleLightToTextureMap = dot( -vectorLightPosToPixel, pixelNormal );

Now, angleLightToTextureMap replaces angleBetweenLightNormal because we are no longer using the vertex normal embedded in the 3D mesh's .obj file; instead, we use the pixel normal derived from the normal texture map file, brickNormalMap.png.

A normal mapped brick wall with various light sources

Objective complete – mini debriefing

This comprehensive demonstration combined multiple spot and point lights, shared 3D meshes instead of loading the same 3D meshes repeatedly, and deployed normal texture maps for a real 3D brick wall appearance. The next step is to build upon this demonstration, inserting links to web pages found on a typical website.
In this example, we just identified a location for Products, using a neon sign to catch the users' attention. As a 3D website is built, we will need better ways to navigate this virtual space, and this is covered in the following section.
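To make the attenuation arithmetic above concrete outside of GLSL, here is a tiny JavaScript sketch of the same linear falloff. It is purely illustrative; only the 15-unit maximum distance comes from the shader code above:

// Sketch only: linear light falloff, mirroring the shader's attenuation.
function attenuationAt(distance, maxDist) {
  if (distance >= maxDist) {
    return 0.0;                          // beyond maxDist, no contribution
  }
  return (maxDist - distance) / maxDist; // closer pixels receive more light
}

console.log(attenuationAt(10, 15)); // 0.333..., one third, as in the example
console.log(attenuationAt(15, 15)); // 0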

Packt
20 Aug 2014
14 min read

Now You're Ready!

In this article by Ryan John, author of the book Canvas LMS Course Design, we will have a look at the key points encountered during the course-building process, along with connections to educational philosophies and practices that support technology as a powerful way to enhance teaching and learning.

As you finish teaching your course, you will be well served to export your course to keep as a backup, to upload and reteach later within Canvas to a new group of students, or to import into another LMS. After covering how to export your course, we will tie everything we've learned together through a discussion of how Canvas can help you and your students achieve educational goals while acquiring important 21st century skills. Overall, we will cover the following topics:

Exporting your course from Canvas to your computer
Connecting Canvas to education in the 21st century

Exporting your course

Now that your course is complete, you will want to export the course from Canvas to your computer. When you export your course, Canvas compiles all the information from your course and allows you to download a single file to your computer. This file will contain all of the information for your course, and you can use this file as a master template for each time you or your colleagues teach the course. Exporting your course is helpful for two main reasons:

It is wise to save a back-up version of your course on a computer. After all the hard work you have put into building and teaching your course, it is always a good decision to export your course and save it to a computer. If you are using a Free for Teachers account, your course will remain intact and accessible online until you choose to delete it. However, if you use Canvas through your institution, each institution has different procedures and policies in place regarding what happens to courses when they are complete. Exporting and saving your course will preserve your hard work and protect it from any accidental or unintended deletion.

Once you have exported your course, you will be able to import your course into Canvas at any point in the future. You are also able to import your course into other LMSs such as Moodle or BlackBoard. You might wish to import your course back into Canvas if your course is removed from your institution-specific Canvas account upon completion; you will have a copy of the course to import for the next time you are scheduled to teach the same course. You might also build and teach a course using a Free for Teachers account, and then later wish to import that version of the course into an institution-specific Canvas account or another LMS.

Exporting your course does not remove the course from Canvas—your course will still be accessible on the Canvas site unless it is automatically deleted by your institution or if you choose to delete it.

To export your entire course, complete the following steps:

Click on the Settings tab at the bottom of the left-hand side menu, as pictured in the following screenshot:
On the Settings page, look to the right-hand side menu. Click on the Export Course Content button, which is highlighted in the following screenshot:
A screen will appear asking you whether you would like to export the Course or export a Quiz. To export your entire course, select the Course option and then click on Create Export, as shown in the following screenshot:
Once you click on Create Export, a progress bar will appear.
As indicated in the message below the progress bar, the export might take a while to complete, and you can leave the page while Canvas exports the content. The following screenshot displays this progress bar and message:

When the export is complete, you will receive an e-mail from notifications@instructure.com that resembles the following screenshot. Click on the Click to view exports link in the e-mail:

A new window or tab will appear in your browser that shows your Content Exports. Below the heading of the page, you will see your course export listed with a link that reads Click here to download, as pictured in the following screenshot. Go ahead and click on the link, and the course export file will be downloaded to your computer.

Your course export file will be downloaded to your computer as a single .imscc file. You can then move the downloaded file to a folder on your computer's hard drive for later access. Your course export is complete, and you can save the exported file for later use. To access the content stored in the exported .imscc file, you will need to import the file back into Canvas or another LMS.

You might notice an option to Conclude this Course on the course Settings page if your institution has not hidden or disabled this option. In most cases, it is not necessary to conclude your course if you have set the correct course start and end dates in your Course Details. Concluding your course prevents you from altering grades or accessing course content, and you cannot unconclude your course on your own. Some institutions conclude courses automatically, which is why it is always best to export your course to preserve your work.

Now that we have covered the last how-to aspects of Canvas, let's close with some ways to apply the skills we have learned in this book to contemporary educational practices, philosophies, and requirements that you might encounter in your teaching.

Connecting Canvas to education in the 21st century

While learning how to use the features of Canvas, it is easy to forget the main purpose of Canvas' existence—to better serve your students and you in the process of education. In the midst of rapidly evolving technology, students and teachers alike require skills that are as adaptable and fluid as the technologies and new ideas swirling around them. While the development of various technologies might seem daunting, those involved in education in the 21st century have access to new and exciting tools that have never before existed. As an educator seeking to refine your craft, utilizing tools such as Canvas can help you and your students develop the skills that are becoming increasingly necessary to live and thrive in the 21st century. As attainment of these skills is indeed proving more and more valuable in recent years, many educational systems have begun to require evidence that instructors are cognizant of these skills and actively working to ensure that students are reaching valuable goals.

Enacting the Framework for 21st Century Learning

As education across the world continues to evolve through time, the development of frameworks, methods, and philosophies of teaching have shaped the world of formal education. In recent years, one such approach that has gained prominence in the United States' education systems is the Framework for 21st Century Learning, which was developed over the last decade through the work of the Partnership for 21st Century Skills (P21).
This partnership between education, business, community, and government leaders was founded to help educators provide children in Kindergarten through 12th Grade (K-12) with the skills they would need going forward into the 21st century. Though the focus of P21 is on children in grades K-12, the concepts and knowledge articulated in the Framework for 21st Century Learning are valuable for learners at all levels, including those in higher education. In the following sections, we will apply our knowledge of Canvas to the desired 21st century student outcomes, as articulated in the P21 Framework for 21st Century Learning, to brainstorm the ways in which Canvas can help prepare your students for the future.

Core subjects and 21st century themes

The Framework for 21st Century Learning describes the importance of learning certain core subjects including English, reading or language arts, world languages, the arts, Mathematics, Economics, Science, Geography, History, Government, and Civics. In connecting these core subjects to the use of Canvas, the features of Canvas and the tips throughout this book should enable you to successfully teach courses in any of these subjects. In tandem with teaching and learning within the core subjects, P21 also advocates for schools to "promote understanding of academic content at much higher levels by weaving 21st century interdisciplinary themes into core subjects." The following examples offer insight and ideas for ways in which Canvas can help you integrate these interdisciplinary themes into your course. As you read through the following suggestions and ideas, think about strategies that you might be able to implement into your existing curriculum to enhance its effectiveness and help your students engage with the P21 skills:

Global awareness: Since it is accessible from anywhere with an Internet connection, Canvas opens the opportunity for a myriad of interactions across the globe. Utilizing Canvas as the platform for a purely online course enables students from around the world to enroll in your course. As a distance-learning tool in colleges, universities, or continuing education departments, Canvas has the capacity to unite students from anywhere in the world to directly interact with one another:

You might utilize the graded discussion feature for students to post a reflection about a class reading that considers their personal cultural background and how that affects their perception of the content. Taking it a step further, you might require students to post a reply comment on other students' reflections to further spark discussion, collaboration, and cross-cultural connections. As a reminder, it is always best to include an overview of online discussion etiquette somewhere within your course—you might consider adding a "Netiquette" section to your syllabus to maintain focus and a professional tone within these discussions.

You might set up a conference through Canvas with an international colleague as a guest lecturer for a course in any subject. As a prerequisite assignment, you might ask students to prepare three questions to ask the guest lecturer to facilitate a real-time international discussion within your class.
Financial, economic, business, and entrepreneurial literacy: As the world becomes increasingly digitized, accessing and incorporating current content from the Web is a great way to incorporate financial, economic, business, and entrepreneurial literacy into your course:

In a Math course, you might consider creating a course module centered around the stock market. Within the module, you could build custom content pages offering direct instruction and introductions to specific topics. You could upload course readings and embed videos of interviews with experts with the YouTube app. You could link to live stream websites showing the movement of the markets and create quizzes to assess students' understanding.

Civic literacy: In fostering students' understanding of their role within their communities, Canvas can serve as a conduit of information regarding civic responsibilities, procedures, and actions:

You might create a discussion assignment in which students search the Internet for a news article about a current event and post a reflection with connections to other content covered in the course. Offering guidance in your instructions to address how local and national citizenship impacts students' engagement with the event or incident could deepen the nature of responses you receive. Since discussion posts are visible to all participants in your course, a follow-up assignment might be for students to read one of the articles posted by another student and critique or respond to their reflection.

Health literacy: Canvas can allow you to facilitate the exploration of health and wellness through the wide array of submission options for assignments. By utilizing the variety of assignment types you can create within Canvas, students are able to explore course content in new and meaningful ways:

In a studio art class, you can create an out-of-class assignment to be submitted to Canvas in which students research the history, nature, and benefits of art therapy online and then create and upload a video sharing their personal relationship with art and connecting it to what they have found in the art therapy stories of others.

Environmental literacy: As a cloud-based LMS, Canvas allows you to share files and course content with your students while maintaining and fostering an awareness of environmental sustainability:

In any course you teach that involves readings uploaded to Canvas, encourage your students to download the readings to their computers or mobile devices rather than printing the content onto paper. Downloading documents to read on a device instead of printing them saves paper, reduces waste, and helps foster sustainable environmental habits. For PDF files embedded into content pages on Canvas, students can click on the preview icon that appears next to the document link and read the file directly on the content page without downloading or printing anything.

Make a conscious effort to mention or address the environmental impacts of online learning versus traditional classroom settings, perhaps during a synchronous class conference or on a discussion board.

Learning and innovation skills

A number of specific elements combined can enable students to develop learning and innovation skills to prepare them for the increasingly "complex life and work environments in the 21st century."
The communication setup of Canvas allows for quick and direct interactions while offering students the opportunity to contemplate and revise their contributions before posting to the course, submitting an assignment, or interacting with other students. This flexibility, combined with the ways in which you design your assignments, can help incorporate the following elements into your course to ensure the development of learning and innovation skills:

Creativity and innovation: There are many ways in which the features of Canvas can help your students develop their creativity and innovation. As you build your course, finding ways for students to think creatively, work creatively with others, and implement innovations can guide the creation of your course assignments:

You might consider assigning groups of students to assemble a content page within Canvas dedicated to a chosen or assigned topic. Do so by creating a content page, and then enable any user within the course to edit the page. Allowing students to experiment with the capabilities of the Rich Content Editor, embedding outside content and synthesizing ideas within Canvas, allows each group's creativity to shine. As a follow-up assignment, you might choose to have students transfer the content of their content page to a public website or blog using sites such as Wikispaces, Wix, or Weebly. Once the sites are created, students can post their group site to a Canvas discussion page, where other students can view and interact with the work of their peers. Asking students to disseminate the class sites to friends or family around the globe could create international connections stemming from the creativity and innovation of your students' web content.

Critical thinking and problem solving: As your students learn to overcome obstacles and find multiple solutions to complex problems, Canvas offers a place for students to work together to develop their critical thinking and problem-solving skills:

Assign pairs of students to debate and posit solutions to a global issue that connects to topics within your course. Ask students to use the Conversations feature of Canvas to debate the issue privately, finding supporting evidence in various forms from around the Internet. Using the Collaborations feature, ask each pair of students to assemble and submit a final e-report on the topic, presenting the various solutions they came up with as well as supporting evidence in various electronic forms such as articles, videos, news clips, and websites.

Communication and collaboration: With the seemingly impersonal nature of electronic communication, communication skills are incredibly important to maintain intended meanings across multiple means of communication. As the nature of online collaboration and communication poses challenges for understanding, connotation, and meaning, honing communication skills becomes increasingly important:

As a follow-up assignment to the preceding debate suggestion, use the conferences tool in Canvas to set up a full class debate. During the debate, ask each pair of students to present their final e-report to the class, followed by a group discussion of each pair's findings, solutions, and conclusions. You might find it useful for each pair to explain their process and describe the challenges and/or benefits of collaborating and communicating via the Internet in contrast to collaborating and communicating in person.

Packt
21 Nov 2013
5 min read

Platform as a Service

Platform as a Service is a very interesting take on the traditional cloud computing models. While there are many (often conflicting) definitions of a PaaS, for all practical purposes, PaaS provides a complete platform and environment to build and host applications or services. The emphasis is clearly on providing an end-to-end, precreated environment to develop and deploy an application that automatically scales as required. PaaS packs together all the necessary components such as an operating system, database, programming language, libraries, web or application container, and a storage or hosting option. PaaS offerings vary, and their chargebacks are dependent on what is utilized by the end user. There are excellent public offerings of PaaS such as Google App Engine, Heroku, Microsoft Azure, and Amazon Elastic Beanstalk. In a private cloud offering for an enterprise, it is possible to implement a similar PaaS environment.

Out of the various possibilities, we will focus on building a Database as a Service (DBaaS) infrastructure using Oracle Enterprise Manager. DBaaS is sometimes seen as a mix of PaaS and SaaS, depending on the kind of service it provides. A DBaaS that provides services such as a database leans more towards its PaaS legacy, but if it provides a service such as Business Intelligence, it takes more of a SaaS form. Oracle Enterprise Manager enables self-service provisioning of virtualized database instances out of a common shared database instance or cluster. Oracle Database is built to be clustered, and this makes it an easy fit for a robust DBaaS platform.

Setting up the PaaS infrastructure

Before we go about implementing a DBaaS, we will need to make sure our common platform is up and working. We will now check how we can create a PaaS Zone.

Creating a PaaS Zone

Enterprise Manager groups host or Oracle VM Manager Zones into PaaS Infrastructure Zones. You will need to have at least one PaaS Zone before you can add more features into the setup. To create a PaaS Zone, make sure that you have the following:

The EM_CLOUD_ADMINISTRATOR, EM_SSA_ADMINISTRATOR, and EM_SSA_USER roles created
A software library

To set up a PaaS Infrastructure Zone, perform the following steps:

Navigate to Setup | Cloud | PaaS Infrastructure Zone.
Click on Create in the PaaS Infrastructure Zone main page.
Enter the necessary details for the PaaS Infrastructure Zone, such as Name and Description.
Based on the type of members you want to add to this zone, select one of the following member types:
Host: This option will only allow host targets to be part of this zone. Also, make sure you provide the necessary details for the placement policy constraints defined per host. These values are used to prevent overutilization of hosts which are already being heavily used. You can set a percentage threshold for Maximum CPU Utilization and Maximum Memory Allocation; any host exceeding this threshold will not be used for provisioning.
OVM Zone: This option will allow you to add Oracle Virtual Manager Zone targets.
If you select Host at this stage, you will see the following page: Click on the + button to add named credentials, and make sure you click on the Test Credentials button to verify the credential. These named credentials must be global and available on all the hosts in this zone. Click on the Add button to add target hosts to this zone.
If you selected OVM Zone in the previous screen (step 1 of 4), you will be presented with the following screen: Click on the Add button to add roles that can access this PaaS Infrastructure Zone.

Once you have created a PaaS Infrastructure Zone, you can proceed with setting up the necessary pieces for a DBaaS. However, time and again you might want to edit or review your PaaS Infrastructure Zone. To view and manage your PaaS Infrastructure Zones, navigate to Enterprise Menu | Cloud | Middleware and Database Cloud | PaaS Infrastructure Zones. From this page you can create, edit, delete, or view more details for a PaaS Infrastructure Zone. Clicking on the PaaS Infrastructure Zone link will display a detailed drill-down page with quite a few details related to that zone. The page is shown as follows:

This page shows a lot of very useful details about the zone. Some of them are listed as follows:

General: This section shows stats for this zone, such as the total number of software pools, Oracle VM zones, member types (hosts or Oracle VM Zones), and other related details.
CPU and Memory: This section gives an overview of CPU and memory utilization across all servers in the zone.
Issues: This section shows incidents and problems for the target. This is a handy summary to check if there are any issues that need attention.
Request Summary: This section shows the status of requests being processed currently.
Software Pool Summary: This section shows the name and type of each software pool in the zone.
Unallocated Servers: This section shows a list of servers that are not associated with any software pool.
Members: This section shows the members of the zone and the member type.
Service Template Summary: This section shows the service templates associated with the zone.

Summary

We saw in this article how PaaS plays a vital role in the structure of a DBaaS architecture.

Packt
08 Jul 2015
11 min read

Materials, and why they are essential

In this article by Ciro Cardoso, author of the book Lumion3D Best Practices, we will see what materials are and why they are essential. In the 3D world, materials and textures are nearly as important as the 3D geometry that composes the scene. A material defines the optical properties of an object when hit by a ray of light. In other words, a material defines how the light interacts with the surface, and textures help to control not only the color (diffuse), but also the reflections and glossiness.

It's not difficult to understand that textures are another essential part of a good material, and if your goal is to achieve believable results, you need textures or images of real elements like stone, wood, brick, and other natural elements. Textures can bring detail to your surface that would otherwise require geometry to look good. In that case, how can Lumion help you and, most importantly, what are the best practices for working with materials? Let's have a look at the following section, which will provide the answer.

A quick overview of Lumion's materials

Lumion has always had a good library of materials to assign to your 3D model. The reality is that Physically-Based Rendering (PBR) is more of a concept than a set of rules, and each render engine implements it slightly differently. The good news for us as users is that these materials follow realistic shading and lighting systems to accurately represent real-world materials. You can find excellent information regarding PBR on the following sites:

http://www.marmoset.co/toolbag/learn/pbr-theory
http://www.marmoset.co/toolbag/learn/pbr-practice
https://www.allegorithmic.com/pbr-guide

More than 600 materials are already prepared to be assigned directly to your 3D model and, by default, they should provide a realistic and appropriate material. The Lumion team has also made an effort to create a better and simpler interface, as you can see in the following screenshot:

The interface was simplified, showing only the most common and essential settings. If you need more control over the material, click on the More… button to access extra functionality. One word of caution: the material preview, which in this case is the sphere, will not reflect the changes you perform using the settings available. For example, if you change the main texture, the sphere will continue to show the previous material. A good practice when tweaking materials is to assign the material to the surface, use the viewport to check how the settings are affecting the material, and then do a quick render. The viewport will try to show the final result, but there's nothing like a quick render to see how the material really looks when Lumion does all the lighting and shading calculations.

Working with materials in Lumion – three options

There are three options for working with materials in Lumion:

Using Lumion's materials
Using the imported materials that you create in your favorite modeling application
Creating materials using Lumion's Standard material

Let's have a look at each one of these options and see how they can help you and when they best suit your project.

Using Lumion's materials

The first option is obvious; you are using Lumion and it makes sense to use Lumion's materials, but you may feel constrained by what is available in Lumion's material library.
However, instead of thinking, "I only have 600 materials and I cannot find what I need!", you need to look at the materials library as a template for creating other materials. For example, if none of the brick materials is similar to what you need, nothing stops you from using a brick material, changing the Gloss and Reflection values, and loading a new texture, creating an entirely new material. This is made possible by using the Choose Color Map button, as shown in the following screenshot:

When you click on the Choose Color Map button, a new window appears where you can navigate to the folder where the texture is saved. What about the second square, the one with a purple color? Let's see the answer in the following section.

Normal maps and their functionality

The purple square you just saw is where you can load the normal map. And what is a normal map? Firstly, a normal map is not a bump map. A bump map uses a color range from black to white and in some ways is more limited than a normal map. The following screenshots show the clear difference between these two maps:

The map on the left is a bump map, and you can see that the level of detail is not the same as what we can get with a normal map. A normal map consists of red, green, and blue colors that represent the x, y, and z coordinates. This allows a 2D image to represent depth, and Lumion uses this depth information to fake lighting details based on the color associated with the 3D coordinate.

The perks of using a normal map

Why should you use normal maps? Keep in mind that Lumion is a real-time rendering engine and, as you saw previously, there is a need to keep a balance between detail and geometry. If you add too much detail, the 3D model will look gorgeous but Lumion's performance will suffer drastically. On the other hand, you can have a low-poly 3D model and fake detail with a normal map. Using a normal map for each material has a massive impact on the final quality you can get with Lumion. Since these maps are so important, how can you create one?

Tips to create normal maps

As you will understand, we cannot cover all the different techniques to create normal maps. However, you may find something to suit your workflow in the following list:

Photoshop using an action script called nDo: Teddy Bergsman is the author of this fantastic script. It is a free script that creates a very accurate normal map from any texture you load in Photoshop, in seconds. To download and see how to install this script, visit http://www.philipk.net/ndo.html, and you can find a more detailed tutorial on how to use the nDo script at http://www.philipk.net/tutorials/ndo/ndo.html. The script has three options to create normal maps. The default option is Smooth, which gives you a blurry normal map. Then you have the Chisel Hard option to generate a very sharp and subtle normal map, but you don't have much control over the final result. The Chisel Soft option is similar to Chisel Hard except that you have full control over the intensity and bevel radius. The script also allows you to sculpt and combine several normal maps.

Using the Quixel NDO application: From the same creator, we have a more capable and optimized application called Quixel NDO. With this application, you can sculpt normal maps in real time, build your own normal maps without using textures, and preview everything with the 3DO material preview. This is quite useful because you don't have to save the normal map just to see how it looks in Lumion.
3DO (which comes free with NDO) has a physically based renderer and lets you load a 3D model to see how the texture looks. You can find more information, including a free trial, at http://quixel.se/dev/ndo.

GIMP with the normalmap plugin: If you want to use free software, a good alternative is GIMP. There is a great plugin called normalmap, which does a good job not only of creating a normal map but also of providing a preview window to see the tweaks you are making. To download this plugin, visit https://code.google.com/p/gimp-normalmap/.

Do it online with NormalMap-Online: In case you don't want to install another application, the best option is doing it online. In that case, you may want to have a look at NormalMap-Online, as shown in the following screenshot:

The process is extremely simple, as you can see from the preceding screenshot. You load the image and automatically get a normal map, and on the right-hand side there is a preview to show how the normal map and the texture work together. Christian Petry is the man behind this tool that will help you create sharp and accurate normal maps. He is a great guy, and if you like this online application, please consider supporting an application that will save you time and money. Find this online tool at http://cpetry.github.io/NormalMap-Online/. Don't forget to use a good combination of Strength and Blur/Sharp to create a well-balanced map. You need the correct amount of detail; otherwise your normal map will be too noisy.

However, Lumion, being a user-friendly application, gives you a hand on this topic by providing a tool to create a normal map automatically from any texture you import.

Creating a normal map with Lumion's relief feature

By now, creating a normal map from a texture is not something too technical or even complex, but it can be time consuming if you need to create a normal map for each texture. It is a wise move, though, because it removes the need for extra geometry detail for the model to look good. With this in mind, Lumion's team created a new feature that allows you to create a normal map for any texture you import. After loading the new texture, the next step is to click on the Create Normal Map button, as highlighted in the following screenshot:

Lumion then creates a normal map based on the texture imported, and you have the ability to invert the map by clicking on the Flip Normal Map direction button, as highlighted in the preceding screenshot. Once Lumion creates the normal map, you need a way to control how the normal map affects the material and the light. For that, you need to use the Relief slider, as shown in the following screenshot:

Using this slider is very intuitive; you only need to move the slider and see the adjustments in the viewport, since the material preview will not be updated. The previous screenshot is a good example of that, because even though we loaded a wood texture, the preview still shows a concrete material. Again, this means you can easily use the settings from one material as a base to create something completely new. But how good is the normal map that Lumion creates for you? Have a look for yourself in the following screenshot:

On the left-hand side, we have a wood floor material with a normal map that Lumion created. The right-hand side image is the same material, but the normal map was created using the free nDo script for Photoshop.
There is a big difference between the image on the left and the image on the right, and that is down to the normal maps used in this case. You can see clearly that the normal map used for the image on the right achieves the goal of bringing more detail to the surface. The difference is that the normal map Lumion creates is, in some situations, too blurry, and for that reason we end up losing detail. Before we explore a few more things regarding creating custom materials in Lumion, let's have a look at another useful feature in Lumion.

Summary

Physically based rendering materials aren't that scary, don't you agree? In reality, Lumion makes this feature almost unnoticeable by making it so simple. You learned what this feature involves and how you can take full advantage of materials that make your render more believable. You learned the importance of using normal maps and how to create them using a variety of tools for all flavors. You also saw how we can easily improve material reflections without compromising the speed and quality of the render. You also learned another key aspect of Lumion: the flexibility to create your own materials using the Standard material. The Standard material, although slightly different from the other materials available in Lumion, lets you play with the reflections, glossiness, and other settings that are essential. On top of all of this, you learned how to create textures.
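As a closing aside to the normal map discussion above, the following is a rough JavaScript sketch of the kind of calculation that generators such as nDo or NormalMap-Online perform when deriving a normal map from a grayscale height image. It is purely illustrative and is not Lumion code; the function name and the strength parameter are assumptions:

// Sketch only: derive a tangent-space normal map from a 2D array of heights (0..1),
// encoding each normal back into 0..255 RGB the way normal map generators do.
function heightMapToNormalMap(height, strength) {
  var h = height.length, w = height[0].length, out = [];
  for (var y = 0; y < h; y++) {
    out[y] = [];
    for (var x = 0; x < w; x++) {
      // central differences, clamped at the edges
      var left  = height[y][Math.max(x - 1, 0)];
      var right = height[y][Math.min(x + 1, w - 1)];
      var up    = height[Math.max(y - 1, 0)][x];
      var down  = height[Math.min(y + 1, h - 1)][x];
      var dx = (left - right) * strength;
      var dy = (up - down) * strength;
      var len = Math.sqrt(dx * dx + dy * dy + 1.0);
      // normal = (dx, dy, 1) normalized, remapped from -1..1 to 0..255
      out[y][x] = [
        Math.round((dx / len * 0.5 + 0.5) * 255),
        Math.round((dy / len * 0.5 + 0.5) * 255),
        Math.round((1.0 / len * 0.5 + 0.5) * 255)
      ];
    }
  }
  return out;
}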
Packt
27 Sep 2011
9 min read

Drupal 7 Social Networking: Managing Users and Profiles

What are we going to do and why?

Before we get started, let's take a closer look at what we are going to do in this article and why. At the moment, our users can interact with the website and contribute content, including through their own personal blog. Apart from the blog, there isn't a great deal which differentiates our users; they are simply a username with a blog! One key improvement to make now is to make provisions for customizable user profiles. Our site being a social network with a dinosaur theme, the following would be useful information to have on our users:

Details of their pet dinosaurs, including:
Name
Breed
Date of birth
Hobbies
Their details for other social networking sites; for example, links to their Facebook profile, Twitter account, or LinkedIn page
Location of the user (city / area)
Their web address (if they have their own website)

Some of these can be added to user profiles by adding new fields to profiles, using the built-in Field API; however, we will also install some additional modules to extend the default offering. Many websites allow users to upload an image to associate with their user account, either a photograph or an avatar to represent them. Drupal has provisions for this, but it has some drawbacks which can be fixed using Gravatar. Gravatar is a social avatar service through which users upload their avatar, which is then accessed by other websites that request the avatar using the user's e-mail address. This is convenient for our users, as it saves them having to upload their avatars to our site, and reduces the amount of data stored on our site, as well as the amount of data being transferred to and from our site. Since not all users will want to use a third-party service for their avatars (particularly users who are not already signed up to Gravatar), we can let them upload their own avatars if they wish, through the Upload module.

There are many other social networking sites out there which don't compete with ours and are more generalized; as a result, we might want to allow our users to promote their profiles for other social networks too. We can download and install the Follow module, which will allow users to publicize their profiles for other social networking sites on their profile on our site. Once our users get to know each other more, they may become more interested in each other's posts and topics and may wish to look up a specific user's contribution to the site. The Tracker module allows users to track one another's contributions to the site. It is a core module, which just needs to be enabled and set up. Now that we have a better idea of what we are going to do in this article, let's get started!

Getting set up

As this article covers features provided by both core modules and contributed modules (which need to be downloaded first), let's download and enable the modules first, saving us the need for continually downloading and enabling modules throughout the article. The modules which we will require are:

Tracker (core module)
Gravatar (can be downloaded from http://drupal.org/project/gravatar)
Follow (can be downloaded from http://drupal.org/project/follow)
Field_collection (can be downloaded from http://drupal.org/project/field_collection)
Entity (can be downloaded from http://drupal.org/project/entity)
Trigger (core module)

These modules can be downloaded and then the contents extracted to the /sites/all/modules folder within our Drupal installation.
Once extracted, they will be ready to be enabled within the Modules section of our admin area.

Users, roles, and permissions

Let's take a detailed look at users, roles, and permissions and how they all fit together. Users, roles, and permissions are all managed from the People section of the administration area.

User management

Within the People section, users are listed by default on the main screen. These are user accounts which are either created by us, as administrators, or created when a visitor to our site signs up for a user account. From here we can search for particular types of users, create new users, and edit users—including updating their profiles, suspending their account, or deleting them permanently from our social network. Once our site starts to gain popularity, it will become more difficult for us to navigate through the user list. Thankfully, there are search, sort, and filter features available to make this easier for us. Let's start by taking a look at our user list.

This user list shows, for each user:

Their username
If their user account is active or blocked (their status)
The roles which are associated with their account
How long they have been a member of our community
When they last accessed our site
A link to edit the user's account

Users: Viewing, searching, sorting, and filtering

Clicking on a username will take us to the profile of that particular user, allowing us to view their profile as normal. Clicking one of the headings in the user list allows us to sort the list by the field we selected. This could be particularly useful to see who our latest members are, or to allow us to see which users are blocked, if we need to reactivate a particular account. We can also filter the user list based on a particular role that is assigned to a user, a particular permission they have (by virtue of their roles), or by their status (if their account is active or blocked). This is managed from the SHOW ONLY USERS WHERE panel.

Creating a user

Within the People area, there is a link, Add user, which will allow us to create a new user account for our site. This takes us to the new user page where we are required to fill out the Username, E-mail address, and Password (twice to confirm) for the new user account we wish to create. We can also select the status of the user (Active or Blocked), any roles we wish to apply to their account, and indicate if we want to automatically e-mail the user to notify them of their new account.

Editing a user

To edit a user account, we simply need to click the edit link displayed next to the user in the user list. This takes us to a page similar to the create user screen, except that it is pre-populated with the user's details. It also contains a few other settings related to some default installed modules. As we install new modules, the page may include more options.

Inform the user! If you are planning to change a user's username, password, or e-mail address, you should notify them of the change, otherwise they may struggle the next time they try to log in!

Suspending / blocking a user

If we need to block or suspend a user, we can do this from the edit screen by updating their status to Blocked. This would prevent the user from accessing our site. For example, if a user had been posting inappropriate material, even after a number of warnings, we could block their account to prevent them from accessing the site.

Why block? Why not just delete?
If we were to simply delete a user who was troublesome on the site, they could simply sign up again (unless we went to a separate area and also blocked their e-mail address and username). Of course, the user could still sign up again using a different e-mail address and a different username, but this helps us keep things under control.

Canceling and deleting a user account

Also within the edit screen is the option to cancel a user's account: On clicking the Cancel account button, we are given a number of options for how we wish to cancel the account: The first and third options will at least keep the context of any discussions or contributions in which the user was involved. The second option will unpublish their content, so if, for example, the disappearance of certain comments or pages would have an impact on the community, we can at least re-enable them later. The final option will delete the account and all content associated with it. Finally, we can also select whether the user themselves must confirm that they wish to have their account deleted. This is particularly useful if the cancelation is in response to a request from the user to delete all of their data, as they can be given a final chance to change their mind.

Bulk user operations

For occasions when we need to perform specific operations on a range of user accounts (for example, unblocking a number of users, or adding / removing roles from specific users), we can use the Update options panel in the user list: From here we simply select the users we want to apply an action to, and then select one of the following options from the UPDATE OPTIONS list:

Unblock the selected users
Block the selected users
Cancel the selected user accounts
Add a role to the selected users
Remove a role from the selected users

Roles

Users are grouped into a number of roles, which in turn have permissions assigned to them. By default, there are three roles within Drupal:

Administrators
Anonymous users
Authenticated users

The anonymous and authenticated roles can be edited, but they cannot be renamed or deleted. We can manage user roles by navigating to People | Permissions | Roles: The edit permissions link allows us to edit the permissions associated with a specific role. To create a new role, we simply need to enter the name for the role in the text box provided and click the Add role button.
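Role and permission management can also be scripted, which is handy for repeatable site setup. The sketch below uses Drupal 7's user API; the role name and the permission strings are illustrative assumptions rather than values taken from this article (permissions are the machine names that modules define via hook_permission()).

<?php
// Create a "dinosaur keeper" role if it does not exist yet, then grant it
// a couple of example permissions.
$role_name = 'dinosaur keeper';

if (!user_role_load_by_name($role_name)) {
  $role = new stdClass();
  $role->name = $role_name;
  user_role_save($role);
}

$role = user_role_load_by_name($role_name);
user_role_grant_permissions($role->rid, array(
  'access content',
  'create blog content',
));

user_role_save() creates the role when it is new, and user_role_grant_permissions() adds permissions to whatever the role already has, so running the script more than once is harmless.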
Object-Oriented JavaScript

Packt
21 Dec 2016
9 min read
In this article by Ved Antani, author of the book Object Oriented JavaScript - Third Edition, we will learn what you need to know about object-oriented JavaScript. In this article, we will cover the following topics:

ECMAScript 5
ECMAScript 6
Object-oriented programming

(For more resources related to this topic, see here.)

ECMAScript 5

One of the most important milestones in ECMAScript revisions was ECMAScript 5 (ES5), officially accepted in December 2009. The ECMAScript 5 standard is implemented and supported on all major browsers and server-side technologies. ES5 was a major revision because, apart from several important syntactic changes and additions to the standard libraries, it also introduced several new constructs in the language. For instance, ES5 introduced some new objects and properties, and also the so-called strict mode. Strict mode is a subset of the language that excludes deprecated features. The strict mode is opt-in and not required, meaning that if you want your code to run in the strict mode, you will declare your intention using (once per function, or once for the whole program) the following string:

"use strict";

This is just a JavaScript string, and it's OK to have strings floating around unassigned to any variable. As a result, older browsers that don't speak ES5 will simply ignore it, so this strict mode is backwards compatible and won't break older browsers. For backwards compatibility, all the examples in this book work in ES3, but at the same time, all the code in the book is written so that it will run without warnings in ES5's strict mode.

Strict mode in ES6

While strict mode is optional in ES5, all ES6 modules and classes are strict by default. As you will see soon, most of the code we write in ES6 resides in a module; hence, strict mode is enforced by default. However, it is important to understand that all other constructs do not have implicit strict mode enforced. There were efforts to make newer constructs, such as arrow and generator functions, also enforce strict mode, but it was later decided that doing so would result in very fragmented language rules and code.

ECMAScript 6

The ECMAScript 6 revision took a long time to finish and was finally accepted on 17th June, 2015. ES6 features are slowly becoming part of major browsers and server technologies. It is possible to use transpilers to compile ES6 to ES5 and use the code on environments that do not yet support ES6 completely. ES6 substantially upgrades JavaScript as a language and brings in very exciting syntactical changes and language constructs. Broadly, there are two kinds of fundamental changes in this revision of ECMAScript, which are as follows:

Improved syntax for existing features and additions to the standard library; for example, classes and promises
New language features; for example, generators

ES6 allows you to think differently about your code. New syntax changes let you write code that is cleaner, easier to maintain, and does not require special tricks. The language itself now supports several constructs that previously required third-party modules. Language changes introduced in ES6 need a serious rethink of the way we have been coding in JavaScript. A note on the nomenclature: ECMAScript 6, ES6, and ECMAScript 2015 refer to the same revision, and the names are used interchangeably.

Browser support for ES6

The majority of browsers and server frameworks are on their way toward implementing ES6 features.
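To give a flavour of what those additions look like in practice, here is a small sketch (not an excerpt from the book) that combines a class, an arrow function, a template literal, and block-scoped const declarations. It runs natively in an ES6 environment, and it is exactly the kind of code a transpiler can rewrite into ES5-compatible syntax for older browsers.

'use strict';

// A class is syntactic sugar over JavaScript's prototype mechanism.
class Person {
  constructor(name) {
    this.name = name;
  }
  talk() {
    return `Hi, I am ${this.name}`; // template literal
  }
}

const bob = new Person('Bob');                // const: block-scoped, not reassignable
const shout = phrase => phrase.toUpperCase(); // arrow function

console.log(shout(bob.talk())); // HI, I AM BOB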
You can check what is supported and what is not at http://kangax.github.io/compat-table/es6/. Though ES6 is not fully supported on all browsers and server frameworks, we can start using almost all of its features with the help of transpilers. Transpilers are source-to-source compilers. ES6 transpilers allow you to write code in ES6 syntax and compile/transform it into equivalent ES5 syntax, which can then be run on browsers that do not support the entire range of ES6 features. The de facto ES6 transpiler at the moment is Babel.

Object-oriented programming

Let's take a moment to review what people mean when they say object-oriented, and what the main features of this programming style are. Here's a list of some concepts that are most often used when talking about object-oriented programming (OOP):

Object, method, and property
Class
Encapsulation
Inheritance
Polymorphism

Let's take a closer look at each one of these concepts. If you're new to the object-oriented programming lingo, these concepts might sound too theoretical, and you might have trouble grasping or remembering them from one reading. Don't worry, it does take a few tries, and the subject can be a little dry at a conceptual level.

Objects

As the name object-oriented suggests, objects are important. An object is a representation of a thing (someone or something), and this representation is expressed with the help of a programming language. The thing can be anything—a real-life object, or a more convoluted concept. Taking a common object, a cat, for example, you can see that it has certain characteristics—color, name, weight, and so on—and can perform some actions—meow, sleep, hide, escape, and so on. The characteristics of the object are called properties in OOP-speak, and the actions are called methods.

Classes

In real life, similar objects can be grouped based on some criteria. A hummingbird and an eagle are both birds, so they can be classified as belonging to some made-up Birds class. In OOP, a class is a blueprint or a recipe for an object. Another name for an object is an instance, so we can say that the eagle is one concrete instance of the general Birds class. You can create different objects using the same class because a class is just a template, while the objects are concrete instances based on the template. There's a difference between JavaScript and the classic OO languages such as C++ and Java. You should be aware right from the start that in JavaScript, there are no classes; everything is based on objects. JavaScript has the notion of prototypes, which are also objects. In a classic OO language, you'd say something like—create a new object for me called Bob, which is of class Person. In a prototypal OO language, you'd say—I'm going to take this object called Bob's dad that I have lying around (on the couch in front of the TV?) and reuse it as a prototype for a new object that I'll call Bob.

Encapsulation

Encapsulation is another OOP-related concept, which illustrates the fact that an object contains (encapsulates) the following:

Data (stored in properties)
The means to do something with the data (using methods)

One other term that goes together with encapsulation is information hiding. This is a rather broad term and can mean different things, but let's see what people usually mean when they use it in the context of OOP. Imagine an object, say, an MP3 player. You, as the user of the object, are given some interface to work with, such as buttons, display, and so on.
You use the interface in order to get the object to do something useful for you, like play a song. How exactly the device works on the inside, you don't know, and, most often, don't care. In other words, the implementation of the interface is hidden from you. The same thing happens in OOP when your code uses an object by calling its methods. It doesn't matter if you coded the object yourself or it came from some third-party library; your code doesn't need to know how the methods work internally. In compiled languages, you can't actually read the code that makes an object work. In JavaScript, because it's an interpreted language, you can see the source code, but the concept is still the same—you work with the object's interface without worrying about its implementation. Another aspect of information hiding is the visibility of methods and properties. In some languages, objects can have public, private, and protected methods and properties. This categorization defines the level of access the users of the object have. For example, only the methods of the same object have access to the private methods, while anyone has access to the public ones. In JavaScript, all methods and properties are public, but we'll see that there are ways to protect the data inside an object and achieve privacy.

Inheritance

Inheritance is an elegant way to reuse existing code. For example, you can have a generic object, Person, which has properties such as name and date_of_birth, and which also implements the walk, talk, sleep, and eat functionality. Then, you figure out that you need another object called Programmer. You can reimplement all the methods and properties that a Person object has, but it will be smarter to just say that the Programmer object inherits a Person object, and save yourself some work. The Programmer object only needs to implement more specific functionality, such as the writeCode method, while reusing all of the Person object's functionality. In classical OOP, classes inherit from other classes, but in JavaScript, as there are no classes, objects inherit from other objects. When an object inherits from another object, it usually adds new methods to the inherited ones, thus extending the old object. Often, the following phrases can be used interchangeably—B inherits from A and B extends A. Also, the object that inherits can pick one or more methods and redefine them, customizing them for its own needs. This way, the interface stays the same and the method name is the same, but when called on the new object, the method behaves differently. This way of redefining how an inherited method works is known as overriding.

Polymorphism

In the preceding example, a Programmer object inherited all of the methods of the parent Person object. This means that both objects provide a talk method, among others. Now imagine that somewhere in your code, there's a variable called Bob, and it just so happens that you don't know if Bob is a Person object or a Programmer object. You can still call the talk method on the Bob object and the code will work. This ability to call the same method on different objects, and have each of them respond in their own way, is called polymorphism.

Summary

In this article, you learned how JavaScript came to be and where it is today. You were also introduced to ECMAScript 5 and ECMAScript 6, and we discussed some of the object-oriented programming concepts.
Resources for Article: Further resources on this subject: Diving into OOP Principles [article] Prototyping JavaScript [article] Developing Wiki Seek Widget Using Javascript [article]
Flexbox in CSS

Packt
09 Mar 2016
8 min read
In this article by Ben Frain, the author of Responsive Web Design with HTML5 and CSS3, Second Edition, we will look at Flexbox and its uses. In 2015, we have better means to build responsive websites than ever. There is a new CSS layout module called Flexible Box (or Flexbox as it is more commonly known) that now has enough browser support to make it viable for everyday use. It can do more than merely provide a fluid layout mechanism. Want to be able to easily center content, change the source order of markup, and generally create amazing layouts with relative ease? Flexbox is the layout mechanism for you.

(For more resources related to this topic, see here.)

Introducing Flexbox

Here's a brief overview of Flexbox's superpowers:

It can easily vertically center contents
It can change the visual order of elements
It can automatically space and align elements within a box, automatically assigning available space between them
It can make you look 10 years younger (probably not, but in low numbers of empirical tests (me) it has been proven to reduce stress)

The bumpy path to Flexbox

Flexbox has been through a few major iterations before arriving at the relatively stable version we have today. For example, consider the changes from the 2009 version (http://www.w3.org/TR/2009/WD-css3-flexbox-20090723/), the 2011 version (http://www.w3.org/TR/2011/WD-css3-flexbox-20111129/), and the 2014 version we are basing our examples on (http://www.w3.org/TR/css-flexbox-1/). The syntax differences are marked. These differing specifications mean there are three major implementation versions. How many of these you need to concern yourself with depends on the level of browser support you need.

Browser support for Flexbox

Let's get this out of the way up front: there is no Flexbox support in Internet Explorer 9, 8, or below. For everything else you'd likely want to support (and virtually all mobile browsers), there is a way to enjoy most (if not all) of Flexbox's features. You can check the support information at http://caniuse.com/. Now, let's look at one of its uses.

Changing source order

Since the dawn of CSS, there has only been one way to switch the visual ordering of HTML elements in a web page. That was achieved by wrapping elements in something set to display: table and then switching the display property on the items within, between display: table-caption (puts it on top), display: table-footer-group (sends it to the bottom), and display: table-header-group (sends it to just below the item set to display: table-caption). However, as robust as this technique is, it was a happy accident, rather than the true intention of these settings. However, Flexbox has visual source re-ordering built in. Let's have a look at how it works. Consider this markup:

<div class="FlexWrapper">
    <div class="FlexItems FlexHeader">I am content in the Header.</div>
    <div class="FlexItems FlexSideOne">I am content in the SideOne.</div>
    <div class="FlexItems FlexContent">I am content in the Content.</div>
    <div class="FlexItems FlexSideTwo">I am content in the SideTwo.</div>
    <div class="FlexItems FlexFooter">I am content in the Footer.</div>
</div>

You can see here that the third item within the wrapper has an HTML class of FlexContent—imagine that this div is going to hold the main content for the page. OK, let's keep things simple. We will add some simple colors to more easily differentiate the sections and just get these items one under another in the same order they appear in the markup.
.FlexWrapper {     background-color: indigo;     display: flex;     flex-direction: column; }   .FlexItems {     display: flex;     align-items: center;     min-height: 6.25rem;     padding: 1rem; }   .FlexHeader {     background-color: #105B63;    }   .FlexContent {     background-color: #FFFAD5; }   .FlexSideOne {     background-color: #FFD34E; }   .FlexSideTwo {     background-color: #DB9E36; }   .FlexFooter {     background-color: #BD4932; } That renders in the browser like this:   Now, suppose we want to switch the order of .FlexContent to be the first item, without touching the markup. With Flexbox it's as simple as adding a single property/value pair: .FlexContent {     background-color: #FFFAD5;     order: -1; } The order property lets us revise the order of items within a Flexbox simply and sanely. In this example, a value of -1 means that we want it to be before all the others. If you want to switch items around quite a bit, I'd recommend being a little more declarative and add an order number for each. This makes things a little easier to understand when you combine them with media queries. Let's combine our new source order changing powers with some media queries to produce not just a different layout at different sizes but different ordering. As it's generally considered wise to have your main content at the beginning of a document, let's revise our markup to this: <div class="FlexWrapper">     <div class="FlexItems FlexContent">I am content in the Content.</div>     <div class="FlexItems FlexSideOne">I am content in the SideOne.</div>     <div class="FlexItems FlexSideTwo">I am content in the SideTwo.</div>     <div class="FlexItems FlexHeader">I am content in the Header.</div>     <div class="FlexItems FlexFooter">I am content in the Footer.</div> </div> First the page content, then our two sidebar areas, then the header and finally the footer. As I'll be using Flexbox, we can structure the HTML in the order that makes sense for the document, regardless of how things need to be laid out visually. For the smallest screens (outside of any media query), I'll go with this ordering: .FlexHeader {     background-color: #105B63;     order: 1; }   .FlexContent {     background-color: #FFFAD5;     order: 2; }   .FlexSideOne {     background-color: #FFD34E;     order: 3; }   .FlexSideTwo {     background-color: #DB9E36;     order: 4; }   .FlexFooter {     background-color: #BD4932;     order: 5; } Which gives us this in the browser:   And then, at a breakpoint, I'm switching to this: @media (min-width: 30rem) {     .FlexWrapper {         flex-flow: row wrap;     }     .FlexHeader {         width: 100%;     }     .FlexContent {         flex: 1;         order: 3;     }     .FlexSideOne {         width: 150px;         order: 2;     }     .FlexSideTwo {         width: 150px;         order: 4;     }     .FlexFooter {         width: 100%;     } } Which gives us this in the browser: In that example, the shortcut flex-flow: row wrap has been used. That allows the flex items to wrap onto multiple lines. It's one of the poorer supported properties, so depending upon how far back support is needed, it might be necessary to wrap the content and two side bars in another element. Summary There are near endless possibilities when using the Flexbox layout system and due to its inherent "flexiness", it's a perfect match for responsive design. 
If you've never built anything with Flexbox before, all the new properties and values can seem a little odd and it's sometimes disconcertingly easy to achieve layouts that have previously taken far more work. To double-check implementation details against the latest version of the specification, make sure you check out http://www.w3.org/TR/css-flexbox-1/. I think you'll love building things with Flexbox. To check out the other amazing things you can do with Flexbox, have a look at Responsive Web Design with HTML5 and CSS3, Second Edition. The book also features a plethora of other awesome tips and tricks related to responsive web design. Resources for Article: Further resources on this subject: CodeIgniter Email and HTML Table [article] ASP.Net Site Performance: Improving JavaScript Loading [article] Adding Interactive Course Material in Moodle 1.9: Part 1 [article]
IBM FileNet P8 Content Manager: Exploring Object Store-level Items

Packt
10 Feb 2011
9 min read
As with the Domain, there are two basic paths in FEM to accessing things in an Object Store. The tree-view in the left-hand panel can be expanded to show Object Stores and many repository objects within them, as illustrated in the screenshot below. Each individual Object Store has a right-click context menu. Selecting Properties from that menu will bring up a multi-tabbed pop-up panel. We'll look first at the General tab of that panel. Content Access Recording Level Ordinarily, the Content Engine (CE) server does not keep track of when a particular document's content was last retrieved because it requires an expensive database update that is often uninteresting. The ContentAccessRecordingLevel property on the Object Store can be used to enable the recording of this optional information in a document or annotation's DateContentLastAccessed property. It is off by default. It is sometimes interesting to know, for example, that document content was accessed within the last week as opposed to three years ago. Once a particular document has had its content read, there is a good chance that there will be a few additional accesses in the same neighborhood of time (not for a technical reason; rather, it's just statistically likely). Rather than record the last access time for each access, an installation can choose, via this property's value, to have access times recorded only with a granularity of hourly or daily. This can greatly reduce the number of database updates while still giving a suitable approximation of the last access time. There is also an option to update the DateContentLastAccessed property on every access. Auditing The CE server can record when clients retrieve or update selected objects. Enabling that involves setting up subscriptions to object instances or classes. This is quite similar to the event subsystem in the CE server. Because it can be quite elaborate to set up the necessary auditing configuration, it can also be enabled or disabled completely at the Object Store level. Checkout type The CE server offers two document checkout types, Collaborative and Exclusive. The difference lies in who is allowed to perform the subsequent checkin. An exclusive checkout will only allow the same user to do the checkin. Via an API, an application can make the explicit choice for one type or the other, or it can use the Object Store default value. Using the default value is often handy since a given application may not have any context for deciding one form over another. Even with a collaborative checkout, the subsequent checkin is still subject to access checks, so you can still have fine-grained control over that. In fact, because you can use fine-grained security to limit who can do a checkin, you might as well make the Object Store default be Collaborative unless you have some specific use case that demands Exclusive. Text Index Date Partitioning Most of the values on the CBR tab, as shown in the figure next, are read-only because they are established when the Content Search Engine (CSE) is first set up. One item that can be changed on a per-Object Store basis is the date-based partitioning of text index collections. Partitioning of the text index collections allows for more efficient searching of large collections because the CE can narrow its search to a specific partition or partitions rather than searching the entirety of the text index. By default, there is no partitioning. 
If you check the box to change the partition settings, the Date Property drop-down presents a list of date-valued custom properties. In the screenshot above, you see the custom properties Received On and Sent On, which are from email-related documents. Once you select one of those properties, you're offered a choice of partitioning granularity, ranging from one month up to one year. Additional text index configuration properties are available if you select the Index Areas node in the FEM tree-view, then right-click an index area entry and select Properties. Although we are showing the screenshot here for reference, your environment may not yet have a CSE or any index areas if the needed installation procedures are not complete. Cache configuration Just as we saw at the Domain level, the Cache tab allows the configuration of various cache tuning parameters for each Object Store. As we've said before, you don't want to change these values without a good reason. The default values are suitable for most situations. Metadata One of the key features of CM is that it has an extensible metadata structure. You don't have to work within a fixed framework of pre-defined document properties. You can add additional properties to the Document class, and you can even make subclasses of Document for specific purposes. For example, you might have a subclass called CustomerProfile, another subclass called DesignDocument, yet another subclass called ProductDescription, and so on. Creating subclasses lets you define just the properties you need to specialize the class to your particular business purpose. There is no need to have informal rules about where properties should be ignored because they're not applicable. There is also generally no need to have a property that marks a document as a CustomerProfile versus something else. The class provides that distinction. CM comes with a set of pre-defined system classes, and each class has a number of pre-defined system properties (many of which are shared across most system classes). There are pre-defined system classes for Document, Folder, Annotation, CustomObject, and many others. The classes just mentioned are often described as the business object classes because they are used to directly implement common business application concepts. System properties are properties for which the CE server has some specifically-coded behavior. Some system properties are used to control server behavior, and others provide reports of some kind of system state. We've seen several examples already in the Domain and Object Store objects. It's common for applications to create their own subclasses and custom properties as part of their installation procedures, but it is equally common to do similar things manually via FEM. FEM contains several wizards to make the process simpler for the administrator, but, behind the scenes, various pieces are always in play. Property templates The foundation for any custom property is a property template. If you select the Property Templates node in the tree view, you will see a long list of existing property templates. Double-clicking on any item in the list will reveal that property template's properties. A property template is an independently persistable object, so it has its own identity and security. Most system properties do not have explicit property templates. Their characteristics come about from a different mechanism internal to the CE server. 
Property templates have that name because the characteristics they define act as a pattern for properties added to classes, where the property is embodied in a property definition for a particular class. Some of the property template properties can be overridden in a property definition, but some cannot. For example, the basic data type and cardinality cannot be changed once a property template is created. On the other hand, things like settability and a value being required can be modified in the property definition. When creating a new property with no existing property template, you can either create the property template independently, ahead of time, or you can follow the FEM wizard steps for adding a property to a class. FEM will prompt you with additional panels if you need to create a property template for the property being added.

Choice lists

Most property types allow for a few simple validity checks to be enforced by the CE server. For example, an integer-valued property has an optional minimum and maximum value based on its intended use (in addition to the expected absolute constraints imposed by the integer data type). For some use cases, it is desirable to limit the allowed values to a specific list of items. The mechanism for that in the CE server is the choice list, and it's available for string-valued and integer-valued properties. If you select the Choice Lists node in the FEM tree view, you will see a list of existing top-level choice lists. The example choice lists in the screenshot below all happen to come from AddOns installed in the Object Store. Double-clicking on any item in the list will reveal that choice list's properties. A choice list is an independently persistable object, so it has its own identity and security. We've mentioned independent objects a couple of times, and more mentions are coming. For now, it is enough to think of them as objects that can be stored or retrieved in their own right. Most independent objects have their own access security. Contrast independent objects with dependent objects, which only exist within the context of some independent object. A choice list is a collection of choice objects, although a choice list may be nested hierarchically. That is, at any position in a choice list there can be another choice list rather than a simple choice. A choice object consists of a localizable display name and a choice value (a string or an integer, depending on the type of choice list). Nested choice lists can only be referenced within some top-level choice list.

Classes

Within the FEM tree view are two nodes describing classes: Document Class and Other Classes. Documents are listed separately only for user convenience (since Document subclasses occur most frequently). You can think of these two nodes as one big list. In any case, expanding the node in the tree reveals hierarchically nested subclasses. Selecting a class from the tree reveals any subclasses and any custom properties. The screenshot shows the custom class EntryTemplate, which comes from a Workplace AddOn. You can see that it has two subclasses, RecordsTemplate and WebContentTemplate, and four custom properties. When we mention a specific class or property name, like EntryTemplate, we try to use the symbolic name, which has a restricted character set and never contains spaces. The FEM screenshots tend to show display names. Display names are localizable and can contain any Unicode character.
Although the subclassing mechanism in CM generally mimics the subclassing concept in modern object-oriented programming languages, it does have some differences. You can add custom properties to an existing class, including many system classes. Although you can change some characteristics of properties on a subclass, there are restrictions on what you can do. For example, a particular string property on a subclass must have a maximum length equal to or less than that property's maximum length on the superclass.
Managing Images

Packt
14 Apr 2015
11 min read
Cats, dogs, and all sorts of memes: the Internet as we know it today is dominated by images. You can open almost any web page and you'll surely find images on it. The more interactive our web browsing experience becomes, the more images we tend to use. So, it is tremendously important to ensure that the images we use are optimized and loaded as fast as possible. We should also make sure that we choose the correct image type. In this article by Dewald Els, author of the book Responsive Design High Performance, we will talk about why image formats are important, conditional loading, visibility for DOM elements, specifying sizes, media queries, introducing sprite sheets, and caching. Let's talk basics.

(For more resources related to this topic, see here.)

Choosing the correct image format

Deciding what image format to use is usually the first step you take when you start your website. Take a look at this table for an overview and comparison of the available image formats:

Format      Features
GIF         256 colors; support for animation; transparency
PNG         256 colors; true colors; transparency
JPEG/JPG    256 colors; true colors

From the preceding formats, you can conclude that, if you had a complex image that was 1000 x 1000 pixels, the image in the JPEG format would be the smallest in file size. This also means that it would load the fastest. The smallest image is not always the best choice though. If you need to have images with transparent parts, you'll have to use the PNG or GIF formats, and if you need an animation, you are stuck with using the GIF format or the lesser-known APNG format.

Optimizing images

Optimizing your images can have a huge impact on your overall website performance. There are some great applications to help you with image optimization and compression. TinyPNG is a great example of a site that helps you to compress your PNG images online for free. They also have a Photoshop plugin that is available for download at https://tinypng.com/. Another great application to help you with JPG compression is JPEGMini. Head over to http://www.jpegmini.com/ to get a copy for either Windows or Mac OS X. Another application that is worth considering is Radical Image Optimization Tool (RIOT). It is a free program and can be found at http://luci.criosweb.ro/riot/. RIOT is a Windows application. As JPEG is not the only image format that we use on the Web, you can also look at a Mac OS X application called ImageOptim (http://www.imageoptim.com). It is also a free application and compresses both JPEG and PNG images. If you are not on Mac OS X, you can head over to https://tinypng.com/. This handy little site allows you to upload your image to the site, where it is then compressed. The optimized images are then linked to the site as downloadable files. As JPEG images make up the majority of most web pages, with some exceptions, let's take a look at how to make your images load faster.

Progressive images

Most advanced image editors, such as Photoshop and GIMP, give you the option to encode your JPEG images using either baseline or progressive. If you Save For Web using Photoshop, you will see this section at the top of the dialog box: In most cases, for use on web pages, I would advise you to use the Progressive encoding type. When you save an image using baseline, the full image data of every pixel block is written to the file one after the other. Baseline images load gradually from the top-left corner.
If you save an image using the Progressive option, it saves only a part of each of these blocks to the file, and then another part, and so on, until the entire image's information is captured in the file. When you render a progressive image, you will see a very grainy image displayed, and this will gradually become sharper as it loads. Progressive images are also smaller than baseline images for various technical reasons, which means that they load faster. In addition, they appear to load faster because something is displayed on the screen sooner. Here is a typical example of the visual difference between loading a progressive and a baseline JPEG image: Here, you can clearly see how the two encodings load in a browser. On the left, the progressive image is already displayed whereas the baseline image is still loading from the top. Alright, that was some really basic stuff, but it was extremely important nonetheless. Let's move on to conditional loading.

Adaptive images

Adaptive images are an adaptation of Filament Group's context-aware image sizing experiment. What does it do? Well, this is what the guys say about themselves: "Adaptive images detects your visitor's screen size and automatically creates, caches, and delivers device appropriate re-scaled versions of your web page's embedded HTML images. No mark-up changes needed. It is intended for use with Responsive Designs and to be combined with Fluid Images techniques." It certainly trumps the experiment in the simplicity of implementation. So, how does it work? It's quite simple. There is no need to change any of your current code. Head over to http://adaptive-images.com/download.htm and get the latest version of adaptive images. Place the adaptive-images.php file in the root of your site. Make sure to add the content of the supplied .htaccess file to your own as well. Head over to the index file of your site and add this in the <head> tags:

<script>document.cookie='resolution='+Math.max(screen.width,screen.height)+'; path=/';</script>

Note that it has to be in the <head> tag of your site. Open the adaptive-images.php file and add your media query values into the $resolutions variable. Here is a snippet of code that is pretty self-explanatory:

$resolutions   = array(1382, 992, 768, 480);
$cache_path    = "ai-cache";
$jpg_quality   = 80;
$sharpen       = TRUE;
$watch_cache   = TRUE;
$browser_cache = 60*60*24*7;

The $resolutions variable accepts the break-points that you use for your website. You can simply add the value of the screen width in pixels. So, in the preceding example, it would read 1382 pixels as the first break-point, 992 pixels as the second one, and so on. The cache path tells adaptive images where to store the generated resized images. It's a relative path from your document root, so, in this case, your folder structure would read as document_root/ai-cache/{images stored here}. The next variable, $jpg_quality, sets the quality of any generated JPG images on a scale of 0 to 100. Shrinking images could cause blurred details, so set $sharpen to TRUE to perform a sharpening process on rescaled images. When you set $watch_cache to TRUE, you force adaptive images to check that the adapted image isn't stale; that is, it ensures that updated source images are recached. Lastly, $browser_cache sets how long the browser cache should last for. The value is expressed in seconds, as seconds*minutes*hours*days (7 days by default); you can change the last number to modify the number of days.
Then,… oh wait, that's all? It is indeed! Adaptive images will work with your existing website and don't require any markup changes. They are also device-agnostic and follow a mobile-first philosophy.

Conditional loading

Responsive designs combine three main techniques, which are as follows:

Fluid grids
Flexible images
Media queries

The technique that I want to focus on in this section is media queries. In most cases, developers use media queries to change the layout, width, height, padding, font size, and so on, depending on conditions related to the viewport. Let's see how we can achieve conditional image loading using CSS3's image-set function:

.my-background-img {
    background-image: image-set(url(icon1x.jpg) 1x,
                                url(icon2x.jpg) 2x);
}

You can see in the preceding piece of CSS3 code that the image is loaded conditionally based on the display's pixel density. The second entry, url(icon2x.jpg) 2x, would load the hi-resolution or retina image. This reduces the number of CSS rules we have to create. Maintaining a site with a lot of background images can become quite a chore if a separate rule exists for each one. Here is a simple media query example:

@media screen and (max-width: 480px) {
    .container {
        width: 320px;
    }
}

As I'm sure you already know, this snippet tells the browser that, for any device with a viewport narrower than 480 pixels, any element with the class container has to be 320 pixels wide. When you use media queries, always make sure to include the viewport <meta> tag in the head of your HTML document, as follows:

<meta name="viewport" content="width=device-width, initial-scale=1">

I've included the following template here because it makes it very easy to get started with new responsive projects:

/* MOBILE */
@media screen and (max-width: 480px) {
    .container {
        width: 320px;
    }
}

/* TABLETS */
@media screen and (min-width: 481px) and (max-width: 720px) {
    .container {
        width: 480px;
    }
}

/* SMALL DESKTOP OR LARGE TABLETS */
@media screen and (min-width: 721px) and (max-width: 960px) {
    .container {
        width: 720px;
    }
}

/* STANDARD DESKTOP */
@media screen and (min-width: 961px) and (max-width: 1200px) {
    .container {
        width: 960px;
    }
}

/* LARGE DESKTOP */
@media screen and (min-width: 1201px) and (max-width: 1600px) {
    .container {
        width: 1200px;
    }
}

/* EXTRA LARGE DESKTOP */
@media screen and (min-width: 1601px) {
    .container {
        width: 1600px;
    }
}

When you view a website on a desktop, it's quite common to have a left and a right column. Generally, the left column contains information that requires more focus and the right column contains content with a bit less importance. In some cases, you might even have three columns. Take the social website Facebook as an example. At the time of writing this article, Facebook used a three-column layout, which is as follows: When you view a web page on a mobile device, you won't be able to fit all three columns into the smaller viewport. So, you'd probably want to hide some of the columns and not request the data that is usually displayed in the columns that are hidden. Alright, we've done some talking. Well, you've done some reading. Now, let's get into our code! Our goal in this section is to learn about conditional development, with the focus on images. I've constructed a little website with a two-column layout. The left column houses the content and the right column is used to populate a little news feed.
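Before we walk through the demo site, here is one way the idea of not requesting hidden content can be expressed in code. This is an illustrative sketch rather than the book's own implementation; the /news-feed.php endpoint and the news-feed element ID are assumptions made purely for the example.

// Only fetch the news feed when the viewport is wide enough to show the
// right-hand column (481px matches the tablet breakpoint used above).
function loadNewsFeed() {
  var request = new XMLHttpRequest();
  request.open('GET', '/news-feed.php', true);
  request.onload = function () {
    if (request.status === 200) {
      document.getElementById('news-feed').innerHTML = request.responseText;
    }
  };
  request.send();
}

if (window.matchMedia('(min-width: 481px)').matches) {
  loadNewsFeed();
}

With something like this in place, narrow viewports never pay the download cost of the feed, while wider ones request it only after the media query check passes.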
I made a simple PHP script that returns a JSON object with the news items. Here is a preview of the different screens that we will work on: These two views are a result of the queries that are shown in the following style sheet code:

/* MOBILE */
@media screen and (max-width: 480px) {}

/* TABLETS */
@media screen and (min-width: 481px) and (max-width: 720px) {}

Summary

Managing images is no small feat on a website. Almost all modern websites rely heavily on images to present content to their users. In this article, we looked at which image formats to use and when, and at how to optimize your images for websites. We also discussed the difference between progressive and baseline images. Conditional loading can greatly help you to load your site faster; in this article, we briefly discussed how to use it to improve your site's performance. Resources for Article: Further resources on this subject: A look into responsive design frameworks [article] Building Responsive Image Sliders [article] Creating a Responsive Project [article]