
How-To Tutorials - Web Development

1802 Articles


Introducing Magento extension development

Packt
10 Oct 2013
13 min read
Creating Magento extensions can be an extremely challenging and time-consuming task depending on several factors such as your knowledge of Magento internals, overall development skills, and the complexity of the extension functionality itself. Having a deep insight into Magento internals, its structure, and accompanying tips and tricks will provide you with a strong foundation for clean and unobtrusive Magento extension development. The word unobtrusive should be a constant thought throughout your entire development process. The reason is simple; given the massiveness of the Magento platform, it is way too easy to build extensions that clash with other third-party extensions. This is usually a beginner's flaw, which we will hopefully avoid once we have finished reading this article. The examples listed in this article are targeted towards Magento Community Edition 1.7.0.2. Version 1.7.0.2 is the latest stable release at the time of this writing. Throughout this article we will be referencing our URL examples as if they are executing on the magento.loc domain. You are free to set your local Apache virtual host and hosts file to any domain you prefer, as long as you keep this in mind. If you're hearing about virtual host terminology for the first time, please refer to the Apache Virtual Host documentation.

Here is a quick summary of each of those files and folders:
.htaccess: This file is a directory-level configuration file supported by several web servers, most notably the Apache web server. It controls mod_rewrite for fancy URLs and sets server configuration variables such as the memory limit and PHP maximum execution time.
.htaccess.sample: This is basically a .htaccess template file used for creating new stores within subfolders.
api.php: This is primarily used for the Magento REST API, but can be used for SOAP and XML-RPC API server functionality as well.
app: This is where you will find the Magento core code files for the backend and for the frontend. This folder is basically the heart of the Magento platform. Later on, we will dive into this folder for more details, given that this is the folder that you as an extension developer will spend most of your time on.
cron.php: This file, when triggered via URL or via console PHP, will trigger certain Magento cron job logic.
cron.sh: This file is a Unix shell script version of cron.php.
downloader: This folder is used by the Magento Connect Manager, which is the functionality you access from the Magento administration area by navigating to System | Magento Connect | Magento Connect Manager.
errors: This folder hosts a slightly separate piece of Magento functionality, the one that jumps in with error handling when your Magento store gets an exception during code execution.
favicon.ico: This is your standard 16 x 16 px website icon.
get.php: This file hosts a feature that allows core media files to be stored and served from the database. With the Database File Storage system in place, Magento redirects requests for media files to get.php.
includes: This folder is used by the Mage_Compiler extension, whose functionality can be accessed from the Magento administration area by navigating to System | Tools | Compilation. The idea behind the Magento compiler feature is that you end up with a PHP system that pulls all of its classes from one folder, thus giving it a massive performance boost.
index.php: This is the main entry point to your application, the main loader file for Magento, and the file that initializes everything. Every request for every Magento page goes through this file.
index.php.sample: This file is just a backup copy of the index.php file.
js: This folder holds the core Magento JavaScript libraries, such as Prototype, script.aculo.us, ExtJS, and a few others, some of which are from Magento itself.
lib: This folder holds the core Magento PHP libraries, such as 3DSecure, Google Checkout, phpseclib, Zend, and a few others, some of which are from Magento itself.
LICENSE*: These are the Magento license files in various formats (LICENSE_AFL.txt, LICENSE.html, and LICENSE.txt).
mage: This is the Magento Connect command-line tool. It allows you to add/remove channels, install and uninstall packages (extensions), and perform various other package-related tasks.
media: This folder contains all of the media files, mostly just images from various products, categories, and CMS pages.
php.ini.sample: This file is a sample php.ini file for PHP CGI/FastCGI installations. Sample files are not actually used by the Magento application.
pkginfo: This folder contains text files that largely operate as debug files to inform us about changes when extensions are upgraded in any way.
RELEASE_NOTES.txt: This file contains the release notes and changes for various Magento versions, starting from version 1.4.0.0 onwards.
shell: This folder contains several PHP-based shell tools, such as the compiler, indexer, and logger.
skin: This folder contains various CSS and JavaScript files specific to individual Magento themes. Files in this folder and its subfolders go hand in hand with files in the app/design folder, as these two locations together make up one fully featured Magento theme or package.
var: This folder contains sessions, logs, reports, configuration cache, lock files for application processes, and possibly various other files distributed among individual subfolders. During development, you can freely select all the subfolders and delete them, as Magento will recreate all of them on the next page request. From the standpoint of a Magento extension developer, you might find yourself looking into the var/log and var/report folders every now and then.

Code pools
The code folder is a placeholder for what is called a codePool in Magento. Usually, there are three code pools in Magento, that is, three subfolders: community, core, and local. The formula for your extension code location should be something like app/code/community/YourNamespace/YourModuleName/ or app/code/local/YourNamespace/YourModuleName/. There is a simple rule as to whether to choose the community or local codePool:
Choose the community codePool for extensions that you plan to share across projects, or possibly upload to Magento Connect.
Choose the local codePool for extensions that are specific to the project you are working on and won't be shared with the public.
For example, let's imagine that our company name is Foggyline and the extension we are building is called Happy Hour. As we wish to share our extension with the community, we can put it into a folder such as app/code/community/Foggyline/HappyHour/.

The theme system
In order to successfully build extensions that visually manifest themselves to the user either on the backend or frontend, we need to get familiar with the theme system. The theme system is comprised of two distributed parts: one found under the app/design folder and the other under the root skin folder. Files found under the app/design folder are PHP template files and XML layout configuration files.
Within the PHP template files you can find a mix of HTML, PHP, and some JavaScript. There is one important thing to know about Magento themes: they have a fallback mechanism. For example, if someone in the administration interface sets the configuration to use a theme called hello from the default package, and that theme is missing, for example, the app/design/frontend/default/hello/template/catalog/product/view.phtml file in its structure, Magento will use app/design/frontend/default/default/template/catalog/product/view.phtml from the default theme; and if that file is missing as well, Magento will fall back to the base package for the app/design/frontend/base/default/template/catalog/product/view.phtml file. Firstly, all your layout and view files should go under the /app/design/frontend/default/default directory. Secondly, you should never overwrite an existing .xml layout or .phtml template file from within the /app/design/frontend/default/default directory; rather, create your own. For example, imagine you are doing some product image switcher extension, and you conclude that you need to do some modifications to the app/design/frontend/default/default/template/catalog/product/view/media.phtml file. A more valid approach would be to create a proper XML layout update file with handles that rewrite the media.phtml usage to, let's say, media_product_image_switcher.phtml.

The model, resource, and collection
A model represents the data of your application and, to a certain extent, its business logic. Models in Magento take the Object Relational Mapping (ORM) approach, thus having the developer strictly deal with objects while their data is automatically persisted to the database. If you are hearing about ORM for the first time, please take some time to familiarize yourself with the concept. Theoretically, you could write and execute raw SQL queries in Magento. However, doing so is not advised, especially if you plan on distributing your extensions. There are two types of models in Magento:
Basic Data Model: This is a simpler model type, sort of like an Active Record pattern-based model. If you're hearing about Active Record for the first time, please take some time to familiarize yourself with the concept.
EAV (Entity-Attribute-Value) Data Model: This is a complex model type, which enables you to dynamically create new attributes on an entity.
As the EAV Data Model is significantly more complex than the Basic Data Model, and the Basic Data Model will suffice most of the time, we will focus on the Basic Data Model and everything important surrounding it. Each data model you plan to persist to the database, that is, each model that represents an entity, needs to have four files in order for it to work fully:
The model file: This extends the Mage_Core_Model_Abstract class. This represents a single entity, its properties (fields), and possible business logic within it.
The model resource file: This extends the Mage_Core_Model_Resource_Db_Abstract class. This is your connection to the database; think of it as the thing that saves your entity properties (fields) to the database.
The model collection file: This extends the Mage_Core_Model_Resource_Db_Collection_Abstract class. This is your collection of several entities, a collection that can be filtered, sorted, and manipulated.
The installation script file: In its simplest definition, this is the PHP file through which you, in an object-oriented way, create your database table(s).
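To make the relationship between those files more concrete, here is a rough sketch of the first three of them for the hypothetical Foggyline_HappyHour extension mentioned earlier. The class names, the foggyline_happyhour/bar alias, and the entity_id column are illustrative assumptions, and the matching config.xml declarations and installation script are omitted:

<?php
// app/code/community/Foggyline/HappyHour/Model/Bar.php
class Foggyline_HappyHour_Model_Bar extends Mage_Core_Model_Abstract
{
    protected function _construct()
    {
        // Bind this model to its resource model via the config alias
        $this->_init('foggyline_happyhour/bar');
    }
}

// app/code/community/Foggyline/HappyHour/Model/Resource/Bar.php
class Foggyline_HappyHour_Model_Resource_Bar extends Mage_Core_Model_Resource_Db_Abstract
{
    protected function _construct()
    {
        // Table alias and primary key column of the underlying table
        $this->_init('foggyline_happyhour/bar', 'entity_id');
    }
}

// app/code/community/Foggyline/HappyHour/Model/Resource/Bar/Collection.php
class Foggyline_HappyHour_Model_Resource_Bar_Collection
    extends Mage_Core_Model_Resource_Db_Collection_Abstract
{
    protected function _construct()
    {
        // Tell the collection which model it holds
        $this->_init('foggyline_happyhour/bar');
    }
}

With these pieces in place, code elsewhere in the extension could load an entity through Mage::getModel('foggyline_happyhour/bar')->load($id) and persist changes with save(), without ever writing raw SQL.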
The default Magento installation comes with several built-in shipping methods available: Flat Rate, Table Rates, Free Shipping, UPS, USPS, FedEx, and DHL. For some merchants this is more than enough; for others, you are free to build an additional custom shipping extension with support for one or more shipping methods. Be careful about the terminology here. A shipping method resides within a shipping extension. A single extension can define one or more shipping methods. In this article we will learn how to create our own shipping method.

Shipping methods
There are two, unofficially divided, types of shipping methods:
Static, where shipping cost rates are based on a predefined set of rules. For example, you can create a shipping method called 5+ and make it available to the customer for selection during checkout only if he has added more than five products to the cart.
Dynamic, where retrieval of shipping cost rates comes from various shipping providers. For example, you have a web service called ABC Shipping that exposes a SOAP web service API which accepts the products' weight, length, height, width, and shipping address, and returns the calculated shipping cost, which you can then show to your customer.
Experienced developers would probably expect one or more PHP interfaces to handle the implementation of new shipping methods. The same goes for Magento; implementing a new shipping method is done via an interface and via proper configuration.

The default Magento installation comes with several built-in payment methods available: PayPal, Saved CC, Check/Money Order, Zero Subtotal Checkout, Bank Transfer Payment, Cash On Delivery payment, Purchase Order, and Authorize.Net. For some merchants this is more than enough. Various additional payment extensions can be found on Magento Connect. For those that do not yet exist, you are free to build an additional custom payment extension with support for one or more payment methods. Building a payment extension is usually a non-trivial task that requires a lot of focus.

Payment methods
There are several unofficially divided types of payment method implementations, such as redirect payment, hosted (on-site) payment, and an embedded iframe. Two of them stand out as the most commonly used ones:
Redirect payment: During the checkout, once the customer reaches the final ORDER REVIEW step, he/she clicks on the Place Order button. Magento then redirects the customer to a specific payment provider website where the customer is supposed to provide the credit card information and execute the actual payment. What's specific about this is that prior to redirection, Magento needs to create the order in the system, and it does so by assigning this new order a Pending status. Later, if the customer provides valid credit card information on the payment provider website, the customer gets redirected back to the Magento success page. The main concept to grasp here is that the customer might just close the payment provider website and never return to your store, leaving your order indefinitely in Pending status. The great thing about this redirect type of payment method providers (gateways) is that they are relatively easy to implement in Magento.
Hosted (on-site) payment: Unlike redirect payment, there is no redirection here. Everything is handled on the Magento store. During the checkout, once the customer reaches the Payment Information step, he/she is presented with a form for providing the credit card information. Then, when he/she clicks on the Place Order button in the last ORDER REVIEW checkout step, Magento internally calls the appropriate payment provider web service, passing it the billing information. Depending on the web service response, Magento then internally sets the order status to either Processing or some other status. For example, this payment provider web service can be a standard SOAP service with a few methods such as orderSubmit. Additionally, we don't even have to use a real payment provider; we can just make a "dummy" payment implementation like the built-in Check/Money Order payment. You will often find that most merchants prefer this type of payment method, as they believe that redirecting the customer to a third-party site might negatively affect their sale. Obviously, with this payment method there is more overhead for you as a developer to handle the implementation. On top of that, there are security concerns around handling the credit card data on the Magento side, in which case PCI compliance is obligatory. If this is your first time hearing about PCI compliance, please take some time to familiarize yourself with it. This type of payment method is slightly more challenging to implement than the redirect payment method.

Magento Connect
Magento Connect is one of the world's largest eCommerce application marketplaces, where you can find various extensions to customize and enhance your Magento store. It allows Magento community members and partners to share their open source or commercial contributions for Magento with the community. You can access the Magento Connect marketplace from the official Magento website. Publishing your extension to Magento Connect is a three-step process made up of:
Packaging your extension
Creating an extension profile
Uploading the extension package
We will talk about these steps in more detail later in the article. Only community members and partners have the ability to publish their contributions. Becoming a community member is simple; just register as a user on the official Magento website https://www.magentocommerce.com. A member account is a requirement for further packaging and publishing of your extension.

Read more:
Categories and Attributes in Magento: Part 2
Integrating Twitter with Magento
Magento Fundamentals for Developers

article-image-creating-new-characters-morphs
Packt
09 Oct 2013
7 min read
Save for later

Creating New Characters with Morphs

Packt
09 Oct 2013
7 min read
(For more resources related to this topic, see here.) Understanding morphs The word morph comes from metamorphosis, which means a change of the form or nature of a thing or person into a completely different one. Good old Franz Kafka had a field day with metamorphosis when he imagined poor Gregor Samsa waking up and finding himself changed into a giant cockroach. This concept applies to 3D modeling very well. As we are dealing with polygons, which are defined by groups of vertices, it's very easy to morph one shape into something different. All that we need to do is to move those vertices around, and the polygons will stretch and squeeze accordingly. To get a better visualization about this process, let’s bring the Basic Female figure to the scene and show it with the wireframe turned on. To do so, after you have added the Basic Female figure, click on the DrawStyle widget on the top-right portion of the 3D Viewport. From that menu, select Wire Texture Shaded. This operation changes how Studio draws the objects in the scene during preview. It doesn't change anything else about the scene. In fact, if you try to render the image at this point, the wireframe will not show up in the render. The wireframe is a great help in working with objects because it gives us a visual representation of the structure of a model. The type of wireframe that I selected in this case is superimposed to the normal texture used with the figure. This is not the only visualization mode available. Feel free to experiment with all the options in the DrawStyle menu; most of them have their use. The most useful, in my opinion, are the Hidden Line, Lit Wireframe, Wire Shaded, and Wire Texture Shaded options. Try the Wire Shaded option as well. It shows the wireframe with a solid gray color. This is, again, just for display purposes. It doesn't remove the texture from the figure. In fact, you can switch back to Texture Shaded to see Genesis fully textured. Switching the view to use the simple wireframe or the shaded wireframe is a Great way of speeding up your workflow. When Studio doesn’t have to render the textures, the Viewport becomes more responsive and all operations take less time. If you have a slow computer, using the wireframe mode is a good way of getting a faster response time. Here are the Wire Texture Shaded and Wire Shaded styles side by side: Now that we have the wireframe visible, the concept of morphing should be simpler to understand. If we pick any vertex in the geometry and we move it somewhere, the geometry is still the same, same number of polygons and same number of vertices, but the shape has shifted. Here is a practical example that shows Genesis loaded in Blender. Blender is a free, fully featured, 3D modeling program. It has extremely advanced features that compete with commercial programs sold for thousands of dollars per license. You can find more information about Blender at http://www.blender.org. Be aware that Blender is a very advanced program with a rather difficult UI. In this image, I have selected a single polygon and pulled it away from the face: In a similar way we can use programs such as modo or ZBrush to modify the basic geometry and come up with all kinds of different shapes. For example, there are people who are specialized in reproducing the faces of celebrities as morph for DAZ V4 or Genesis. What is important to understand about morphs is that they cannot add or remove any portion of the geometry. A morph only moves things around, sometimes to extreme degrees. 
Morphs for Genesis or Gen4 figures can be purchased from several websites specialized in selling content for Poser and DAZ Studio. In particular, Genesis makes it very easy to apply morphs and even to mix them together. Combining premade morphs to create new faces The standard installation of Genesis provides some interesting ways of changing its shape. Let's start a new Studio scene and add our old friend, the basic Female figure. Once Genesis is in the scene, double-click on it to select it. Now let’s take a look at a new tool, the Shaping tab. It should be visible in the right-hand side pane. Click on the Shaping tab; it should show a list of shapes available. The list should be something like this: As we can see, the Basic Female shape is unsurprisingly dialed all the way to the top. The value of each slider goes from zero, no influence, to one, full influence of the morph. Morphs are not exclusive so, for example, you can add a bit of Body Builder (scroll the list to the bottom if you don't see it) to be used in conjunction with the Basic Female morph. This will give us a muscular woman. This exercise is also giving us an insight about the Basic Female figure that we have used up to this time. The figure is basically the raw Genesis figure with the Basic Female morph applied as a preset. If we continue exploring the Shaping Editor, we can see that the various shapes are grouped by major body section. We have morphs for the shape of the head, the components of the face, the nose, eyes, mouth, and so on. Let's click on the head of Genesis and use the Camera: Frame tool to frame the head in the view. Move the camera a bit so that the face is visible frontally. We will apply a few morphs to the head to see how it can be transformed. Here is the starting point: Now let’s click on the Head category in the Shaping tab. In there we can see a slider labeled Alien Humanoid. Move the slider until it gets to 0.83. The difference is dramatic. Now let’s click on the Eyes category. In there we find two values: Eyes Height and Eyes Width. To create an out-of-this-world creature, we need to break the rules of proportions a little bit, and that means to remove the limits for a couple of parameters. Click on the gear button for the Eyes Height parameter and uncheck the Use Limits checkbox. Confirm by clicking on the Accept button. Once this is done, dial a value of 1.78 for the eyes height. The eyes should move dramatically up, toward the eyebrow. Lastly, let's change the neck; it's much too thick for an alien. Also, in this case, we will need to disable the use of limits. Click on the Neck category and disable the limits for the Neck Size parameter. Once that is done, set the neck size to -1.74. Here is the result, side by side, of the transformation. This is quite a dramatic change for something that is done with just dials, without using a 3D modeling program. It gets even better, as we will see shortly. Saving your morphs If you want to save a morph to re-use it later, you can navigate to File | Save As | Shaping Preset…. To re-use a saved morph, simply select the target figure and navigate to File | Merge… to load the previously saved preset/morph. Why is Studio using the rather confusing term Merge for loading its own files? Nobody knows for sure; it's one of those weird decisions that DAZ's developers made long ago and never changed. You can merge two different Studio scenes, but it is rather confusing to think of loading a morph or a pose preset as a scene merge. 
Try to mentally replace File | Merge with File | Load. This is the meaning of that menu option.


Navigation Stack – Robot Setups

Packt
09 Oct 2013
11 min read
Introduction to the navigation stacks and their powerful capabilities—clearly one of the greatest pieces of software that comes with ROS. The TF is explained in order to show how to transform from the frame of one physical element to the other; for example, the data received using a sensor or the command for the desired position of an actuator. We will see how to create a laser driver or simulate it. We will learn how the odometry is computed and published, and how Gazebo provides it. A base controller will be presented, including a detailed description of how to create one for your robot. We will see how to execute SLAM with ROS. That is, we will show you how you can build a map from the environment with your robot as it moves through it. Finally, you will be able to localize your robot in the map using the localization algorithms of the navigation stack. The navigation stack in ROS In order to understand the navigation stack, you should think of it as a set of algorithms that use the sensors of the robot and the odometry, and you can control the robot using a standard message. It can move your robot without problems (for example, without crashing or getting stuck in some location, or getting lost) to another position. You would assume that this stack can be easily used with any robot. This is almost true, but it is necessary to tune some configuration files and write some nodes to use the stack. The robot must satisfy some requirements before it uses the navigation stack: The navigation stack can only handle a differential drive and holonomic-wheeled robots. The shape of the robot must be either a square or a rectangle. However, it can also do certain things with biped robots, such as robot localization, as long as the robot does not move sideways. It requires that the robot publishes information about the relationships between all the joints and sensors' position. The robot must send messages with linear and angular velocities. A planar laser must be on the robot to create the map and localization. Alternatively, you can generate something equivalent to several lasers or a sonar, or you can project the values to the ground if they are mounted in another place on the robot. The following diagram shows you how the navigation stacks are organized. You can see three groups of boxes with colors (gray and white) and dotted lines. The plain white boxes indicate those stacks that are provided by ROS, and they have all the nodes to make your robot really autonomous: In the following sections, we will see how to create the parts marked in gray in the diagram. These parts depend on the platform used; this means that it is necessary to write code to adapt the platform to be used in ROS and to be used by the navigation stack. Creating transforms The navigation stack needs to know the position of the sensors, wheels, and joints. To do that, we use the TF (which stands for Transform Frames) software library. It manages a transform tree. You could do this with mathematics, but if you have a lot of frames to calculate, it will be a bit complicated and messy. Thanks to TF, we can add more sensors and parts to the robot, and the TF will handle all the relations for us. If we put the laser 10 cm backwards and 20 cm above with regard to the origin of the coordinates of base_link, we would need to add a new frame to the transformation tree with these offsets. Once inserted and created, we could easily know the position of the laser with regard to the base_link value or the wheels. 
The only thing we need to do is call the TF library and get the transformation.

Creating a broadcaster
Let's test it with some simple code. Create a new file in chapter7_tutorials/src with the name tf_broadcaster.cpp, and put the following code inside it:

#include <ros/ros.h>
#include <tf/transform_broadcaster.h>

int main(int argc, char** argv){
  ros::init(argc, argv, "robot_tf_publisher");
  ros::NodeHandle n;
  ros::Rate r(100);
  tf::TransformBroadcaster broadcaster;
  while(n.ok()){
    broadcaster.sendTransform(
      tf::StampedTransform(
        tf::Transform(tf::Quaternion(0, 0, 0, 1), tf::Vector3(0.1, 0.0, 0.2)),
        ros::Time::now(), "base_link", "base_laser"));
    r.sleep();
  }
}

Remember to add the following line in your CMakeLists.txt file to create the new executable:

rosbuild_add_executable(tf_broadcaster src/tf_broadcaster.cpp)

We also create another node that will use the transform; it will give us the position of a point of a sensor with regard to the center of base_link (our robot).

Creating a listener
Create a new file in chapter7_tutorials/src with the name tf_listener.cpp and input the following code:

#include <ros/ros.h>
#include <geometry_msgs/PointStamped.h>
#include <tf/transform_listener.h>

void transformPoint(const tf::TransformListener& listener){
  //we'll create a point in the base_laser frame that we'd like to transform to the base_link frame
  geometry_msgs::PointStamped laser_point;
  laser_point.header.frame_id = "base_laser";
  //we'll just use the most recent transform available for our simple example
  laser_point.header.stamp = ros::Time();
  //just an arbitrary point in space
  laser_point.point.x = 1.0;
  laser_point.point.y = 2.0;
  laser_point.point.z = 0.0;
  try{
    geometry_msgs::PointStamped base_point;
    listener.transformPoint("base_link", laser_point, base_point);
    ROS_INFO("base_laser: (%.2f, %.2f, %.2f) -----> base_link: (%.2f, %.2f, %.2f) at time %.2f",
      laser_point.point.x, laser_point.point.y, laser_point.point.z,
      base_point.point.x, base_point.point.y, base_point.point.z,
      base_point.header.stamp.toSec());
  }
  catch(tf::TransformException& ex){
    ROS_ERROR("Received an exception trying to transform a point from \"base_laser\" to \"base_link\": %s", ex.what());
  }
}

int main(int argc, char** argv){
  ros::init(argc, argv, "robot_tf_listener");
  ros::NodeHandle n;
  tf::TransformListener listener(ros::Duration(10));
  //we'll transform a point once every second
  ros::Timer timer = n.createTimer(ros::Duration(1.0),
    boost::bind(&transformPoint, boost::ref(listener)));
  ros::spin();
}

Remember to add the line in the CMakeLists.txt file to create the executable. Compile the package and run both the nodes using the following commands:

$ rosmake chapter7_tutorials
$ rosrun chapter7_tutorials tf_broadcaster
$ rosrun chapter7_tutorials tf_listener

Then you will see the following messages:

[ INFO] [1368521854.336910465]: base_laser: (1.00, 2.00, 0.00) -----> base_link: (1.10, 2.00, 0.20) at time 1368521854.33
[ INFO] [1368521855.336347545]: base_laser: (1.00, 2.00, 0.00) -----> base_link: (1.10, 2.00, 0.20) at time 1368521855.33

This means that the point that you published on the node, with the position (1.00, 2.00, 0.00) relative to base_laser, has the position (1.10, 2.00, 0.20) relative to base_link. As you can see, the tf library performs all the mathematics for you to get the coordinates of a point or the position of a joint relative to another point.
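With both nodes running, you can also query tf directly from the command line to double-check what the broadcaster is publishing. This extra check is not part of the original steps, but tf_echo is a standard tf tool; it should report a translation of roughly [0.1, 0.0, 0.2] and an identity rotation between the two frames:

$ rosrun tf tf_echo base_link base_laser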
A transform tree defines offsets in terms of both translation and rotation between different coordinate frames. Let us see an example to help you understand this. We are going to add another laser, say, on the back of the robot (base_link): The system needs to know the position of the new laser to detect collisions, such as the ones between the wheels and walls. With the TF tree, this is very simple to do and maintain and is also scalable. Thanks to tf, we can add more sensors and parts, and the tf library will handle all the relations for us. All the sensors and joints must be correctly configured on tf to permit the navigation stack to move the robot without problems, and to know exactly where each one of its components is. Before starting to write the code to configure each component, keep in mind that you have the geometry of the robot specified in the URDF file. So, for this reason, it is not necessary to configure the robot again. Perhaps you do not know it, but you have been using the robot_state_publisher package to publish the transform tree of your robot. We used it for the first time; therefore, you do have the robot configured to be used with the navigation stack.

Watching the transformation tree
If you want to see the transformation tree of your robot, use the following commands:

$ roslaunch chapter7_tutorials gazebo_map_robot.launch model:="`rospack find chapter7_tutorials`/urdf/robot1_base_04.xacro"
$ rosrun tf view_frames

The resultant frame is depicted as follows: And now, if you run tf_broadcaster and run the rosrun tf view_frames command again, you will see the frame that you have created by code:

$ rosrun chapter7_tutorials tf_broadcaster
$ rosrun tf view_frames

The resultant frame is depicted as follows:

Publishing sensor information
Your robot can have a lot of sensors to see the world; you can program a lot of nodes to take these data and do something, but the navigation stack is prepared only to use the planar laser's sensor. So, your sensor must publish the data with one of these types: sensor_msgs/LaserScan or sensor_msgs/PointCloud. We are going to use the laser located in front of the robot to navigate in Gazebo. Remember that this laser is simulated on Gazebo, and it publishes data on the base_scan/scan frame. In our case, we do not need to configure anything of our laser to use it on the navigation stack. This is because we have tf configured in the .urdf file, and the laser is publishing data with the correct type. If you use a real laser, ROS might have a driver for it. Anyway, if you are using a laser that has no driver on ROS and want to write a node to publish the data with the sensor_msgs/LaserScan type, you have an example template to do it, which is shown in the following section. But first, remember the structure of the message sensor_msgs/LaserScan.
Use the following command:

$ rosmsg show sensor_msgs/LaserScan
std_msgs/Header header
  uint32 seq
  time stamp
  string frame_id
float32 angle_min
float32 angle_max
float32 angle_increment
float32 time_increment
float32 scan_time
float32 range_min
float32 range_max
float32[] ranges
float32[] intensities

Creating the laser node
Now we will create a new file in chapter7_tutorials/src with the name laser.cpp and put the following code in it:

#include <ros/ros.h>
#include <sensor_msgs/LaserScan.h>

int main(int argc, char** argv){
  ros::init(argc, argv, "laser_scan_publisher");
  ros::NodeHandle n;
  ros::Publisher scan_pub = n.advertise<sensor_msgs::LaserScan>("scan", 50);
  unsigned int num_readings = 100;
  double laser_frequency = 40;
  double ranges[num_readings];
  double intensities[num_readings];
  int count = 0;
  ros::Rate r(1.0);
  while(n.ok()){
    //generate some fake data for our laser scan
    for(unsigned int i = 0; i < num_readings; ++i){
      ranges[i] = count;
      intensities[i] = 100 + count;
    }
    ros::Time scan_time = ros::Time::now();
    //populate the LaserScan message
    sensor_msgs::LaserScan scan;
    scan.header.stamp = scan_time;
    scan.header.frame_id = "base_link";
    scan.angle_min = -1.57;
    scan.angle_max = 1.57;
    scan.angle_increment = 3.14 / num_readings;
    scan.time_increment = (1 / laser_frequency) / (num_readings);
    scan.range_min = 0.0;
    scan.range_max = 100.0;
    scan.ranges.resize(num_readings);
    scan.intensities.resize(num_readings);
    for(unsigned int i = 0; i < num_readings; ++i){
      scan.ranges[i] = ranges[i];
      scan.intensities[i] = intensities[i];
    }
    scan_pub.publish(scan);
    ++count;
    r.sleep();
  }
}

As you can see, we are going to create a new topic with the name scan and the message type sensor_msgs/LaserScan. You should be familiar with this message type from the rosmsg output shown previously. The name of the topic must be unique. When you configure the navigation stack, you will select this topic to be used for the navigation. The following line of code shows how to create the topic with the correct name:

ros::Publisher scan_pub = n.advertise<sensor_msgs::LaserScan>("scan", 50);

It is important to publish the data with the header, stamp, frame_id, and other elements filled in; if not, the navigation stack could fail with such data:

scan.header.stamp = scan_time;
scan.header.frame_id = "base_link";

Another important field in the header is frame_id. It must be one of the frames created in the .urdf file and must have a frame published on the tf frame transforms. The navigation stack will use this information to know the real position of the sensor and make transforms such as the one between the sensor data and obstacles. With this template, you can use any laser even if it has no driver for ROS. You only have to replace the fake data with the right data from your laser. This template can also be used to create something that looks like a laser but is not. For example, you could simulate a laser using stereoscopy or using a sensor such as a sonar.
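As a quick check that the node is publishing what the navigation stack expects (a step added here, not from the original text, and it assumes the executable has been registered in CMakeLists.txt under the name laser), you can run it and inspect the scan topic with the standard ROS tools:

$ rosrun chapter7_tutorials laser
$ rostopic hz /scan
$ rostopic echo -n 1 /scan

The second command should report a rate of about 1 Hz, matching the ros::Rate used in the code, and the third prints a single LaserScan message so you can verify the header and ranges fields.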


So, what is Zepto.js?

Packt
08 Oct 2013
7 min read
One of the most influential JavaScript libraries in the last decade of web development is jQuery, a comprehensive set of functions that make Document Object Model (DOM) selection and manipulation consistent across a range of browsers, freeing web developers from having to handle all these themselves, as well as providing a friendlier interface to the DOM itself. Zepto.js is self-described as an aerogel framework—a JavaScript library that attempts to offer most of the features of the jQuery API, yet only takes up a fraction of the size (9k versus 93k in the default, compressed current versions, Zepto.js v1.0.1 and jQuery v1.10 respectively). In addition, Zepto.js has a modular assembly, so you can make it even smaller if you don't need the functionality of extra modules. Even the new, streamlined jQuery 2.0 weighs in at a heavyweight 84k. But why does this matter? At a first glance, the difference between the two libraries seems slight, especially in today's world where large files are normally described in terms of gigabytes and terabytes. Well, there are two good reasons why you'd prefer a smaller file size. Firstly, even the newest mobile devices on the market today have slower connections than you'll find on most desktop machines. Also, due to the constrained memory requirements on smartphones, mobile phone browsers tend to have limited caching compared to their bigger desktop cousins, so a smaller helper library means more chance of keeping your actual JavaScript code in the cache and thus preventing your app from slowing down on the device. Secondly, a smaller library helps in response time—although 90k versus 8k doesn't sound like a huge difference, it means fewer network packets; as your application code that relies on the library can't execute until the library's code is loaded, using the smaller library can shave off precious milliseconds in that ever-so-important first-page-load time, and will make your web page or application seem more responsive to users. Having said all that, there are a few downsides to using Zepto.js that you should be aware of before deciding to plump for it instead of jQuery. Most importantly, Zepto.js currently makes no attempt to support Internet Explorer. Its origins as a library to replace jQuery on mobile phones meant that it mainly targeted WebKit browsers, primarily iOS. As the library has got more mature, it has expanded to cover Firefox, but general IE support is unlikely to happen (at the time of writing, there is a patch waiting to go into the main trunk that would enable support for IE10 and up, but anything lower than Version 10 is probably never going to be supported). In this guide we'll show you how to include jQuery as a fallback in case a user is running an older, unsupported browser, so that you can use Zepto.js on the browsers it supports while maintaining some compatibility with Internet Explorer; a minimal sketch of the idea follows below. The other pitfall that you need to be aware of is that Zepto.js only claims to be a jQuery-like library, not a 100 percent compatible version. In the majority of web application development, this won't be an issue, but when it comes to integrating plugins and operating at the margins of the libraries, there will be some differences that you will need to know to prevent possible errors and confusions, and we'll be showing you some of them later in this guide.
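One widely used pattern for that fallback (shown here as a sketch, with the script file names as assumptions) keys the choice on __proto__ support, which older Internet Explorer versions lack, and loads jQuery instead of Zepto.js when it is missing:

<script>
  // Load zepto.min.js on browsers that support __proto__, jquery.min.js otherwise
  document.write('<script src=' +
    ('__proto__' in {} ? 'zepto.min' : 'jquery.min') +
    '.js><\/script>');
</script>

Because both libraries expose the same $ entry point for the features covered in this guide, the rest of your code can stay the same whichever one ends up being loaded.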
In terms of performance, Zepto.js is a little slower than jQuery, though this varies by browser (take a look at http://jsperf.com/zepto-vs-jquery-2013/ to see the latest benchmark results). In general, it can be up to twice as slow for repeated operations such as finding elements by class name or ID. However, on mobile devices, this is still around 50,000 operations per second. If you really require high performance from your mobile site, then you need to examine whether you can use raw JavaScript instead—the JavaScript function getElementsByClassName() is almost one hundred times faster than Zepto.js and jQuery in the preceding benchmark.

Writing plugins
Eventually, you'll want to make your own plugins. As you can imagine, they're fairly similar in construction to jQuery plugins (so they can be compatible). But what can you do with them? Well, consider them as a macro system for Zepto.js; you can do anything that you'd do in normal Zepto.js operations, but they get added to the library's namespace so you can reuse them in other applications. Here is a plugin that will take a Zepto.js collection and turn all the text in it to the Helvetica font-family, at a user-supplied font-size (in pixels for this example):

(function($){
  $.extend($.fn, {
    helveticaize: function( options ){
      options = options || {};
      $.each(this, function(){
        var css = { "font-family": "Helvetica" };
        // only set the size if the caller supplied one
        if (options['size']) {
          css["font-size"] = options['size'] + 'px';
        }
        $(this).css(css);
      });
      return this;
    }
  })
})(Zepto || jQuery)

Then, to make all links on a page Helvetica, you can call $("a").helveticaize(). The most important part of this code is the use of the $.extend method. This adds the helveticaize property/function to the $.fn object, which contains all of the functions that Zepto.js provides. Note that you could potentially use this to redefine methods such as find(), animate(), or any other function you've seen so far. As you can imagine, this is not recommended—if you need different functionality, call $.extend and create a new function with a name like custom_find instead. In addition, you could pass multiple new functions to $.fn with a call to $.extend, but the convention for jQuery and Zepto.js is that you only provide as few functions as possible (ideally one) and offer different functionality through passed parameters (that is, through options). The reason for this is that your plugin may have to live alongside many other plugins, all of which share the same namespace in $.fn. By only setting one property, you hopefully reduce the chance of overriding a method that another plugin has defined. In the actual definition of the method that's being added, it iterates through the objects in the collection, setting the font and the size (if present) for all the objects in the collection. At the end, the method returns this. Why? Well, if you remember, part of the power of Zepto.js is that methods are chainable, allowing you to build up complex selectors and operations in one line. And thanks to helveticaize() returning this (which will be a collection), this newly-defined method is just as chainable as all the default methods provided. This isn't a requirement of plugin methods but, where possible, you should make your plugin methods return a collection of some sort to prevent breaking a chain (and if you can't, for some reason, make sure to spell that out in your plugin's documentation). Finally, at the end, the (Zepto || jQuery) part will immediately invoke this definition on either the Zepto object or the jQuery object.
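For instance (a hypothetical call, not taken from this guide), the new method drops straight into a normal chain because it returns the collection it was given:

// Set all navigation links to 14px Helvetica, then keep chaining as usual
$('nav a').helveticaize({ size: 14 }).addClass('helveticaized');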
In this way, you can create plugins that work with either framework depending on whether they're present, with the caveat, of course, that your method must work in both frameworks. Summary In this article, we learned what Zepto.js actually is, what you can do with it, and why it's so great. We also learned how to extend Zepto.js with plugins. Resources for Article: Further resources on this subject: ASP.Net Site Performance: Improving JavaScript Loading [Article] Trapping Errors by Using Built-In Objects in JavaScript Testing [Article] Making a Better Form using JavaScript [Article]


Minimizing HTTP requests

Packt
08 Oct 2013
6 min read
How to do it...
Reducing DNS lookups: Whenever possible, try to use URL directives and paths for different functionalities instead of different hostnames. For example, if a website is abc.com, instead of having a separate hostname for its forum, for example, forum.abc.com, we can have the same URL path, abc.com/forum. This will reduce one extra DNS lookup and thus minimize HTTP requests. Imagine if your website contains many such URLs, either its own subdomains or others; it would take a lot of time to parse the page, because the browser will send a lot of DNS queries. For example, check www.aliencoders.com, which has several components requiring DNS lookups, making it a very slow website. Please check the following image for a better understanding: If you really have to serve some JavaScript files in the head section, make sure that they come from the same host where you are trying to display the page; else, put them at the bottom to avoid latency, because almost all browsers block other downloads while JavaScript files are being downloaded fully and executed. Modern browsers support DNS prefetching. If it's absolutely necessary for developers to load resources from other domains, they should make use of it. The following are the URLs:
https://developer.mozilla.org/en/docs/Controlling_DNS_prefetching
http://www.chromium.org/developers/design-documents/dns-prefetching
Using combined files: If we reduce the number of JavaScript files to be parsed and executed, and if we do the same for CSS files, it will reduce HTTP requests and load the website much faster. We can do so by combining all JavaScript files into one file and all CSS files into one CSS file.
Setting up CSS sprites: There are two ways to combine different images into one to reduce the number of HTTP requests. One is using the image map technique and the other is using CSS sprites. With a CSS sprite, we write CSS code for the combined image so that hovering, clicking, or performing any other action related to that image invokes the correct behavior, just as it would with separate images for separate actions. It's just a game of coordinates and a little creativity with design. It will make the website at least 50 percent faster as compared to the one with a lot of images.
Using image maps: Use the image map idea if you are going to have a constant layout for those images, such as menu items and a navigational part. The only drawback with this technique is that it requires a lot of hard work and you should be a good HTML programmer at the least. Writing mapping code for a larger image with proper coordinates is not an easy task, but there are saviors out there. If you want to know the basics of the area and map tags, you can check out the Basics on area and map tag in HTML post I wrote at http://www.aliencoders.com/content/basics-area-and-map-tag-html. You can create image map code for your image online at http://www.maschek.hu/imagemap/imgmap. If you want to make it more creative with different sets of actions and colors, try using CSS code for image maps.
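As a minimal illustration of the image map approach (the image name, coordinates, and URLs below are made up for the example), a single merged menu image can serve all five menu links through one HTTP request:

<img src="menu-strip.png" alt="Site menu" width="500" height="50" usemap="#mainmenu">
<map name="mainmenu">
  <area shape="rect" coords="0,0,99,49" href="/home" alt="Home">
  <area shape="rect" coords="100,0,199,49" href="/blogs" alt="Blogs">
  <area shape="rect" coords="200,0,299,49" href="/forums" alt="Forums">
  <area shape="rect" coords="300,0,399,49" href="/contact-us" alt="Contact Us">
  <area shape="rect" coords="400,0,499,49" href="/about-us" alt="About Us">
</map>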
The following screenshot shows you all the options that you can play with while reducing DNS lookups:

How it works…
In the case of reducing DNS lookups, when you open any web page for the first time, it performs DNS lookups through all the unique hostnames that are involved with that web page. When you hit a URL in your browser, it first needs to resolve the address (DNS name) to an IP address. As we know, DNS resolutions are cached by the browser, the operating system, or both. So, if a valid record for the URL is available in the user's browser or OS cache, there is no time delay observed. All ISPs have their own DNS servers that cache name-IP mappings from authoritative name servers, and if the caching DNS server's record has already expired, it should be refreshed again. We will not go much deeper into the DNS mechanism. But it's more important to reduce DNS lookups than any other kind of request, because they add a more prolonged latency period than other requests do. Similarly, in the case of using image maps, imagine you have a website where you have inserted separate images for separate tabular menus, instead of just plain text, to make the website catchier! For example, Home, Blogs, Forums, Contact Us, and About Us. Now whenever you load the page, it sends five requests, which will surely consume some amount of time and make the website a bit slower too. It is a good idea to merge all such images into one big image and use the image map technique to reduce the number of HTTP requests for those images. We can do it by using the area and map tags to make it work like the previous one. It will not only save a few KBs, but also reduce the server requests from five to just one.

There's more...
If you already have map tags in your page and wish to edit them for proper coordinates without creating trouble for yourself, there is a Firefox add-on available called the Image Map Editor (https://addons.mozilla.org/en-us/firefox/addon/ime/). If you want to know the IP address of your name servers, use the $ grep nameserver /etc/resolv.conf command in Linux and C:\>ipconfig /all in Windows. You can even get a website's details from your name server, that is, host website-name <nameserver>. There is a Firefox add-on that will speed up DNS resolution by doing pre-DNS work, and you will observe faster loading of the website. Download Speed DNS from https://addons.mozilla.org/en-US/firefox/addon/speed-dns/?src=search.

Summary
We saw that the lower the number of requests, the faster the website will be. This article showed us how to minimize such HTTP requests without hampering the website.

Resources for Article:
Further resources on this subject:
Magento Performance Optimization [Article]
Creating and optimizing your first Retina image [Article]
Search Engine Optimization using Sitemaps in Drupal 6 [Article]
Creating a New iOS Social Project
Packt
08 Oct 2013
8 min read
In this article, by Giuseppe Macri, author of Integrating Facebook iOS SDK with Your Application, we start our coding journey and build our social application from the ground up. In this article we will learn about:

Creating a Facebook App ID: It is a key used with our APIs to communicate with the Facebook Platform.
Downloading the Facebook SDK: The iOS SDK can be downloaded through two different channels. We will look into both of them.
Creating a new XCode project: A brief introduction on how to create a new XCode project and a description of the IDE environment.
Importing the Facebook iOS SDK into our XCode project: A step-by-step walkthrough of importing the Facebook SDK into our XCode project.
Getting familiar with Storyboard to build a better interface: A brief introduction to the Apple tool used to build our application interface.

Creating a Facebook App ID

In order to communicate with the Facebook Platform using their SDK, we need an identifier for our application. This identifier, also known as the Facebook App ID, will give us access to the Platform; at the same time, we will be able to collect a lot of information about the application's usage, impressions, and ads. To obtain a Facebook App ID, we need a Facebook account. If you don't have one, you can create a Facebook account via the following page at https://www.facebook.com:

The previous screenshot shows the new Facebook account sign up form. Fill out all the fields and you will be able to access the Facebook Developer Portal. Once we are logged into Facebook, we need to visit the Developer Portal. You can find it at https://developers.facebook.com/. I already mentioned the important role of the Developer Portal in developing our social application.

The previous screenshot shows the Facebook Developer Portal. The main section, the top part, is dedicated to the current SDKs. On the top blue bar, click on the Apps link, and it will redirect us to the Facebook App Dashboard.

The previous screenshot shows the Facebook App Dashboard. To the left, we have a list of apps; in the center of the page, we can see the details of the currently selected app from our list. The page shows the application's settings and analytics (Insights). In order to create a new Facebook App ID, click on Create New App on the top-right part of the App Dashboard.

The previous screenshot shows the first step in creating a Facebook App ID. When providing the App Name, be sure the name does not already exist or violate any copyright laws; otherwise, Facebook will remove your app. App Namespace is something we need only if we want to define custom objects and/or actions in the Open Graph structure; the App Namespace topic is not part of this book. Web hosting is really useful when creating a social web application, and Facebook, in partnership with other providers, can create web hosting for us if needed. This part is not going to be discussed in this book; therefore, do not check this option for your application. Once all the information is provided, we can move on to the next step.

Please fill out the form, and move forward to the next one. At the top of the page, we can see both the App ID and App Secret. These are the most important pieces of information about our new social application. The App ID is a piece of information that we can share, unlike the App Secret. At the center of our new Facebook Application page, we have basic information fields.
Do not worry about Namespace, App Domains, and Hosting URL; these fields are for web applications. Sandbox Mode only allows developers to use the current application. Developers are specified through the Developer Roles link on the left side bar. Moving down, select the type of app. For our goal, select Native iOS App. You can select multiple types and create multiplatform social applications. Once you have checked Native iOS App, you will be prompted with the following form: The only field we need to provide for now is the Bundle ID. The Bundle ID is something related to XCode settings. Be sure that the Facebook Bundle ID will match our XCode Social App Bundle Identifier. The format for the bundle identifier is always something like com.MyCompany.MyApp. iPhone/iPad App Store IDs are the App Store identifiers of your application if you have published your app in the App Store. If you didn't provide any of them after you saved your changes, you will receive a warning message; however, don't worry, our new App ID is now ready to be used. Save your changes and get ready to start our developing journey. Downloading the Facebook iOS SDK The iOS Facebook SDK can be downloaded through two different channels: Facebook Developer Portal: For downloading the installation package GitHub: For downloading the SDK source code Using Facebook Developer Portal, we can download the iOS SDK as the installation package. Visit https://developers.facebook.com/ios/ as shown in the following screenshot and click on Download the SDK to download the installation package. The package, once installed, will create a new FacebookSDK folder within our Documents folder. The previous screenshot shows the content of the iOS SDK installation package. Here, we can see four elements: FacebookSDK.framework: This is the framework that we will import in our XCode social project LICENSE: It contains information about licensing and usage of the framework README: It contains all the necessary information about the framework installation Samples: It contains a useful set of sample projects that uses the iOS framework's features With the installation package, we only have the compiled files to use, with no original source code. It is possible to download the source code using the GitHub channel. To clone git repo, you will need a Git client, either Terminal or GUI. The iOS SDK framework git repo is located at https://github.com/facebook/facebook-ios-sdk.git. I prefer the Terminal client that I am using in the following command: git clone https://github.com/facebook/facebook-ios-sdk.git After we have cloned the repo, the target folder will look as the following screenshot: The previous picture shows the content of the iOS SDK GitHub repo. Two new elements are present in this repo: src and scripts. src contains the framework source code that needs to be compiled. The scripts folder has all the necessary scripts needed to compile the source code. Using the GitHub version allows us to keep the framework in our social application always up-to-date, but for the scope of this book, we will be using the installation package. Creating a new XCode project We created a Facebook App ID and downloaded the iOS Facebook SDK. It's time for us to start our social application using XCode. The application will prompt the welcome dialog if Show this window when XCode launches is enabled. Choose the Create a new XCode project option. If the welcome dialog is disabled, navigate to File | New | Project…. 
Choosing the type of project to work with is the next step as shown in the following screenshot: The bar to the left defines whether the project is targeting a desktop or a mobile device. Navigate to iOS | Application and choose the Single View Application project type. The previous screenshot shows our new project's details. Provide the following information for your new project: Product Name: This is the name of our application Organization Name: I will strongly recommend filling out this part even if you don't belong to an organization because this field will be part of our Bundle Identifier Company Identifier: It is still optional, but we should definitely fill it out to respect the best-practice format for Bundle Identifier Class Prefix: This prefix will be prepended to every class we are going to create in our project Devices: We can select the target device of our application; in this case, it is an iPhone but we could also have chosen iPad or Universal Use Storyboards: We are going to use storyboards to create the user interface for our application Use Automatic Reference Counting: This feature enables reference counting in the Objective C Garbage Collector Include Unit Tests: If it is selected, XCode will also create a separate project target to unit-test our app; this is not part of this book Save the new project. I will strongly recommend checking the Create a local git repository for this project option in order to keep track of changes. Once the project is under version control, we can also decide to use GitHub as the remote host to store our source code.
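Later, once FacebookSDK.framework has been added to the project, the new App ID has to be made visible to the SDK. As a rough sketch of what that wiring usually looks like under the 3.x SDK conventions, the App ID is placed in the project's Info.plist; the key names below follow the SDK documentation of that era, and the numeric App ID and display name shown are placeholders, not real values:

<key>FacebookAppID</key>
<string>123456789012345</string>
<key>FacebookDisplayName</key>
<string>My Social App</string>
<!-- URL scheme "fb" + App ID lets Facebook switch back to the app after login -->
<key>CFBundleURLTypes</key>
<array>
  <dict>
    <key>CFBundleURLSchemes</key>
    <array>
      <string>fb123456789012345</string>
    </array>
  </dict>
</array>

In code, the framework is then pulled in with #import <FacebookSDK/FacebookSDK.h> wherever its classes are needed.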
Introducing a feature of IntroJs
Packt
07 Oct 2013
5 min read
(For more resources related to this topic, see here.)

API

IntroJs includes functions that let the user control and change the execution of the introduction. For example, it is possible to react to an unexpected event that happens during execution, or to change the introduction routine according to user interactions. All available APIs in IntroJs are explained below; note that these functions will be extended and developed further in the future. IntroJs includes these API functions: start, goToStep, exit, setOption, setOptions, oncomplete, onexit, onchange, and onbeforechange.

introJs.start()

As mentioned before, introJs.start() is the main function of IntroJs. It starts the introduction for the specified elements and returns an instance of the introJS class. The introduction starts from the first step in the specified elements. This function has no arguments.

introJs.goToStep(stepNo)

Jump to a specific step of the introduction by using this function. Introductions always start from the first step; however, it is possible to change that by using this function. The goToStep function takes an integer argument, the number of the step to start at.

introJs().goToStep(2).start(); //starts introduction from step 2

As the example indicates, the starting step is first changed from 1 to 2 by using the goToStep function, and then the start() function is called. Hence, the introduction starts from the second step. This function also returns the introJS class's instance.

introJs.exit()

The introJs.exit() function lets the user exit and close the running introduction. By default, the introduction ends when the user clicks on the Done button or goes to the last step of the introduction.

introJs().exit()

As shown, the exit() function doesn't have any arguments and returns an instance of introJS.

introJs.setOption(option, value)

As mentioned before, IntroJs has some default options that can be changed by using the setOption method. This function has two arguments: the first specifies the option name and the second sets its value.

introJs().setOption("nextLabel", "Go Next");

In the preceding example, nextLabel is set to Go Next. Other options can be changed with the setOption method in the same way.

introJs.setOptions(options)

It is possible to change a single option using the setOption method; to change more than one option at once, use setOptions instead. The setOptions method accepts the options and values in the JSON format.

introJs().setOptions({ skipLabel: "Exit", tooltipPosition: "right" });

In the preceding example, two options are set at the same time by using JSON and the setOptions method.

introJs.oncomplete(providedCallback)

The oncomplete event is raised when the introduction ends. If a function is passed to the oncomplete method, it will be called by the library after the introduction ends.

introJs().oncomplete(function() { alert("end of introduction"); });

In this example, after the introduction ends, the anonymous function passed to the oncomplete method is called, alerting with the end of introduction message.
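Before moving on to the remaining callbacks, here is a small sketch that strings together the calls covered so far. It is only illustrative: it assumes the page already has elements annotated for IntroJs, and the option values are the same ones used in the examples above.

var intro = introJs();

intro.setOptions({
    nextLabel: "Go Next",
    skipLabel: "Exit",
    tooltipPosition: "right"
});

intro.oncomplete(function() {
    // runs once the visitor reaches the end of the introduction
    console.log("introduction finished");
});

// start from the second step instead of the first one
intro.goToStep(2).start();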
introJs.onexit(providedCallback)

As mentioned before, the user can exit the running introduction using the Esc key or by clicking on the dark area of the introduction. The onexit event notifies us when the user exits the introduction. This function accepts one argument and returns the instance of the running introJS.

introJs().onexit(function() { alert("exit of introduction"); });

In the preceding example, we passed an anonymous function containing an alert() statement to the onexit method. If the user exits the introduction, the anonymous function is called and an alert with the message exit of introduction appears.

introJs.onchange(providedCallback)

The onchange event is raised on each step of the introduction. This method is useful for knowing when each step of the introduction is completed.

introJs().onchange(function(targetElement) { alert("new step"); });

You can define an argument for the anonymous function (targetElement in the preceding example); when the function is called, that argument gives you access to the current target element highlighted in the introduction. In the preceding example, as each introduction step ends, an alert with the new step message appears.

introJs.onbeforechange(providedCallback)

Sometimes you may need to do something before each step of the introduction. Suppose you need to make an Ajax call before the user goes to a step of the introduction; you can do this with the onbeforechange event.

introJs().onbeforechange(function(targetElement) { alert("before new step"); });

We can also define an argument for the anonymous function (targetElement in the preceding example); when this function is called, the argument carries information about the element about to be highlighted in the introduction. Using that argument, you can tell which step of the introduction will be highlighted, what type of element the target is, and more. In the preceding example, an alert with the message before new step appears before each step of the introduction is highlighted.

Summary

In this article we learned about the API functions, their syntax, and how they are used.

Resources for Article:

Further resources on this subject: ASP.Net Site Performance: Improving JavaScript Loading [Article], Trapping Errors by Using Built-In Objects in JavaScript Testing [Article], Making a Better Form using JavaScript [Article]
Planning Your Store
Packt
07 Oct 2013
11 min read
Defining the catalogue The type of products you are selling will determine the structure of your store. Different types of products will have different requirements in terms of the information presented to the customer, and the data that you will need to collect in order to fulfill an order. Base product definition Every product needs to have the following fields which are added by default: Title Stock Keeping Unit (SKU) Price (in the default store currency) Status (a flag indicating if the product is live on the store) This is the minimum you need to define a product in Drupal Commerce—everything else is customized for your store. You can define multiple Product Types (Product Entity Bundles), which can contain different fields depending on your requirements. Physical products If you are dealing with physical products, such as books, CDs, or widgets, you may want to consider these additional fields: Product images Description Size Weight Artist/Designer/Author Color You may want to consider setting up multiple Product Types for your store. For example, if you are selling CDs, you may want to have a field for Artist which would not be relevant for a T-shirt (where designer may be a more appropriate field). Whenever you imagine having distinct pieces of data available, adding them as individual fields is well worth doing at the planning stage so that you can use them for detailed searching and filtering later. Digital downloads If you are selling a digital product such as music or e-books, you will need additional fields to contain the actual downloadable file. You may also want to consider including: Cover image Description Author/Artist Publication date Permitted number of downloads Tickets Selling tickets is a slightly more complex scenario since there is usually a related event associated with the product. You may want to consider including: Related event (which would include date, venue, and so on) Ticket Type / Level / Seat Type Content access and subscriptions Selling content access and subscriptions through Drupal Commerce usually requires associating the product with a Drupal role. The customer is buying membership of the role which in turn allows them to see content that would usually be restricted. You may want to consider including: Associated role(s) Duration of membership Initial cost (for example, first month free) Renewal cost (for example, £10/month ) Customizing products The next consideration is whether products can be customized at the point of purchase. Some common examples of this are: Specifying size Specifying color Adding a personal message (for example, embossing) Selecting a specific seat (in the event example) Selecting a subscription duration Specifying language version of an e-book Gift wrapping or gift messaging It is important to understand what additional user input you will need from the customer to fulfill the order over and above the SKU and quantity. When looking at these options, also consider whether the price changes depending on the options that the customer selects. For example: Larger sizes cost more than smaller sizes Premium for "red" color choice Extra cost for adding an embossed message Different pricing for different seating levels Monthly subscription is cheaper if you commit to a longer duration Classifying products Now that you have defined your Product Types, the next step is to consider the classification of products using Drupal's in-built Taxonomy system. 
A basic store will usually have a catalog taxonomy vocabulary where you can allocate a product to one or more catalog sections, such as books, CDs, clothing, and so on. The taxonomy can also be hierarchical, however, individual vocabularies for the classification of your products is often more workable, especially when providing the customer with a faceted search or filtering facility later. The following are examples of common taxonomy vocabulary: Author/Artist/Designer Color Size Genre Manufacturer/Brand It is considered best practice to define a taxonomy vocabulary rather than have a simple free text field. This provides consistency during data entry. For example, a free text field for size may end up being populated with S, Small, Sm, all meaning the same thing. A dropdown taxonomy selector would ensure that the value entered was the same for every product. Do not be tempted to use List type fields to provide dropdown menus of choices. List fields are necessarily the reserve of the developer and using them excludes the less technical site owner or administrator from managing them. Pricing Drupal Commerce has a powerful pricing engine, which calculates the actual selling price for the customer, depending on one or more predefined rules. This gives enormous flexibility in planning your pricing strategy. Currency Drupal Commerce allows you to specify a default currency for the store, but also allows you to enter multiple price fields or calculate a different price based on other criteria, such as the preferred currency of the customer. If you are going to offer multiple currencies, you need to consider how the currency exchange will work; do you want to enter a set price for each product and currency you offer, or a base price in the default currency and calculate the other currencies based on a conversion rate? If you use a conversion rate, how often is it updated? Variable pricing Prices do not have to be fixed. Consider scenarios where the prices for your store will vary over time, or situations based on other factors such as volume-based discounts. Will some preferred customers get a special price deal on one or more products? Customers You cannot complete an order without a customer and it is important to consider all of their needs during the planning process. By default, a customer profile in Drupal Commerce contains an address type field which works to the Name and Address Standard (xNAL) format, collecting international addresses in a standard way. However, you may want to extend this profile type to collect more information about the customer. For example: Telephone number Delivery instructions E-mail opt-in permission Do any of the following apply? Is the store open to public or open by invitation only? Do customers have to register before they can purchase? Do customers have to enter an e-mail address in order to purchase? Is there a geographical limit to where products can be sold/shipped? Can a customer access their account online? Can a customer cancel an order once it is placed? What are the time limits on this? Can a customer track the progress of their order? Taxes Many stores are subject to Sales tax or Value Added Tax(VAT) on products sold. However, these taxes often vary depending on the type of product sold and the final destination of the physical goods. During your planning you should consider the following: What are the sales tax / VAT rules for the store? Are there different tax rules depending on the shipping destination? 
Are there different tax rules depending on the type of product? If you are in a situation where different types of products in your store will incur different rates of taxes, then it is a very good idea to set up different Product Types so that it's easy to distinguish between them. For example, in the UK, physical books are zero rated for VAT, whereas, the same book in digital format will have 20% VAT added. Payments Drupal Commerce can connect to many different payment gateways in order create a transaction for an order. While many of the popular payment gateways, such as PayPal and Sage Pay, have fully functional payment gateway modules on Drupal.org, it's worth checking if the one you want is available because creating a new one is no small undertaking. The following should also be considered: Is there a minimum spend limit? Will there be multiple payment options? Are there surcharges for certain payment types? Will there be account customers that do not have to enter a payment card? How will a customer be refunded if they cancel or return their order? Shipping Not every product will require shipping support, but for physical products, shipping can be a complex area. Even a simple product store can have complex shipping costs based on factors such as weight, destination, total spend, and special offers. Ensure the following points are considered during your planning: Is shipping required? How is the cost calculated? By value/weight/destination? Are there geographical restrictions? Is express delivery an option? Can the customer track their order? Stock With physical products and some virtual products such as event tickets, stock control may be a requirement. Stock control is a complex area and beyond the scope of this book, but the following questions will help uncover the requirements: Are stock levels managed in another system, for example, MRP? If the business has other sales channels, is there dedicated stock for the online store? When should stock levels be updated (at the point of adding to the cart or at the point of completing the order)? How long should stock be reserved? What happens when a product is out of stock? Can a customer order an out-of-stock product (back order)? What happens if a product goes out of stock during the customer checkout process? If stock is controlled by an external system, how often should stock levels be updated in the e-store? Legal compliance It is important to understand the legal requirements of the country where you operate your store. It is beyond the scope of this book to detail the legal requirements of every country, but some examples of e-commerce regulation that you should research and understand are included here: PCI-DSS Compliance—Worldwide The Privacy and Electronic Communications (EC Directive) (also known as the EU cookie law)—European Union Distance Selling Regulations—UK Customer communication Once the customer has placed their order, how much communication will there be? A standard expectation of the customer will be to receive a notification that their order has been placed, but how much information should that e-mail contain? Should the e-mail be plain text or graphical? Does the customer receive an additional e-mail when the order is shipped? If the product has a long lead time, should the customer receive interim updates? What communication should take place if a customer cancels their order? Back office In order for the store to run efficiently, it is important to consider the requirements of the back office system. 
This will often be managed by a different group of people to those specifying the e-store. Identify the different types of users involved in the order fulfillment process. These roles may include: Sales order processing Warehouse and order handling Customer service for order enquiries Product managers These roles may all have different information available to them when trying to locate the order or product they need, so it's important for the interface to cater to different scenarios: Does the website need to integrate with a third-party system for management of orders? How are order status codes updated on the website so that customers can track progress? In a batch, manually or automatically? User experience How will the customer find the product that they are looking for? Well-structured navigation? Search by SKU? Free text search? Faceted search? The source of product data When you are creating a store with more than a trivial number of products, you will probably want to work on a method of mass importing the product data. Find out where the product data will be coming from, and in what format it will be delivered. You may want to define your Product Types taking into account the format of the data coming in—especially if the incoming data format is fixed. You may also want to define different methods of importing taxonomy terms from the supplied data. Summary Once you have gone through all of these checklists with the business stakeholders, you should have enough information to start your Drupal Commerce build. Drupal Commerce is very flexible, but it is crucial that you understand the outcome that you are trying to achieve before you start installing modules and setting up Product Types. Resources for Article: Further resources on this subject: Drupal Web Services: Twitter and Drupal [Article] Introduction to Drupal Web ServicesIntroduction to Drupal Web Services [Article] Drupal Site Configuration: Performance, Maintenance, Logging and Errors and Reports [Article]
Gamified Websites: The Framework
Packt
07 Oct 2013
15 min read
(For more resources related to this topic, see here.) Business objectives Before we can go too far down the road on any journey, we first have to be clear about where we are trying to go. This is where business objectives come into the picture. Although games are about fun, and gamification is about generating positive emotion without losing sight of the business objectives, gamification is a serious business. Organizations spend millions of dollars every year on information technology. Consistent and steady investment in information technology is expected to bring a return on that investment in the way of improved business process flow. It's meant to help the organization run smoother and easier. Gamification is all about "improving" business processes. Organizations try to improve the process itself, wherever possible, whereas technology only facilitates the process. Therefore, gamification efforts will be scrutinized under similar microscope and success metrics that information technology efforts will. The fact that customers, employees, or stakeholders are having more fun with the organization's offering is not enough. It will have to meet a business objective. The place to start with defining business objectives is with the business process that the organization is looking to improve. In our case, the process we are planning to improve is e-learning. We are looking at the process of K-12 aged persons learning "thinking". How does that process look right now? Image source: http://www.moddb.com/groups/critical-thinkers-of-moddb/images/critical-thinking-skills-explained In a full-blown e-learning situation, we would be looking to gamify as much of this process as possible. For our purpose, we will focus on the areas of negotiation and cooperation. According to the Negotiate and Cooperate phase of the Critical Thinking Process, learners consider different perspectives and engage in discussions with others. This gives us a clear picture of what some of our objectives might be. They might be, among others: Increasing engagement in discussion with others Increasing the level of consideration of different perspectives Note that these objectives are measurable. We will be able to test whether the increases/improvements we are looking for are actually happening over time. With a set of measurable objectives, we can turn our attention to the next step, that is target behaviors, in our Gamification Design Framework. Target behaviors Now that we are clear about what we are trying to accomplish with our system, we will focus on the actions we are hoping to incentivize: our target behaviors. One of the big questions around gamification efforts is can it really cause behavioral change. Will employees, customers, and stakeholders simply go back to doing things the way they are used to once the game is over? Will they figure out a way to "cheat" the system? The only way to meet long-term organizational objectives in a systematic way is the application to not only cause change for the moment, but lasting change over time. Many gamification applications fail in long-term behavior change, and here's why. Psychologists have studied the behavior change life cycle at length. . The study revealed that people go through five distinct phases when changing a behavior. Each phase presents a different set of challenges. 
The five phases of the behavioral life cycle are as follows: Awareness: Before a person will take any action to change a behavior, he/she must first be aware of their current behavior and how it might need to change. Buy in: After a person becomes aware that they need to change, they must agree that they actually need to change and make the necessary commitment to do so. Learn: But what actually does a person need to do to change? It cannot be assumed that he/she knows how to change. They must learn the new behavior. Adopt: Now that he/she has learned the necessary skills, they have to actually implement them. They need to take the new action. Maintain: Finally, after adopting a new behavior, it can only become a lasting change with constant practice. Image source: http://www.accenture.com/us-en/blogs/technology-labs-blog/archive/2012/03/28/gamification-and-the-behavior-change-lifecycle.aspx) How can we use this understanding to establish our target behaviors? Keep in mind that our objectives are to increase interaction through discussion and increase consideration for other perspectives. According to our understanding of changing behavior around our objectives, we need our users to: Become aware of their discussion frequency with other users Become aware that other perspectives exist Commit to more discussions with other users Commit to considering other users' perspectives Learn how to have more discussions with other users Learn about other users' perspectives Have more discussions with other users Actually consider other users' perspectives Continue to have more discussions with other users on a consistent basis Continue to consider other users' perspectives over time This outlines the list of activities that needs to be performed for our systems to meet our objectives. Of course, some of our target behaviors will be clear. In other cases, it will require some creativity on our part to get users to take these actions. So what are some possible actions that we can have our users take to move them along the behavior change life cycle? Check their discussion thread count Review the Differing Point of View section Set a target discussion amount for a particular time period Set a target number of Differing Points of View to review Watch a video (or some instructional material) on how to use the discussion area Watch a video (or some instructional material) on the value of viewing other perspectives Participate in the discussion groups Read through other users' discussions posts Participate in the discussion groups over time Read through other users' perspectives over time Some of these target behaviors are relatively straightforward to implement. Others will require more thought. More importantly, we have now identified the target behaviors we want our users to take. This will guide the rest of our development efforts. Players Although the last few sections have been about the serious side of things, such as objectives and target behaviors, we still have gamification as the focal point. Hence, from this point on we will refer to our users as players. We must keep in mind that although we have defined the actions that we want our players to take, the strategies to motivate them to take that action vary from player to player. Gamification is definitely not a one-size-fits-all process. We will have to look at each of our target behaviors from the perspective of our players. We must take their motivations into consideration, unless our mechanics are pretty much trial and error. 
We will need an approach that's a little more structured. According to the Bartle's Player Motivations theory, players of any game system fall into one of the following four categories: Killers: These are people motivated to participate in a gaming scenario with the primary purpose of winning the game by "acting on" other players. This might include killing them, beating, and directly competing with other players in the game. Achievers: These, on the other hand, are motivated by taking clear actions against the system itself to win. They are less motivated by beating an opponent than by achieving things to win. Socializers: These have very different motivations for participating in a game. They are motivated more by interacting and engaging with other players. Explorers: Like socializers, explorers enjoy interaction and engagement, but less with other players than with the system itself. The following diagram outlines each player motivation type and what game mechanic might best keep them engaged. Image source: http://frankcaron.com/Flogger/?p=1732 As we define our activity loops, we need to make sure that we include each of the four types of players and their motivations. Activity loops Gamified systems, like other systems, are simply a series of actions. The player acts on the system and the system responds. We refer to how the user interacts with the system as activity loops. We will talk about two types of activity loops, engagement loops and progression loops, to describe our player interactions. Engagement loops describe how a player engages the system. They outline what a player does and how the system responds. Activity will be different for players depending on their motivations, so we must also take into consideration why the player is taking the action he is taking. A progression loop describes how the player engages the system as a whole. It outlines how he/she might progress through the game itself. Whereas engagement loops discuss what the player does on a detailed level, progression loops outline the movement of the player through the system. For example, when a person drives a car, he/she is interacting with the car almost constantly. This interaction is a set of engagement loops. All the while, the car is going somewhere. Where the car is going describes its progression loops. Activity loops tend to follow the Motivation, Action, Feedback pattern. The players are sufficiently motivated to take an action. When the players take the action and they get a feedback from the system, the feedback hopefully motivates the players enough to take another action. They take that action and get more feedback. In a perfect world, this cycle would continue indefinitely and the players would never stop playing our gamified system. Our goal is to get as close to this continuous activity loop as we possibly can. Progression loops We have spent the last few pages looking at the detailed interactions that a player will have with the system in our engagement loops. Now it's time to turn our attention to the other type of activity loop, the progression loop. Progression loops look at the system at a macro level. They describe the player's journey through the system. We usually think about levels, badges, and/or modes when we are thinking about progression loops We answer questions such as: where have you been, where are you now, and where are you going. This can all be summed up into codifying the player's mastery level. 
In our application, we will look at the journey from the vantage point of a novice, an expert, and a master. Upon joining the game, players will begin at novice level. At novice level we will focus on: Welcome On-boarding and getting the user acclimated to using the system Achievable goals In the Welcome stage, we will simply introduce the user to the game and encourage him/her to try it out. Upon on-boarding, we need to make the process as easy as possible and give back positive feedback as soon as possible. Once the user is on board, we will outline the easiest way to get involved and begin the journey. At the expert level, the player is engaging regularly in the game. However, other players would not consider this player a leader in the game. Our goal at this level is to present more difficult challenges. When the player reaches a challenge that is appearing too difficult, we can include surprise alternatives along the way to keep him/her motivated until they can break through the expert barrier to master level. The game and other players recognize masters. They should be prominently displayed within the game and might tend to want to help others at novice and expert levels. These options should become available at later stages in the game. Fun After we have done the work of identifying our objectives, defining target behaviors, scoping our players, and laying out the activities of our system, we can finally think about the area of the system where many novice game designers start: the fun. Other gamification practitioners will avoid, or at least disguise, the fun aspect of the gamification design process. It is important that we don't over or under emphasize the fun in the process. For example, chefs prepare an entire meal with spices, but they don't add all spices together. They use the spices in a balanced amount in their cooking to bring flavor to their dishes. Think of fun as an array of spices that we can apply to our activity loops. Marc Leblanc has categorized fun into eight distinct categories. We will attempt to sprinkle just enough of each, where appropriate, to accomplish the desired amount of fun. Keep in mind that what one player will experience as fun will not be the same for another. One size definitely does not fit all in this case. Sensation: A pleasurable experience Narrative: An unfolding story Challenge: An obstacle course Fantasy: Make believe Fellowship: A social framework Discovery: Exploring uncharted territory Expression: Player is given a platform Submission: Mindless activity So how can we sparingly introduce the above dimensions of fun in our system? Action to take Dimension of fun Check their discussion thread count Challenge Review a differing point of the View section Discovery Set a target discussion  amount for a particular time period Challenge Set a target number of "Differing Points of View" to review Challenge Watch a video (or some instructional material) on the how to use the discussion area Challenge Watch a video (or some instructional material) on the value of viewing other perspectives Challenge Participate in the discussion groups Fellowship Expression Read through other users' discussions posts Discovery Participate in the discussion groups over time Fellowship Expression Read through other users' perspectives over time Discovery Tools We are finally at the stage from where we can begin implementation. At this point, we can look at the various game elements (tools) to implement our gamified system. 
If we have followed the framework upto this point, the mechanics and elements should become apparent. We are not simply adding leader boards or a point system for the sake of it. We can tie all the tools we use back to our previous work. This will result in a Gamification Design Matrix for our application. But before we go there, let's stop and take a look at some tools we have at our disposal. There are a myriad of tools, mechanics, and strategies at our disposal. New ones are being designed everyday. Here are a few of the most common mechanics that we will encounter when designing our gamified system: Achievements: These are specific objectives that a player meets. Avatars: These are visual representations of a player's role, persona, or character in a game. Badges: These are visual elements used to recognize a particular accomplishment. They give players a sense of pride that they can show off to others. Boss fight: This is an exceptionally difficult challenge in a game scenario, usually at the end of a level to demonstrate enough skill level to move up to the next level. Leaderboards: These show rankings of players publicly. They recognize an accomplishment like a badge, but they are visible for all to see. We see this almost every day, in every way from sports team rankings to sales rep monthly results. Points: These are rather straightforward. Players accumulate points and take various actions in the system. Quests/Mission: These are specialized challenges in a game scenario having narrative and objective as characteristics. Reward: This is anything used to extrinsically motivate the user to take a particular action. Team: This is a group of players playing as a single unit. Virtual assets: These are elements in the game that have some value and can be acquired or used to acquire other assets, whether tangible or virtual. Now it's time to turn and take off our gamification design hat and put on our developer hat. Let's start by developing some initial mockups of what our final site might look like using the design we have outlined previously. Many people develop mockups using graphics tools such as Photoshop or Gimp. At this stage, we will be less detailed in our mockups and simply use pencil sketches or a mockup tool such as Balsamiq. Login screen This is a mock-up of the basic login screen in our application. Players are accustomed to a basic login and password scenario we provide here. Account creation screen First time players will have to create an account initially. This is the mock-up of our signup page. Main Player Screen This captures the main elements of our system when a player is fully engaged with the system. Main Player Post Response Screen We have outlined the key functionality of our gamified system via mock-ups. Mock-ups are a means of visually communicating to our team what we are building and why we are building it. Visual mock-ups also give us an opportunity to uncover issues in our design early in the process. Summary Most gamified applications will fail due to a poorly designed system. Hence, we have introduced a Gamification Design Framework to guide our development process. 
We know that our chances of developing a successful system increase tremendously if we: Define clear business objectives Establish target behaviors Understand our players Work through the activity loops Remember the fun Optimize the tools Resources for Article: Further resources on this subject: An Introduction to PHP-Nuke [Article] Installing phpMyAdmin [Article] Getting Started with jQuery [Article]
Different strategies to make responsive websites
Packt
04 Oct 2013
9 min read
(For more resources related to this topic, see here.) The Goldilocks approach In 2011, and in response to the dilemma of building several iterations of the same website by targeting every single device, the web-design agency, Design by Front, came out with an official set of guidelines many designers were already adhering to. In essence, the Goldilocks approach states that rather than rearranging our layouts for every single device, we shouldn't be afraid of margins on the left and right of our designs. There's a blurb about sizing around the width of our body text (which they state should be around 66 characters per line, or 33 em's wide), but the important part is that they completely destroyed the train of thought that every single device needed to be explicitly targeted—effectively saving designers countless hours of time. This approach became so prevalent that most CSS frameworks, including Twitter Bootstrap 2, adopted it without realizing that it had a name. So how does this work exactly? You can see a demo at http://goldilocksapproach.com/demo; but for all you bathroom readers out there, you basically wrap your entire site in an element (or just target the body selector if it doesn't break anything else) and set the width of that element to something smaller than the width of the screen while applying a margin: auto. The highlighted element is the body tag. You can see the standard and huge margins on each side of it on larger desktop monitors. As you contract the viewport to a generic tablet-portrait size, you can see the width of the body is decreased dramatically, creating margins on each side again. They also do a little bit of rearranging by dropping the sidebar below the headline. As you contract the viewport more to a phone size, you'll notice that the body of the page occupies the full width of the page now, with just some small margins on each side to keep text from butting up against the viewport edges. Okay, so what are the advantages and disadvantages? Well, one advantage is it's incredibly easy to do. You literally create a wrapping element and every time the width of the viewport touches the edges of that element, you make that element smaller and tweak a few things. But, the huge advantage is that you aren't targeting every single device, so you only have to write a small amount of code to make your site responsive. The downside is that you're wasting a lot of screen real-estate with all those margins. For the sake of practice, create a new folder called Goldilocks. Inside that folder create a goldilocks.html and goldilocks.css file. Put the following code in your goldilocks.html file: <!DOCTYPE html> <html> <head> <title>The Goldilocks Approach</title> <link rel="stylesheet" href="goldilocks.css"> </head> <body> <div id="wrap"> <header> <h1>The Goldilocks Approach</h1> </header> <section> <aside>Sidebar</aside> <article> <header> <h2>Hello World</h2> <p> Lorem ipsum... </p> </header> </article> </section> </div> </body> </html> We're creating an incredibly simple page with a header, sidebar, and content area to demonstrate how the Goldilocks approach works. 
In your goldilocks.css file, put the following code: * { margin: 0; padding: 0; background: rgba(0,0,0,.05); font: 13px/21px Arial, sans-serif; } h1, h2 { line-height: 1.2; } h1 { font-size: 30px; } h2 { font-size: 20px; } #wrap { width: 900px; margin: auto; } section { overflow: hidden; } aside { float: left; margin-right: 20px; width: 280px; } article { float: left; width: 600px; } @media (max-width: 900px) { #wrap { width: 500px; } aside { width: 180px; } article { width: 300px; } } @media (max-width: 500px) { #wrap { width: 96%; margin: 0 2%; } aside, article { width: 100%; margin-top: 10px; } } Did you notice how the width of the #wrap element becomes the max-width of the media query? After you save and refresh your page, you'll be able to expand/contract to your heart's content and enjoy your responsive website built with the Goldilocks approach. Look at you! You just made a site that will serve any device with only a few media queries. The fewer media queries you can get away with, the better! Here's what it should look like: The preceding screenshot shows your Goldilocks page at desktop width. At tablet size, it looks like the following: On a mobile site, you should see something like the following screenshot: The Goldilocks approach is great for websites that are graphic heavy as you can convert just three mockups to layouts and have completely custom, graphic-rich websites that work on almost any device. It's nice if you are of the type who enjoys spending a lot of time in Photoshop and don't mind putting in the extra work of recreating a lot of code for a more textured website with a lot of attention to detail. The Fluid approach Loss of real estate and a substantial amount of extra work for slightly prettier (and heavier) websites is a problem that most of us don't want to deal with. We still want beautiful sites, and luckily with pure CSS, we can replicate a huge amount of elements in flexible code. A common, real-world example of replacing images with CSS is to use CSS to create buttons. Where Goldilocks looks at your viewport as a container for smaller, usually pixel-based containers, the Fluid approach looks at your viewport as a 100 percent large container. If every element inside the viewport adds up to around 100 percent, you've effectively used the real estate you were given. Duplicate your goldilocks.html file, then rename it to fluid.html. Replace the mentions of "Goldilocks" with "Fluid": <!DOCTYPE html> <html> <head> <title>The Fluid Approach</title> <link rel="stylesheet" href="fluid.css"> </head> <body> <div id="wrap"> <header> <h1>The Fluid Approach</h1> </header> <section> <aside>Sidebar</aside> <article> <header> <h2>Hello World</h2> </header> <p> Lorem ipsum... </p> </article> </section> </div> </body> </html> We're just duplicating our very simple header, sidebar, and article layout. Create a fluid.css file and put the following code in it: * { margin: 0; padding: 0; background: rgba(0,0,0,.05); font: 13px/21px Arial, sans-serif; } aside { float: left; width: 24%; margin-right: 1%; } article { float: left; width: 74%; margin-left: 1%; } Wow! That's a lot less code already. Save and refresh your browser, then expand/contract your viewport. Did you notice how we're using all available space? Did you notice how we didn't even have to use media queries and it's already responsive? Percentages are pretty cool. 
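One practical note at this point: mobile browsers render pages against a default desktop-sized viewport unless told otherwise, so a fluid layout can still look zoomed out on a real phone. The example files in this article do not include it, but a viewport declaration in the head of fluid.html is the usual fix:

<meta name="viewport" content="width=device-width, initial-scale=1">

With that in place, the percentage widths are calculated against the device's actual width.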
Your first fluid, responsive, web design We have a few problems though: On large monitors, when that layout is full of text, every paragraph will fit on one line. That's horrible for readability. Text and other elements butt up against the edges of the design. The sidebar and article, although responsive, don't look great on smaller devices. They're too small. Luckily, these are all pretty easy fixes. First, let's make sure the layout of our content doesn't stretch to 100 percent of the width of the viewport when we're looking at it in larger resolutions. To do this, we use a CSS property called max-width. Append the following code to your fluid.css file: #wrap { max-width: 980px; margin: auto; } What do you think max-width does? Save and refresh, expand and contract. You'll notice that wrapping div is now centered in the screen at 980 px width, but what happens when you go below 980 px? It simply converts to 100 percent width. This isn't the only way you'll use max-width, but we'll learn a bit more in the Gotchas and best practices section. Our second problem was that the elements were butting up against the edges of the screen. This is an easy enough fix. You can either wrap everything in another element with specified margins on the left and right, or simply add some padding to our #wrap element shown as follows: #wrap { max-width: 980px; margin: auto; padding: 0 20px; } Now our text and other elements are touching the edges of the viewport. Finally, we need to rearrange the layout for smaller devices, so our sidebar and article aren't so tiny. To do this, we'll have to use a media query and simply unassign the properties we defined in our original CSS: @media (max-width: 600px) { aside, article { float: none; width: 100%; margin: 10px 0; } } We're removing the float because it's unnecessary, giving these elements a width of 100 percent, and removing the left and right margins while adding some margins on the top and bottom so that we can differentiate the elements. This act of moving elements on top of each other like this is known as stacking. Simple enough, right? We were able to make a really nice, real-world, responsive, fluid layout in just 28 lines of CSS. On smaller devices, we stack content areas to help with readability/usability: It's up to you how you want to design your websites. If you're a huge fan of lush graphics and don't mind doing extra work or wasting real estate, then use Goldilocks. I used Goldilocks for years until I noticed a beautiful site with only one breakpoint (width-based media query), then I switched to Fluid and haven't looked back. It's entirely up to you. I'd suggest you make a few websites using Goldilocks, get a bit annoyed at the extra effort, then try out Fluid and see if it fits. In the next section we'll talk about a somewhat new debate about whether we should be designing for larger or smaller devices first. Summary In this article, we have taken a look at how to build a responsive website using the Goldilocks approach and the Fluid approach. Resources for Article : Further resources on this subject: Creating a Web Page for Displaying Data from SQL Server 2008 [Article] The architecture of JavaScriptMVC [Article] Setting up a single-width column system (Simple) [Article]
Using Events, Interceptors, and Logging Services
Packt
03 Oct 2013
19 min read
(For more resources related to this topic, see here.) Understanding interceptors Interceptors are defined as part of the EJB 3.1 specification (JSR 318), and are used to intercept Java method invocations and lifecycle events that may occur in Enterprise Java Beans (EJB) or Named Beans from Context Dependency Injection (CDI). The three main components of interceptors are as follows: The Target class: This class will be monitored or watched by the interceptor. The target class can hold the interceptor methods for itself. The Interceptor class: This interceptor class groups interceptor methods. The Interceptor method: This method will be invoked according to the lifecycle events. As an example, a logging interceptor will be developed and integrated into the Store application. Following the hands-on approach of this article, we will see how to apply the main concepts through the given examples without going into a lot of details. Check the Web Resources section to find more documentation about interceptors. Creating a log interceptor A log interceptor is a common requirement in most Java EE projects as it's a simple yet very powerful solution because of its decoupled implementation and easy distribution among other projects if necessary. Here's a diagram that illustrates this solution: Log and LogInterceptor are the core of the log interceptor functionality; the former can be thought of as the interface of the interceptor, it being the annotation that will decorate the elements of SearchManager that must be logged, and the latter carries the actual implementation of our interceptor. The business rule is to simply call a method of class LogService, which will be responsible for creating the log entry. Here's how to implement the log interceptor mechanism: Create a new Java package named com.packt.store.log in the project Store. Create a new enumeration named LogLevel inside this package. This enumeration will be responsible to match the level assigned to the annotation and the logging framework: package com.packt.store.log; public enum LogLevel { // As defined at java.util.logging.Level SEVERE, WARNING, INFO, CONFIG, FINE, FINER, FINEST; public String toString() { return super.toString(); } } We're going to create all objects of this section—LogLevel, Log, LogService, and LogInterceptor—into the same package, com.packt.store.log. This decision makes it easier to extract the logging functionality from the project and build an independent library in the future, if required. Create a new annotation named Log. This annotation will be used to mark every method that must be logged, and it accepts the log level as a parameter according to the LogLevel enumeration created in the previous step: package com.packt.store.log; @Inherited @InterceptorBinding @Retention(RetentionPolicy.RUNTIME) @Target({ElementType.METHOD, ElementType.TYPE}) public @interface Log { @Nonbinding LogLevel value() default LogLevel.FINEST; } As this annotation will be attached to an interceptor, we have to add the @InterceptorBinding decoration here. When creating the interceptor, we will add a reference that points back to the Log annotation, creating the necessary relationship between them. Also, we can attach an annotation virtually to any Java element. 
This is dictated by the @Target decoration, where we can set any combination of the ElementType values such as ANNOTATION_TYPE, CONSTRUCTOR, FIELD, LOCAL_VARIABLE, METHOD, PACKAGE, PARAMETER, and TYPE (mapping classes, interfaces, and enums), each representing a specific element. The annotation being created can be attached to methods and classes or interface definitions. Now we must create a new stateless session bean named LogService that is going to execute the actual logging: @Stateless public class LogService { // Receives the class name decorated with @Log public void log(final String clazz, final LogLevel level, final String message) { // Logger from package java.util.logging Logger log = Logger.getLogger(clazz); log.log(Level.parse(level.toString()), message); } } Create a new class, LogInterceptor, to trap calls from classes or methods decorated with @Log and invoke the LogService class we just created—the main method must be marked with @AroundInvoke—and it is mandatory that it receives an InvocationContext instance and returns an Object element: @Log @Interceptor public class LogInterceptor implements Serializable { private static final long serialVersionUID = 1L; @Inject LogService logger; @AroundInvoke public Object logMethod(InvocationContext ic) throws Exception { final Method method = ic.getMethod(); // check if annotation is on class or method LogLevel logLevel = method.getAnnotation(Log.class)!= null ? method.getAnnotation(Log.class).value() : method.getDeclaringClass().getAnnotation(Log.class).value(); // invoke LogService logger.log(ic.getClass().getCanonicalName(),logLevel, method.toString()); return ic.proceed(); } } As we defined earlier, the Log annotation can be attached to methods and classes or interfaces by its @Target decoration; we need to discover which one raised the interceptor to retrieve the correct LogLevel value. When trying to get the annotation from the class shown in the method.getDeclaringClass().getAnnotation(Log.class) line, the engine will traverse through the class' hierarchy searching for the annotation, up to the Object class if necessary. This happens because we marked the Log annotation with @Inherited. Remember that this behavior only applies to the class's inheritance, not interfaces. Finally, as we marked the value attribute of the Log annotation as @Nonbinding in step 3, all log levels will be handled by the same LogInterceptor function. If you remove the @Nonbinding line, the interceptor should be further qualified to handle a specific log level, for example @Log(LogLevel.INFO), so you would need to code several interceptors, one for each existing log level. Modify the beans.xml (under /WEB-INF/) file to tell the container that our class must be loaded as an interceptor—currently, the file is empty, so add all the following lines: <beans xsi_schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/beans_1_0.xsd"> <interceptors> <class>com.packt.store.log.LogInterceptor</class> </interceptors> </beans> Now decorate a business class or method with @Log in order to test what we've done. For example, apply it to the getTheaters() method in SearchManager from the project Store. Remember that it will be called every time you refresh the query page: @Log(LogLevel.INFO) public List<Theater> getTheaters() { ... } Make sure you have no errors in the project and deploy it to the current server by right-clicking on the server name and then clicking on the Publish entry. 
Access the theater's page, http://localhost:7001/theater/theaters.jsf, refresh it a couple of times, and check the server output. If you have started your server from Eclipse, it should be under the Console tab: Nov 12, 2012 4:53:13 PM com.packt.store.log.LogService log INFO: public java.util.List com.packt.store.search.SearchManager.getTheaters() Let's take a quick overview of what we've accomplished so far; we created an interceptor and an annotation that will perform all common logging operations for any method or class marked with such an annotation. All log entries generated from the annotation will follow WebLogic's logging services configuration. Interceptors and Aspect Oriented Programming There are some equivalent concepts on these topics, but at the same time, they provide some critical functionalities, and these can make a completely different overall solution. In a sense, interceptors work like an event mechanism, but in reality, it's based on a paradigm called Aspect Oriented Programming (AOP). Although AOP is a huge and complex topic and has several books that cover it in great detail, the examples shown in this article make a quick introduction to an important AOP concept: method interception. Consider AOP as a paradigm that makes it easier to apply crosscutting concerns (such as logging or auditing) as services to one or multiple objects. Of course, it's almost impossible to define the multiple contexts that AOP can help in just one phrase, but for the context of this article and for most real-world scenarios, this is good enough. Using asynchronous methods A basic programming concept called synchronous execution defines the way our code is processed by the computer, that is, line-by-line, one at a time, in a sequential fashion. So, when the main execution flow of a class calls a method, it must wait until its completion so that the next line can be processed. Of course, there are structures capable of processing different portions of a program in parallel, but from an external viewpoint, the execution happens in a sequential way, and that's how we think about it when writing code. When you know that a specific portion of your code is going to take a little while to complete, and there are other things that could be done instead of just sitting and waiting for it, there are a few strategies that you could resort to in order to optimize the code. For example, starting a thread to run things in parallel, or posting a message to a JMS queue and breaking the flow into independent units are two possible solutions. If your code is running on an application server, you should know by now that thread spawning is a bad practice—only the server itself must create threads, so this solution doesn't apply to this specific scenario. Another way to deal with such a requirement when using Java EE 6 is to create one or more asynchronous methods inside a stateless session bean by annotating either the whole class or specific methods with javax.ejb.Asynchronous. If the class is decorated with @Asynchronous, all its methods inherit the behavior. When a method marked as asynchronous is called, the server usually spawns a thread to execute the called method—there are cases where the same thread can be used, for instance, if the calling method happens to end right after emitting the command to run the asynchronous method. Either way, the general idea is that things are explicitly going to be processed in parallel, which is a departure from the synchronous execution paradigm. 
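Before wiring this into the Store application, here is a minimal sketch of the pattern in isolation: an asynchronous method on a stateless session bean plus a caller that keeps working while the task runs. The bean names (ReportService and ReportTrigger) and the report-generation scenario are assumptions used purely for illustration, and each class would live in its own source file:

import java.util.concurrent.Future;

import javax.ejb.AsyncResult;
import javax.ejb.Asynchronous;
import javax.ejb.EJB;
import javax.ejb.Stateless;

@Stateless
public class ReportService {
    // The container returns control to the caller immediately and runs
    // this method in a separate, container-managed thread.
    @Asynchronous
    public Future<String> generate(String reportName) {
        // ...long-running work would happen here...
        return new AsyncResult<String>("finished " + reportName);
    }
}

@Stateless
public class ReportTrigger {
    @EJB
    private ReportService reportService;

    public String trigger() throws Exception {
        Future<String> result = reportService.generate("sales");
        // Do other useful work here instead of waiting...
        // get() blocks only if the asynchronous method hasn't finished yet.
        return result.get();
    }
}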
To see how it works, let's change the LogService method to be an asynchronous one; all we need to do is decorate the class or the method with @Asynchronous: @Stateless @Asynchronous public class LogService { … As the call to its log method is the last step executed by the interceptor, and its processing is really quick, there is no real benefit in doing so. To make things more interesting, let's force a longer execution cycle by inserting a sleep method into the method of LogService: public void log(final String clazz,final LogLevel level,final String message) { Logger log = Logger.getLogger(clazz); log.log(Level.parse(level.toString()), message); try { Thread.sleep(5000); log.log(Level.parse(level.toString()), "reached end of method"); } catch (InterruptedException e) { e.printStackTrace(); } } Using Thread.sleep() when running inside an application server is another classic example of a bad practice, so keep away from this when creating real-world solutions. Save all files, publish the Store project, and load the query page a couple of times. You will notice that the page is rendered without delay, as usual, and that the reached end of method message is displayed after a few seconds in the Console view. This is a pretty subtle scenario, so you can make it harsher by commenting out the @Asynchronous line and deploying the project again—this time when you refresh the browser, you will have to wait for 5 seconds before the page gets rendered. Our example didn't need a return value from the asynchronous method, making it pretty simple to implement. If you need to get a value back from such methods, you must declare it using the java.util.concurrent.Future interface: @Asynchronous public Future<String> doSomething() { … } The returned value must be changed to reflect the following: return new AsyncResult<String>("ok"); The javax.ejb.AsyncResult function is an implementation of the Future interface that can be used to return asynchronous results. There are other features and considerations around asynchronous methods, such as ways to cancel a request being executed and to check if the asynchronous processing has finished, so the resulting value can be accessed. For more details, check the Creating Asynchronous methods in EJB 3.1 reference at the end of this article. Understanding WebLogic's logging service Before we advance to the event system introduced in Java EE 6, let's take a look at the logging services provided by Oracle WebLogic Server. By default, WebLogic Server creates two log files for each managed server: access.log: This is a standard HTTP access log, where requests to web resources of a specific server instance are registered with details such as HTTP return code, the resource path, response time, among others <ServerName.log>: This contains the log messages generated by the WebLogic services and deployed applications of that specific server instance These files are generated in a default directory structure that follows the pattern $DOMAIN_NAME/servers/<SERVER_NAME>/logs/. If you are running a WebLogic domain that spawns over more than one machine, you will find another log file named <DomainName>.log in the machine where the administration server is running. This file aggregates messages from all managed servers of that specific domain, creating a single point of observation for the whole domain. As a best practice, only messages with a higher level should be transferred to the domain log, avoiding overhead to access this file. 
Keep in mind that the messages written to the domain log are also found in the specific log file of the managed server that generated them, so there's no need to redirect everything to the domain log.

Anatomy of a log message

Here's a typical entry of a log file:

####<Jul 15, 2013 8:32:54 PM BRT> <Alert> <WebLogicServer> <sandbox-lap> <AdminServer> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <weblogic> <> <> <1373931174624> <BEA-000396> <Server shutdown has been requested by weblogic.>

The description of each field is given in the following table:

Text | Description
#### | Fixed; every log message starts with this sequence
<Jul 15, 2013 8:32:54 PM BRT> | Locale-formatted timestamp
<Alert> | Message severity
<WebLogicServer> | WebLogic subsystem; other examples are WorkManager, Security, EJB, and Management
<sandbox-lap> | Physical machine name
<AdminServer> | WebLogic Server name
<[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> | Thread ID
<weblogic> | User ID
<> | Transaction ID, or empty if not in a transaction context
<> | Diagnostic context ID, or empty if not applicable; it is used by the Diagnostics Framework to correlate messages of a specific request
<1373931174624> | Raw time in milliseconds
<BEA-000396> | Message ID
<Server shutdown has been requested by weblogic.> | Description of the event

The Diagnostics Framework presents functionalities to monitor, collect, and analyze data from several components of WebLogic Server.

Redirecting standard output to a log file

The logging solution we've just created is currently using the Java SE logging engine; we can see our messages on the console's screen, but they aren't being written to any log file managed by WebLogic Server. It is this way because of the default configuration of Java SE, as we can see from the following snippet, taken from the logging.properties file used to run the server:

# "handlers" specifies a comma separated list of log Handler
# classes. These handlers will be installed during VM startup.
# Note that these classes must be on the system classpath.
# By default we only configure a ConsoleHandler, which will only
# show messages at the INFO and above levels.
handlers= java.util.logging.ConsoleHandler

You can find this file at $JAVA_HOME/jre/lib/logging.properties. So, as stated here, the default output destination used by Java SE is the console. There are a few ways to change this aspect:

If you're using this Java SE installation solely to run WebLogic Server instances, you may go ahead and change this file, adding a specific WebLogic handler to the handlers line as follows:

handlers= java.util.logging.ConsoleHandler,weblogic.logging.ServerLoggingHandler

If tampering with Java SE files is not an option (the installation may be shared with other software, for instance), you can duplicate the default logging.properties file into another folder, $DOMAIN_HOME being a suitable candidate, add the new handler, and instruct WebLogic to use this file at startup by adding the following argument to the server's start command:

-Djava.util.logging.config.file=$DOMAIN_HOME/logging.properties

You can also use the administration console to set the redirection of the standard output (and error) to the log files. To do so, perform the following steps:

In the left-hand side panel, expand Environment and select Servers.
In the Servers table, click on the name of the server instance you want to configure.
Select Logging and then General.
Find the Advanced section, expand it, and tick the Redirect stdout logging enabled checkbox: Click on Save to apply your changes. If necessary, the console will show a message stating that the server must be restarted to acquire the new configuration. If you get no warnings asking to restart the server, then the configuration is already in use. This means that both WebLogic subsystems and any application deployed to that server is automatically using the new values, which is a very powerful feature for troubleshooting applications without intrusive actions such as modifying the application itself—just change the log level to start capturing more detailed messages! Notice that there are a lot of other logging parameters that can be configured, and three of them are worth mentioning here: The Rotation group (found in the inner General tab): The rotation feature instructs WebLogic to create new log files based on the rules set on this group of parameters. It can be set to check for a size limit or create new files from time to time. By doing so, the server creates smaller files that we can easily handle. We can also limit the number of files retained in the machine to reduce the disk usage. If the partition where the log files are being written to reaches 100 percent of utilization, WebLogic Server will start behaving erratically. Always remember to check the disk usage; if possible, set up a monitoring solution such as Nagios to keep track of this and alert you when a critical level is reached. Minimum severity to log (also in the inner General tab): This entry sets the lower severity that should be logged by all destinations. This means that even if you set the domain level to debug, the messages will be actually written to the domain log only if this parameter is set to the same or lower level. It will work as a gatekeeper to avoid an overload of messages being sent to the loggers. HTTP access log enabled (found in the inner HTTP tab): When WebLogic Server is configured in a clustered environment, usually a load-balancing solution is set up to distribute requests between the WebLogic managed servers; the most common options are Oracle HTTP Server (OHS) or Apache Web Server. Both are standard web servers, and as such, they already register the requests sent to WebLogic in their own access logs. If this is the case, disable the WebLogic HTTP access log generation, saving processing power and I/O requests to more important tasks. Integrating Log4J to WebLogic's logging services If you already have an application that uses Log4J and want it to write messages to WebLogic's log files, you must add a new weblogic.logging.log4j.ServerLoggingAppender appender to your lo4j.properties configuration file. This class works like a bridge between Log4J and WebLogic's logging framework, allowing the messages captured by the appender to be written to the server log files. As WebLogic doesn't package a Log4J implementation, you must add its JAR to the domain by copying it to $DOMAIN_HOME/tickets/lib, along with another file, wllog4j.jar, which contains the WebLogic appender. This file can be found inside $MW_HOME/wlserver/server/lib. Restart the server, and it's done! If you're using a *nix system, you can create a symbolic link instead of copying the files—this is great to keep it consistent when a path changing these specific files must be applied to the server. 
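With both JAR files available to the domain, the Log4J side of the integration is just configuration. A minimal log4j.properties sketch might look like the following; the appender name, the root logger level, and the layout settings are assumptions you would adapt to your own project:

# Route Log4J messages to WebLogic Server's logging framework
log4j.rootLogger=INFO, wlserver

# Bridge appender shipped in wllog4j.jar
log4j.appender.wlserver=weblogic.logging.log4j.ServerLoggingAppender

# Layout settings are an assumption; adjust or remove them for your setup
log4j.appender.wlserver.layout=org.apache.log4j.PatternLayout
log4j.appender.wlserver.layout.ConversionPattern=%d{ISO8601} %-5p [%c] %m%n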
Remember that having a file inside $MW_HOME/wlserver/server/lib doesn't mean that the file is being loaded by the server when it starts up; it is just a central place to hold the libraries. To be loaded by a server, a library must be added to the classpath parameter of that server, or you can add it to the domain-wide lib folder, which guarantees that it will be available to all nodes of the domain on a specific machine. Accessing and reading log files If you have direct access to the server files, you can open and search them using a command-line tool such as tail or less, or even use a graphical viewer such as Notepad. But when you don't have direct access to them, you may use WebLogic's administration console to read their content by following the steps given here: In the left-hand side pane of the administration console, expand Diagnostics and select Log Files. In the Log Files table, select the option button next to the name of the log you want to check and click on View: The types displayed on this screen, which are mentioned at the start of the section, are Domain Log, Server Log, and HTTP Access. The others are resource-specific or linked to the diagnostics framework. Check the Web resources section at the end of this article for further reference. The page displays the latest contents of the log file; the default setting shows up to 500 messages in reverse chronological order. The messages at the top of the window are the most recent messages that the server has generated. Keep in mind that the log viewer does not display messages that have been converted into archived log files.

Packt
03 Oct 2013
8 min read
Save for later

Quick start – using Foundation 4 components for your first website

(For more resources related to this topic, see here.)

Step 1 – using the Grid

The base building block that Foundation 4 provides is the Grid. This component allows us to easily put the rest of the elements on the page. The Grid also avoids the temptation of using tables to put elements in their places. Tables should only be used to show tabular data. Don't use them with any other meaning. Web design using tables is considered a really terrible practice.

Defining a grid, intuitively, consists of defining rows and columns. There are basically two ways to do this, depending on which kind of layout you want to create. They are as follows:

If you want a simple layout which evenly splits the contents of the page, you should use Block Grid. To use Block Grid, we must have the default CSS package or be sure to have selected Block Grid from a custom package.
If you want a more complex layout, with different sized elements and not necessarily evenly distributed, you should use the normal Grid. This normal Grid contains up to 12 grid columns to put elements into.

After picking the general layout of your page, you should decide if you want your grid structure to be the same for small devices, such as smartphones or tablets, as it will be on large devices, such as desktop computers or laptops. So, our first task is to define a grid structure for our page as follows:

Select how we want to distribute our elements. We choose Block Grid, the simpler one. Consequently, we define a <ul> element with several <li> elements inside it.
Select whether we want a different structure for large and small screens. We choose Yes. This is important to determine which Foundation 4 CSS class our elements will belong to.

As a result, we have the following code:

<ul class="large-block-grid-4">
  <li><img src="demo1.jpg"></li>
  <li><img src="demo2.jpg"></li>
  <li><img src="demo3.jpg"></li>
  <li><img src="demo4.jpg"></li>
</ul>

The key concept here is the class large-block-grid-4. There are two important classes related to Block Grid:

small-block-grid: If the element belongs to this class, the resulting Block Grid will keep its spacing and configuration for any screen size.
large-block-grid: With this, the behavior will change between large and small screen sizes. So, large forces the responsive behavior.

You can also use both classes together. In that case, large overrides the behavior of the small class. The number 4 at the end of the class name is just the number of grid columns. The complete code of our page so far is as follows:

<!DOCTYPE html>
<!--[if IE 8]><html class="no-js lt-ie9" lang="en" ><![endif]-->
<!--[if gt IE 8]><!--> <html class="no-js" lang="en" > <!--<![endif]-->
<head>
  <meta charset="utf-8" />
  <meta name="viewport" content="width=device-width, initial-scale=1, maximum-scale=1" />
  <title>My First Foundation Web</title>
  <link rel="stylesheet" href="css/foundation.css" />
  <script src="js/vendor/custom.modernizr.js"></script>
  <style>
    img {
      width: 300px;
      border: 1px solid #ddd;
    }
  </style>
</head>
<body>
  <!-- Grid -->
  <ul class="large-block-grid-4">
    <li><img src="demo1.jpg"></li>
    <li><img src="demo2.jpg"></li>
    <li><img src="demo3.jpg"></li>
    <li><img src="demo4.jpg"></li>
  </ul>
  <!-- end grid -->
  <script>
    document.write('<script src=' + ('__proto__' in {} ?
'js/vendor/zepto' : 'js/vendor/jquery') + .js><\/script>') </script> <script src ="js/foundation.min.js"></script> <script> $(document).foundation(); </script> </body> </html> We have created a simple HTML file with a basic grid that contains 4 images in a list, using the Block Grid facility. Each image spans 4 grid columns. The following screenshot shows how our page looks: Not very fancy, uh? Don't worry, we will add some nice features in the following steps. We have way more options to choose from, for the grid arrangements of our pages. Visit http://foundation.zurb.com/docs/components/grid.html for more information. Step 2 – the navigation bar Adding a basic top navigation bar to our website with Foundation 4 is really easy. We just follow steps: Create an HTML nav element by inserting the following code: <nav class="top-bar"></nav> Add a title for the navigation bar (optional) by inserting the following code: <nav class="top-bar"> <ul class="title-area"> <li class="name"> <h1><a href="#">My First Foundation Web</h1> </li> <li class="toggle-topbar"> <a href="#"><span>Menu</span></a> </li> </ul> </nav> Add some navigation elements inside the nav element <nav class="top-bar"> <!--Title area --> <ul class="title-area"> <li class="name"> <h1><a href="#">My First Foundation Web</h1> </li> <li class="toggle-topbar"> <a href="#"><span>Menu</span></a> </li> </ul> <!-- Here starts nav Section --> <section class="top-bar-section"> <!-- Left Nav Section --> <ul class="left"> <li class="divider"></li> <li class="has-dropdown"><a href="#">Options</a> <!-- First submenu --> <ul class="dropdown"> <li class="has-dropdown"><a href="#">Option 1a</a> <!-- Second submenu --> <ul class="dropdown"> <li><label>2nd Options list</label></li> <li><a href="#">Option 2a</a></li> <li><a href="#">Option 2b</a></li> <li class="has-dropdown"> <a href="#">Option 2c</a> <!-- Third submenu --> <ul class="dropdown"> <li><label>3rd Options list</label></li> <li><a href="#">Option 3a</a></li> <li><a href="#">Option 3b</a></li> <li><a href="#">Option 3c</a></li> </ul> </li> </ul> </li> <!-- Visual separation between elements --> <li class="divider"></li> <li><a href="#">Option 2b</a></li> <li><a href="#">Option 2c</a></li> </ul> </li> <li class="divider"></li> </ul> </section> </nav> Interesting parts in the preceding code are as follows: <li class="divider">: It creates a visual separation between the elements of a list <li class="has-dropdown">: It shows a drop-down element when is clicked. <ul class="dropdown">: It indicates that the list is a drop-down menu. Apart from that, the left class, used to specify where we want the buttons on the bar. We would use right class to put them on the right side of the screen, or both classes, if we want several buttons in different places. Our navigation bar looks like the following screenshot: Of course, this navigation bar shows responsive behavior, thanks to the class toggle-topbar-menu-icon. This class forces the buttons to collapse in narrower screens. And it looks like the following screenshot: Now we know how to add a top navigation bar to our page. We just add the navigation bar's code before the grid, and the result is shown in the following screenshot: Summary In this article we learnt how to use 2 of the many UI elements provided to us by Foundation 4, the grid and the navigation bar. The screenshots provided help in giving a better idea of how the elements look, when incorporated in our website. 
Resources for Article : Further resources on this subject: HTML5: Generic Containers [Article] Building HTML5 Pages from Scratch [Article] Video conversion into the required HTML5 Video playback [Article]
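As a recap of the two components covered above, the body of the finished page (navigation bar placed before the grid) might be structured roughly like this. It is a trimmed sketch: the head section and the full drop-down menu from the earlier listings are omitted, and the zepto/jQuery loader is simplified to a plain jQuery include:

<body>
  <!-- Navigation bar goes before the grid -->
  <nav class="top-bar">
    <ul class="title-area">
      <li class="name"><h1><a href="#">My First Foundation Web</a></h1></li>
      <li class="toggle-topbar"><a href="#"><span>Menu</span></a></li>
    </ul>
    <section class="top-bar-section">
      <ul class="left">
        <li><a href="#">Options</a></li>
      </ul>
    </section>
  </nav>

  <!-- Responsive Block Grid with the four demo images -->
  <ul class="large-block-grid-4">
    <li><img src="demo1.jpg"></li>
    <li><img src="demo2.jpg"></li>
    <li><img src="demo3.jpg"></li>
    <li><img src="demo4.jpg"></li>
  </ul>

  <!-- Simplified script includes; the full listing picks zepto or jQuery dynamically -->
  <script src="js/vendor/jquery.js"></script>
  <script src="js/foundation.min.js"></script>
  <script>
    $(document).foundation();
  </script>
</body>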
article-image-creating-our-first-bot-webbot
Packt
03 Oct 2013
9 min read
Save for later

Creating our first bot, WebBot

(For more resources related to this topic, see here.)

With the knowledge you have gained, we are now ready to develop our first bot, which will be a simple bot that gathers data (documents) based on a list of URLs and datasets (fields and field values) that we will require. First, let's start by creating our bot package directory. So, create a directory called WebBot so that the files in our project_directory/lib directory look like the following:

'-- project_directory
    |-- lib
    |   |-- HTTP (our existing HTTP package)
    |   |   '-- (HTTP package files here)
    |   '-- WebBot
    |       |-- bootstrap.php
    |       |-- Document.php
    |       '-- WebBot.php
    |-- (our other files)
    '-- 03_webbot.php

As you can see, we have a very clean and simple directory and file structure that any programmer should be able to easily follow and understand.

The WebBot class

Next, open the WebBot.php file and add the code from the project_directory/lib/WebBot/WebBot.php file.

In our WebBot class, we first use the __construct() method to pass in the array of URLs (or documents) we want to fetch and the array of document fields, which are used to define the datasets and regular expression patterns. The regular expression patterns are used to populate the dataset values (or document field values). If you are unfamiliar with regular expressions, now would be a good time to study them.

Then, in the __construct() method, we verify whether there are URLs to fetch. If there are none, we set an error message stating this problem.

Next, we use the __formatUrl() method to properly format the URLs we fetch data from. This method will also set the correct protocol: either HTTP or HTTPS (Hypertext Transfer Protocol Secure). If the protocol is already set for the URL, for example http://www.[dom].com, we ignore setting the protocol. Also, if the class configuration setting conf_force_https is set to true, we force the HTTPS protocol, again unless the protocol is already set for the URL.

We then use the execute() method to fetch data for each URL, set and add the Document objects to the array of documents, and track document statistics. This method also implements fetch delay logic that will delay each fetch by x number of seconds if set in the class configuration setting conf_delay_between_fetches. We also include logic that only allows distinct URL fetches, meaning that if we have already fetched data for a URL, we won't fetch it again; this eliminates duplicate URL data fetches.

The Document object is used as a container for the URL data, and we can use the Document object to access the URL data, the data fields, and their corresponding data field values.

In the execute() method, you can see that we perform an HTTPRequest::get() request using the URL and our default timeout value, which is set with the class configuration setting conf_default_timeout. We then pass the HTTPResponse object that is returned by the HTTPRequest::get() method to the Document object. Then, the Document object uses the data from the HTTPResponse object to build the document data.

Finally, we include the getDocuments() method, which simply returns all the Document objects in an array that we can use for our own purposes as we desire.

The WebBot Document class

Next, we need to create a class called Document that can be used to store document data and field names with their values. To do this we will carry out the following steps:

We first pass the data retrieved by our WebBot class to the Document class.
Then, we define our document's fields and values using regular expression patterns.
Next, add the code from the project_directory/lib/WebBot/Document.php file. Our Document class accepts the HTTPResponse object that is set in WebBot class's execute() method, and the document fields and document ID. In the Document __construct() method, we set our class properties: the HTTP Response object, the fields (and regular expression patterns), the document ID, and the URL that we use to fetch the HTTP response. We then check if the HTTP response successful (status code 200), and if it isn't, we set the error with the status code and message. Lastly, we call the __setFields() method. The __setFields() method parses out and sets the field values from the HTTP response body. For example, if in our fields we have a title field defined as $fields = ['title' => '<title>(.*)</title>'];, the __setFields() method will add the title field and pull all values inside the <title>*</title> tags into the HTML response body. So, if there were two title tags in the URL data, the __setField() method would add the field and its values to the document as follows: ['title'] => [ 0 => 'title x', 1 => 'title y' ] If we have the WebBot class configuration variable—conf_include_document_field_raw_values—set to true, the method will also add the raw values (it will include the tags or other strings as defined in the field's regular expression patterns) as a separate element, for example: ['title'] => [ 0 => 'title x', 1 => 'title y', 'raw' => [ 0 => '<title>title x</title>', 1 => '<title>title y</title>' ] ] The preceding code is very useful when we want to extract specific data (or field values) from URL data. To conclude the Document class, we have two more methods as follows: getFields(): This method simply returns the fields and field values getHttpResponse(): This method can be used to get the HTTPResponse object that was originally set by the WebBot execute() method This will allow us to perform logical requests to internal objects if we wish. The WebBot bootstrap file Now we will create a bootstrap.php file (at project_directory/lib/WebBot/) to load the HTTP package and our WebBot package classes, and set our WebBot class configuration settings: <?php namespace WebBot; /** * Bootstrap file * * @package WebBot */ // load our HTTP package require_once './lib/HTTP/bootstrap.php'; // load our WebBot package classes require_once './lib/WebBot/Document.php'; require_once './lib/WebBot/WebBot.php'; // set unlimited execution time set_time_limit(0); // set default timeout to 30 seconds WebBotWebBot::$conf_default_timeout = 30; // set delay between fetches to 1 seconds WebBotWebBot::$conf_delay_between_fetches = 1; // do not use HTTPS protocol (we'll use HTTP protocol) WebBotWebBot::$conf_force_https = false; // do not include document field raw values WebBotWebBot::$conf_include_document_field_raw_values = false; We use our HTTP package to handle HTTP requests and responses. You have seen in our WebBot class how we use HTTP requests to fetch the data, and then use the HTTP Response object to store the fetched data in the previous two sections. That is why we need to include the bootstrap file to load the HTTP package properly. Then, we load our WebBot package files. Because our WebBot class uses the Document class, we load that class file first. Next, we use the built-in PHP function set_time_limit() to tell the PHP interpreter that we want to allow unlimited execution time for our script. You don't necessarily have to use unlimited execute time. 
However, for testing reasons, we will use unlimited execution time for this example.

Finally, we set the WebBot class configuration settings. These settings are used by the WebBot object internally to make our bot work as we desire. We should always make the configuration settings as simple as possible to help other developers understand them, and we should also include detailed comments in our code to ensure easy usage of the package configuration settings.

We have set up four configuration settings in our WebBot class. These are static and public variables, meaning that we can set them from anywhere after we have included the WebBot class, and once we set them they will remain the same for all WebBot objects unless we change the configuration variables. If you do not understand the PHP keyword static, now would be a good time to research this subject.

The first configuration variable is conf_default_timeout. This variable is used to globally set the default timeout (in seconds) for all WebBot objects we create. The timeout value tells the HTTPRequest class how long it should continue trying to send a request before stopping and deeming it a bad request, or a timed-out request. By default, this configuration setting is set to 30 (seconds).

The second configuration variable, conf_delay_between_fetches, is used to set a time delay (in seconds) between fetches (or HTTP requests). This can be very useful when gathering a lot of data from a website or web service. For example, say you had to fetch one million documents from a website. You wouldn't want to unleash your bot on that type of mission without fetch delays, because you could inevitably cause problems for that website due to the massive number of requests. By default, this value is set to 0, or no delay.

The third WebBot class configuration variable, conf_force_https, when set to true, can be used to force the HTTPS protocol. As mentioned earlier, this will not override any protocol that is already set in the URL. If the conf_force_https variable is set to false, the HTTP protocol will be used. By default, this value is set to false.

The fourth and final configuration variable, conf_include_document_field_raw_values, when set to true, will force the Document object to include the raw values gathered from the fields' regular expression patterns. We discussed this behavior in detail in the WebBot Document class section earlier in this article. By default, this value is set to false.

Summary

In this article you have learned how to get started with building your first bot using HTTP requests and responses.

Resources for Article:

Further resources on this subject:

Installing and Configuring Jobs! and Managing Sections, Categories, and Articles using Joomla! [Article]
Search Engine Optimization in Joomla! [Article]
Adding a Random Background Image to your Joomla! Template [Article]
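To round things off, here is a rough sketch of what the 03_webbot.php file referenced in the directory listing might contain: it loads the bootstrap, defines a couple of URLs and field patterns, runs the bot, and prints the extracted field values. The URLs, the field patterns, and the constructor argument order are illustrative assumptions based on the descriptions above, not the book's exact listing:

<?php
// 03_webbot.php - example usage of the WebBot package (illustrative sketch)

// load the HTTP and WebBot packages plus the configuration settings
require_once './lib/WebBot/bootstrap.php';

use WebBot\WebBot;

// URLs to fetch (assumed examples)
$urls = [
    'www.example.com',
    'www.example.com/about'
];

// document fields: field name => regular expression pattern (assumed examples)
$fields = [
    'title'   => '<title>(.*)</title>',
    'heading' => '<h1>(.*)</h1>'
];

// create the bot, fetch every URL, and build the Document objects
$webbot = new WebBot($urls, $fields);
$webbot->execute();

// display the field values extracted from each fetched document
foreach ($webbot->getDocuments() as $document) {
    foreach ($document->getFields() as $field => $values) {
        echo $field . ': ' . implode(', ', $values) . PHP_EOL;
    }
}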

article-image-routes-and-model-binding-intermediate
Packt
01 Oct 2013
6 min read
Save for later

Routes and model binding (Intermediate)

(For more resources related to this topic, see here.) Getting ready This section builds on the previous section and assumes you have the TodoNancy and TodoNancyTests projects all set up. How to do it... The following steps will help you to handle the other HTTP verbs and work with dynamic routes: Open the TodoNancy Visual Studio solution. Add a new class to the NancyTodoTests project, call it TodosModulesTests, and fill this test code for a GET and a POST route into it: public class TodosModuleTests { private Browser sut; private Todo aTodo; private Todo anEditedTodo; public TodosModuleTests() { TodosModule.store.Clear(); sut = new Browser(new DefaultNancyBootstrapper()); aTodo = new Todo { title = "task 1", order = 0, completed = false }; anEditedTodo = new Todo() { id = 42, title = "edited name", order = 0, completed = false }; } [Fact] public void Should_return_empty_list_on_get_when_no_todos_have_been_posted() { var actual = sut.Get("/todos/"); Assert.Equal(HttpStatusCode.OK, actual.StatusCode); Assert.Empty(actual.Body.DeserializeJson<Todo[]>()); } [Fact] public void Should_return_201_create_when_a_todo_is_posted() { var actual = sut.Post("/todos/", with => with.JsonBody(aTodo)); Assert.Equal(HttpStatusCode.Created, actual.StatusCode); } [Fact] public void Should_not_accept_posting_to_with_duplicate_id() { var actual = sut.Post("/todos/", with => with.JsonBody(anEditedTodo)) .Then .Post("/todos/", with => with.JsonBody(anEditedTodo)); Assert.Equal(HttpStatusCode.NotAcceptable, actual.StatusCode); } [Fact] public void Should_be_able_to_get_posted_todo() { var actual = sut.Post("/todos/", with => with.JsonBody(aTodo) ) .Then .Get("/todos/"); var actualBody = actual.Body.DeserializeJson<Todo[]>(); Assert.Equal(1, actualBody.Length); AssertAreSame(aTodo, actualBody[0]); } private void AssertAreSame(Todo expected, Todo actual) { Assert.Equal(expected.title, actual.title); Assert.Equal(expected.order, actual.order); Assert.Equal(expected.completed, actual.completed); } } The main thing to notice new in these tests is the use of actual.Body.DesrializeJson<Todo[]>(), which takes the Body property of the BrowserResponse type, assumes it contains JSON formatted text, and then deserializes that string into an array of Todo objects. At the moment, these tests will not compile. To fix this, add this Todo class to the TodoNancy project as follows: public class Todo { public long id { get; set; } public string title { get; set; } public int order { get; set; } public bool completed { get; set; } } Then, go to the TodoNancy project, and add a new C# file, call it TodosModule, and add the following code to body of the new class: public static Dictionary<long, Todo> store = new Dictionary<long, Todo>(); Run the tests and watch them fail. Then add the following code to TodosModule: public TodosModule() : base("todos") { Get["/"] = _ => Response.AsJson(store.Values); Post["/"] = _ => { var newTodo = this.Bind<Todo>(); if (newTodo.id == 0) newTodo.id = store.Count + 1; if (store.ContainsKey(newTodo.id)) return HttpStatusCode.NotAcceptable; store.Add(newTodo.id, newTodo); return Response.AsJson(newTodo) .WithStatusCode(HttpStatusCode.Created); }; } The previous code adds two new handlers to our application. One handler for the GET "/todos/" HTTP and the other handler for the POST "/todos/" HTTP. The GET handler returns a list of todo items as a JSON array. The POST handler allows for creating new todos. Re-run the tests and watch them succeed. Now let's take a closer look at the code. 
Firstly, note how adding a handler for the POST HTTP is similar to adding handlers for the GET HTTP. This consistency extends to the other HTTP verbs too. Secondly, note that we pass the "todos"string to the base constructor. This tells Nancy that all routes in this module are related to /todos. Thirdly, notice the this.Bind<Todo>() call, which is Nancy's data binding in action; it deserializes the body of the POST HTTP into a Todo object. Now go back to the TodosModuleTests class and add these tests for the PUT and DELETE HTTP as follows: [Fact] public void Should_be_able_to_edit_todo_with_put() { var actual = sut.Post("/todos/", with => with.JsonBody(aTodo)) .Then .Put("/todos/1", with => with.JsonBody(anEditedTodo)) .Then .Get("/todos/"); var actualBody = actual.Body.DeserializeJson<Todo[]>(); Assert.Equal(1, actualBody.Length); AssertAreSame(anEditedTodo, actualBody[0]); } [Fact] public void Should_be_able_to_delete_todo_with_delete() { var actual = sut.Post("/todos/", with => with.Body(aTodo.ToJSON())) .Then .Delete("/todos/1") .Then .Get("/todos/"); Assert.Equal(HttpStatusCode.OK, actual.StatusCode); Assert.Empty(actual.Body.DeserializeJson<Todo[]>()); } After watching these tests fail, make them pass by adding this code to the constructor of TodosModule: Put["/{id}"] = p => { if (!store.ContainsKey(p.id)) return HttpStatusCode.NotFound; var updatedTodo = this.Bind<Todo>(); store[p.id] = updatedTodo; return Response.AsJson(updatedTodo); }; Delete["/{id}"] = p => { if (!store.ContainsKey(p.id)) return HttpStatusCode.NotFound; store.Remove(p.id); return HttpStatusCode.OK; }; All tests should now pass. Take a look at the routes to the new handlers for the PUT and DELETE HTTP. Both are defined as "/{id}". This will match any route that starts with /todos/ and then something more that appears after the trailing /, such as /todos/42 and the {id} part of the route definition is 42. Notice that both these new handlers use their p argument to get the ID from the route in the p.id expression. Nancy lets you define very flexible routes. You can use any regular expression to define a route. All named parts of such regular expressions are put into the argument for the handler. The type of this argument is DynamicDictionary, which is a special Nancy type that lets you look up parts via either indexers (for example, p["id"]) like a dictionary, or dot notation (for example, p.id) like other dynamic C# objects. There's more... In addition to the handlers for GET, POST, PUT, and DELETE, which we added in this recipe, we can go ahead and add handler for PATCH and OPTIONS by following the exact same pattern. Out of the box, Nancy automatically supports HEAD and OPTIONS for you. To handle the HEAD HTTP request, Nancy will run the corresponding GET handler but only return the headers. To handle OPTIONS, Nancy will inspect which routes you have defined and respond accordingly. Summary In this article we saw how to handle the other HTTP verbs apart from GET and how to work with dynamic routes. We will also saw how to work with JSON data and how to do model binding. Resources for Article: Further resources on this subject: Displaying MySQL data on an ASP.NET Web Page [Article] Layout with Ext.NET [Article] ASP.Net Site Performance: Speeding up Database Access [Article]
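As noted above, handlers for the remaining verbs follow the exact same pattern. A possible sketch of a PATCH handler is shown below; it reuses the article's in-memory store and, to keep things simple, treats the PATCH body as a full Todo replacement (that simplification is an assumption, not something the article prescribes):

// Added to the TodosModule constructor, following the same pattern as Put and Delete
Patch["/{id}"] = p =>
{
    if (!store.ContainsKey(p.id))
        return HttpStatusCode.NotFound;

    // This sketch simply binds the whole body, exactly like the PUT handler does
    var patchedTodo = this.Bind<Todo>();
    store[p.id] = patchedTodo;
    return Response.AsJson(patchedTodo);
};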