
Packt
19 Aug 2014
8 min read

BPMS Components

In this article by Mariano Nicolas De Maio, the author of jBPM6 Developer Guide, we will look into the various components of a Business Process Management (BPM) system. (For more resources related to this topic, see here.)

BPM systems are pieces of software created with the sole purpose of guiding your processes through the BPM cycle. They were originally monolithic systems in charge of every aspect of a process, and process definitions had to be heavily migrated from visual representations to executable form. They've come a long way from there, but we usually still picture that same old monolith whenever a system that runs all your business processes is mentioned. Nowadays, nothing is further from the truth. Modern BPM systems are not monolithic environments; they're coordination agents. If a task is finished, they know what to do next. If a decision needs to be made regarding the next step, they manage it. If a group of tasks can be concurrent, they turn them into parallel tasks. In an efficient process execution, only about 0.1 percent of the time is spent in the process engine itself and 99.9 percent on tasks in external systems, because the engine performs no heavy work of its own, only delegation to other systems. And they can do all this from nothing more than a specific diagram for each process and specific connectors to external components. To empower us to do so, they need to provide us with a structure and a set of tools, which we'll now define in order to understand how BPM systems' internal mechanisms work, and specifically, how jBPM6 implements these tools.

Components of a BPMS

All big systems become manageable when we divide their complexities into smaller pieces, which makes them easier to understand and implement.
BPM systems apply this principle by dividing each function into a different module and interconnecting these modules within a special structure that (in the case of jBPM6) looks something like the following figure:

BPMS' internal structure

Each component in the preceding figure resolves one particular function inside the BPMS architecture, and we'll see a detailed explanation of each one of them.

The execution node

The execution node, seen from a black-box perspective, is the component that receives the process definitions (a description of each step that must be followed; from here on, we'll just refer to them as processes). It then executes all the necessary steps in the established order, keeping track of each step, variable, and decision that has to be taken in each process's execution (we'll start calling these process instances). The execution node along with its modules is shown in the following figure:

The execution node is composed of a set of low-level modules: the semantic module and the process engine.

The semantic module

The semantic module is in charge of defining each specific language's semantics, that is, what each word means and how it will be translated to the internal structures that the process engine can execute. It consists of a series of parsers that understand different languages. It is flexible enough to allow you to extend and support multiple languages; it also allows the user to change the way already defined languages are interpreted for special use cases. It is a common component of most BPMSes out there, and in jBPM6, it allows you to add extensions to the process interpretations of the module, so that you can add your own language parsers and define your very own text-based process definition language or extend existing ones.

The process engine

The process engine is the module in charge of the actual execution of our business processes.
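The idea of a semantic module with pluggable language parsers can be sketched in plain Java. This is an illustrative sketch only; every name in it (ProcessLanguageParser, SemanticModule, and so on) is invented for the example and is not the actual jBPM6 API:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of a pluggable semantic module: each parser understands
// one process-definition language and turns its text into an engine-ready
// object. All names here are invented for this sketch; the real jBPM6 API differs.
interface ProcessLanguageParser {
    ProcessDefinition parse(String sourceText);
}

class ProcessDefinition {
    final String id;
    ProcessDefinition(String id) { this.id = id; }
}

class SemanticModule {
    private final Map<String, ProcessLanguageParser> parsers = new HashMap<>();

    // Registering a parser is how support for a new language is added.
    void register(String languageId, ProcessLanguageParser parser) {
        parsers.put(languageId, parser);
    }

    ProcessDefinition parse(String languageId, String sourceText) {
        ProcessLanguageParser parser = parsers.get(languageId);
        if (parser == null) {
            throw new IllegalArgumentException("No parser for: " + languageId);
        }
        return parser.parse(sourceText);
    }
}

public class SemanticModuleSketch {
    // Wires up a toy one-line language whose whole "definition" is the process id.
    static String demo() {
        SemanticModule module = new SemanticModule();
        module.register("toy", source -> new ProcessDefinition(source.trim()));
        return module.parse("toy", " myProcess ").id;
    }

    public static void main(String[] args) {
        System.out.println(demo());
    }
}
```

The point of the design is the registration step: supporting a new textual process language means adding one more parser, while the engine keeps consuming the same internal structure.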
It creates new process instances and keeps track of their state and their internal steps. Its job is to expose methods to inject process definitions and to create, start, and continue our process instances. Understanding how the process engine works internally is a very important task for the people involved in BPM's stage 4, that is, runtime. This is where different configurations can be used to improve performance, integrate with other systems, provide fault tolerance, clustering, and many other functionalities.

Process engine structure

In the case of jBPM6, process definitions and process instances have similar structures but completely different objectives. Process definitions only show the steps a process should follow and its internal structures, keeping track of all the parameters it should have. Process instances, on the other hand, carry all the information of each process's execution, have a strategy for handling each step of the process, and keep track of all its actual internal values.

Process definition structures

These structures are static representations of our business processes. However, from the process engine's internal perspective, these representations are far from the actual process structure that the engine is prepared to handle. For the engine to get those structures generated, it requires the previously described semantic module to transform those representations into the required object structure. The following figure shows how this parsing process happens as well as the resultant structure:

Using a process modeler, business analysts can draw business processes by dragging-and-dropping different activities from the modeler palette. For jBPM6, there is a web-based modeler designed to draw Scalable Vector Graphics (SVG) files; this is a type of image file that has the particularity of storing the image information as XML text, which is later transformed into valid BPMN2 files.
Note that BPMN2 and jBPM6 are not tied together. On one hand, the BPMN2 standard can be used by other process engine providers such as Activiti or Oracle BPM Suite. On the other hand, thanks to the semantic module, jBPM6 could easily work with other parsers to translate virtually any form of textual representation of a process to its internal structures.

In the internal structures, we have a root component (called Process in our case, which is finally implemented in a class called RuleFlowProcess) that contains all the steps represented inside the process definition. From the jBPM6 perspective, you can manually create these structures using nothing but the objects provided by the engine. Inside the jBPM6-Quickstart project, you will find a code snippet doing exactly this in the createProcessDefinition() method of the ProgrammedProcessExecutionTest class:

//Process Definition
RuleFlowProcess process = new RuleFlowProcess();
process.setId("myProgramaticProcess");

//Start Task
StartNode startTask = new StartNode();
startTask.setId(1);

//Script Task
ActionNode scriptTask = new ActionNode();
scriptTask.setId(2);
DroolsAction action = new DroolsAction();
action.setMetaData("Action", new Action() {
    @Override
    public void execute(ProcessContext context) throws Exception {
        System.out.println("Executing the Action!!");
    }
});
scriptTask.setAction(action);

//End Task
EndNode endTask = new EndNode();
endTask.setId(3);

//Adding the connections to the nodes and the nodes to the processes
new ConnectionImpl(startTask, "DROOLS_DEFAULT", scriptTask, "DROOLS_DEFAULT");
new ConnectionImpl(scriptTask, "DROOLS_DEFAULT", endTask, "DROOLS_DEFAULT");
process.addNode(startTask);
process.addNode(scriptTask);
process.addNode(endTask);

Using this code, we can manually create the object structure to represent the process shown in the following figure:

This process contains three components: a start node, a script node, and an end node.
In this case, the process is in charge of executing a single action; the start and end tasks simply mark a sequence. Even though this is a correct way to create a process definition, it is not the recommended one (unless you're writing a low-level functionality test). Real-world, complex processes are better off being designed in a process modeler, with visual tools, and exported to standard representations such as BPMN 2.0. The output in both cases is the same: a Process object that the jBPM6 runtime understands. For analyzing how process instance structures are created and executed, though, this will do.

Process instance structures

Process instances represent the running processes and all the information being handled by them. Every time you want to start a process execution, the engine will create a process instance. Each particular instance keeps track of all the activities created by its execution. In jBPM6, the structure is very similar to that of the process definitions, with one root structure (the ProcessInstance object) in charge of keeping all the information, and NodeInstance objects to keep track of live nodes. The following code shows a simplification of the methods of the ProcessInstance implementation:

public class RuleFlowProcessInstance implements ProcessInstance {
    public RuleFlowProcess getRuleFlowProcess() { ... }
    public long getId() { ... }
    public void start() { ... }
    public int getState() { ... }
    public void setVariable(String name, Object value) { ... }
    public Collection<NodeInstance> getNodeInstances() { ... }
    public Object getVariable(String name) { ... }
}

After its creation, the engine calls the start() method of ProcessInstance. This method seeks the StartNode of the process and triggers it.
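The triggering chain just described, where each fired node hands control to the next until the instance completes or waits, can be sketched with a minimal, self-contained Java model. This is illustrative only; the Node, LoggingNode, and run names are invented here and are not the real RuleFlowProcessInstance implementation:

```java
import java.util.ArrayList;
import java.util.List;

// Minimal illustration of an instance triggering its nodes in a chain until
// nothing is left to fire. Names are invented for this sketch; the real
// RuleFlowProcessInstance is far richer.
public class TriggerChainSketch {

    interface Node {
        Node trigger(List<String> log);  // fires, returns the next node or null
    }

    static class LoggingNode implements Node {
        private final String name;
        private Node next;  // null means "no outgoing connection"

        LoggingNode(String name) { this.name = name; }
        void connectTo(Node next) { this.next = next; }

        public Node trigger(List<String> log) {
            log.add(name);
            return next;
        }
    }

    // Like start(): seek the start node and keep triggering until a safe
    // state is reached -- here, simply until no node returns a successor.
    static List<String> run(Node startNode) {
        List<String> log = new ArrayList<>();
        Node current = startNode;
        while (current != null) {
            current = current.trigger(log);
        }
        return log;
    }

    public static void main(String[] args) {
        LoggingNode start = new LoggingNode("start");
        LoggingNode script = new LoggingNode("script");
        LoggingNode end = new LoggingNode("end");
        start.connectTo(script);
        script.connectTo(end);
        System.out.println(run(start));  // fires start, then script, then end
    }
}
```

In the real engine, a node may also return no successor while the instance is merely waiting for external data; here, a null successor simply ends the run.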
Depending on the execution path and how the different nodes are connected to each other, other nodes will get triggered until the process reaches a safe state where its execution is completed or awaiting external data. You can access the internal parameters of the process instance through the getVariable and setVariable methods. They provide local information from the particular process instance's scope.

Summary

In this article, we saw the basic components required to set up a BPM system. With these components in place, we are ready to explore, in more detail, the structure and working of a BPM system.

Resources for Article:

Further resources on this subject:
jBPM for Developers: Part 1 [Article]
Configuring JBoss Application Server 5 [Article]
JBoss jBPM Concepts and jBPM Process Definition Language (jPDL) [Article]

Packt
19 Aug 2014
3 min read

The Bootstrap grid system

This article is written by Pieter van der Westhuizen, the author of Bootstrap for ASP.NET MVC. Many websites are reporting an increasing amount of mobile traffic, and this trend is expected to continue over the coming years. The Bootstrap grid system is mobile-first, which means it is designed to target devices with smaller displays and then grow as the display size increases. Fortunately, this is not something you need to be too concerned about, as Bootstrap takes care of most of the heavy lifting. (For more resources related to this topic, see here.)

Bootstrap grid options

Bootstrap 3 introduced a number of predefined grid classes to specify the sizes of columns in your design. These class names are listed in the following table:

Class name | Type of device            | Resolution         | Container width | Column width
col-xs-*   | Phones                    | Less than 768 px   | Auto            | Auto
col-sm-*   | Tablets                   | Larger than 768 px | 750 px          | 60 px
col-md-*   | Desktops                  | Larger than 992 px | 970 px          | 78 px
col-lg-*   | High-resolution desktops  | Larger than 1200 px| 1170 px         | 95 px

The Bootstrap grid is divided into 12 columns. When laying out your web page, keep in mind that all columns combined should total 12. To illustrate this, consider the following HTML code:

<div class="container">
  <div class="row">
    <div class="col-md-3" style="background-color:green;"><h3>green</h3></div>
    <div class="col-md-6" style="background-color:red;"><h3>red</h3></div>
    <div class="col-md-3" style="background-color:blue;"><h3>blue</h3></div>
  </div>
</div>

In the preceding code, we have a container <div> element with one child row <div> element. The row div element in turn has three columns. You will notice that two of the columns have a class name of col-md-3 and one of the columns has a class name of col-md-6. When combined, they add up to 12. The preceding code will work well on all devices with a resolution of 992 pixels or higher.
To preserve the preceding layout on devices with smaller resolutions, you'll need to combine the various CSS grid classes. For example, to allow our layout to work on phones, tablets, and medium-sized desktop displays, change the HTML to the following code:

<div class="container">
  <div class="row">
    <div class="col-xs-3 col-sm-3 col-md-3" style="background-color:green;"><h3>green</h3></div>
    <div class="col-xs-6 col-sm-6 col-md-6" style="background-color:red;"><h3>red</h3></div>
    <div class="col-xs-3 col-sm-3 col-md-3" style="background-color:blue;"><h3>blue</h3></div>
  </div>
</div>

By adding the col-xs-* and col-sm-* class names to the div elements, we ensure that our layout will appear the same across a wide range of device resolutions.

Bootstrap HTML elements

Bootstrap provides a host of different HTML elements that are styled and ready to use. These elements include the following:

Tables
Buttons
Forms
Images

Packt
19 Aug 2014
2 min read

Internet of Things with Xively

In this article by Marco Schwartz, author of the book Arduino Networking, we will visualize the data we recorded with Xively. (For more resources related to this topic, see here.)

Visualizing the recorded data

We are now going to visualize the data we recorded with Xively. Go again to the device page on the Xively website. You should see that some data has been recorded in the different channels, as shown in the following screenshot:

By clicking on one of these channels, you can also display the data graphically. For example, the following screenshot shows the temperature channel after a few measurements:

After a while, you will have more points for the temperature measurements, as shown in the following screenshot:

You can do the same for the humidity measurements; the following screenshot shows them:

Note that by clicking on the time icon, you can change the time axis and display a longer or shorter time range. If you don't see any data being displayed, go back to the Arduino IDE and make sure that the answer coming from the Xively server is a 200 OK message, as we saw in the previous section.

Summary

In this article, we went to the Xively website, learned how to visualize the recorded data graphically, and saw this data arrive in real time.

Resources for Article:

Further resources on this subject:
Avoiding Obstacles Using Sensors [Article]
Hardware configuration [Article]
Writing Tag Content [Article]

Packt
19 Aug 2014
22 min read

Foundation

In this article by Kevin Horek, author of Learning Zurb Foundation, we will be covering the following points:

How to move away from showing clients wireframes and how to create responsive prototypes
Why these prototypes are better and quicker than doing traditional wireframes
The different versions of Foundation
What does Foundation include?
How to use the documentation
How to migrate from an older version
Getting support when you can't figure something out
What browsers does Foundation support?
How to extend Foundation
Our demo site

(For more resources related to this topic, see here.)

Over the last couple of years, showing wireframes to most clients has not really worked well for me. They never seem to quite get it, and if they do, they never seem to fully understand all the functionality through a wireframe. For some people, it is really hard to picture things in their head; they need to see exactly what something will look like and how it will function to truly understand what they are looking at. You should still do a rough wireframe, either on paper, on a whiteboard, or on the computer. Once you and/or your team are happy with these rough wireframes, jump right into the prototype.

Rough wireframing and prototyping

You might think prototyping this early on, when the client has only seen a sitemap, is crazy. The thing is, once you master Foundation, you can build prototypes in about the same time you would spend doing traditional high-quality wireframes in Illustrator or whatever program you currently use. With these prototypes, you can make things clickable and interactive, and they are super fast to edit after you get feedback from the client. With the default Foundation components, you can work out how things will work on a phone, tablet, and desktop/laptop. This way, you can work with your team to fully understand how things will function and start seeing where the project's potential issues will be.
You can then assign people to start dealing with these potential problems early on in the process. When you are ready to show the client, you can walk them through their project on multiple devices and platforms. You can easily show them what content they are going to need and how that content will flow and reflow based on the medium the user is viewing their project on. You should try to get content as early as possible; a lot of companies are hiring content strategists, who handle working with the client to get, write, and rework content to fit the responsive web medium. This allows you to design around a client's content, or at least some of it. We all know that the content a client says they will get you is not always what you get, so you may need to tweak the design to fit the actual content you receive. Making theming changes to accommodate these content changes can be a pain, but with Foundation, you can just reflow part of the page and try some ideas out in the prototype before you put them back into the working development site. Once you have built up a bunch of prototypes, you can easily combine and reuse parts of them to create new designs really quickly for current or new projects. When prototyping, you should keep everything grayscale, without custom fonts or any theme beyond the base Foundation one. These prototypes do not have to look pretty; the less a prototype looks like a full design, the better off you will be. You will have to inform your client that an actual design for their project will be coming and that it will be done after they sign off on this prototype. When you present to the client, you should bring a phone, a tablet, and a laptop to show them how the project will flow on each of these devices. This takes out all the confusion about what happens to the layouts on different screen sizes and on touch and non-touch devices.
It also allows your client and your team to fully understand what they are building and how everything works and functions. Trying to piece together a PDF of wireframes and a Photoshop file to build a responsive web project can be really challenging. With that approach, so many details get lost in translation that you have to keep going back to the client or your team to ask how certain things should work or function. Even worse, you may have to make huge changes to a section close to the end of the project because something was designed without being really thought through, and now your developers have to scramble to make it work within the budget. Prototyping can sort out all of these issues, or at least the major ones, before they arise in the project. With these Foundation prototypes, you keep building on the code at each step of the web building process. Your designer can work with your frontend/backend team to come up with a prototype that everyone is happy with, and commit to being able to build it, before the client sees anything. If you are familiar with version control, you can use it to keep track of your prototypes and collaborate with another person or a team of people. The two most popular version control systems are Git (http://git-scm.com/) and Subversion (http://subversion.apache.org/). Git is the more popular of the two right now; however, if you are working on a project that has been around for a number of years, chances are it will be in Subversion. You can migrate from one to the other, but it might take a bit of work. These prototypes keep your team on the same page right from the beginning of the project and allow the client to sign off on functionality and on how the project will work on different mediums.
Yes, you are spending more time at the beginning getting everyone on the same page and figuring out functionality early on, but this process should sort out all the confusion later in a project and save you time and money at the end. When the client has changes that are out of scope, it is easy to refer back to the prototype and show them how that change will impact what they signed off on. If the change is major enough, you will need to give them a cost for making it happen. You should test your prototypes on an iPhone, an Android phone, an iPad, and your desktop or laptop. I would also figure out what browser your client uses and make sure you test on that as well. If they are using an older version of IE (8 or earlier), you will need to have a conversation with them about how Foundation 4+ does not support IE8. If that support is needed, you will have to come up with a solution to handle this outdated version of IE. Looking at a client's analytics to see what versions of IE their visitors are using will help you decide how to handle older versions of IE; the analytics might tell you that you can drop support for a version altogether. Another great component that is included with Foundation is Modernizr (http://modernizr.com/); it allows you to write conditional JS and/or CSS for a specific situation or browser version. This really can be a lifesaver.

Prototyping smaller projects

While you are learning Foundation, you might think that using Foundation on a smaller project will eat up your entire budget. However, these are the best projects on which to learn Foundation. Basically, you take the prototype to a place where you can show the client the rough look and feel using Foundation. Then, you create a theme board in Photoshop with colors, fonts, photos, and anything else to show the client. This first version will be a grayscale prototype that functions across multiple screen sizes.
Then you can pull up your theme board to show the direction you are thinking of for the look and feel. If you still feel more comfortable doing your designs in Photoshop, there are some really good Photoshop grid templates at http://www.yeedeen.com/downloads/category/30-psd. If you want to create a custom grid that you can take a screenshot of, paste into Photoshop, and then drag your guidelines over to make your own template, refer to http://www.gridlover.net/foundation/.

Prototyping wrap-up

These methods are not perfect and may not always work for you, but you're going to see my workflow and how Foundation can be used on all of your web projects. You will figure out what works with your clients, your projects, and your workflow. You might also have slightly different workflows based on the type of project and/or the project budget. If a client does not see value in having a responsive site, you should decide whether you want to work with that type of client. The Web is not one standard resolution anymore, and it never will be again; if a client does not understand that, you might want to consider not working with them. These types of clients are usually super hard to work with, and your time is better spent on clients that get it, or are willing to let you teach them and trust that you are building their project for the modern Web. Personally, clients that have fought with me against being responsive usually come back a few months later wondering why their site does not work great on their new smartphone or tablet, and wanting you to fix it. So try to address this up front; it will save you grief later on and make your clients happier and their experience better. Like anything, there are exceptions, but make sure you have a contract in place outlining that you are not building the project as responsive, and that it could cost the client a lot of grief and money later to go back and make it responsive.
No matter what you do for a client, you should have a contract in place; this will make sure you both understand what each party is responsible for. Personally, I like to use a modified version of the contract at https://gist.github.com/malarkey/4031110. This contract does not have any legal mumbo jumbo that people do not understand; it is written in plain English and has a slightly less serious tone. Now that we have covered why prototyping with Foundation is faster than doing wireframes or prototypes in Photoshop, let's talk about what comes in the base Foundation framework. Then we will cover which version to install, and go through each file and folder.

Introducing the framework

Before we get started, please refer to the http://foundation.zurb.com/develop/download.html webpage. You will see that there are four versions of Foundation: complete, essentials, custom, and SCSS. Let's talk about each of these versions. The essentials is a smaller version of Foundation that does not include all the components of the framework; it is a barebones version. Once you are familiar with Foundation, you will likely only include the components that you need for a specific project. By only including the components you need, you can speed up the load time of your project, and you do not make the user download files that are not being used. The custom version allows you to pick the components and the basic sizes, colors, radius, and text direction. You will likely use this or the SCSS version of Foundation once you are more comfortable with the framework. The SCSS, or Sass, version of Foundation is the most powerful version. If you do not know what Sass is, it basically gives you additional features on top of CSS that can speed up how you theme your projects.
There is actually another version of Foundation that is not listed on this page, which can be found by hitting the blue Getting Started option in the top-right corner and then clicking on App Guide under Building an App. You can also visit this version at http://foundation.zurb.com/docs/applications.html. This is the Ruby gem version of Foundation, and unless you are building a Ruby on Rails project, you will never use it. Zurb keeps the gem pretty up to date; you will likely get the new version of the gem about a week or two after the other versions come out. Alright, let's get into Foundation. If you have not already, hit the blue Download Everything button below the complete heading on the webpage. We will be building a one-page demo site from the base Foundation theme that you just downloaded. This way, you can see how to take what you are given by default and customize this base theme to look any way you want. We will give this base theme a custom look and feel, and make it look like you are not using a responsive framework at all; the only way to tell will be to view the source of the website. The Zurb components have very little theming applied to them. This means you do not have to worry about heavily overriding the CSS code; you can just start adding additional CSS to customize these components. We will cover how to use all the major components of the framework, so you will gain an advanced understanding of the framework and how you can use it on all your projects going forward. Foundation has been used on small-to-large websites, web apps, at startups, with content management systems, and with enterprise-level applications.

Going over the base theme

The base theme that you download is made up of an HTML index file, a folder of CSS files, JavaScript files, and an empty img folder for images, which are explained in the following points:

The index.html file has a few Foundation components to get you started.
You have a 12-column grid at three screen sizes: small, medium, and large. You can also control how many columns are in the grid, the spacing (also called the gutter) between the columns, and the other grid options. You will soon notice that you have full control over pretty much anything: you can control how things are rendered at any screen size or on any device, whether that device is in portrait or landscape. You also have the ability to render different code on different devices and for different screen sizes. In the CSS folder, there is the un-minified version of Foundation, with the filename foundation.css, and a minified version, with the filename foundation.min.css. If you are not familiar with minification, the minified file has the same code as the foundation.css file, just with all the spacing, comments, and code formatting taken out. This makes the file really hard to read and edit, but the file size is smaller and will speed up your project's load time. Most of the time, minified files have all the code on one really long line. You should use the foundation.css file as a reference but actually include the minified one in your project. The minified version makes debugging and error fixing almost impossible, so we use the un-minified version for development and the minified version for production. The last file in that folder is normalize.css. This file could be called a reset file, but it is more of an alternative to one. It is used to set defaults on a bunch of CSS elements and tries to get all browsers set to the same defaults, the thinking being that every browser will then look and render things the same, so there should not be a lot of browser-specific theming fixes. These types of files do a pretty good job but are not perfect, and you will still have to do little fixes for different browsers, even the modern ones.
We will also cover how to use some extra CSS to take resetting certain elements a little further than the normalize file does for you. This will mainly include showing you how to render form elements and buttons the same across browsers and devices. We will also talk about browser version, platform, OS, and screen resolution detection when we talk about testing. We will be adding our own CSS file to hold our customizations, so if you ever decide to update the framework when a new version comes out, you will not have to worry about your changes being overridden. We will never add to or modify the core files of the framework; I highly recommend you do not do this either. Once we get into Sass, we will cover how you can really start customizing the framework defaults using the custom variables that are built right into Foundation. These variables are one of the reasons that Foundation is the most advanced responsive framework out there. They are super powerful and one of my favorite things about Foundation. Once you understand how to use the variables, you can write your own, or you can extend your setup of Foundation as much as you like. In the JS folder, you will find a few files and some folders. In the Foundation folder, you will find each of the JavaScript components that make Foundation work properly across devices and browsers, and responsively. These JavaScript components can also be used to extend Foundation's functionality even further. You can include only the components that you need in your project. This allows you to keep the framework lean and can help with load times; this is especially useful on mobile. You can also use CSS to theme each of these components to be rendered differently on each device or at different screen sizes. The foundation.min.js file is a minified version of all the files in the Foundation folder.
You can decide, based on your needs, whether you want to include only the JavaScripts you are using on a project or include them all. When you are learning, include them all; when you are comfortable with the framework and ready to make your project live, include only the JavaScripts you are actually using. This helps with load times and can make troubleshooting easier. Many of the Foundation components will not work without their JavaScript included.

The next file you will notice is jquery.js; it will be either in the root of this folder or in the vendor folder if you are using a newer version of Foundation 5. If you are not familiar with jQuery, it is a JavaScript library that makes DOM manipulation, event handling, animation, and Ajax a lot easier, and it makes all of this work across browsers and devices. The next file, in the JS folder or in its vendor folder, is modernizr.js; it helps you write conditional JavaScript and/or CSS to make things work across browsers and to make progressive enhancements. The vendor folder is also where you put third-party JavaScript libraries that you are using on your project: libraries that you either wrote yourself or found online, that are not part of Foundation, and that are not required for Foundation to work properly.
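For example, a production page that only uses a couple of components might include just the core file and those components, and then initialize Foundation. This is a hedged sketch: the js/vendor/ paths and the foundation.topbar.js component name follow Foundation 5's usual layout, but check your own JS folder for the exact filenames your version ships with.

```html
<!-- jQuery and Modernizr are required before Foundation's own scripts -->
<script src="js/vendor/jquery.js"></script>
<script src="js/vendor/modernizr.js"></script>
<!-- The core file first, then only the components this page actually uses -->
<script src="js/foundation/foundation.js"></script>
<script src="js/foundation/foundation.topbar.js"></script>
<script>
  // Kick off every Foundation component present on the page
  $(document).foundation();
</script>
```

While learning, swapping all of this for the single foundation.min.js file is simpler; trimming it down is an optimization for launch day.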
You are taken to a page about that specific part, where you can read the section's overview, view code samples and working examples, and learn how to customize that part of the framework. Each section has a pretty good walkthrough of how to use each piece. Zurb is constantly updating Foundation, so you should check the change log every once in a while at http://foundation.zurb.com/docs/changelog.html. If you need documentation for an older version of Foundation, it is linked at the bottom of the documentation site in the left-hand column; Zurb keeps all the documentation back to Foundation 2. The only reason you will ever need Foundation 2 is if you must support a really, really old version of IE, such as version 7. Foundation never supported IE6, but you will likely never have to worry about that version of IE.

Migrating to a newer version of Foundation

If you have an older version of Foundation, each version has a migration guide; the guide for migrating from Foundation 4 to 5 can be found at http://foundation.zurb.com/docs/upgrading.html. Personally, I have migrated websites and web apps in multiple languages, and as long as Zurb does not change the grid, as they did from Foundation 3 to 4, you can usually just copy and paste the new version of the Foundation CSS, JavaScript, and images over the old one. You will likely have to change some JavaScript calls, do some testing, and make some minor fixes here and there, but it is usually a pretty smooth process, as long as you did not modify the core framework or write a bunch of custom overrides. If you did either of those things, you will be in for a lot of work or a full rebuild of your project, which is another reason you should never modify the core. For old versions of Foundation, or if your version has been heavily modified, it might be easier to start with a fresh copy of Foundation and copy in the parts that you still want to use. Personally, I have done both, and it really depends on the project.
Before you do any migration, make sure you are using some sort of version control, such as Git. If you do not know what Git is, you should look into it; a good place to start is http://git-scm.com/book/en/Getting-Started. Git has saved me from losing code many times. If Git is a little overwhelming right now, at the very least duplicate your project folder as a backup and then copy the new version of the framework over your files. If things are really broken, you can at least still use your old version while you work out the kinks in the new one.

Framework support

At some point, you will likely have questions about something in the framework, or you will be trying to get something to work and for some reason cannot figure it out. Foundation has multiple ways to get support, some of which are listed as follows:

- E-mail
- Twitter
- GitHub
- StackOverflow
- Forums

To visit or get in touch with support, go to http://foundation.zurb.com/support/support.html.

Browser support

Foundation 5 supports the majority of browsers and devices but, like anything modern, it drops support for older browser versions. If you need IE8 or, cringe, IE7 support, you will need to use an older version of Foundation. You can see a full browser and device compatibility list at http://foundation.zurb.com/docs/compatibility.html.

Extending Foundation

Zurb also builds a bunch of other components that usually make their way into Foundation at some point, and that work well with Foundation even when they are not officially part of it. These components range from JavaScript libraries to fonts, icons, templates, and so on. You can visit their playground at http://zurb.com/playground. The playground also has other great resources and tools that you can use on other projects and in other mediums. The things in Zurb's playground can make designing with Foundation a lot easier, even if you are not a designer.
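The fallback described above, duplicating the project folder before copying a new framework version in, is just a couple of commands. A minimal sketch: myproject and foundation-5 are placeholder folder names, and the mkdir/echo lines only simulate an existing project and a freshly downloaded framework so the example is self-contained.

```shell
# Toy stand-ins for a real project and a freshly downloaded framework
mkdir -p myproject foundation-5/css foundation-5/js
echo "/* my overrides */" > myproject/app.css
echo "body { }" > foundation-5/css/foundation.min.css

# 1. Keep a working copy of the whole project before touching anything
cp -R myproject myproject-backup

# 2. Copy the new framework assets over the old ones (never your own files)
cp -R foundation-5/css foundation-5/js myproject/

# myproject now holds the new framework; myproject-backup still runs as-is
```

If the upgrade goes badly, you keep working from the backup while you fix the new copy at your own pace.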
It can take quite a while to find icons, or to turn them into SVGs or fonts for use in your projects, but Zurb provides these in their playground.

Overview of our one-page demo website

The best way to learn the Zurb Foundation responsive framework is to actually build a demo site along with me. You can visit the final demo site we will be building at http://www.learningzurbfoundation.com/demo. We will take the base starter theme that we downloaded and turn it into a one-page demo site. The demo site is built to teach you how to use the components and how they work together; you can also add outside components, but you can try those on your own. It might not look like an ideal site, because I am trying to use as many components as possible to show you how the framework works. Once you complete this site, you will have a deep understanding of the framework, and you can then use it as a starter theme or, at the very least, as a reference for all your Foundation projects going forward.

Summary

In this article, we covered how to rough wireframe and quickly moved into prototyping. We also covered the following points:

- What is included in the base Foundation theme
- The documentation and how to migrate between Foundation versions
- How to get framework support
- Starting to think about browser support
- How you can extend Foundation beyond its defaults
- A quick overview of our one-page demo site

Resources for Article:

Further resources on this subject:

- Quick start – using Foundation 4 components for your first website [Article]
- Zurb Foundation – an Overview [Article]
- Best Practices for Modern Web Applications [Article]
AngularJS Project
Packt
19 Aug 2014
14 min read
This article by Jonathan Spratley, the author of the book Learning Yeoman, covers the steps to create an AngularJS project and preview our application. (For more resources related to this topic, see here.)

Anatomy of an Angular project

Generally, in a single page application (SPA), you create modules that contain a set of functionality, such as a view to display data, a model to store data, and a controller to manage the relationship between the two. Angular incorporates the basic principles of the MVC pattern into how it builds client-side web applications. The major Angular concepts are as follows:

- Templates: A template is used to write plain HTML with the use of directives and JavaScript expressions
- Directives: A directive is a reusable component that extends HTML with custom attributes and elements
- Models: A model is the data that is displayed to the user and manipulated by the user
- Scopes: A scope is the context in which the model is stored and made available to controllers, directives, and expressions
- Expressions: An expression allows access to variables and functions defined on the scope
- Filters: A filter formats data from an expression for visual display to the user
- Views: A view is the visual representation of a model displayed to the user, also known as the Document Object Model (DOM)
- Controllers: A controller is the business logic that manages the view
- Injector: The injector is the dependency injection container that handles all dependencies
- Modules: A module is what configures the injector by specifying what dependencies the module needs
- Services: A service is a piece of reusable business logic that is independent of views
- Compiler: The compiler handles parsing templates and instantiating directives and expressions
- Data binding: Data binding handles keeping model data in sync with the view

Why Angular?
AngularJS is an open source JavaScript framework known as the Superheroic JavaScript MVC Framework, which is actively maintained by the folks over at Google. Angular attempts to minimize the effort of creating web applications by teaching the browser new tricks. This enables developers to use declarative markup (known as directives or expressions) to attach custom logic to DOM elements. Angular includes many built-in features that allow easy implementation of the following:

- Two-way data binding in views using double mustaches {{ }}
- DOM control for repeating, showing, or hiding DOM fragments
- Form submission and validation handling
- Reusable HTML components with self-contained logic
- Access to RESTful and JSONP API services

The major benefit of Angular is the ability to create individual modules that handle specific responsibilities, which come in the form of directives, filters, or services. This enables developers to leverage the functionality of custom modules by passing the module's name into an application's dependencies.

Creating a new Angular project

Now it is time to build a web application that uses some of Angular's features. The application that we will create is based on the scaffold files created by the Angular generator; we will add functionality that enables CRUD operations on a database.

Installing the generator-angular

To install the Yeoman Angular generator, execute the following command:

    $ npm install -g generator-angular

For Karma testing, generator-karma also needs to be installed.

Scaffolding the application

To scaffold a new AngularJS application, create a new folder named learning-yeoman-ch3 and then open a terminal in that location.
Then, execute the following command:

    $ yo angular --coffee

This command will invoke the AngularJS generator to scaffold an AngularJS application, and the output should look similar to the following screenshot:

Understanding the directory structure

Take a minute to become familiar with the directory structure of an Angular application created by the Yeoman generator:

- app: This folder contains all of the front-end code: HTML, JS, CSS, images, and dependencies
  - images: This folder contains images for the application
  - scripts: This folder contains the AngularJS codebase and business logic
    - app.coffee: This contains the application module definition and routing
    - controllers: Custom controllers go here
      - main.coffee: This is the main controller created by default
    - directives: Custom directives go here
    - filters: Custom filters go here
    - services: Reusable application services go here
  - styles: This contains all CSS/LESS/SASS files
    - main.css: This is the main style sheet created by default
  - views: This contains the HTML templates used in the application
    - main.html: This is the main view created by default
  - index.html: This is the application's entry point
- bower_components: This folder contains client-side dependencies
- node_modules: This contains all project dependencies as node modules
- test: This contains all the tests for the application
  - spec: This contains unit tests mirroring the structure of the app/scripts folder
  - karma.conf.coffee: This file contains the Karma runner configuration
- Gruntfile.js: This file contains all project tasks
- package.json: This file contains project information and dependencies
- bower.json: This file contains frontend dependency settings

The directories (directives, filters, and services) get created when the corresponding subgenerator is invoked.

Configuring the application

Let's go ahead and create a configuration file that will allow us to store application-wide properties; we will use the Angular value service to reference the configuration object.
Open up a terminal and execute the following command:

    $ yo angular:value Config

This command will create a configuration service located in the app/scripts/services directory. This service will store global properties for the application. For more information on Angular services, visit http://goo.gl/Q3f6AZ. Now, let's add some settings to the file that we will use throughout the application. Open the app/scripts/services/config.coffee file and replace its contents with the following code:

    'use strict'
    angular.module('learningYeomanCh3App').value('Config', Config =
      baseurl: document.location.origin
      sitetitle: 'learning yeoman'
      sitedesc: 'The tutorial for Chapter 3'
      sitecopy: '2014 Copyright'
      version: '1.0.0'
      email: 'jonniespratley@gmail.com'
      debug: true
      feature:
        title: 'Chapter 3'
        body: 'A starting point for a modern angular.js application.'
        image: 'http://goo.gl/YHBZjc'
      features: [
        title: 'yo'
        body: 'yo scaffolds out a new application.'
        image: 'http://goo.gl/g6LO99'
      ,
        title: 'Bower'
        body: 'Bower is used for dependency management.'
        image: 'http://goo.gl/GpxBAx'
      ,
        title: 'Grunt'
        body: 'Grunt is used to build, preview and test your project.'
        image: 'http://goo.gl/9M00hx'
      ]
      session:
        authorized: false
        user: null
      layout:
        header: 'views/_header.html'
        content: 'views/_content.html'
        footer: 'views/_footer.html'
      menu: [
        title: 'Home', href: '/'
      ,
        title: 'About', href: '/about'
      ,
        title: 'Posts', href: '/posts'
      ]
    )

The preceding code does the following:

- It creates a new Config value service on the learningYeomanCh3App module
- The baseurl property is set to the location the document originated from
- The sitetitle, sitedesc, sitecopy, and version properties are set to default values that will be displayed throughout the application
- The feature property is an object that contains some defaults for displaying a feature on the main page
- The features property is an array of feature objects that will also display on the main page
- The session property is defined with authorized set to false and user set to null; this value gets set to the current authenticated user
- The layout property is an object that defines the paths of the view templates used for the corresponding keys
- The menu property is an array that contains the different pages of the application

Usually, a generic configuration file is created at the top level of the scripts folder for easier access.

Creating the application definition

During the initial scaffold of the application, an app.coffee file is created by Yeoman in the app/scripts directory. The scripts/app.coffee file is the definition of the application; the first argument is the name of the module, and the second argument is an array of dependencies, which come in the form of Angular modules and will be injected into the application upon page load. The app.coffee file is the main entry point of the application and does the following:

- Initializes the application module with its dependencies
- Configures the application's router

Any module dependencies declared inside the dependencies array are the Angular modules that were selected during the initial scaffold.
Consider the following code:

    'use strict'
    angular.module('learningYeomanCh3App', [
      'ngCookies',
      'ngResource',
      'ngSanitize',
      'ngRoute'
    ])
    .config ($routeProvider) ->
      $routeProvider
        .when '/',
          templateUrl: 'views/main.html'
          controller: 'MainCtrl'
        .otherwise
          redirectTo: '/'

The preceding code does the following:

- It defines an Angular module named learningYeomanCh3App with dependencies on the ngCookies, ngSanitize, ngResource, and ngRoute modules
- The .config function on the module configures the application's routes by passing route options to the $routeProvider service

Bower downloaded and installed these modules during the initial scaffold.

Creating the application controller

Generally, when creating an Angular application, you should define a top-level controller that uses the $rootScope service to configure some global, application-wide properties or methods. To create a new controller, use the following command:

    $ yo angular:controller app

This command will create a new AppCtrl controller located in the app/scripts/controllers directory. Open the generated file and replace its contents with the following code:

    'use strict'
    angular.module('learningYeomanCh3App')
      .controller('AppCtrl', ($rootScope, $cookieStore, Config) ->
        $rootScope.name = 'AppCtrl'
        App = angular.copy(Config)
        App.session = $cookieStore.get('App.session')
        window.App = $rootScope.App = App)

The preceding code does the following:

- It creates a new AppCtrl controller with dependencies on the $rootScope, $cookieStore, and Config services
- Inside the controller definition, an App variable is copied from the Config value service
- The session property is set to the App.session cookie, if available

Creating the application views

The Angular generator creates the application's index.html view, which acts as the container for the entire application. The index view is used as the shell for the other views of the application; the router handles mapping URLs to views, which then get injected into the element that declares the ng-view directive.
Modifying the application's index.html

Let's modify the default view that was created by the generator. Open the app/index.html file and add the content right below the HTML comment in that file. The structure of the application will consist of an article element that contains a header, a content section, and a footer:

    <article id="app" ng-controller="AppCtrl" class="container">
      <header id="header" ng-include="App.layout.header"></header>
      <section id="content" class="view-animate-container">
        <div class="view-animate" ng-view=""></div>
      </section>
      <footer id="footer" ng-include="App.layout.footer"></footer>
    </article>

In the preceding code:

- The article element declares the ng-controller directive, binding the AppCtrl controller
- The header element uses an ng-include directive that specifies which template to load; in this case, the header property on the App.layout object
- The div element has the view-animate-container class, which will allow the use of CSS transitions
- The ng-view attribute directive injects the current route's view template into the content
- The footer element uses an ng-include directive to load the footer specified by the App.layout.footer property

Use ng-include to load partials, which allows you to easily swap out templates.

Creating Angular partials

Use the yo angular:view command to create view partials that will be included in the application's main layout. So far, we need to create three partials that the index view (app/index.html) will consume from the App.layout property on the $rootScope service, which defines the location of the templates. Names of view partials typically begin with an underscore (_).

Creating the application's header

The header partial will contain the site title and navigation of the application. Open a terminal and execute the following command:

    $ yo angular:view _header

This command creates a new view template file in the app/views directory.
Open the app/views/_header.html file and add the following contents:

    <div class="header">
      <ul class="nav nav-pills pull-right">
        <li ng-repeat="item in App.menu"
            ng-class="{'active': App.location.path() === item.href}">
          <a ng-href="#{{item.href}}">{{item.title}}</a>
        </li>
      </ul>
      <h3 class="text-muted">{{ App.sitetitle }}</h3>
    </div>

The preceding code does the following:

- It uses the {{ }} data binding syntax to display App.sitetitle in a heading element
- The ng-repeat directive is used to repeat each item in the App.menu array defined on $rootScope

Creating the application's footer

The footer partial will contain the copyright message and current version of the application. Open the terminal and execute the following command:

    $ yo angular:view _footer

This command creates a view template file in the app/views directory. Open the app/views/_footer.html file and add the following markup:

    <div class="app-footer container clearfix">
      <span class="app-sitecopy pull-left">
        {{ App.sitecopy }}
      </span>
      <span class="app-version pull-right">
        {{ App.version }}
      </span>
    </div>

The preceding code does the following:

- It uses a div element to wrap two span elements
- The first span element contains data binding syntax referencing App.sitecopy to display the application's copyright message
- The second span element also contains data binding syntax, referencing App.version to display the application's version

Customizing the main view

The Angular generator creates the main view during the initial scaffold.
Open the app/views/main.html file and replace its contents with the following markup:

    <div class="jumbotron">
      <h1>{{ App.feature.title }}</h1>
      <img ng-src="{{ App.feature.image }}"/>
      <p class="lead">
        {{ App.feature.body }}
      </p>
    </div>
    <div class="marketing">
      <ul class="media-list">
        <li class="media feature" ng-repeat="item in App.features">
          <a class="pull-left" href="#">
            <img alt="{{ item.title }}"
                 src="http://placehold.it/80x80"
                 ng-src="{{ item.image }}"
                 class="media-object"/>
          </a>
          <div class="media-body">
            <h4 class="media-heading">{{item.title}}</h4>
            <p>{{ item.body }}</p>
          </div>
        </li>
      </ul>
    </div>

The preceding code does the following:

- At the top of the view, we use the {{ }} data binding syntax to display the title and body properties declared on the App.feature object
- Next, inside the div.marketing element, a list item is declared with the ng-repeat directive to loop over each item in the App.features property
- Then, using the {{ }} data binding syntax wrapped around the title and body properties of the item being repeated, we output the values

Previewing the application

To preview the application, execute the following command:

    $ grunt serve

Your browser should open, displaying something similar to the following screenshot. Download the AngularJS Batarang (http://goo.gl/0b2GhK) developer tool extension for Google Chrome for debugging.

Summary

In this article, we learned the concepts of AngularJS and how to leverage the framework in a new or existing project.

Resources for Article:

Further resources on this subject:

- Best Practices for Modern Web Applications [article]
- Spring Roo 1.1: Working with Roo-generated Web Applications [article]
- Understand and Use Microsoft Silverlight with JavaScript [article]
Social Media and Magento
Packt
19 Aug 2014
17 min read
Social networks such as Twitter and Facebook are ever popular and, used correctly, can be a great source of new customers for your store. This article by Richard Carter, the author of Learning Magento Theme Development, covers the following topics:

- Integrating a Twitter feed into your Magento store
- Integrating a Facebook Like Box into your Magento store
- Including social share buttons in your product pages
- Integrating product videos from YouTube into the product page

(For more resources related to this topic, see here.)

Integrating a Twitter feed into your Magento store

If you're active on Twitter, it can be worthwhile to let your customers know. While you can't (yet, anyway!) accept payment for your goods through Twitter, it can be a great way to develop a long-term relationship with your store's customers and increase repeat orders. One way you can tell customers you're active on Twitter is to place a Twitter feed that contains some of your recent tweets on your store's home page. While you need to be careful not to get in the way of your store's true content, such as your most recent products and offers, you could add the Twitter feed in the footer of your website.

Creating your Twitter widget

To embed your tweets, you will need to create a Twitter widget. Log in to your Twitter account, navigate to https://twitter.com/settings/widgets, and follow the instructions given there to create a widget that contains your most recent tweets. This will create a block of code for you that looks similar to the following:

    <a class="twitter-timeline" href="https://twitter.com/RichardCarter" data-widget-id="123456789999999999">Tweets by @RichardCarter</a>
    <script>!function(d,s,id){var js,fjs=d.getElementsByTagName(s)[0],p=/^http:/.test(d.location)?'http':'https';if(!d.getElementById(id)){js=d.createElement(s);js.id=id;js.src=p+"://platform.twitter.com/widgets.js";fjs.parentNode.insertBefore(js,fjs);}}(document,"script","twitter-wjs");</script>

Embedding your Twitter feed into a Magento template

Once you have the Twitter widget code, you're ready to embed it into one of Magento's template files. This Twitter feed will be embedded in your store's footer area, so open your theme's /app/design/frontend/default/m18/template/page/html/footer.phtml file and add the highlighted section of the following code:

    <div class="footer-about footer-col">
      <?php echo $this->getLayout()->createBlock('cms/block')->setBlockId('footer_about')->toHtml(); ?>
      <?php
      $_helper = Mage::helper('catalog/category');
      $_categories = $_helper->getStoreCategories();
      if (count($_categories) > 0): ?>
      <ul>
        <?php foreach($_categories as $_category): ?>
        <li>
          <a href="<?php echo $_helper->getCategoryUrl($_category) ?>">
            <?php echo $_category->getName() ?>
          </a>
        </li>
        <?php endforeach; ?>
      </ul>
      <?php endif; ?>
      <a class="twitter-timeline" href="https://twitter.com/RichardCarter" data-widget-id="123456789999999999">Tweets by @RichardCarter</a>
      <script>!function(d,s,id){var js,fjs=d.getElementsByTagName(s)[0],p=/^http:/.test(d.location)?'http':'https';if(!d.getElementById(id)){js=d.createElement(s);js.id=id;js.src=p+"://platform.twitter.com/widgets.js";fjs.parentNode.insertBefore(js,fjs);}}(document,"script","twitter-wjs");</script>
    </div>

The result of the preceding code is a Twitter feed similar to the following one embedded on your store:

As you can see, the Twitter widget is quite cumbersome, so it's wise to be sparing when adding it to your website. Sometimes, a simple Twitter icon that links to your account is all you need!

Integrating a Facebook Like Box into your Magento store

Facebook is one of the world's most popular social networks; with careful integration, you can help drive your customers to your Facebook page and increase long-term interaction. This will drive repeat sales and new potential customers to your store.
One way to integrate your store's Facebook page into your Magento site is to embed your Facebook page's news feed into it.

Getting the embedding code from Facebook

Getting the necessary embedding code from Facebook is relatively easy; navigate to the Facebook Developers website at https://developers.facebook.com/docs/plugins/like-box-for-pages. Here, you are presented with a form. Complete the form to generate your embedding code, entering your Facebook page's URL in the Facebook Page URL field (the following example uses Magento's Facebook page):

Click on the Get Code button on the screen to tell Facebook to generate the code you will need, and you will see a pop-up with the code appear, as shown in the following screenshot:

Adding the embed code into your Magento templates

Now that you have the embedding code from Facebook, you can alter your templates to include the code snippets. The first block of code, for the JavaScript SDK, is required in the header.phtml file in your theme's directory at /app/design/frontend/default/m18/template/page/html/. Add it at the top of the file:

    <div id="fb-root"></div>
    <script>(function(d, s, id) {
      var js, fjs = d.getElementsByTagName(s)[0];
      if (d.getElementById(id)) return;
      js = d.createElement(s); js.id = id;
      js.src = "//connect.facebook.net/en_GB/all.js#xfbml=1";
      fjs.parentNode.insertBefore(js, fjs);
    }(document, 'script', 'facebook-jssdk'));</script>

Next, you can add the second code snippet provided by the Facebook Developers site where you want the Facebook Like Box to appear in your page. For flexibility, you can create a static block in Magento's CMS tool to contain this code and then use Magento's XML layout to assign the static block to a template's sidebar.

Navigate to CMS | Static Blocks in Magento's administration panel and add a new static block by clicking on the Add New Block button at the top-right corner of the screen.
Enter a suitable name for the new static block in the Block Title field and give it the value facebook in the Identifier field. Disable Magento's rich text editor tool by clicking on the Show / Hide Editor button above the Content field. Enter in the Content field the second snippet of code the Facebook Developers website provided, which will be similar to the following:

    <div class="fb-like-box" data-href="https://www.facebook.com/Magento" data-width="195" data-colorscheme="light" data-show-faces="true" data-header="true" data-stream="false" data-show-border="true"></div>

Once complete, your new block should look like the following screenshot:

Click on the Save Block button to create a new block for your Facebook widget. Now that you have created the block, you can alter your Magento theme's layout files to include it in the right-hand column of your store. Next, open your theme's local.xml file located at /app/design/frontend/default/m18/layout/ and add the following highlighted block of XML to it. This will add the static block that contains the Facebook widget:

    <reference name="right">
      <block type="cms/block" name="cms_facebook">
        <action method="setBlockId"><block_id>facebook</block_id></action>
      </block>
      <!-- other layout instructions -->
    </reference>

If you save this change and refresh your Magento store on a page that uses the right-hand column page layout, you will see your new Facebook widget appear in the right-hand column, as shown in the following screenshot:

Including social share buttons in your product pages

Particularly if you are selling to consumers rather than other businesses, you can make use of social share buttons in your product pages to help customers share the products they love with their friends on social networks such as Facebook and Twitter. One of the most convenient ways to do this is to use a third-party service such as AddThis, which also allows you to track your most shared content.
This is useful for learning which products are the most shared within your store!

Styling the product page a little further

Before you begin to integrate the share buttons, you can style your product page to provide a little more layout and distinction between the blocks of content. Open your theme's styles.css file (located at /skin/frontend/default/m18/css/) and append the following CSS to provide a column for the product image and a column for the introductory content of the product:

    .product-img-box, .product-shop {
      float: left;
      margin: 1%;
      padding: 1%;
      width: 46%;
    }

You can also add some additional CSS to style some of the elements that appear on the product view page in your Magento store:

    .product-name {
      margin-bottom: 10px;
    }

    .or {
      color: #888;
      display: block;
      margin-top: 10px;
    }

    .add-to-box {
      background: #f2f2f2;
      border-radius: 10px;
      margin-bottom: 10px;
      padding: 10px;
    }

    .more-views ul {
      list-style-type: none;
    }

If you refresh a product page on your store, you will see the new layout take effect:

Integrating AddThis

Now that you have styled the product page a little, you can integrate AddThis with your Magento store. You will need to get a code snippet from the AddThis website at http://www.addthis.com/get/sharing.
Your snippet will look similar to the following code:

<div class="addthis_toolbox addthis_default_style">
  <a class="addthis_button_facebook_like" fb:like:layout="button_count"></a>
  <a class="addthis_button_tweet"></a>
  <a class="addthis_button_pinterest_pinit" pi:pinit:layout="horizontal"></a>
  <a class="addthis_counter addthis_pill_style"></a>
</div>
<script type="text/javascript">var addthis_config = {"data_track_addressbar":true};</script>
<script type="text/javascript" src="//s7.addthis.com/js/300/addthis_widget.js#pubid=youraddthisusername"></script>

Once the preceding code is included in a page, it produces a social share tool that will look similar to the following screenshot:

Copy the product view template view.phtml from /app/design/frontend/base/default/catalog/product/ to /app/design/frontend/default/m18/catalog/product/ and open your theme's copy of view.phtml for editing. You probably don't want the share buttons to obstruct the page name, add-to-cart area, or the brief description field, so positioning the social share tool underneath those items is usually a good idea.
Locate the snippet in your view.phtml file that has the following code:

<?php if ($_product->getShortDescription()): ?>
  <div class="short-description">
    <h2><?php echo $this->__('Quick Overview') ?></h2>
    <div class="std"><?php echo $_helper->productAttribute($_product, nl2br($_product->getShortDescription()), 'short_description') ?></div>
  </div>
<?php endif; ?>

Below this block, you can insert your AddThis social share tool, highlighted in the following code, so that the code is similar to the following block (the youraddthisusername value on the last line becomes your AddThis account's username):

<?php if ($_product->getShortDescription()): ?>
  <div class="short-description">
    <h2><?php echo $this->__('Quick Overview') ?></h2>
    <div class="std"><?php echo $_helper->productAttribute($_product, nl2br($_product->getShortDescription()), 'short_description') ?></div>
  </div>
<?php endif; ?>
<div class="addthis_toolbox addthis_default_style">
  <a class="addthis_button_facebook_like" fb:like:layout="button_count"></a>
  <a class="addthis_button_tweet"></a>
  <a class="addthis_button_pinterest_pinit" pi:pinit:layout="horizontal"></a>
  <a class="addthis_counter addthis_pill_style"></a>
</div>
<script type="text/javascript">var addthis_config = {"data_track_addressbar":true};</script>
<script type="text/javascript" src="//s7.addthis.com/js/300/addthis_widget.js#pubid=youraddthisusername"></script>

If you want to reuse this block in multiple places throughout your store, consider adding it to a static block in Magento and using Magento's XML layout to add the block as required.

Once again, refresh the product page on your Magento store and you will see the AddThis toolbar appear, as shown in the following screenshot. It allows your customers to begin sharing their favorite products on their preferred social networking sites.

If you can't see your changes, don't forget to clear your caches by navigating to System | Cache Management.
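The static block suggestion above can be sketched in layout XML. Assuming you create a CMS static block with the identifier addthis_toolbar (the identifier and block name below are illustrative, not from the original text), your theme's local.xml could include it on product pages like this:

```xml
<catalog_product_view>
    <reference name="content">
        <!-- block_id must match the Identifier of your CMS static block -->
        <block type="cms/block" name="cms_addthis_toolbar">
            <action method="setBlockId"><block_id>addthis_toolbar</block_id></action>
        </block>
    </reference>
</catalog_product_view>
```

This mirrors the approach used earlier for the Facebook block, only scoped to the product view page rather than every page with a right-hand column.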
If you want to provide some space between other elements and the AddThis toolbar, add the following CSS to your theme's styles.css file:

.addthis_toolbox {
  margin: 10px 0;
}

The resulting product page will now look similar to the following screenshot. You have successfully integrated social sharing tools on your Magento store's product page:

Integrating product videos from YouTube into the product page

It is increasingly common for ecommerce stores to use video in addition to product photography. The use of videos in product pages can help customers overcome any fears that they're not buying the right item and give them a better chance to see the quality of the product they're buying. You can, of course, simply add the HTML provided by YouTube's embedding tool to your product description. However, if you want to insert your video at a specific position within your product page template, you can follow the steps described in this section.

Product attributes in Magento

Magento products are constructed from a number of attributes (different fields), such as product name, description, and price. Magento allows you to customize the attributes assigned to products, so you can add new fields to contain more information on your product. Using this method, you can add a new Video attribute that will contain the video embedding HTML from YouTube and then insert it into your store's product page template.

An attribute value is text or other content that relates to the attribute; for example, the attribute value for the Product Name attribute might be Blue Tshirt. Magento allows you to create different types of attributes:

• Text Field: This is used for short lines of text.
• Text Area: This is used for longer blocks of text.
• Date: This is used to allow a date to be specified.
• Yes/No: This is used to allow a Boolean true or false value to be assigned to the attribute.
• Dropdown: This is used to allow just one selection from a list of options to be selected.
• Multiple Select: This is used for a combination box type to allow one or more selections to be made from a list of options provided.
• Price: This is used to allow a value other than the product's price, special price, tier price, and cost. These fields inherit your store's currency settings.
• Fixed Product Tax: This is required in some jurisdictions for certain types of products (for example, those that require an environmental tax to be added).

Creating a new attribute for your video field

Navigate to Catalog | Attributes | Manage Attributes in your Magento store's control panel. From there, click on the Add New Attribute button located near the top-right corner of your screen:

In the Attribute Properties panel, enter a value in the Attribute Code field that will be used internally in Magento to refer to this attribute. Remember the value you enter here, as you will require it in the next step! We will use video as the Attribute Code value in this example (this is shown in the following screenshot). You can leave the remaining settings in this panel as they are to allow this newly created attribute to be used with all types of products within your store.

In the Frontend Properties panel, ensure that Allow HTML Tags on Frontend is set to Yes (you'll need this enabled to allow you to paste the YouTube embedding HTML into your store and for it to work in the template). This is shown in the following screenshot:

Now select the Manage Labels / Options tab in the left-hand column of your screen and enter a value in the Admin and Default Store View fields in the Manage Titles panel:

Then, click on the Save Attribute button located near the top-right corner of the screen.
Finally, navigate to Catalog | Attributes | Manage Attribute Sets and select the attribute set you wish to add your new video attribute to (we will use the Default attribute set for this example). In the right-hand column of this screen, you will see the list of Unassigned Attributes with the newly created video attribute in this list:     Drag-and-drop this attribute into the Groups column under the General group as shown in the following screenshot:   Click on the Save Attribute Set button at the top-right corner of the screen to add the new video attribute to the attribute set.   Adding a YouTube video to a product using the new attribute   Once you have added the new attribute to your Magento store, you can add a video to a product. Navigate to Catalog | Manage Products and select a product to edit (ensure that it uses one of the attribute sets you added the new video attribute to). The new Video field will be visible under the General tab:   Insert the embedding code from the YouTube video you wish to use on your product page into this field. The embed code will look like the following:   <iframe width="320" height="240" src="https://www.youtube.com/embed/dQw4w9WgXcQ?rel=0" frameborder="0" allowfullscreen></iframe> Once you have done that, click on the Save button to save the changes to the product.   Inserting the video attribute into your product view template   Your final task is to allow the content of the video attribute to be displayed in your product page templates in Magento. 
Open your theme's view.phtml file from /app/design/frontend/default/m18/catalog/product/ and locate the following snippet of code:

<div class="product-img-box">
  <?php echo $this->getChildHtml('media') ?>
</div>

Add the following highlighted code to the preceding code to check whether a video for the product exists and show it if it does:

<div class="product-img-box">
  <?php
  $_videoHtml = $_product->getResource()->getAttribute('video')->getFrontend()->getValue($_product);
  if ($_videoHtml) echo $_videoHtml;
  ?>
  <?php echo $this->getChildHtml('media') ?>
</div>

If you now refresh the product page that you have added a video to, you will see that the video appears in the same column as the product image. This is shown in the following screenshot:

Summary

In this article, we looked at expanding the customization of your Magento theme to include elements from social networking sites. You learned about integrating a Twitter feed and Facebook feed into your Magento store, including social share buttons in your product pages, and integrating product videos from YouTube.

Resources for Article:

Further resources on this subject:

Optimizing Magento Performance — Using HHVM [article]
Installing Magento [article]
Magento Fundamentals for Developers [article]
Shapefiles in Leaflet

Packt
18 Aug 2014
5 min read
This article, written by Paul Crickard III, the author of Leaflet.js Essentials, describes the use of shapefiles in Leaflet. It shows us how a shapefile can be used to create geographical features on a map. This article explains how shapefiles can be used to add a pop up or for styling purposes. (For more resources related to this topic, see here.)

Using shapefiles in Leaflet

A shapefile is the most common geographic file type that you are likely to encounter. A shapefile is not a single file, but rather several files used to create geographic features on a map. When you download a shapefile, you will have .shp, .shx, and .dbf files at a minimum. Together, these files contain the geometry, the index, and a database of attributes. Your shapefile will most likely include a projection file (.prj) that tells the application the projection of the data so that the coordinates make sense. In the examples, you will also have a .shp.xml file that contains metadata and two spatial index files, .sbn and .sbx.

To find shapefiles, you can usually search for open data and a city name. In this example, we will be using a shapefile from ABQ Data, the City of Albuquerque data portal. You can find more data at http://www.cabq.gov/abq-data. When you download a shapefile, it will most likely be in the ZIP format because it will contain multiple files.

To open a shapefile in Leaflet using the leaflet-shpfile plugin, follow these steps:

First, add references to two JavaScript files. The first, leaflet-shpfile, is the plugin, and the second, shp.js, is the shapefile parser that the plugin depends on:

<script src="leaflet.shpfile.js"></script>
<script src="shp.js"></script>

Next, create a new shapefile layer and add it to the map.
Pass the path to the zipped shapefile to the layer:

var shpfile = new L.Shapefile('council.zip');
shpfile.addTo(map);

Your map should display the shapefile as shown in the following screenshot:

Performing the preceding steps will add the shapefile to the map. You will not be able to see any individual feature properties. When you create a shapefile layer, you specify the data, followed by the options. The options are passed to the L.geoJson class. The following code shows you how to add a pop up to your shapefile layer:

var shpfile = new L.Shapefile('council.zip', {onEachFeature: function(feature, layer) {
  layer.bindPopup("<a href='" + feature.properties.WEBPAGE + "'>Page</a><br><a href='" + feature.properties.PICTURE + "'>Image</a>");
}});

In the preceding code, you pass council.zip to the shapefile, and for options, you use the onEachFeature option, which takes a function. In this case, you use an anonymous function and bind the pop up to the layer. In the text of the pop up, you concatenate your HTML with the name of the property you want to display using the format feature.properties.NAME-OF-PROPERTY. To find the names of the properties in a shapefile, you can open .dbf and look at the column headers. However, this can be cumbersome, and you may want to add all of the shapefiles in a directory without knowing their contents. If you do not know the names of the properties for a given shapefile, the following example shows you how to get them and then display them with their values in a pop up:

var shpfile = new L.Shapefile('council.zip', {onEachFeature: function(feature, layer) {
  var holder = [];
  for (var key in feature.properties) {
    holder.push(key + ": " + feature.properties[key] + "<br>");
  }
  var popupContent = holder.join("");
  layer.bindPopup(popupContent);
}});
shpfile.addTo(map);

In the preceding code, you first create an array to hold all of the lines in your pop up, one for each key/value pair. Next, you run a for loop that iterates through the object, grabbing each key and concatenating the key name with the value and a line break.
You push each line into the array and then join all of the elements into a single string. When you use the .join() method, it will separate each element of the array in the new string with a comma. You can pass empty quotes to remove the comma. Lastly, you bind the pop up with the string as the content and then add the shapefile to the map. You now have a map that looks like the following screenshot: The shapefile also takes a style option. You can pass any of the path class options, such as the color, opacity, or stroke, to change the appearance of the layer. The following code creates a red polygon with a black outline and sets it slightly transparent: var shpfile = new L.Shapefile('council.zip',{style:function(feature){return {color:"black",fillColor:"red",fillOpacity:.75}}}); Summary In this article, we learned how shapefiles can be added to a geographical map. We learned how pop ups are added to the maps. This article also showed how these pop ups would look once added to the map. You will also learn how to connect to an ESRI server that has an exposed REST service. Resources for Article: Further resources on this subject: Getting started with Leaflet [Article] Using JavaScript Effects with Joomla! [Article] Quick start [Article]
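The key-iteration logic used to build the pop-up content can also be factored into a standalone helper so it can be reused for every shapefile you load; a minimal sketch (the function name is illustrative, not part of the plugin's API):

```javascript
// Build pop-up HTML from a GeoJSON-style properties object.
// Mirrors the key/value loop shown above; the function name is illustrative.
function buildPopupContent(properties) {
  var holder = [];
  for (var key in properties) {
    if (properties.hasOwnProperty(key)) {
      holder.push(key + ": " + properties[key] + "<br>");
    }
  }
  // join("") concatenates the lines without the default comma separator
  return holder.join("");
}

// Example with a hypothetical feature's properties:
var html = buildPopupContent({ NAME: "District 1", WEBPAGE: "http://example.com" });
// html is "NAME: District 1<br>WEBPAGE: http://example.com<br>"
```

You could then pass function(feature, layer) { layer.bindPopup(buildPopupContent(feature.properties)); } as the onEachFeature option for any shapefile layer.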
Moving the Space Pod Using Touch

Packt
18 Aug 2014
10 min read
This article, written by Frahaan Hussain, Arutosh Gurung, and Gareth Jones, authors of the book Cocos2d-x Game Development Essentials, will cover how to set up touch events within our game. So far, the game has had no user interaction from a gameplay perspective. This article will rectify this by adding touch controls in order to move the space pod and avoid the asteroids. (For more resources related to this topic, see here.)

The topics that will be covered in this article are as follows:

Implementing touch
Single-touch
Multi-touch
Using touch locations
Moving the spaceship when touching the screen

There are two main routes to detect touch provided by Cocos2d-x:

Single-touch: This method detects a single-touch event at any given time, which is what will be implemented in the game as it is sufficient for most gaming circumstances
Multi-touch: This method provides the functionality to detect multiple touches simultaneously; this is great for pinching and zooming; for example, the Angry Birds game uses this technique

Though single-touch is the approach that the game will incorporate, multi-touch will also be covered in this article so that you are aware of how to use it in future games.

The general process for setting up touches

The general process of setting up touch events, be it single-touch or multi-touch, is as follows:

Declare the touch functions.
Declare a listener to listen for touch events.
Assign touch functions to the appropriate touch events:
When the touch has begun
When the touch has moved
When the touch has ended
Implement the touch functions.
Add appropriate game logic/code for when touch events have occurred.

Single-touch events

Single-touch events can be detected at any given time, and for many games, including this one, this is sufficient.
Follow these steps to implement single-touch events into a scene: Declare touch functions in the GameScene.h file as follows: bool onTouchBegan(cocos2d::Touch *touch, cocos2d::Event * event); void onTouchMoved(cocos2d::Touch *touch, cocos2d::Event * event); void onTouchEnded(cocos2d::Touch *touch, cocos2d::Event * event); void onTouchCancelled(cocos2d::Touch *touch, cocos2d::Event * event); This is what the GameScene.h file will look like: The previous functions do the following: The onTouchBegan function detects when a single-touch has occurred, and it returns a Boolean value. This should be true if the event is swallowed by the node and false indicates that it will keep on propagating. The onTouchMoved function detects when the touch moves. The onTouchEnded function detects when the touch event has ended, essentially when the user has lifted up their finger. The onTouchCancelled function detects when a touch event has ended but not by the user; for example, a system alert. The general practice is to call the onTouchEnded method to run the same code, as it can be considered the same event for most games. 
Declare a Boolean variable in the GameScene.h file, which will be true if the screen is being touched and false if it isn't, and also declare a float variable to keep track of the position being touched:

bool isTouching;
float touchPosition;

This is how it will look in the GameScene.h file:

Add the following code in the init() method of GameScene.cpp:

auto listener = EventListenerTouchOneByOne::create();
listener->setSwallowTouches(true);
listener->onTouchBegan = CC_CALLBACK_2(GameScreen::onTouchBegan, this);
listener->onTouchMoved = CC_CALLBACK_2(GameScreen::onTouchMoved, this);
listener->onTouchEnded = CC_CALLBACK_2(GameScreen::onTouchEnded, this);
listener->onTouchCancelled = CC_CALLBACK_2(GameScreen::onTouchCancelled, this);
this->getEventDispatcher()->addEventListenerWithSceneGraphPriority(listener, this);
isTouching = false;
touchPosition = 0;

This is how it will look in the GameScene.cpp file:

There is quite a lot of new code in the previous snippet, so let's run through it line by line:

The first statement declares and initializes a listener for a single-touch
The second statement swallows the touches, preventing layers underneath from detecting them
The third statement assigns our onTouchBegan method to the onTouchBegan listener
The fourth statement assigns our onTouchMoved method to the onTouchMoved listener
The fifth statement assigns our onTouchEnded method to the onTouchEnded listener
The sixth statement assigns our onTouchCancelled method to the onTouchCancelled listener
The seventh statement adds the touch listener to the event dispatcher so the events can be detected
The eighth statement sets the isTouching variable to false as the player won't be touching the screen initially when the game starts
The final statement initializes the touchPosition variable to 0

Implement the touch functions inside the GameScene.cpp file:

bool GameScreen::onTouchBegan(cocos2d::Touch *touch, cocos2d::Event *event)
{
    isTouching = true;
    touchPosition = touch->getLocation().x;
    return true;
}

void GameScreen::onTouchMoved(cocos2d::Touch *touch, cocos2d::Event *event)
{
    // not used for this game
}

void GameScreen::onTouchEnded(cocos2d::Touch *touch, cocos2d::Event *event)
{
    isTouching = false;
}

void GameScreen::onTouchCancelled(cocos2d::Touch *touch, cocos2d::Event *event)
{
    onTouchEnded(touch, event);
}

The following is what the GameScene.cpp file will look like:

Let's go over the touch functions that have been implemented previously:

The onTouchBegan method sets the isTouching variable to true, as the user is now touching the screen, and stores the starting touch position
The onTouchMoved function isn't used in this game, but it has been implemented so that you are aware of the steps for implementing it (as an extra task, you can implement touch movement so that if the user moves his/her finger from one side to the other, the space pod changes direction)
The onTouchEnded method sets the isTouching variable to false as the user is no longer touching the screen
The onTouchCancelled method calls the onTouchEnded method as a touch event has essentially ended

If the game were to be run, the space pod wouldn't move, as the movement code hasn't been implemented yet. It will be implemented within the update() method to move left when the user touches the left half of the screen and right when the user touches the right half of the screen.
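The left/right movement and screen-boundary rules just described can be sketched first as a pure helper function, independent of Cocos2d-x (the function and parameter names are illustrative; in the game, screenWidth would be visibleSize.width and halfWidth would be half the sprite's content width):

```cpp
#include <algorithm>
#include <cassert>

// Compute the pod's x position for the next frame, clamped so that the
// sprite never leaves the screen. touchLeftHalf is true when the touch
// position is in the left half of the screen.
float nextPodX(float currentX, bool touchLeftHalf, float screenWidth,
               float halfWidth, float dt) {
    float speed = 0.50f * screenWidth;             // pixels per second
    float step  = speed * dt;                      // distance covered this frame
    float x     = touchLeftHalf ? currentX - step  // move left
                                : currentX + step; // move right
    // keep the whole sprite on screen
    return std::max(halfWidth, std::min(x, screenWidth - halfWidth));
}
```

For example, with a 480-pixel-wide screen, a 64-pixel-wide sprite (halfWidth of 32), and a dt of 0.125, a touch on the left half moves the pod 30 pixels left, but never past x = 32.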
Add the following code at the end of the update() method:

// check if the screen is being touched
if (true == isTouching)
{
    // check which half of the screen is being touched
    if (touchPosition < visibleSize.width / 2)
    {
        // move the space pod left
        playerSprite->setPositionX(playerSprite->getPosition().x - (0.50 * visibleSize.width * dt));

        // check to prevent the space pod from going off the screen (left side)
        if (playerSprite->getPosition().x <= 0 + (playerSprite->getContentSize().width / 2))
        {
            playerSprite->setPositionX(playerSprite->getContentSize().width / 2);
        }
    }
    else
    {
        // move the space pod right
        playerSprite->setPositionX(playerSprite->getPosition().x + (0.50 * visibleSize.width * dt));

        // check to prevent the space pod from going off the screen (right side)
        if (playerSprite->getPosition().x >= visibleSize.width - (playerSprite->getContentSize().width / 2))
        {
            playerSprite->setPositionX(visibleSize.width - (playerSprite->getContentSize().width / 2));
        }
    }
}

The following is how this will look after adding the code:

The preceding code performs the following steps:

Checks whether the screen is being touched.
Checks which side of the screen is being touched.
Moves the player left or right.
Checks whether the player is going off the screen and, if so, stops him/her from moving.
Repeats the process until the screen is no longer being touched.

This section covered how to set up single-touch events and implement them within the game to be able to move the space pod left and right.

Multi-touch events

Multi-touch is set up in a similar manner: declare the functions and create a listener to actively listen for touch events. Follow these steps to implement multi-touch in a scene:

Firstly, the multi-touch feature needs to be enabled in the AppController.mm file, which is located within the ios folder.
To do so, add the following line below the viewController.view = eaglView; line:

[eaglView setMultipleTouchEnabled: YES];

The following is what the AppController.mm file will look like:

Declare the touch functions within the game scene header file (the functions do the same thing as the single-touch equivalents but enable multiple touches to be detected simultaneously):

void onTouchesBegan(const std::vector<cocos2d::Touch *> &touches, cocos2d::Event *event);
void onTouchesMoved(const std::vector<cocos2d::Touch *> &touches, cocos2d::Event *event);
void onTouchesEnded(const std::vector<cocos2d::Touch *> &touches, cocos2d::Event *event);
void onTouchesCancelled(const std::vector<cocos2d::Touch *> &touches, cocos2d::Event *event);

The following is what the header file will look like:

Add the following code in the init() method of the scene's .cpp file to listen for the multi-touch events. It uses the EventListenerTouchAllAtOnce class, which allows multiple touches to be detected at once:

auto listener = EventListenerTouchAllAtOnce::create();
listener->onTouchesBegan = CC_CALLBACK_2(GameScreen::onTouchesBegan, this);
listener->onTouchesMoved = CC_CALLBACK_2(GameScreen::onTouchesMoved, this);
listener->onTouchesEnded = CC_CALLBACK_2(GameScreen::onTouchesEnded, this);
listener->onTouchesCancelled = CC_CALLBACK_2(GameScreen::onTouchesCancelled, this);
this->getEventDispatcher()->addEventListenerWithSceneGraphPriority(listener, this);

The following is how this will look:

Implement the following multi-touch functions inside the scene's .cpp file:

void GameScreen::onTouchesBegan(const std::vector<cocos2d::Touch *> &touches, cocos2d::Event *event)
{
    CCLOG("Multi-touch BEGAN");
}

void GameScreen::onTouchesMoved(const std::vector<cocos2d::Touch *> &touches, cocos2d::Event *event)
{
    for (int i = 0; i < touches.size(); i++)
    {
        CCLOG("Touch %i: %f", i, touches[i]->getLocation().x);
    }
}

void GameScreen::onTouchesEnded(const std::vector<cocos2d::Touch *> &touches, cocos2d::Event *event)
{
    CCLOG("MULTI TOUCHES HAVE ENDED");
}

void GameScreen::onTouchesCancelled(const std::vector<cocos2d::Touch *> &touches, cocos2d::Event *event)
{
    CCLOG("MULTI TOUCHES HAVE BEEN CANCELLED");
}

The following is how this will look:

The multi-touch functions just print out a log stating that they have occurred, but when touches are moved, their respective x positions are logged. This section covered how to implement the core foundations for multi-touch events so that they can be used for features such as zooming (for example, zooming into a scene in the Clash Of Clans game) and panning. Multi-touch wasn't incorporated within the game as it wasn't needed, but this section is a good starting point for implementing it in future games.

Summary

This article covered how to set up touch listeners to detect touch events for single-touch and multi-touch. We incorporated single-touch within the game to be able to move the space pod left or right, depending on which half of the screen was being touched. Multi-touch wasn't used as the game didn't require it, but its implementation was shown so that it can be used for future projects.

Resources for Article:

Further resources on this subject:

Cocos2d: Uses of Box2D Physics Engine [article]
Cocos2d-x: Installation [article]
Thumping Moles for Fun [article]
Configuring Your Operating System

Packt
18 Aug 2014
3 min read
In this article by William Smith, author of Learning Xamarin Studio, we will configure our operating system. (For more resources related to this topic, see here.) Configuring your Mac To configure your Mac, perform the following steps: From the Apple menu, open System Preferences. Open the Personal group. Select the Security and Privacy item. Open the Firewall tab, and ensure the Firewall is turned off. Configuring your Windows machine To configure your Windows machine, download and install the Xamarin Unified Installer. This installer includes a tool called Xamarin Bonjour Service, which runs Apple's network discovery protocol. Xamarin Bonjour Service requires administrator rights, so you may want to just run the installer as an administrator. Configuring a Windows VM within Mac There is really no difference between using the Visual Studio plugin from a Windows machine or from a VM using software, such as Parallels or VMware. However, if you are running Xamarin Studio on a Retina Macbook Pro, it is advisable to adjust the hardware video settings. Otherwise, some of the elements within Xamarin Studio will render poorly making them difficult to use. The following screenshot contains the recommended video settings: To adjust the settings in Parallels, follow these steps: If your Windows VM is running, shut it down. With your VM shut down, go to Virtual Machine | Configure…. Choose the Hardware tab. Select the Video group. Under Resolution, choose Scaled. Final installation steps Now that the necessary tools are installed and the settings have been enabled, you still need to link to your Xamarin account in Visual Studio, as well as connect Visual Studio to your Mac build machine. To connect to your Xamarin account, follow these steps: In Visual Studio, go to Tools | Xamarin Account…. Click Login to your Xamarin Account and enter your credentials. Once your credentials are verified, you will receive a confirmation message. 
To connect to your Mac build machine, follow these steps: On your Mac, open Spotlight and type Xamarin build host. Choose Xamarin.iOS Build Host under the Applications results group. After the Build Host utility dialog opens click the Pair button to continue. You will be provided with a PIN. Write this down. On your PC, open Xamarin Studio. Go to Tools | Options | Xamarin | iOS Settings. After the Build Host utility opens, click the Continue button. If your Mac and network are correctly configured, you will see your Mac in the list of available build machines. Choose your build machine and click the Continue button. You will be prompted to enter the PIN. Do so, then click the Pair button. Once the machines are paired, you can build, test, and deploy applications using the networked Mac. If for whatever reason you want to unpair these two machines, open the Xamarin.iOS Build Host on your Mac again, and click the Invalidate PIN button. When prompted, complete the process by clicking the Unpair button. Summary In this article, we learned how to configure our operating system. We also learned how to connect to your Mac build machine. Resources for Article: Further resources on this subject: Updating data in the background [Article] Gesture [Article] Making POIApp Location Aware [Article]
Designing the New FlexCast® Management Architecture

Packt
18 Aug 2014
3 min read
This article, written by Luca Dentella, the author of Citrix® XenApp® 7.x Performance Essentials, teaches us how to design a XenApp architecture. It also helps us get acquainted with the key features of the new FlexCast Management Architecture, including its layers. The design of a XenApp infrastructure is a complex task that requires good knowledge of XenApp components. Making the right decisions in the design phase may also greatly help system administrators expand XenApp farms to satisfy new business requirements or to improve the user experience.

In this article, you will learn about the following:

The key features of the new FlexCast Management Architecture
The five-layer model
Sizing each layer's components
Implementing and using Machine Creation Services to deploy new worker servers in minutes
The differences between XenApp 6.5 and 7.5

(For more resources related to this topic, see here.)

FlexCast® Management Architecture

With XenApp 7.5, Citrix adopted the same architecture that was introduced in XenDesktop 5 and refined in XenDesktop 7, namely FlexCast Management Architecture (FMA). FMA is primarily made up of Delivery Controllers and agents. Delivery agents are installed on all virtual and/or physical machines that host and publish resources (named worker servers), while the controllers manage users, resources, and configurations, and store them in a central SQL Server database. Unlike previous versions of XenApp, the delivery agent now communicates only with the controllers in the Site and does not need to access the Site's database or license server directly, as illustrated in the following figure:

Overview of FlexCast infrastructure's elements

The main advantage of this architectural change is that only one underlying infrastructure is now used by both XenApp and XenDesktop. Therefore, the overall solution might include both published applications and virtual desktops, leveraging the same infrastructure elements.
XenApp administrators who have moved to version 7.5 might be a bit confused; there are no more zones or data collectors. By the end of this article, you will find a table that maps concepts and terms from XenApp 6.x to the new ones in XenApp 7.5.

The five-layer model

When designing a new infrastructure, a common mistake is trying to focus on everything at once. A better and suggested approach is to divide the solution into layers and then analyze, size, and make decisions one level at a time. FlexCast Management Architecture can be divided into the following five layers:

  • User layer: This defines user groups and locations
  • Access layer: This defines how users access the resources
  • Resource layer: This defines which resources are assigned to the given users
  • Control layer: This defines the components required to run the solution
  • Hardware layer: This defines the physical elements where the software components run

The power of a FlexCast architecture is that it's extremely flexible; different users can have their own set of policies and resources, but everything is managed by a single, integrated control layer, as shown in the following figure:

The five-layer model of FlexCast Management Architecture

The user layer

The need for a new application delivery solution normally comes from user requirements. The minimum information that must be collected is as follows:

  • What users need access to (business applications, a personalized desktop environment, and so on)
  • What endpoints the users will use (personal devices, thin clients, smartphones, and so on)
  • Where users connect from (the company's internal network, unreliable external networks, and so on)

User groups can access more than one resource at a time. For example, office workers can access a shared desktop environment with some common office applications installed and, in addition, use some hosted applications.
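The sizing mentioned in the list of topics can be illustrated with a toy calculation. The numbers and the workerServers helper below are hypothetical planning inputs invented for illustration; they are not Citrix guidance, just a sketch of how user-layer counts feed the sizing of the resource layer:

```java
// Back-of-the-envelope sizing sketch for worker servers. All inputs here
// (users, sessions per server, headroom) are hypothetical assumptions.
public class SizingSketch {

    // Worker servers needed for a given number of concurrent users,
    // an assumed session density per server, and a spare-capacity fraction.
    public static int workerServers(int concurrentUsers, int sessionsPerServer, double spareFraction) {
        int base = (int) Math.ceil((double) concurrentUsers / sessionsPerServer);
        return (int) Math.ceil(base * (1.0 + spareFraction));
    }

    public static void main(String[] args) {
        // For example: 500 concurrent users, ~60 sessions per worker, 20% headroom.
        System.out.println(workerServers(500, 60, 0.2));
    }
}
```

The same pattern (demand from the user layer, density assumption, headroom) applies when sizing controllers and access components.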
Avoiding Obstacles Using Sensors

Packt
14 Aug 2014
8 min read
In this article by Richard Grimmett, the author of Arduino Robotic Projects, we'll learn the following topics:

  • How to add sensors to your projects
  • How to add a servo to your sensor

(For more resources related to this topic, see here.)

An overview of the sensors

Before you begin, you'll need to decide which sensors to use. You require basic sensors that will return information about the distance to an object, and there are two choices: sonar and infrared. Let's look at each.

Sonar sensors

The sonar sensor uses ultrasonic sound to calculate the distance to an object. The sensor consists of a transmitter and a receiver. The transmitter creates a sound wave that travels out from the sensor, as illustrated in the following diagram:

The device sends out a sound wave 10 times a second. If an object is in the path of these waves, the waves reflect off the object, returning sound waves to the sensor. The sensor measures the returning sound waves, using the time difference between when the sound wave was sent out and when it returned to calculate the distance to the object.

Infrared sensors

Another type of sensor uses infrared (IR) signals to detect distance. An IR sensor also uses both a transmitter and a receiver. The transmitter emits a narrow beam of light, and the receiver picks up this beam. The difference in transit ends up as an angle measurement at the sensor, as shown in the following diagram:

The different angles give you an indication of the distance to the object. Unfortunately, the relationship between the output of the sensor and the distance is not linear, so you'll need to do some calibration to predict the actual distance and its relationship to the output of the sensor.

Connecting a sonar sensor to Arduino

Here is an image of a sonar sensor, the HC-SR04, which works well with Arduino:

These sonar sensors are available at most places that sell Arduino products, including amazon.com.
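The time-of-flight idea described above can be sketched numerically. The NewPing library used later in the article performs this conversion internally; the sketch below is only an illustration of the underlying arithmetic, using the commonly cited speed of sound of roughly 343 m/s (about 0.0343 cm per microsecond):

```java
// Illustration of converting an HC-SR04 echo time to a distance.
// The sensor reports the ROUND-TRIP time of the pulse, so we halve it
// before multiplying by the speed of sound. Values are approximate and
// vary with temperature; this is a sketch, not the library's exact code.
public class SonarDistance {

    // Round-trip echo duration in microseconds -> distance in centimeters.
    public static double echoToCm(double durationMicros) {
        double oneWayMicros = durationMicros / 2.0;
        return oneWayMicros * 0.0343; // ~343 m/s expressed as cm per microsecond
    }

    public static void main(String[] args) {
        // A 583 microsecond round trip corresponds to roughly 10 cm.
        System.out.println(echoToCm(583.0));
    }
}
```

This also explains the sensor's practical limits: very short echo times are hard to measure precisely, and very long ones fade below the receiver's sensitivity.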
In order to connect this sonar sensor to your Arduino, you'll need some female-to-male jumper cables. You'll notice that there are four pins on the sonar sensor. Two of these supply the voltage and current to the sensor. One pin, the Trig pin, triggers the sensor to send out a sound wave. The Echo pin then senses the returning echo. To access the sensor with Arduino, make the following connections using the female-to-male jumper wires:

  Arduino pin    Sensor pin
  5V             Vcc
  GND            GND
  12             Trig
  11             Echo

Accessing the sonar sensor from the Arduino IDE

Now that the hardware is connected, you'll want to download a library that supports this sensor. One of the better libraries for this sensor is available at https://code.google.com/p/arduino-new-ping/. Download the NewPing library and then open the Arduino IDE. You can include the library in the IDE by navigating to Sketch | Import Library | Add Library | Downloads and selecting the NewPing ZIP file. Once you have the library installed, you can access the example program by navigating to File | Examples | NewPing | NewPingExample, as shown in the following screenshot:

You will then see the following code in the IDE:

Now, upload the code to Arduino and open a serial terminal by navigating to Tools | Serial Monitor in the IDE. Initially, you will see characters that make no sense; you need to change the serial port baud rate to 115200 baud by selecting this field in the lower-right corner of Serial Monitor, as shown in the following screenshot:

Now, you should begin to see results that make sense. If you place your hand in front of the sensor and then move it, you should see the distance results change, as shown in the following screenshot:

You can now measure the distance to an object using your sonar sensor.

Connecting an IR sensor to Arduino

One popular choice is the Sharp series of IR sensors.
Here is an image of one of the models, the Sharp 2Y0A02, a unit that provides sensing to a distance of 150 cm:

To connect this unit, you'll need to connect the three pins that are available on the bottom of the sensor. Here is the connection list:

  Arduino pin    Sensor pin
  5V             Vcc
  GND            GND
  A3             Vo

Unfortunately, there are no labels on the unit, but there is a data sheet that you can download from www.phidgets.com/documentation/Phidgets/3522_0_Datasheet.pdf. The following image shows the pins you'll need to connect:

One of the challenges of making this connection is that the female-to-male connection jumpers are too big to connect directly to the sensor. You'll want to order the three-wire cable with connectors along with the sensor; you can then make the connections between this cable and your Arduino device using the male-to-male jumper wires. Once the pins are connected, you are ready to access the sensor via the Arduino IDE.

Accessing the IR sensor from the Arduino IDE

Now, bring up the Arduino IDE. Here is a simple sketch that provides access to the sensor and returns the distance to the object via the serial link:

The sketch is quite simple. The three global variables at the top set the input pin to A3 and provide storage locations for the input value and the distance. The setup() function simply sets the serial port baud rate to 9600 and prints a single line to the serial port. In the loop() function, you first get the value from the A3 input port. The next step is to convert it to a distance based on the voltage. To do this, you need the voltage-to-distance chart for the device; in this case, it is similar to the following diagram:

There are two parts to the curve: the distance up to about 15 centimeters, and the distance from 15 centimeters to 150 centimeters. This simple example ignores distances closer than 15 centimeters and models the distance from 15 centimeters outward as a decaying exponential with the following form:

Thanks to teaching.ericforman.com/how-to-make-a-sharp-ir-sensor-linear/, values that work quite well for a conversion to centimeters are 30431 for the constant and -1.169 for the exponent of this curve. If you open the Serial Monitor tab and place an object in front of the sensor, you'll see the readings for the distance to the object, as shown in the following screenshot:

By the way, when you place the object closer than 15 cm, you will begin to see distances that seem much larger than they should be. This is due to the shape of the voltage-to-distance curve at these much shorter distances. If you truly need very short distances, you'll need a much more complex calculation.

Creating a scanning sensor platform

While knowing the distance in front of your robotic project is normally important, you might want to know other distances around the robot as well. One solution is to hook up multiple sensors, which is quite simple. However, there is another solution that may be a bit more cost effective. To create a scanning sensor of this type, take a sensor of your choice (in this case, I'll use the IR sensor) and mount it on a servo. I like to use a servo L bracket for this, mounted on the servo as follows:

You'll need to connect both the IR sensor and the servo to Arduino. Now, you will need some Arduino code that will move the servo and also take the sensor readings. The following screenshot illustrates the code:

The preceding code simply moves the servo to an angle and then prints out the distance value reported by the IR sensor.
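The decaying-exponential model above (distance = 30431 * reading^-1.169, using the constants quoted from the article) can be checked on its own. The sketch below only reproduces that arithmetic; the actual Arduino sketch performs the same calculation inline:

```java
// Sketch of the article's ADC-reading-to-distance conversion for the
// Sharp 2Y0A02 IR sensor: distance_cm = 30431 * reading^-1.169.
// The constants come from the article and are only valid in the sensor's
// roughly 15-150 cm working range; readings are hypothetical examples.
public class IrDistance {

    public static double toCentimeters(int analogReading) {
        return 30431.0 * Math.pow(analogReading, -1.169);
    }

    public static void main(String[] args) {
        for (int reading : new int[] {100, 200, 400}) {
            System.out.printf("reading=%d -> %.1f cm%n", reading, toCentimeters(reading));
        }
    }
}
```

Note that the function decreases as the reading grows, matching the sensor's behavior: a stronger reflection (higher voltage) means a closer object.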
The specific statements that may be of interest are as follows:

  • servo.attach(servoPin);: This statement attaches the servo control to the pin defined
  • servo.write(angle);: This statement sends the servo to this angle
  • inValue = analogRead(inputPin);: This statement reads the analog input value from this pin
  • distance = 30431 * pow(inValue, -1.169);: This statement translates the reading to a distance in centimeters

If you upload the sketch, open Serial Monitor, and enter different angle values, the servo should move and you should see something like the following screenshot:

Summary

Now that you know how to use sensors to understand the environment, you can create even more complex programs that will sense these barriers and then change the direction of your robot to avoid collisions. You learned how to find distance using sonar sensors and how to connect them to Arduino. You also learned about IR sensors and how they can be used with Arduino.

Resources for Article:

Further resources on this subject:

  • Clusters, Parallel Computing, and Raspberry Pi – A Brief Background [article]
  • Using PVR with Raspbmc [article]
  • Home Security by BeagleBone [article]

Additional SOA Patterns – Supporting Composition Controllers

Packt
14 Aug 2014
10 min read
In this article by Sergey Popov, author of the book Applied SOA Patterns on the Oracle Platform, we will learn some complex SOA patterns realized on two very interesting Oracle products: Coherence and Oracle Event Processing.

(For more resources related to this topic, see here.)

We have to admit that for SOA Suite developers and architects (especially those from the old BPEL school), the Oracle Event Processing platform can be a bit outlandish. This could be the reason why some people oppose service-oriented and event-driven architecture, or see them as different architectural approaches. The situation is aggravated by the abundance of acronyms flying around, such as EDA, EPN, EDN, and CEP. Even here, we use EPN and EDN interchangeably, as Oracle calls it event processing, and generically, it is used in an event delivery network.

The main argument used for distinguishing SOA from EDN is that SOA relies on the application of a standardized contract principle, whereas EDN has to deal with all types of events. This is true, and we have mentioned this fact before. We also mentioned that we have to declare all the event parameters in the form of key-value pairs with their types in <event-type-repository>. We also mentioned that the reference to the event type from the event type repository is not mandatory for a standard EPN adapter, but it's essential when you are implementing a custom inbound adapter in the EPN framework, which is an extremely powerful Java-based feature. As long as it's Java, you can do practically everything!
Just follow the programming flow explained in the Oracle documentation; see the EP Input Adapter Implementation section:

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

import com.bea.wlevs.ede.api.EventProperty;
import com.bea.wlevs.ede.api.EventRejectedException;
import com.bea.wlevs.ede.api.EventType;
import com.bea.wlevs.ede.api.EventTypeRepository;
import com.bea.wlevs.ede.api.RunnableBean;
import com.bea.wlevs.ede.api.StreamSender;
import com.bea.wlevs.ede.api.StreamSink;
import com.bea.wlevs.ede.api.StreamSource;
import com.bea.wlevs.util.Service;

public class cargoBookingAdapter implements RunnableBean, StreamSource, StreamSink {

    static final Log v_logger = LogFactory.getLog("cargoBookingAdapter");

    private String v_eventTypeName;
    private EventType v_eventType;
    private StreamSender v_eventSender;
    private EventTypeRepository v_EvtRep = null;

    public cargoBookingAdapter() {
        super();
    }

    /**
     * Called by the server to pass in the name of the event
     * type to which event data should be bound.
     */
    public void setEventType(String v_EvType) {
        v_eventTypeName = v_EvType;
    }

    /**
     * Called by the server to set an event type repository instance
     * that knows about the event types configured for this application.
     * This repository instance will be used to retrieve an event type
     * instance that will be populated with event data retrieved from
     * the event data file.
     * @param etr The event repository.
     */
    @Service(filter = EventTypeRepository.SERVICE_FILTER)
    public void setEventTypeRepository(EventTypeRepository etr) {
        v_EvtRep = etr;
    }

    /**
     * Retrieves raw event data, creates event type instances from it,
     * and then sends the events to the next stage in the EPN. This
     * method, implemented from the RunnableBean interface, executes
     * when this adapter instance is active.
     */
    public void run() {
        if (v_EvtRep == null) {
            throw new RuntimeException("EventTypeRepository is not set");
        }
        // Get the event type from the repository by using the event type
        // name specified as a property of this adapter in the EPN assembly file.
        v_eventType = v_EvtRep.getEventType(v_eventTypeName);
        if (v_eventType == null) {
            throw new RuntimeException("EventType(" + v_eventTypeName + ") is not found.");
        }
        /*
         * Actual adapter implementation:
         *
         * 1. Create an object and assign to it an event type instance
         *    generated from event data retrieved by the reader.
         *
         * 2. Send the newly created event type instance to a downstream
         *    stage that is listening to this adapter.
         */
    }
}

The presented code snippet demonstrates the injection of a dependency into the Adapter class using the setEventTypeRepository method, implanting the event type definition that is specified in the adapter's configuration. So, it appears that we do, in fact, have data format and model declarations in an XML form for the event, and we put some effort into adapting the inbound flows to our underlying component. Thus, the Adapter Framework is essential in EDN, and dependency injection can be seen here as a form of dynamic Data Model/Format Transformation of the object's data.

Going further, following the SOA reusability principle, a single adapter can be used in multiple event-processing networks, and for that, we can employ the Adapter Factory pattern discussed earlier (although it's not an official SOA pattern, remember?). For that, we will need the Adapter Factory class and the registration of this factory in the EPN assembly file with a dedicated provider name, which we will use further in applications employing instances of this adapter.
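The Adapter Factory idea mentioned above can be sketched in plain Java. This is a hypothetical simplification: the interfaces and class names below are illustrative stand-ins, not the real com.bea.wlevs API, and the actual factory would be registered as an OSGi service in the EPN assembly file rather than instantiated directly:

```java
// Sketch of the Adapter Factory pattern: one registered factory hands out
// independent adapter instances to any event-processing network that needs
// one. All names here are invented for illustration.
public class AdapterFactorySketch {

    interface Adapter {
        String name();
    }

    static class CargoBookingAdapter implements Adapter {
        public String name() {
            return "cargoBookingAdapter";
        }
    }

    // The single provider; each EPN asks it for a fresh, separately
    // configurable adapter instance instead of sharing one object.
    static class CargoBookingAdapterFactory {
        public Adapter create() {
            return new CargoBookingAdapter();
        }
    }

    public static void main(String[] args) {
        CargoBookingAdapterFactory factory = new CargoBookingAdapterFactory();
        Adapter a = factory.create();
        Adapter b = factory.create();
        System.out.println(a.name() + ", distinct instances: " + (a != b));
    }
}
```

The point of the factory is reuse without sharing: each network gets its own adapter instance, while the factory itself is registered exactly once.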
You must follow the OSGi service registry rules if you want to specify additional service properties in the <osgi:service interface="com.bea.wlevs.ede.api.AdapterFactory"> section, and register it only once as an OSGi service.

We also use Asynchronous Queuing and persistent storage to provide reliable delivery of event aggregations to event subscribers, as we demonstrated in the previous paragraph. Talking about aggregation on our CQL processors, we have practically unlimited possibilities to merge and correlate various event sources, such as streams:

<query id="cargoQ1"><![CDATA[
  select * from CragoBookingStream, VoyPortCallStream
  where CragoBookingStream.POL_CODE = VoyPortCallStream.PORT_CODE
    and VoyPortCallStream.PORT_CALL_PURPOSE = "LOAD"
]]></query>

Here, we employ Intermediate Routing (content-based routing) to scale and balance our event processors and also to achieve a desirable level of high availability. Combined, all these basic SOA patterns are represented in the Event-Driven Network, which has Event-Driven Messaging as one of its forms.

Simply put, the entire EDN has one main purpose: effective decoupling of event (message) providers and consumers (the Loose Coupling principle) with reliable event identification and delivery capabilities. So, what is it really? It is a subset of the Enterprise Service Bus compound SOA pattern, and yes, it is a form of an extended Publish-Subscribe pattern.

Some may say that CQL processors (or bean processors) are not completely aligned with the classic ESB pattern. Well, you will not find OSB XQuery in the canonical ESB patterns catalog either; it's just a tool that supports ESB VETRO operations in this matter. In an ESB, we can also call Java beans when it's necessary for message processing; for instance, doing complex sorts in Java Collections is far easier than in XML/XSLT, and it is worth the serialization/deserialization efforts.
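The effect of the CQL join above can be pictured as a plain-Java equivalent. The classes and field names below are illustrative stand-ins for the stream tuples (they are not part of the CEP API), but the filtering logic mirrors the query's where clause, including the original stream's spelling of CragoBookingStream's POL_CODE column:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Hypothetical in-memory analogue of the CQL join: match cargo bookings to
// voyage port calls on the port code, keeping only "LOAD" port calls.
public class StreamJoinSketch {

    static class CargoBooking {
        final String bookingId;
        final String polCode; // port of loading (POL_CODE)
        CargoBooking(String bookingId, String polCode) {
            this.bookingId = bookingId;
            this.polCode = polCode;
        }
    }

    static class VoyPortCall {
        final String voyageId;
        final String portCode;
        final String purpose;
        VoyPortCall(String voyageId, String portCode, String purpose) {
            this.voyageId = voyageId;
            this.portCode = portCode;
            this.purpose = purpose;
        }
    }

    // Equivalent of: where POL_CODE = PORT_CODE and PORT_CALL_PURPOSE = "LOAD"
    public static List<String> join(List<CargoBooking> bookings, List<VoyPortCall> calls) {
        List<String> matches = new ArrayList<>();
        for (CargoBooking b : bookings) {
            for (VoyPortCall c : calls) {
                if (b.polCode.equals(c.portCode) && "LOAD".equals(c.purpose)) {
                    matches.add(b.bookingId + "->" + c.voyageId);
                }
            }
        }
        return matches;
    }

    public static void main(String[] args) {
        List<CargoBooking> bookings = Arrays.asList(new CargoBooking("B1", "NLRTM"));
        List<VoyPortCall> calls = Arrays.asList(
                new VoyPortCall("V1", "NLRTM", "LOAD"),
                new VoyPortCall("V2", "NLRTM", "DISCHARGE"));
        System.out.println(join(bookings, calls)); // only the LOAD call matches
    }
}
```

The CQL engine does this continuously over unbounded streams with windowing; the sketch only shows the matching condition on a finite snapshot.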
In a similar way, EDN extends the classic ESB by providing the following functionalities:

  • Continuous Query Language
  • The ability to operate on multiple streams of disparate data
  • The ability to join incoming data with persisted data
  • The ability to plug in any type of adapter

Combined, these features can cover almost any range of practical challenges, and the logistics example we used here in this article is probably too insignificant for such a powerful event-driven platform; however, for a more insightful look at Oracle CEP, refer to Getting Started with Oracle Event Processing 11g, Alexandre Alves, Robin J. Smith, Lloyd Williams, Packt Publishing. Using exactly the same principles and patterns, you can employ the already existing tools in your arsenal. The world is apparently bigger, and this tool can demonstrate all its strength in the following use cases:

  • As already mentioned, Cablecom Enterprise strives to improve the overall customer experience (not only for VOD). It does so by gathering and aggregating information about user preferences through the purchasing history, watch lists, channel switching, activity in social networks, search history and used meta tags in search, other user experiences from the same target group, upcoming related public events (shows, performances, or premieres), and even the duration of the cursor's position over certain elements of corporate web portals. The task is complex and comprises many activities, including meta tag updates in metadata storage that depend on new findings for predicting trends, and so on; however, here we can tolerate (to some extent) events that aren't processed or are not received.

  • For bank transaction monitoring, we do not have such a luxury. All online events must be accounted for and processed as fast as possible.
If the last transaction with your credit card was an ATM cash withdrawal at Bond Street in London, and 5 minutes later the same card is used to purchase expensive jewellery online with a peculiar delivery address, then someone should flag the card with a possible fraud case and contact the card holder. This is the simplest example that we can provide. When it comes to money laundering tracking cases in our borderless world, a decision-parsing tree like the one in the very first figure in this article, based on all possible correlated events, would require all the pages of this book, and you would need a strong magnifying glass to read it; the stratagem of the web nodes and links would drive even the most worldly-wise spider crazy.

For these use cases, Oracle EPN is simply compulsory, with some spice such as Coherence for cache management, plus adequate hardware. It would be prudent to avoid implementing homebrewed solutions (without dozens of years of relevant experience), and following the SOA design patterns is essential.

Let's now assemble all that we discussed in the preceding paragraphs in one final figure. The installation routines will not give you any trouble; just install OEPE 3.5, download and install the CEP components for Eclipse, and you are done with the client/dev environment. The installation of the server should not pose many difficulties either (http://docs.oracle.com/cd/E28280_01/doc.1111/e14476/install.htm#CEPGS472). When the server is up and running, you can register it in Eclipse (1). The graphical interface will support you in assembling event-handling applications from adapters, processor channels, and event beans; however, knowledge of the internal organization of the XML config and application assembly files (as demonstrated in the earlier code snippets) is always beneficial.
In addition to the Eclipse development environment, you have the CEP server web console (Visualizer) with almost identical functionality, which gives you a quick hand with practically all CQL constructs (2).

Parallel Complex Events Processing


Physics with Bullet

Packt
13 Aug 2014
7 min read
In this article by Rickard Eden, the author of jMonkeyEngine 3.0 CookBook, we will learn how to use physics in games using different physics engines. This article contains the following recipes:

  • Creating a pushable door
  • Building a rocket engine
  • Ballistic projectiles and arrows
  • Handling multiple gravity sources
  • Self-balancing using RotationalLimitMotors

(For more resources related to this topic, see here.)

Using physics in games has become very common and accessible, thanks to open source physics engines such as Bullet. jMonkeyEngine supports both the Java-based jBullet and native Bullet in a seamless manner. jBullet is a Java-based library with JNI bindings to the original C++-based Bullet. jMonkeyEngine is supplied with both of these, and they can be used interchangeably by replacing the libraries in the classpath. No coding change is required. Use jme3-libraries-physics for the jBullet implementation and jme3-libraries-physics-native for Bullet. In general, Bullet is considered to be faster and is full featured.

Physics can be used for almost anything in games, from tin cans that can be kicked around to character animation systems. In this article, we'll try to reflect the diversity of these implementations.

Creating a pushable door

Doors are useful in games. Visually, it is more appealing not to have holes in the walls but doors for the players to pass through. Doors can be used to obscure the view and hide what's behind them for a surprise later. By extension, they can also be used to dynamically hide geometries and increase performance. There is also a gameplay aspect, where doors are used to open new areas to the player and give a sense of progression. In this recipe, we will create a door that can be opened by pushing it, using a HingeJoint class.
This door consists of the following three elements:

  • Door object: This is the visible object
  • Attachment: This is the fixed end of the joint around which the hinge swings
  • Hinge: This defines how the door should move

Getting ready

Simply following the steps in this recipe won't give us anything testable. Since the camera has no physics, the door will just sit there and we will have no way to push it. If you have made any of the recipes that use the BetterCharacterControl class, we already have a suitable test bed for the door. If not, jMonkeyEngine's TestBetterCharacter example can also be used.

How to do it...

This recipe consists of two sections. The first will deal with the actual creation of the door and the functionality to open it. This will be made in the following six steps:

Create a new RigidBodyControl object called attachment with a small BoxCollisionShape. The CollisionShape should normally be placed inside the wall where the player can't run into it. It should have a mass of 0 to prevent it from being affected by gravity. We move it some distance away and add it to the physicsSpace instance, as shown in the following code snippet:

  attachment.setPhysicsLocation(new Vector3f(-5f, 1.52f, 0f));
  bulletAppState.getPhysicsSpace().add(attachment);

Now, create a Geometry class called doorGeometry with a Box shape with dimensions that are suitable for a door, as follows:

  Geometry doorGeometry = new Geometry("Door", new Box(0.6f, 1.5f, 0.1f));

Similarly, create a RigidBodyControl instance with the same dimensions and a mass of 1; add it as a control to the doorGeometry class first and then add it to the physicsSpace of bulletAppState. The following code snippet shows you how to do this:

  RigidBodyControl doorPhysicsBody = new RigidBodyControl(new BoxCollisionShape(new Vector3f(.6f, 1.5f, .1f)), 1);
  bulletAppState.getPhysicsSpace().add(doorPhysicsBody);
  doorGeometry.addControl(doorPhysicsBody);

Now, we're going to connect the two with HingeJoint. Create a new HingeJoint instance called joint, as follows:

  new HingeJoint(attachment, doorPhysicsBody, new Vector3f(0f, 0f, 0f), new Vector3f(-1f, 0f, 0f), Vector3f.UNIT_Y, Vector3f.UNIT_Y);

Then, we set the limit for the rotation of the door and add it to physicsSpace, as follows:

  joint.setLimit(-FastMath.HALF_PI - 0.1f, FastMath.HALF_PI + 0.1f);
  bulletAppState.getPhysicsSpace().add(joint);

Now, we have a door that can be opened by walking into it. It is primitive but effective. Normally, you want doors in games to close after a while. Here, however, once it is opened, it remains open. In order to implement an automatic closing mechanism, perform the following steps:

Create a new class called DoorCloseControl extending AbstractControl. Add a HingeJoint field called joint, along with a setter for it, and a float field called timeOpen.

In the controlUpdate method, we get hingeAngle from HingeJoint and store it in a float variable called angle, as follows:

  float angle = joint.getHingeAngle();

If the angle deviates from zero by more than a small threshold, we increase timeOpen using tpf. Otherwise, timeOpen is reset to 0, as shown in the following code snippet:

  if (angle > 0.1f || angle < -0.1f) timeOpen += tpf;
  else timeOpen = 0f;

If timeOpen is more than 5, we begin by checking whether the door is still open. If it is, we define a speed pointing in the direction opposite to the angle and enable the door's motor to make it move back against its current angle, as follows:

  if (timeOpen > 5) {
    float speed = angle > 0 ? -0.9f : 0.9f;
    joint.enableMotor(true, speed, 0.1f);
    spatial.getControl(RigidBodyControl.class).activate();
  }

If timeOpen is less than 5, we set the speed of the motor to 0:

  joint.enableMotor(true, 0, 1);

Now, we can create a new DoorCloseControl instance in the main class, attach it to the doorGeometry class, and give it the same joint we used previously in the recipe, as follows:

  DoorCloseControl doorControl = new DoorCloseControl();
  doorControl.setHingeJoint(joint);
  doorGeometry.addControl(doorControl);

How it works...

The attachment RigidBodyControl has no mass and will thus not be affected by external forces such as gravity. This means it will stick to its place in the world. The door, however, has mass and would fall to the ground if the attachment didn't keep it up. The HingeJoint class connects the two and defines how they should move in relation to each other. Using Vector3f.UNIT_Y means the rotation will be around the y axis. We set the limit of the joint to be a little more than half pi in each direction. This means it will open almost 100 degrees to either side, allowing the player to step through.

When we try this out, there may be some flickering as the camera passes through the door. To get around this, there are some tweaks that can be applied. We can change the collision shape of the player: making the collision shape bigger will result in the player hitting the wall before the camera gets close enough to clip through. This has to be done considering other constraints in the physics world. You can also consider changing the near clip distance of the camera. Decreasing it will allow things to get closer to the camera before they are clipped. This might have implications for the camera's projection. One thing that will not work is making the door thicker, since the triangles on the side closest to the player are the ones that are clipped through. Making the door thicker will move them even closer to the player.
In DoorCloseControl, we consider the door to be open if hingeAngle deviates from 0 by more than a small threshold. We don't compare against exactly 0 because we can't control the exact rotation of the joint. Instead, we use a rotational force to move it. This is what we do with joint.enableMotor. Once the door has been open for more than five seconds, we tell it to move in the opposite direction. When it's close to 0, we set the desired movement speed to 0. Simply turning off the motor in this case would cause the door to keep moving until it is stopped by an external force. Once we enable the motor, we also need to call activate() on RigidBodyControl or it will not move.
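The decision logic of DoorCloseControl can be distilled into two pure functions, which makes the timing behavior easy to reason about outside the engine. This is a hypothetical simplification: the real class extends AbstractControl and drives a jME HingeJoint each frame, while the names below are invented for illustration:

```java
// Distillation of the auto-close recipe's logic: accumulate how long the
// hinge has been away from rest, then pick a motor speed that pushes the
// door back once it has been open for more than five seconds.
public class DoorCloseLogic {

    // New accumulated open time, given the current hinge angle and the
    // time-per-frame (tpf), using the recipe's 0.1-radian threshold.
    public static float accumulateOpenTime(float timeOpen, float angle, float tpf) {
        return (angle > 0.1f || angle < -0.1f) ? timeOpen + tpf : 0f;
    }

    // Motor speed: push opposite to the current angle after 5 s, else hold.
    public static float motorSpeed(float timeOpen, float angle) {
        if (timeOpen > 5f) {
            return angle > 0 ? -0.9f : 0.9f;
        }
        return 0f;
    }

    public static void main(String[] args) {
        float t = 0f;
        for (int frame = 0; frame < 120; frame++) {   // ~6 s at 20 fps
            t = accumulateOpenTime(t, 1.2f, 0.05f);   // door held open
        }
        System.out.println(motorSpeed(t, 1.2f));      // door should swing back
    }
}
```

Separating the timing rule from the engine callbacks like this also makes the threshold and the five-second delay straightforward to tune and unit test.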

Lightning Introduction

Packt
13 Aug 2014
11 min read
In this article by Jorge González and James Watts, the authors of CakePHP 2 Application Cookbook, we will cover the following recipes:

  • Listing and viewing records
  • Adding and editing records
  • Deleting records
  • Adding a login
  • Including a plugin

(For more resources related to this topic, see here.)

CakePHP is a web framework for rapid application development (RAD), which admittedly covers a wide range of areas and possibilities. However, at its core, it provides a solid architecture for the CRUD (create/read/update/delete) interface. This chapter is a set of quick-start recipes to dive head first into using the framework and build out a simple CRUD around product management. If you want to try the code examples on your own, make sure that you have CakePHP 2.5.2 installed and configured to use a database; you should see something like this:

Listing and viewing records

To begin, we'll need a way to view the products available and also allow the option to select and view any one of those products. In this recipe, we'll create a listing of products as well as a page where we can view the details of a single product.

Getting ready

To go through this recipe, we'll first need a table of data to work with. So, create a table named products using the following SQL statement:

  CREATE TABLE products (
    id VARCHAR(36) NOT NULL,
    name VARCHAR(100),
    details TEXT,
    available TINYINT(1) UNSIGNED DEFAULT 1,
    created DATETIME,
    modified DATETIME,
    PRIMARY KEY(id)
  );

We'll then need some sample data to test with, so now run this SQL statement to insert some products:

  INSERT INTO products (id, name, details, available, created, modified)
  VALUES
  ('535c460a-f230-4565-8378-7cae01314e03', 'Cake', 'Yummy and sweet', 1, NOW(), NOW()),
  ('535c4638-c708-4171-985a-743901314e03', 'Cookie', 'Browsers love cookies', 1, NOW(), NOW()),
  ('535c49d9-917c-4eab-854f-743801314e03', 'Helper', 'Helping you all the way', 1, NOW(), NOW());

Before we begin, we'll also need to create ProductsController.
To do so, create a file named ProductsController.php in app/Controller/ and add the following content:

```php
<?php
App::uses('AppController', 'Controller');

class ProductsController extends AppController {
    public $helpers = array('Html', 'Form');
    public $components = array('Session', 'Paginator');
}
```

Now, create a directory named Products/ in app/View/. Then, in this directory, create one file named index.ctp and another named view.ctp.

How to do it...

Perform the following steps:

1. Define the pagination settings to sort the products by adding the following property to the ProductsController class:

```php
public $paginate = array('limit' => 10);
```

2. Add the following index() method in the ProductsController class:

```php
public function index() {
    $this->Product->recursive = -1;
    $this->set('products', $this->paginate());
}
```

3. Introduce the following content in the index.ctp file that we created:

```php
<h2><?php echo __('Products'); ?></h2>
<table>
<tr>
    <th><?php echo $this->Paginator->sort('id'); ?></th>
    <th><?php echo $this->Paginator->sort('name'); ?></th>
    <th><?php echo $this->Paginator->sort('created'); ?></th>
</tr>
<?php foreach ($products as $product): ?>
<tr>
    <td><?php echo $product['Product']['id']; ?></td>
    <td><?php
        echo $this->Html->link($product['Product']['name'],
            array('controller' => 'products', 'action' => 'view',
            $product['Product']['id']));
    ?></td>
    <td><?php echo $this->Time->nice($product['Product']['created']); ?></td>
</tr>
<?php endforeach; ?>
</table>
<div><?php echo $this->Paginator->counter(array('format' => __('Page {:page} of {:pages}, showing {:current} records out of {:count} total, starting on record {:start}, ending on {:end}'))); ?></div>
<div><?php
    echo $this->Paginator->prev(__('< previous'), array(), null,
        array('class' => 'prev disabled'));
    echo $this->Paginator->numbers(array('separator' => ''));
    echo $this->Paginator->next(__('next >'), array(), null,
        array('class' => 'next disabled'));
?></div>
```

4. Returning to the ProductsController class, add the following view() method to it:

```php
public function view($id) {
    if (!($product = $this->Product->findById($id))) {
        throw new NotFoundException(__('Product not found'));
    }
    $this->set(compact('product'));
}
```

5. Introduce the following content in the view.ctp file:

```php
<h2><?php echo h($product['Product']['name']); ?></h2>
<p><?php echo h($product['Product']['details']); ?></p>
<dl>
    <dt><?php echo __('Available'); ?></dt>
    <dd><?php echo __((bool)$product['Product']['available'] ? 'Yes' : 'No'); ?></dd>
    <dt><?php echo __('Created'); ?></dt>
    <dd><?php echo $this->Time->nice($product['Product']['created']); ?></dd>
    <dt><?php echo __('Modified'); ?></dt>
    <dd><?php echo $this->Time->nice($product['Product']['modified']); ?></dd>
</dl>
```

Now, navigating to /products in your web browser will display a listing of the products, as shown in the following screenshot. Clicking on one of the product names in the listing will redirect you to a detailed view of the product.

How it works...

We started by defining the pagination setting in our ProductsController class, which defines how the results are treated when returning them via the Paginator component (previously defined in the $components property of the controller). Pagination is a powerful feature of CakePHP, which extends well beyond simply defining the number of results or sort order.

We then added an index() method to our ProductsController class, which returns the listing of products. You'll first notice that we accessed a $Product property on the controller. This is the model that we are acting against to read from our table in the database. We didn't create a file or class for this model, as we're taking full advantage of the framework's ability to determine the aspects of our application through convention. Here, as our controller is called ProductsController (plural), it automatically assumes a Product (singular) model. Then, in turn, this Product model assumes a products table in our database.
This alone is a prime example of how CakePHP can speed up development by making use of these conventions.

You'll also notice that in our ProductsController::index() method, we set the $recursive property of the Product model to -1. This tells our model that we're not interested in resolving any associations on it. Associations are other models that are related to this one. This is another powerful aspect of CakePHP: it allows you to determine how models are related to each other, allowing the framework to dynamically generate those links so that you can return results with the relations already mapped out for you. We then called the paginate() method to handle the resolving of the results via the Paginator component.

It's common practice to set the $recursive property of all models to -1 by default. This saves heavy queries in which associations are resolved to return the related models, when that may not be necessary for the query at hand. This can be done via the AppModel class, which all models extend, or via an intermediate class that you may be using in your application.

We also defined a view($id) method, which is used to resolve a single product and display its details. First, you probably noticed that our method receives an $id argument. By default, CakePHP treats the arguments in methods for actions as parts of the URL. So, if we have a product with an ID of 123, the URL would be /products/view/123. In this case, as our argument doesn't have a default value, in its absence from the URL the framework would return an error page stating that an argument was required.

You will also notice that the IDs in our products table aren't sequential numbers. This is because we defined our id field as VARCHAR(36). When doing this, CakePHP will use a Universally Unique Identifier (UUID) instead of an auto_increment value. To use a UUID instead of a sequential ID, you can use either CHAR(36) or BINARY(36).
Here, we used VARCHAR(36), but note that it can be less performant than BINARY(36) due to collation. The use of a UUID versus a sequential ID is usually preferred for obfuscation, as it's harder to guess a string of 36 characters, but also, more importantly, when you use database partitioning, replication, or any other means of distributing or clustering your data.

We then used the findById() method on the Product model to return a product by its ID (the one passed to the action). This is actually a magic method: just as findById() returns a record by its ID, changing the method name to findByAvailable(), for example, would return all records that have the given value in the available field of the table. These methods are very useful for easily performing queries on the associated table without having to define the methods in question.

We also threw NotFoundException for the cases in which a product isn't found for the given ID. This exception is HTTP aware, so it results in an error page if thrown from an action.

Finally, we used the set() method to assign the result to a variable in the view. Here, we're using PHP's compact() function, which converts the given variable names into an associative array, where the key is the variable name and the value is the variable's value. In this case, this provides a $product variable with the results array in the view. You'll find this function useful to rapidly assign variables for your views.

We also created our views using HTML, making use of the Paginator, Html, and Time helpers. You may have noticed that the usage of TimeHelper was not declared in the $helpers property of our ProductsController. This is because CakePHP is able to find and instantiate helpers from the core or the application automatically, when a helper is used in the view for the first time. Then, the sort() method on the Paginator helper helps you create links, which, when clicked on, toggle the sorting of the results by that field.
Likewise, the counter(), prev(), numbers(), and next() methods create the paging controls for the table of products.

You will also notice the structure of the array that we assigned from our controller. This is the common structure of results returned by a model. It can vary slightly, depending on the type of find() performed (in this case, all), but the typical structure would be as follows (using the real data from our products table here):

```php
Array
(
    [0] => Array
        (
            [Product] => Array
                (
                    [id] => 535c460a-f230-4565-8378-7cae01314e03
                    [name] => Cake
                    [details] => Yummy and sweet
                    [available] => true
                    [created] => 2014-06-12 15:55:32
                    [modified] => 2014-06-12 15:55:32
                )
        )
    [1] => Array
        (
            [Product] => Array
                (
                    [id] => 535c4638-c708-4171-985a-743901314e03
                    [name] => Cookie
                    [details] => Browsers love cookies
                    [available] => true
                    [created] => 2014-06-12 15:55:33
                    [modified] => 2014-06-12 15:55:33
                )
        )
    [2] => Array
        (
            [Product] => Array
                (
                    [id] => 535c49d9-917c-4eab-854f-74380101314e03
                    [name] => Helper
                    [details] => Helping you all the way
                    [available] => true
                    [created] => 2014-06-12 15:55:34
                    [modified] => 2014-06-12 15:55:34
                )
        )
)
```

We also used the link() method on the Html helper, which provides us with the ability to perform reverse routing to generate the link to the desired controller and action, with arguments if applicable. Here, the absence of a controller assumes the current controller, in this case, products.

Finally, you may have seen that we used the __() function when writing text in our views. This function is used to handle translations and internationalization of your application. When using this function, if you were to provide your application in various languages, you would only need to handle the translation of your content and would have no need to revise and modify the code in your views. There are other variations of this function, such as __d() and __n(), which allow you to enhance how you handle the translations.
Even if you have no initial intention of providing your application in multiple languages, it's always recommended that you use these functions. You never know: using CakePHP might enable you to create a world-class application, one offered to millions of users around the globe!
Program structure, execution flow, and runtime objects

Packt
30 Jul 2014
8 min read
(For more resources related to this topic, see here.)

Unfortunately, C++ isn't an easy language at all. Sometimes, you can think that it is a limitless language that cannot be learned and understood entirely, but you don't need to worry about that. It is not important to know everything; it is important to use the parts that are required in specific situations correctly. Practice is the best teacher, so it's better to understand how to use as many of the features as needed.

In the examples to come, we will use Charles Simonyi's Hungarian notation. In his 1977 PhD thesis, Meta-Programming: A Software Production Method, he proposed a standard for notation in programming which says that the first letter of a type or variable name should represent its data type. For example, a Test class should be named CTest, where the first letter says that Test is a class. This is good practice because a programmer who is not familiar with the Test data type will immediately know that it is a class. The same standard stands for primitive types, such as int or double: iCount stands for an integer variable Count, while dValue stands for a double variable Value. Using the given prefixes, it is easy to read code even if you are not so familiar with it.

Getting ready

Make sure Visual Studio is up and running.

How to do it...

Now, let's create our first program by performing the following steps and explaining its structure:

1. Create a new C++ console application named TestDemo.
2. Open TestDemo.cpp.
3. Add the following code:

```cpp
#include "stdafx.h"
#include <iostream>

using namespace std;

int _tmain(int argc, _TCHAR* argv[])
{
    cout << "Hello world" << endl;
    return 0;
}
```

How it works...

The structure of a C++ program varies due to different programming techniques. What most programs must have are #include (preprocessor) directives.
The #include <iostream> directive tells the compiler to include the iostream header file, where the available function prototypes reside. It also means that libraries with the functions' implementations should be built into the executable. So, if we want to use some API or function, we need to include an appropriate header file, and we may also have to add an additional input library that contains the function/API implementation. One more important distinction when including files is <header> versus "header": the first form (<>) searches the solution-configured project paths, while the second ("") searches folders relative to the C++ project.

The using command instructs the compiler to use the std namespace. Namespaces are packages with object declarations and function implementations. Namespaces have a very important usage: they minimize ambiguity when including third-party libraries in which the same function name is used in two different packages.

We need to implement a program entry point: the main function. As we said before, we can use main for an ANSI signature, wmain for a Unicode signature, or _tmain, where the compiler will resolve its signature depending on the preprocessor definitions in the project property pages. For a console application, the main function can have the following four different prototypes:

```cpp
int _tmain(int argc, TCHAR* argv[])
void _tmain(int argc, TCHAR* argv[])
int _tmain(void)
void _tmain(void)
```

The first prototype has two arguments, argc and argv. The first argument, argc, or the argument count, says how many arguments are present in the second argument, argv, or the argument values. The argv parameter is an array of strings, where each string represents one command-line argument. The first string in argv is always the current program name. The second prototype is the same as the first one, except for the return type; this means that the main function may or may not return a value. When it does, this value is returned to the OS.
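To make the argc/argv mechanics concrete, here is a minimal sketch using the portable main-style types (char* rather than the Microsoft-specific _TCHAR); collectArgs is a hypothetical helper of our own for illustration, not part of any API:

```cpp
#include <string>
#include <vector>

// Collect the command-line arguments into a vector of strings.
// argv[0] is always the program name; argv[1] .. argv[argc - 1]
// are the arguments supplied on the command line.
std::vector<std::string> collectArgs(int argc, char* argv[])
{
    std::vector<std::string> vArgs;   // Hungarian prefix: v for vector
    for (int i = 0; i < argc; ++i)
        vArgs.push_back(argv[i]);
    return vArgs;
}
```

A main function would simply forward its own parameters, as in collectArgs(argc, argv); a program invoked as prog a b then receives argc equal to 3 and the values {"prog", "a", "b"}.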
The third prototype has no arguments and returns an integer value, while the fourth prototype neither has arguments nor returns a value. It is good practice to use the first format.

The next command uses the cout object. The cout object is the name of the standard output stream in C++, and the meaning of the entire statement is to insert a sequence of characters (in this case, the Hello world sequence of characters) into the standard output stream (which usually corresponds to the screen). The cout object is declared in the iostream standard file within the std namespace. This is why we needed to include that specific file and declare that we will use this specific namespace earlier in our code.

With our usual choice of the int _tmain(int, _TCHAR**) prototype, _tmain returns an integer, so we must specify some int value after the return command, in this case 0. When returning a value to the operating system, 0 usually means success, but this is operating system dependent.

This simple program is very easy to create. We use this simple example to demonstrate the basic program structure and the usage of the main routine as the entry point for every C++ program.

Programs with one thread are executed sequentially, line by line. This is why our program is not user friendly if we place all of the code into one thread: after the user gives input, control is returned to the application, and only then can the application continue its execution. In order to overcome such issues, we can create concurrent threads that handle the user input. That way, the application does not stall and is responsive all the time. After a thread handles its task, it can signal to the application that the user has performed the requested operation.

There's more...

Every time we have an operation that needs to be executed separately from the main execution flow, we have to think about a separate thread.
The simplest example is when we have some calculations and want a progress bar that shows the calculation's progress. If the same thread were responsible for the calculation as well as for updating the progress bar, it probably wouldn't work. This occurs because, when both the work and the UI updates are performed from a single thread, that thread can't interact with OS painting adequately; so, almost always, the UI thread is kept separate from the working threads.

Let's review the following example. Assume that we have created a function that calculates something, for example, sines or cosines of some angle, and we want to display progress at every step of the calculation:

```cpp
void CalculateSomething(int iCount)
{
    int iCounter = 0;
    while (iCounter++ < iCount)
    {
        // make calculation
        // update progress bar
    }
}
```

As the commands are executed one after another inside each iteration of the while loop, the operating system doesn't get the time required to properly update the user interface (in this case, the progress bar), so we would see an empty progress bar; only after the function returns does a fully filled progress bar appear. The solution is to create the progress bar on the main thread and have a separate thread execute the CalculateSomething function; at each iteration step, it should somehow signal the main thread to update the progress bar. As we said before, threads are switched extremely fast on the CPU, so we get the impression that the progress bar is updated at the same time as the calculation is performed.

To conclude, each time we have to run a parallel task, wait for some kind of user input, or wait for an external dependency such as a response from a remote server, we will create a separate thread so that our program won't hang and become unresponsive. In our future examples, we will discuss static and dynamic libraries, so let's say a few words about both.
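Before moving on, the worker-plus-progress pattern described above can be sketched with C++11 facilities. The article itself targets Visual C++ and Win32 threading, so treat std::thread, CalculateInBackground, and the g_iProgress counter below as a portable, illustrative stand-in rather than the book's actual implementation:

```cpp
#include <atomic>
#include <chrono>
#include <thread>

// Shared progress counter; atomic so the UI thread can read it safely
// while the worker thread increments it.
std::atomic<int> g_iProgress(0);

// Worker: performs iCount calculation steps, bumping the shared counter
// after each one (the stand-in for "update progress bar").
void CalculateInBackground(int iCount)
{
    for (int iCounter = 0; iCounter < iCount; ++iCounter) {
        // ...make calculation...
        std::this_thread::sleep_for(std::chrono::milliseconds(1));
        ++g_iProgress;   // signal that one more step is done
    }
}

// Launch the calculation on a separate thread so the caller (the UI
// thread in a real application) stays responsive, then wait for it.
int RunWithProgress(int iCount)
{
    g_iProgress = 0;
    std::thread worker(CalculateInBackground, iCount);
    // A real UI thread would poll g_iProgress here and repaint the bar.
    worker.join();
    return g_iProgress.load();
}
```

After join() returns, the counter equals the number of completed steps, so the UI thread always ends up seeing the fully filled bar once the worker finishes.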
A static library (*.lib) is usually some code placed in separate files and already compiled for future use. We add it to the project when we want to use some of its features. When we wrote #include <iostream> earlier, we instructed the compiler to include a static library where the implementations of the input-output stream functions reside. A static library is built into the executable at compile time, before we actually run the program.

A dynamic library (*.dll) is similar to a static library, but the difference is that it is not resolved during compilation; it is linked later, when we start the program, in other words, at runtime. Dynamic libraries are very useful when we have functions that a lot of programs will use, because we don't need to include these functions in every program; we simply link every program at runtime with one dynamic library. A good example is User32.dll, where the Windows OS has placed the majority of its GUI functions. So, if we create two programs that both have a window (GUI form), we do not need to include CreateWindow in both programs; we simply link User32.dll at runtime, and the CreateWindow API will be available.

Summary

Thus, the article covers the four main paradigms: imperative, declarative, functional (or structural), and object-oriented.

Resources for Article:

Further resources on this subject:

- OpenGL 4.0: Building a C++ Shader Program Class [Article]
- Application Development in Visual C++ - The Tetris Application [Article]
- Building UI with XAML for Windows 8 Using C [Article]