
Interfacing React Components with Angular Applications

Patrick Marabeas
26 Sep 2014
10 min read
There's been talk lately of using React as the view within Angular's MVC architecture. Angular, as we all know, uses dirty checking. As I'll touch on later, it accepts a (minor) performance loss in exchange for the great two-way data binding it provides. React, on the other hand, uses a virtual DOM and only renders the difference, which results in very fast performance.

So, how do we leverage React's performance from our Angular application? Can we retain two-way data flow? And just how significant is the performance increase? The nrg module and demo code can be found over on my GitHub.

The application

To demonstrate communication between the two frameworks, let's build a reusable Angular module (nrg [Angular(ng) + React(r) = energy(nrg)!]) which will render (and re-render) a React component when our model changes. The React component will be composed of an `input` and a `p` element that will display our model and will also update the model on change. To show this, we'll add an `input` and a `p` to our view, bound to the same model. In essence, changes to either input should result in all elements being kept in sync. We'll also add a button to our component that will demonstrate component unmounting on scope destruction.

```javascript
;(function(window, document, angular, undefined) {
  'use strict';

  angular.module('app', ['nrg'])
    .controller('MainController', ['$scope', function($scope) {
      $scope.text = 'This is set in Angular';

      $scope.destroy = function() {
        $scope.$destroy();
      };
    }]);

})(window, document, angular);
```

- `data-component` specifies the React component we want to mount.
- `data-ctrl` (optional) specifies the controller we want to inject into the directive. This allows specific components to be accessible on `scope` itself rather than `scope.$parent`.
- `data-ng-model` is the model we are going to pass between our Angular controller and our React view.
```html
<div data-ng-controller="MainController">
  <!-- React component -->
  <div data-component="reactComponent" data-ctrl="" data-ng-model="text">
    <!-- <input /> -->
    <!-- <button></button> -->
    <!-- <p></p> -->
  </div>

  <!-- Angular view -->
  <input type="text" data-ng-model="text" />
  <p>{{text}}</p>
</div>
```

As you can see, the view has meaning when using Angular to render React components. `<div data-component="reactComponent" data-ctrl="" data-ng-model="text"></div>` has meaning when compared to `<div id="reactComponent"></div>`, which requires referencing a script file to see what component (and settings) will be mounted on that element.

The Angular module - nrg.js

The main functions of this reusable Angular module will be to:

- Specify the DOM element that the component should be mounted onto.
- Render the React component when changes have been made to the model.
- Pass the scope and element attributes to the component.
- Unmount the React component when the Angular scope is destroyed.

The skeleton of our module looks like this:

```javascript
;(function(window, document, angular, React, undefined) {
  'use strict';

  angular.module('nrg', [])
```

To keep our code modular and extensible, we'll create a factory that will house our component functions, which are currently just `render` and `unmount`.

```javascript
  .factory('ComponentFactory', [function() {
    return {
      render: function() {

      },
      unmount: function() {

      }
    }
  }])
```

This will be injected into our directive.

```javascript
  .directive('component', ['$controller', 'ComponentFactory', function($controller, ComponentFactory) {
    return {
      restrict: 'EA',
```

If a controller has been specified on the element via `data-ctrl`, then inject the `$controller` service. As mentioned earlier, this will allow scope variables and functions used within the React component to be accessible directly on `scope`, rather than `scope.$parent` (the controller also doesn't need to be declared in the view with `ng-controller`).

```javascript
      controller: function($scope, $element, $attrs) {
        return ($attrs.ctrl) ?
          $controller($attrs.ctrl, {$scope: $scope, $element: $element, $attrs: $attrs}) :
          null;
      },
```

Here's an isolated scope with two-way binding on `data-ng-model`.

```javascript
      scope: {
        ngModel: '='
      },
      link: function(scope, element, attrs) {
        // Calling ComponentFactory.render() & watching ng-model
      }
    }
  }]);

})(window, document, angular, React);
```

ComponentFactory

Fleshing out the `ComponentFactory`, we'll need to know how to render and unmount components.

```javascript
React.renderComponent(
  ReactComponent component,
  DOMElement container,
  [function callback]
)
```

As such, we'll need to pass the component we wish to mount (`component`), the container we want to mount it in (`element`), and any properties (`attrs` and `scope`) we wish to pass to the component. This render function will be called every time the model is updated, so the updated scope will be pushed through each time. According to the React documentation, "If the React component was previously rendered into container, this (React.renderComponent) will perform an update on it and only mutate the DOM as necessary to reflect the latest React component."

```javascript
  .factory('ComponentFactory', [function() {
    return {
      render: function(component, element, scope, attrs) {
        // If you have name-spaced your components, you'll want to
        // specify that here - or pass it in via an attribute etc
        React.renderComponent(window[component]({
          scope: scope,
          attrs: attrs
        }), element[0]);
      },
      unmount: function(element) {
        React.unmountComponentAtNode(element[0]);
      }
    }
  }])
```

Component directive

Back in our directive, we can now set up when we are going to call these two functions.
```javascript
  link: function(scope, element, attrs) {
    // Collect the element's attrs in a nice usable object
    var attributes = {};
    angular.forEach(element[0].attributes, function(a) {
      attributes[a.name.replace('data-', '')] = a.value;
    });

    // Render the component when the directive loads
    ComponentFactory.render(attrs.component, element, scope, attributes);

    // Watch the model and re-render the component
    scope.$watch('ngModel', function() {
      ComponentFactory.render(attrs.component, element, scope, attributes);
    }, true);

    // Unmount the component when the scope is destroyed
    scope.$on('$destroy', function() {
      ComponentFactory.unmount(element);
    });
  }
```

This implements dirty checking to see if the model has been updated. I haven't played around too much to see if there's a notable difference in performance between this and using a broadcast/listener. That said, to get a listener working as expected, you will need to wrap the render call in a `$timeout` to push it to the bottom of the stack to ensure scope is updated.

```javascript
  scope.$on('renderMe', function() {
    $timeout(function() {
      ComponentFactory.render(attrs.component, element, scope, attributes);
    });
  });
```

The React component

We can now build our React component, which will use the model we defined as well as inform Angular of any updates it performs.

```javascript
/** @jsx React.DOM */
;(function(window, document, React, undefined) {
  'use strict';

  window.reactComponent = React.createClass({
```

This is the content that will be rendered into the container. The properties that we passed to the component (`{ scope: scope, attrs: attrs }`) when we called `React.renderComponent` back in our component directive are now accessible via `this.props`.
```javascript
    render: function() {
      return (
        <div>
          <input
            type='text'
            value={this.props.scope.ngModel}
            onChange={this.handleChange} />
          <button onClick={this.deleteScope}>Destroy Scope</button>
          <p>{this.props.scope.ngModel}</p>
        </div>
      )
    },
```

Via the `onChange` event, we can call for Angular to run a digest, just as we normally would, but accessing scope via `this.props`:

```javascript
    handleChange: function(event) {
      var _this = this;
      this.props.scope.$apply(function() {
        _this.props.scope.ngModel = event.target.value;
      });
    },
```

Here we deal with the `deleteScope` click event. The controller is accessible via `scope.$parent`. If we had injected a controller into the component directive, its contents would be accessible directly on `scope`, just as `ngModel` is.

```javascript
    deleteScope: function() {
      this.props.scope.$parent.destroy();
    }
  });

})(window, document, React);
```

The result

Putting this code together (you can view the completed code on GitHub, or see it in action), we end up with:

- Two input elements, both of which update the model. Any changes in either our Angular application or our React view will be reflected in both.
- A React component button that calls a function in our MainController, destroying the scope and also resulting in the unmounting of the component.

Pretty cool. But where is my perf increase!?

This is obviously too small an application for anything to be gained by throwing your view over to React. To demonstrate just how much faster applications can be (by using React as the view), we'll throw a kitchen sink's worth of randomly generated data at it. 5000 bits, to be precise. Now, it should be stated that you probably have a pretty questionable UI if you have this much data binding going on. Misko Hevery has a great response regarding Angular's performance on StackOverflow. In summary, humans are:

- Slow: Anything faster than 50ms is imperceptible to humans and thus can be considered "instant".
- Limited: You can't really show more than about 2000 pieces of information to a human on a single page.
Anything more than that is really bad UI, and humans can't process it anyway. Basically, know Angular's limits and your user's limits! That said, the following performance test was certainly accentuated on mobile devices. Though, on the flip side, UI should be simpler on mobile.

Brute force performance demonstration

```javascript
;(function(window, document, angular, undefined) {
  'use strict';

  angular.module('app')
    .controller('NumberController', ['$scope', function($scope) {
      $scope.numbers = [];

      ($scope.numGen = function() {
        for (var i = 0; i < 5000; i++) {
          $scope.numbers[i] = Math.floor(Math.random() * (999999999999999 - 1)) + 1;
        }
      })();
    }]);

})(window, document, angular);
```

Angular ng-repeat

```html
<div data-ng-controller="NumberController">
  <button ng-click="numGen()">Refresh Data</button>
  <table>
    <tr ng-repeat="number in numbers">
      <td>{{number}}</td>
    </tr>
  </table>
</div>
```

There was definitely lag felt as the numbers were loaded in and refreshed. From start to finish, this took around 1.5 seconds.

React component

```html
<div data-ng-controller="NumberController">
  <button ng-click="numGen()">Refresh Data</button>
  <div data-component="numberComponent" data-ng-model="numbers"></div>
</div>
```

```javascript
;(function(window, document, React, undefined) {

  window.numberComponent = React.createClass({
    render: function() {
      var rows = this.props.scope.ngModel.map(function(number) {
        return (
          <tr>
            <td>{number}</td>
          </tr>
        );
      });

      return (
        <table>{rows}</table>
      );
    }
  });

})(window, document, React);
```

So that just happened. 270 milliseconds from start to finish. Around 80% faster!

Conclusion

So, should you go rewrite all those Angular modules as React components? Probably not. It really comes down to the application you are developing and how dependent you are on OSS. It's definitely possible that a handful of complex modules could put your application in the realm of "feeling a tad sluggish", but it should be remembered that perceived performance is all that matters to the user.
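Since perceived performance is what we're optimizing, it's worth a quick note on how wall-clock figures like the 1.5 seconds and 270 milliseconds above can be captured. Here's a minimal sketch; the `measure` helper is illustrative only and not part of the nrg module or the demo code:

```javascript
// Illustrative timing helper - not part of the nrg module or demo code.
function measure(label, fn) {
  var start = Date.now(); // performance.now() gives sub-ms resolution in browsers
  fn();
  var elapsed = Date.now() - start;
  console.log(label + ' took ' + elapsed + 'ms');
  return elapsed;
}

// Wrap the work you care about; here, generating the 5000 random numbers.
var elapsed = measure('number generation', function() {
  var numbers = [];
  for (var i = 0; i < 5000; i++) {
    numbers[i] = Math.floor(Math.random() * (999999999999999 - 1)) + 1;
  }
});
```

In the demo, the interesting span runs from the refresh click through to the end of the re-render, so in practice the start and stop points would bracket `numGen()` and the subsequent render rather than just the data generation.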
Altering the manner in which content is loaded could end up being a better investment of time. Users will definitely feel performance increases sooner on mobile websites, however, which is certainly something to keep in mind. The nrg module and demo code can be found over on my GitHub.

Visit our JavaScript page for more JavaScript content and tutorials!

About the author

A guest post by Patrick Marabeas, a freelance frontend developer who loves learning and working with cutting-edge web technologies. He spends much of his free time developing Angular modules, such as ng-FitText, ng-Slider, ng-YouTubeAPI, and ng-ScrollSpy. You can follow him on Twitter: @patrickmarabeas.

Introduction to Oracle BPM

Packt
25 Sep 2014
23 min read
In this article, Vivek Acharya, the author of the book Oracle BPM Suite 12c Modeling Patterns, discusses various patterns in Oracle BPM. The spectrum of the book covers patterns and scenarios from strategic alignment (the goals and strategy model) to flow patterns; from conversation, collaboration, and correlation patterns to exception handling and management patterns; and from human task patterns and business-IT collaboration to adaptive case management, along with many more advanced patterns and features. It is an easy-to-follow yet comprehensive guide that demystifies the strategies and best practices for developing BPM solutions on the Oracle BPM 12c platform. All patterns are complemented with code examples to help you discover how they work, and real-life scenarios and examples touch upon many facets of BPM, offering solutions to various BPM modeling and implementation challenges.

(For more resources related to this topic, see here.)

In this section, we will cover the dynamic task assignment pattern in detail, while a glimpse of the strategic alignment pattern is offered.

Dynamic task assignment pattern

Business processes need human interactions for approvals, exception management, interaction with a running process, group collaboration, document reviews, case management, and so on. There are various requirements for enabling human interaction with a running BPMN process, which are accomplished using human tasks in Oracle BPM. Human tasks are implemented by human workflow services that are responsible for the routing of tasks, assignment of tasks to users, and so on. When a token arrives at the user task, control is passed from the BPMN process to Oracle Human Workflow, and the token remains with the human task until it's completed. As callbacks are defined implicitly, once the workflow is complete, control is returned to the user task and the token moves ahead to subsequent flow activities.
When analyzing the human task patterns, it's evident that they comprise various features, such as assignment patterns, routing patterns, participant list builder patterns, and so on. Oracle BPM offers these patterns as templates for developers, which can be extended. The participants are logically grouped using stages that define the assignment with a routing slip. The assignment can be grouped based on assignment modeling patterns: sequential, parallel, or hybrid. Routing of tasks to participants is governed by the routing pattern, which is a behavioral pattern. It defines whether one participant needs to act on a task, many participants need to act in sequence, all the participants need to act in parallel, or participants need not act at all. The participants of the task are built using participant list building patterns, such as approval groups, management chains, and so on. The assignment of participants is performed by a task assignment mechanism, such as static, dynamic, or rule-based assignment.

In this section, we will be talking about the task assignment pattern. The intent of the task assignment pattern is the assignment of human tasks to user(s), group(s), and/or role(s). Essentially, it's motivated by the fact that you need to assign participants to tasks either statically, dynamically, or through derivations based on business rules. We can model and define task assignment at design time and at runtime. The dynamic task assignment pattern, however, deals with the assignment of tasks at runtime. This means the derivation of task participants will be performed while the process is executing. This pattern is very valuable in business scenarios where task assignment cannot be defined at design time. There are various business requirements that need task assignments based on the input data of the process.
As task assignment will be based on the input data of the process, task assignments will be evaluated at runtime; hence, these are termed dynamic task assignments. The following pattern table gives the details of the dynamic task assignment pattern:

- Signature: Dynamic Task Assignment Pattern
- Classification: Human Task Pattern
- Intent: To dynamically assign tasks to users/groups/roles.
- Motivation: Evaluating task assignment at runtime.
- Applicability: Applicable where distribution of work is required, when you need multilevel task assignment, or for the evaluation of users/roles/groups at runtime.
- Implementation: Can be implemented using complex process models, business rules, organization units, organizational roles (parametric roles), external routing, and so on.
- Known issues: What if the users derived from the evaluation of dynamic conditions have their own specific preferences?
- Known solution: Oracle BPM offers rules that can be defined in the BPM workspace.

Defining the use case

We will walk through an insurance claim process that has two tasks to be performed: the first is the verification, and the other is the validation of claimant information. The insurance claim needs to be verified by claim agents. Agents are assigned verification tasks based on their location and the business organization unit they belong to. An agent's skill in handling claim cases is also considered, based on the case's sensitivity. If the case's sensitivity requires an expert agent, then an expert agent assigned to that organization unit and belonging to that specific location must be assigned.

Think of this from the perspective of having the same task assigned to different users/groups/roles based on different criteria. For example, if the input payload has Regular as the sensitivity and EastCoast as the organization unit, then the verification task needs to be assigned to an EastCoast-Regular agent.
However, if the input payload has Expert as the sensitivity and EastCoast as the organization unit, then the verification task needs to be assigned to an EastCoast-Expert agent. Modeled naively, we might end up with multiple swimlanes to fulfill such a task assignment model. However, such models are cumbersome and complicated, and the same activities get duplicated, as you can see in the following screenshot:

The routing of tasks can be modeled in human tasks as well as in the BPMN process. It depends purely on the business requirements and on various modeling considerations. For instance, if you are looking for greater business visibility, or if there is a requirement to use exception handling or something similar, then it's good to model tasks in the BPMN process itself. However, if you are looking for dynamism, abstraction, dynamic assignment, dynamic routing, rule-driven routing, and so on, then modeling task routing in human task assignment and routing is an enhanced modeling mechanism.

We need an easy-to-model and dynamic technique for task assignment. There are various ways to achieve dynamic task assignment:

- Business rules
- Organization units
- Organizational roles (parametric roles)

We can use business rules to define the condition(s), and then invoke various seeded List Builder functions to build the list of participants. This is one way of enabling dynamic task assignment using business rules.

Within an organization, departments and divisions are represented by organizational units. You can define a hierarchy of organization units that corresponds to your organizational structure. User(s), group(s), role(s), and organizational roles can be assigned as members of organization units. When a task belonging to a process is associated with an organization unit, the task is available to the members of that organization unit. So, using the scenario we discussed previously, we can define organization units and assign members to them.
We can then associate the process with the organization unit, which will result in task assignment to only those members who belong to the organization unit associated with the process.

Users have various properties (attributes) defined in LDAP or in Oracle Internet Directory (OID); however, there are cases where we need to define additional properties for users. Most common among them is the definition of roles and organization units as properties of the user. There are cases where we don't have the flexibility to extend an enterprise LDAP/OID to add these properties. What could be the solution in this case? Extended user properties are the solution in such scenarios. Using an admin user, we can define extended properties for users in the BPM workspace. Once the extended properties are defined, they can be associated with users.

Let's extend the scenario we were talking about previously. Perform the following tasks:

1. Define the agent group in the weblogic myrealm, which contains users as its members.
2. Define two organization units, one for EastCoast and another for WestCoast. The users of the agent group will be assigned as members of the organization units.
3. Define extended properties.
4. Associate extended properties with the users (these users are members of the agent group).
5. Define organizational roles, and create a process with a user task that builds the participant list using the organizational role defined previously.
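Before walking through the tooling steps, it may help to see the idea in miniature. The following standalone JavaScript sketch is illustrative only (it is not the Oracle BPM API); it anticipates the extended user properties mapped later in this section and shows how a parametric role amounts to a runtime predicate over user properties:

```javascript
// Extended-property mapping for the claim agents, as used in this use case.
var users = [
  { name: 'mmitch',   sensitivity: 'Regular', location: 'FL' },
  { name: 'fkafka',   sensitivity: 'Regular', location: 'FL' },
  { name: 'jverne',   sensitivity: 'Regular', location: 'FL' },
  { name: 'jausten',  sensitivity: 'Expert',  location: 'FL' },
  { name: 'achrist',  sensitivity: 'Expert',  location: 'FL' },
  { name: 'anju',     sensitivity: 'Expert',  location: 'FL' },
  { name: 'rivi',     sensitivity: 'Expert',  location: 'CA' },
  { name: 'buny',     sensitivity: 'Expert',  location: 'CA' },
  { name: 'cdickens', sensitivity: 'Regular', location: 'CA' },
  { name: 'wshake',   sensitivity: 'Regular', location: 'CA' },
  { name: 'rsteven',  sensitivity: 'Regular', location: 'CA' },
  { name: 'jstein',   sensitivity: 'Expert',  location: 'CA' }
];

// A parametric role behaves like a predicate over the grantees' properties:
// the $Sensitivity and $Location parameters are compared with Equals at runtime.
function agentRole(params) {
  return users
    .filter(function(u) {
      return u.sensitivity === params.sensitivity &&
             u.location === params.location;
    })
    .map(function(u) { return u.name; });
}

// With Sensitivity = Expert and Location = CA, the list of participants
// derived at runtime is rivi, buny, and jstein.
var participants = agentRole({ sensitivity: 'Expert', location: 'CA' });
```

The point of the sketch is that nothing about the participant list is fixed at design time: change the input payload (the parameters) or the users' extended properties, and the derived list changes with no change to the process model.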
The features of the use case are as follows:

- A validation task gets assigned to the insurance claim agent group (the ClaimAgents group).
- The assignment of the task is dynamically evaluated based on the value of the organization unit passed as input to the process.
- Once the validation task is acted upon by the participants (users), the process token reaches the verification task.
- The assignment of the verification task is based on the organizational role (the parametric role) that has been defined.

Assigning users to groups

In the course of achieving the use case scenario, we will execute the following tasks to define a group, organization units, extended properties, and parametric roles:

1. Log in to the Oracle weblogic console and navigate to myrealm.
2. Click on the Users & Groups tab and select the Groups tab.
3. Create a new group with the name ClaimAgents and save the changes.
4. Click on Users; create the users (anju, rivi, and buny) and assign them to the ClaimAgents group as members.
5. Click on the following users and assign them to the ClaimAgents group: jausten, jverne, mmitch, fkafka, achrist, cdickens, wshake, and rsteven.
6. Click on Save.

We have defined and assigned some new users and some already existing users to the ClaimAgents group. Now, we will define the organization units and assign users as members of them. Perform the following steps:

1. Log in to the Oracle BPM workspace to define the organization units.
2. Go to Administration | Organization | Organization Units.
3. Navigate to Create Organization Unit | Root Organization Unit to create a parent organization unit. Name it Finance.
4. Go to Create Organization Unit | Child Organization Unit to create child organization units.
5. Name them EastCoastFinOrg and WestCoastFinOrg.
6. Assign the following users as members of the EastCoastFinOrg organization unit: mmitch, fkafka, jverne, jausten, achrist, and anju.
7. Assign the following users as members of the WestCoastFinOrg organization unit: rivi, buny, cdickens, wshake, rsteven, and jstein.

Defining extended properties

The users are now assigned as members of organization units. Now, it's time to define the extended properties. Perform the following steps:

1. In the BPM workspace, navigate to Administration | Organization | Extended User Properties.
2. Click on Add Property to define the extended properties, which are as follows:

| Property Name | Value |
|---|---|
| Sensitivity | Expert, Regular |
| Location | FL, CA, and TX |

3. Click on Add Users to associate the properties with users in the Map Properties section on the same page, as shown previously. Use the following mapping to create the map of extended properties and users:

| User Name | Sensitivity | Location |
|---|---|---|
| mmitch | Regular | FL |
| fkafka | Regular | FL |
| jverne | Regular | FL |
| jausten | Expert | FL |
| achrist | Expert | FL |
| anju | Expert | FL |
| rivi | Expert | CA |
| buny | Expert | CA |
| cdickens | Regular | CA |
| wshake | Regular | CA |
| rsteven | Regular | CA |
| jstein | Expert | CA |

Defining the organization role

We will define the organizational role (the parametric role) and use it for task assignment in the case/BPM process. The organizational role is used as a dynamic task assignment pattern because users/roles are assigned to the parametric role based on the evaluation of conditions at runtime. These conditions are evaluated at runtime to determine users/roles based on organization units and extended properties. Perform the following steps:

1. Go to Administration | Organization | Parametric Roles.
2. Click on Create Parametric Role to create a role and name it AgentRole.
3. Click on Add Parameter to define the parameters for the parametric role, and name them Sensitivity (of string type) and Location (of string type).
4. You can find the Sensitivity and Location extended properties in the Add Condition drop-down box.
5. For Grantees, select Group; browse and select the ClaimAgents group from the LDAP.
6. Select Sensitivity and click on the + sign to include the property in the condition. For the sensitivity, use the Equals operator and select the parametric role's input parameter from the drop-down list. This parameter will be listed as $Sensitivity.
7. Similarly, select Location and click on the + sign to include the property in the condition. For the location, use the Equals operator and select the parametric role's input parameter from the drop-down list. This parameter will be listed as $Location. This is shown in the following screenshot:
8. Click on Save to apply the changes.

As we can see in the preceding screenshot, the organization unit is also available as a property that can be included in the condition. We have configured the parametric role with a specific condition. The admin can log in and change the conditions as and when the business requires, bringing in agility and dynamism through changes at runtime.

Implementing the BPMN process

We can create a BPM process that has a user task that builds the participant list using the parametric role, as follows:

1. Create a new BPM project with the project name DynamicTaskAssignment.
2. Create the composite with a BPMN component.
3. Create an asynchronous BPMN process and name it VerificationProcess.
4. Create a business object, VerificationProcessBO, based on the InsuranceClaim.xsd schema. The schema (XSD) can be found in the DynamicTaskAssignment project in this article; navigate to the schemas folder in the project to get the XSD.
5. Define the process input argument as VerificationProcessIN, which should be based on the VerificationProcessBO business object. Similarly, define the process output argument as VerificationProcessOUT, based on VerificationProcessBO.
6. Define two process data objects, VerificationProcessINPDO and VerificationProcessOUTPDO, based on the VerificationProcessBO business object.
7. Click on the message start event's data association and perform the data association. As we can see, the process input is assigned to a PDO; however, the organization unit element from the process input is assigned to the predefined variable, organizationUnit. This is shown in the following screenshot:

The validation task is implemented to demonstrate dynamic task assignment using multilevel organization units. We will check how this task works when we perform the test. Perform the following steps:

1. Drag-and-drop a user task between the message start event and the message end event and name it ValidationTask.
2. Go to the Implementation tab in the user task property dialog and click on Add to create a human task; name it ValidationTask.
3. Enter the title of the task, with Accept and Reject as the outcomes.
4. Let the input parameter for ValidationTask be VerificationProcessINPDO.
5. Click on OK to finish the task configuration. This will bring you back to the ValidationTask property dialog box. Perform the necessary data association.
6. Click on ValidationTask.task to open the task metadata, and go to the Assignment section.
7. Click on the Participant block. This will open the participant type dialog box.
8. Select the parallel routing pattern.
9. Select the list building pattern as Lane Participant (the current lane). For all other values, use the default settings.
10. Click on Save All.

Now, we will create a second task in the same process, which will be used to demonstrate dynamic task assignment using the organizational role (the parametric role). Perform the following steps:

1. Drag-and-drop a user task between ValidationTask and the message end event in the verification process and name it VerificationTask.
2. Go to the Implementation tab in the user task property dialog and click on Add to create a human task.
3. Enter the title of the task as Verification Task, with the Accept and Reject outcomes.
4. Let the input parameter for VerificationTask be VerificationProcessINPDO.
5. Click on OK to finish the task configuration. This will bring you back to the VerificationTask property dialog.
6. Click on VerificationTask.task to open the task metadata and go to the Assignment section.
7. Click on the Participant block. This will open the Participant Type dialog box.
8. Select the parallel routing pattern.
9. Select the list building pattern as Organization Role. This is shown in the following screenshot:
10. Enter the name of the organizational role as AgentRole, which is the organizational role we defined previously.
11. Along with the organizational role, enter its input arguments. For the Sensitivity input argument of the parametric role, use the XPath expression to browse the input payload and select the Sensitivity element, as shown previously. Similarly, for the Location argument, select the State element from the input payload.
12. Click on Save All.

This article contains downloads for DynamicTaskAssignment, which we have already created to facilitate verification. If you have not created the project by following the steps mentioned previously, you can use the project delivered in the download. Then, perform the following steps:

1. Deploy the project to the weblogic server.
2. Log in to the BPM workspace as the admin user and go to Administration | Organization Units.
3. Click on roles to assign the ClaimAgents group to the DynamicTaskAssignmentPrj.ClaimAgents role.
4. Click on Save.

Testing the dynamic task assignment pattern

Log in to the EM console as an admin user to test the project; however, you can use any tool of your choice to test it. We can get the test data from the project itself. Navigate to DynamicTaskAssignment | SOA | Testsuites | TestData12c.xml to find the test data file.
Use the TestData.xml file if you are going to execute the project in an 11g environment. The test data contains values where the sensitivity is set to Expert, the organization unit is set to Finance/EastCoastFinOrg, and the state is set to CA. The following are the test results of the validation task:

| Org Unit Input Value | Validation Task Assignment |
|---|---|
| Finance/EastCoastFinOrg | mmitch, fkafka, jverne, jausten, achrist, anju, and jstein |
| Finance | jausten, jverne, mmitch, fkafka, achrist, cdickens, wshake, rsteven, rivi, buny, jstein, and anju |

Note that if you pass the organization unit as Finance, then all the users belonging to the Finance organization's child organizations will receive the task. However, if you pass the organization unit as Finance/EastCoastFinOrg (EastCoastFinOrg is a child organization of the Finance parent organization), then only those users who are in the EastCoastFinOrg child organization will receive the task.

The process flow will move from the validation task to the verification task only when 50 percent of the participants have acted on the validation task, as the parallel routing pattern is defined with a voting percentage of 50 percent. The following are the test results for the verification task:

| Input | Verification Task Assignment |
|---|---|
| Sensitivity: Expert | buny, rivi, and jstein |
| Location: CA | N/A |

Based on the extended properties mapping, the verification task will get assigned to the users buny, rivi, and jstein.

Addressing known issues

What if the users derived from the evaluation of dynamic conditions have their own specific preferences? To address this, Oracle BPM offers rules that can be defined in the BPM workspace. We will extend the use case we have defined. When we execute the verification process, the verification task is assigned to the buny, rivi, and jstein users. How will you address a situation where the user buny wants the task to be reassigned or delegated to someone else while he is in training and cannot act on the assigned tasks?
Perform the following steps:

1. Log in to the Oracle BPM workspace as the user (buny) for whom we want to set the preference.
2. Go to Preferences | Rules, as shown in the following screenshot.
3. Click on + to create a new rule. Enter the name of the rule as TrainingRule.
4. Specify the rule condition for the use case user (buny). We have defined a rule condition that executes the rule when the specified dates are met; the rule is applicable to all tasks.
5. Specify the dates (Start Date and End Date) between which we want the rule to be applicable. (These are the dates on which the user buny will be in training.)
6. Define the rule condition either by selecting all tasks or by specifying matching criteria for the task. When the task condition evaluates to True, the rule gets executed.
7. Reassign the task to the jcooper user when the rule evaluates to True.

The rule action consists of task reassignment or delegation, or we can specify a rule that takes no action. The rule action either changes the task assignment or allows someone else to act on behalf of the original assignee, as described in the following points:
- Select reassignment if you want the task to be assigned to another assignee, who can work on the task as if the task was assigned to him/her
- Select delegation if you want the assignee to whom the task is delegated to work on behalf of the original assignee

Execute the DynamicTaskAssignment project to run the verification process. When the verification task gets executed, log in to the EM console and check the process trace. As we can see in the following screenshot, the task gets reassigned to the jcooper user. We can log in to the BPM workspace as the jcooper user and verify the task in the task list. This is shown in the following screenshot:

There's more

The dynamic task assignment pattern brings dynamism and agility to the business process.
In the preceding use case, we passed the organization unit as a process input parameter. However, with Oracle BPM 12c, we can define business parameters and use them to achieve greater flexibility. Business parameters allow business owners to change a parameter's value at runtime without changing the process, which essentially allows you to change the process behavior without involving IT. Business parameters are used in the process and drive the process flow; changing the value of a business parameter at runtime is like changing the process flow at execution time. For the preceding use case, the insurance input schema has an organization unit that is passed when invoking the process. However, what if there is no placeholder to pass the organization unit in the input parameter? We can define a business parameter in JDeveloper and assign its value to the organization unit. This is shown in the following screenshot.

Perform the following steps to deploy the project:

1. Open JDeveloper 12c and expand the project to navigate to Organization | Business Parameters.
2. Define a business parameter with the name ORGUNIT and of the type String; enter a default value and save the changes.
3. Go to the process message start event and navigate to the Implementation tab.
4. Click on data association and assign the business parameter to the predefined variable (organization unit).
5. Save and deploy the project.

Using this mechanism, a developer can enable business parameters; technically, the BPMN engine executes the following function to get the business parameter value:

bpmn:getBusinessParameter('Business Parameter')

Similarly, a process analyst can open the BPM composer application and bring about changes in the process, defining business parameters and modifying the process. Process asset manager (PAM) takes care of asset sharing and collaboration.
Business owners can log in to the BPM workspace application and change the business parameters to drive/modify the process flow by navigating to Administration | Organization | Business Parameter and editing the parameter values.

Strategic alignment pattern

BPMN needs a solution to align business goals, objectives, and strategies. It also needs a solution to allow business analysts and functional/knowledge workers to create business architecture models that drive the IT development of a process, which remains aligned with the goals and objectives. Oracle BPM 12c offers business architecture, a methodology to perform high-level analysis of business processes. This methodology adopts a top-down approach for discovering organizational processes, defining goals and objectives, defining strategies and mapping them to goals and objectives, and reporting the BA components. All these details are elaborated exclusively with use cases and demo projects in the book. The following pattern table highlights facts around the strategic alignment pattern:

- Signature: Strategic Alignment Pattern
- Classification: Analysis and Discovery Pattern
- Intent: To offer a broader business model (an organizational blueprint) that ensures the alignment of goals, objectives, and strategies with organizational initiatives.
- Motivation: A BPMN solution should offer business analysts and functional users a set of features to analyze, refine, define, optimize, and report business processes in the enterprise.
- Applicability: Such a solution will empower businesses to define models based on what they actually need, and reporting will help to evaluate performance. This will then drive the IT development of the processes by translating requirements into BPMN processes and cases.
- Implementation: Using BPM composer, we can define goals, objectives, strategies, and value chain models. We can refer to BPMN processes from the value chain models. Goals are broken down into objectives, which are fulfilled by strategies. Strategies are implemented by value chains, which can be decomposed into value chains/business processes.
- Known issues: The sharing of assets between IT developers, business architects, and process analysts.
- Known solution: Oracle BPM 12c offers PAM, a comprehensive solution that provides seamless asset sharing and collaboration between business and IT. This book covers PAM exclusively.

Summary

In this article, we have just introduced the alignment pattern. However, in the book, the alignment pattern is covered in detail. It shows how IT development and process models can be aligned with organizational goals. While performing alignments, we will learn about enterprise maps, strategy models, and value chain models. We will discover how models are created and linked to an organization. Capturing the business context showcases the importance of documentation in the process modeling phase. Different document levels and their methods of definition are discussed along with their usage. Further, we learned how to create different reports based on the information we have documented in the process, such as RACI reports. The process player demonstration showcased how process behavior can be emulated in a visual representation, which allows designers and analysts to test and revise a process without deploying it. This infuses a preventive approach and enables organizations to quickly find loopholes, making them more responsive to challenges. We also elaborated on how round trips and business-IT collaboration facilitate the storing, sharing, and collaboration of process assets and business architecture assets. While doing so, we witnessed PAM and Subversion, and learned about versioning, save/update/commit, difference and merge, and various other activities that empower developers and analysts to work in concert. In this book, we will learn various patterns in a similar format.
Each pattern pairs the classic problem/solution format, which includes signature, intent, motivation, applicability, and implementation; the implementation is demonstrated via a use case scenario along with a BPMN application in each chapter. It's a one-stop title to learn about patterns, their applicability and implementation, as well as BPMN features.

Further resources on this subject:
- Oracle GoldenGate - Advanced Administration Tasks - I
- Oracle B2B Overview
- Oracle ADF Essentials – Adding Business Logic


Getting started with Vagrant

Timothy Messier
25 Sep 2014
6 min read
As developers, one of the most frustrating types of bugs are those that only happen in production. Continuous integration servers have gone a long way in helping prevent these types of configuration bugs, but wouldn't it be nice to avoid these bugs altogether? Developers’ machines tend to be a mess of software. Multiple versions of languages, random services with manually tweaked configs, and debugging tools all contribute to a cluttered and unpredictable environment. Sandboxes Many developers are familiar with sandboxing development. Tools like virtualenv and rvm were created to install packages in isolated environments, allowing for multiple versions of packages to be installed on the same machine. Newer languages like nodejs install packages in a sandbox by default. These are very useful tools for managing package versions in both development and production, but they also lead to some of the aforementioned clutter. Additionally, these tools are specific to particular languages. They do not allow for easily installing multiple versions of services like databases. Luckily, there is Vagrant to address all these issues (and a few more). Getting started Vagrant at its core is simply a wrapper around virtual machines. One of Vagrant's strengths is its ease of use. With just a few commands, virtual machines can be created, provisioned, and destroyed. First grab the latest download for your OS from Vagrant's download page (https://www.vagrantup.com/downloads.html). NOTE: For Linux users, although your distro may have a version of Vagrant via its package manager, it is most likely outdated. You probably want to use the version from the link above to get the latest features. Also install Virtualbox (https://www.virtualbox.org/wiki/Downloads) using your preferred method. Vagrant supports other virtualization providers as well as docker, but Virtualbox is the easiest to get started with. 
When most Vagrant commands are run, they will look for a file named Vagrantfile (or vagrantfile) in the current directory. All configuration for Vagrant is done in this file, and it is isolated to a particular Vagrant instance. Create a new Vagrantfile:

$ vagrant init hashicorp/precise64

This creates a Vagrantfile using the base virtual image, hashicorp/precise64, which is a stock Ubuntu 12.04 image provided by Hashicorp. This file contains many useful comments about the various configurations that can be done. If this command is run with the --minimal flag, it will create a file without comments, like the following:

# -*- mode: ruby -*-
# vi: set ft=ruby :

# Vagrantfile API/syntax version. Don't touch unless you know what you're doing!
VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = "hashicorp/precise64"
end

Now create the virtual machine:

$ vagrant up

This isn't too difficult. You now have a nice clean machine in just two commands. What did vagrant up do? First, if you did not already have the base box, hashicorp/precise64, it was downloaded from Vagrant Cloud. Then, Vagrant made a new machine based on the current directory name and a unique number. Once the machine was booted, Vagrant created a shared folder between the host's current directory and the guest's /vagrant. To access the new machine, run:

$ vagrant ssh

Provisioning

At this point, additional software needs to be installed. While a clean Ubuntu install is nice, it does not have the software to develop or run many applications. While it may be tempting to just start installing services and libraries with apt-get, that would just recreate the same old clutter. Instead, use Vagrant's provisioning infrastructure. Vagrant has support for all of the major provisioning tools like Salt (http://www.saltstack.com/), Chef (http://www.getchef.com/chef/), and Ansible (http://www.ansible.com/home). It also supports calling shell commands.
For the sake of keeping this post focused on Vagrant, an example using the shell provisioner will be used. However, to unleash the full power of Vagrant, use the same provisioner that is used for your production systems. This will enable the virtual machine to be configured using the same provisioning steps as production, ensuring that the virtual machine mirrors the production environment. To continue this example, add a provision section to the Vagrantfile:

# -*- mode: ruby -*-
# vi: set ft=ruby :

# Vagrantfile API/syntax version. Don't touch unless you know what you're doing!
VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = "hashicorp/precise64"
  config.vm.provision "shell", path: "provision.sh"
end

This tells Vagrant to run the script provision.sh. Create this file with some install commands:

#!/bin/bash

# Install nginx
apt-get -y install nginx

Now tell Vagrant to provision the machine:

$ vagrant provision

NOTE: When running vagrant up for the first time, vagrant provision is automatically called. To force a provision, call vagrant up --provision.

The virtual machine should now have nginx installed, with the default page being served. To access this page from the host, port forwarding can be set in the Vagrantfile:

# -*- mode: ruby -*-
# vi: set ft=ruby :

# Vagrantfile API/syntax version. Don't touch unless you know what you're doing!
VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = "hashicorp/precise64"
  config.vm.network "forwarded_port", guest: 80, host: 8080
  config.vm.provision "shell", path: "provision.sh"
end

For the forwarding to take effect, the virtual machine must be restarted:

$ vagrant halt && vagrant up

Now, in a browser, go to http://localhost:8080 to see the nginx welcome page.

Next steps

This simple example shows how easy Vagrant is to use, and how quickly machines can be created and configured.
However, to truly battle issues like "it works on my machine", Vagrant needs to leverage the same provisioning tools that are used for production servers. If these provisioning scripts are cleanly implemented, Vagrant can easily leverage them and make creating a pristine production-like environment easy. Since all configuration is contained in a single file, it becomes simple to include a Vagrant configuration along with code in a repository. This allows developers to easily create identical environments for testing code. Final notes When evaluating any open source tool, at some point you need to examine the health of the community supporting the tool. In the case of Vagrant, the community seems very healthy. The primary developer continues to be responsive about bugs and improvements, despite having launched a new company in the wake of Vagrant's success. New features continue to roll in, all the while keeping a very stable product. All of this is good news since there seem to be no other tools that make creating clean sandbox environments as effortless as Vagrant. About the author Timothy Messier is involved in many open source devops projects, including Vagrant and Salt, and can be contacted at tim.messier@gmail.com.

Docker Container Management at Scale with SaltStack

Nicole Thomas
25 Sep 2014
8 min read
Every once in a while a technology comes along that changes the way work is done in a data center. It happened with virtualization and we are seeing it with various cloud computing technologies and concepts. But most recently, the rise in popularity of container technology has given us reason to rethink interdependencies within software stacks and how we run applications in our infrastructures. Despite all of the enthusiasm around the potential of containers, they are still just another thing in the data center that needs to be centrally controlled and managed...often at massive scale. This article will provide an introduction to how SaltStack can be used to manage all of this, including container technology, at web scale. SaltStack Systems Management Software The SaltStack systems management software is built for remote execution, orchestration, and configuration management and is known for being fast, scalable, and flexible. Salt is easy to set up and can easily communicate asynchronously with tens of thousands of servers in a matter of seconds. Salt was originally built as a remote execution engine relying on a secure, bi-directional communication system utilizing a Salt Master daemon used to control Salt Minion daemons, where the minions receive commands from the remote master. Salt’s configuration management capabilities are called the state system, or Salt States, and are built on SLS formulas. SLS files are data structures based on dictionaries, lists, strings, and numbers that are used to enforce the state that a system should be in, also known as configuration management. SLS files are easy to write, simple to implement, and are typically written in YAML. State file execution occurs on the Salt Minion. Therefore, once any states files that the infrastructure requires have been written, it is possible to enforce state on tens of thousands of hosts simultaneously. Additionally, each minion returns its status to the Salt Master. 
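To make the SLS format described above concrete, here is a minimal sketch of a state file. This is an illustrative example only (the file path and state ID are our own choices), though pkg.installed and service.running are standard Salt state modules:

```yaml
# /srv/salt/nginx/init.sls -- a minimal Salt State (SLS) file in YAML.
# Each top-level ID declares a desired state; Salt enforces it on the minion.
nginx:
  pkg.installed: []        # ensure the nginx package is installed
  service.running:         # ensure the nginx service is running
    - enable: True
    - require:
      - pkg: nginx         # only manage the service after the package state
```

Applying it to every minion would then be a single command: salt '*' state.sls nginx.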
Docker containers Docker is an agile runtime and packaging platform specializing in enabling applications to be quickly built, shipped, and run in any environment. Docker containers allow developers to easily compartmentalize their applications so that all of the program’s needs are installed and configured in a single container. This container can then be dropped into clouds, data centers, laptops, virtual machines (VMs), or infrastructures where the app can execute, unaltered, without any cross-environment issues. Building Docker containers is fast, simple, and inexpensive. Developers can create a container, change it, break it, fix it, throw it away, or save it much like a VM. However, Docker containers are much more nimble, as the containers only possess the application and its dependencies. VMs, along with the application and its dependencies, also install a guest operating system, which requires much more overhead than may be necessary. While being able to slice deployments into confined components is a good idea for application portability and resource allocation, it also means there are many more pieces that need to be managed. As the number of containers scales, without proper management, inefficiencies abound. This scenario, known as container sprawl, is where SaltStack can help and the combination of SaltStack and Docker quickly proves its value. SaltStack + Docker When we combine SaltStack’s powerful configuration management armory with Docker’s portable and compact containerization tools, we get the best of both worlds. SaltStack has added support for users to manage Docker containers on any infrastructure. The following demonstration illustrates how SaltStack can be used to manage Docker containers using Salt States. We are going to use a Salt Master to control a minimum of two Salt Minions. On one minion we will install HAProxy. On the other minion, we will install Docker and then spin up two containers on that minion. 
Each container hosts a PHP app with Apache that displays information about each container's IP address and exposed port. The HAProxy minion will automatically update to pull the IP address and port number from each of the Docker containers housed on the Docker minion. You can certainly set up more minions for this experiment, where the additional minions would be additional Docker container minions, to see an implementation of containers at scale, but it is not necessary to see what is possible.

First, we have a few installation steps to take care of. SaltStack's installation documentation provides a thorough overview of how to get Salt up and running, depending on your operating system. Follow the instructions in Salt's Installation Guide to install and configure your Salt Master and two Salt Minions. Go through the process of accepting the minion keys on the Salt Master and, to make sure the minions are connected, run:

salt '*' test.ping

Note: Much of the Docker functionality used in this demonstration is brand new. You must install the latest supported version of Salt, which is 2014.1.6. For reference, I have my master and minions all running on Ubuntu 14.04. My Docker minion is named "docker1" and my HAProxy minion is named "haproxy".

Now that you have a Salt Master and two Salt Minions configured and communicating with each other, it's time to get to work. Dave Boucha, a Senior Engineer at SaltStack, has already set up the necessary configuration files for both our haproxy and docker1 minions, and we will be using those to get started. First, clone the Docker files in Dave's GitHub repository, dock-apache, and copy the dock_apache and docker directories into the /srv/salt directory on your Salt Master (you may need to create the /srv/salt directory):

cp -r path/to/download/dock_apache /srv/salt
cp -r path/to/download/docker /srv/salt

The init.sls file inside the docker directory is a Salt State that installs Docker and all of its dependencies on your minion.
The docker.pgp file is a public key that is used during the Docker installation. To fire off all of these events, run the following command:

salt 'docker1' state.sls docker

Once Docker has successfully installed on your minion, we need to set up our two Docker containers. The init.sls file in the dock_apache directory is a Salt State that specifies the image to pull from Docker's public container repository, installs the two containers, and assigns the ports that each container should expose: 8000 for the first container and 8080 for the second container. Now, let's run the state:

salt 'docker1' state.sls dock_apache

Let's check and see if it worked. Get the IP address of the minion by running:

salt 'docker1' network.ip_addrs

Copy and paste the IP address into a browser and add one of the ports to the end. Try them both (IP:8000 and IP:8080) to see each container up and running! At this point, you should have two Docker containers installed on your "docker1" minion (or any other Docker minions you may have created). Now we need to set up the "haproxy" minion. Download the files from Dave's GitHub repository, haproxy-docker, and place them all in your Salt Master's /srv/salt/haproxy directory:

cp -r path/to/download/haproxy /srv/salt

First, we need to retrieve the data about our Docker containers from the Docker minion(s), such as the container IP address and port numbers, by running the haproxy.config state:

salt 'haproxy' state.sls haproxy.config

By running haproxy.config, we are setting up a new configuration with the Docker minion data. The configuration file also runs the haproxy state (init.sls). The init.sls file contains a haproxy state that ensures that HAProxy is installed on the minion, sets up the configuration for HAProxy, and then runs HAProxy. You should now be able to look at the HAProxy statistics by entering the haproxy minion's IP address in your browser with /haproxy?stats on the end of the URL.
When the authentication pop-up appears, the username and password are both saltstack. While this is a simple example of running HAProxy to balance the load of only two Docker containers, it demonstrates how Salt can be used to manage tens of thousands of containers as your infrastructure scales. To see this same demonstration with a haproxy minion in action with 99 Docker minions, each with two Docker containers installed, check out SaltStack’s Salt Air 20 tutorial by Dave Boucha on YouTube. About the author Nicole Thomas is a QA Engineer at SaltStack, Inc. Before coming to SaltStack, she wore many hats from web and Android developer to contributing editor to working in Environmental Education. Nicole recently graduated Summa Cum Laude from Westminster College with a degree in Computer Science. Nicole also has a degree in Environmental Studies from the University of Utah.


Machine Learning in IPython with scikit-learn

Packt
25 Sep 2014
13 min read
In this article, written by Daniele Teti, the author of IPython Interactive Computing and Visualization Cookbook, the basics of the machine learning package scikit-learn (http://scikit-learn.org) are introduced. Its clean API makes it really easy to define, train, and test models. Plus, scikit-learn is specifically designed for speed and (relatively) big data. (For more resources related to this topic, see here.) A very basic example of linear regression in the context of curve fitting is shown here. This toy example will allow us to illustrate key concepts such as linear models, overfitting, underfitting, regularization, and cross-validation.

Getting ready

You can find all instructions to install scikit-learn in the main documentation. For more information, refer to http://scikit-learn.org/stable/install.html. With anaconda, you can type conda install scikit-learn in a terminal.

How to do it...

We will generate a one-dimensional dataset with a simple model (including some noise), and we will try to fit a function to this data. With this function, we can predict values on new data points. This is a curve fitting regression problem.

First, let's make all the necessary imports:

In [1]: import numpy as np
        import scipy.stats as st
        import sklearn.linear_model as lm
        import matplotlib.pyplot as plt
        %matplotlib inline

We now define a deterministic nonlinear function underlying our generative model:

In [2]: f = lambda x: np.exp(3 * x)

We generate the values along the curve on [0, 2]:

In [3]: x_tr = np.linspace(0., 2, 200)
        y_tr = f(x_tr)

Now, let's generate data points within [0, 1]. We use the function f and we add some Gaussian noise:

In [4]: x = np.array([0, .1, .2, .5, .8, .9, 1])
        y = f(x) + np.random.randn(len(x))

Let's plot our data points on [0, 1]:

In [5]: plt.plot(x_tr[:100], y_tr[:100], '--k')
        plt.plot(x, y, 'ok', ms=10)

In the image, the dotted curve represents the generative model.
Now, we use scikit-learn to fit a linear model to the data. There are three steps. First, we create the model (an instance of the LinearRegression class). Then, we fit the model to our data. Finally, we predict values from our trained model.

In [6]: # We create the model.
        lr = lm.LinearRegression()
        # We train the model on our training dataset.
        lr.fit(x[:, np.newaxis], y)
        # Now, we predict points with our trained model.
        y_lr = lr.predict(x_tr[:, np.newaxis])

We need to convert x and x_tr to column vectors, as it is a general convention in scikit-learn that observations are rows, while features are columns. Here, we have seven observations with one feature.

We now plot the result of the trained linear model. We obtain a regression line, in green here:

In [7]: plt.plot(x_tr, y_tr, '--k')
        plt.plot(x_tr, y_lr, 'g')
        plt.plot(x, y, 'ok', ms=10)
        plt.xlim(0, 1)
        plt.ylim(y.min()-1, y.max()+1)
        plt.title("Linear regression")

The linear fit is not well-adapted here, as the data points were generated according to a nonlinear model (an exponential curve). Therefore, we are now going to fit a nonlinear model. More precisely, we will fit a polynomial function to our data points. We can still use linear regression for this, by precomputing the exponents of our data points. This is done by generating a Vandermonde matrix, using the np.vander function. We will explain this trick in How it works.... In the following code, we perform and plot the fit:

In [8]: lrp = lm.LinearRegression()
        plt.plot(x_tr, y_tr, '--k')
        for deg in [2, 5]:
            lrp.fit(np.vander(x, deg + 1), y)
            y_lrp = lrp.predict(np.vander(x_tr, deg + 1))
            plt.plot(x_tr, y_lrp,
                     label='degree ' + str(deg))
            plt.legend(loc=2)
            plt.xlim(0, 1.4)
            plt.ylim(-10, 40)
            # Print the model's coefficients.
            print(' '.join(['%.2f' % c for c in
                            lrp.coef_]))
        plt.plot(x, y, 'ok', ms=10)
        plt.title("Linear regression")
25.00 -8.57 0.00
-132.71 296.80 -211.76 72.80 -8.68 0.00

We have fitted two polynomial models of degree 2 and 5. The degree 2 polynomial appears to fit the data points less precisely than the degree 5 polynomial. However, it seems more robust; the degree 5 polynomial seems really bad at predicting values outside the data points (look, for example, at the x > 1 portion). This is what we call overfitting; by using a too complex model, we obtain a better fit on the training dataset, but a less robust model outside this set. Note the large coefficients of the degree 5 polynomial; this is generally a sign of overfitting.

We will now use a different learning model, called ridge regression. It works like linear regression except that it prevents the polynomial's coefficients from becoming too big. This is what happened in the previous example. By adding a regularization term to the loss function, ridge regression imposes some structure on the underlying model. We will see more details in the next section.

The ridge regression model has a meta-parameter, which represents the weight of the regularization term. We could try different values by trial and error, using the Ridge class. However, scikit-learn includes another model called RidgeCV, which includes a parameter search with cross-validation. In practice, it means that we don't have to tweak this parameter by hand; scikit-learn does it for us. As the models of scikit-learn always follow the fit-predict API, all we have to do is replace lm.LinearRegression() with lm.RidgeCV() in the previous code. We will give more details in the next section.
In [9]: ridge = lm.RidgeCV()
        plt.plot(x_tr, y_tr, '--k')
        for deg in [2, 5]:
            ridge.fit(np.vander(x, deg + 1), y)
            y_ridge = ridge.predict(np.vander(x_tr, deg + 1))
            plt.plot(x_tr, y_ridge,
                     label='degree ' + str(deg))
            plt.legend(loc=2)
            plt.xlim(0, 1.5)
            plt.ylim(-5, 80)
            # Print the model's coefficients.
            print(' '.join(['%.2f' % c
                            for c in ridge.coef_]))
        plt.plot(x, y, 'ok', ms=10)
        plt.title("Ridge regression")
11.36 4.61 0.00
2.84 3.54 4.09 4.14 2.67 0.00

This time, the degree 5 polynomial seems more precise than the simpler degree 2 polynomial (which now causes underfitting). Ridge regression reduces the overfitting issue here. Observe how the degree 5 polynomial's coefficients are much smaller than in the previous example.

How it works...

In this section, we explain all the aspects covered in this article.

The scikit-learn API

scikit-learn implements a clean and coherent API for supervised and unsupervised learning. Our data points should be stored in an (N, D) matrix X, where N is the number of observations and D is the number of features. In other words, each row is an observation. The first step in a machine learning task is to define what the matrix X is exactly. In a supervised learning setup, we also have a target, an N-long vector y with a scalar value for each observation. This value is continuous or discrete, depending on whether we have a regression or classification problem, respectively.

In scikit-learn, models are implemented in classes that have the fit() and predict() methods. The fit() method accepts the data matrix X as input, and y as well for supervised learning models. This method trains the model on the given data. The predict() method also takes data points as input (as an (M, D) matrix). It returns the labels or transformed points as predicted by the trained model.
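The fit/predict convention just described can be illustrated with a tiny hand-rolled estimator. To be clear, this is not a real scikit-learn class; it is a pure-Python sketch that mimics the API, fitting a one-feature simple linear regression y = w*x + b, with X given as a list of observation rows:

```python
# A toy estimator following the scikit-learn convention: observations are
# rows of X, features are columns; fit() trains, predict() applies the model.
class ToyLinearRegression:
    def fit(self, X, y):
        n = len(X)
        xs = [row[0] for row in X]          # single feature per observation
        mx, my = sum(xs) / n, sum(y) / n
        # Simple linear regression: slope = cov(x, y) / var(x).
        cov = sum((x - t) * (v - my) for x, v, t in zip(xs, y, [mx] * n))
        var = sum((x - mx) ** 2 for x in xs)
        self.coef_ = cov / var
        self.intercept_ = my - self.coef_ * mx
        return self

    def predict(self, X):
        return [self.coef_ * row[0] + self.intercept_ for row in X]

# A (4, 1) data matrix: four observations, one feature, exactly on y = 2x + 1.
X = [[0.0], [1.0], [2.0], [3.0]]
y = [1.0, 3.0, 5.0, 7.0]
model = ToyLinearRegression().fit(X, y)
preds = model.predict([[4.0]])   # predict on new data points
```

The shapes are the point here: fit() receives an (N, D) matrix and an N-long target, and predict() receives an (M, D) matrix, exactly as scikit-learn's real estimators do.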
Ordinary least squares regression

Ordinary least squares regression is one of the simplest regression methods. It consists of approaching the output values y_i with a linear combination of the X_ij:

    ŷ_i = w_1 X_i1 + ... + w_D X_iD

Here, w = (w_1, ..., w_D) is the (unknown) parameter vector, and ŷ = (ŷ_1, ..., ŷ_N) represents the model's output. We want this vector to match the data points y as closely as possible. Of course, the exact equality ŷ = y cannot hold in general (there is always some noise and uncertainty; models are always idealizations of reality). Therefore, we want to minimize the difference between these two vectors. The ordinary least squares regression method consists of minimizing the following loss function:

    L(w) = Σ_i (y_i − ŷ_i)²

This sum of squared components is the squared L2 norm of the difference y − ŷ. It is convenient because it leads to differentiable loss functions, so that gradients can be computed and common optimization procedures can be performed.

Polynomial interpolation with linear regression

Ordinary least squares regression fits a linear model to the data. The model is linear both in the data points X_i and in the parameters w_j. In our example, we obtain a poor fit because the data points were generated according to a nonlinear generative model (an exponential function).

However, we can still use the linear regression method with a model that is linear in the w_j, but nonlinear in the x_i. To do this, we need to increase the number of dimensions in our dataset by using a basis of polynomial functions. In other words, we consider the following data points:

    (1, x_i, x_i², ..., x_i^D)

Here, D is the maximum degree. The input matrix X is therefore the Vandermonde matrix associated to the original data points x_i. For more information on the Vandermonde matrix, refer to http://en.wikipedia.org/wiki/Vandermonde_matrix.

Here, it is easy to see that training a linear model on these new data points is equivalent to training a polynomial model on the original data points.
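The equivalence between a polynomial fit and a linear fit on the Vandermonde matrix can be checked directly. The quadratic, noise-free data below is an illustrative assumption, not data from the article:

```python
import numpy as np
from sklearn import linear_model as lm

# Noise-free data from a quadratic model: y = 2 + 3x^2.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = 2 + 3 * x ** 2

# np.vander(x, 3) has columns [x^2, x, 1]; fitting a *linear*
# model on it is a degree-2 polynomial fit on the original x.
lr = lm.LinearRegression()
lr.fit(np.vander(x, 3), y)

print(lr.coef_)       # the x^2 coefficient (3) is recovered
print(lr.intercept_)  # the constant term (2) is recovered
```

Note that the coefficient attached to the constant column of the Vandermonde matrix comes out as zero, because LinearRegression already fits an intercept; the trailing 0.00 in the article's printed coefficients has the same origin.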
Ridge regression

Polynomial interpolation with linear regression can lead to overfitting if the degree of the polynomial is too large. By capturing the random fluctuations (noise) instead of the general trend of the data, the model loses some of its predictive power. This corresponds to a divergence of the polynomial's coefficients w_j.

A solution to this problem is to prevent these coefficients from growing unboundedly. With ridge regression (also known as Tikhonov regularization), this is done by adding a regularization term to the loss function:

    L(w) = Σ_i (y_i − ŷ_i)² + α Σ_j w_j²

For more details on Tikhonov regularization, refer to http://en.wikipedia.org/wiki/Tikhonov_regularization.

By minimizing this loss function, we not only minimize the error between the model and the data (the first term, related to the bias), but also the size of the model's coefficients (the second term, related to the variance). The bias-variance trade-off is quantified by the hyperparameter α (called alpha in scikit-learn), which specifies the relative weight between the two terms in the loss function. Here, ridge regression led to a polynomial with smaller coefficients, and thus a better fit.

Cross-validation and grid search

A drawback of the ridge regression model compared to the ordinary least squares model is the presence of an extra hyperparameter α. The quality of the prediction depends on the choice of this parameter. One possibility would be to fine-tune this parameter manually, but this procedure can be tedious and can also lead to overfitting problems.

To solve this problem, we can use a grid search; we loop over many possible values for α, and we evaluate the performance of the model for each value. Then, we choose the parameter that yields the best performance.

How can we assess the performance of a model with a given α? A common solution is to use cross-validation. This procedure consists of splitting the dataset into a train set and a test set. We fit the model on the train set, and we test its predictive performance on the test set.
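RidgeCV automates exactly this cross-validated grid search. The sketch below mimics the article's exponential data, but the exact constants, sample size, and candidate alphas are illustrative assumptions:

```python
import numpy as np
from sklearn import linear_model as lm

# Noisy exponential data, in the spirit of the article's example.
rng = np.random.RandomState(0)
x = np.linspace(0., 1., 30)
y = np.exp(3 * x) + 2 * rng.randn(30)

# RidgeCV fits the model for each candidate alpha, scores every
# candidate with cross-validation, and keeps the best value.
ridge = lm.RidgeCV(alphas=[0.01, 0.1, 1.0, 10.0])
ridge.fit(np.vander(x, 6), y)

print(ridge.alpha_)  # the regularization weight that won the search
```

After fitting, the selected α is exposed as ridge.alpha_, and predict() uses the model refitted with that value; calling lm.RidgeCV() with no arguments, as in the article, simply uses a default grid of alphas.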
By testing the model on a different dataset than the one used for training, we reduce overfitting.

There are many ways to split the initial dataset into two parts like this. One possibility is to remove one sample to form the train set and to put this one sample in the test set. This is called leave-one-out cross-validation. With N samples, we obtain N pairs of train and test sets. The cross-validated performance is the average performance over all these decompositions.

As we will see later, scikit-learn implements several easy-to-use functions for cross-validation and grid search. In scikit-learn, there exists a special estimator called RidgeCV that implements a cross-validation and grid search procedure specific to the ridge regression model. Using this model ensures that the best hyperparameter α is found automatically for us.

There's more…

Here are a few references about least squares:

- Ordinary least squares on Wikipedia, available at http://en.wikipedia.org/wiki/Ordinary_least_squares
- Linear least squares on Wikipedia, available at http://en.wikipedia.org/wiki/Linear_least_squares_(mathematics)

Here are a few references about cross-validation and grid search:

- Cross-validation in scikit-learn's documentation, available at http://scikit-learn.org/stable/modules/cross_validation.html
- Grid search in scikit-learn's documentation, available at http://scikit-learn.org/stable/modules/grid_search.html
- Cross-validation on Wikipedia, available at http://en.wikipedia.org/wiki/Cross-validation_%28statistics%29

Here are a few references about scikit-learn:

- scikit-learn basic tutorial, available at http://scikit-learn.org/stable/tutorial/basic/tutorial.html
- scikit-learn tutorial given at the SciPy 2013 conference, available at https://github.com/jakevdp/sklearn_scipy2013

Summary

Using the scikit-learn Python package, this article illustrates fundamental data mining and machine learning concepts such as supervised and unsupervised learning, classification,
regression, feature selection, feature extraction, overfitting, regularization, cross-validation, and grid search.

Resources for Article:

Further resources on this subject:

- Driving Visual Analyses with Automobile Data (Python) [Article]
- Fast Array Operations with NumPy [Article]
- Python 3: Designing a Tasklist Application [Article]
Building a Content Management System

Packt
25 Sep 2014
25 min read
In this article by Charles R. Portwood II, the author of Yii Project Blueprints, we will look at how to create a feature-complete content management system and blogging platform. (For more resources related to this topic, see here.)

Describing the project

Our CMS can be broken down into several different components:

- Users who will be responsible for viewing and managing the content
- Content to be managed
- Categories for our content to be placed into
- Metadata to help us further define our content and users
- Search engine optimizations

Users

The first component of our application is the users who will perform all the tasks in our application. For this application, we're going to largely reuse the user database and authentication system. In this article, we'll enhance this functionality by allowing social authentication. Our CMS will allow users to register new accounts from the data provided by Twitter; after they have registered, the CMS will allow them to sign in to our application by signing in to Twitter.

To enable us to know if a user is a socially authenticated user, we have to make several changes to both our database and our authentication scheme. First, we're going to need a way to indicate whether a user is a socially authenticated user. Rather than hardcoding an isAuthenticatedViaTwitter column in our database, we'll create a new database table called user_metadata, which will be a simple table that contains the user's ID, a unique key, and a value. This will allow us to store additional information about our users without having to explicitly change our user's database table every time we want to make a change:

ID INTEGER PRIMARY KEY
user_id INTEGER
key STRING
value STRING
created INTEGER
updated INTEGER

We'll also need to modify our UserIdentity class to allow socially authenticated users to sign in.
To do this, we'll be expanding upon this class to create a RemoteUserIdentity class that will work off the OAuth codes that Twitter (or any other third-party source that works with HybridAuth) provides to us, rather than authenticating against a username and password.

Content

At the core of our CMS is the content that we'll manage. For this project, we'll manage simple blog posts that can have additional metadata associated with them. Each post will have a title, a body, an author, a category, a unique URI or slug, and an indication of whether it has been published or not. Our database structure for this table will look as follows:

ID INTEGER PRIMARY KEY
title STRING
body TEXT
published INTEGER
author_id INTEGER
category_id INTEGER
slug STRING
created INTEGER
updated INTEGER

Each post will also have one or many metadata columns that will further describe the posts we'll be creating. We can use this table (we'll call it content_metadata) to have our system store information about each post automatically for us, or to add information to our posts ourselves, thereby eliminating the need to constantly migrate our database every time we want to add a new attribute to our content:

ID INTEGER PRIMARY KEY
content_id INTEGER
key STRING
value STRING
created INTEGER
updated INTEGER

Categories

Each post will be associated with a category in our system. These categories will help us further refine our posts. As with our content, each category will have its own slug. Before either a post or a category is saved, we'll need to verify that the slug is not already in use. Our table structure will look as follows:

ID INTEGER PRIMARY KEY
name STRING
description TEXT
slug STRING
created INTEGER
updated INTEGER

Search engine optimizations

The last core component of our application is optimization for search engines so that our content can be indexed quickly. SEO is important because it increases our discoverability and availability both on search engines and on other marketing materials.
In our application, there are a couple of things we'll do to improve our SEO:

- The first SEO enhancement we'll add is a sitemap.xml file, which we can submit to popular search engines to index. Rather than crawl our content, search engines can very quickly index our sitemap.xml file, which means that our content will show up in search engines faster.
- The second enhancement is the slugs that we discussed earlier. Slugs allow us to indicate what a particular post is about directly from a URL. So rather than a URL that looks like http://chapter6.example.com/content/post/id/5, we can have URLs that look like http://chapter6.example.com/my-awesome-article. These types of URLs allow search engines and our users to know what our content is about without even looking at the content itself, such as when a user is browsing through their bookmarks or a search engine.

Initializing the project

To provide us with a common starting ground, a skeleton project has been included with the project resources for this article. Included with this skeleton project are the necessary migrations, data files, controllers, and views to get us started with developing. Also included are the user authentication classes. Copy this skeleton project to your web server, configure it so that it responds to chapter6.example.com as outlined at the beginning of the article, and then perform the following steps to make sure everything is set up:

- Adjust the permissions on the assets and protected/runtime folders so that they are writable by your web server.
- In this article, we'll once again use the latest version of MySQL (at the time of writing, MySQL 5.6). Make sure that your MySQL server is set up and running on your server. Then, create a username, password, and database for our project to use, and update your protected/config/main.php file accordingly. For simplicity, you can use ch6_cms for each value.
- Install our Composer dependencies:

composer install

- Run the migrate command and install our mock data:

php protected/yiic.php migrate up --interactive=0
psql ch6_cms -f protected/data/postgres.sql

- Finally, add your SendGrid credentials to your protected/config/params.php file:

'username' => '<username>',
'password' => '<password>',
'from' => 'noreply@ch6.home.erianna.net'

If everything is loaded correctly, you should see a 404 page similar to the following:

Exploring the skeleton project

There are actually a lot of different things going on in the background to make this work, even if this is just a 404 error. Before we start doing any development, let's take a look at a few of the classes that have been provided in our skeleton project in the protected/components folder.

Extending models from a common class

The first class that has been provided to us is an ActiveRecord extension called CMSActiveRecord that all of our models will stem from. This class allows us to reduce the amount of code that we have to write in each class. For now, we'll simply add CTimestampBehavior and the afterFind() method to store the old attributes for the time the need arises to compare the changed attributes with the new attributes:

class CMSActiveRecord extends CActiveRecord
{
    public $_oldAttributes = array();

    public function behaviors()
    {
        return array(
            'CTimestampBehavior' => array(
                'class' => 'zii.behaviors.CTimestampBehavior',
                'createAttribute' => 'created',
                'updateAttribute' => 'updated',
                'setUpdateOnCreate' => true
            )
        );
    }

    public function afterFind()
    {
        if ($this !== NULL)
            $this->_oldAttributes = $this->attributes;
        return parent::afterFind();
    }
}

Creating a custom validator for slugs

Since both the Content and Category classes have slugs, we'll need to add a custom validator to each class that will enable us to ensure that the slug is not already in use by either a post or a category.
To do this, we have another class called CMSSlugActiveRecord that extends CMSActiveRecord with a validateSlug() method, which we'll implement as follows:

class CMSSlugActiveRecord extends CMSActiveRecord
{
    public function validateSlug($attributes, $params)
    {
        // Fetch any records that have that slug
        $content = Content::model()->findByAttributes(array('slug' => $this->slug));
        $category = Category::model()->findByAttributes(array('slug' => $this->slug));

        $class = strtolower(get_class($this));

        if ($content == NULL && $category == NULL)
            return true;
        else if (($content == NULL && $category != NULL) || ($content != NULL && $category == NULL))
        {
            $this->addError('slug', 'That slug is already in use');
            return false;
        }
        else
        {
            if ($this->id == $$class->id)
                return true;
        }

        $this->addError('slug', 'That slug is already in use');
        return false;
    }
}

This implementation simply checks the database for any item that has that slug. If nothing is found, or if the current item is the item that is being modified, then the validator will return true. Otherwise, it will add an error to the slug attribute and return false. Both our Content model and our Category model will extend from this class.

View management with themes

One of the largest challenges of working with larger applications is changing their appearance without locking functionality into our views. One way to further separate our business logic from our presentation logic is to use themes. Using themes in Yii, we can dynamically change the presentation layer of our application simply by utilizing the Yii::app()->setTheme('themename') method. Once this method is called, Yii will look for view files in themes/themename/views rather than protected/views. Throughout the rest of the article, we'll be adding views to a custom theme called main, which is located in the themes folder. To set this theme globally, we'll be creating a custom class called CMSController, which all of our controllers will extend from.
For now, our theme name will be hardcoded within our application. This value could easily be retrieved from a database, though, allowing us to dynamically change themes from a cached or database value rather than changing it in our controller. Have a look at the following lines of code:

class CMSController extends CController
{
    public function beforeAction($action)
    {
        Yii::app()->setTheme('main');
        return parent::beforeAction($action);
    }
}

Truly dynamic routing

In our previous applications, we had long, boring URLs that had lots of IDs and parameters in them. These URLs provided a terrible user experience and prevented search engines and users from knowing what the content was about at a glance, which in turn would hurt our SEO rankings on many search engines. To get around this, we're going to heavily modify our UrlManager class to allow truly dynamic routing, which means that, every time we create or update a post or a category, our URL rules will be updated.

Telling Yii to use our custom UrlManager

Before we can start working on our controllers, we need to create a custom UrlManager to handle routing of our content so that we can access our content by its slug. The steps are as follows:

- The first change we need to make is to update the components section of our protected/config/main.php file. This will tell Yii which class to use for the UrlManager component:

'urlManager' => array(
    'class' => 'application.components.CMSUrlManager',
    'urlFormat' => 'path',
    'showScriptName' => false
)

- Next, within our protected/components folder, we need to create CMSUrlManager.php:

class CMSUrlManager extends CUrlManager {}

CUrlManager works by populating a rules array. When Yii is bootstrapped, it will trigger the processRules() method to determine which route should be executed. We can overload this method to inject our own rules, which will ensure that the action that we want to be executed is executed.
To get started, let's first define a set of default routes that we want loaded. The routes defined in the following code snippet will allow for pagination on our search and home pages, enable a static path for our sitemap.xml file, and provide a route for HybridAuth to use for social authentication:

public $defaultRules = array(
    '/sitemap.xml' => '/content/sitemap',
    '/search/<page:\d+>' => '/content/search',
    '/search' => '/content/search',
    '/blog/<page:\d+>' => '/content/index',
    '/blog' => '/content/index',
    '/' => '/content/index',
    '/hybrid/<provider:\w+>' => '/hybrid/index',
);

Then, we'll implement our processRules() method:

protected function processRules() {}

CUrlManager already has a public property that we can interface with to modify the rules, so we'll inject our own rules into this. The rules property is the same property that can be accessed from within our config file. Since processRules() gets called on every page load, we'll also utilize caching so that our rules don't have to be generated every time. We'll start by trying to load any pregenerated rules from our cache, depending on whether we are in debug mode or not:

$this->rules = !YII_DEBUG ?
    Yii::app()->cache->get('Routes') : array();

If the rules we get back are already set up, we'll simply return them; otherwise, we'll generate the rules, put them into our cache, and then append our basic URL rules:

if ($this->rules == false || empty($this->rules))
{
    $this->rules = array();
    $this->rules = $this->generateClientRules();
    $this->rules = CMap::mergeArray($this->addRSSRules(), $this->rules);
    Yii::app()->cache->set('Routes', $this->rules);
}

$this->rules['<controller:\w+>/<action:\w+>/<id:\w+>'] = '/';
$this->rules['<controller:\w+>/<action:\w+>'] = '/';

return parent::processRules();

For abstraction purposes, within our processRules() method, we've utilized two methods we'll need to create: generateClientRules(), which will generate the rules for content and categories, and addRSSRules(), which will generate the RSS routes for each category.

The first method, generateClientRules(), simply merges our default rules that we defined earlier with the rules generated from our content and categories, which are populated by the generateRules() method:

private function generateClientRules()
{
    $rules = CMap::mergeArray($this->defaultRules, $this->rules);
    return CMap::mergeArray($this->generateRules(), $rules);
}

private function generateRules()
{
    return CMap::mergeArray($this->generateContentRules(), $this->generateCategoryRules());
}

The generateRules() method that we just defined actually calls the methods that build our routes. Each route is a key-value pair that will take the following form:

array(
    '<slug>' => '<controller>/<action>/id/<id>'
)

Content rules will consist of every entry that is published.
Have a look at the following code:

private function generateContentRules()
{
    $rules = array();

    $criteria = new CDbCriteria;
    $criteria->addCondition('published = 1');
    $content = Content::model()->findAll($criteria);

    foreach ($content as $el)
    {
        if ($el->slug == NULL)
            continue;

        $pageRule = $el->slug . '/<page:\d+>';
        $rule = $el->slug;

        if ($el->slug == '/')
            $pageRule = $rule = '';

        $rules[$pageRule] = "content/view/id/{$el->id}";
        $rules[$rule] = "content/view/id/{$el->id}";
    }

    return $rules;
}

Our category rules will consist of all the categories in our database. Have a look at the following code:

private function generateCategoryRules()
{
    $rules = array();

    $categories = Category::model()->findAll();
    foreach ($categories as $el)
    {
        if ($el->slug == NULL)
            continue;

        $pageRule = $el->slug . '/<page:\d+>';
        $rule = $el->slug;

        if ($el->slug == '/')
            $pageRule = $rule = '';

        $rules[$pageRule] = "category/index/id/{$el->id}";
        $rules[$rule] = "category/index/id/{$el->id}";
    }

    return $rules;
}

Finally, we'll add our RSS rules, which will allow RSS readers to read all the content for the entire site or for a particular category, as follows:

private function addRSSRules()
{
    $routes = array();

    $categories = Category::model()->findAll();
    foreach ($categories as $category)
        $routes[$category->slug . '.rss'] = "category/rss/id/{$category->id}";

    $routes['blog.rss'] = '/category/rss';
    return $routes;
}

Displaying and managing content

Now that Yii knows how to route our content, we can begin work on displaying and managing it. Begin by creating a new controller called ContentController in protected/controllers that extends CMSController. Have a look at the following line of code:

class ContentController extends CMSController {}

To start with, we'll define our accessRules() method and the default layout that we're going to use.
Here's how:

public $layout = 'default';

public function filters()
{
    return array(
        'accessControl',
    );
}

public function accessRules()
{
    return array(
        array('allow',
            'actions' => array('index', 'view', 'search'),
            'users' => array('*')
        ),
        array('allow',
            'actions' => array('admin', 'save', 'delete'),
            'users' => array('@'),
            'expression' => 'Yii::app()->user->role==2'
        ),
        array('deny', // deny all users
            'users' => array('*'),
        ),
    );
}

Rendering the sitemap

The first method we'll implement is our sitemap action. In our ContentController, create a new method called actionSitemap():

public function actionSitemap() {}

The steps to be performed are as follows:

- Since sitemaps come in XML formatting, we'll start by disabling the WebLogRoute defined in our protected/config/main.php file. This will ensure that our XML validates when search engines attempt to index it:

Yii::app()->log->routes[0]->enabled = false;

- We'll then send the appropriate XML headers, disable the rendering of the layout, and flush any content that may have been queued to be sent to the browser:

ob_end_clean();
header('Content-type: text/xml; charset=utf-8');
$this->layout = false;

- Then, we'll load all the published entries and categories and send them to our sitemap view:

$content = Content::model()->findAllByAttributes(array('published' => 1));
$categories = Category::model()->findAll();
$this->renderPartial('sitemap', array(
    'content' => $content,
    'categories' => $categories,
    'url' => 'http://' . Yii::app()->request->serverName . Yii::app()->baseUrl
));

- Finally, we have two options to render this view. We can either make it a part of our theme in themes/main/views/content/sitemap.php, or we can place it in protected/views/content/sitemap.php. Since a sitemap's structure is unlikely to change, let's put it in the protected/views folder:

<?php echo '<?xml version="1.0" encoding="UTF-8"?>'; ?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
<?php foreach ($content as $v): ?>
    <url>
        <loc><?php echo $url . '/' .
htmlspecialchars(str_replace('/', '', $v['slug']), ENT_QUOTES, "utf-8"); ?></loc>
        <lastmod><?php echo date('c', strtotime($v['updated'])); ?></lastmod>
        <changefreq>weekly</changefreq>
        <priority>1</priority>
    </url>
<?php endforeach; ?>
<?php foreach ($categories as $v): ?>
    <url>
        <loc><?php echo $url . '/' . htmlspecialchars(str_replace('/', '', $v['slug']), ENT_QUOTES, "utf-8"); ?></loc>
        <lastmod><?php echo date('c', strtotime($v['updated'])); ?></lastmod>
        <changefreq>weekly</changefreq>
        <priority>0.7</priority>
    </url>
<?php endforeach; ?>
</urlset>

You can now load http://chapter6.example.com/sitemap.xml in your browser to see the sitemap. Before you make your site live, be sure to submit this file to search engines for them to index.

Displaying a list view of content

Next, we'll implement the actions necessary to display all of our content and a particular post. We'll start by providing a paginated view of our posts. Since CListView and the Content model's search() method already provide this functionality, we can utilize those classes to generate and display this data:

- To begin with, open protected/models/Content.php and modify the return value of the search() method as follows. This will ensure that Yii's pagination uses the correct variable in our CListView and tells Yii how many results to load per page:

return new CActiveDataProvider($this, array(
    'criteria' => $criteria,
    'pagination' => array(
        'pageSize' => 5,
        'pageVar' => 'page'
    )
));

- Next, implement the actionIndex() method with the $page parameter.
We've already told our UrlManager how to handle this, which means that we'll get pretty URIs for pagination (for example, /blog, /blog/2, /blog/3, and so on):

public function actionIndex($page=1)
{
    // Model search without $_GET params
    $model = new Content('search');
    $model->unsetAttributes();
    $model->published = 1;

    $this->render('//content/all', array(
        'dataprovider' => $model->search()
    ));
}

- Then, we'll create a view in themes/main/views/content/all.php that will display the data within our dataProvider:

<?php $this->widget('zii.widgets.CListView', array(
    'dataProvider' => $dataprovider,
    'itemView' => '//content/list',
    'summaryText' => '',
    'pager' => array(
        'htmlOptions' => array('class' => 'pager'),
        'header' => '',
        'firstPageCssClass' => 'hide',
        'lastPageCssClass' => 'hide',
        'maxButtonCount' => 0
    )
));

- Finally, copy themes/main/views/content/list.php from the project resources folder so that our views can render.

Since our database has already been populated with some sample data, you can start playing around with the results right away, as shown in the following screenshot:

Displaying content by ID

Since our routing rules are already set up, displaying our content is extremely simple. All that we have to do is search for a published model with the ID passed to the view action and render it:

public function actionView($id=NULL)
{
    // Retrieve the data
    $content = Content::model()->findByPk($id);

    // beforeViewAction should catch this
    if ($content == NULL || !$content->published)
        throw new CHttpException(404, 'The article you specified does not exist.');

    $this->render('view', array(
        'id' => $id,
        'post' => $content
    ));
}

After copying themes/main/views/content/view.php from the project resources folder into your project, you'll be able to click into a particular post from the home page. In its present form, this action has introduced an interesting side effect that could negatively impact our SEO rankings on search engines: the same entry can now be accessed from two URIs.
For example, http://chapter6.example.com/content/view/id/1 and http://chapter6.example.com/quis-condimentum-tortor now bring up the same post. Fortunately, correcting this bug is fairly easy. Since the goal of our slugs is to provide more descriptive URIs, we'll simply block access to the view if a user tries to access it from the non-slugged URI.

We'll do this by creating a new method called beforeViewAction() that takes the entry ID as a parameter and gets called right after the actionView() method is called. This private method will simply check the URI from CHttpRequest to determine how actionView() was accessed, and return a 404 if it wasn't through our beautiful slugs:

private function beforeViewAction($id=NULL)
{
    // If we do not have an ID, consider it to be null, and throw a 404 error
    if ($id == NULL)
        throw new CHttpException(404, 'The specified post cannot be found.');

    // Retrieve the HTTP request
    $r = new CHttpRequest();

    // Retrieve the actual URI
    $requestUri = str_replace($r->baseUrl, '', $r->requestUri);

    // Retrieve the route
    $route = '/' . $this->getRoute() . '/' . $id;

    // Strip any query string from the URI
    $requestUri = preg_replace('/\?(.*)/', '', $requestUri);

    // If the route and the URI are the same, then a direct access attempt
    // was made, and we need to block access to the controller
    if ($requestUri == $route)
        throw new CHttpException(404, 'The requested post cannot be found.');

    return str_replace($r->baseUrl, '', $r->requestUri);
}

Then, right after our actionView() starts, we can simultaneously set the correct return URL and block access to the content if it wasn't accessed through the slug, as follows:

Yii::app()->user->setReturnUrl($this->beforeViewAction($id));

Adding comments to our CMS with Disqus

Presently, our content is only informative in nature; we have no way for our users to tell us what they thought about an entry. To encourage engagement, we can add a commenting system to our CMS to further engage with our readers.
Rather than writing our own commenting system, we can leverage comments through Disqus, a free, third-party commenting system. Even though Disqus comments are implemented in JavaScript, we can create a custom widget wrapper to display them on our site. The steps are as follows:

- To begin with, log in to the Disqus account you created at the beginning of this article, as outlined in the prerequisites section. Then, navigate to http://disqus.com/admin/create/ and fill out the form fields as prompted and as shown in the following screenshot:

- Then, add a disqus section to your protected/config/params.php file with your site shortname:

'disqus' => array(
    'shortname' => 'ch6disqusexample',
)

- Next, create a new widget in protected/components called DisqusWidget.php. This widget will be loaded within our view and will be populated by our Content model:

class DisqusWidget extends CWidget {}

- Begin by specifying the public properties that our view will be able to inject into, as follows:

public $shortname = NULL;
public $identifier = NULL;
public $url = NULL;
public $title = NULL;

- Then, overload the init() method to load the Disqus JavaScript callback and to populate the JavaScript variables with those passed to the widget, as follows:

public function init()
{
    parent::init();

    if ($this->shortname == NULL)
        throw new CHttpException(500, 'Disqus shortname is required');

    echo "<div id='disqus_thread'></div>";

    Yii::app()->clientScript->registerScript('disqus', "
        var disqus_shortname = '{$this->shortname}';
        var disqus_identifier = '{$this->identifier}';
        var disqus_url = '{$this->url}';
        var disqus_title = '{$this->title}';

        /* * * DON'T EDIT BELOW THIS LINE * * */
        (function() {
            var dsq = document.createElement('script');
            dsq.type = 'text/javascript';
            dsq.async = true;
            dsq.src = '//' + disqus_shortname + '.disqus.com/embed.js';
            (document.getElementsByTagName('head')[0] ||
             document.getElementsByTagName('body')[0]).appendChild(dsq);
        })();
    ");
}

- Finally, within our
themes/main/views/content/view.php file, load the widget as follows:

<?php $this->widget('DisqusWidget', array(
    'shortname' => Yii::app()->params['includes']['disqus']['shortname'],
    'url' => $this->createAbsoluteUrl('/' . $post->slug),
    'title' => $post->title,
    'identifier' => $post->id
)); ?>

Now, when you load any given post, Disqus comments will also be loaded with that post. Go ahead and give it a try!

Searching for content

Next, we'll implement a search method so that our users can search for posts. To do this, we'll implement an instance of CActiveDataProvider and pass that data to our themes/main/views/content/all.php view to be rendered and paginated:

public function actionSearch()
{
    $param = Yii::app()->request->getParam('q');

    $criteria = new CDbCriteria;
    $criteria->addSearchCondition('title', $param, 'OR');
    $criteria->addSearchCondition('body', $param, 'OR');

    $dataprovider = new CActiveDataProvider('Content', array(
        'criteria' => $criteria,
        'pagination' => array(
            'pageSize' => 5,
            'pageVar' => 'page'
        )
    ));

    $this->render('//content/all', array('dataprovider' => $dataprovider));
}

Since our view file already exists, we can now search for content in our CMS.

Managing content

Next, we'll implement a basic set of management tools that will allow us to create, update, and delete entries:

- We'll start by defining our loadModel() method and the actionDelete() method:

private function loadModel($id=NULL)
{
    if ($id == NULL)
        throw new CHttpException(404, 'No content with that ID exists');

    $model = Content::model()->findByPk($id);

    if ($model == NULL)
        throw new CHttpException(404, 'No content with that ID exists');

    return $model;
}

public function actionDelete($id)
{
    $this->loadModel($id)->delete();
    $this->redirect($this->createUrl('content/admin'));
}

- Next, we can implement our admin view, which will allow us to view all the content in our system and to create new entries.
Be sure to copy the themes/main/views/content/admin.php file from the project resources folder into your project before using this view:

public function actionAdmin()
{
    $model = new Content('search');
    $model->unsetAttributes();
    if (isset($_GET['Content']))
        $model->attributes = $_GET;
    $this->render('admin', array('model' => $model));
}

Finally, we'll implement a save view to create and update entries. Saving content will simply pass it through our content model's validation rules. The only override we'll be adding is ensuring that the author is assigned to the user editing the entry. Before using this view, be sure to copy the themes/main/views/content/save.php file from the project resources folder into your project:

public function actionSave($id=NULL)
{
    if ($id == NULL)
        $model = new Content;
    else
        $model = $this->loadModel($id);

    if (isset($_POST['Content']))
    {
        $model->attributes = $_POST['Content'];
        $model->author_id = Yii::app()->user->id;
        if ($model->save())
        {
            Yii::app()->user->setFlash('info', 'The article was saved');
            $this->redirect($this->createUrl('content/admin'));
        }
    }
    $this->render('save', array('model' => $model));
}

At this point, you can now log in to the system using the credentials provided in the following table and start managing entries:

Username             Password
user1@example.com    test
user2@example.com    test

Summary

In this article, we dug deeper into the Yii framework by manipulating our CUrlManager class to generate completely dynamic and clean URIs. We also covered the use of Yii's built-in theming to dynamically change the frontend appearance of our site by simply changing a configuration value.

Resources for Article:

Further resources on this subject:
Creating an Extension in Yii 2 [Article]
Yii 1.1: Using Zii Components [Article]
Agile with Yii 1.1 and PHP5: The TrackStar Application [Article]
Pat Ferrel
24 Sep 2014
7 min read

How to Build a Recommender by Running Mahout on Spark

Mahout on Spark: Recommenders

There are big changes happening in Apache Mahout. For several years it was the go-to machine learning library for Hadoop. It contained most of the best-in-class algorithms for scalable machine learning, which means clustering, classification, and recommendations. But it was written for Hadoop and MapReduce. Today a number of new parallel execution engines show great promise in speeding calculations by 10-100x (Spark, H2O, Flink). That means that instead of buying 10 computers for a cluster, one will do. That should get your manager's attention.

After releasing Mahout 0.9, the team decided to begin an aggressive retool using Spark, building in the flexibility to support other engines, and both H2O and Flink have shown active interest. This post is about moving the heart of Mahout's item-based collaborative filtering recommender to Spark.

Where we are

Mahout is currently on the 1.0 snapshot version, meaning we are working on what will be released as 1.0. For the past year or so, some of the team has been working on a Scala-based DSL (Domain Specific Language), which looks like Scala with R-like algebraic expressions. Since Scala supports not only operator overloading but also functional programming, it is a natural choice for building distributed code with rich linear algebra expressions.

Currently we have an interactive shell that runs Scala with all of the R-like expression support. Think of it as R, but supporting truly huge data in a completely distributed way. Many algorithms (the ones that can be expressed as simple linear algebra equations) are implemented with relative ease (SSVD, PCA). Scala also has lazy evaluation, which allows Mahout to slide a modern optimizer underneath the DSL. When an end product of a calculation is needed, the optimizer figures out the best path to follow and spins off the most efficient Spark jobs to accomplish the whole.
Recommenders

One of the first things we want to implement is the popular item-based recommender. But here, too, we'll introduce many innovations. It still starts from some linear algebra. Let's take the case of recommending purchases on an e-commerce site. The problem can be defined in the following terms:

r_p = recommendations for purchases for a given user. This is a vector of item-ids and strengths of recommendation.
h_p = history of purchases for a given user
A = the matrix of all purchases by all users. Rows are users, columns are items. For now we will just flag a purchase, so the matrix is all ones and zeros.

r_p = [A'A]h_p

A'A is the matrix A transposed and then multiplied by A. This is the core cooccurrence or indicator matrix used in this style of recommender. Using the Mahout Scala DSL we could write the recommender as:

val recs = (A.t %*% A) * userHist

This would produce a reasonable recommendation, but from experience we know that A'A is better calculated using a method called the Log Likelihood Ratio, which is a probabilistic measure of the importance of a cooccurrence (http://en.wikipedia.org/wiki/Likelihood-ratio_test). In general, when you see something like A'A, it can be replaced with a similarity comparison of each row with every other row. This will produce a matrix whose rows are items and whose columns are the same items. The magnitude of the value in the matrix determines the strength of similarity of the row item to the column item. In recommenders, the more similar the items, the more they were purchased by similar people. The previous line of code is replaced by the following:

val recs = CooccurrenceAnalysis.cooccurence(A) * userHist

However, this would take time to execute for each user as they visit the e-commerce site, so we'll handle that part outside of Mahout. First let's talk about data preparation.

Item Similarity Driver

Creating the indicator matrix (A'A) is the core of this type of recommender.
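To make the algebra concrete, here is a minimal sketch of the raw r_p = [A'A]h_p computation. It is written in plain Python rather than the Mahout DSL, and the tiny purchase matrix and user history are made-up illustrative data, not from the article's example:

```python
# Hypothetical purchase matrix A: rows are users, columns are items (1 = purchased).
A = [
    [1, 1, 0],  # user 0 bought items 0 and 1
    [0, 1, 1],  # user 1 bought items 1 and 2
]

def transpose(m):
    return [list(row) for row in zip(*m)]

def matmul(x, y):
    # Naive matrix multiplication, enough for this tiny example.
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*y)] for row in x]

# Cooccurrence (indicator) matrix A'A: items x items.
cooccurrence = matmul(transpose(A), A)

# History vector h_p for a user who bought only item 0, as a column vector.
h_p = [[1], [0], [0]]

# Raw recommendation strengths r_p = [A'A] h_p.
r_p = [row[0] for row in matmul(cooccurrence, h_p)]
print(r_p)  # item 1 cooccurs with item 0; item 2 does not
```

The diagonal of the cooccurrence matrix counts an item's total purchases (its self-similarity), which is why the real driver, as noted below, removes it.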
We have a quick flexible way to create this using text log files and creating output that is in an easy form to digest. The job of data prep is greatly streamlined in the Mahout 1.0 snapshot. In the past a user would have to do all the data prep themselves. This requires translating their own user and item IDs into Mahout IDs, putting the data into text tuple files and feeding them to the recommender. Out the other end you’d get a Hadoop binary file called a sequence file and you’d have to translate the Mahout IDs into something your application could understand. This is no longer required. To make this process much simpler we created the spark-itemsimilarity command line tool. After installing Mahout, Hadoop, and Spark, and assuming you have logged user purchases in some directories in HDFS, we can probably read them in, calculate the indicator matrix, and write it out with no other prep required. The spark-itemsimilarity command line takes in text-delimited files, extracts the user ID and item ID, runs the cooccurrence analysis, and outputs a text file with your application’s user and item IDs restored. 
Here is a sample input file, where we've specified a simple comma-delimited format: each line holds a timestamp, the user ID, the filter (purchase), and the item ID:

Thu Jul 10 10:52:10.996,u1,purchase,iphone
Fri Jul 11 13:52:51.018,u1,purchase,ipad
Fri Jul 11 21:42:26.232,u2,purchase,nexus
Sat Jul 12 09:43:12.434,u2,purchase,galaxy
Sat Jul 12 19:58:09.975,u3,purchase,surface
Sat Jul 12 23:08:03.457,u4,purchase,iphone
Sun Jul 13 14:43:38.363,u4,purchase,galaxy

spark-itemsimilarity will create a Spark distributed dataset (rdd) to back the Mahout DRM (distributed row matrix) that holds this data:

User/item  iPhone  iPad  Nexus  Galaxy  Surface
u1         1       1     0      0       0
u2         0       0     1      1       0
u3         0       0     0      0       1
u4         1       0     0      1       0

The output of the job is the LLR-computed "indicator matrix" and will contain this data:

Item/item  iPhone       iPad         Nexus        Galaxy       Surface
iPhone     0            1.726092435  0            0            0
iPad       1.726092435  0            0            0            0
Nexus      0            0            0            1.726092435  0
Galaxy     0            0            1.726092435  0            0
Surface    0            0            0            0            0

Reading this, we see that self-similarities have been removed, so the diagonal is all zeros. The iPhone is similar to the iPad, and the Galaxy is similar to the Nexus. The output of the spark-itemsimilarity job can be formatted in various ways, but by default it looks like this:

galaxy<tab>nexus:1.7260924347106847
ipad<tab>iphone:1.7260924347106847
nexus<tab>galaxy:1.7260924347106847
iphone<tab>ipad:1.7260924347106847
surface

On the e-commerce site, on the page displaying the Nexus, we can show that the Galaxy was purchased by the same people. Notice that application-specific IDs are preserved here, and the text file is very easy to parse in text-delimited format. The numeric values are the strength of similarity, and for the cases where there are many similar products, you can sort on that value if you want to show only the highest-weighted recommendations. Still, this is only part way to an individual recommender. We have done the [A'A] part, but now we need to do the [A'A]h_p.
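The LLR scores in the table above can be reproduced from 2x2 cooccurrence counts. Below is a small sketch in plain Python (not Mahout's actual implementation) of the log-likelihood ratio test statistic. For the iPhone/iPad pair in the sample data, the counts are k11=1 (users who bought both), k12=1 (iPhone but not iPad), k21=0 (iPad but not iPhone), and k22=2 (neither):

```python
from math import log

def llr(k11, k12, k21, k22):
    """Log-likelihood ratio (G^2) for a 2x2 cooccurrence contingency table."""
    n = k11 + k12 + k21 + k22
    row = [k11 + k12, k21 + k22]   # row totals
    col = [k11 + k21, k12 + k22]   # column totals
    cells = [(k11, row[0], col[0]), (k12, row[0], col[1]),
             (k21, row[1], col[0]), (k22, row[1], col[1])]
    # G^2 = 2 * sum over nonzero cells of k_ij * ln(k_ij * n / (rowtotal_i * coltotal_j))
    return 2.0 * sum(k * log(k * n / (r * c)) for k, r, c in cells if k > 0)

# iPhone vs iPad from the sample purchase data: 1 user bought both,
# 1 bought only the iPhone, 0 bought only the iPad, 2 bought neither.
print(llr(1, 1, 0, 2))  # ≈ 1.726092435, matching the indicator matrix values above
```

Every nonzero off-diagonal cell in the sample indicator matrix corresponds to the same 2x2 table, which is why the single value 1.726092435 appears throughout.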
Using the current user’s purchase history will personalize the recommendations. The next post in this series will talk about using a search engine to take this last step. About the author Pat is a serial entrepreneur, consultant, and Apache Mahout committer working on the next generation of Spark-based recommenders. He lives in Seattle and can be contacted through his site https://finderbots.com or @occam on Twitter. Want more Spark? We've got you covered - click here.

Packt
24 Sep 2014
22 min read

Creating an Extension in Yii 2

In this article by Mark Safronov, co-author of the book Web Application Development with Yii 2 and PHP, we'll learn to create our own extension and make it installable in a simple way. There is a process we have to follow, though, and some preparation will be needed to wire up your classes to the Yii application. The whole article is devoted to this process. (For more resources related to this topic, see here.)

Extension idea

So, how are we going to extend the Yii 2 framework as an example for this article? Let's become vile this time and make a malicious extension, which will provide a sort of phishing backdoor for us. Never do exactly the thing we'll describe in this article! It'll not give you instant access to the attacked website anyway, but a skilled black hat hacker can easily get enough information to achieve total control over your application.

The idea is this: our extension will provide a special route (a controller with a single action inside), which will dump the complete application configuration to the web page. Let's say it'll be reachable from the route /app-info/configuration.

We cannot, however, just get the contents of the configuration file itself, at least not reliably. At the point where we can attach ourselves to the application instance, the original configuration array is inaccessible, and even if it were accessible, we couldn't be sure about where it came from anyway. So, we'll inspect the runtime status of the application and return the most important pieces of information we can fetch at the stage of controller action resolution. That's the exact payload we want to introduce.
public function actionConfiguration()
{
    $app = Yii::$app;
    $config = [
        'components' => $app->components,
        'basePath' => $app->basePath,
        'params' => $app->params,
        'aliases' => Yii::$aliases
    ];
    return \yii\helpers\Json::encode($config);
}

The preceding code is the core of the extension and is assumed in the following sections. In fact, if you know the value of the basePath setting of the application, a list of its aliases, the settings for the components (among which the DB connection may reside), and all the custom parameters that developers set manually, you can map the target application quite reliably. Given that you know all the credentials this way, you have an enormous amount of highly valuable information about the application. All you need to do now is make the user install this extension.

Creating the extension contents

Our plan is as follows:

We will develop our extension in a folder separate from our example CRM application.
This extension will be named yii2-malicious, to be consistent with the naming of other Yii 2 extensions.
Given the kind of payload we saw earlier, our extension will consist of a single controller and some special wiring code (which we haven't learned about yet) to automatically attach this controller to the application.
Finally, to consider this subproject a true Yii 2 extension and not just some random library, we want it to be installable in the same way as other Yii 2 extensions.

Preparing the boilerplate code for the extension

Let's make a separate directory, initialize the Git repository there, and add the AppInfoController to it.
In the bash command line, this can be achieved by the following commands:

$ mkdir yii2-malicious && cd $_
$ git init
$ > AppInfoController.php

Inside the AppInfoController.php file, we'll write the usual boilerplate code for a Yii 2 controller as follows:

namespace malicious;

use yii\web\Controller;

class AppInfoController extends Controller
{
    // Action here
}

Put the action defined in the preceding code snippet inside this controller and we're done with it. Note the namespace: it is not the same as the folder this controller is in, and this is not according to our usual autoloading rules. We will see later in this article why this is not an issue, because of how Yii 2 treats the autoloading of classes from extensions.

Now this controller needs to be wired to the application somehow. We already know that the application has a special property called controllerMap, in which we can manually attach controller classes. However, how do we do this automatically, better yet, right at application startup time? Yii 2 has a special feature called bootstrapping to support exactly this: attaching some activity to the beginning of the application lifetime (not at the very beginning, but before handling the request for sure). This feature is tightly related to the extensions concept in Yii 2, so it's a perfect time to explain it.

FEATURE – bootstrapping

To explain the bootstrapping concept in short: you can declare some components of the application in the yii\base\Application::$bootstrap property. They'll be properly instantiated at the start of the application. If any of these components implements the BootstrapInterface interface, its bootstrap() method will be called, so you'll get the application initialization enhancement for free. Let's elaborate on this.

The yii\base\Application::$bootstrap property holds an array of generic values that you tell the framework to initialize beforehand. It's basically an improvement over the preload concept from Yii 1.x.
You can specify four kinds of values to initialize, as follows:

The ID of an application component
The ID of some module
A class name
A configuration array

If it's the ID of a component, this component is fully initialized. If it's the ID of a module, this module is fully initialized. It matters greatly because Yii 2 employs lazy loading for the components and modules system, and they are usually initialized only when explicitly referenced. Being bootstrapped means that their initialization, regardless of whether it's slow or resource-consuming, always happens, and always happens at the start of the application. If you have a component and a module with identical IDs, then the component will be initialized and the module will not be initialized!

If the value mentioned in the bootstrap property is a class name or a configuration array, then an instance of the class in question is created using the yii\BaseYii::createObject() facility. The instance created will be thrown away immediately if it doesn't implement the yii\base\BootstrapInterface interface. If it does, its bootstrap() method will be called. Then, the object will be thrown away.

So, what's the effect of this bootstrapping feature? We already used this feature while installing the debug extension. We had to bootstrap the debug module using its ID, for it to be able to attach the event handler so that we would get the debug toolbar at the bottom of each page of our web application. This feature is indispensable if you need to be sure that some activity will always take place at the start of the application lifetime. The BootstrapInterface interface is basically an incarnation of the command pattern. By implementing this interface, we gain the ability to attach any activity, not necessarily bound to a component or module, to the application initialization.

FEATURE – extension registering

The bootstrapping feature is repeated in the handling of the yii\base\Application::$extensions property.
This property is the only place where the concept of an extension can be seen in the Yii framework. Extensions in this property are described as a list of arrays, and each of them should have the following fields:

name: This field holds the name of the extension.
version: This field holds the extension's version (nothing will really check it, so it's only for reference).
bootstrap: This field holds the data for this extension's Bootstrap. This field is filled with the same kinds of elements as Yii::$app->bootstrap, described previously, and has the same semantics.
alias: This field holds the mapping from Yii 2 path aliases to real directory paths.

When the application registers an extension, it does two things in the following order:

It registers the aliases from the extension, using the Yii::setAlias() method.
It initializes the things mentioned in the bootstrap of the extension, in exactly the same way as described in the previous section.

Note that the extensions' bootstraps are processed before the application's bootstraps. Registering aliases is crucial to the whole concept of extensions in Yii 2, because of the Yii 2 PSR-4 compatible autoloader. Here is a quote from the documentation block for the yii\BaseYii::autoload() method:

If the class is namespaced (e.g. yii\base\Component), it will attempt to include the file associated with the corresponding path alias (e.g. @yii/base/Component.php). This autoloader allows loading classes that follow the PSR-4 standard and have its top-level namespace or sub-namespaces defined as path aliases.

The PSR-4 standard is available online at http://www.php-fig.org/psr/psr-4/. Given that behavior, the alias setting of the extension is basically a way to tell the autoloader the name of the top-level namespace of the classes in your extension code base.
Let's say you have the following value of the alias setting of your extension:

"alias" => ["@companyname/extensionname" => "/some/absolute/path"]

If you have the /some/absolute/path/subdirectory/ClassName.php file and, according to PSR-4 rules, it contains the class whose fully qualified name is companyname\extensionname\subdirectory\ClassName, Yii 2 will be able to autoload this class without problems.

Making the bootstrap for our extension – hideous attachment of a controller

We have a controller already prepared in our extension. Now we want this controller to be automatically attached to the application under attack when the extension is processed. This is achievable using the bootstrapping feature we just learned about. Let's create the malicious\Bootstrap class for this cause inside the code base of our extension, with the following boilerplate code:

<?php

namespace malicious;

use yii\base\BootstrapInterface;

class Bootstrap implements BootstrapInterface
{
    /** @param \yii\web\Application $app */
    public function bootstrap($app)
    {
        // Controller addition will be here.
    }
}

With this preparation, the bootstrap() method will be called at the start of the application, provided we wire everything up correctly. But first, we should consider how we manipulate the application to make use of our controller. This is easy, really, because there's the yii\web\Application::$controllerMap property (don't forget that it's inherited from yii\base\Module, though). We'll just do the following inside the bootstrap() method:

$app->controllerMap['app-info'] = 'malicious\AppInfoController';

We will rely on the composer and Yii 2 autoloaders to actually find malicious\AppInfoController. Just imagine that you can do anything inside the bootstrap. For example, you can open a CURL connection with some botnet and send the accumulated application information there. Never trust random extensions on the Web. This actually concludes what we need to do to complete our extension.
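The alias-to-path lookup behind this PSR-4 autoloading can be illustrated with a short sketch. It is written in Python purely for illustration (Yii's real autoloader is the PHP method yii\BaseYii::autoload() quoted earlier), and it assumes, for simplicity, that every registered alias covers exactly the first two namespace segments:

```python
# Sketch of PSR-4-style resolution: map a namespace-prefix alias to a base
# directory, then turn the rest of the class name into a relative file path.
ALIASES = {
    "@companyname/extensionname": "/some/absolute/path",  # from the extension config
}

def resolve(fully_qualified_name):
    # "companyname\extensionname\subdirectory\ClassName"
    #   -> alias "@companyname/extensionname" + relative path "subdirectory/ClassName.php"
    parts = fully_qualified_name.split("\\")
    alias = "@" + "/".join(parts[:2])
    base = ALIASES[alias]
    return base + "/" + "/".join(parts[2:]) + ".php"

print(resolve("companyname\\extensionname\\subdirectory\\ClassName"))
# -> /some/absolute/path/subdirectory/ClassName.php
```

The real autoloader matches the longest registered alias prefix rather than a fixed two segments, but the mapping from namespace to file path is exactly this mechanical.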
All that's left now is to make our extension installable in the same way as the other Yii 2 extensions we have been using up until now. If you need to attach this malicious extension to your application manually, and you have a folder that holds the code base of the extension at the path /some/filesystem/path, then all you need to do is write the following code inside the application configuration:

'extensions' => array_merge(
    (require __DIR__ . '/../vendor/yiisoft/extensions.php'),
    [
        'malicious/app-info' => [
            'name' => 'Application Information Dumper',
            'version' => '1.0.0',
            'bootstrap' => 'malicious\Bootstrap',
            'alias' => ['@malicious' => '/some/filesystem/path'] // that's the path to the extension
        ]
    ]
)

Please note the exact way of specifying the extensions setting. We're merging the contents of the extensions.php file supplied by the Yii 2 distribution from composer and our own manual definition of the extension. This extensions.php file is what allows Yiisoft to distribute extensions in such a way that you are able to install them with a single composer require invocation. Let's learn now what we need to do to repeat this feature.

Making the extension installable as... erm, extension

First, to make it clear, we are talking here only about the situation when Yii 2 is installed by composer, and we want our extension to be installable through composer as well. This gives us the baseline under all of our assumptions. Let's see the extensions that we have installed so far:

Gii the code generator
The Twitter Bootstrap extension
The Debug extension
The SwiftMailer extension

We can install all of these extensions using composer. We introduced the extensions.php file reference when we installed the Gii extension. Have a look at the following code:

'extensions' => (require __DIR__ .
'/../vendor/yiisoft/extensions.php') If we open the vendor/yiisoft/extensions.php file (given that all extensions from the preceding list were installed) and look at its contents, we'll see the following code (note that in your installation, it can be different): <?php $vendorDir = dirname(__DIR__); return array ( 'yiisoft/yii2-bootstrap' => array ( 'name' => 'yiisoft/yii2-bootstrap', 'version' => '9999999-dev', 'alias' => array ( '@yii/bootstrap' => $vendorDir . '/yiisoft/yii2-bootstrap', ), ), 'yiisoft/yii2-swiftmailer' => array ( 'name' => 'yiisoft/yii2-swiftmailer', 'version' => '9999999-dev', 'alias' => array ( '@yii/swiftmailer' => $vendorDir . ' /yiisoft/yii2-swiftmailer', ), ), 'yiisoft/yii2-debug' => array ( 'name' => 'yiisoft/yii2-debug', 'version' => '9999999-dev', 'alias' => array ( '@yii/debug' => $vendorDir . '/yiisoft/yii2-debug', ), ), 'yiisoft/yii2-gii' => array ( 'name' => 'yiisoft/yii2-gii', 'version' => '9999999-dev', 'alias' => array ( '@yii/gii' => $vendorDir . '/yiisoft/yii2-gii', ), ), ); One extension was highlighted to stand out from the others. So, what does all this mean to us? First, it means that Yii 2 somehow generates the required configuration snippet automatically when you install the extension's composer package Second, it means that each extension provided by the Yii 2 framework distribution will ultimately be registered in the extensions setting of the application Third, all the classes in the extensions are made available in the main application code base by the carefully crafted alias settings inside the extension configuration Fourth, ultimately, easy installation of Yii 2 extensions is made possible by some integration between the Yii framework and the composer distribution system The magic is hidden inside the composer.json manifest of the extensions built into Yii 2. The details about the structure of this manifest are written in the documentation of composer, which is available at https://getcomposer.org/doc/04-schema.md. 
We'll need only one field, though, and that is type. Yii 2 employs a special type of composer package, named yii2-extension. If you check the manifests of yii2-debug, yii2-swiftmailer, and other extensions, you'll see that they all have the following line inside:

"type": "yii2-extension",

Normally composer will not understand that this type of package is to be installed. But the main yii2 package, containing the framework itself, depends on the special auxiliary yii2-composer package:

"require": {
    … other requirements ...
    "yiisoft/yii2-composer": "*",

This package provides a Composer Custom Installer (read about it at https://getcomposer.org/doc/articles/custom-installers.md), which enables this package type. The whole point of the yii2-extension package type is to automatically update the extensions.php file with the information from the extension's manifest file. Basically, all we need to do now is craft the correct composer.json manifest file inside the extension's code base. Let's write it step by step.

Preparing the correct composer.json manifest

We first need a block with an identity. Have a look at the following lines of code:

"name": "malicious/app-info",
"version": "1.0.0",
"description": "Example extension which reveals important information about the application",
"keywords": ["yii2", "application-info", "example-extension"],
"license": "CC-0",

Technically, we must provide only name. Even version can be omitted if our package meets two prerequisites:

It is distributed from some version control system repository, such as a Git repository
It has tags in this repository, correctly identifying the versions in the commit history

And we do not want to bother with that right now. Next, we need to depend on the Yii 2 framework just in case.
Normally, users will install the extension after the framework is already in place, but in the case of the extension already being listed in the require section of composer.json, among other things, we cannot be sure about the exact ordering of the require statements, so it's better (and easier) to just declare the dependency explicitly as follows:

"require": {
    "yiisoft/yii2": "*"
},

Then, we must provide the type as follows:

"type": "yii2-extension",

After this, for the Yii 2 extension installer, we have to provide two additional blocks. The autoload block will be used to correctly fill the alias section of the extension configuration. Have a look at the following code:

"autoload": {
    "psr-4": {
        "malicious\\": ""
    }
},

What we basically mean is that our classes are laid out according to PSR-4 rules, in such a way that the classes in the malicious namespace are placed right inside the root folder. The second block is extra, in which we tell the installer that we want to declare a bootstrap section for the extension configuration:

"extra": {
    "bootstrap": "malicious\\Bootstrap"
},

Our manifest file is complete now. Commit everything to the version control system:

$ git commit -a -m "Added the Composer manifest file to repo"

Now, we'll add the tag at last, corresponding to the version we declared, as follows:

$ git tag 1.0.0

We already mentioned earlier the purpose for which we're doing this. All that's left is to tell composer from where to fetch the extension contents.

Configuring the repositories

We need to configure some kind of repository for the extension now so that it is installable. The easiest way is to use the Packagist service, available at https://packagist.org/, which has seamless integration with composer.
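Putting the identity, type, require, autoload, and extra pieces from the preceding steps together, the extension's complete composer.json manifest would look roughly like this (a sketch assembled from the fragments above, using the name and license chosen earlier in this article):

```json
{
    "name": "malicious/app-info",
    "version": "1.0.0",
    "description": "Example extension which reveals important information about the application",
    "keywords": ["yii2", "application-info", "example-extension"],
    "license": "CC-0",
    "type": "yii2-extension",
    "require": {
        "yiisoft/yii2": "*"
    },
    "autoload": {
        "psr-4": {
            "malicious\\": ""
        }
    },
    "extra": {
        "bootstrap": "malicious\\Bootstrap"
    }
}
```

Note that in JSON strings the namespace separator has to be escaped, which is why the backslashes are doubled.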
It has the following pro and con: Pro: You don't need to declare anything additional in the composer.json file of the application you want to attach the extension to Con: You must have a public VCS repository (either Git, SVN, or Mercurial) where your extension is published In our case, where we are just in fact learning about how to install things using composer, we certainly do not want to make our extension public. Do not use Packagist for the extension example we are building in this article. Let's recall our goal. Our goal is to be able to install our extension by calling the following command at the root of the code base of some Yii 2 application: $ php composer.phar require "malicious/app-info:*" After that, we should see something like the following screenshot after requesting the /app-info/configuration route: This corresponds to the following structure (the screenshot is from the http://jsonviewer.stack.hu/ web service): Put the extension to some public repository, for example, GitHub, and register a package at Packagist. This command will then work without any preparation in the composer.json manifest file of the target application. But in our case, we will not make this extension public, and so we have two options left for us. The first option, which is perfectly suited to our learning cause, is to use the archived package directly. For this, you have to add the repositories section to composer.json in the code base of the application you want to add the extension to: "repositories": [// definitions of repositories for the packages required by thisapplication] To specify the repository for the package that should be installed from the ZIP archive, you have to grab the entire contents of the composer.json manifest file of this package (in our case, our malicious/app-info extension) and put them as an element of the repositories section, verbatim. 
This is the most complex way to set up a composer package requirement, but this way, you can depend on absolutely any folder with files (packaged into an archive). Of course, the contents of the composer.json of the extension do not specify the actual location of the extension's files. You have to add this to repositories manually. In the end, you should have the following additional section inside the composer.json manifest file of the target application:

"repositories": [
    {
        "type": "package",
        "package": {
            // … skipping whatever was copied verbatim from the composer.json of the extension...
            "dist": {
                "url": "/home/vagrant/malicious.zip", // example file location
                "type": "zip"
            }
        }
    }
]

This way, we specify the location of the package in the filesystem of the same machine and tell composer that this package is a ZIP archive. Now, you should just zip the contents of the yii2-malicious folder we have created for the extension, put the archive somewhere on the target machine, and provide the correct URL. Please note that it's necessary to archive only the contents of the extension and not the folder itself. After this, you run composer on the machine that really has this URL accessible (you can use http:// type URLs, of course, too), and then you get the corresponding response from composer.

To check that Yii 2 really installed the extension, you can open the file vendor/yiisoft/extensions.php and check whether it contains the following block now:

'malicious/app-info' =>
array (
  'name' => 'malicious/app-info',
  'version' => '1.0.0.0',
  'alias' =>
  array (
    '@malicious' => $vendorDir . '/malicious/app-info',
  ),
  'bootstrap' => 'malicious\Bootstrap',
),

(The indentation was preserved as is from the actual file.) If this block is indeed there, then all you need to do is open the /app-info/configuration route and see whether it reports JSON to you. It should.

The pros and cons of the file-based installation are as follows:

Pros:
You can specify any file as long as it is reachable by some URL. The ZIP archive management capabilities exist on virtually any kind of platform today.
You don't need to set up any version control system repository. It's of dubious benefit, though.

Cons:
There is too much work in the composer.json manifest file of the target application. The requirement to copy the entire manifest to the repositories section is overwhelming and leads to code duplication.
The manifest from the extension package will not be processed at all. This means that you cannot just strip the entry in repositories, leaving only the dist and name sections there, because the Yii 2 installer will not be able to get to the autoload and extra sections.

The last method is to use a local version control system repository. We already have everything committed to the Git repository, and we have the correct tag placed here, corresponding to the version we declared in the manifest. This is everything we need to prepare inside the extension itself. Now, we need to modify the target application's manifest to add the repositories section in the same way we did previously, but this time we will introduce a lot less code there:

"repositories": [
    {
        "type": "git",
        "url": "/home/vagrant/yii2-malicious/" // put your own URL here
    }
]

All that's needed from you is to specify the correct URL to the Git repository of the extension we were preparing at the beginning of this article. After you specify this repository in the target application's composer manifest, you can just issue the desired command:

$ php composer.phar require "malicious/app-info:1.0.0"

Everything will be installed as usual. Confirm the successful installation again by having a look at the contents of vendor/yiisoft/extensions.php and by accessing the /app-info/configuration route in the application. The pros and cons of the repository-based installation are as follows:

Pro: Relatively little code to write in the application's manifest.
Pro: You don't need to really publish your extension (or the package in general).
In some settings this is really useful, for closed-source software, for example.

Con: You still have to meddle with the manifest of the application itself, which can be out of your control; in this case, you'll have to guide your users on how to install your extension, which is not good for PR.

In short, the following pieces inside the composer.json manifest turn an arbitrary composer package into a Yii 2 extension:

First, we tell composer to use the special Yii 2 installer for packages as follows:

"type": "yii2-extension"

Then, we tell the Yii 2 extension installer where the bootstrap for the extension (if any) is as follows:

"extra": {"bootstrap": "<Fully qualified name>"}

Next, we tell the Yii 2 extension installer how to prepare aliases for your extension so that classes can be autoloaded as follows:

"autoload": {"psr-4": {"<namespace>": "<folder path>"}}

Finally, we add the explicit requirement of the Yii 2 framework itself in the following code, so we'll be sure that the Yii 2 extension installer will be installed at all:

"require": {"yiisoft/yii2": "*"}

Everything else is the details of the installation of any other composer package, which you can read about in the official composer documentation.
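Put together, a minimal composer.json combining these pieces might look like the following sketch (the package name, namespace, and bootstrap class are illustrative placeholders, not taken from a real project):

```json
{
    "name": "acme/yii2-sample-extension",
    "type": "yii2-extension",
    "require": {
        "yiisoft/yii2": "*"
    },
    "autoload": {
        "psr-4": {
            "acme\\sample\\": "src/"
        }
    },
    "extra": {
        "bootstrap": "acme\\sample\\Bootstrap"
    }
}
```

With a manifest shaped like this, the Yii 2 installer can register the alias and bootstrap class in vendor/yiisoft/extensions.php during the usual composer install run.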
It was for three reasons:

- The extension was fun to make (we hope).
- We showed that using bootstrap mechanics, we can basically automatically wire up the pieces of the extension to the target application without any need for elaborate manual installation instructions.
- We showed the potential danger in installing random extensions from the Web, as an extension can run absolutely arbitrary code right at the application initialization and, more than that, at each request made to the application.

We have discussed three methods of distribution of composer packages, which also apply to Yii 2 extensions. The general rule of thumb is this: if you want your extension to be publicly available, just use the Packagist service. In any other case, use the local repositories, as you can use both local filesystem paths and web URLs. We also looked at the option of attaching the extension completely manually, without using the composer installation at all.

Resources for Article:

Further resources on this subject: Yii: Adding Users and User Management to Your Site [Article] Meet Yii [Article] Yii 1.1: Using Zii Components [Article]

Protecting GPG Keys in BeagleBone
Packt
24 Sep 2014
23 min read
In this article by Josh Datko, author of BeagleBone for Secret Agents, you will learn how to use the BeagleBone Black to safeguard e-mail encryption keys.

After our investigation into BBB hardware security, we'll now use that technology to protect your personal encryption keys for the popular GPG software. GPG is a free implementation of the OpenPGP standard. This standard was developed based on the work of Philip Zimmerman and his Pretty Good Privacy (PGP) software. PGP has a complex socio-political backstory, which we'll briefly cover before getting into the project. For the project, we'll treat the BBB as a separate cryptographic co-processor and use the CryptoCape, with a keypad code entry device, to protect our GPG keys when they are not in use. Specifically, we will do the following:

- Tell you a little about the history and importance of the PGP software
- Perform basic threat modeling to analyze your project
- Create a strong PGP key using the free GPG software
- Teach you to use the TPM to protect encryption keys

History of PGP

The software used in this article would have once been considered a munition by the U.S. Government. Exporting it without a license from the government would have violated the International Traffic in Arms Regulations (ITAR). As late as the early 1990s, cryptography was heavily controlled and restricted. While the early 90s are filled with numerous accounts by crypto-activists, all of which are well documented in Steven Levy's Crypto, there is one man in particular who was the driving force behind the software in this project: Philip Zimmerman.

Philip Zimmerman had a small pet project around the year 1990, which he called Pretty Good Privacy. Motivated by a strong childhood passion for codes and ciphers, combined with a sense of political activism against a government capable of strong electronic surveillance, he set out to create a strong encryption program for the people (Levy 2001).
One incident in particular helped to motivate Zimmerman to finish PGP and publish his work. This was the language that the then U.S. Senator Joseph Biden added to Senate Bill #266, which would mandate that:

"Providers of electronic communication services and manufacturers of electronic communications service equipment shall ensure that communication systems permit the government to obtain the plaintext contents of voice, data, and other communications when appropriately authorized by law."

In 1991, in a rush to release PGP 1.0 before it became illegal, Zimmerman released his software as freeware to the Internet. Subsequently, after PGP spread, the U.S. Government opened a criminal investigation on Zimmerman for the violation of the U.S. export laws. Zimmerman, in what is best described as a legal hack, published the entire source code of PGP, including instructions on how to scan it back into digital form, as a book. As Zimmerman describes:

"It would be politically difficult for the Government to prohibit the export of a book that anyone may find in a public library or a bookstore." (Zimmerman, 1995)

A book published in the public domain would no longer fall under ITAR export controls. The genie was out of the bottle; the government dropped its case against Zimmerman in 1996.

Reflecting on the Crypto Wars

Zimmerman's battle is considered a resilient victory. Many other outspoken supporters of strong cryptography, known as cypherpunks, also won battles popularizing and spreading encryption technology. But if the Crypto Wars were won in the early nineties, why hasn't cryptography become ubiquitous? Well, to a degree, it has. When you make purchases online, they should be protected by strong cryptography. Almost nobody would insist that their bank or online store not use cryptography, and most people probably feel more secure knowing that they do.
But what about personal privacy-protecting software? For these tools, habits must change, as the normal e-mail, chat, and web browsing tools are insecure by default. This change causes tension and resistance towards adoption. Also, security tools are notoriously hard to use. In the seminal paper on security usability, researchers concluded that the then-current PGP version 5.0, complete with a Graphical User Interface (GUI), was not able to prevent users, who were inexperienced with cryptography but all of whom had at least some college education, from making catastrophic security errors (Whitten 1999). Glenn Greenwald delayed his initial contact with Edward Snowden for roughly two months because he thought GPG was too complicated to use (Greenwald, 2014). Snowden absolutely refused to share anything with Greenwald until he installed GPG.

GPG and PGP enable an individual to protect their own communications. Implicitly, you must also trust the receiving party not to forward your plaintext communication. GPG expects you to protect your private key and does not rely on a third party. While this adds some complexity and maintenance burden, trusting a third party with your private key can be disastrous. In August of 2013, Ladar Levison courageously decided to shut down his own company, Lavabit, an e-mail provider, rather than turn over his users' data to the authorities. The Lavabit service generated and stored your private key. While this key was encrypted to the user's password, the server still had access to the raw key. Even though the Lavabit service relieved users of managing their private key themselves, it put Levison in this awkward position. To use GPG properly, you should never turn over your private key. For a complete analysis of Lavabit, see Moxie Marlinspike's blog post at http://www.thoughtcrime.org/blog/lavabit-critique/.
Given the breadth and depth of state surveillance capabilities, there is a rekindled interest in protecting one's privacy. Researchers are now designing secure protocols with these threats in mind (Borisov, 2014). Philip Zimmerman ended the chapter Why Do You Need PGP? in the Official PGP User's Guide with the following statement, which is as true today as it was when first inked:

"PGP empowers people to take their privacy into their own hands. There's a growing social need for it."

Developing a threat model

We introduced the concept of a threat model. A threat model is an analysis of the security of a system that identifies assets, threats, vulnerabilities, and risks. Like any model, the depth of the analysis can vary. In the upcoming section, we'll present a cursory analysis so that you can start thinking about this process. This analysis will also help us understand the capabilities and limitations of our project.

Outlining the key protection system

The first step of our analysis is to provide a clear description of the system we are trying to protect. In this project, we'll build a logical GPG co-processor using the BBB and the CryptoCape. We'll store the GPG keys on the BBB and then connect to the BBB over Secure Shell (SSH) to use the keys and to run GPG. The CryptoCape will be used to encrypt your GPG key when not in use, known as at rest. We'll add a keypad to collect a numeric code, which will be provided to the TPM. This will allow the TPM to unwrap your GPG key. The idea for this project was inspired by Peter Gutmann's work on open source cryptographic co-processors (Gutmann, 2000). The BBB, when acting as a co-processor to a host, is extremely flexible and, considering the power usage, relatively high in performance. By running sensitive code that will have access to cleartext encryption keys on separate hardware, we gain an extra layer of protection (or at the minimum, a layer of indirection).
Identifying the assets we need to protect

Before we can protect anything, we must know what to protect. The most important assets are the GPG private keys. With these keys, an attacker can decrypt past encrypted messages, recover future messages, and use the keys to impersonate you. By protecting your private key, we are also protecting your reputation, which is another asset. Our decrypted messages are also an asset. An attacker may not care about your key if he/she can easily access your decrypted messages. The BBB itself is an asset that needs protecting. If the BBB is rendered inoperable, then an attacker has successfully prevented you from accessing your private keys, which is known as a denial-of-service (DoS) attack.

Threat identification

To identify the threats against our system, we need to classify the capabilities of our adversaries. This is a highly personal analysis, but we can generalize our adversaries into three archetypes: a well-funded state actor, a skilled cracker, and a jealous ex-lover. The state actor has nearly limitless resources, both from a financial and a personnel point of view. The cracker is a skilled operator, but lacks the funding and resources of the state actor. The jealous ex-lover is not a sophisticated computer attacker, but is very motivated to do you harm.

Unfortunately, if you are the target of directed surveillance from a state actor, you probably have much bigger problems than your GPG keys. This actor can put your entire life under monitoring, and why go through the trouble of stealing your GPG keys when the hidden video camera in the wall records everything on your screen? Also, it's reasonable to assume that everyone you are communicating with is also under surveillance, and it only takes one mistake from one person to reveal your plans for world domination. The adage by Benjamin Franklin is apropos here: Three may keep a secret if two of them are dead. However, properly using GPG will protect you from global passive surveillance.
When used correctly, neither your Internet Service Provider, nor your e-mail provider, nor any passive attacker would learn the contents of your messages. The passive adversary is not going to engage your system, but they could monitor a significant amount of Internet traffic in an attempt to collect it all. Therefore, the confidentiality of your messages should remain protected.

We'll assume the cracker trying to harm you is remote and does not have physical access to your BBB. We'll also assume the worst case: that the cracker has compromised your host machine. In this scenario, there is, unfortunately, a lot that the cracker can do. He can install a key logger and capture everything, including the passwords typed on your computer. He will not be able to get the code that we'll enter on the BBB; however, he would be able to log in to the BBB when the key is available.

The jealous ex-lover doesn't understand computers very well, but he doesn't need to, because he knows how to use a golf club. He knows that this BBB connected to your computer is somehow important to you because you've talked his ear off about this really cool project that you read about in a book. He can physically destroy the BBB and with it, your private key (and probably the relationship as well!).

Identifying the risks

How likely are the previous risks? The risk of active government surveillance in most countries is fortunately low. However, the consequences of this attack are very damaging. The risk of being caught up in passive surveillance by a state actor, as we have learned from Edward Snowden, is very likely. However, by using GPG, we add protection against this threat. An active cracker seeking to harm you is probably unlikely. Contracting keystroke-capturing malware, however, is not an unreasonable event. A 2013 study by Microsoft concluded that 8 out of every 1,000 computers were infected with malware.
You may be tempted to play these odds, but let's rephrase the statistic: in a group of 125 computers, one is infected with malware. A school or university easily has more computers than this. Lastly, only you can assess the risk of a jealous ex-lover. For the full Microsoft report, refer to http://blogs.technet.com/b/security/archive/2014/03/31/united-states-malware-infection-rate-more-than-doubles-in-the-first-half-of-2013.aspx.

Mitigating the identified risks

If you find yourself the target of a state, this project alone is not going to help much. We can protect ourselves somewhat from the cracker with two strategies. The first is that instead of connecting the BBB to your laptop or computer, you can use the BBB as a standalone machine and transfer files via a microSD card. This is known as an air-gap. With a dedicated monitor and keyboard, it is much less likely for software vulnerabilities to break the gap and infect the BBB. However, this comes at a high level of personal inconvenience, depending on how often you encrypt files. If you consider the risk of running the BBB attached to your computer too high, create an air-gapped BBB for maximum protection. If you deem the risk low, because you've hardened your computer and have other protection mechanisms, then keep the BBB attached to the computer. Note that an air-gapped computer can still be compromised: in 2010, a highly specialized worm known as Stuxnet was able to spread to network-isolated machines through USB flash drives.

The second strategy is to somehow enter the GPG passphrase directly into the BBB without using the host's keyboard. After we complete the project, we'll suggest a mechanism to do this, but it is slightly more complicated. This would eliminate the threat of the key logger, since the PIN is entered directly. The mitigation against the ex-lover is to treat your BBB as you would your own wallet, and don't leave it out of your sight.
It's slightly larger than you would want, but it's certainly small enough to fit in a small backpack or briefcase.

Summarizing our threat model

Our threat model, while cursory, illustrates the thought process one should go through before using or developing security technologies. The term threat model is specific to the security industry, but it's really just proper planning. The purpose of this analysis is to find logic bugs and prevent you from spending thousands of dollars on high-tech locks for your front door while you keep your back door unlocked. Now that we understand what we are trying to protect and why it is important to use GPG, let's build the project.

Generating GPG keys

First, we need to install GPG on the BBB. It is most likely already installed, but you can check and install it with the following command:

sudo apt-get install gnupg gnupg-curl

Next, we need to add a secret key. For those who already have a secret key, you can import your secret key ring, secring.gpg, to your ~/.gnupg folder. For those who want to create a new key on the BBB, proceed to the upcoming section. This project assumes some familiarity with GPG. If GPG is new to you, the Free Software Foundation maintains the Email Self-Defense guide, which is a very approachable introduction to the software and can be found at https://emailselfdefense.fsf.org/en/index.html.

Generating entropy

If you decided to create a new key on the BBB, there are a few technicalities we must consider. First of all, GPG will need a lot of random data to generate the keys. The amount of random data available in the kernel is proportional to the amount of entropy that is available. You can check the available entropy with the following command:

cat /proc/sys/kernel/random/entropy_avail

If this command returns a relatively low number, under 200, then GPG will not have enough entropy to generate a key.
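The same check can be scripted so that key generation only starts once the pool is deep enough. The sketch below is illustrative: the 200-entry threshold comes from the rule of thumb above, and the helper names are our own, not part of any GPG tooling.

```python
# Sketch: wait until the kernel entropy pool is deep enough for key generation.
# The threshold of 200 mirrors the rule of thumb above; names are illustrative.
import time

ENTROPY_FILE = "/proc/sys/kernel/random/entropy_avail"
THRESHOLD = 200

def entropy_ok(available, threshold=THRESHOLD):
    """Return True when the reported entropy estimate is high enough."""
    return available >= threshold

def read_entropy(path=ENTROPY_FILE):
    """Read the kernel's current entropy estimate (Linux only)."""
    with open(path) as f:
        return int(f.read().strip())

def wait_for_entropy(poll_seconds=5):
    """Block until the entropy pool clears the threshold."""
    while not entropy_ok(read_entropy()):
        time.sleep(poll_seconds)
```

You could call wait_for_entropy() from a wrapper script before invoking gpg --gen-key, instead of polling the /proc file by hand.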
On a PC, one can increase the amount of entropy by interacting with the computer, such as typing on the keyboard or moving the mouse. However, such sources of entropy are difficult for embedded systems, and in our current setup, we don't have the luxury of moving a mouse. Fortunately, there are a few tools to help us. If your BBB is running kernel version 3.13 or later, we can use the hardware random number generator on the AM3358 to help us out. You'll need to install the rng-tools package. Once installed, you can edit /etc/default/rng-tools and add the following line to register the hardware random number generator for rng-tools:

HRNGDEVICE=/dev/hwrng

After this, you should start the rng-tools daemon with:

/etc/init.d/rng-tools start

If you don't have /dev/hwrng (currently, the chips on the CryptoCape do not yet have character device support and aren't available as /dev/hwrng), then you can install haveged. This daemon implements the Hardware Volatile Entropy Gathering and Expansion (HAVEGE) algorithm, the details of which are available at http://www.irisa.fr/caps/projects/hipsor/. This daemon will ensure that the BBB maintains a pool of entropy, which will be sufficient for generating a GPG key on the BBB.

Creating a good gpg.conf file

Before you generate your key, we need to establish some more secure defaults for GPG. As we discussed earlier, it is still not as easy as it should be to use e-mail encryption. Riseup.net, an e-mail provider with a strong social cause, maintains an OpenPGP best practices guide at https://help.riseup.net/en/security/message-security/openpgp/best-practices. This guide details how to harden your GPG configuration and provides the motivation behind each option. It is well worth a read to understand the intricacies of GPG key management.
Jacob Applebaum maintains an implementation of these best practices, which you should download from https://github.com/ioerror/duraconf/raw/master/configs/gnupg/gpg.conf and save as your ~/.gnupg/gpg.conf file. The configuration is well commented and you can refer to the best practices guide available at Riseup.net for more information. There are three entries, however, that you should modify. The first is default-key, which is the fingerprint of your primary GPG key. Later in this article, we'll show you how to retrieve that fingerprint. We can't perform this action now because we don't have a key yet. The second is keyserver-options ca-cert-file, which is the certificate authority for the keyserver pool. Keyservers host your public keys, and a keyserver pool is a redundant collection of keyservers. The instructions on Riseup.net give the details on how to download and install that certificate. Lastly, you can use Tor to fetch updates on your keys. The act of requesting a public key from a keyserver signals that you have a potential interest in communicating with the owner of that key. This metadata might be more interesting to a passive adversary than the contents of your message, since it reveals your social network. Tor is adept at protecting against traffic analysis. You probably don't want to store your GPG keys on the same BBB as your Tor bridge, so a second BBB would help here. On your GPG BBB, you need only run Tor as a client, which is its default configuration. Then you can update keyserver-options http-proxy to point to your Tor SOCKS proxy running on localhost.

The Electronic Frontier Foundation (EFF) provides some hypothetical examples on the telling nature of metadata, for example, They (the government) know you called the suicide prevention hotline from the Golden Gate Bridge. But the topic of the call remains a secret. Refer to the EFF blog post at https://www.eff.org/deeplinks/2013/06/why-metadata-matters for more details.
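For reference, those three entries in ~/.gnupg/gpg.conf end up looking something like the following sketch. The fingerprint, certificate path, and proxy address are placeholders you must replace with your own values; the option names themselves are the ones discussed above.

```
# Your primary key's fingerprint (you'll obtain this after key generation)
default-key 0xABD9088171345468

# CA certificate for the keyserver pool (example path; see the Riseup.net guide)
keyserver-options ca-cert-file=/path/to/sks-keyservers.netCA.pem

# Fetch keys through a local Tor SOCKS proxy (example address and port)
keyserver-options http-proxy=socks5-hostname://127.0.0.1:9050
```

The exact proxy URL scheme depends on your GPG version's keyserver helper, so consult your local documentation before relying on this fragment.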
Generating the key

Now you can generate your GPG key. Follow the on-screen instructions and don't include a comment. Depending on your entropy source, this could take a while. This example took 10 minutes using haveged as the entropy collector. There are various opinions on what to set as the expiration date. If this is your first GPG key, try one year at first. You can always make a new key or extend the same one. If you set the key to never expire and you lose the key, by forgetting the passphrase, people will still think it's valid unless you revoke it. Also, be sure to set the user ID to a name that matches some sort of identification, which will make it easier for people to verify that the holder of the private key is the same person as a certified piece of paper. The command to create a new key is gpg --gen-key:

Please select what kind of key you want:
   (1) RSA and RSA (default)
   (2) DSA and Elgamal
   (3) DSA (sign only)
   (4) RSA (sign only)
Your selection? 1
RSA keys may be between 1024 and 4096 bits long.
What keysize do you want? (2048) 4096
Requested keysize is 4096 bits
Please specify how long the key should be valid.
         0 = key does not expire
      <n>  = key expires in n days
      <n>w = key expires in n weeks
      <n>m = key expires in n months
      <n>y = key expires in n years
Key is valid for? (0) 1y
Key expires at Sat 06 Jun 2015 10:07:07 PM UTC
Is this correct? (y/N) y

You need a user ID to identify your key; the software constructs the user ID from the Real Name, Comment and Email Address in this form:
   "Heinrich Heine (Der Dichter) <heinrichh@duesseldorf.de>"

Real name: Tyrone Slothrop
Email address: tyrone.slothrop@yoyodyne.com
Comment:
You selected this USER-ID:
   "Tyrone Slothrop <tyrone.slothrop@yoyodyne.com>"

Change (N)ame, (C)omment, (E)mail or (O)kay/(Q)uit? O
You need a Passphrase to protect your secret key.

We need to generate a lot of random bytes.
It is a good idea to perform some other action (type on the keyboard, move the mouse, utilize the disks) during the prime generation; this gives the random number generator a better chance to gain enough entropy.

......+++++
..+++++

gpg: key 0xABD9088171345468 marked as ultimately trusted
public and secret key created and signed.

gpg: checking the trustdb
gpg: 3 marginal(s) needed, 1 complete(s) needed, PGP trust model
gpg: depth: 0  valid:   1  signed:   0  trust: 0-, 0q, 0n, 0m, 0f, 1u
gpg: next trustdb check due at 2015-06-06
pub   4096R/0xABD9088171345468 2014-06-06 [expires: 2015-06-06]
      Key fingerprint = CBF9 1404 7214 55C5 C477  B688 ABD9 0881 7134 5468
uid                 [ultimate] Tyrone Slothrop <tyrone.slothrop@yoyodyne.com>
sub   4096R/0x9DB8B6ACC7949DD1 2014-06-06 [expires: 2015-06-06]

gpg --gen-key  320.62s user 0.32s system 51% cpu 10:23.26 total

From this example, we know that our secret key is 0xABD9088171345468. If you end up creating multiple keys, but use just one of them more regularly, you can edit your gpg.conf file and add the following line:

default-key 0xABD9088171345468

Postgeneration maintenance

In order for people to send you encrypted messages, they need to know your public key. Posting your public key to a keyserver can help distribute it. You can post your key as follows, replacing the fingerprint with your primary key ID:

gpg --send-keys 0xABD9088171345468

GPG does not rely on third parties and expects you to perform key management. To ease this burden, the OpenPGP standards define the Web-of-Trust as a mechanism to verify other users' keys. Details on how to participate in the Web-of-Trust can be found in the GPG Privacy Handbook at https://www.gnupg.org/gph/en/manual/x334.html. You are also going to want to create a revocation certificate. A revocation certificate is needed when you want to revoke your key. You would do this when the key has been compromised, say if it was stolen.
Or more likely, if the BBB fails and you can no longer access your key. Generate the certificate and follow the ensuing prompts, replacing the ID with your key ID:

gpg --output revocation-certificate.asc --gen-revoke 0xABD9088171345468

sec  4096R/0xABD9088171345468 2014-06-06 Tyrone Slothrop <tyrone.slothrop@yoyodyne.com>

Create a revocation certificate for this key? (y/N) y
Please select the reason for the revocation:
  0 = No reason specified
  1 = Key has been compromised
  2 = Key is superseded
  3 = Key is no longer used
  Q = Cancel
(Probably you want to select 1 here)
Your decision? 0
Enter an optional description; end it with an empty line:
>
Reason for revocation: No reason specified
(No description given)
Is this okay? (y/N) y

You need a passphrase to unlock the secret key for user:
"Tyrone Slothrop <tyrone.slothrop@yoyodyne.com>"
4096-bit RSA key, ID 0xABD9088171345468, created 2014-06-06

ASCII armored output forced.
Revocation certificate created.

Please move it to a medium which you can hide away; if Mallory gets access to this certificate he can use it to make your key unusable. It is smart to print this certificate and store it away, just in case your media become unreadable. But have some caution: The print system of your machine might store the data and make it available to others!

Do take the advice and move this file off the BeagleBone. Printing it out and storing it somewhere safe is a good option, or burn it to a CD. The lifespan of a CD or DVD may not be as long as you think. The United States National Archives Frequently Asked Questions (FAQ) page on optical storage media states that:

"CD/DVD experiential life expectancy is 2 to 5 years even though published life expectancies are often cited as 10 years, 25 years, or longer."

Refer to their website http://www.archives.gov/records-mgmt/initiatives/temp-opmedia-faq.html for more details.
Lastly, create an encrypted backup of your encryption key and consider storing that in a safe location on durable media.

Using GPG

With your GPG private key created or imported, you can now use GPG on the BBB as you would on any other computer. You may have already installed Emacs on your host computer. If you follow the GNU/Linux instructions, you can also install Emacs on the BBB. If you do, you'll enjoy automatic GPG encryption and decryption for files that end in the .gpg extension. For example, suppose you want to send a message to your good friend, Pirate Prentice, whose GPG key you already have. Compose your message in Emacs, and then save it with a .gpg extension. Emacs will prompt you to select the public keys for encryption and will automatically encrypt the buffer. If a GPG-encrypted message is encrypted to a public key for which you have the corresponding private key, Emacs will automatically decrypt the message if it ends with .gpg. When using Emacs from the terminal, the prompt for encryption should look like the following screenshot:

Summary

This article covered how GPG can protect e-mail confidentiality.

Resources for Article:

Further resources on this subject: Making the Unit Very Mobile - Controlling Legged Movement [Article] Pulse width modulator [Article] Home Security by BeagleBone [Article]

How to Build a Recommender Server with Mahout and Solr
Pat Ferrel
24 Sep 2014
8 min read
In the last post, Mahout on Spark: Recommenders, we talked about creating a co-occurrence indicator matrix for a recommender using Mahout. The goals for a recommender are many, but first they must be fast and must make "good" personalized recommendations. We'll start with the basics and improve on the "good" part as we go. As we saw last time, co-occurrence or item-based recommenders are described by:

rp = hp[AtA]

Calculating [AtA]

We needed some more interesting data first, so I captured video preferences by mining the web. The target demo would be a Guide to Online Video. Input was collected for many users by simply logging their preferences to CSV text files:

906507914,dislike,mars_needs_moms
906507914,dislike,mars_needs_moms
906507914,like,moneyball
906507914,like,the_hunger_games
906535685,like,wolf_creek
906535685,like,inside
906535685,like,le_passe
576805712,like,persepolis
576805712,dislike,barbarella
576805712,like,gentlemans_agreement
576805712,like,europa_report
576805712,like,samsara
596511523,dislike,a_hijacking
596511523,like,the_kings_speech
…

The technique for actually using multiple actions hasn't been described yet, so for now we'll use the dislikes in the application to filter out recommendations and use only the likes to calculate recommendations. That means we need to use only the "like" preferences. The Mahout 1.0 version of spark-itemsimilarity can read these files directly and filter out all but the lines with "like" in them:

# Filter out all but likes; -fc and -ic indicate the columns for the filter and the items
mahout spark-itemsimilarity -i root-input-dir -o output-dir \
    --filter1 like -fc 1 -ic 2 --omitStrength

This will give us output like this:

the_hunger_games_catching_fire<tab>holiday_inn 2_guns superman_man_of_steel five_card_stud district_13 blue_jasmine this_is_the_end riddick ...
law_abiding_citizen<tab>stone ong_bak_2 the_grey american centurion edge_of_darkness orphan hausu buried ...
munich<tab>the_devil_wears_prada brothers marie_antoinette singer brothers_grimm apocalypto ghost_dog_the_way_of_the_samurai ...
private_parts<tab>puccini_for_beginners finding_forrester anger_management small_soldiers ice_age_2 karate_kid magicians ...
world-war-z<tab>the_wolverine the_hunger_games_catching_fire ghost_rider_spirit_of_vengeance holiday_inn the_hangover_part_iii ...

This is a tab-delimited file with a video itemID followed by a space-delimited list of similar videos. The similar-video list may contain some surprises, because here "similarity" means "liked by similar people". It doesn't mean the videos were similar in content or genre, so don't worry if they look odd. We'll use another technique to make "on subject" recommendations later. Anyone familiar with the Mahout first-generation recommender will notice right away that we are using IDs that have meaning to the application, whereas before Mahout required its own integer IDs.

A fast, scalable similarity engine

In Mahout's first-generation recommenders, all recs were calculated for all users. This meant that new users would have to wait for a long-running batch job before they saw recommendations. In the Guide demo app, we want to make good recommendations to new users and use new preferences in real time. We have already calculated [AtA], indicating item similarities, so we need a real-time method for the final part of the equation rp = hp[AtA]. Capturing hp is the first task, and in the demo we log all actions to a database in real time. This may have scaling issues but is fine for a demo. Now we will make use of the "multiply as similarity" idea we introduced in the first post. Multiplying hp[AtA] can be done with a fast similarity engine—otherwise known as a search engine. At their core, search engines are primarily similarity engines that index textual data (AKA a matrix of token vectors) and take text as the query (a token vector).
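To make the "multiply as similarity" idea concrete, here is a toy Python sketch, illustrative only and not the demo's actual code: the indicator rows of [AtA] are the indexed "documents", the user history hp is the query, and raw token overlap stands in for the engine's real TF-IDF/BM25 relevance scoring. The item IDs are borrowed from the sample output above; the scoring is deliberately simplified.

```python
# Indicator rows of [AtA] as a search engine would index them: item -> similar items
index = {
    "world-war-z": {"the_wolverine", "the_hunger_games_catching_fire", "riddick"},
    "munich": {"the_devil_wears_prada", "brothers", "apocalypto"},
    "law_abiding_citizen": {"stone", "the_grey", "riddick"},
}

# The query h_p: the user's recent "like" history
history = {"riddick", "the_grey"}

# A search engine ranks indexed documents by similarity to the query; raw
# overlap here is a crude stand-in for Lucene-style TF-IDF/BM25 scoring.
scores = {item: len(ind & history) for item, ind in index.items()}
r_p = sorted((i for i, s in scores.items() if s > 0),
             key=lambda i: scores[i], reverse=True)
print(r_p)  # ['law_abiding_citizen', 'world-war-z']
```

The returned item IDs, ordered by match strength, are exactly the rp of the equation: rows of [AtA] most similar to the user's history.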
Another way to look at this is that search engines find by example—they are optimized to find a collection of items by similarity to the query. We will use the search engine to find the indicator vectors in [AtA] that are most similar to our query hp, thereby producing rp. Using this method, rp will be the list of items returned from the search—row IDs in [AtA]. spark-itemsimilarity is designed to create output that can be directly indexed by search engines. In the Guide demo we chose to create a catalog of items in a database and to use Solr to index columns in the database. Both Solr and Elasticsearch have highly scalable, fast engines that can perform searches on database columns or a variety of text formats, so you don't have to use a database to store the indicators. We loaded the indicators, along with some metadata about the items, into the database like this:

itemID   foreign-key        genres          indicators
123      world-war-z        sci-fi action   the_wolverine …
456      captain_phillips   action drama    pi when_com… …

So the foreign-key is our video item ID from the indicator output, and the indicator is the space-delimited list of similar video item IDs. We must now set the search engine to index the indicators. This integration is usually pretty simple and depends on what database you use, or whether you are storing the entire catalog in files (leaving the database out of the loop). Once you've triggered the indexing of your indicators, we are ready for the query. The query will be a preference-history vector consisting of the same tokens/video item IDs you see in the indicators. For a known user these should be logged and available, perhaps in the database, but for a new user we'll have to find a way to encourage preference feedback.

New users

The demo site asks a new user to create an account and run through a trainer that collects important preferences from the user. We can probably leave the details of how to ask for "important" preferences for later.
Suffice to say, we clustered items and took popular ones from each cluster so that users were more likely to have seen them. From this we see that the user liked:

argo django iron_man_3 pi looper …

Whether you are responding to a new user or just accounting for the most recent preferences of returning users, recency is very important. Using the previous IDs as a query on the indicator field of the database returns recommendations, even though the new user's data was not used to train the recommender. Here's what we get: the first line shows the result of the search engine query for the new user. The trainer on the demo site has several pages of examples to rate, and the more you rate the better the recommendations become, as one would expect, but these look pretty good given only 9 ratings. I can make a value judgment because they were rated by me. In a small sampling of 20 people using the site, after having them complete the entire 20 pages of training examples, we asked them to tell us how many of the recommendations on the first line were things they liked or would like to see. We got 53-90% right. Only a few people participated and your data will vary greatly, but this was at least some validation. The second line of recommendations, and several more below it, are calculated using a genre, and this begins to show the power of the search engine method. In the trainer I picked movies where the number 1 genre was "drama". If you have the search engine index both indicators and genres, you can combine indicator and genre preferences in the query. To produce line 1 the query was:

Query: indicator field: "argo django iron_man_3 pi looper …"

To produce line 2 the query was:

Query: indicator field: "argo django iron_man_3 pi looper …"
       genre field: "drama"; boost: 5

The boost is used to skew results towards a field.
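As a sketch of how such a boosted query might be assembled: the ^boost notation below follows standard Lucene/Solr query syntax, but the field names ("indicator", "genre") are assumptions standing in for whatever your schema actually defines.

```python
def build_query(history, genre=None, boost=5):
    # Compose a Lucene/Solr-style query string: match the indicator field
    # against the user's history tokens, optionally boosting a genre field.
    query = "indicator:({})".format(" ".join(history))
    if genre:
        query += " genre:({})^{}".format(genre, boost)
    return query

print(build_query(["argo", "django", "iron_man_3", "pi", "looper"]))
# indicator:(argo django iron_man_3 pi looper)
print(build_query(["argo", "django", "iron_man_3", "pi", "looper"], genre="drama"))
# indicator:(argo django iron_man_3 pi looper) genre:(drama)^5
```

The same string could be sent to Solr's q parameter or embedded in an Elasticsearch query_string query; either engine interprets the ^5 as a per-field boost.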
In practice this will give you mostly matching genres, but it is not the same as a filter, which can also be used if you want a guarantee that the results will be from "drama".

Conclusion

Combining a search engine with Mahout created a recommender that is extremely fast and scalable, and that seamlessly blends results using collaborative-filtering data and metadata. Using metadata boosting in the query allows us to skew results in a direction that makes sense. Using multiple fields in a single query gives us even more power than this example shows: it allows us to mix in different actions. Remember the "dislike" action that we discarded? One simple and reasonable way to use it is to filter results by things the user disliked, and the demo site does just that. But we can go even further; we can use dislikes in a cross-action recommender. Some of the user's dislikes might even predict what they will like, but that requires us to go back to the original equation, so we'll leave it for another post.

About the author

Pat is a serial entrepreneur, consultant, and Apache Mahout committer working on the next generation of Spark-based recommenders. He lives in Seattle and can be contacted through his site at https://finderbots.com or @occam on Twitter.

Exploring the Usages of Delphi

Packt
24 Sep 2014
12 min read
This article, written by Daniele Teti, the author of Delphi Cookbook, explains the process of writing enumerable types. It also discusses the steps to customize FireMonkey controls. (For more resources related to this topic, see here.)

Writing enumerable types

When the for...in loop was introduced in Delphi 2005, the concept of enumerable types was also introduced into the Delphi language. As you know, there are some built-in enumerable types. However, you can create your own enumerable types using a very simple pattern. To make your container enumerable, implement a single method called GetEnumerator, which must return a reference to an object, interface, or record that implements the following two methods and one property (in the sample, the element to enumerate is TFoo):

   function GetCurrent: TFoo;
   function MoveNext: Boolean;
   property Current: TFoo read GetCurrent;

There are a lot of samples related to standard enumerable types, so in this recipe you'll look at some not-so-common utilizations.

Getting ready

In this recipe, you'll see a file-enumerable function as it exists in other, mostly dynamic, languages. The goal is to enumerate all the rows in a text file without actually opening, reading, and closing the file yourself, as shown in the following code:

var
  row: String;
begin
  for row in EachRows('....myfile.txt') do
    WriteLn(row);
end;

Nice, isn't it? Let's start…

How to do it...

We have to create an enumerable function result. The function simply returns the actual enumerable type. This type is not freed automatically by the compiler, so you have to use a value type or an interfaced type.
For the sake of simplicity, let's code it to return a record type:

function EachRows(const AFileName: String): TFileEnumerable;
begin
  Result := TFileEnumerable.Create(AFileName);
end;

The TFileEnumerable type is defined as follows:

type
  TFileEnumerable = record
  private
    FFileName: string;
  public
    constructor Create(AFileName: String);
    function GetEnumerator: TEnumerator<String>;
  end;

. . .

constructor TFileEnumerable.Create(AFileName: String);
begin
  FFileName := AFileName;
end;

function TFileEnumerable.GetEnumerator: TEnumerator<String>;
begin
  Result := TFileEnumerator.Create(FFileName);
end;

No logic here; this record is required only because you need a type that has a GetEnumerator method defined. This method is called automatically by the compiler when the type is used on the right side of a for..in loop. An interesting thing happens in the TFileEnumerator type, the actual enumerator, declared in the implementation section of the unit. Remember, this object is automatically freed by the compiler because it is the return value of the GetEnumerator call:

type
  TFileEnumerator = class(TEnumerator<String>)
  private
    FCurrent: String;
    FFile: TStreamReader;
  protected
    constructor Create(AFileName: String);
    destructor Destroy; override;
    function DoGetCurrent: String; override;
    function DoMoveNext: Boolean; override;
  end;

{ TFileEnumerator }

constructor TFileEnumerator.Create(AFileName: String);
begin
  inherited Create;
  FFile := TFile.OpenText(AFileName);
end;

destructor TFileEnumerator.Destroy;
begin
  FFile.Free;
  inherited;
end;

function TFileEnumerator.DoGetCurrent: String;
begin
  Result := FCurrent;
end;

function TFileEnumerator.DoMoveNext: Boolean;
begin
  Result := not FFile.EndOfStream;
  if Result then
    FCurrent := FFile.ReadLine;
end;

The enumerator inherits from TEnumerator<String> because each row of the file is represented as a string. This class also gives a mechanism to implement the required methods.
The DoGetCurrent method (called internally by TEnumerator<T>.GetCurrent) returns the current line. The DoMoveNext method (called internally by TEnumerator<T>.MoveNext) returns True or False depending on whether there are more lines to read in the file. Remember that this method is called before the first call to the GetCurrent method; after the first call to DoMoveNext, FCurrent is properly set to the first row of the file. The compiler generates a piece of code similar to the following pseudocode:

it := typetoenumerate.GetEnumerator;
while it.MoveNext do
begin
  S := it.Current;
  //do something useful with string S
end;
it.Free;

There's more…

Enumerable types are really powerful and help you to write less, and less error-prone, code. There are some shortcuts to iterate over in-place data without even creating an actual container. If you have a bunch of integers, or if you want to create a non-homogeneous for loop over some kind of data type, you can use the new TArray<T> type as shown here:

for i in TArray<Integer>.Create(2, 4, 8, 16) do
  WriteLn(i); //writes 2 4 8 16

TArray<T> is a generic type, so the same works also for strings:

for s in TArray<String>.Create('Hello', 'Delphi', 'World') do
  WriteLn(s);

It can also be used for Plain Old Delphi Objects (PODOs) or controls:

for btn in TArray<TButton>.Create(btn1, btn31, btn2) do
  btn.Enabled := false;

See also

http://docwiki.embarcadero.com/RADStudio/XE6/en/Declarations_and_Statements#Iteration_Over_Containers_Using_For_statements: This Embarcadero documentation provides a detailed introduction to enumerable types.

Giving a new appearance to the standard FireMonkey controls using styles

Since Version XE2, RAD Studio has included FireMonkey. FireMonkey is an amazing library. It is a really ambitious target for Embarcadero, but it's important for its long-term strategy. VCL is and will remain a Windows-only library, while FireMonkey has been designed to be completely OS- and device-independent.
You can develop one application and compile it anywhere (if "anywhere" is contained in Windows, OS X, Android, and iOS; let's say that is a good part of anywhere).

Getting ready

A styled component doesn't know how it will be rendered on the screen; its style does. By changing the style, you can change the appearance of the component without changing its code. The relation between the component code and the style is similar to the relation between HTML and CSS: one is the content and the other is the display. In terms of FireMonkey, the component code contains the actual functionality the component has, but the appearance is completely handled by the associated style. All the TStyledControl child classes support styles. Let's say you have to create an application to find a holiday house for a travel agency. Your customer wants a nice-looking application their customers can use to search for their dream house. Your graphic design department (if present) decided to create a semitransparent look and feel, as shown in the following screenshot, and you have to create such an interface. How do you do that?

This is the UI we want

How to do it…

In this case, you require some step-by-step instructions, so here they are:

Create a new FireMonkey desktop application (navigate to File | New | FireMonkey Desktop Application).

Drop a TImage component on the form. Set its Align property to alClient, and use the MultiResBitmap property and its property editor to load a nice-looking picture. Set the WrapMode property to iwFit and resize the form to let the image cover the entire form.

Now, drop a TEdit component and a TListBox component over the TImage component. Name the TEdit component EditSearch and the TListBox component ListBoxHouses. Set the Scale property of the TEdit and TListBox components to the following values:

Scale.X: 2
Scale.Y: 2

Your form should now look like this:

The form with the standard components

The actions to be performed by the users are very simple.
They should write some search criteria in the Edit field and press Return. Then, the listbox shows all the houses available for those criteria (with a "contains" search). In a real app, you would require a database or a web service to query, but this is a sample, so you'll use fake search criteria on fake data.

Add the RandomUtilsU.pas file from the Commons folder of the project and add it to the uses clause of the main form. Create an OnKeyUp event handler for the TEdit component and write the following code inside it:

procedure TForm1.EditSearchKeyUp(Sender: TObject;
  var Key: Word; var KeyChar: Char; Shift: TShiftState);
var
  I: Integer;
  House: string;
  SearchText: string;
begin
  if Key <> vkReturn then
    Exit;
  // this is a fake search...
  ListBoxHouses.Clear;
  SearchText := EditSearch.Text.ToUpper;
  // now, get 50 random houses and match the criteria
  for I := 1 to 50 do
  begin
    House := GetRndHouse;
    if House.ToUpper.Contains(SearchText) then
      ListBoxHouses.Items.Add(House);
  end;
  if ListBoxHouses.Count > 0 then
    ListBoxHouses.ItemIndex := 0
  else
    ListBoxHouses.Items.Add('<Sorry, no houses found>');
  ListBoxHouses.SetFocus;
end;

Run the application and try it to familiarize yourself with the behavior. Now you have a working application, but you still need to make it transparent. Let's start with the FireMonkey Style Designer (FSD). Just to be clear: at the time of writing, the FireMonkey Style Designer is far from perfect. It works, but it is not a pleasure to work with. However, it does its job. Right-click on the TEdit component. From the contextual menu, choose Edit Custom Style (general information about styles and the style editor can be found at http://docwiki.embarcadero.com/RADStudio/XE6/en/FireMonkey_Style_Designer and http://docwiki.embarcadero.com/RADStudio/XE6/en/Editing_a_FireMonkey_Style). Delphi opens a new tab that contains the FSD.
However, to work with it, you need the Structure pane to be visible as well (navigate to View | Structure or press Shift + Alt + F11). In the Structure pane, there are all the styles used by the TEdit control. You should see a Structure pane similar to the following screenshot:

The Structure pane showing the default style for the TEdit control

In the Structure pane, open the editsearchstyle1 node, select the background subnode, and go to the Object Inspector. In the Object Inspector window, remove the content of the SourceLookup property. The background part of the style is a TActiveStyleObject. A TActiveStyleObject is a style object that can show one part of an image by default and another part of the same image when the component that uses it is active, checked, focused, hovered over with the mouse, pressed, or selected. The image to use is in the SourceLookup property. Our TEdit component must be completely transparent in every state, so we removed the value of the SourceLookup property. Now the TEdit component is completely invisible. Click on Apply and Close and run the application. As you can confirm, the edit works but it is completely transparent. Close the application. When you opened the FSD for the first time, a TStyleBook component was automatically dropped on the form; it contains all your custom styles. Double-click on it and the style designer opens again. The edit, as you saw, is transparent, but it is not usable at all. You need to see at least where to click and write. Let's add a small bottom line to the edit style, just like a small underline. To perform the next step, you need the Tool Palette window and the Structure pane to be visible. Here is my preferred setup for this situation:

The Structure pane and the Tool Palette window are visible at the same time using the docking mechanism; you can also use the floating windows if you wish

Now, search for a TLine component in the Tool Palette window.
Drag and drop the TLine component onto the editsearchstyle1 node in the Structure pane. Yes, you have to drop a component from the Tool Palette window directly onto the Structure pane. Now, select the TLine component in the Structure pane (do not use the FSD to select components; you have to use the Structure pane nodes). In the Object Inspector, set the following properties:

Align: alContents
HitTest: False
LineType: ltTop
RotationAngle: 180
Opacity: 0.6

Click on Apply and Close. Run the application. Now the text is underlined by a small black line that makes the edit easy to locate, even though the application is transparent. Stop the application. Now you have to work on the listbox; it is still 100 percent opaque. Right-click on the ListBoxHouses component and click on Edit Custom Style. In the Structure pane, there are some new styles related to the TListBox class. Select the listboxhousesstyle1 node, open it, and select its child style, background. In the Object Inspector, change the Opacity property of the background style to 0.6. Click on Apply and Close. That's it! Run the application, write Calif in the Edit field, and press Return. You should see a nice-looking application with a semitransparent user interface showing your dream houses in California (just as shown in the screenshot in the Getting ready section of this recipe). Are you amazed by the power of FireMonkey styles?

How it works...

The trick used in this recipe is simple. If you require a transparent UI, just identify which part of each component's style is responsible for drawing the background of the component. Then, set its Opacity to a level less than 1 (0.6 or 0.7 should be enough for most cases). Why not simply change the Opacity property of the component? Because if you change the Opacity property of the component, the whole component will be drawn with that opacity. However, you need only the background to be transparent; the inner text must be completely opaque.
This is the reason why you changed the style and not the component property. In the case of the TEdit component, you completely removed the background painting when you removed the SourceLookup property from the TActiveStyleObject that draws the background. As a rule of thumb, if you have to change the appearance of a control, check its properties first. If the required customization is not possible using only the properties, then change the style.

There's more…

If you are new to FireMonkey styles, most of the concepts in this recipe have probably been difficult to grasp. If so, check the official documentation on the Embarcadero DocWiki at the following URL: http://docwiki.embarcadero.com/RADStudio/XE6/en/Customizing_FireMonkey_Applications_with_Styles

Summary

In this article, we discussed ways to write enumerable types in Delphi. We also discussed how we can use styles to make our FireMonkey controls look better.

Resources for Article:

Further resources on this subject:
Adding Graphics to the Map [Article]
Application Performance [Article]
Coding for the Real-time Web [Article]


Windows Phone 8 Applications

Packt
23 Sep 2014
17 min read
In this article by Abhishek Sur, author of Visual Studio 2013 and .NET 4.5 Expert Cookbook, we will build your first Windows Phone 8 application following the MVVM pattern. We will work with Launchers and Choosers in Windows Phone, relational databases and persistent storage, and notifications in Windows Phone. (For more resources related to this topic, see here.)

Introduction

Windows Phone is the newest smart-device platform on the market and hosts the Windows operating system from Microsoft. The new operating system significantly differs from the previous Windows Mobile operating system. Microsoft has shifted gears to produce a consumer-oriented phone rather than an enterprise mobile environment. The operating system is stylish and focused on the consumer. It was built keeping a few principles in mind:

Simple and light, with focus on completing primary tasks quickly
Distinct typography (Segoe WP) for all its UI
Smart and predefined animation for the UI
Focus on content, not chrome (the whole screen is available to the application for use)
Honesty in design

Unlike the previous Windows Phone operating system, Windows Phone 8 is built on the same core on which Windows PC is now running. The shared core indicates that the Windows core system includes the same Windows OS, including the NT kernel, NT filesystem, and networking stack. Above the core, there is a Mobile Core specific to mobile devices, which includes components such as Multimedia, Core CLR, and IE Trident, as shown in the following screenshot:

In the preceding screenshot, the Windows Phone architecture has been depicted. The Windows Core System is shared between the desktop and mobile devices. The Mobile Core is specific to mobile devices that run the Windows Phone Shell, all the apps, and platform services such as the background downloader/uploader and scheduler.
It is important to note that even though both Windows 8 and Windows Phone 8 share the same core and most of the APIs, the implementations of the APIs differ. The Windows 8 APIs are considered WinRT, while the Windows Phone 8 APIs are considered Windows Phone Runtime (WinPRT).

Building your first Windows Phone 8 application following the MVVM pattern

Windows Phone applications are generally created using either HTML5 or Silverlight. Most people still use the Silverlight approach, as it has the full flavor of backend languages such as C#, while the JavaScript library is still in its infancy. With Silverlight or XAML, the architecture that always comes to the developer's mind is MVVM. Like all XAML-based development, Windows Phone Silverlight apps also inherently support the MVVM model, and hence people tend to adopt it more often when developing Windows Phone apps. In this recipe, we are going to take a quick look at how you can use the MVVM pattern to implement an application.

Getting ready

Before starting to develop an application, you first need to set up your machine with the appropriate SDK, which lets you develop a Windows Phone application and also gives you an emulator to debug the application without a device. The SDK for Windows Phone 8 apps can be downloaded from the Windows Phone Dev Center at http://dev.windowsphone.com. The Windows Phone SDK includes the following:

Microsoft Visual Studio 2012 Express for Windows Phone
Microsoft Blend 2012 Express for Windows Phone
The Windows Phone Device Emulator
Project templates, reference assemblies, and headers/libraries
A Windows 8 PC to run Visual Studio 2012 for Windows Phone

After everything has been set up for application development, you can open Visual Studio and create a Windows Phone app. When you create the project, it will first ask for the target platform; choose Windows Phone 8 as the default and select OK. You then need to name and create the project.

How to do it...
Now that the template is created, let's follow these steps to demonstrate how we can start creating an application:

By default, the project template displays a split view, with the Visual Studio designer on the left-hand side and the XAML markup on the right-hand side. The MainPage.xaml file is already loaded, with a lot of initial adjustments to support Windows Phone form factors. Microsoft makes sure to give the developer the best layout to start with. So the important thing you need to look at is defining the content inside the ContentPanel element, which represents the workspace area of the page. The Visual Studio template for Windows Phone 8 already gives you a lot of hints on how to start writing your first app. The comments indicate where to start and how the project template behaves on code edits in XAML. Now let's define some XAML for the page. We will create a small page and use MVVM to connect it to the data. For simplicity, we use dummy data to show on screen. Let's create a login screen for the application to start with.
Add a new page, call it Login.xaml, and add the following code in the ContentPanel grid defined inside the page:

<Grid x:Name="ContentPanel" Grid.Row="1" Margin="12,0,12,0" VerticalAlignment="Center">
  <Grid.RowDefinitions>
    <RowDefinition Height="Auto" />
    <RowDefinition Height="Auto" />
    <RowDefinition Height="Auto" />
  </Grid.RowDefinitions>
  <Grid.ColumnDefinitions>
    <ColumnDefinition />
    <ColumnDefinition />
  </Grid.ColumnDefinitions>
  <TextBlock Text="UserId" Grid.Row="0" Grid.Column="0" HorizontalAlignment="Right" VerticalAlignment="Center"/>
  <TextBox Text="{Binding UserId, Mode=TwoWay}" Grid.Row="0" Grid.Column="1" InputScope="Text"/>
  <TextBlock Text="Password" Grid.Row="1" Grid.Column="0" HorizontalAlignment="Right" VerticalAlignment="Center"/>
  <PasswordBox x:Name="txtPassword" Grid.Row="1" Grid.Column="1" PasswordChanged="txtPassword_PasswordChanged"/>
  <Button Command="{Binding LoginCommand}" Content="Login" Grid.Row="2" Grid.Column="0" />
  <Button Command="{Binding ClearCommand}" Content="Clear" Grid.Row="2" Grid.Column="1" />
</Grid>

In the preceding UI design, we added a TextBox and a PasswordBox inside the ContentPanel. A TextBox has an InputScope property, which you can set to specify the behavior of the input. We define it as Text, which specifies that the TextBox can hold any textual data. The PasswordBox takes any input from the user but shows asterisks (*) instead of the actual data. The actual data is stored in an encrypted format inside the control and can only be recovered using its Password property. We are going to follow the MVVM pattern to design the application. We create a folder named Model in the solution and put a LoginDataContext class in it. This class is used to generate and validate the login for the UI. Implement INotifyPropertyChanged on the class, which allows its properties to participate in binding with the corresponding DependencyProperty that exists on a control, so that changes flow to and from the UI.
We create properties for UserId, Password, and Status, as shown in the following code:

private string userid;
public string UserId
{
  get { return userid; }
  set
  {
    userid = value;
    this.OnPropertyChanged("UserId");
  }
}

private string password;
public string Password
{
  get { return password; }
  set
  {
    password = value;
    this.OnPropertyChanged("Password");
  }
}

public bool Status { get; set; }

You can see in the preceding code that each property setter invokes the OnPropertyChanged event. This ensures that updates to the properties are reflected in the bound UI controls:

public ICommand LoginCommand
{
  get
  {
    return new RelayCommand((e) =>
    {
      this.Status = this.UserId == "Abhishek" && this.Password == "winphone";
      if (this.Status)
      {
        var rootframe = App.Current.RootVisual as PhoneApplicationFrame;
        rootframe.Navigate(new Uri(string.Format(
          "/FirstPhoneApp;component/MainPage.xaml?name={0}", this.UserId),
          UriKind.Relative));
      }
    });
  }
}

public ICommand ClearCommand
{
  get
  {
    return new RelayCommand((e) =>
    {
      this.UserId = this.Password = string.Empty;
    });
  }
}

We also define two more properties of type ICommand. The UI Button control implements the command pattern and uses the ICommand interface to invoke a command. The RelayCommand used in the code is an implementation of the ICommand interface, which can be used to invoke an action. Now let's bind the Text property of the TextBox in XAML to the UserId property, and make it a TwoWay binding. The binder automatically subscribes to the PropertyChanged event. When the UserId property is set and the PropertyChanged event is raised, the UI automatically receives the update, which refreshes the text on screen.
Similarly, we add two buttons, name them Login and Clear, and bind them to the LoginCommand and ClearCommand properties, as shown in the following code:

<Button Command="{Binding LoginCommand}" Content="Login" Grid.Row="2" Grid.Column="0" />
<Button Command="{Binding ClearCommand}" Content="Clear" Grid.Row="2" Grid.Column="1" />

In the preceding XAML, we defined the two buttons and specified a command for each of them. We create another page so that when the login is successful, we can navigate from the Login page to somewhere else. Let's make use of the existing MainPage.xaml file as follows:

<StackPanel x:Name="TitlePanel" Grid.Row="0" Margin="12,17,0,28">
  <TextBlock Text="MY APPLICATION" x:Name="txtApplicationDescription" Style="{StaticResource PhoneTextNormalStyle}" Margin="12,0"/>
  <TextBlock Text="Enter Details" Margin="9,-7,0,0" Style="{StaticResource PhoneTextTitle1Style}"/>
</StackPanel>

We add the preceding XAML to show the message that is passed from the Login screen. We create another class and name it MainDataContext, with a property that will hold the data to be displayed on the screen. We go to Login.xaml.cs, created as the code-behind of Login.xaml, create an instance of LoginDataContext, and assign it to the DataContext of the page. We assign this inside the InitializeComponent method of the class, as shown in the following code:

this.DataContext = new LoginDataContext();

Now, go to Properties in the Solution Explorer pane, open the WMAppManifest file, and specify Login.xaml as the navigation page. Once this is done, if you run the application now in any of the emulators available with Visual Studio, you will see what is shown in the following screenshot:

You can enter data in the UserId and Password fields and click on Login, but nothing happens. Put a breakpoint on LoginCommand, press Login again with the credentials, and you will see that the Password property is never set and evaluates to null.
Note that PasswordBox in XAML does not support binding to its Password property. To deal with this, we handle the PasswordChanged event on PasswordBox and specify the following code:

    private void txtPassword_PasswordChanged(object sender, RoutedEventArgs e)
    {
        this.logindataContext.Password = txtPassword.Password;
    }

The preceding code ensures that the password is passed properly to the ViewModel. Finally, on clicking Login, you will see Status is set to true. However, our idea is to move from the Login screen to MainPage.xaml. To do this, we change the LoginCommand property to navigate the page, as shown in the following code:

    if (this.Status)
    {
        var rootframe = App.Current.RootVisual as PhoneApplicationFrame;
        rootframe.Navigate(new Uri(string.Format(
            "/FirstPhoneApp;component/MainPage.xaml?name={0}", this.UserId),
            UriKind.Relative));
    }

Each Windows Phone app hosts its pages in a PhoneApplicationFrame, which is used to show the UI. The frame's Navigate method uses NavigationService to redirect to the URI provided. Here in the code, after authentication, we pass UserId as a query string to MainPage. We design the MainPage.xaml file to include a pivot control. A pivot control is just like a traditional tab control, but looks natural in a phone environment. Let's add the following code:

    <phone:Pivot>
        <phone:PivotItem Header="Main">
            <StackPanel Orientation="Vertical">
                <TextBlock Text="Choose your avatar" />
                <Image x:Name="imgSelection" Source="{Binding AvatarImage}"/>
                <Button x:Name="btnChoosePhoto" ClickMode="Release"
                        Content="Choose Photo" Command="{Binding ChoosePhoto}" />
            </StackPanel>
        </phone:PivotItem>
        <phone:PivotItem Header="Task">
            <StackPanel>
                <phone:LongListSelector ItemsSource="{Binding LongList}" />
            </StackPanel>
        </phone:PivotItem>
    </phone:Pivot>

The phone prefix refers to a namespace that has been added automatically in the page header, where the Pivot class exists.
In the previously defined Pivot class, there are two PivotItem elements with the headers Main and Task. When Main is selected, it allows you to choose a photo from MediaLibrary, and the image is displayed in the Image control. The ChoosePhoto command defined inside MainDataContext sets the image as its source, as shown in the following code:

    public ICommand ChoosePhoto
    {
        get
        {
            return new RelayCommand((e) =>
            {
                PhotoChooserTask pTask = new PhotoChooserTask();
                pTask.Completed += pTask_Completed;
                pTask.Show();
            });
        }
    }

    void pTask_Completed(object sender, PhotoResult e)
    {
        if (e.TaskResult == TaskResult.OK)
        {
            var bitmap = new BitmapImage();
            bitmap.SetSource(e.ChosenPhoto);
            this.AvatarImage = bitmap;
        }
    }

In the preceding code, the RelayCommand that is invoked when the button is clicked uses PhotoChooserTask to select an image from MediaLibrary, and that image is shown through the AvatarImage property bound to the image source. The other PivotItem shows a LongListSelector whose ItemsSource is bound to a long list of strings, as shown in the following code:

    public List<string> LongList
    {
        get
        {
            this.longList = this.longList ?? this.LoadList();
            return this.longList;
        }
    }

The long list can be any list of items that needs to be shown in the list control.

How it works...

Windows Phone, being a XAML-based technology, uses Silverlight to generate the UI and controls, supporting the Model-View-ViewModel (MVVM) pattern. Each of the controls present in the Windows Phone environment implements a number of DependencyProperties. A DependencyProperty is a special type of property that supports DataBinding. When bound to another CLR object, these properties look for the INotifyPropertyChanged interface and subscribe to its PropertyChanged event. When the data is modified in the controls, the bound object gets modified automatically by the dependency property system, and vice versa. Similar to normal DependencyProperties, there is a Command property that allows you to call a method.
The Command property is of the ICommand interface type and wraps an Action that is invoked when the command fires. The RelayCommand here is an implementation of the ICommand interface, which can be bound to the Command property of a Button.

There's more...

Now let's talk about some other options, or possibly some pieces of general information that are relevant to this task.

Using ApplicationBar on the app

Just like any modern smartphone OS, Windows Phone provides a standard way of communicating with any application. Each application can have a standard set of icons at the bottom, which enable the user to perform actions on the application. The ApplicationBar class is shown at the bottom of applications across the operating system, and hence people tend to expect commands to be placed on the ApplicationBar rather than on the application itself, as shown in the following screenshot. The ApplicationBar has a fixed height of 72 pixels, which cannot be modified from code. When an application is open, the application bar is shown at the bottom of the screen. The preceding screenshot shows how the ApplicationBar class is laid out with two buttons, login and clear. Each ApplicationBar can also hold a number of menu items for additional commands. The menu can be opened by tapping the ... button at the side of the ApplicationBar. Each Windows Phone page allows you to define one application bar.
There is a property called ApplicationBar on PhoneApplicationPage that lets you define the ApplicationBar class of that particular page, as shown in the following code:

    <phone:PhoneApplicationPage.ApplicationBar>
        <shell:ApplicationBar>
            <shell:ApplicationBarIconButton Click="ApplicationBarIconButton_Click"
                                            Text="Login" IconUri="/Assets/next.png"/>
            <shell:ApplicationBarIconButton Click="ApplicationBarIconButtonSave_Click"
                                            Text="clear" IconUri="/Assets/delete.png"/>
            <shell:ApplicationBar.MenuItems>
                <shell:ApplicationBarMenuItem Click="about_Click" Text="about" />
            </shell:ApplicationBar.MenuItems>
        </shell:ApplicationBar>
    </phone:PhoneApplicationPage.ApplicationBar>

In the preceding code, we defined two ApplicationBarIconButton instances. Each of them defines a command item placed on the ApplicationBar. The ApplicationBar.MenuItems collection allows us to add menu items to the application bar. There can be a maximum of four application bar buttons and four menus per page. ApplicationBar buttons also use a special type of icon. A number of these icons ship with the SDK and can be used in the application; they can be found at DriveName\Program Files\Microsoft SDKs\Windows Phone\v8.0\Icons. There are separate folders for both dark and light themes. It should be noted that ApplicationBar buttons do not allow command bindings.

Tombstoning

When dealing with Windows Phone applications, there are some special things to consider. When a user navigates out of the application, the application is transferred to a dormant state, where all the pages and the state of the pages are still in memory but their execution is totally stopped. When the user navigates back to the application, the state of the application is resumed and the application is activated again. Sometimes, it might also be possible that the app gets tombstoned after the user navigates away from it.
In this case, the app is not preserved in memory, but some information about the app is stored. Once the user comes back to the app, the application needs to be restored, and it needs to resume in such a way that the user gets the same state as he or she left it in. In the following figure, you can see the entire process: there are four states defined. The first one is the Not Running state, where the process does not exist in memory. The Activated state is entered when the app is tapped by the user. When the user moves out of the app, it goes from Suspending to Suspended; it can then be reactivated, or it will be terminated automatically after a certain time. Let's look at the Login screen, where the login page might get tombstoned while the user is entering the user ID and password. To store the user state data before tombstoning, we use PhoneApplicationPage. The idea is to serialize the whole DataModel once the user navigates away from the page and retrieve the page state again when they navigate back. Let's annotate the UserId and Password properties of LoginDataContext with DataMember, and LoginDataContext itself with DataContract, as shown in the following code:

    [DataContract]
    public class LoginDataContext : PropertyBase
    {
        private string userid;

        [DataMember]
        public string UserId
        {
            get { return userid; }
            set
            {
                userid = value;
                this.OnPropertyChanged("UserId");
            }
        }

        private string password;

        [DataMember]
        public string Password
        {
            get { return password; }
            set
            {
                password = value;
                this.OnPropertyChanged("Password");
            }
        }
    }

The DataMember attribute indicates that the properties are capable of being serialized. As the user types into these fields, the properties get filled with data, so when the user navigates away, the model will always have the latest data. In LoginPage, we define a field called _isNewPageInstance, set it to false, and set it to true in the constructor.
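The serialize-on-leave, restore-on-return round trip described above is easy to see in miniature. The following is a framework-agnostic sketch in Python; LoginModel, SERIALIZED_FIELDS, and the page_state dictionary are invented stand-ins for the [DataContract] ViewModel and the page's State dictionary, used purely for illustration:

```python
import json


class LoginModel:
    """Stand-in for the [DataContract] ViewModel: only marked fields persist."""
    SERIALIZED_FIELDS = ("user_id", "password")   # the [DataMember] equivalents

    def __init__(self, user_id="", password=""):
        self.user_id = user_id
        self.password = password

    def to_state(self):
        # What OnNavigatedFrom would store in the page State dictionary.
        return json.dumps({f: getattr(self, f) for f in self.SERIALIZED_FIELDS})

    @classmethod
    def from_state(cls, blob):
        # What OnNavigatedTo would restore after tombstoning.
        return cls(**json.loads(blob))


page_state = {}  # stand-in for the page's State dictionary

# Navigating away: persist the model.
model = LoginModel("Abhishek", "winphone")
page_state["ViewModel"] = model.to_state()

# The process may be tombstoned here; later, navigating back: restore.
restored = LoginModel.from_state(page_state["ViewModel"])
print(restored.user_id)  # Abhishek
```

The point of the sketch is that only the fields explicitly marked for serialization survive tombstoning; anything else on the ViewModel is rebuilt from scratch when the page is re-created.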
This indicates that _isNewPageInstance is true only when the page has just been instantiated. Now, when the user navigates away from the page, OnNavigatedFrom gets called, and we save the ViewModel into State, as shown in the following code:

    protected override void OnNavigatedFrom(NavigationEventArgs e)
    {
        base.OnNavigatedFrom(e);

        if (e.NavigationMode != System.Windows.Navigation.NavigationMode.Back)
        {
            // Save the ViewModel variable in the page's State dictionary.
            State["ViewModel"] = logindataContext;
        }
    }

Once the DataModel is saved in the State object, it is persistent and can be retrieved later when the application is resumed, as follows:

    protected override void OnNavigatedTo(NavigationEventArgs e)
    {
        base.OnNavigatedTo(e);

        if (_isNewPageInstance)
        {
            if (this.logindataContext == null)
            {
                if (State.Count > 0)
                {
                    this.logindataContext = (LoginDataContext)State["ViewModel"];
                }
                else
                {
                    this.logindataContext = new LoginDataContext();
                }
            }
            DataContext = this.logindataContext;
        }
        _isNewPageInstance = false;
    }

When the application is resumed from tombstoning, it calls OnNavigatedTo and retrieves the DataModel back from the state.

Summary

In this article, we learned device application development in the Windows Phone environment. It provided us with simple solutions to some of the common problems faced when developing a Windows Phone application.

Resources for Article:

Further resources on this subject:
- Layout with Ext.NET [article]
- ASP.NET: Creating Rich Content [article]
- ASP.NET: Using jQuery UI Widgets [article]
Packt
23 Sep 2014
24 min read

Animations in Cocos2d-x

In this article, created by Siddharth Shekhar, the author of Learning Cocos2d-x Game Development, we will learn about different tools that can be used to animate the character. Then, using these animations, we will create a simple state machine that will automatically check whether the hero is falling or being boosted up into the air, and depending on the state, the character will be animated accordingly. We will cover the following topics in this article:

- Animation basics
- TexturePacker
- Creating a spritesheet for the player
- Creating and coding the enemy animation
- Creating the skeletal animation
- Coding the player walk cycle

(For more resources related to this topic, see here.)

Animation basics

First of all, let's understand what animation is. An animation is made up of different images that are played in a certain order and at a certain speed, for example, movies that run images at 30 fps or 24 fps, depending on whether the format is NTSC or PAL. When you pause a movie, you are actually seeing an individual image of that movie, and if you play the movie in slow motion, you will see the individual frames or images that combine to create the full movie. In games, while making animations, we do the same thing: adding frames and running them at a certain speed. We control which images play, in what sequence, and at what interval, through code. For an animation to be smooth, you should have at least 24 images or frames played per second, which is known as frames per second (FPS). Each of the images in the animation is called a frame. Let's take the example of a simple walk cycle. Each walk cycle should be 24 frames long. You might say that is a lot of work, and for sure it is, but the good news is that these 24 frames can be broken down into keyframes, the important images that give the illusion of the character walking. The more frames you add between these keyframes, the smoother the animation will be.
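The fps arithmetic above is worth making concrete: at 24 fps each frame is on screen for 1/24 of a second, so a 24-frame walk cycle lasts exactly one second, and a 4-keyframe version of the same one-second cycle must show each frame for 0.25 seconds. A tiny sketch (Python, purely illustrative; the helper names are made up):

```python
def frame_delay(fps):
    """Seconds each frame stays on screen at a given frame rate."""
    return 1.0 / fps


def cycle_duration(frame_count, fps):
    """Total length of one animation loop, in seconds."""
    return frame_count / fps


# Full 24-frame walk cycle at 24 fps: one second per loop.
print(cycle_duration(24, 24))   # 1.0

# The 4-keyframe mobile shortcut: to keep the same one-second cycle,
# each keyframe must be shown for 0.25 s (i.e. played at 4 fps).
print(frame_delay(4))           # 0.25
```

This is why, later in the article, the 4-frame animations are created with a 0.25 second per-frame delay: four frames at 0.25 s each reproduces the one-second cycle of a full 24-frame animation.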
The keyframes for a walk cycle are the Contact, Down, Pass, and Up positions. For mobile games, as we would like to get away with as little work as possible, instead of having all 24 frames, some games use just the 4 keyframes to create a walk animation and then speed up the animation so that the player cannot notice the missing frames. So overall, if you are making a walk cycle for your character, you will create eight images: four frames for each side. For a stylized walk cycle, you can get away with even fewer frames. For the animation in the game, we will create images that we will cycle through to create two sets of animation: an idle animation, which will be played when the player is moving down, and a boost animation, which will be played when the player is boosted up into the air. Creating animation in games is done using two methods. The most popular form of animation is called spritesheet animation, and the other is called skeletal animation.

Spritesheet animation

Spritesheet animation is when you keep all the frames of the animation in a single file, accompanied by a data file that holds the name and location of each of the frames. This is very similar to the BitmapFont. The following is the spritesheet we will be using in the game. For the boost and idle animations, each of the frames for the corresponding animation will be stored in an array and made to loop at a particular predefined speed. The top four images are the frames for the boost animation. Whenever the player taps on the screen, the animation will cycle through these four images, appearing as if the player is boosted up by the jetpack. The bottom four images are for the idle animation, played when the player is dropping down due to gravity. In this animation, the character will look as if she is blinking, and the flames from the jetpack are reduced and made to look as if they are swaying in the wind.
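The "store the frames in an array and loop them at a predefined speed" idea boils down to picking a frame index from elapsed time. A minimal framework-independent sketch (Python; the frame names and the current_frame helper are invented for illustration, not part of any engine API):

```python
def current_frame(frames, elapsed, frame_delay):
    """Return the frame to display after `elapsed` seconds, looping forever."""
    index = int(elapsed / frame_delay) % len(frames)
    return frames[index]


boost_frames = ["boost_1.png", "boost_2.png", "boost_3.png", "boost_4.png"]

# At 0.25 s per frame, the 4-frame cycle repeats once per second:
print(current_frame(boost_frames, 0.0, 0.25))   # boost_1.png
print(current_frame(boost_frames, 0.3, 0.25))   # boost_2.png
print(current_frame(boost_frames, 1.1, 0.25))   # boost_1.png  (wrapped around)
```

An animation framework does exactly this on your behalf: the per-frame delay you pass in (0.25f in the Cocos2d-x code later in this article) is the `frame_delay` here, and the modulo is what makes the loop repeat forever.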
Skeletal animation

Skeletal animation is relatively new and is used in games such as Rayman Origins that have loads and loads of animations. It is a more powerful way of making animations for 2D games, as it gives the developer a lot of flexibility to create animations that are fast to produce and test. In the case of spritesheet animations, if you had to change a single frame of the animation, the whole spritesheet would have to be recreated, causing delay; imagine having to rework 3,000 frames of animation in your game. If each frame was hand painted, producing the individual images would take a lot of time, delaying production, not to mention the effort and time spent redrawing images. The other problem is device memory. If you are making a game for the PC, it would be fine, but on mobiles, where memory is limited, spritesheet animation is not a viable option unless cuts are made to the design of the game. So, how does skeletal animation work? In the case of skeletal animation, each item to be animated is stored in a separate spritesheet, along with a data file for the locations of the individual images for each body part and object to be animated, and another data file is generated that positions and rotates the individual items for each frame of the animation. To make this clearer, look at the spritesheet for the same character created with skeletal animation: here, each part of the body and each object to be animated is a separate image, unlike the method used in spritesheet animation, where the whole character is redrawn for each frame.

TexturePacker

To create a spritesheet animation, you will have to initially create the individual frames in Photoshop, Illustrator, GIMP, or any other image editing software. I have already done this and have the images for the individual frames ready. Next, you will have to use a software tool to create spritesheets from the images.
TexturePacker is a very popular tool used by industry professionals to create spritesheets. You can download it from https://www.codeandweb.com/. These are the same people who made PhysicsEditor, which we used to make shapes for Box2D. You can use the trial version of this software. While downloading, choose the version that is compatible with your operating system; fortunately, TexturePacker is available for all the major operating systems, including Linux. Refer to the following screenshot to check out the steps to use TexturePacker.

Once you have downloaded TexturePacker, you have three options: you can click to try the full version for a week, you can purchase the license, or you can click on the essential version to use the trial. In the trial version, some of the professional features are disabled, so I recommend trying the professional features for a week. Once you click an option, you should see the following interface.

TexturePacker has three panels; let's start from the right. The right-hand panel displays the names of all the images you select to create the spritesheet. The center panel is a preview window that shows how the images are packed. The left-hand panel gives you options to choose where the packed image and data file are published to, and to set the maximum size of the packed image. The Layout section gives a lot of flexibility in arranging the individual images, and then there is the advanced section. Let's look at some of the key items on the left-hand panel.

The display section

The display section consists of the following options:

- Data Format: As we saw earlier, each export creates a spritesheet containing a collection of images, plus a data file that keeps track of the positions on the spritesheet. The data format usually differs depending on the framework or engine.
In TexturePacker, you can select the framework you are using to develop the game, and TexturePacker will create a data file format compatible with it. If you look at the drop-down menu, you can see a lot of popular frameworks and engines in the list, such as 2DToolkit, OGRE, Cocos2d, Corona SDK, LibGDX, Moai, Sparrow/Starling, SpriteKit, and Unity. You can also create a regular JSON file if you wish. JavaScript Object Notation (JSON) is similar to XML in that it is used to store and retrieve data; it is a collection of name and value pairs used for data interchange.

- Data file: This is the location where you want the exported file to be placed.
- Texture format: Usually, this is set to .png, but you can select the one that is most convenient. Apart from PNG, you also have PVR, which is used so that people cannot readily view the image, and which also provides image compression.
- Png OPT file: This is used to set the quality of PNG images.
- Image format: This sets the RGB format to be used; usually, you would leave this at the default value.
- AutoSD: If you are going to create images for different resolutions, this option allows you to create resources for each of the resolutions you are developing the game for, without the need to go into the graphics software, shrink the images, and pack them again for every resolution.
- Content protection: This protects the image and data file with an encryption key so that people can't steal spritesheets from the game file.

The Geometry section

The Geometry section consists of the following options:

- Max size: You can specify the maximum width and height of the spritesheet, depending on the framework. Usually, all frameworks allow up to 4096 x 4096, but it mostly depends on the device.
- Fixed size: If you want a fixed size, you will go with this option.
- Size constraint: Some frameworks prefer spritesheet dimensions that are a power of two (POT), for example, 32x32, 64x64, 256x256, and so on. If this is the case, you need to select the size accordingly. For Cocos2d, you can choose any size.
- Scale: This is used to scale the image up or down.

The Layout section

The Layout section consists of the following options:

- Algorithm: This is the algorithm that will be used to make sure the images you select are packed into the spritesheet in the most efficient way. If you are using the pro version, choose MaxRects; if you are using the essential version, you will have to choose Basic.
- Border Padding / Shape Padding: Border padding is the gap between the border of the spritesheet and the images it surrounds. Shape padding is the padding between the individual images of the spritesheet. If you find that the images overlap while playing the animation in the game, you might want to increase these values to avoid overlapping.
- Trim: This removes the extra transparent pixels surrounding each image, which would otherwise unnecessarily increase the size of the spritesheet.

Advanced features

The following are some miscellaneous options in TexturePacker:

- Texture path: This prepends the path of the texture file to the texture name.
- Clean transparent pixels: This sets the color of fully transparent pixels to #000.
- Trim sprite names: This removes the extension from the names of the sprites (.png and .jpg), so when calling for the name of a frame, you will not have to use the extension.

Creating a spritesheet for the player

Now that we understand the different items in the TextureSettings panel of TexturePacker, let's create our spritesheet for the player animation from the individual frames provided in the Resources folder. Open the folder on your system and select all the images for the player, which contain the idle and boost frames. There will be four images for each animation.
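The power-of-two constraint mentioned under Size constraint is simple to compute: round each dimension up to the next power of two. A quick sketch (Python; the next_pot helper is invented for illustration):

```python
def next_pot(n):
    """Smallest power of two >= n (for frameworks that require POT textures)."""
    pot = 1
    while pot < n:
        pot *= 2
    return pot


# A 300x140 spritesheet would be padded to 512x256 on a POT-only framework:
print(next_pot(300), next_pot(140))  # 512 256

# Dimensions that are already powers of two are left alone:
print(next_pot(256))                 # 256
```

The padding is why POT constraints can waste memory: a 300x140 sheet occupies a 512x256 texture, so packing your frames tightly (and trimming transparent borders) matters more on frameworks that enforce this rule.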
Select all eight images and drag them to the Sprites panel, which is the right-most panel of TexturePacker. Once you have all the images on the Sprites panel, the preview panel at the center will show a preview of the spritesheet that will be created. Now, on the TextureSettings panel, for the Data format option, select cocos2d. Then, for the Data file option, click on the folder icon on the right, select the location where you would like to place the data file, and give the name as player_anim. Once selected, you will see that the Texture file location is auto-populated with the same location. The data file will have the .plist format, and the texture file will have the .png extension. The .plist format stores data in a markup language similar to XML. Although it is more common on Mac, you can use this data type regardless of the platform you use while developing the game with Cocos2d-x. Keep the rest of the settings the same. Save the file by clicking on the save icon at the top, choosing a location where the data and spritesheet files are saved; this way, you can access them easily the next time you want to modify the spritesheet. Now, click on the Publish button, and you will see two files, player_anim.plist and player_anim.png, in the location you specified in the Data file and Texture file options. Copy and paste these two files into the Resources folder of the project so that we can use them to create the player states.

Creating and coding the enemy animation

Now, let's create a similar spritesheet and data file for the enemy. All the required files for the enemy frames are provided in the Resources folder. Once you create the spritesheet for the enemy, it should look something like the following screenshot. Don't worry if the images are shown in the wrong sequence; just make sure that the files are numbered correctly from 1 to 4 and are in the sequence the animation needs to be played in.
Now, place the enemy_anim.png spritesheet and the data file in the Resources folder of the project directory, and add the following lines of code in the Enemy.cpp file to animate the enemy:

    // enemy animation
    CCSpriteBatchNode* spritebatch = CCSpriteBatchNode::create("enemy_anim.png");
    CCSpriteFrameCache* cache = CCSpriteFrameCache::sharedSpriteFrameCache();
    cache->addSpriteFramesWithFile("enemy_anim.plist");

    this->createWithSpriteFrameName("enemy_idle_1.png");
    this->addChild(spritebatch);

    // idle animation
    CCArray* animFrames = CCArray::createWithCapacity(4);
    char str1[100] = {0};
    for (int i = 1; i <= 4; i++)
    {
        sprintf(str1, "enemy_idle_%d.png", i);
        CCSpriteFrame* frame = cache->spriteFrameByName(str1);
        animFrames->addObject(frame);
    }

    CCAnimation* idleanimation = CCAnimation::createWithSpriteFrames(animFrames, 0.25f);
    this->runAction(CCRepeatForever::create(CCAnimate::create(idleanimation)));

This is very similar to the code for the player. The only difference is that for the enemy, instead of calling the functions on the hero, we call them on the same class. So, if you build and run the game now, you should see the enemy being animated. The following is a screenshot of the updated code in action. You can now see the flames from the booster engine of the enemy. Sadly, he doesn't have a boost animation, but his feet swing in the air. Now that we have mastered the spritesheet animation technique, let's see how to create a simple animation using the skeletal animation technique.

Creating the skeletal animation

Using this technique, we will create a very simple player walk cycle. For this, there is a software called Spine by Esoteric Software, which is a widely used professional tool for creating skeletal animations for 2D games.
The software can be downloaded from the company's website at http://esotericsoftware.com/spine-purchase. There are three versions of the software available: the trial, essential, and professional versions. Although the majority of the professional version's features are available in the essential version, it doesn't have ghosting, meshes, free-form deformation, skinning, and IK pinning, which is in beta. These features speed up the animation process and take a lot of manual work away from the animator or illustrator. To learn more about them, visit the website and hover the mouse over each feature to get a better understanding of what it does. You can follow along by downloading the trial version, which can be obtained by clicking the Download trial link on the website. Spine is available for all platforms, including Windows, Mac, and Linux, so download it for the OS of your choice. On Mac, after downloading and running the software, it will ask you to install X11; alternatively, you can download and install it from http://xquartz.macosforge.org/landing/. After downloading and installing the plugin, you can open Spine. Once the software is up and running, you should see the following window. Now, create a new project by clicking on the Spine icon in the top left. As we can see in the screenshot, we are now in the SETUP mode, where we set up the character. In the Tree panel on the right-hand side, in the Hierarchy pane, select the Images folder. After selecting the folder, you will be able to select the path where the individual files for the player are located. Navigate to the player_skeletal_anim folder, where all the images are present.
Once selected, you will see the panel populate with the images that are present in the folder, namely the following:

- bookGame_player_Lleg
- bookGame_player_Rleg
- bookGame_player_bazooka
- bookGame_player_body
- bookGame_player_hand
- bookGame_player_head

Now drag-and-drop all the files from the Images folder onto the scene. Don't worry if the images are not in the right order. In the Draw Order dropdown in the Hierarchy panel, you can drag-and-drop the different items to make them draw in the order you want them displayed. Once reordered, move the individual images on the screen to their appropriate positions. You can move the images around by clicking on the translate button at the bottom of the screen; if you hover over the buttons, you can see their names. We will now start creating the bones that we will use to animate the character. At the bottom of the Tools section, click on the Create button. You should now see the cursor change to the bone creation icon. Before you create a bone, you always have to select the bone that will be its parent. In this case, we select the root bone, which is in the center of the character. Click on it and drag downwards while holding the Shift key. Click-and-drag downwards up to the end of the blue dress of the character, making sure that the blue dress is highlighted, then release the mouse button. The end point of this bone will be used as the hip joint, from which the leg bones will be created. Now select the end of the newly created bone and click-and-drag downwards again, holding Shift at the same time, to make a bone that goes all the way to the end of the leg. With the leg still highlighted, release the mouse button.
To create the bone for the other leg, create a new bone again, starting from the hip joint at the end of the first bone, and release the mouse button while the other leg is highlighted to create its bone. Now, we will create a bone for the hand. Select the root node in the middle of the character, hold Shift, and draw a bone to the hand while the hand is highlighted. Create a bone for the head by again selecting the root node: draw a bone from the root node towards the head while holding Shift, and release the mouse button once you are near the ear of the character and the head is highlighted. You will notice that we never created a bone for the bazooka. For the bazooka, we will make the hand its parent bone so that when the hand rotates, the bazooka rotates along with it. Click on the bazooka node in the Hierarchy panel (not the image) and drag it onto the hand node in the skeleton list. You can rotate each of the bones to check whether it rotates properly. If not, you can move either the bones or the images around by locking one of them in place so that you can move or rotate the other freely; do this by clicking either the bones or the images button next to the compensate control at the bottom of the screen. The following screenshot shows my setup; you can use it to follow along and create the bones to get a more satisfying animation. To animate the character, click on the SETUP button at the top, and the layout will change to ANIMATE. You will see that a new timeline has appeared at the bottom. Click on the Animations tab in Hierarchy and rename the animation from animation to runCycle by double-clicking on it. We will use the timeline to animate the character. Click on the Dopesheet icon at the bottom. This shows all the keyframes that we have made for the animation; as we have not created any yet, the dopesheet is empty.
To create our first keyframe, we will click on the legs and rotate both bones so that they reflect the contact pose of the walk cycle. To set a keyframe, click on the orange-colored key icon next to Rotate in the Transform panel at the bottom of the screen. Click on the translate key as well, as we will be changing the translation later. Once you click, the dopesheet will show the bones you just rotated and the changes made to each bone. Here, we rotated the bones, so you will see Rotation under the bones, and as we clicked on the translate key, it will show Translate as well. Frame 24 is the same as frame 0, so to create the keyframe at frame 24, drag the timeline scrubber to frame 24 and click on the rotate and translate keys again. To set the keyframe in the middle, where the contact pose happens with the opposite legs, rotate each leg to where the opposite leg was and click the keys to create a keyframe. For frames 6 and 18, we will keep the walk cycle very simple: just raise the character by selecting the root node, move it up in the y direction, and click the orange key next to the translate button in the Transform panel. Remember that you have to click it once at frame 6, then move the timeline scrubber to frame 18, move the character up again, and click on the key again, creating keyframes for both frames 6 and 18. Now the dopesheet should look as follows. To play the animation in a loop, click on the Repeat Animation button to the right of the Play button, and then on the Play button. You will see the simple walk animation we created for the character. Next, we will export the data required to create the animation in Cocos2d-x. First, we will export the data for the animation: click on the Spine button at the top and select Export. The following window should pop up.
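What the tool does between those keys is interpolation: for every in-between frame, it blends the neighboring keyframe values. The bounce we keyed on the root bone (up at frames 6 and 18, down on the contact poses at 0, 12, and 24) can be sketched with plain linear interpolation; the Python below and the pixel values in it are made up purely for illustration, and real skeletal animation tools typically use smoother curves than linear:

```python
def sample(keys, frame):
    """Linearly interpolate keyframed values at an arbitrary frame."""
    frames = sorted(keys)
    if frame <= frames[0]:
        return keys[frames[0]]
    if frame >= frames[-1]:
        return keys[frames[-1]]
    for left, right in zip(frames, frames[1:]):
        if left <= frame <= right:
            t = (frame - left) / (right - left)
            return keys[left] + t * (keys[right] - keys[left])


# Root-bone height keys for the bounce: down on the contact poses,
# up at frames 6 and 18 (values in pixels, invented for the sketch).
height_keys = {0: 0.0, 6: 10.0, 12: 0.0, 18: 10.0, 24: 0.0}

print(sample(height_keys, 3))   # 5.0  (halfway up)
print(sample(height_keys, 6))   # 10.0 (exactly on the key)
print(sample(height_keys, 21))  # 5.0  (halfway back down)
```

This is also why skeletal data files stay small: only the five keys are stored, and every other frame of the run cycle is computed at runtime.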
Select JSON, choose the directory you want to save the file to, and click on Export. That is not all; we have to create a spritesheet and data file just as we created them in TexturePacker. There is an inbuilt tool in Spine to create a packed spritesheet. Again, click on the Spine icon and this time select Texture Packer. In the input directory, select the Images folder from which we imported all the images initially. For the output directory, select the location where the PNG and data files should be saved. If you click on the settings button, you will see that it looks very similar to what we saw in TexturePacker. Keep the default values as they are. Click on Pack and give the name as player. This will create the .png and .atlas files, which are the spritesheet and data file, respectively. You now have three files instead of the two from TexturePacker: two data files and an image file. While exporting the JSON file, if you didn't give it a name, you can rename the file manually to player.json for consistency. Drag the player.atlas, player.json, and player.png files into the project folder. Finally, we come to the fun part where we actually use the data files to animate the character. For testing, we will add the animations to the HelloWorldScene.cpp file and check the result. Later, when we add the main menu, we will move it there so that the animation shows as soon as the game is launched.
Coding the player walk cycle

If you want to test the animations in the current project itself, add the following to the HelloWorldScene.h file first:

#include <spine/spine-cocos2dx.h>

This includes the spine header file. Next, create a variable named skeletonNode of the extension::CCSkeletonAnimation type:

extension::CCSkeletonAnimation* skeletonNode;

Next, we initialize the skeletonNode variable in the HelloWorldScene.cpp file:

skeletonNode = extension::CCSkeletonAnimation::createWithFile("player.json", "player.atlas", 1.0f);
skeletonNode->addAnimation("runCycle", true, 0, 0);
skeletonNode->setPosition(ccp(visibleSize.width/2, skeletonNode->getContentSize().height/2));
addChild(skeletonNode);

Here, we pass the two data files to the createWithFile() function of CCSkeletonAnimation. Then, we start the animation with addAnimation(), giving it the animation name we chose when we created the animation in Spine, which is runCycle. We then set the position of the skeletonNode right above the bottom of the screen and add it to the display list. Now, if you build and run the project, you will see the player animated forever in a loop at the bottom of the screen:

On the left, we have the animation we created using TexturePacker from CodeAndWeb, and in the middle, we have the animation that was created using Spine from Esoteric Software. Both techniques have their own advantages, and the choice also depends upon the type and scale of the game that you are making. Depending on this, you can choose the tool that is better tuned to your needs. If you have a small number of animations in your game and good artists, you could use regular spritesheet animations. If you have a lot of animations or don't have good animators on your team, Spine makes the animation process a lot less cumbersome.
Either way, both tools in professional hands can create very good animations that will give life to the characters in the game, and therefore give a lot of character to the game itself.

Summary

This article took a very brief look at animations and how to create an animated character in a game using two of the most popular animation techniques used in games. We also looked at FSMs and at how we can create a simple state machine between two states and make the animation change according to the state of the player at that moment.

Resources for Article:

Further resources on this subject:
- Moving the Space Pod Using Touch [Article]
- Sprites [Article]
- Cocos2d-x: Installation [Article]
Galera Cluster Basics

Packt
23 Sep 2014
This article, written by Pierre MAVRO, the author of MariaDB High Performance, provides a brief introduction to Galera Cluster. (For more resources related to this topic, see here.)

Galera Cluster is a synchronous multimaster solution created by Codership. It's a patch for MySQL and MariaDB with its own commands and configuration. On MariaDB, it has been officially promoted as the MariaDB Cluster. Galera Cluster provides certification-based replication. This means that each node certifies the replicated write set against other write sets. You don't have to worry about data integrity, as it manages it automatically and very well. Galera Cluster is a young product, but it is ready for production. If you have already heard of MySQL Cluster, don't be confused; this is not the same thing at all. MySQL Cluster is a solution that has not been ported to MariaDB due to its complexity, code, and other reasons. MySQL Cluster provides availability and partitioning, while Galera Cluster provides consistency and availability. Galera Cluster is a simple yet powerful solution.

How Galera Cluster works

The following are some advantages of Galera Cluster:

- True multimaster: It can read and write to any node at any time
- Synchronous replication: There is no slave lag and no data is lost at node crash
- Consistent data: All nodes have the same state (the same data exists between nodes at a point in time)
- Multithreaded slave: This enables better performance with any workload
- No need of an HA cluster for management: There are no master-slave failover operations (such as Pacemaker, PCR, and so on)
- Hot standby: There is no downtime during failover
- Transparent to applications: No specific drivers or application changes are required
- No read and write splitting needed: There is no need to split the read and write requests
- WAN: Galera Cluster supports WAN replication

Galera Cluster needs at least three nodes to work properly (because of the notion of quorum, election, and so on).
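The three-node minimum follows from simple majority arithmetic. The sketch below is our own illustration of quorum, not anything Galera-specific: a partition of the cluster may keep serving writes only while it still holds a strict majority of the nodes, which is why two nodes are not enough on their own.

```javascript
// Illustrative helper, not part of Galera: a partition keeps quorum
// only if it still holds a strict majority of the original cluster.
function hasQuorum(aliveNodes, clusterSize) {
  return aliveNodes > clusterSize / 2;
}

// Two nodes: lose one and the survivor holds exactly half -- no
// majority, so it must stop accepting writes to avoid split-brain.
console.log(hasQuorum(1, 2)); // false

// Three nodes (or two plus an arbiter): lose one and the remaining
// two still form a majority, so the cluster stays available.
console.log(hasQuorum(2, 3)); // true
console.log(hasQuorum(1, 3)); // false
```

This is also why the arbiter mentioned next is useful: it adds a vote without adding a full data node.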
You can also work with a two-node cluster, but you will need an arbiter (hence three nodes). The arbiter could run on another machine available in the same LAN as your Galera Cluster, if possible. The multimaster replication has been designed for InnoDB/XtraDB. That doesn't mean you can't perform replication with other storage engines! If you want to use other storage engines, you will be limited by the following:

- They can only write on a single node at a time to maintain consistency.
- Replication with other nodes may not be fully supported.
- Conflict management won't be supported.

Applications that connect to Galera will only be able to write on a single node (IP/DNS) at the same time. As you can see in the preceding diagram, HTTP and App servers speak directly to their respective DBMS servers without wondering which node of the Galera Cluster they are targeting. Usually, without Galera Cluster, you can use cluster software such as Pacemaker/Corosync to get a VIP on a master node that can switch over in case a problem occurs. There is no need for PCR in that case; a simple VIP with a custom script that checks whether the server is in sync with the others will be sufficient.

Galera Cluster uses the following advanced mechanisms for replication:

- Transaction reordering: Transactions are reordered before commitment to other nodes. This increases the number of successful transaction certification pass tests.
- Write sets: This reduces the number of operations between nodes by writing sets in a single write set to avoid too much node coordination.
- Database state machine: Read-only transactions are processed on the local node. Write transactions are executed locally on shadow copies and then broadcast as a read set to the other nodes for certification and commit.
- Group communication: High-level abstraction for communication between nodes to guarantee consistency (gcomm or spread).
To get consistency and similar IDs between nodes, Galera Cluster uses GTIDs, similar to MariaDB 10 replication. However, it doesn't use the MariaDB GTID replication mechanism at all, as it has its own implementation for its own usage.

Galera Cluster limitations

Galera Cluster has limitations that can prevent it from working correctly. Do not go live in production if you haven't checked that your application complies with the limitations listed. The following are the limitations:

- Galera Cluster only fully supports InnoDB tables. TokuDB is planned but not yet available, and MyISAM is partially supported.
- Galera Cluster requires primary keys on all your tables (mandatory) to avoid different query execution orders between your nodes. If you do not create one on your own, Galera will create one. The delete operation is not supported on tables without primary keys.
- Locking/unlocking tables and lock functions are not supported. They will be ignored if you try to use them.
- Galera Cluster disables the query cache.
- XA transactions (global transactions) are not supported.
- Query logs can't be directed to a table, but can be directed to a file instead.

Other less common limitations exist (please refer to the full list if you want to get them all: http://galeracluster.com/documentation-webpages/limitations.html), but in most cases, you shouldn't be affected by them.

Summary

This article introduced the benefits and drawbacks of Galera Cluster. It also discussed the features of Galera Cluster that make it a good solution for write replication.

Resources for Article:

Further resources on this subject:
- Building a Web Application with PHP and MariaDB – Introduction to caching [Article]
- Using SHOW EXPLAIN with running queries [Article]
- Installing MariaDB on Windows and Mac OS X [Article]
Using Socket.IO and Express together

Packt
23 Sep 2014
This article by Joshua Johanan, the author of the book Building Scalable Apps with Redis and Node.js, tells us that the Express application is just the foundation. We are going to add features until it is a fully usable app. We currently can serve web pages and respond to HTTP, but now we want to add real-time communication. It's very fortunate that we just spent most of this article learning about Socket.IO; it does just that! Let's see how we are going to integrate Socket.IO with an Express application. (For more resources related to this topic, see here.)

We are going to use Express and Socket.IO side by side. Socket.IO does not use HTTP like a web application. It is event based, not request based. This means that Socket.IO will not interfere with the Express routes that we have set up, and that's a good thing. The bad thing is that we will not have access to all the middleware that we set up for Express in Socket.IO. There are some frameworks that combine the two, but they still have to convert the request from Express into something that Socket.IO can use. I am not trying to knock these frameworks. They simplify a complex problem and, most importantly, they do it well (Sails is a great example of this). Our app, though, is going to keep Socket.IO and Express separated as much as possible, with the least number of dependencies. We know that Socket.IO does not need Express, as none of our examples have used Express in any way. This has an added benefit in that we can break off our Socket.IO module and run it as its own application at a future point in time. The other great benefit is that we learn how to do it ourselves. We need to go into the directory where our Express application is. Make sure that our package.json has all the additional packages for this article and run npm install. The first thing we need to do is add our configuration settings.

Adding Socket.IO to the config

We will use the same config file that we created for our Express app.
Open up config.js and change the file to what I have done in the following code:

var config = {
  port: 3000,
  secret: 'secret',
  redisPort: 6379,
  redisHost: 'localhost',
  routes: {
    login: '/account/login',
    logout: '/account/logout'
  }
};
module.exports = config;

We are adding two new attributes, redisPort and redisHost. This is because of how the redis package configures its clients. We are also removing the redisUrl attribute. We can configure all our clients with just these two Redis config options. Next, create a directory under the root of our project named socket.io. Then, create a file called index.js. This will be where we initialize Socket.IO and wire up all our event listeners and emitters. We are just going to use one namespace for our application. If we were to add multiple namespaces, I would just add them as files underneath the socket.io directory. Open up app.js and change the following lines in it:

//variable declarations at the top
var io = require('./socket.io');
//after all the middleware and routes
var server = app.listen(config.port);
io.startIo(server);

We will define the startIo function shortly, but let's talk about our app.listen change. Previously, we executed app.listen without capturing its return value in a variable; now we are. Socket.IO listens using Node's http.createServer. It does this automatically if you pass a number into its listen function. When Express executes app.listen, it returns an instance of the HTTP server. We capture that, and now we can pass the HTTP server to Socket.IO's listen function. Let's create that startIo function.
Open up index.js in the socket.io directory and add the following lines of code to it:

var io = require('socket.io');
var config = require('../config');

var socketConnection = function socketConnection(socket){
  socket.emit('message', {message: 'Hey!'});
};

exports.startIo = function startIo(server){
  io = io.listen(server);
  var packtchat = io.of('/packtchat');
  packtchat.on('connection', socketConnection);
  return io;
};

We are exporting the startIo function that expects a server object that goes right into Socket.IO's listen function. This should start Socket.IO serving. Next, we get a reference to our namespace and listen on the connection event, sending a message event back to the client. We are also loading our configuration settings. Let's add some code to the layout and see whether our application has real-time communication. We will need the Socket.IO client library, so link to it from node_modules like you have been doing, and put it in our static directory under a newly created js directory. Open layout.ejs in packtchat/views and add the following lines to it:

<!-- put these right before the body end tag -->
<script type="text/javascript" src="/js/socket.io.js"></script>
<script>
var socket = io.connect("http://localhost:3000/packtchat");
socket.on('message', function(d){
  console.log(d);
});
</script>

We just listen for a message event and log it to the console. Fire up node and load your application, http://localhost:3000. Check to see whether you get a message in your console. You should see your message logged to the console, as seen in the following screenshot:

Success! Our application now has real-time communication. We are not done though. We still have to wire up all the events for our app.

Who are you?

There is one glaring issue. How do we know who is making the requests? Express has middleware that parses the session to see if someone has logged in. Socket.IO does not even know about a session.
Socket.IO lets anyone connect who knows the URL. We do not want anonymous connections that can listen to all our events and send events to the server. We only want authenticated users to be able to create a WebSocket. We need to give Socket.IO access to our sessions.

Authorization in Socket.IO

We haven't discussed it yet, but Socket.IO has middleware. Before the connection event gets fired, we can execute a function and either allow the connection or deny it. This is exactly what we need.

Using the authorization handler

Authorization can happen at two places, on the default namespace or on a named namespace connection. Both authorizations happen through the handshake. The function's signature is the same either way. It will pass in the socket server, which has some stuff we need, such as the connection's headers, for example. For now, we will add a simple authorization function to see how it works with Socket.IO. Open up index.js in packtchat/socket.io and add a new function that will sit next to the socketConnection function, as seen in the following code:

var io = require('socket.io');

var socketAuth = function socketAuth(socket, next){
  return next();
  return next(new Error('Nothing Defined'));
};

var socketConnection = function socketConnection(socket){
  socket.emit('message', {message: 'Hey!'});
};

exports.startIo = function startIo(server){
  io = io.listen(server);
  var packtchat = io.of('/packtchat');
  packtchat.use(socketAuth);
  packtchat.on('connection', socketConnection);
  return io;
};

I know that there are two returns in this function. We are going to comment one out, load the site, and then switch which line is commented out. The socket server that is passed in has a reference to the handshake data that we will use shortly. The next function works just like it does in Express. If we execute it without anything, the middleware chain will continue. If it is executed with an error, it will stop the chain.
Let's load up our site and test both by switching which return gets executed. We can allow or deny connections as we please now, but how do we know who is trying to connect?

Cookies and sessions

We will do it the same way Express does. We will look at the cookies that are passed and see if there is a session. If there is a session, then we will load it up and see what is in it. At this point, we should have the same knowledge about the Socket.IO connection that Express does about a request. The first thing we need to do is get a cookie parser. We will use a very aptly named package called cookie. This should already be installed if you updated your package.json and installed all the packages. Add a reference to it at the top of index.js in packtchat/socket.io with all the other variable declarations:

var cookie = require('cookie');

And now we can parse our cookies. Socket.IO passes in the cookie with the socket object in our middleware. Here is how we parse it. Add the following code in the socketAuth function:

var handshakeData = socket.request;
var parsedCookie = cookie.parse(handshakeData.headers.cookie);

At this point, we will have an object that has our connect.sid in it. Remember that this is a signed value. We cannot use it as it is right now to get the session ID. We will need to parse this signed cookie. This is where cookie-parser comes in. We will now create a reference to it, as follows:

var cookieParser = require('cookie-parser');

We can now parse the signed connect.sid cookie to get our session ID. Add the following code right after our parsing code:

var sid = cookieParser.signedCookie(parsedCookie['connect.sid'], config.secret);

This will take the value from our parsedCookie and, using our secret passphrase, return the unsigned value. We will do a quick check to make sure this was a valid signed cookie by comparing the unsigned value to the original.
We will do this in the following way: if (parsedCookie['connect.sid'] === sid)   return next(new Error('Not Authenticated')); This check will make sure we are only using valid signed session IDs. The following screenshot will show you the values of an example Socket.IO authorization with a cookie: Getting the session We now have a session ID so we can query Redis and get the session out. The default session store object of Express is extended by connect-redis. To use connect-redis, we use the same session package as we did with Express, express-session. The following code is used to create all this in index.js, present at packtchatsocket.io: //at the top with the other variable declarationsvar expressSession = require('express-session');var ConnectRedis = require('connect-redis')(expressSession);var redisSession = new ConnectRedis({host: config.redisHost, port: config.redisPort}); The final line is creating the object that will connect to Redis and get our session. This is the same command used with Express when setting the store option for the session. We can now get the session from Redis and see what's inside of it. 
What follows is the entire socketAuth function along with all our variable declarations:

var io = require('socket.io'),
  cookie = require('cookie'),
  cookieParser = require('cookie-parser'),
  expressSession = require('express-session'),
  ConnectRedis = require('connect-redis')(expressSession),
  redis = require('redis'),
  config = require('../config'),
  redisSession = new ConnectRedis({host: config.redisHost, port: config.redisPort});

var socketAuth = function socketAuth(socket, next){
  var handshakeData = socket.request;
  var parsedCookie = cookie.parse(handshakeData.headers.cookie);
  var sid = cookieParser.signedCookie(parsedCookie['connect.sid'], config.secret);
  if (parsedCookie['connect.sid'] === sid)
    return next(new Error('Not Authenticated'));
  redisSession.get(sid, function(err, session){
    if (session.isAuthenticated)
    {
      socket.user = session.user;
      socket.sid = sid;
      return next();
    }
    else
      return next(new Error('Not Authenticated'));
  });
};

We can use redisSession and sid to get the session out of Redis and check its attributes. As far as our packages are concerned, we are just another Express app getting session data. Once we have the session data, we check the isAuthenticated attribute. If it's true, we know the user is logged in. If not, we do not let them connect yet. We are adding properties to the socket object to store information from the session. Later on, after a connection is made, we can get this information. As an example, we are going to change our socketConnection function to send the user object to the client. The following should be our socketConnection function:

var socketConnection = function socketConnection(socket){
  socket.emit('message', {message: 'Hey!'});
  socket.emit('message', socket.user);
};

Now, let's load up our browser and go to http://localhost:3000. Log in and then check the browser's console.
The following screenshot will show that the client is receiving the messages:

Adding application-specific events

The next thing to do is to build out all the real-time events that Socket.IO is going to listen for and respond to. We are just going to create the skeleton for each of these listeners. Open up index.js in packtchat/socket.io and change the entire socketConnection function to the following code:

var socketConnection = function socketConnection(socket){
  socket.on('GetMe', function(){});
  socket.on('GetUser', function(room){});
  socket.on('GetChat', function(data){});
  socket.on('AddChat', function(chat){});
  socket.on('GetRoom', function(){});
  socket.on('AddRoom', function(r){});
  socket.on('disconnect', function(){});
};

Most of our emit events will happen in response to a listener.

Using Redis as the store for Socket.IO

The final thing we are going to add is to switch Socket.IO's internal store to Redis. By default, Socket.IO uses a memory store to save any data you attach to a socket. As we know now, we cannot have an application state that is stored on only one server. We need to store it in Redis. Therefore, we add it to index.js in packtchat/socket.io. Add the following code to the variable declarations:

var redisAdapter = require('socket.io-redis');

An application state is a flexible idea. We can store the application state locally when the state does not need to be shared. A simple example is keeping the path to a local temp file. When the data will be needed by multiple connections, it must be put into a shared space. Anything tied to a user's session will need to be shared, for example. The next thing we need to do is add some code to our startIo function.
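Before we do, the problem with a per-process memory store can be shown without Socket.IO at all. In this small sketch (the createServer helper and the keys are purely illustrative, not a real Socket.IO API), two "server" objects stand in for two Node processes; state written through one private store never reaches the other, while a shared store, which is the role Redis plays for us, is visible to both:

```javascript
// A stand-in for one Node process; 'store' plays the part of
// Socket.IO's default in-memory store.
function createServer(store) {
  return {
    set: function (key, value) { store.set(key, value); },
    get: function (key) { return store.get(key); }
  };
}

// Two processes, each with a private store.
var serverA = createServer(new Map());
var serverB = createServer(new Map());
serverA.set('user:42', 'josh');
console.log(serverA.get('user:42')); // 'josh'
console.log(serverB.get('user:42')); // undefined -- the state never traveled

// Two processes sharing one store, which is what switching
// Socket.IO's store over to Redis gives us.
var shared = new Map();
var sharedA = createServer(shared);
var sharedB = createServer(shared);
sharedA.set('user:42', 'josh');
console.log(sharedB.get('user:42')); // 'josh'
```

A load balancer can send a user's next connection to any process, so anything session-related has to live in the shared store.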
The following code is what our startIo function should look like:

exports.startIo = function startIo(server){
  io = io.listen(server);
  io.adapter(redisAdapter({host: config.redisHost, port: config.redisPort}));
  var packtchat = io.of('/packtchat');
  packtchat.use(socketAuth);
  packtchat.on('connection', socketConnection);
  return io;
};

The first thing is to start the server listening. Next, we call io.adapter, passing in a new Redis adapter, which handles the publish and subscribe connections between Socket.IO servers for us. The adapter takes the Redis hostname and port from our config.

Socket.IO inner workings

We are not going to completely dive into everything that Socket.IO does, but we will discuss a few topics.

WebSockets

This is what makes Socket.IO work. All web servers serve HTTP; that is what makes them web servers. This works great when all you want to do is serve pages, and those pages are served based on requests. The browser must ask for information before receiving it. If you want to have real-time connections, though, it is difficult and requires some workarounds. HTTP was not designed to have the server initiate the request. This is where WebSockets come in. WebSockets allow the server and client to create a connection and keep it open. Inside of this connection, either side can send messages back and forth. This is what Socket.IO (technically, Engine.io) leverages to create real-time communication. Socket.IO even has fallbacks if you are using a browser that does not support WebSockets. The browsers that do support WebSockets at the time of writing include the latest versions of Chrome, Firefox, Safari, Safari on iOS, Opera, and IE 11. This means the browsers that do not support WebSockets are all the older versions of IE. Socket.IO will use different techniques to simulate a WebSocket connection. This involves creating an Ajax request and keeping the connection open for a long time.
If data needs to be sent, it will be sent in an Ajax request. Eventually, that request will close and the client will immediately create another request. Socket.IO even has an Adobe Flash implementation if you have to support really old browsers (IE 6, for example); it is not enabled by default. WebSockets are also a little different when it comes to scaling our application. Because each WebSocket creates a persistent connection, we may need more servers to handle Socket.IO traffic than regular HTTP. For example, when someone connects and chats for an hour, there will have been only one or two HTTP requests. In contrast, a WebSocket will have to stay open for the entire hour. The way our code base is written, we can easily scale up more Socket.IO servers by themselves.

Ideas to take away from this article

The first takeaway is that for every emit, there needs to be an on. This is true whether the sender is the server or the client. It is always best to sit down and map out each event and the direction it is going. The next idea of note is that of building our app out of loosely coupled modules. Our app.js kicks off everything that deals with Express. Then, it fires the startIo function. While it does pass over an object, we could easily create one and use that. Socket.IO just wants a basic HTTP server. In fact, you can just pass the port, which is what we used in our first couple of Socket.IO applications (Ping-Pong). If we wanted to create an application layer of Socket.IO servers, we could refactor this code out and have all the Socket.IO servers run separately from Express.

Summary

At this point, we should feel comfortable using real-time events in Socket.IO. We should also know how to namespace our io server and create groups of users. We also learned how to authorize socket connections to only allow logged-in users to connect.
Resources for Article:

Further resources on this subject:
- Exploring streams [article]
- Working with Data Access and File Formats Using Node.js [article]
- So, what is Node.js? [article]