
How-To Tutorials - Programming


Human Interactions in BPEL

Packt
08 Sep 2010
13 min read
(For more resources on BPEL, SOA and Oracle see here.) Human interactions in business processes The main objective of BPEL has been to standardize the process automation. BPEL business processes make use of services and externalize their functionality as services. BPEL processes are defined as a collection of activities through which services are invoked. BPEL does not make a distinction between services provided by applications and other interactions, such as human interactions, which are particularly important. Real-world business processes namely often integrate not only systems and services, but also humans. Human interactions in business processes can be very simple, such as approval of certain tasks or decisions, or complex, such as delegation, renewal, escalation, nomination, chained execution, and so on. Human interactions are not limited to approvals and can include data entries, process monitoring and management, process initiation, exception handling, and so on. Task approval is the simplest and probably the most common human interaction. In a business process for opening a new account, a human interaction might be required to decide whether the user is allowed to open the account. In a travel approval process, a human might approve the decision from which airline to buy the ticket (as shown in the following figure). If the situation is more complex, a business process might require several users to make approvals, either in sequence or in parallel. In sequential scenarios, the next user often wants to see the decision made by the previous user. Sometimes, particularly in parallel human interactions, users are not allowed to see the decisions taken by other users. This improves the decision potential. Sometimes one user does not even know which other users are involved, or whether any other users are involved at all. A common scenario for involving more than one user is workflow with escalation. Escalation is typically used in situations where an activity does not fulfill a time constraint. In such a case, a notification is sent to one or more users. Escalations can be chained, going first to the first-line employees and advancing to senior staff if the activity is not fulfilled. Sometimes it is difficult or impossible to define in advance which user should perform an interaction. In this case, a supervisor might manually nominate the task to other employees; the nomination can also be made by a group of users or by a decision-support system. In other scenarios, a business process may require a single user to perform several steps that can be defined in advance or during the execution of the process instance. Even more complex processes might require that one workflow is continued with another workflow. Human interactions are not limited to only approvals; they may also include data entries or process management issues, such as process initiation, suspension, and exception management. This is particularly true for long-running business processes, where, for example, user exception handling can prevent costly process termination and related compensation for those activities that have already been successfully completed. As a best practice for human workflows, it is usually not wise to associate human interactions directly to specific users; it is better to connect tasks to roles and then associate those roles with individual users. 
This gives business processes greater flexibility, allowing any user with a certain role to interact with the process and enabling changes to users and roles to be made dynamically. To achieve this, the process has to gain access to users and roles, stored in the enterprise directory, such as LDAP (Lightweight Directory Access Protocol). Workflow theory has defined several workflow patterns, which specify the abovedescribed scenarios in detail. Examples of workflow patterns include sequential workflow, parallel workflow, workflow with escalation, workflow with nomination, ad-hoc workflow, workflow continuation, and so on. Human Tasks in BPEL So far we have seen that human interaction in business processes can get quite complex. Although BPEL specification does not specifically cover human interactions, BPEL is appropriate for human workflows. BPEL business processes are defined as collections of activities that invoke services. BPEL does not make a distinction between services provided by applications and other interactions, such as human interactions. There are mainly two approaches to support human interactions in BPEL. The first approach is to use a human workflow service. Several vendors today have created workflow services that leverage the rich BPEL support for asynchronous services. In this fashion, people and manual tasks become just another asynchronous service from the perspective of the orchestrating process and the BPEL processes stay 100% standard. The other approach has been to standardize the human interactions and go beyond the service invocations. This approach resulted in the workflow specifications emerging around BPEL with the objective to standardize the explicit inclusion of human tasks in BPEL processes. The BPEL4People specification has emerged, which was originally put forth by IBM and SAP in July 2005. Other companies, such as Oracle, Active Endpoints, and Adobe joined later. Finally, this specification is now being advanced within the OASIS BPEL4People Technical Committee. The BPEL4People specification contains two parts: BPEL4People version 1.0, which introduces BPEL extensions to address human interactions in BPEL as a first-class citizen. It defines a new type of basic activity, which uses human tasks as an implementation, and allows specifying tasks local to a process or use tasks defined outside of the process definition. BPEL4People is based on the WS-HumanTask specification that it uses for the actual specification of human tasks. Web Services Human Task (WS-HumanTask) version 1.0 introduces the definition of human tasks, including their properties, behavior, and a set of operations used to manipulate human tasks. It also introduces a coordination protocol in order to control autonomy and lifecycle of service-enabled human tasks in an interoperable manner. The most important extensions introduced in BPEL4People are people activities and people links. People activity is a new BPEL activity used to define user interactions; in other words, tasks that a user has to perform. For each people activity, the BPEL server must create work items and distribute them to users eligible to execute them. People activities can have input and output variables and can specify deadlines. To specify the implementation of people activities, BPEL4People introduced tasks. Tasks specify actions that users must perform. Tasks can have descriptions, priorities, deadlines, and other properties. 
To represent tasks to users, we need a client application that provides a user interface and interacts with tasks. It can query available tasks, claim and revoke them, and complete or fail them. To associate people activities and the related tasks with users or groups of users, BPEL4People introduced people links. People links are somewhat similar to partner links; they associate users with one or more people activities. People links are usually associated with generic human roles, such as process initiator, process stakeholders, owners, and administrators. The actual users that are associated with people activities can be determined at design time, deployment time, or runtime. BPEL4People anticipates the use of directories such as LDAP to select users. However, it doesn't define the query language used to select users. Rather, it foresees the use of LDAP filters, SQL, XQuery, or other methods. BPEL4People proposes complex extensions to the BPEL specification. However, so far it is still quite high level and doesn't yet specify the exact syntax of the new activities mentioned above. Until the specification becomes more concrete, we don't expect vendors to implement the proposed extensions. But while BPEL4People is early in the standardization process, it shows a great deal of promise. The BPEL4People proposal raises an important question: Is it necessary to introduce such complex extensions to BPEL to cover user interactions? Some vendor solutions model user interactions as just another web service, with well-defined interfaces for both BPEL processes and client applications. This approach does not require any changes to BPEL. To become portable, it would only need an industry-wide agreement on the two interfaces. And, of course, both interfaces can be specified with WSDL, which gives developers great flexibility and lets them use practically any environment, language, or platform that supports Web Services. Clearly, a single standard approach has not yet been adopted for extending BPEL to include Human Tasks and workflow services. However, this does not mean that developers cannot use BPEL to develop business processes with user interactions. Human Task integration with BPEL To interleave user interactions with service invocations in BPEL processes we can use a workflow service, which interacts with BPEL using standard WSDL interfaces. This way, the BPEL process can assign user tasks and wait for responses by invoking the workflow service using the same syntax as for any other service. The BPEL process can also perform more complex operations such as updating, completing, renewing, routing, and escalating tasks. After the BPEL process has assigned tasks to users, users can act on the tasks by using the appropriate applications. The applications communicate with the workflow service by using WSDL interfaces or another API (such as Java) to acquire the list of tasks for selected users, render appropriate user interfaces, and return results to the workflow service, which forwards them to the BPEL process. User applications can also perform other tasks such as reassign, escalate, route, suspend, resume, and withdraw. Finally, the workflow service may allow other communication channels, such as e-mail and SMS, as shown in the following figure: Oracle Human Workflow concepts Oracle SOA Suite 11g provides the Human Workflow component, which enables including human interaction in BPEL processes in a relatively easy way. 
The Human Workflow component consists of different services that handle various aspects of human interaction with business process and expose their interfaces through WSDL; therefore, BPEL processes invoke them just like any other service. The following figure shows the overall architecture of the Oracle Workflow services: As we can see in the previous figure, the Workflow consists of the following services: Task Service exposes operations for task state management, such as operations to update a task, complete a task, escalate a task, reassign a task, and so on. When we add a human task to the BPEL process, the corresponding partner link for the Task Service is automatically created. Task Assignment Service provides functionality to route, escalate, reassign tasks, and more. Task Query Service enables retrieving the task list for a user based on a search criterion. Task Metadata Service enables retrieving the task metadata. Identity Service provides authentication and authorization of users and lookup of user properties and privileges. Notification Service enables sending of notifications to users using various channels (e-mail, voice message, IM, SMS, and so on). User Metadata Service manages metadata, related to workflow users, such as user work queues, preferences, and so on. Runtime Configuration Service provides functionality for managing metadata used in the task service runtime environment. Evidence Store Service supports management of digitally-signed workflow tasks. BPEL processes use the Task Service to assign tasks to users. More specifically, tasks can be assigned to: Users: Users are defined in an identity store configured with the SOA infrastructure. Groups: Groups contain individual users, which can claim a task and act upon it. Application roles: Used to logically group users and other roles. These roles are application specific and are not stored in the identity store. Assigning tasks to groups or roles is more flexible, as every user in a certain group (role) can review the task to complete it. Oracle SOA Suite 11g provides three methods for assigning users, groups, and application roles to tasks: Static assignment: Static users, groups, or application roles can be assigned at design time. Dynamic assignment: We can define an XPath expression to determine the task participant at runtime. Rule-based assignment: We can create a list of participants with complex expressions. Once the user has completed the task, the BPEL process receives a callback from the Task Service with the result of the user action. The BPEL process continues to execute. The Oracle Workflow component provides several possibilities regarding how users can review the tasks that have been assigned to them, and take the corresponding actions. The most straightforward approach is to use the Oracle BPM Worklist application. This application comes with Oracle SOA Suite 11g and allows users to review the tasks, to see the task details, and to select the decision to complete the task. If the Oracle BPM Worklist application is not appropriate, we can develop our own user interface in Java (using JSP, JSF, Swing, and so on) or almost any other environment that supports Web Services (such as .NET for example). In this respect, the Workflow service is very flexible and we can use a portal, such as Oracle Portal, a web application, or almost any other application to review the tasks. The third possibility is to use e-mail for task reviews. We use e-mails over the Notification service. 
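To make the task-review lifecycle described above a little more concrete, the following sketch models the query/claim/complete flow that a custom worklist client has to implement. It is a hypothetical, in-memory stand-in written in Python; the real Oracle Human Workflow services are invoked through their WSDL interfaces or Java APIs, and all class, method, and user names below are illustrative only.

# Hypothetical, in-memory stand-in for the worklist lifecycle described above.
# The real Task Query Service and Task Service are called over WSDL/SOAP or
# Oracle's Java APIs; this only illustrates the query/claim/complete flow.

class Task(object):
    def __init__(self, task_id, title, assignees):
        self.task_id = task_id
        self.title = title
        self.assignees = set(assignees)  # users, groups, or application roles
        self.claimed_by = None
        self.outcome = None              # for example, APPROVE or REJECT

class WorklistClient(object):
    """Minimal sketch of a custom worklist client."""
    def __init__(self, tasks):
        self.tasks = {t.task_id: t for t in tasks}

    def query_tasks(self, user):
        # Task Query Service: list open tasks assigned to this user.
        return [t for t in self.tasks.values()
                if user in t.assignees and t.outcome is None]

    def claim(self, task_id, user):
        task = self.tasks[task_id]
        if task.claimed_by is None and user in task.assignees:
            task.claimed_by = user
        return task

    def complete(self, task_id, user, outcome):
        # Task Service: completing the task is what would trigger the callback
        # to the waiting BPEL process instance.
        task = self.tasks[task_id]
        if task.claimed_by == user:
            task.outcome = outcome
        return task

if __name__ == "__main__":
    client = WorklistClient([Task(1, "Approve travel request", ["jcooper"])])
    print([t.title for t in client.query_tasks("jcooper")])
    client.claim(1, "jcooper")
    print(client.complete(1, "jcooper", "APPROVE").outcome)

Whatever technology the client is written in, it is this narrow set of operations (query, claim, complete or fail) that the Workflow services expose, which is why almost any Web Services-capable environment can be used to build the task user interface.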
Workflow patterns

To simplify the development of workflows, Oracle SOA Suite 11g provides a library of workflow patterns (participant types). Workflow patterns define typical scenarios of human interactions with BPEL processes. The following participant types are supported:

- Single approver: Used when a participant maps to a user, group, or role.
- Parallel: Used if multiple users have to act in parallel (for example, if multiple users have to provide their opinion or vote). The percentage of required user responses can be specified.
- Serial: Used if multiple users have to act in a sequence. A management chain or a list of users can be specified.
- FYI (For Your Information): Used if a user only needs to be notified about a task; a user response is not required.

With these, we can realize various workflow patterns, such as:

- Simple workflow: Used if a single user action is required, such as a confirmation or a decision. A timeout can also be specified. Simple workflow has two extension patterns:
  - Escalation: Provides the ability to escalate the task to another user or role if the original user does not complete the task in the specified amount of time.
  - Renewal: Provides the ability to extend the timeout if the user does not complete the task in the specified time.
- Sequential workflow: Used if multiple users have to act in a sequence. A management chain or a list of users can be specified. Sequential workflow has one extension pattern:
  - Escalation: Same functionality as above.
- Parallel workflow: Used if multiple users have to act in parallel (for example, if multiple users have to provide their opinion or vote). The percentage of required user responses can be specified. This pattern has one extension pattern:
  - Final reviewer: Used when a final reviewer has to act after the parallel participants have provided their feedback.
- Ad-hoc (dynamic) workflow: Used to assign the task to one user, who can then route it to another user. The task is completed when a user does not route it any further.
- FYI workflow: Used if a user only needs to be notified about a task; a user response is not required.
- Task continuation: Used to build complex workflow patterns as a chain of the simple patterns described above.
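The "percentage of required user responses" used by the parallel pattern is easy to illustrate outside of any particular engine. The following sketch is plain Python, not Oracle SOA Suite code; it only mirrors the idea of closing a group vote once enough participants have responded, with the function name and outcome values chosen for illustration.

# Illustrative only: evaluate the outcome of a parallel (group vote) task given
# the responses collected so far and the required response percentage.

def evaluate_parallel_vote(responses, total_voters, required_pct):
    """responses: mapping of user -> 'APPROVE' or 'REJECT'."""
    if total_voters == 0:
        return None
    responded_pct = 100.0 * len(responses) / total_voters
    if responded_pct < required_pct:
        return None  # not enough responses yet; the task stays open
    approvals = sum(1 for outcome in responses.values() if outcome == "APPROVE")
    return "APPROVE" if approvals > len(responses) / 2.0 else "REJECT"

if __name__ == "__main__":
    votes = {"alice": "APPROVE", "bob": "APPROVE", "carol": "REJECT"}
    # 3 of 4 voters have responded (75%); with a 60% requirement the vote closes.
    print(evaluate_parallel_vote(votes, total_voters=4, required_pct=60))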


Python Testing: Coverage Analysis

Packt
01 Jun 2011
13 min read
Python Testing Cookbook: over 70 simple but incredibly effective recipes for taking control of automated testing using powerful Python testing tools.

Introduction

A coverage analyzer can be used while running a system in production, but what are the pros and cons of using it this way? What about using a coverage analyzer when running test suites? What benefits would this approach provide compared to checking systems in production?

Coverage helps us to see whether we are adequately testing our system, but it must be viewed with a certain amount of skepticism. Even if we achieve 100 percent coverage, meaning every line of our system was exercised, this in no way guarantees that we have no bugs. A quick example: suppose the code we write processes the return value of a system call. What if there are three possible values, but we only handle two of them? We may write two test cases covering our handling of them, and this could certainly achieve 100 percent statement coverage. However, it doesn't mean we have handled the third possible return value, leaving us with a potentially undiscovered bug. (A short illustration of this pitfall appears just before the first step of the recipe below.) Also note that 100 percent statement coverage is not the same thing as 100 percent condition coverage; the kind of coverage we are planning to target should be clear.

Another key point is that not all testing is aimed at bug fixing. Another key purpose is to make sure that the application meets our customer's needs. This means that, even if we have 100 percent code coverage, we can't guarantee that we are covering all the scenarios expected by our users. This is the difference between 'building it right' and 'building the right thing'.

In this article, we will explore various recipes to build a network management application, run coverage tools, and harvest the results. We will discuss how coverage can introduce noise and show us more than we need to know, as well as introduce performance issues when it instruments our code. We will also see how to trim out information we don't need in order to get a concise, targeted view of things.

This article uses several third-party tools in many recipes. Spring Python (http://springpython.webfactional.com) contains many useful abstractions. The one used in this article is its DatabaseTemplate, which offers easy ways to write SQL queries and updates without having to deal with Python's verbose API. Install it by typing pip install springpython. Install the coverage tool by typing pip install coverage. This may fail because other plugins may install an older version of coverage. If so, uninstall coverage by typing pip uninstall coverage, and then install it again with pip install coverage. Nose is a useful test runner.

Building a network management application

For this article, we will build a very simple network management application, and then write different types of tests and check their coverage. This network management application is focused on digesting alarms, also referred to as network events. This is different from certain other network management tools that focus on gathering SNMP alarms from devices. For reasons of simplicity, this correlation engine doesn't contain complex rules; instead, it contains a simple mapping of network events onto equipment and customer service inventory. We'll explore this in the next few paragraphs as we dig through the code.

How to do it...

With the following steps, we will build a simple network management application.
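Before the first step, here is a minimal, hypothetical illustration of the statement-coverage pitfall mentioned in the introduction: two tests exercise every line of the handler, yet the third possible return value is silently mishandled. The function names are made up for this sketch and are not part of the recipe.

# Hypothetical example: the system call below can return 0, 1, or 2,
# but the handler only distinguishes two of the three values.

def fake_system_call(value):
    return value  # stands in for a real system call

def handle_status(value):
    status = fake_system_call(value)
    if status == 0:
        return "ok"
    return "error"  # silently treats status 2 ("retry") as an error as well

# Two tests, 100 percent statement coverage, and still a latent bug:
assert handle_status(0) == "ok"
assert handle_status(1) == "error"
# handle_status(2) should arguably return "retry", but nothing forces us to notice.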
Create a file called network.py to store the network application. Create a class definition to represent a network event. class Event(object): def __init__(self, hostname, condition, severity, event_time): self.hostname = hostname self.condition = condition self.severity = severity self.id = -1 def __str__(self): return "(ID:%s) %s:%s - %s" % (self.id, self.hostname, self.condition, self.severity) hostname: It is assumed that all network alarms originate from pieces of equipment that have a hostname. condition: Indicates the type of alarm being generated. Two different alarming conditions can come from the same device. severity: 1 indicates a clear, green status; and 5 indicates a faulty, red status. id: The primary key value used when the event is stored in a database. Create a new file called network.sql to contain the SQL code. Create a SQL script that sets up the database and adds the definition for storing network events. CREATE TABLE EVENTS ( ID INTEGER PRIMARY KEY, HOST_NAME TEXT, SEVERITY INTEGER, EVENT_CONDITION TEXT ); Code a high-level algorithm where events are assessed for impact to equipment and customer services and add it to network.py. from springpython.database.core import* class EventCorrelator(object): def __init__(self, factory): self.dt = DatabaseTemplate(factory) def __del__(self): del(self.dt) def process(self, event): stored_event, is_active = self.store_event(event) affected_services, affected_equip = self.impact(event) updated_services = [ self.update_service(service, event) for service in affected_services] updated_equipment = [ self.update_equipment(equip, event) for equip in affected_equip] return (stored_event, is_active, updated_services, updated_equipment) The __init__ method contains some setup code to create a DatabaseTemplate. This is a Spring Python utility class used for database operations. See http://static.springsource.org/spring- python/1.2.x/sphinx/html/dao.html for more details. We are also using sqlite3 as our database engine, since it is a standard part of Python. The process method contains some simple steps to process an incoming event. We first need to store the event in the EVENTS table. This includes evaluating whether or not it is an active event, meaning that it is actively impacting a piece of equipment. Then we determine what equipment and what services the event impacts. Next, we update the affected services by determining whether it causes any service outages or restorations. Then we update the affected equipment by determining whether it fails or clears a device. Finally, we return a tuple containing all the affected assets to support any screen interfaces that could be developed on top of this. Implement the store_event algorithm. def store_event(self, event): try: max_id = self.dt.query_for_int("""select max(ID) from EVENTS""") except DataAccessException, e: max_id = 0 event.id = max_id+1 self.dt.update("""insert into EVENTS (ID, HOST_NAME, SEVERITY, EVENT_CONDITION) values (?,?,?,?)""", (event.id, event.hostname, event.severity, event.condition)) is_active = self.add_or_remove_from_active_events(event) return (event, is_active) This method stores every event that is processed. This supports many things including data mining and post mortem analysis of outages. It is also the authoritative place where other event-related data can point back using a foreign key. The store_event method looks up the maximum primary key value from the EVENTS table. It increments it by one. It assigns it to event.id. 
It then inserts it into the EVENTS table. Next, it calls a method to evaluate whether the event should be added to the list of active events, or whether it clears out existing active events. Active events are events that are actively keeping a piece of equipment in a failed (non-clear) state. Finally, it returns a tuple containing the event and whether or not it was classified as an active event.

For a more sophisticated system, some sort of partitioning solution needs to be implemented. Querying against a table containing millions of rows is very inefficient. However, this is for demonstration purposes only, so we will skip scaling as well as performance and security.

Implement the method to evaluate whether to add or remove active events.

def add_or_remove_from_active_events(self, event):
    """Active events are current ones that cause equipment
    and/or services to be down."""
    if event.severity == 1:
        self.dt.update("""delete from ACTIVE_EVENTS
                          where EVENT_FK in (select ID from EVENTS
                                             where HOST_NAME = ?
                                             and EVENT_CONDITION = ?)""",
                       (event.hostname, event.condition))
        return False
    else:
        self.dt.execute("""insert into ACTIVE_EVENTS (EVENT_FK)
                           values (?)""",
                        (event.id,))
        return True

When a device fails, it sends a severity 5 event. This is an active event, and in this method a row is inserted into the ACTIVE_EVENTS table, with a foreign key pointing back to the EVENTS table. Then we return True, indicating this is an active event.

Add the table definition for ACTIVE_EVENTS to the SQL script.

CREATE TABLE ACTIVE_EVENTS (
    ID INTEGER PRIMARY KEY,
    EVENT_FK,
    FOREIGN KEY(EVENT_FK) REFERENCES EVENTS(ID)
);

This table makes it easy to query what events are currently causing equipment failures. Later, when the failing condition on the device clears, it sends a severity 1 event. This means that severity 1 events are never active, since they aren't contributing to a piece of equipment being down. In our previous method, we search for any active events that have the same hostname and condition, and delete them. Then we return False, indicating this is not an active event.

Write the method that evaluates the services and pieces of equipment that are affected by the network event.

def impact(self, event):
    """Look up whether this event has an impact on either equipment or services."""
    affected_equipment = self.dt.query(
        """select * from EQUIPMENT
           where HOST_NAME = ?""",
        (event.hostname,),
        rowhandler=DictionaryRowMapper())
    affected_services = self.dt.query(
        """select SERVICE.*
           from SERVICE
           join SERVICE_MAPPING SM on (SERVICE.ID = SM.SERVICE_FK)
           join EQUIPMENT on (SM.EQUIPMENT_FK = EQUIPMENT.ID)
           where EQUIPMENT.HOST_NAME = ?""",
        (event.hostname,),
        rowhandler=DictionaryRowMapper())
    return (affected_services, affected_equipment)

We first query the EQUIPMENT table to see if event.hostname matches anything. Next, we join the SERVICE table to the EQUIPMENT table through a many-to-many relationship tracked by the SERVICE_MAPPING table. Any service that is related to the equipment that the event was reported on is captured. Finally, we return a tuple containing both the list of equipment and the list of services that are potentially impacted.

Spring Python provides a convenient query operation that returns a list of objects mapped to every row of the query. It also provides an out-of-the-box DictionaryRowMapper that converts each row into a Python dictionary, with the keys matching the column names.

Add the table definitions to the SQL script for EQUIPMENT, SERVICE, and SERVICE_MAPPING.
CREATE TABLE EQUIPMENT (
    ID INTEGER PRIMARY KEY,
    HOST_NAME TEXT UNIQUE,
    STATUS INTEGER
);

CREATE TABLE SERVICE (
    ID INTEGER PRIMARY KEY,
    NAME TEXT UNIQUE,
    STATUS TEXT
);

CREATE TABLE SERVICE_MAPPING (
    ID INTEGER PRIMARY KEY,
    SERVICE_FK,
    EQUIPMENT_FK,
    FOREIGN KEY(SERVICE_FK) REFERENCES SERVICE(ID),
    FOREIGN KEY(EQUIPMENT_FK) REFERENCES EQUIPMENT(ID)
);

Write the update_service method that stores or clears service-related events and then updates the service's status based on the remaining active events.

def update_service(self, service, event):
    if event.severity == 1:
        self.dt.update("""delete from SERVICE_EVENTS
                          where EVENT_FK in (select ID from EVENTS
                                             where HOST_NAME = ?
                                             and EVENT_CONDITION = ?)""",
                       (event.hostname, event.condition))
    else:
        self.dt.execute("""insert into SERVICE_EVENTS
                           (EVENT_FK, SERVICE_FK) values (?,?)""",
                        (event.id, service["ID"]))

    try:
        max = self.dt.query_for_int(
            """select max(EVENTS.SEVERITY)
               from SERVICE_EVENTS SE
               join EVENTS on (EVENTS.ID = SE.EVENT_FK)
               join SERVICE on (SERVICE.ID = SE.SERVICE_FK)
               where SERVICE.NAME = ?""",
            (service["NAME"],))
    except DataAccessException, e:
        max = 1

    if max > 1 and service["STATUS"] == "Operational":
        service["STATUS"] = "Outage"
        self.dt.update("""update SERVICE
                          set STATUS = ?
                          where ID = ?""",
                       (service["STATUS"], service["ID"]))

    if max == 1 and service["STATUS"] == "Outage":
        service["STATUS"] = "Operational"
        self.dt.update("""update SERVICE
                          set STATUS = ?
                          where ID = ?""",
                       (service["STATUS"], service["ID"]))

    if event.severity == 1:
        return {"service":service, "is_active":False}
    else:
        return {"service":service, "is_active":True}

Service-related events are active events related to a service. A single event can be related to many services. For example, what if we were monitoring a wireless router that provided Internet service to a lot of users, and it reported a critical error? This one event would be mapped as an impact to all the end users. When a new active event is processed, it is stored in SERVICE_EVENTS for each related service. Then, when a clearing event is processed, the previous service event must be deleted from the SERVICE_EVENTS table.

Add the table definition for SERVICE_EVENTS to the SQL script.

CREATE TABLE SERVICE_EVENTS (
    ID INTEGER PRIMARY KEY,
    SERVICE_FK,
    EVENT_FK,
    FOREIGN KEY(SERVICE_FK) REFERENCES SERVICE(ID),
    FOREIGN KEY(EVENT_FK) REFERENCES EVENTS(ID)
);

It is important to recognize that deleting an entry from SERVICE_EVENTS doesn't mean that we delete the original event from the EVENTS table. Instead, we are merely indicating that the original active event is no longer active and does not impact the related service.

Prepend the entire SQL script with drop statements, making it possible to run the script for several recipes.

DROP TABLE IF EXISTS SERVICE_MAPPING;
DROP TABLE IF EXISTS SERVICE_EVENTS;
DROP TABLE IF EXISTS ACTIVE_EVENTS;
DROP TABLE IF EXISTS EQUIPMENT;
DROP TABLE IF EXISTS SERVICE;
DROP TABLE IF EXISTS EVENTS;

Append the SQL script used for database setup with inserts to preload some equipment and services.
INSERT into EQUIPMENT (ID, HOST_NAME, STATUS) values (1, 'pyhost1', 1); INSERT into EQUIPMENT (ID, HOST_NAME, STATUS) values (2, 'pyhost2', 1); INSERT into EQUIPMENT (ID, HOST_NAME, STATUS) values (3, 'pyhost3', 1); INSERT into SERVICE (ID, NAME, STATUS) values (1, 'service-abc', 'Operational'); INSERT into SERVICE (ID, NAME, STATUS) values (2, 'service-xyz', 'Outage'); INSERT into SERVICE_MAPPING (SERVICE_FK, EQUIPMENT_FK) values (1,1); INSERT into SERVICE_MAPPING (SERVICE_FK, EQUIPMENT_FK) values (1,2); INSERT into SERVICE_MAPPING (SERVICE_FK, EQUIPMENT_FK) values (2,1); INSERT into SERVICE_MAPPING (SERVICE_FK, EQUIPMENT_FK) values (2,3); Finally, write the method that updates equipment status based on the current active events. def update_equipment(self, equip, event): try: max = self.dt.query_for_int( """select max(EVENTS.SEVERITY) from ACTIVE_EVENTS AE join EVENTS on (EVENTS.ID = AE.EVENT_FK) where EVENTS.HOST_NAME = ?""", (event.hostname,)) except DataAccessException: max = 1 if max != equip["STATUS"]: equip["STATUS"] = max self.dt.update("""update EQUIPMENT set STATUS = ?""", (equip["STATUS"],)) return equip Here, we need to find the maximum severity from the list of active events for a given host name. If there are no active events, then Spring Python raises a DataAccessException and we translate that to a severity of 1. We check if this is different from the existing device's status. If so, we issue a SQL update. Finally, we return the record for the device, with its status updated appropriately. How it works... This application uses a database-backed mechanism to process incoming network events, and checks them against the inventory of equipment and services to evaluate failures and restorations. Our application doesn't handle specialized devices or unusual types of services. This real-world complexity has been traded in for a relatively simple application, which can be used to write various test recipes. Events typically map to a single piece of equipment and to zero or more services. A service can be thought of as a string of equipment used to provide a type of service to the customer. New failing events are considered active until a clearing event arrives. Active events, when aggregated against a piece of equipment, define its current status. Active events, when aggregated against a service, defines the service's current status.  
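Since the rest of the article is about running coverage tools against this application, it helps to have a small driver that exercises the correlator end to end. The script below is a hypothetical sketch, not part of the recipe: it assumes the network.py and network.sql files created above, uses Spring Python's SQLite connection factory (whose import path and constructor argument may vary between versions), and invents the file name recipe.db and the condition string "line fault" purely for illustration.

# Hypothetical driver script (recipe_driver.py) that exercises the correlator.
# Run it under the coverage tool, for example:
#   coverage run recipe_driver.py
#   coverage report -m

import sqlite3

# Import path/constructor may vary by Spring Python version.
from springpython.database.factory import Sqlite3ConnectionFactory

from network import Event, EventCorrelator

DB_NAME = "recipe.db"  # illustrative file name

def setup_database():
    # Replay network.sql (drop/create/insert statements) into a fresh database.
    with open("network.sql") as script:
        connection = sqlite3.connect(DB_NAME)
        connection.executescript(script.read())
        connection.close()

def drive_some_events():
    factory = Sqlite3ConnectionFactory(db=DB_NAME)
    correlator = EventCorrelator(factory)
    # A fault followed by a clear against the same host and condition.
    for severity in (5, 1):
        event = Event("pyhost1", "line fault", severity, None)
        stored, is_active, services, equipment = correlator.process(event)
        print("%s active=%s" % (stored, is_active))

if __name__ == "__main__":
    setup_database()
    drive_some_events()

Running such a driver under coverage run and then inspecting coverage report -m gives a first, unfiltered view of which lines of network.py the events actually exercised.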


Apache Cassandra: Libraries and Applications

Packt
29 Jun 2011
6 min read
  Cassandra High Performance Cookbook Over 150 recipes to design and optimize large scale Apache Cassandra deployments Introduction Cassandra's popularity has led to several pieces of software that have developed around it. Some of these are libraries and utilities that make working with Cassandra easier. Other software applications have been built completely around Cassandra to take advantage of its scalability. This article describes some of these utilities. Building Cassandra from source The Cassandra code base is active and typically has multiple branches. It is a good practice to run official releases, but at times it may be necessary to use a feature or a bug fix that has not yet been released. Building and running Cassandra from source allows for a greater level of control of the environment. Having the source code, it is also possible to trace down and understand the context or warning or error messages you may encounter. This recipe shows how to checkout Cassandra code from Subversion (SVN) and build it. How to do it... Visit http://svn.apache.org/repos/asf/cassandra/branches with a web browser. Multiple sub folders will be listed: /cassandra-0.5/ /cassandra-0.6/ Each folder represents a branch. To check out the 0.6 branch: $ svn co http://svn.apache.org/repos/asf/cassandra/branches/ cassandra-0.6/ Trunk is where most new development happens. To check out trunk: $ svn co http://svn.apache.org/repos/asf/cassandra/trunk/ To build the release tar, move into the folder created and run: $ ant release This creates a release tar in build/apache-cassandra-0.6.5-bin.tar.gz, a release jar, and an unzipped version in build/dist. How it works... Subversion (SVN) is a revision control system commonly used to manage software projects. Subversion repositories are commonly accessed via the HTTP protocol. This allows for simple browsing. This recipe is using the command-line client to checkout code from the repository. Building the contrib stress tool for benchmarking Stress is an easy-to-use command-line tool for stress testing and benchmarking Cassandra. It can be used to generate a large quantity of requests in short periods of time, and it can also be used to generate a large amount of data to test performance with. This recipe shows how to build it from the Cassandra source. Getting ready Before running this recipe, complete the Building Cassandra from source recipe discussed above. How to do it... From the source directory, run ant. Then, change to the contrib/stress directory and run ant again. $ cd <cassandra_src> $ ant jar $ cd contrib/stress $ ant jar ... BUILD SUCCESSFUL Total time: 0 seconds How it works... The build process compiles code into the stress.jar file. Inserting and reading data with the stress tool The stress tool is a multithreaded load tester specifically for Cassandra. It is a command-line program with a variety of knobs that control its operation. This recipe shows how to run the stress tool. Before you begin... See the previous recipe, Building the contrib stress tool for benchmarking before doing this recipe. How to do it... Run the <cassandra_src>/bin/stress command to execute 10,000 insert operations. $ bin/stress -d 127.0.0.1,127.0.0.2,127.0.0.3 -n 10000 --operation INSERT Keyspace already exists. total,interval_op_rate,interval_key_rate,avg_latency,elapsed_time 10000,1000,1000,0.0201764,3 How it works... The stress tool is an easy way to do load testing against a cluster. It can insert or read data and report on the performance of those operations. 
This is also useful in staging environments where significant volumes of disk data are needed to test at scale. Generating data is also useful for practicing administration techniques such as joining new nodes to a cluster.

There's more...

It is best to run the load testing tool on a different node than the system being tested, and to remove anything else that causes unnecessary contention.

Running the Yahoo! Cloud Serving Benchmark

The Yahoo! Cloud Serving Benchmark (YCSB) provides a benchmarking basis for comparison between NoSQL systems. It works by generating random workloads with varying portions of insert, get, delete, and other operations. It then uses multiple threads to execute these operations. This recipe shows how to build and run the YCSB.

Information on the YCSB can be found here:
http://research.yahoo.com/Web_Information_Management/YCSB
https://github.com/brianfrankcooper/YCSB/wiki/
https://github.com/joaquincasares/YCSB

How to do it...

Use the git tool to obtain the source code.

$ git clone git://github.com/brianfrankcooper/YCSB.git

Build the code using ant.

$ cd YCSB/
$ ant

Copy the JAR files from your <cassandra_home>/lib directory to the YCSB classpath.

$ cp $HOME/apache-cassandra-0.7.0-rc3-1/lib/*.jar db/cassandra-0.7/lib/
$ ant dbcompile-cassandra-0.7

Use the Cassandra CLI to create the required keyspace and column family.

[default@unknown] create keyspace usertable with replication_factor=3;
[default@unknown] use usertable;
[default@unknown] create column family data;

Create a small shell script run.sh to launch the test with different parameters.

CP=build/ycsb.jar
for i in db/cassandra-0.7/lib/*.jar ; do
    CP=$CP:${i}
done
java -cp $CP com.yahoo.ycsb.Client -t -db com.yahoo.ycsb.db.CassandraClient7 \
     -P workloads/workloadb -p recordcount=10 \
     -p hosts=127.0.0.1,127.0.0.2 -p operationcount=10 -s

Run the script and pipe the output to the more command to control pagination:

$ sh run.sh | more
YCSB Client 0.1
Command line: -t -db com.yahoo.ycsb.db.CassandraClient7 -P workloads/workloadb -p recordcount=10 -p hosts=127.0.0.1,127.0.0.2 -p operationcount=10 -s
Loading workload...
Starting test.
data
0 sec: 0 operations;
0 sec: 10 operations; 64.52 current ops/sec; [UPDATE AverageLatency(ms)=30] [READ AverageLatency(ms)=3]
[OVERALL], RunTime(ms), 152.0
[OVERALL], Throughput(ops/sec), 65.78947368421052
[UPDATE], Operations, 1
[UPDATE], AverageLatency(ms), 30.0
[UPDATE], MinLatency(ms), 30
[UPDATE], MaxLatency(ms), 30
[UPDATE], 95thPercentileLatency(ms), 30
[UPDATE], 99thPercentileLatency(ms), 30
[UPDATE], Return=0, 1

How it works...

YCSB has many configuration knobs. An important configuration option is -P, which chooses the workload. The workload describes the proportions of read, write, and update operations. The -p option overrides options from the workload file. YCSB is designed to test performance as the number of nodes grows and shrinks, or scales out.

There's more...

Cassandra has historically been one of the strongest performers in the YCSB.
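Since the whole point of YCSB is comparing runs and systems, it can help to turn the summary lines shown above into structured data. The following small Python helper is not part of YCSB; it simply parses lines of the form "[OVERALL], Throughput(ops/sec), 65.78" into a nested dictionary so that several runs can be compared programmatically.

# Illustrative helper, not part of YCSB: parse the summary section of a run.

def parse_ycsb_summary(lines):
    results = {}
    for line in lines:
        line = line.strip()
        if not line.startswith("["):
            continue  # skip banner and progress lines
        parts = [p.strip() for p in line.split(",")]
        if len(parts) < 3:
            continue
        section = parts[0].strip("[]")  # e.g. OVERALL, UPDATE, READ
        metric = parts[1]               # e.g. Throughput(ops/sec)
        try:
            value = float(parts[2])
        except ValueError:
            value = parts[2]
        results.setdefault(section, {})[metric] = value
    return results

if __name__ == "__main__":
    sample = [
        "[OVERALL], RunTime(ms), 152.0",
        "[OVERALL], Throughput(ops/sec), 65.78947368421052",
        "[UPDATE], AverageLatency(ms), 30.0",
    ]
    summary = parse_ycsb_summary(sample)
    print(summary["OVERALL"]["Throughput(ops/sec)"])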


Security Settings in Salesforce

Packt
11 Sep 2014
10 min read
In the article by Rakesh Gupta and Sagar Pareek, authors of Salesforce.com Customization Handbook, we will discuss Organization-Wide Default (OWD) and various ways to share records. We will also discuss the various security settings in Salesforce. The following topics will be covered in this article: (For more resources related to this topic, see here.) Concepts of OWD The sharing rule Field-Level Security and its effect on data visibility Setting up password polices Concepts of OWD Organization-Wide Default is also known as OWD. This is the base-level sharing and setting of objects in your organization. By using this, you can secure your data so that other users can't access data that they don't have access to. The following diagram shows the basic database security in Salesforce. In this, OWD plays a key role. It's a base-level object setting in the organization, and you can't go below this. So here, we will discuss OWD in Salesforce. Let's start with an example. Sagar Pareek is the system administrator in Appiuss. His manager Sara Barellies told him that the user who has created or owns the account records as well as the users that are higher in the role hierarchy can access the records. Here, you have to think first about OWD because it is the basic thing to restrict object-level access in Salesforce. To achieve this, Sagar Pareek has to set Organization-Wide Default for the account object to private. Setting up OWD To change or update OWD for your organization, follow these steps: Navigate to Setup | Administer | Security Controls | Sharing Settings. From the Manage sharing settings for drop-down menu, select the object for which you want to change OWD. Click on Edit. From the Default Access drop-down menu, select an access as per your business needs. For the preceding scenario, select Private to grant access to users who are at a high position in the role hierarchy, by selecting Grant access using hierarchy. For standard objects, it is automatically selected, and for custom objects, you have the option to select it. Click on Save. The following table describes the various types of OWD access and their respective description: OWD access Description Private Only the owner of the records and the higher users in the role hierarchy are able to access and report on the records. Public read only All users can view the records, but only the owners and the users higher in the role hierarchy can edit them. Public read/write All users can view, edit, and report on all records. Public read/write/ transfer All users can view, edit, transfer, and report on all records. This is only available for case and lead objects. Controlled by parent This says that access on the child object's records is controlled by the parent. Public full access This is available for campaigns. In this, all users can view, edit, transfer, and report on all records.   You can assign this access to campaigns, accounts, cases, contacts, contracts, leads, opportunities, users, and custom objects. This feature is only available for Professional, Enterprise, Unlimited, Performance, Developer, and Database Editions. Basic OWD settings for objects Whenever you buy your Salesforce Instance, it comes with the predefined OWD settings for standard objects. You can change them anytime by following the path Setup | Administer | Security Controls | Sharing Settings. 
The following table describes the default access to objects: Object Default access Account Public read/write Activity Private Asset Public read/write Campaign Public full access Case Public read/write transfer Contact Controlled by parent (that is, account) Contract Public read/write Custom Object Public read/write Lead Public read/write transfer Opportunity Public read only Users Public read only and private for external users Let's continue with another example. Sagar Pareek is the system administrator in Appiuss. His manager Sara Barellies told him that only the users who created the record for the demo object can access the records, and no one else can have the power to view/edit/delete it. To do this, you have to change OWD for a demo object to private, and don't select Grant Access Using Hierarchy. When you select the Grant Access Using Hierarchy field, it provides access to people who are above in the role hierarchy. Sharing Rule To open the record-level access for a group of users, roles, or roles and subordinates beyond OWD, you can use Sharing Rule. Sharing Rule is used for open access; you can't use Sharing Rule to restrict access. Let's start with an example where Sagar Pareek is the system administrator in Appiuss. His manager Sara Barellies wants every user in the organization to be able to view the account records but only a group of users (all the users do not belong to the same role or have the same profile) can edit it. To solve the preceding business requirement, you have to follow these steps: First, change the OWD account to Public Read Only by following the path Setup | Administer | Security Controls | Sharing Settings, so all users from the organization can view the account records. Now, create a public group Account access and add users as per the business requirement. To create a public group, follow the path Name | Setup | Administration Setup | Manage Users | Public Groups. Finally, you have to create a sharing rule. To create sharing rules, follow the path Setup | Administer | Security Controls | Sharing Settings, and navigate to the list related to Account Sharing Rules: Click on New, and it will redirect you to a new window where you have to enter Label, Rule Name, and Description (always write a description so that other administrators or developers get to know why this rule was created). Then, for Rule Type, select Based on criteria. Select the criteria by which records are to be shared and create a criterion so that all records fall under it (such as Account Name not equal to null). Select Public Groups in the Share with option and your group name. Select the level of access for the users. Here, select Read/Write from the drop-down menu of Default Account, Contract and Asset Access. Finally, it will look like the following screenshot: Types of Sharing Rules What we did to solve the preceding business requirement is called Sharing Rule. There is a limitation on Sharing Rules; you can write only 50 Sharing Rules (criteria-based) and 300 Sharing Rules (both owner- and criteria-based) per object. The following are the types of Sharing Rules in Salesforce: Manual Sharing: Only when OWD is set to Private or Public Read for any object will a sharing button be enabled in the record detail page. Record owners or users, who are at a higher position in role and hierarchy, can share records with other users. For the last business use case, we changed the account OWD to Public Read Only. 
If you navigate to the Account records detail page, you can see the Sharing button: Click on the Sharing button and it will redirect you to a new window. Now, click on Add and you are ready to share records with the following: Public groups Users Roles Roles and subordinates Select the access type for each object and click on Save. It will look like what is shown in the following screenshot: The Lead and Case Sharing buttons will be enabled when OWD is Private, Public Read Only, and Public Read/Write. Apex Sharing: When all other Sharing Rules can't fulfill your requirements, then you can use the Apex Sharing method to share records. It gives you the flexibility to handle complex sharing. Apex-managed sharing is a type of programmatic sharing that allows you to define a custom sharing reason to associate with your programmatic share. Standard Salesforce objects support programmatic sharing while custom objects support Apex-managed sharing. Field-Level Security and its effect on data visibility Data on fields is very important for any organization. They want to show some data to the field-specific users. In Salesforce, you can use Field-Level Security to make fields hidden or read-only for a specific profile. There are three ways in Salesforce to set Field-Level Security: From an object-field From a profile Field accessibility From an object-field Let's start with an example where Sagar Pareek is the system administrator in Appiuss. His manager Sara Barellies wants to create a field (phone) on an account object and make this field read-only for all users and also allowing system administrators to edit the field. To solve this business requirement, follow these steps: Navigate to Setup | Customize | Account | Fields and then click on the Phone (it's a hyperlink) field. It will redirect you to the detail page of the Phone field; you will see a page like the following screenshot: Click on the Set Field-Level Security button, and it will redirect you to a new page where you can set the Field-Level Security. Select Visible and Read-Only for all the profiles other than that of the system administrator. For the system administrator, select only Visible. Click on Save. If you select Read-Only, the visible checkbox will automatically get selected. From a profile Similarly, in Field-Level settings, you can also achieve the same results from a profile. Let's follow the preceding business use case to be achieved through the profile. To do this, follow these steps: Navigate to Setup | Administer | Manage Users | Profile, go to the System Administrator profile, and click on it. Now, you are on the profile detail page. Navigate to the Field-Level Security section. It will look like the following screenshot: Click on the View link beside the Account option. It will open Account Field-Level Security for the profile page. Click on the Edit button and edit Field-Level Security as we did in the previous section. Field accessibility We can achieve the same outcome by using field accessibility. To do this, follow these steps: Navigate to Setup | Administer | Security Controls | Field Accessibility. Click on the object name; in our case, it's Account. It will redirect you to a new page where you can select View by Fields or View by Profiles: In our case, select View by Fields and then select the field Phone. Click on the editable link as shown in the following screenshot: It will open the Access Settings for Account Field page, where you can edit the Field-Level Security. Once done, click on Save. 
Setting up password policies For security purposes, Salesforce provides an option to set password policies for the organization. Let's start with an example. Sagar Pareek, the system administrator of an organization, has decided to create a policy regarding the password for the organization, where the password of each user must be of 10 characters and must be a combination of alphanumeric and special characters. To do this, he will have to follow these steps: Navigate to Setup | Security Controls | Password Policies. It will open the Password Policies setup page: In the Minimum password length field, select 10 characters. In the Password complexity requirement field, select Must mix Alpha, numeric and special characters. Here, you can also decide when the password should expire under the User password expire in option. Enforce the password history under the option enforce password history, and set a password question requirement as well as the number of invalid attempts allowed and the lock-out period. Click on Save. Summary In this article, we have gone through various security setting features available on Salesforce. Starting from OWD, followed by Sharing Rules and Field-Level Security, we also covered password policy concepts. Resources for Article: Further resources on this subject: Introducing Salesforce Chatter [Article] Salesforce CRM Functions [Article] Adding a Geolocation Trigger to the Salesforce Account Object [Article]


Ordered and Generic Tests in Visual Studio 2010

Packt
30 Nov 2010
5 min read
  Software Testing using Visual Studio 2010 Ordered tests The following screenshot shows the list of all the tests. You can see that the tests are independent and there is no link between the tests. We have different types of tests like Unit Test, Web Performance Test, and Load Test under the test project. Let's try to create an ordered test and place some of the dependent tests in an order so that the test execution happens in an order without breaking. Creating an ordered test There are different ways of creating ordered tests similar to the other tests: Select the test project from Solution Explorer, right-click and select Add Ordered Test, and then select ordered test from the list of different types of tests. Save the ordered test by choosing the File | Save option. Select the menu option Test then select New Test..., which opens a dialog with different test types. Select the test type and choose the test project from the Add to Test Project List drop-down and click on OK. Now the ordered test is created under the test project and the ordered test window is shown to select the existing tests from the test project and set the order. The preceding window shows different options for ordering the tests. The first line is the status bar, which shows the number of tests selected for the ordered test. The Select test list to view dropdown has the option to choose the display of tests in the available Test Lists. This dropdown has the default All Loaded Tests, which displays all available tests under the project. The other options in the dropdown are Lists of Tests and Tests Not in a List. The List of Tests will display the test lists created using the Test List Editor. It is easier to include the number of tests grouped together and order them. The next option, Tests Not in a List, displays the available tests, which are not part of any Test Lists. The Available tests list displays all the tests from the test project based on the option chosen in the dropdown. Selected tests contains the tests that are selected from the available tests list to be placed in order. The two right and left arrows are used for selecting and unselecting the tests from the Available tests list to the Selected Tests list. We can also select multiple tests by pressing the Ctrl key and selecting the tests. The up-down arrows on the right of the selected tests list are used for moving up or down the tests and setting the order for the testing in the Selected tests list. The last option, the Continue after failure checkbox at the bottom of the window, is to override the default behavior of the ordered tests, aborting the execution after the failure of any test. If the option Continue after failure is unchecked, and if any test in the order fails, then all remaining tests will get aborted. In case the tests are not dependent, we can check this option and override the default behavior to allow the application to continue running the remaining tests in order. Properties of an ordered test Ordered tests have properties similar to the other test types, in addition to some specific properties. To view the properties, select the ordered test in the Test View or Test List Editor window, right-click and select the Properties option. The Properties dialog box displays the available properties for the ordered test. The preceding screenshot shows that most of the properties are the same as the properties of the other test types. We can associate this test with the TFS work items, iterations, and area. 
Executing an ordered test An ordered test can be run like any other test. Open the Test View window or the Test List Editor and select the ordered test from the list, then right-click and choose the Run Selection option from Test View or Run Checked Tests from the Test List Editor. Once the option is selected, we can see the tests running one after the other in the same order in which they are placed in the ordered test. After the execution of the ordered tests, the Test Results window will show the status of the ordered test. If any of the tests in the list fails, then the ordered test status will be Failed. The summary of statuses of all the tests in the ordered test is shown in the following screenshot in the toolbar. The sample ordered test application had four tests in the ordered tests, but two of them failed and one had an error. Clicking the Test run failed hyperlink in the status bar shows a detailed view of the test run summary: The Test Results window also provides detailed information about the tests run so far. To get these details, choose the test from the Test Results window and then right-click and choose the option, View Test Results Details, which opens the details window and displays the common results information such as Test Name, Result, Duration of the test run, Start Time, End Time, and so on. The details window also displays the status of each and every test run within the ordered test. In addition it displays the duration for each test run, name, owner, and type of test in the list. Even though the second test in the list fails, the other tests continue to execute as if the Continue after failure option was checked.
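The Continue after failure behavior described above is easy to express outside of Visual Studio. The following Python sketch is purely conceptual (it is not MSTest or Visual Studio code, and the test names are invented); it only illustrates the ordering and abort semantics: tests run in the configured order, and a failure either aborts the remaining tests or lets them continue.

# Conceptual illustration only: how an ordered test run behaves with and
# without the "Continue after failure" option.

def run_ordered(tests, continue_after_failure=False):
    """tests: list of (name, callable); returns a list of (name, status)."""
    results = []
    aborted = False
    for name, test in tests:
        if aborted:
            results.append((name, "Aborted"))
            continue
        try:
            test()
            results.append((name, "Passed"))
        except AssertionError:
            results.append((name, "Failed"))
            if not continue_after_failure:
                aborted = True
    return results

def passing():
    pass

def failing():
    assert False, "simulated failure"

if __name__ == "__main__":
    suite = [("unit_test_1", passing), ("web_test_2", failing), ("load_test_3", passing)]
    print(run_ordered(suite))                              # aborts after the failure
    print(run_ordered(suite, continue_after_failure=True)) # remaining tests still run

In both cases the ordered test as a whole would be reported as Failed, because at least one of its constituent tests failed.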

Advanced Data Access Patterns

Packt
11 Aug 2015
25 min read
In this article by Suhas Chatekar, author of the book Learning NHibernate 4, we will dig deeper into that statement and try to understand what those downsides are and what can be done about them. In our attempt to address the downsides of the repository pattern, we will present two data access patterns, namely the specification pattern and the query object pattern. The specification pattern is a general-purpose pattern for effectively filtering in-memory data that has been adopted into the data access layer. Before we begin, let me reiterate – the repository pattern is not a bad or wrong choice in every situation. If you are building a small and simple application involving a handful of entities, then the repository pattern can serve you well. But if you are building complex domain logic with intricate database interaction, then a repository may not do justice to your code. The patterns presented here can be used in both simple and complex applications, and if you feel that a repository is doing the job perfectly, then there is no need to move away from it. (For more resources related to this topic, see here.)

Problems with repository pattern

A lot has been written all over the Internet about what is wrong with the repository pattern. A simple Google search will give you a lot of interesting articles to read and ponder. Let's spend some time trying to understand the problems introduced by the repository pattern.

Generalization

FindAll takes the name of the employee as input, along with some other parameters required for performing the search. When we started putting together a repository, we said that Repository<T> is a common repository class that can be used for any entity. But now FindAll takes a parameter that is only available on Employee, thus locking the implementation of FindAll to the Employee entity only. In order to keep the repository reusable by other entities, we would need to part ways with the common Repository<T> class and implement a more specific EmployeeRepository class with Employee-specific querying methods. This fixes the immediate problem but introduces another one. The new EmployeeRepository breaks the contract offered by IRepository<T>, as the FindAll method cannot be pushed onto the IRepository<T> interface. We would need to add a new interface, IEmployeeRepository. Do you notice where this is going? You would end up implementing a lot of repository classes with complex inheritance relationships between them. While this may seem to work, I have found that there are better ways of solving this problem.

Unclear and confusing contract

What happens if there is a need to query employees by different criteria for a different business requirement? Say we now need to fetch a single Employee instance by its employee number. Even if we ignore the above issue and are ready to add a repository class per entity, we would need to add a method specific to fetching the Employee instance matching the employee number. This adds another dimension to the code maintenance problem. Imagine how many such methods we would end up adding for a complex domain every time someone needs to query an entity using new criteria. Several methods on the repository contract that query the same entity using different criteria make the contract less clear and more confusing for new developers. Such a pattern also makes it difficult to reuse code, even if two methods are only slightly different from each other.
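To make the problem concrete, the following sketch shows the kind of interface hierarchy this usually leads to. The member names are purely illustrative and are not taken from a specific codebase; the point is only how quickly criterion-specific methods accumulate.

using System.Collections.Generic;

public class Employee
{
    public int Id { get; set; }
    public string EmployeeNumber { get; set; }
    public string Name { get; set; }
}

public interface IRepository<T>
{
    T GetById(int id);
    void Save(T entity);
}

// One new business requirement tends to mean one new query method here.
public interface IEmployeeRepository : IRepository<Employee>
{
    IEnumerable<Employee> FindAll(string name, int pageNumber, int pageSize);
    Employee FindByEmployeeNumber(string employeeNumber);
    IEnumerable<Employee> FindByDepartment(string departmentName);
}

Every entity with its own querying needs ends up with a similar interface and a matching implementation class, which is the inheritance sprawl described above.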
Leaky abstraction In order to make methods on repositories reusable in different situations, lot of developers tend to add a single method on repository that does not take any input and return an IQueryable<T> by calling ISession.Query<T> inside it, as shown next: public IQueryable<T> FindAll() {    return session.Query<T>(); } IQueryable<T> returned by this method can then be used to construct any query that you want outside of repository. This is a classic case of leaky abstraction. Repository is supposed to abstract away any concerns around querying the database, but now what we are doing here is returning an IQueryable<T> to the consuming code and asking it to build the queries, thus leaking the abstraction that is supposed to be hidden into repository. IQueryable<T> returned by the preceding method holds an instance of ISession that would be used to ultimately interact with database. Since repository has no control over how and when this IQueryable would invoke database interaction, you might get in trouble. If you are using "session per request" kind of pattern then you are safeguarded against it but if you are not using that pattern for any reason then you need to watch out for errors due to closed or disposed session objects. God object anti-pattern A god object is an object that does too many things. Sometimes, there is a single class in an application that does everything. Such an implementation is almost always bad as it majorly breaks the famous single responsibility principle (SRP) and reduces testability and maintainability of code. A lot can be written about SRP and god object anti-pattern but since it is not the primary topic, I would leave the topic with underscoring the importance of staying away from god object anti-pattern. Avid readers can Google on the topic if they are interested. Repositories by nature tend to become single point of database interaction. Any new database interaction goes through repository. Over time, repositories grow organically with large number of methods doing too many things. You may spot the anti-pattern and decide to break the repository into multiple small repositories but the original single repository would be tightly integrated with your code in so many places that splitting it would be a difficult job. For a contained and trivial domain model, repository pattern can be a good choice. So do not abandon repositories entirely. It is around complex and changing domain that repositories start exhibiting the problems just discussed. You might still argue that repository is an unneeded abstraction and we can very well use NHibernate directly for a trivial domain model. But I would caution against any design that uses NHibernate directly from domain or domain services layer. No matter what design I use for data access, I would always adhere to "explicitly declare capabilities required" principle. The abstraction that offers required capability can be a repository interface or some other abstractions that we would learn. Specification pattern Specification pattern is a reusable and object-oriented way of applying business rules on domain entities. The primary use of specification pattern is to select subset of entities from a larger collection of entities based on some rules. An important characteristic of specification pattern is combining multiple rules by chaining them together. Specification pattern was in existence before ORMs and other data access patterns had set their feet in the development community. 
The original form of specification pattern dealt with in-memory collections of entities. The pattern was then adopted to work with ORMs such as NHibernate as people started seeing the benefits that specification pattern could bring about. We would first discuss specification pattern in its original form. That would give us a good understanding of the pattern. We would then modify the implementation to make it fit with NHibernate. Specification pattern in its original form Let's look into an example of specification pattern in its original form. A specification defines a rule that must be satisfied by domain objects. This can be generalized using an interface definition, as follows: public interface ISpecification<T> { bool IsSatisfiedBy(T entity); } ISpecification<T> defines a single method IsSatisifedBy. This method takes the entity instance of type T as input and returns a Boolean value depending on whether the entity passed satisfies the rule or not. If we were to write a rule for employees living in London then we can implement a specification as follows: public class EmployeesLivingIn : ISpecification<Employee> { public bool IsSatisfiedBy(Employee entity) {    return entity.ResidentialAddress.City == "London"; } } The EmployeesLivingIn class implements ISpecification<Employee> telling us that this is a specification for the Employee entity. This specification compares the city from the employee's ResidentialAddress property with literal string "London" and returns true if it matches. You may be wondering why I have named this class as EmployeesLivingIn. Well, I had some refactoring in mind and I wanted to make my final code read nicely. Let's see what I mean. We have hardcoded literal string "London" in the preceding specification. This effectively stops this class from being reusable. What if we need a specification for all employees living in Paris? Ideal thing to do would be to accept "London" as a parameter during instantiation of this class and then use that parameter value in the implementation of the IsSatisfiedBy method. Following code listing shows the modified code: public class EmployeesLivingIn : ISpecification<Employee> { private readonly string city;   public EmployeesLivingIn(string city) {    this.city = city; }   public bool IsSatisfiedBy(Employee entity) {    return entity.ResidentialAddress.City == city; } } This looks good without any hardcoded string literals. Now if I wanted my original specification for employees living in London then following is how I could build it: var specification = new EmployeesLivingIn("London"); Did you notice how the preceding code reads in plain English because of the way class is named? Now, let's see how to use this specification class. Usual scenario where specifications are used is when you have got a list of entities that you are working with and you want to run a rule and find out which of the entities in the list satisfy that rule. Following code listing shows a very simple use of the specification we just implemented: List<Employee> employees = //Loaded from somewhere List<Employee> employeesLivingInLondon = new List<Employee>(); var specification = new EmployeesLivingIn("London");   foreach(var employee in employees) { if(specification.IsSatisfiedBy(employee)) {    employeesLivingInLondon.Add(employee); } } We have a list of employees loaded from somewhere and we want to filter this list and get another list comprising of employees living in London. 
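On .NET 3.5 or later, the same filtering loop can be written more compactly with LINQ to Objects (add a using System.Linq; directive). This is only a stylistic variation; the specification itself is unchanged.

var specification = new EmployeesLivingIn("London");
List<Employee> employeesLivingInLondon = employees
    .Where(employee => specification.IsSatisfiedBy(employee))
    .ToList();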
Till this point, the only benefit we have had from specification pattern is that we have managed to encapsulate the rule into a specification class which can be reused anywhere now. For complex rules, this can be very useful. But for simple rules, specification pattern may look like lot of plumbing code unless we overlook the composability of specifications. Most power of specification pattern comes from ability to chain multiple rules together to form a complex rule. Let's write another specification for employees who have opted for any benefit: public class EmployeesHavingOptedForBenefits : ISpecification<Employee> { public bool IsSatisfiedBy(Employee entity) {    return entity.Benefits.Count > 0; } } In this rule, there is no need to supply any literal value from outside so the implementation is quite simple. We just check if the Benefits collection on the passed employee instance has count greater than zero. You can use this specification in exactly the same way as earlier specification was used. Now if there is a need to apply both of these specifications to an employee collection, then very little modification to our code is needed. Let's start with adding an And method to the ISpecification<T> interface, as shown next: public interface ISpecification<T> { bool IsSatisfiedBy(T entity); ISpecification<T> And(ISpecification<T> specification); } The And method accepts an instance of ISpecification<T> and returns another instance of the same type. As you would have guessed, the specification that is returned from the And method would effectively perform a logical AND operation between the specification on which the And method is invoked and specification that is passed into the And method. The actual implementation of the And method comes down to calling the IsSatisfiedBy method on both the specification objects and logically ANDing their results. Since this logic does not change from specification to specification, we can introduce a base class that implements this logic. All specification implementations can then derive from this new base class. Following is the code for the base class: public abstract class Specification<T> : ISpecification<T> { public abstract bool IsSatisfiedBy(T entity);   public ISpecification<T> And(ISpecification<T> specification) {    return new AndSpecification<T>(this, specification); } } We have marked Specification<T> as abstract as this class does not represent any meaningful business specification and hence we do not want anyone to inadvertently use this class directly. Accordingly, the IsSatisfiedBy method is marked abstract as well. In the implementation of the And method, we are instantiating a new class AndSepcification. This class takes two specification objects as inputs. We pass the current instance and one that is passed to the And method. The definition of AndSpecification is very simple. public class AndSpecification<T> : Specification<T> { private readonly Specification<T> specification1; private readonly ISpecification<T> specification2;   public AndSpecification(Specification<T> specification1, ISpecification<T> specification2) {    this.specification1 = specification1;    this.specification2 = specification2; }   public override bool IsSatisfiedBy(T entity) {    return specification1.IsSatisfiedBy(entity) &&    specification2.IsSatisfiedBy(entity); } } AndSpecification<T> inherits from abstract class Specification<T> which is obvious. 
IsSatisfiedBy is simply performing a logical AND operation on the outputs of the ISatisfiedBy method on each of the specification objects passed into AndSpecification<T>. After we change our previous two business specification implementations to inherit from abstract class Specification<T> instead of interface ISpecification<T>, following is how we can chain two specifications using the And method that we just introduced: List<Employee> employees = null; //= Load from somewhere List<Employee> employeesLivingInLondon = new List<Employee>(); var specification = new EmployeesLivingIn("London")                                     .And(new EmployeesHavingOptedForBenefits());   foreach (var employee in employees) { if (specification.IsSatisfiedBy(employee)) {    employeesLivingInLondon.Add(employee); } } There is literally nothing changed in how the specification is used in business logic. The only thing that is changed is construction and chaining together of two specifications as depicted in bold previously. We can go on and implement other chaining methods but point to take home here is composability that the specification pattern offers. Now let's look into how specification pattern sits beside NHibernate and helps in fixing some of pain points of repository pattern. Specification pattern for NHibernate Fundamental difference between original specification pattern and the pattern applied to NHibernate is that we had an in-memory list of objects to work with in the former case. In case of NHibernate we do not have the list of objects in the memory. We have got the list in the database and we want to be able to specify rules that can be used to generate appropriate SQL to fetch the records from database that satisfy the rule. Owing to this difference, we cannot use the original specification pattern as is when we are working with NHibernate. Let me show you what this means when it comes to writing code that makes use of specification pattern. A query, in its most basic form, to retrieve all employees living in London would look something as follows: var employees = session.Query<Employee>()                .Where(e => e.ResidentialAddress.City == "London"); The lambda expression passed to the Where method is our rule. We want all the Employee instances from database that satisfy this rule. We want to be able to push this rule behind some kind of abstraction such as ISpecification<T> so that this rule can be reused. We would need a method on ISpecification<T> that does not take any input (there are no entities in-memory to pass) and returns a lambda expression that can be passed into the Where method. Following is how that method could look: public interface ISpecification<T> where T : EntityBase<T> { Expression<Func<T, bool>> IsSatisfied(); } Note the differences from the previous version. We have changed the method name from IsSatisfiedBy to IsSatisfied as there is no entity being passed into this method that would warrant use of word By in the end. This method returns an Expression<Fund<T, bool>>. If you have dealt with situations where you pass lambda expressions around then you know what this type means. If you are new to expression trees, let me give you a brief explanation. Func<T, bool> is a usual function pointer. This pointer specifically points to a function that takes an instance of type T as input and returns a Boolean output. Expression<Func<T, bool>> takes this function pointer and converts it into a lambda expression. 
An implementation of this new interface will make things clearer. The next code listing shows the specification for employees living in London written against the new contract:

public class EmployeesLivingIn : ISpecification<Employee>
{
  private readonly string city;

  public EmployeesLivingIn(string city)
  {
    this.city = city;
  }

  public Expression<Func<Employee, bool>> IsSatisfied()
  {
    return e => e.ResidentialAddress.City == city;
  }
}

Not much has changed here compared to the previous implementation. The definition of IsSatisfied now returns a lambda expression instead of a bool. This lambda is exactly the same as the one we used in the ISession example. If I had to rewrite that example using the preceding specification, this is how it would look:

var specification = new EmployeesLivingIn("London");
var employees = session.Query<Employee>()
               .Where(specification.IsSatisfied());

We now have a specification wrapped in a reusable object that we can send straight to NHibernate's ISession interface. Now let's think about how we can use this from within domain services where we used repositories before. We do not want to reference ISession or any other NHibernate type from domain services, as that would break the onion architecture. We have two options. We can declare a new capability that takes a specification and executes it against the ISession interface, and then make domain service classes take a dependency on this new capability. Or we can use the existing IRepository capability and add a method on it that takes the specification and executes it. We started this article with a statement that repositories have a downside, specifically when it comes to querying entities using different criteria. But now we are considering an option to enrich the repositories with specifications. Is that contradictory? Remember that one of the problems with the repository was that every time there was a new criterion to query an entity, we needed a new method on the repository. The specification pattern fixes that problem. It has taken the criterion out of the repository and moved it into its own class, so we only ever need a single method on the repository that takes in an ISpecification<T> and executes it. So using a repository is not as bad as it sounds. This is how the new method on the repository interface would look:

public interface IRepository<T> where T : EntityBase<T>
{
  void Save(T entity);
  void Update(int id, T entity);
  T GetById(int id);
  IEnumerable<T> Apply(ISpecification<T> specification);
}

The Apply method is the new method that works with specifications. Note that we have removed all the other methods that ran various different queries and replaced them with this single method. The methods to save and update entities are still there. Even the GetById method is there, as the mechanism used to get an entity by ID is not the same as the one used by specifications, so we retain that method. One thing I have experimented with in some projects is to split read operations from write operations. The IRepository interface represents something that is capable of both reading from the database and writing to the database. Sometimes, we only need the capability to read from the database, in which case IRepository looks like an unnecessarily heavy object with capabilities we do not need. In such a situation, declaring a new capability to execute specifications makes more sense. I would leave the actual code for this as a self-exercise for our readers.
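For completeness, here is one way the Apply method could be implemented in an NHibernate-backed repository. This is only a sketch, not the author's implementation: it assumes the repository is handed an ISession, and it reuses the session.Query<T>() call shown earlier (which requires the NHibernate.Linq namespace).

public class Repository<T> : IRepository<T> where T : EntityBase<T>
{
    private readonly ISession session;

    public Repository(ISession session)
    {
        this.session = session;
    }

    public void Save(T entity)
    {
        session.Save(entity);
    }

    public void Update(int id, T entity)
    {
        // The id parameter mirrors the interface; NHibernate tracks identity from the entity itself.
        session.Update(entity);
    }

    public T GetById(int id)
    {
        return session.Get<T>(id);
    }

    public IEnumerable<T> Apply(ISpecification<T> specification)
    {
        // The lambda expression built by the specification is handed straight to NHibernate,
        // which translates it into SQL when the query is enumerated.
        return session.Query<T>()
                      .Where(specification.IsSatisfied())
                      .ToList();
    }
}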
Specification chaining In the original implementation of specification pattern, chaining was simply a matter of carrying out logical AND between the outputs of the IsSatisfiedBy method on the specification objects involved in chaining. In case of NHibernate adopted version of specification pattern, the end result boils down to the same but actual implementation is slightly more complex than just ANDing the results. Similar to original specification pattern, we would need an abstract base class Specification<T> and a specialized AndSepcificatin<T> class. I would just skip these details. Let's go straight into the implementation of the IsSatisifed method on AndSpecification where actual logical ANDing happens. public override Expression<Func<T, bool>> IsSatisfied() { var p = Expression.Parameter(typeof(T), "arg1"); return Expression.Lambda<Func<T, bool>>(Expression.AndAlso(          Expression.Invoke(specification1.IsSatisfied(), p),          Expression.Invoke(specification2.IsSatisfied(), p)), p); } Logical ANDing of two lambda expression is not a straightforward operation. We need to make use of static methods available on helper class System.Linq.Expressions.Expression. Let's try to go from inside out. That way it is easier to understand what is happening here. Following is the reproduction of innermost call to the Expression class: Expression.Invoke(specification1.IsSatisfied(), parameterName) In the preceding code, we are calling the Invoke method on the Expression class by passing the output of the IsSatisfied method on the first specification. Second parameter passed to this method is a temporary parameter of type T that we created to satisfy the method signature of Invoke. The Invoke method returns an InvocationExpression which represents the invocation of the lambda expression that was used to construct it. Note that actual lambda expression is not invoked yet. We do the same with second specification in question. Outputs of both these operations are then passed into another method on the Expression class as follows: Expression.AndAlso( Expression.Invoke(specification1.IsSatisfied(), parameterName), Expression.Invoke(specification2.IsSatisfied(), parameterName) ) Expression.AndAlso takes the output from both specification objects in the form of InvocationExpression type and builds a special type called BinaryExpression which represents a logical AND between the two expressions that were passed to it. Next we convert this BinaryExpression into an Expression<Func<T, bool>> by passing it to the Expression.Lambda<Func<T, bool>> method. This explanation is not very easy to follow and if you have never used, built, or modified lambda expressions programmatically like this before, then you would find it very hard to follow. In that case, I would recommend not bothering yourself too much with this. Following code snippet shows how logical ORing of two specifications can be implemented. Note that the code snippet only shows the implementation of the IsSatisfied method. public override Expression<Func<T, bool>> IsSatisfied() { var parameterName = Expression.Parameter(typeof(T), "arg1"); return Expression.Lambda<Func<T, bool>>(Expression.OrElse( Expression.Invoke(specification1.IsSatisfied(), parameterName), Expression.Invoke(specification2.IsSatisfied(), parameterName)), parameterName); } Rest of the infrastructure around chaining is exactly same as the one presented during discussion of original specification pattern. 
I have avoided giving full class definitions here to save space but you can download the code to look at complete implementation. That brings us to end of specification pattern. Though specification pattern is a great leap forward from where repository left us, it does have some limitations of its own. Next, we would look into what these limitations are. Limitations Specification pattern is great and unlike repository pattern, I am not going to tell you that it has some downsides and you should try to avoid it. You should not. You should absolutely use it wherever it fits. I would only like to highlight two limitations of specification pattern. Specification pattern only works with lambda expressions. You cannot use LINQ syntax. There may be times when you would prefer LINQ syntax over lambda expressions. One such situation is when you want to go for theta joins which are not possible with lambda expressions. Another situation is when lambda expressions do not generate optimal SQL. I will show you a quick example to understand this better. Suppose we want to write a specification for employees who have opted for season ticket loan benefit. Following code listing shows how that specification could be written: public class EmployeeHavingTakenSeasonTicketLoanSepcification :Specification<Employee> { public override Expression<Func<Employee, bool>> IsSatisfied() {    return e => e.Benefits.Any(b => b is SeasonTicketLoan); } } It is a very simple specification. Note the use of Any to iterate over the Benefits collection to check if any of the Benefit in that collection is of type SeasonTicketLoan. Following SQL is generated when the preceding specification is run: SELECT employee0_.Id           AS Id0_,        employee0_.Firstname     AS Firstname0_,        employee0_.Lastname     AS Lastname0_,        employee0_.EmailAddress AS EmailAdd5_0_,        employee0_.DateOfBirth   AS DateOfBi6_0_,        employee0_.DateOfJoining AS DateOfJo7_0_,        employee0_.IsAdmin       AS IsAdmin0_,        employee0_.Password     AS Password0_ FROM   Employee employee0_ WHERE EXISTS (SELECT benefits1_.Id FROM   Benefit benefits1_ LEFT OUTER JOIN Leave benefits1_1_ ON benefits1_.Id = benefits1_1_.Id LEFT OUTER JOIN SkillsEnhancementAllowance benefits1_2_ ON benefits1_.Id = benefits1_2_.Id LEFT OUTER JOIN SeasonTicketLoan benefits1_3_ ON benefits1_.Id = benefits1_3_.Id WHERE employee0_.Id = benefits1_.Employee_Id AND CASE WHEN benefits1_1_.Id IS NOT NULL THEN 1      WHEN benefits1_2_.Id IS NOT NULL THEN 2      WHEN benefits1_3_.Id IS NOT NULL THEN 3       WHEN benefits1_.Id IS NOT NULL THEN 0      END = 3) Isn't that SQL too complex? It is not only complex on your eyes but this is not how I would have written the needed SQL in absence of NHibernate. I would have just inner-joined the Employee, Benefit, and SeasonTicketLoan tables to get the records I need. On large databases, the preceding query may be too slow. There are some other such situations where queries written using lambda expressions tend to generate complex or not so optimal SQL. If we use LINQ syntax instead of lambda expressions, then we can get NHibernate to generate just the SQL. Unfortunately, there is no way of fixing this with specification pattern. Summary Repository pattern has been around for long time but suffers through some issues. General nature of its implementation comes in the way of extending repository pattern to use it with complex domain models involving large number of entities. 
The repository contract can be limiting and confusing when there is a need to write complex and very specific queries. Trying to fix these issues with repositories may result in a leaky abstraction that can bite us later. Moreover, repositories maintained with less care have a tendency to grow into god objects, and maintaining them beyond that point becomes a challenge. The specification pattern and the query object pattern solve these issues on the read side of things. Different applications have different data access requirements. Some applications are write-heavy while others are read-heavy, but only a small number of applications fit into the former category. A large number of applications developed these days are read-heavy. I have worked on applications in which more than 90 percent of the database operations queried data and less than 10 percent actually inserted or updated data in the database. Having this knowledge about the application you are developing can be very useful in determining how you are going to design your data access layer. That brings us to the end of our NHibernate journey. Not quite, but yes, in a way. Resources for Article: Further resources on this subject: NHibernate 3: Creating a Sample Application [article] NHibernate 3.0: Using LINQ Specifications in the data access layer [article] NHibernate 2: Mapping relationships and Fluent Mapping [article]

Creating an Apache JMeter™ test workbench

Packt
25 Nov 2014
7 min read
This article is written by Colin Henderson, the author of Mastering GeoServer. This article will give you a brief introduction about how to create an Apache JMeter™ test workbench. (For more resources related to this topic, see here.) Before we can get into the nitty-gritty of creating a test workbench for Apache JMeter™, we must download and install it. Apache JMeter™ is a 100 percent Java application, which means that it will run on any platform provided there is a Java 6 or higher runtime environment present. The binaries can be downloaded from http://jmeter.apache.org/download_jmeter.cgi, and at the time of writing, the latest version is 2.11. No installation is required; just download the ZIP file and decompress it to a location you can access from a command-line prompt or shell environment. To launch JMeter on Linux, simply open shell and enter the following command: $ cd <path_to_jmeter>/bin$ ./jmeter To launch JMeter on Windows, simply open a command prompt and enter the following command: C:> cd <path_to_jmeter>\binC:> jmeter After a short time, JMeter GUI should appear, where we can construct our test plan. For ease and convenience, consider setting your system's PATH environment variable to the location of the JMeter bin directory. In future, you will be able to launch JMeter from the command line without having to CD first. The JMeter workbench will open with an empty configuration ready for us to construct our test strategy: The first thing we need to do is give our test plan a name; for now, let's call it GeoServer Stress Test. We can also provide some comments, which is good practice as it will help us remember for what reason we devised the test plan in future. To demonstrate the use of JMeter, we will create a very simple test plan. In this test plan, we will simulate a certain number of users hitting our GeoServer concurrently and requesting maps. To set this up, we first need to add Thread Group to our test plan. In a JMeter test, a thread is equivalent to a user: In the left-hand side menu, we need to right-click on the GeoServer Stress Test node and choose the Add | Threads (Users) | Thread Group menu option. This will add a child node to the test plan that we right-clicked on. The right-hand side panel provides options that we can set for the thread group to control how the user requests are executed. For example, we can name it something meaningful, such as Web Map Requests. In this test, we will simulate 30 users, making map requests over a total duration of 10 minutes, with a 10-second delay between each user starting. The number of users is set by entering a value for Number of Threads; in this case, 30. The Ramp-Up Period option controls the delay in starting each user by specifying the duration in which all the threads must start. So, in our case, we enter a duration of 300 seconds, which means all 30 users will be started by the end of 300 seconds. This equates to a 10-second delay between starting threads (300 / 30 = 10). Finally, we will set a duration for the test to run over by ticking the box for Scheduler, and then specifying a value of 600 seconds for Duration. By specifying a duration value, we override the End Time setting. Next, we need to provide some basic configuration elements for our test. First, we need to set the default parameters for all web requests. Right-click on the Web Map Requests thread group node that we just created, and then navigate to Add | Config Element | User Defined Variables. 
This will add a new node in which we can specify the default HTTP request parameters for our test: In the right-hand side panel, we can specify any number of variables. We can use these as replacement tokens later when we configure the web requests that will be sent during our test run. In this panel, we specify all the standard WMS query parameters that we don't anticipate changing across requests. Taking this approach is a good practice as it means that we can create a mix of tests using the same values, so if we change one, we don't have to change all the different test elements. To execute requests, we need to add Logic Controller. JMeter contains a lot of different logic controllers, but in this instance, we will use Simple Controller to execute a request. To add the controller, right-click on the Web Map Requests node and navigate to Add | Logic Controller | Simple Controller. A simple controller does not require any configuration; it is merely a container for activities we want to execute. In our case, we want the controller to read some data from our CSV file, and then execute an HTTP request to WMS. To do this, we need to add a CSV dataset configuration. Right-click on the Simple Controller node and navigate to Add | Config Element | CSV Data Set Config. The settings for the CSV data are pretty straightforward. The filename is set to the file that we generated previously, containing the random WMS request properties. The path can be specified as relative or absolute. The Variable Names property is where we specify the structure of the CSV file. The Recycle on EOF option is important as it means that the CSV file will be re-read when the end of the file is reached. Finally, we need to set Sharing mode to All threads to ensure the data can be used across threads. Next, we need to add a delay to our requests to simulate user activity; in this case, we will introduce a small delay of 5 seconds to simulate a user performing a map-pan operation. Right-click on the Simple Controller node, and then navigate to Add | Timer | Constant Timer: Simply specify the value we want the thread to be paused for in milliseconds. Finally, we need to add a JMeter sampler, which is the unit that will actually perform the HTTP request. Right-click on the Simple Controller node and navigate to Add | Sampler | HTTP Request. This will add an HTTP Request sampler to the test plan: There is a lot of information that goes into this panel; however, all it does is construct an HTTP request that the thread will execute. We specify the server name or IP address along with the HTTP method to use. The important part of this panel is the Parameters tab, which is where we need to specify all the WMS request parameters. Notice that we used the tokens that we specified in the CSV Data Set Config and WMS Request Defaults configuration components. We use the ${token_name} token, and JMeter replaces the token with the appropriate value of the referenced variable. We configured our test plan, but before we execute it, we need to add some listeners to the plan. A JMeter listener is the component that will gather the information from all of the test runs that occur. We add listeners by right-clicking on the thread group node and then navigating to the Add | Listeners menu option. A list of available listeners is displayed, and we can select the one we want to add. For our purposes, we will add the Graph Results, Generate Summary Results, Summary Report, and Response Time Graph listeners. 
Each listener can have its output saved to a datafile for later review. When completed, our test plan structure should look like the following: Before executing the plan, we should save it for use later. Summary In this article, we looked at how Apache JMeter™ can be used to construct and execute test plans to place loads on our servers so that we can analyze the results and gain an understanding of how well our servers perform. Resources for Article: Further resources on this subject: Geo-Spatial Data in Python: Working with Geometry [article] Working with Geo-Spatial Data in Python [article] Getting Started with GeoServer [article]

ArcGIS Spatial Analyst

Packt
20 Jan 2015
16 min read
In this article by Daniela Cristiana Docan, author of ArcGIS for Desktop Cookbook, we will learn that the ArcGIS Spatial Analyst extension offers a lot of great tools for geoprocessing raster data. Most of the Spatial Analyst tools generate a new raster output. Before starting a raster analysis session, it's best practice to set the main analysis environment parameters settings (for example, scratch the workspace, extent, and cell size of the output raster). In this article, you will store all raster datasets in file geodatabase as file geodatabase raster datasets. (For more resources related to this topic, see here.) Analyzing surfaces In this recipe, you will represent 3D surface data in a two-dimensional environment. To represent 3D surface data in the ArcMap 2D environment, you will use hillshades and contours. You can use the hillshade raster as a background for other raster or vector data in ArcMap. Using the surface analysis tools, you can derive new surface data, such as slope and aspect or locations visibility. Getting ready In the surface analysis context: The term slope refers to the steepness of raster cells Aspect defines the orientation or compass direction of a cell Visibility identifies which raster cells are visible from a surface location In this recipe, you will prepare your data for analysis by creating an elevation surface named Elevation from vector data. The two feature classes involved are the PointElevation point feature class and the ContourLine polyline feature class. All other output raster datasets will derive from the Elevation raster. How to do it... Follow these steps to prepare your data for spatial analysis: Start ArcMap and open the existing map document, SurfaceAnalysis.mxd, from <drive>:PacktPublishingDataSpatialAnalyst. Go to Customize | Extensions and check the Spatial Analyst extension. Open ArcToolbox, right-click on the ArcToolbox toolbox, and select Environments. Set the geoprocessing environment as follows: Workspace | Current Workspace: DataSpatialAnalystTOPO5000.gdb and Scratch Workspace: DataSpatialAnalystScratchTOPO5000.gdb. Output Coordinates: Same as Input. Raster Analysis | Cell Size: As Specified below: type 0.5 with unit as m. Mask: SpatialAnalystTOPO5000.gdbTrapezoid5k. Raster Storage | Pyramid: check Build pyramids and Pyramid levels: type 3. Click on OK. In ArcToolbox, expand Spatial Analyst Tools | Interpolation, and double-click on the Topo to Raster tool to open the dialog box. Click on Show Help to see the meaning of every parameter. Set the following parameters: Input feature data: PointElevation Field: Elevation and Type: PointElevation ContourLine Field: Elevation and Type: Contour WatercourseA Type: Lake Output surface raster: ...ScratchTOPO5000.gdbElevation Output extent (optional): ContourLine Drainage enforcement (optional): NO_ENFORCE Accept the default values for all other parameters. Click on OK. The Elevation raster is a continuous thematic raster. The raster cells are arranged in 4,967 rows and 4,656 columns. Open Layer Properties | Source of the raster and explore the following properties: Data Type (File Geodatabase Raster Dataset), Cell Size (0.5 meters) or Spatial Reference (EPSG: 3844). In the Layer Properties window, click on the Symbology tab. Select the Stretched display method for the continuous raster cell values as follows: Show: Stretched and Color Ramp: Surface. Click on OK. 
Explore the cell values using the following two options: Go to Layer Properties | Display and check Show MapTips Add the Spatial Analyst toolbar, and from Customize | Commands, add the Pixel Inspector tool Let's create a hillshade raster using the Elevation layer: Expand Spatial Analyst Tools | Interpolation and double-click on the Hillshade tool to open the dialog box. Set the following parameters: Input raster: ScratchTOPO5000.gdbElevation Output raster: ScratchTOPO5000.gdbHillshade Azimuth (optional): 315 and Altitude (optional): 45 Accept the default value for Z factor and leave the Model shadows option unchecked. Click on OK. From time to time, please ensure to save the map document as MySurfaceAnalysis.mxd at ...DataSpatialAnalyst. The Hillshade raster is a discrete thematic raster that has an associated attribute table known as Value Attribute Table (VAT). Right-click on the Hillshade raster layer and select Open Attribute Table. The Value field stores the illumination values of the raster cells based on the position of the light source. The 0 value (black) means that 25406 cells are not illuminated by the sun, and 254 value (white) means that 992 cells are entirely illuminated. Close the table. In the Table Of Contents section, drag the Hillshade layer below the Elevation layer, and use the Effects | Transparency tool to add a transparency effect for the Elevation raster layer, as shown in the following screenshot: In the next step, you will derive a raster of slope and aspect from the Elevation layer. Expand Spatial Analyst Tools | Interpolation and double-click on the Slope tool to open the dialog box. Set the following parameters: Input raster: Elevation Output raster: ScratchTOPO5000.gdbSlopePercent Output measurement (optional): PERCENT_RISE Click on OK. Symbolize the layer using the Classified method, as follows: Show: Classified. In the Classification section, click on Classify and select the Manual classification method. You will add seven classes. To add break values, right-click on the empty space of the Insert Break graph. To delete one, select the break value from the graph, and right-click to select Delete Break. Do not erase the last break value, which represents the maximum value. Secondly, in the Break Values section, edit the following six values: 5; 7; 15; 20; 60; 90, and leave unchanged the seventh value (496,6). Select Slope (green to red) for Color Ramp. Click on OK. The green areas represent flatter slopes, while the red areas represent steep slopes, as shown in the following screenshot: Expand Spatial Analyst Tools | Interpolation and double click on the Aspect tool to open the dialog box. Set the following parameters: Input raster: Elevation Output raster: ScratchTOPO5000.gdbAspect Click on OK. Symbolize the Aspect layer. For Classify, click on the Manual classification method. You will add five classes. To add or delete break values, right-click on the empty space of the graph, and select Insert / Delete Break. Secondly, edit the following four values: 0; 90; 180; 270, leaving unchanged the fifth value in the Break Values section. Click on OK. In the Symbology window, edit the labels of the five classes as shown in the following picture. Click on OK. In the Table Of Contents section, select the <VALUE> label, and type Slope Direction. The following screenshot is the result of this action: In the next step, you will create a raster of visibility between two geodetic points in order to plan some topographic measurements using an electronic theodolite. 
You will use the TriangulationPoint and Elevation layers: In the Table Of Contents section, turn on the TriangulationPoint layer, and open its attribute table to examine the fields. There are two geodetic points with the following supplementary fields: OffsetA and OffsetB. OffsetA is the proposed height of the instrument mounted on its tripod above stations 8 and 72. OffsetB is the proposed height of the reflector (or target) above the same points. Close the table. Expand Spatial Analyst Tools | Interpolation and double-click on the Visibility tool to open the dialog box. Click on Show Help to see the meaning of every parameter. Set the following parameters: Input raster: Elevation Input point or polyline observer features: TOPO5000.gdbGeodeticPointsTriangulationPoint Output raster: ScratchTOPO5000.gdbVisibility Analysis type (optional): OBSERVERS Observer parameters | Surface offset (optional): OffsetB Observer offset (optional): OffsetA Outer radius (optional): For this, type 1600 Notice that OffsetA and OffsetB were automatically assigned. The Outer radius parameter limits the search distance, and it is the rounded distance between the two geodetic points. All other cells beyond the 1,600-meter radius will be excluded from the visibility analysis. Click on OK. Open the attribute table of the Visibility layer to inspect the fields and values. The Value field stores the value of cells. Value 0 means that cells are not visible from the two points. Value 1 means that 6,608,948 cells are visible only from point 8 (first observer OBS1). Value 2 means that 1,813,578 cells are visible only from point 72 (second observer OBS2). Value 3 means that 4,351,861 cells are visible from both points. In conclusion, there is visibility between the two points if the height of the instrument and reflector is 1.5 meters. Close the table. Symbolize the Visibility layer, as follows: Show: Unique Values and Value Field: Value. Click on Add All Values and choose Color Scheme: Yellow-Green Bright. Select <Heading> and change the Label value to Height 1.5 meters. Double-click on the symbol for Value as 0, and select No Color. Click on OK. The Visibility layer is symbolized as shown in the following screenshot: Turn off all layers except the Visibility, TriangulationPoint, and Hillshade layers. Save your map as MySurfaceAnalysis.mxd and close ArcMap. You can find the final results at <drive>:PacktPublishingDataSpatialAnalystSurfaceAnalysis. How it works... You have started the exercise by setting the geoprocessing environment. You will override those settings in the next recipes. At the application level, you chose to build pyramids. By creating pyramids, your raster will be displayed faster when you zoom out. The pyramid levels contain the copy of the original raster at a low resolution. The original raster will have a cell size of 0.5 meters. The pixel size will double at each level of the pyramid, so the first level will have a cell size of 1 meter; the second level will have a cell size of 2 meters; and the third level will have a cell size of 4 meters. Even if the values of cells refer to heights measured above the local mean sea level (zero-level surface), you should consider the planimetric accuracy of the dataset. Please remember that TOPO5000.gdb refers to a product at the scale 1:5,000. This is the reason why you have chosen 0.5 meters for the raster cell size. At step 4, you used the PointElevation layer as supplementary data when you created the Elevation raster. 
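For readers who prefer scripting, the same chain of steps can be driven from Python with arcpy and the Spatial Analyst module. The snippet below is a simplified sketch rather than a full reproduction of the recipe: the paths are placeholders, the Topo to Raster inputs are reduced to the contour and point layers, and you should confirm the exact function signatures in the arcpy documentation for your ArcGIS version.

import arcpy
from arcpy.sa import *

arcpy.CheckOutExtension("Spatial")
arcpy.env.workspace = r"C:\PacktPublishing\Data\SpatialAnalyst\TOPO5000.gdb"   # placeholder path
arcpy.env.cellSize = 0.5
arcpy.env.mask = "Trapezoid5k"

# Build the elevation surface from contours and spot heights (simplified input list).
elevation = TopoToRaster([TopoContour([["ContourLine", "Elevation"]]),
                          TopoPointElevation([["PointElevation", "Elevation"]])])
elevation.save(r"C:\PacktPublishing\Data\SpatialAnalyst\ScratchTOPO5000.gdb\Elevation")

# Derive the hillshade, percent slope and aspect rasters from the elevation surface.
hillshade = Hillshade(elevation, 315, 45)
slope_percent = Slope(elevation, "PERCENT_RISE")
aspect = Aspect(elevation)

hillshade.save(r"C:\PacktPublishing\Data\SpatialAnalyst\ScratchTOPO5000.gdb\Hillshade")
slope_percent.save(r"C:\PacktPublishing\Data\SpatialAnalyst\ScratchTOPO5000.gdb\SlopePercent")
aspect.save(r"C:\PacktPublishing\Data\SpatialAnalyst\ScratchTOPO5000.gdb\Aspect")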
If one of your ArcToolbox tools fails to execute or you have obtained an empty raster output, you have some options here: Open the Results dialog from the Geoprocessing menu to explore the error report. This will help you to identify the parameter errors. Right-click on the previous execution of the tool and choose Open (step 1). Change the parameters and click on OK to run the tool. Choose Re Run if you want to run the tool with the parameters unchanged (step 2) as shown in the following screenshot: Run the ArcToolbox tool from the ArcCatalog application. Before running the tool, check the geoprocessing environment in ArcCatalog by navigating to Geoprocessing | Environments. There's more... What if you have a model with all previous steps? Open ArcCatalog, and go to ...DataSpatialAnalystModelBuilder. In the ModelBuilder folder, you have a toolbox named MyToolbox, which contains the Surface Analysis model. Right-click on the model and select Properties. Take your time to study the information from the General, Parameters, and Environments tabs. The output (derived data) will be saved in Scratch Workspace: ModelBuilder ScratchTopo5000.gdb. Click on OK to close the Surface Analysis Properties window. Running the entire model will take you around 25 minutes. You have two options: Tool dialog option: Right-click on the Surface Analysis model and select Open. Notice the model parameters that you can modify and read the Help information. Click on OK to run the model. Edit mode: Right-click on the Surface Analysis model and select Edit. The colored model elements are in the second state—they are ready to run the Surface Analysis model by using one of those two options: To run the entire model at the same time, select Run Entire Model from the Model menu. To run the tools (yellow rounded rectangle) one by one, select the Topo to Raster tool with the Select tool, and click on the Run tool from the Standard toolbar. Please remember that a shadow behind a tool means that the model element has already been run. You used the Visibility tool to check the visibility between two points with 1.5 meters for the Observer offset and Surface offset parameters. Try yourself to see what happens if the offset value is less than 1.5 meters. To again run the Visibility tool in the Edit mode, right-click on the tool, and select Open. For Surface offset and Observer offset, type 0.5 meters and click on OK to run the tool. Repeat these steps for a 1 meter offset. Interpolating data Spatial interpolation is the process of estimating an unknown value between two known values taking into account Tobler's First Law: "Everything is related to everything else, but near things are more related than distant things." This recipe does not undertake to teach you the advanced concept of interpolation because it is too complex for this book. Instead, this recipe will guide you to create a terrain surface using the following: A feature class with sample elevation points Two interpolation methods: Inverse Distance Weighted (IDW) and Spline For further research, please refer to: Geographic Information Analysis, David O'Sullivan and David Unwin, John Wiley & Sons, Inc., 2003, specifically the 8.3 Spatial interpolation recipe of Chapter 8, Describing and Analyzing Fields, pp.220-234. Getting ready In this recipe, you will create a terrain surface stored as a raster using the PointElevation sample points. 
Your sample data has the following characteristics: The average distance between points is 150 meters The density of sample points is not the same on the entire area of interest There are not enough points to define the cliffs and the depressions There are not extreme differences in elevation values How to do it... Follow these steps to create a terrain surface using the IDW tool: Start ArcMap and open an existing map document Interpolation.mxd from <drive>:PacktPublishingDataSpatialAnalyst. Set the geoprocessing environment, as follows: Workspace | Current Workspace: DataSpatialAnalystTOPO5000.gdb and Scratch Workspace: DataSpatialAnalystScratchTOPO5000.gdb Output Coordinates: Same as the PointElevation layer Raster Analysis | Cell Size: As Specified Below: 1 Mask: DataSpatialAnalystTOPO5000.gdbTrapezoid5k In the next two steps, you will use the IDW tool. Running IDW with barrier polyline features will take you around 15 minutes: In ArcToolbox, go to Spatial Analyst Tools | Interpolation, and double-click on the IDW tool. Click on Show Help to see the meaning of every parameter. Set the following parameters: Input point features: PointElevation Z value field: Elevation Output raster: ScratchTOPO5000.gdbIDW_1 Power (optional): 0.5 Search radius (optional): Variable Search Radius Settings | Number of points: 6 Maximum distance: 500 Input barrier polyline features (optional): TOPO5000.gdbHydrographyWatercourseL Accept the default value of Output cell size (optional). Click on OK. Repeat step 3 by setting the following parameters: Input point features: PointElevation Z value field: Elevation Output raster: ScratchTOPO5000.gdbIDW_2 Power (optional): 2 The rest of the parameters are the same as in step 3. Click on OK. Symbolize the IDW_1 and IDW_2 layers as follows: Show: Classified; Classification: Equal Interval: 10 classes; Color Scheme: Surface. Click on OK. You should obtain the following results: In the following steps, you will use the Spline tool to generate the terrain surface: In ArcToolbox, go to Spatial Analyst Tools | Interpolation, and double-click on the Spline tool. Set the following parameters: Input point features: PointElevation Z value field: Elevation Output raster: ScratchTOPO5000.gdbSpline_Regular Spline type (optional): REGULARIZED Weight (optional): 0.1 and Number of points (optional): 6 Accept the default value of Output cell size (optional). Click on OK. Run again the Spline tool with the following parameters: Input point features: PointElevation Z value field: Elevation Output raster: ScratchTOPO5000.gdbSpline_Tension Spline type (optional): TENSION Weight (optional): 0.1 and Number of points (optional): 6 Accept the default value of Output cell size (optional). Click on OK. Symbolize the Spline_Regular and Spline_Tension raster layers using the Equal Interval method classification with 10 classes and the Surface color ramp: In the next steps, you will use the Spline with Barriers tool to generate a terrain surface using an increased number of sample points. You will transform the ContourLine layer in a point feature class. You will combine those new points with features from the PointElevation layer: In ArcToolbox, go to Data Management Tools | Features, and double-click on the Feature vertices to Points tool. Set the following parameters: Input features: ContourLine Output Feature Class: TOPO5000.gdbRelief ContourLine_FeatureVertices Point type (optional): ALL Click on OK. Inspect the attribute table of the newly created layer. 
In the Catalog window, go to ...TOPO5000.gdbRelief and create a copy of the PointElevation feature class. Rename the new feature class as ContourAndPoint. Right-click on ContourAndPoint and select Load | Load Data. Set the following parameters from the second and fourth panels: Input data: ContourLine_FeatureVertices Target Field: Elevation Matching Source Field: Elevation Accept the default values for the rest of the parameters and click on Finish. In ArcToolbox, go to Spatial Analyst Tools | Interpolation, and double-click on the Spline with Barriers tool. Set the following parameters: Input point features: ContourAndPoint Z value field: Elevation Input barrier features (optional): TOPO5000.gdbHydrographyWatercourseA Output raster: ScratchTOPO5000.gdbSpline_WaterA Smoothing Factor (optional): 0 Accept the default value of Output cell size (optional). Click on OK. You should obtain a similar terrain surface to what's shown here: Explore the results by comparing the similarities or differences of the terrain surface between interpolated raster layers and the ContourLine vector layer. The IDW method works well with a proper density of sample points. Try to create a new surface using the IDW tool and the ContourAndPoint layer as sample points. Save your map as MyInterpolation.mxd and close ArcMap. You can find the final results at <drive>:PacktPublishingDataSpatialAnalyst Interpolation. How it works... The IDW method generated an average surface that will not cross through the known point elevation values and will not estimate the values below the minimum or above the maximum given point values. The IDW tool allows you to define polyline barriers or limits in searching sample points for interpolation. Even if the WatercourseL polyline feature classes do not have elevation values, river features can be used to interrupt the continuity of interpolated surfaces. To obtain fewer averaged estimated values (reduce the IDW smoother effect) you have to: Reduce the sample size to 6 points Choose a variable search radius Increase the power to 2 The Power option defines the influence of sample point values. This value increases with the distance. There is a disadvantage because around a few sample points, there are small areas raised above the surrounding surface or small hollows below the surrounding surface. The Spline method has generated a surface that crosses through all the known point elevation values and estimates the values below the minimum or above the maximum sample point values. Because the density of points is quite low, we reduced the sample size to 6 points and defined a variable search radius of 500 meters in order to reduce the smoothening effect. The Regularized option estimates the hills or depressions that are not cached by the sample point values. The Tension option will force the interpolated values to stay closer to the sample point values. Starting from step 12, we increased the number of sample points in order to better estimate the surface. At step 14, notice that the Spline with Barriers tool allows you to use the polygon feature class as breaks or barriers in searching sample points for interpolation. Summary In this article, we learned about the ArcGIS Spatial Analyst extension and its tools. Resources for Article:   Further resources on this subject: Posting Reviews, Ratings, and Photos [article] Enterprise Geodatabase [article] Adding Graphics to the Map [article]

Spring MVC - Configuring and Deploying the Application

Packt
19 Feb 2010
5 min read
The first section will focus on configuring the application and its components so that the application can be deployed. The focus of the second section will be a real-world application that will be developed using the steps described in the article on developing the MVC components and in this article. That sets the agenda for this discussion.

Using Spring MVC – Configuring the Application

There are four main steps in configuring the application. They are: Configure the DispatcherServlet, Configure the Controller, Configure the View, and Configure the Build Script. The first step will be the same for any application that is built using Spring MVC. The other three steps change according to the components that have been developed for the application. Here are the details.

Configure the DispatcherServlet

The first step is to tell the application server that all the requests for this (Spring MVC based) application need to be routed to Spring MVC. This is done by setting up the DispatcherServlet. The reason for setting up the DispatcherServlet is that it acts as the entry point to Spring MVC and thus to the application. Since the DispatcherServlet interacts with the application as a whole (instead of with individual components), its configuration, or setting up, is done at the application level. Any setup that needs to be done at the application level is done by making the required entries in web.xml. The entries required in web.xml can be divided into servlet mapping and URL mapping. The former specifies the details of the servlet and the latter specifies how the servlet is related to a URL. Here are the details.

Servlet mapping

Servlet mapping is akin to declaring a variable. It is through servlet mapping that the application server knows which servlets of the application it needs to support. Servlet mapping, in essence, assigns a name to a servlet class that can be referenced throughout web.xml. To set up the DispatcherServlet, it first has to be mapped to a name. That can be done using the <servlet-name> and <servlet-class> tags, which are child nodes of the <servlet> tag. The following statement maps the DispatcherServlet to the name "dispatcher":

<servlet>
  <servlet-name>dispatcher</servlet-name>
  <servlet-class>org.springframework.web.servlet.DispatcherServlet</servlet-class>
</servlet>

Since the DispatcherServlet needs to be loaded on startup of the application server, instead of being loaded when a request arrives, the optional node <load-on-startup> with a value of 1 is also required. The modified <servlet> tag will be:

<servlet>
  <servlet-name>dispatcher</servlet-name>
  <servlet-class>org.springframework.web.servlet.DispatcherServlet</servlet-class>
  <load-on-startup>1</load-on-startup>
</servlet>

The next step is to map the URL to the servlet name so that requests can be routed to the DispatcherServlet.
To map the DispatcherServlet to a URL pattern, the <servlet-mapping> tag will be:

<servlet-mapping>
  <servlet-name>dispatcher</servlet-name>
  <url-pattern>*.html</url-pattern>
</servlet-mapping>

With this, the configuration of the DispatcherServlet is complete. One point to keep in mind is that the URL pattern can be any pattern of one's choice. However, it is a common practice to use *.html for the DispatcherServlet and *.do for the ActionServlet (Struts 1.x). The next step is to configure the View and Controller components of the application.

Mapping the Controller

By setting up the DispatcherServlet, the routing of requests to the application is taken care of by the application server. However, unless the individual controllers of the application are set up and configured, the framework does not know which controller should be called once the DispatcherServlet receives a request. The configuration of the Controller as well as the View components is done in the Spring MVC configuration file. The name of the configuration file depends on the name given to the DispatcherServlet in web.xml and is of the form <DispatcherServlet_name>-servlet.xml. So if the DispatcherServlet is mapped to the name dispatcher, then the name of the configuration file will be dispatcher-servlet.xml. The file resides in the WEB-INF folder of the application.

Everything in the Spring Framework is a bean, and controllers are no exception. Controllers are configured as beans using the <bean> child tag of the <beans> tag. A controller is mapped by providing the URL of the request as the name attribute and the fully qualified name of the Controller class as the value of the class attribute. For example, if the request URL is, say, http://localhost/test/hello.html, then the name attribute will have /hello.html and the class attribute will have the fully qualified class name, say, org.me.HelloWorldController. The following statement depicts this:

<bean name="/hello.html" class="org.me.HelloWorldController"/>

One point to keep in mind is that the "/" in the bean name represents the relative path. In other words, /hello.html means that hello.html is directly under http://localhost/test. If hello.html were under another directory, say jsp, which in turn was directly under the application, then the name attribute would be /jsp/hello.html. Let us move on to configuring the Views.
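Before doing so, it may help to see what a controller such as org.me.HelloWorldController could look like. The sketch below is only an illustration, not code from this article: the view name "hello" and the "message" model attribute are assumptions, but the class follows the org.springframework.web.servlet.mvc.Controller contract that this style of bean-name URL mapping expects.

package org.me;

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.springframework.web.servlet.ModelAndView;
import org.springframework.web.servlet.mvc.Controller;

// Minimal controller matching the <bean name="/hello.html" .../> definition above
public class HelloWorldController implements Controller {

    public ModelAndView handleRequest(HttpServletRequest request,
            HttpServletResponse response) throws Exception {
        // The view name "hello" is resolved by the configured ViewResolver,
        // for example to /jsp/hello.jsp; both names here are illustrative assumptions.
        return new ModelAndView("hello", "message", "Hello, Spring MVC!");
    }
}

A view resolver bean (for example, Spring's InternalResourceViewResolver) would then translate the returned view name into an actual JSP, which is exactly the view configuration the article turns to next.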

Geocoding Address-based Data

Packt
30 Mar 2015
7 min read
In this article by Kurt Menke, GISP, Dr. Richard Smith Jr., GISP, Dr. Luigi Pirelli, Dr. John Van Hoesen, GISP, authors of the book Mastering QGIS, we'll have a look at how to geocode address-based date using QGIS and MMQGIS. (For more resources related to this topic, see here.) Geocoding addresses has many applications, such as mapping the customer base for a store, members of an organization, public health records, or incidence of crime. Once mapped, the points can be used in many ways to generate information. For example, they can be used as inputs to generate density surfaces, linked to parcels of land, and characterized by socio-economic data. They may also be an important component of a cadastral information system. An address geocoding operation typically involves the tabular address data and a street network dataset. The street network needs to have attribute fields for address ranges on the left- and right-hand side of each road segment. You can geocode within QGIS using a plugin named MMQGIS (http://michaelminn.com/linux/mmqgis/). MMQGIS has many useful tools. For geocoding, we will use the tools found in MMQGIS | Geocode. There are two tools there: Geocode CSV with Google/ OpenStreetMap and Geocode from Street Layer as shown in the following screenshot. The first tool allows you to geocode a table of addresses using either the Google Maps API or the OpenStreetMap Nominatim web service. This tool requires an Internet connection but no local street network data as the web services provide the street network. The second tool requires a local street network dataset with address range attributes to geocode the address data: How address geocoding works The basic mechanics of address geocoding are straightforward. The street network GIS data layer has attribute columns containing the address ranges on both the even and odd side of every street segment. In the following example, you can see a piece of the attribute table for the Streets.shp sample data. The columns LEFTLOW, LEFTHIGH, RIGHTLOW, and RIGHTHIGH contain the address ranges for each street segment: In the following example we are looking at Easy Street. On the odd side of the street, the addresses range from 101 to 199. On the even side, they range from 102 to 200. If you wanted to map 150 Easy Street, QGIS would assume that the address is located halfway down the even side of that block. Similarly, 175 Easy Street would be on the odd side of the street three quarters the way down the block. Address geocoding assumes that the addresses are evenly spaced along the linear network. QGIS should place the address point very close to its actual position, but due to variability in lot sizes not every address point will be perfectly positioned. Now that you've learned the basics, let's work through an example. Here we will geocode addresses using web services. The output will be a point shapefile containing all the attribute fields found in the source Addresses.csv file. An example – geocoding using web services Here are the steps for geocoding the Addresses.csv sample data using web services. Load the Addresses.csv and the Streets.shp sample data into QGIS Desktop. Open Addresses.csv and examine the table. These are addresses of municipal facilities. Notice that the street address (for example, 150 Easy Street) is contained in a single field. There are also fields for the city, state, and country. 
Since both Google and OpenStreetMap are global services, it is wise to include such fields so that the services can narrow down the geography. Install and enable the MMQGIS plugin. Navigate to MMQGIS | Geocode | Geocode CSV with Google/OpenStreetMap. The Web Service Geocode dialog window will open. Select Input CSV File (UTF-8) by clicking on Browse… and locating the delimited text file on your system. Select the address fields by clicking on the drop-down menu and identifying the Address Field, City Field, State Field, and Country Field fields. MMQGIS may identify some or all of these fields by default if they are named with logical names such as Address or State. Choose the web service. Name the output shapefile by clicking on Browse…. Name Not Found Output List by clicking on Browse…. Any records that are not matched will be written to this file. This allows you to easily see and troubleshoot any unmapped records. Click on OK. The status of the geocoding operation can be seen in the lower-left corner of QGIS. The word Geocoding will be displayed, followed by the number of records that have been processed. The output will be a point shapefile and a CSV file listing that addresses were not matched. Two additional attribute columns will be added to the output address point shapefile: addrtype and addrlocat. These fields provide information on how the web geocoding service obtained the location. These may be useful for accuracy assessment. Addrtype is the Google <type> element or the OpenStreetMap class attribute. This will indicate what kind of address type this is (highway, locality, museum, neighborhood, park, place, premise, route, train_station, university etc.). Addrlocat is the Google <location_type> element or OpenStreetMap type attribute. This indicates the relationship of the coordinates to the addressed feature (approximate, geometric center, node, relation, rooftop, way interpolation, and so on). If the web service returns more than one location for an address, the first of the locations will be used as the output feature. Use of this plugin requires an active Internet connection. Google places both rate and volume restrictions on the number of addresses that can be geocoded within various time limits. You should visit the Google Geocoding API website: (http://code.google.com/apis/maps/documentation/geocoding/) for more details, and current information and Google's terms of service. Geocoding via these web services can be slow. If you don't get the desired results with one service, try the other. Geocoding operations rarely have 100% success. Street names in the street shapefile must match the street names in the CSV file exactly. Any discrepancies between the name of a street in the address table, and the street attribute table will lower the geocoding success rate. The following image shows the results of geocoding addresses via street address ranges. The addresses are shown with the street network used in the geocoding operation: Geocoding is often an iterative process. After the initial geocoding operation, you can review the Not Found CSV file. If it's empty then all the records were matched. If it has records in it, compare them with the attributes of the streets layer. This will help you determine why those records were not mapped. It may be due to inconsistencies in the spelling of street names. It may also be due to a street centerline layer that is not as current as the addresses. 
Once the errors have been identified, they can be corrected by editing the data or by obtaining a different street centerline dataset. The geocoding operation can then be re-run on the unmatched addresses. This process can be repeated until all records are matched. Use the Identify tool to inspect the mapped points and the roads to ensure that the operation was successful. Never take a GIS operation for granted; check your results with a critical eye.

Summary

This article introduced you to the process of address geocoding using QGIS and the MMQGIS plugin.

Resources for Article: Further resources on this subject: Editing attributes [article] How Vector Features are Displayed [article] QGIS Feature Selection Tools [article]
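As a small footnote to the "How address geocoding works" discussion earlier in this article, placing a point along a segment is simply linear interpolation over the address range. The following toy Python snippet (not part of QGIS or MMQGIS) shows the arithmetic for the Easy Street example:

def position_along_segment(house_number, low, high):
    """Fraction (0 to 1) of the way along a street segment for a house number."""
    if high == low:
        return 0.5  # degenerate range: drop the point mid-segment
    return (house_number - low) / float(high - low)

# 150 Easy Street, even side (range 102-200): roughly halfway down the block
print(position_along_segment(150, 102, 200))   # ~0.49
# 175 Easy Street, odd side (range 101-199): about three quarters down the block
print(position_along_segment(175, 101, 199))   # ~0.76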

Groovy Closures

Packt
16 Sep 2015
9 min read
In this article by Fergal Dearle, the author of the book Groovy for Domain-Specific Languages - Second Edition, we will focus exclusively on closures. We will take a close look at them from every angle. Closures are the single most important feature of the Groovy language. Closures are the special seasoning that helps Groovy stand out from Java. They are also the single most powerful feature that we will use when implementing DSLs. In the article, we will discuss the following topics: We will start by explaining just what a closure is and how we can define some simple closures in our Groovy code We will look at how many of the built-in collection methods make use of closures for applying iteration logic, and see how this is implemented by passing a closure as a method parameter We will look at the various mechanisms for calling closures A handy reference that you might want to consider having at hand while you read this article is GDK Javadocs, which will give you full class descriptions of all of the Groovy built-in classes, but of particular interest here is groovy.lang.Closure. (For more resources related to this topic, see here.) What is a closure Closures are such an unfamiliar concept to begin with that it can be hard to grasp initially. Closures have characteristics that make them look like a method in so far as we can pass parameters to them and they can return a value. However, unlike methods, closures are anonymous. A closure is just a snippet of code that can be assigned to a variable and executed later: def flintstones = ["Fred","Barney"] def greeter = { println "Hello, ${it}" } flintstones.each( greeter ) greeter "Wilma" greeter = { } flintstones.each( greeter ) greeter "Wilma" Because closures are anonymous, they can easily be lost or overwritten. In the preceding example, we defined a variable greeter to contain a closure that prints a greeting. After greeter is overwritten with an empty closure, any reference to the original closure is lost. It's important to remember that greeter is not the closure. It is a variable that contains a closure, so it can be supplanted at any time. Because greeter has a dynamic type, we could have assigned any other object to it. All closures are a subclass of the type groovy.lang.Closure. Because groovy.lang is automatically imported, we can refer to Closure as a type within our code. By declaring our closures explicitly as Closure, we cannot accidentally assign a non-closure to them: Closure greeter = { println it } For each closure that is declared in our code, Groovy generates a Closure class for us, which is a subclass of groovy.lang.Closure. Our closure object is an instance of this class. Although we cannot predict what exact type of closure is generated, we can rely on it being a subtype of groovy.lang.Closure. Closures and collection methods We will encounter Groovy lists and see some of the iteration functions, such as the each method: def flintstones = ["Fred","Barney"] flintstones.each { println "Hello, ${it}" } This looks like it could be a specialized control loop similar to a while loop. In fact, it is a call to the each method of Object. The each method takes a closure as one of its parameters, and everything between the curly braces {} defines another anonymous closure. Closures defined in this way can look quite similar to code blocks, but they are not the same. Code defined in a regular Java or Groovy style code block is executed as soon as it is encountered. 
With closures, the block of code defined in the curly braces is not executed until the call() method of the closure is made: println "one" def two = { println "two" } println "three" two.call() println "four" Will print the following: one three two four Let's dig a bit deeper into the structure of the each of the calls shown in the preceding code. I refer to each as a call because that's what it is—a method call. Groovy augments the standard JDK with numerous helper methods. This new and improved JDK is referred to as the Groovy JDK, or GDK for short. In the GDK, Groovy adds the each method to the java.lang.Object class. The signature of the each method is as follows: Object each(Closure closure) The java.lang.Object class has a number of similar methods such as each, find, every, any, and so on. Because these methods are defined as part of Object, you can call them on any Groovy or Java object. They make little sense on most objects, but they do something sensible if not very useful: given: "an Integer" def number = 1 when: "we call the each method on it" number.each { println it } then: "just the object itself gets passed into the Closure" "1" == output() These methods all have specific implementations for all of the collection types, including arrays, lists, ranges, and maps. So, what is actually happening when we see the call to flintstones.each is that we are calling the list's implementation of the each method. Because each takes a Closure as its last and only parameter, the following code block is interpreted by Groovy as an anonymous Closure object to be passed to the method. The actual call to the closure passed to each is deferred until the body of the each method itself is called. The closure may be called multiple times—once for every element in the collection. Closures as method parameters We already know that parentheses around method parameters are optional, so the previous call to each can also be considered equivalent to: flintstones.each ({ println "Hello, ${it}") Groovy has a special handling for methods whose last parameter is a closure. When invoking these methods, the closure can be defined anonymously after the method call parenthesis. So, yet another legitimate way to call the preceding line is: flintstones.each() { println "hello, ${it}" } The general convention is not to use parentheses unless there are parameters in addition to the closure: given: def flintstones = ["Fred", "Barney", "Wilma"] when: "we call findIndexOf passing int and a Closure" def result = flintstones.findIndexOf(0) { it == 'Wilma'} then: result == 2 The signature of the GDK findIndexOf method is: int findIndexOf(int, Closure) We can define our own methods that accept closures as parameters. The simplest case is a method that accepts only a single closure as a parameter: def closureMethod(Closure c) { c.call() } when: "we invoke a method that accepts a closure" closureMethod { println "Closure called" } then: "the Closure passed in was executed" "Closure called" == output() Method parameters as DSL This is an extremely useful construct when we want to wrap a closure in some other code. Suppose we have some locking and unlocking that needs to occur around the execution of a closure. 
Rather than the writer of the code to locking via a locking API call, we can implement the locking within a locker method that accepts the closure: def locked(Closure c) { callToLockingMethod() c.call() callToUnLockingMethod() } The effect of this is that whenever we need to execute a locked segment of code, we simply wrap the segment in a locked closure block, as follows: locked { println "Closure called" } In a small way, we are already writing a mini DSL when we use these types on constructs. This call to the locked method looks, to all intents and purposes, like a new language construct, that is, a block of code defining the scope of a locking operation. When writing methods that take other parameters in addition to a closure, we generally leave the Closure argument to last. As already mentioned in the previous section, Groovy has a special syntax handling for these methods, and allows the closure to be defined as a block after the parameter list when calling the method: def closureMethodInteger(Integer i, Closure c) { println "Line $i" c.call() } when: "we invoke a method that accepts an Integer and a Closure" closureMethodInteger(1) { println "Line 2" } then: "the Closure passed in was executed with the parameter" """Line 1 Line 2""" == output() Forwarding parameters Parameters passed to the method may have no impact on the closure itself, or they may be passed to the closure as a parameter. Methods can accept multiple parameters in addition to the closure. Some may be passed to the closure, while others may not: def closureMethodString(String s, Closure c) { println "Greet someone" c.call(s) } when: "we invoke a method that accepts a String and a Closure" closureMethodString("Dolly") { name -> println "Hello, $name" } then: "the Closure passed in was executed with the parameter" """Greet someone Hello, Dolly""" == output() This construct can be used in circumstances where we have a look-up code that needs to be executed before we have access to an object. Say we have customer records that need to be retrieved from a database before we can use them: def withCustomer (id, Closure c) { def cust = getCustomerRecord(id) c.call(cust) } withCustomer(12345) { customer -> println "Found customer ${customer.name}" } We can write an updateCustomer method that saves the customer record after the closure is invoked, and amend our locked method to implement transaction isolation on the database, as follows: class Customer { String name } def locked (Closure c) { println "Transaction lock" c.call() println "Transaction release" } def update (customer, Closure c) { println "Customer name was ${customer.name}" c.call(customer) println "Customer name is now ${customer.name}" } def customer = new Customer(name: "Fred") At this point, we can write code that nests the two method calls by calling update as follows: locked { update(customer) { cust -> cust.name = "Barney" } } This outputs the following result, showing how the update code is wrapped by updateCustomer, which retrieves the customer object and subsequently saves it. The whole operation is wrapped by locked, which includes everything within a transaction: Transaction lock Customer name was Fred Customer name is now Barney Transaction release Summary In this article, we covered closures in some depth. We explored the various ways to call a closure and the means of passing parameters. We saw how we can pass closures as parameters to methods, and how this construct can allow us to appear to add mini DSL syntax to our code. 
Closures are the real "power" feature of Groovy, and they form the basis of most of the DSLs. Resources for Article: Further resources on this subject: Using Groovy Closures Instead of Template Method [article] Metaprogramming and the Groovy MOP [article] Clojure for Domain-specific Languages - Design Concepts with Clojure [article]
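As a quick, illustrative recap of the calling mechanisms mentioned in this article (the greeter closure here is just an example, not code from the book), the three forms below are equivalent ways of invoking a closure:

def greeter = { name -> println "Hello, ${name}" }

greeter.call("Fred")    // explicit call() method
greeter("Barney")       // implicit call through parentheses
greeter "Wilma"         // parentheses omitted, command style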

New Features in JPA 2.0

Packt
28 Jul 2010
9 min read
(For more resources on Java, see here.) Version 2.0 of the JPA specification introduces some new features to make working with JPA even easier. In the following sections, we discuss some of these new features: Criteria API One of the main additions to JPA in the 2.0 specification is the introduction of the Criteria API. The Criteria API is meant as a complement to the Java Persistence Query Language (JPQL). Although JPQL is very flexible, it has some problems that make working with it more difficult than necessary. For starters, JPQL queries are stored as strings and the compiler has no way of validating JPQL syntax. Additionally, JPQL is not type safe. We could write a JPQL query in which our where clause could have a string value for a numeric property and our code would compile and deploy just fine. To get around the JPQL limitations described in the previous paragraph, the Criteria API was introduced to JPA in version 2.0 of the specification. The Criteria API allows us to write JPA queries programmatically, without having to rely on JPQL. The following code example illustrates how to use the Criteria API in our Java EE 6 applications: package net.ensode.glassfishbook.criteriaapi;import java.io.IOException;import java.io.PrintWriter;import java.util.List;import javax.persistence.EntityManager;import javax.persistence.EntityManagerFactory;import javax.persistence.PersistenceUnit;import javax.persistence.TypedQuery;import ja vax.persistence.criteria.CriteriaBuilder;import javax.persistence.criteria.CriteriaQuery;import javax.persistence.criteria.Path;import javax.persistence.criteria.Predicate;import javax.persistence.criteria.Root;import javax.persistence.metamodel.EntityType;import javax.persistence.metamodel.Metamodel;import javax.persistence.metamodel.SingularAttribute;import javax.servlet.ServletException;import javax.servlet.annotation.WebServlet;import javax.servlet.http.HttpServlet;import javax.servlet.http.HttpServletRequest;import javax.servlet.http.HttpServletResponse;@WebServlet(urlPatterns = {"/criteriaapi"})public class CriteriaApiDemoServlet extends HttpServlet{ @PersistenceUnit(unitName = "customerPersistenceUnit") private EntityManagerFactory entityManagerFactory; @Override protected void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException { PrintWriter printWriter = response.getWriter(); List<UsState> matchingStatesList; EntityManager entityManager = entityManagerFactory.createEntityManager(); CriteriaBuilder criteriaBuilder = entityManager.getCriteriaBuilder(); CriteriaQuery<UsState> criteriaQuery = criteriaBuilder.createQuery(UsState.class); Root<UsState> root = criteriaQuery.from(UsState.class); Metamodel metamodel = entityManagerFactory.getMetamodel(); EntityType<UsState> usStateEntityType = metamodel.entity(UsState.class); SingularAttribute<UsState, String> usStateAttribute = usStateEntityType.getDeclaredSingularAttribute("usStateNm", String.class); Path<String> path = root.get(usStateAttribute); Predicate predicate = criteriaBuilder.like(path, "New%"); criteriaQuery = criteriaQuery.where(predicate); TypedQuery typedQuery = entityManager.createQuery(criteriaQuery); matchingStatesList = typedQuery.getResultList(); response.setContentType("text/html"); printWriter.println("The following states match the criteria:<br/>"); for (UsState state : matchingStatesList) { printWriter.println(state.getUsStateNm() + "<br/>"); } }} This example takes advantage of the Criteria APIc. 
When writing code using the Criteria API, the first thing we need to do is to obtain an instance of a class implementing the javax.persistence.criteria. CriteriaBuilder interface. As we can see in the previous example, we need to obtain said instance by invoking the getCriteriaBuilder() method on our EntityManager. From our CriteriaBuilder implementation, we need to obtain an instance of a class implementing the javax.persistence.criteria.CriteriaQuery interface. We do this by invoking the createQuery() method in our CriteriaBuilder implementation. Notice that CriteriaQuery is generically typed. The generic type argument dictates the type of result that our CriteriaQuery implementation will return upon execution. By taking advantage of generics in this way, the Criteria API allows us to write type safe code. Once we have obtained a CriteriaQuery implementation, from it we can obtain an instance of a class implementing the javax.persistence.criteria.Root interface. The Root implementation dictates what JPA entity we will be querying from. It is analogous to the FROM query in JPQL (and SQL). The next two lines in our example take advantage of another new addition to the JPA specification—the Metamodel API. In order to take advantage of the Metamodel API, we need to obtain an implementation of the javax.persistence. metamodel.Metamodel interface by invoking the getMetamodel() method on our EntityManagerFactory. From our Metamodel implementation, we can obtain a generically typed instance of the javax.persistence.metamodel.EntityType interface. The generic type argument indicates the JPA entity our EntityType implementation corresponds to. EntityType allows us to browse the persistent attributes of our JPA entities at runtime. This is exactly what we do in the next line in our example. In our case, we are getting an instance of SingularAttribute, which maps to a simple, singular attribute in our JPA entity. EntityType has methods to obtain attributes that map to collections, sets, lists, and maps. Obtaining these types of attributes is very similar to obtaining a SingularAttribute, therefore we won't be covering those directly. Refer to the Java EE 6 API documentation at http://java.sun.com/javaee/6/ docs/api/ for more information. As we can see in our example, SingularAttribute contains two generic type arguments. The first argument dictates the JPA entity we are working with and the second one indicates the type of attribute. We obtain our SingularAttribute by invoking the getDeclaredSingularAttribute() method on our EntityType implementation and passing the attribute name (as declared in our JPA entity) as a String. Once we have obtained our SingularAttribute implementation, we need to obtain an import javax.persistence.criteria.Path implementation by invoking the get() method in our Root instance and passing our SingularAttribute as a parameter. In our example, we will get a list of all the "new" states in the United States (that is, all states whose names start with "New"). Of course, this is the job of a "like" condition. We can do this with the Criteria API by invoking the like() method on our CriteriaBuilder implementation. The like() method takes our Path implementation as its first parameter and the value to search for as its second parameter. 
CriteriaBuilder has a number of methods that are analogous to SQL and JPQL clauses such as equals(), greaterThan(), lessThan(), and(), or(), and so on and so forth (for the complete list, refer to the Java EE 6 documentation at http://java.sun.com/javaee/6/docs/api/). These methods can be combined to create complex queries via the Criteria API. The like() method in CriteriaBuilder returns an implementation of the javax.persistence.criteria.Predicate interface, which we need to pass to the where() method in our CriteriaQuery implementation. This method returns a new instance of CriteriaBuilder which we assign to our criteriaBuilder variable. At this point, we are ready to build our query. When working with the Criteria API, we deal with the javax.persistence.TypedQuery interface, which can be thought of as a type-safe version of the Query interface we use with JPQL. We obtain an instance of TypedQuery by invoking the createQuery() method in EntityManager and passing our CriteriaQuery implementation as a parameter. To obtain our query results as a list, we simply invoke getResultList() on our TypedQuery implementation. It is worth reiterating that the Criteria API is type safe. Therefore, attempting to assign the results of getResultList() to a list of the wrong type would result in a compilation error. After building, packaging, and deploying our code, then pointing the browser to our servlet's URL, we should see all the "New" states displayed in the browser. Bean Validation support Another new feature introduced in JPA 2.0 is support for JSR 303, Bean Validation. Bean Validation support allows us to annotate our JPA entities with Bean Validation annotations. These annotations allow us to easily validate user input and perform data sanitation. Taking advantage of Bean Validation is very simple, all we need to do is annotate our JPA entity fields or getter methods with any of the validation annotations defined in the javax.validation.constraints package. Once our fields are annotated as appropriate, the EntityManager will prevent non-validating data from being persisted. The following code example is a modified version of the Customer JPA entity. It has been modifed to take advantage of Bean Validation in some of its fields. package net.ensode.glassfishbook.jpa.beanvalidation;import java.io.Serializable;import javax.persistence.Column;import javax.persistence.Entity;import javax.persistence.Id;import javax.persistence.Table;import javax.validation.constraints.NotNull;import javax.validation.constraints.Size;@Entity@Table(name = "CUSTOMERS")public class Customer implements Serializable{ @Id @Column(name = "CUSTOMER_ID") private Long customerId; @Column(name = "FIRST_NAME") @NotNull @Size(min=2, max=20) private String firstName; @Column(name = "LAST_NAME") @NotNull @Size(min=2, max=20) private String lastName; private String email; public Long getCustomerId() { return customerId; } public void setCustomerId(Long customerId) { this.customerId = customerId; } public String getEmail() { return email; } public void setEmail(String email) { this.email = email; } public String getFirstName() { return firstName; } public void setFirstName(String firstName) { this.firstName = firstName; } public String getLastName() { return lastName; } public void setLastName(String lastName) { this.lastName = lastName; }} In this example, we used the @NotNull annotation to prevent the firstName and lastName of our entity from being persisted with null values. 
We also used the @Size annotation to restrict the minimum and maximum length of these fields. This is all we need to do to take advantage of Bean Validation in JPA. If our code attempts to persist or update an instance of our entity that does not pass the declared validation, an exception of type javax.validation.ConstraintViolationException will be thrown and the entity will not be persisted. As we can see, Bean Validation pretty much automates data validation, freeing us from having to manually write validation code. In addition to the two annotations discussed in the previous example, the javax.validation.constraints package contains several additional annotations that we can use to automate validation on our JPA entities. Refer to the Java EE 6 API documentation at http://java.sun.com/javaee/6/docs/api/ for the complete list. Summary In this article, we discussed some new JPA 2.0 features such as the Criteria API that allows us to build JPA queries programmatically, the Metamodel API that allows us to take advantage of Java's type safety when working with JPA, and Bean Validation that allows us to easily validate input by simply annotating our JPA entity fields. Further resources on this subject: Interacting with Databases through the Java Persistence API [article] Setting up GlassFish for JMS and Working with Message Queues [article]
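To make the comparison that opens this article concrete, the same "new states" query can be written as a plain JPQL string. The snippet below is only for contrast and reuses nothing from the original beyond the entity and field names; note that the compiler cannot check the query string, which is exactly the weakness the Criteria API addresses:

// JPQL version of the Criteria API example, for comparison
TypedQuery<UsState> query = entityManager.createQuery(
        "SELECT s FROM UsState s WHERE s.usStateNm LIKE :pattern", UsState.class);
query.setParameter("pattern", "New%");
List<UsState> matchingStatesList = query.getResultList();

Similarly, the Bean Validation behaviour described above can be observed by catching the exception. This is a sketch that assumes a resource-local EntityManager and a Customer instance with a null firstName:

try {
    entityManager.getTransaction().begin();
    entityManager.persist(customer);   // violates @NotNull on firstName
    entityManager.getTransaction().commit();
} catch (javax.validation.ConstraintViolationException e) {
    for (javax.validation.ConstraintViolation<?> violation : e.getConstraintViolations()) {
        System.out.println(violation.getPropertyPath() + ": " + violation.getMessage());
    }
}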

The Grails Object Relational Mapping (GORM)

Packt
08 Jun 2010
5 min read
(For more resources on Groovy DSL, see here.) The Grails framework is an open source web application framework built for the Groovy language. Grails not only leverages Hibernate under the covers as its persistence layer, but also implements its own Object Relational Mapping layer for Groovy, known as GORM. With GORM, we can take a POGO class and decorate it with DSL-like settings in order to control how it is persisted. Grails programmers use GORM classes as a mini language for describing the persistent objects in their application. In this section, we will do a whistle-stop tour of the features of Grails. This won't be a tutorial on building Grails applications, as the subject is too big to be covered here. Our main focus will be on how GORM implements its Object model in the domain classes. Grails quick start Before we proceed, we need to install Grails and get a basic app installation up and running. The Grails' download and installation instructions can be found at http://www.grails.org/Installation. Once it has been installed, and with the Grails binaries in your path, navigate to a workspace directory and issue the following command: grails create-app GroovyDSL This builds a Grails application tree called GroovyDSL under your current workspace directory. If we now navigate to this directory, we can launch the Grails app. By default, the app will display a welcome page at http://localhost:8080/GroovyDSL/. cd GroovyDSLgrails run-app The grails-app directory The GroovyDSL application that we built earlier has a grails-app subdirectory, which is where the application source files for our application will reside. We only need to concern ourselves with the grails-app/domain directory for this discussion, but it's worth understanding a little about some of the other important directories. grails-app/conf: This is where the Grails configuration files reside. grails-app/controllers: Grails uses a Model View Controller (MVC) architecture. The controller directory will contain the Groovy controller code for our UIs. grails-app/domain: This is where Grails stores the GORM model classes of the application. grails-app/view: This is where the Groovy Server Pages (GSPs), the Grails equivalent to JSPs are stored. Grails has a number of shortcut commands that allow us to quickly build out the objects for our model. As we progress through this section, we will take a look back at these directories to see what files have been generated in these directories for us. In this section, we will be taking a whistle-stop tour through GORM. You might like to dig deeper into both GORM and Grails yourself. You can find further online documentation for GORM at http://www.grails.org/GORM. DataSource configuration Out of the box, Grails is configured to use an embedded HSQL in-memory database. This is useful as a means of getting up and running quickly, and all of the example code will work perfectly well with the default configuration. Having an in-memory database is helpful for testing because we always start with a clean slate. However, for the purpose of this section, it's also useful for us to have a proper database instance to peek into, in order to see how GORM maps Groovy objects into tables. We will configure our Grails application to persist in a MySQL database instance. Grails allows us to have separate configuration environments for development, testing, and production. 
We will configure our development environment to point to a MySQL instance, but we can leave the production and testing environments as they are. First of all we need to create a database, by using the mysqladmin command. This command will create a database called groovydsl, which is owned by the MySQL root user. mysqladmin -u root create groovydsl Database configuration in Grails is done by editing the DataSource.groovy source file in grails-app/conf. We are interested in the environments section of this file. environments { development { dataSource { dbCreate = "create-drop" url = "jdbc:mysql://localhost/groovydsl" driverClassName = "com.mysql.jdbc.Driver" username = "root" password = "" } } test { dataSource { dbCreate = "create-drop" url = "jdbc:hsqldb:mem:testDb" } } production { dataSource { dbCreate = "update" url = "jdbc:hsqldb:mem:testDb" } }} The first interesting thing to note is that this is a mini Groovy DSL for describing data sources. In the previous version, we have edited the development dataSource entry to point to the MySQL groovydsl database that we created. In early versions of Grails, there were three separate DataSource files that need to be configured for each environment, for example, DevelopmentDataSource.groovy. The equivalent DevelopmentDataSource.groovy file would be as follows: class DevelopmentDataSource { boolean pooling = true String dbCreate = "create-drop" String url = " jdbc:mysql://localhost/groovydsl " String driverClassName = "com.mysql.jdbc.Driver" String username = "root" String password = ""} The dbCreate field tells GORM what it should do with tables in the database, on startup. Setting this to create-drop will tell GORM to drop a table if it exists already, and create a new table, each time it runs. This will keep the database tables in sync with our GORM objects. You can also set dbCreate to update or create. DataSource.groovy is a handy little DSL for configuring the GORM database connections. Grails uses a utility class—groovu.utl. ConfigSlurper—for this DSL. The ConfigSlurper class allows us to easily parse a structured configuration file and convert it into a java.util.Properties object if we wish. Alternatively, we can navigate the ConfigObject returned by using dot notation. We can use the ConfigSlurper to open and navigate DataSource.groovy as shown in the next code snippet. ConfigSlurper has a built-in ability to partition the configuration by environment. If we construct the ConfigSlurper for a particular environment, it will only load the settings appropriate to that environment. def development = new ConfigSlurper("development").parse(newFile('DataSource.groovy').toURL())def production = new ConfigSlurper("production").parse(newFile('DataSource.groovy').toURL())assert development.dataSource.dbCreate == "create-drop"assert production.dataSource.dbCreate == "update"def props = development.toProperties()assert props["dataSource.dbCreate"] == "create-drop"
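The excerpt above stops at the data source configuration, but it may help to preview the kind of class that will live in grails-app/domain. The Book class below is purely an illustration (it is not part of the GroovyDSL example) showing how GORM lets us decorate a POGO with DSL-like constraints and mapping settings:

// grails-app/domain/Book.groovy -- a minimal GORM domain class sketch
class Book {
    String title
    String author
    Date releaseDate

    // DSL-style blocks controlling validation and persistence
    static constraints = {
        title(blank: false)
        author(blank: false, maxSize: 50)
        releaseDate(nullable: true)
    }

    static mapping = {
        table 'books'                        // override the default table name
        releaseDate column: 'release_dt'     // override the default column name
    }
}

With dbCreate set to create-drop in the development environment we just configured, GORM would recreate the corresponding table on every run, which makes it easy to peek into the MySQL groovydsl database and see how these settings map onto the schema.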

JIRA: Programming Workflows

Packt
01 Dec 2011
20 min read
(For more resources on this topic, see here.) Introduction Workflows are one standout feature which help users to transform JIRA into a user-friendly system. It helps users to define a lifecycle for the issues, depending on the issue type, the purpose for which they are using JIRA, and so on. As the Atlassian documentation says at http://confluence.atlassian.com/display/JIRA/Configuring+Workflow: A JIRA workflow is the set of steps and transitions an issue goes through during its lifecycle. Workflows typically represent business processes. JIRA uses Opensymphony's OSWorkflow which is highly configurable, and more importantly pluggable, to cater for the various requirements. JIRA uses three different plugin modules to add extra functionalities into its workflow, which we will see in detail through this chapter. To make things easier, JIRA ships with a default workflow. We can't modify the default workflow, but can copy it into a new workflow and amend it to suit our needs. Before we go into the development aspect of a workflow, it makes sense to understand the various components of a workflow. The two most important components of a JIRA workflow are Step and Transition. At any point of time, an Issue will be in a step. Each step in the workflow is linked to a workflow Status (http://confluence.atlassian.com/display/JIRA/Defining+%27Status%27+F ield+Values) and it is this status that you will see on the issue at every stage. A transition, on the other hand, is a link between two steps. It allows the user to move an issue from one step to another (which essentially moves the issue from one status to another). Few key points to remember or understand about a workflow: An issue can exist in only one step at any point in time A status can be mapped to only one step in the workflow A transition is always one-way. So if you need to go back to the previous step, you need a different transition A transition can optionally specify a screen to be presented to the user with the right fields on it OSWorkflow, and hence JIRA, provides us with the option of adding various elements into a workflow transition which can be summarized as follows: Conditions: A set of conditions that need to be satisfied before the user can actually see the workflow action (transition) on the issue Validators: A set of validators which can be used to validate the user input before moving to the destination step Post Functions: A set of actions which will be performed after the issue is successfully moved to the destination step These three elements give us the flexibility of handling the various use cases when an issue is moved from one status to another. JIRA ships with a few built-in conditions, validators, and post functions. There are plugins out there which also provide a wide variety of useful workflow elements. And if you still don't find the one you are looking for, JIRA lets us write them as plugins. We will see how to do it in the various recipes in this chapter. Hopefully, that gives you a fair idea about the various workflow elements. A lot more on JIRA workflows can be found in the JIRA documentation at http://confluence.atlassian.com/display/JIRA/Configuring+Workflow. Writing a workflow condition What are workflow conditions? They determine whether a workflow action is available or not. 
Considering the importance of a workflow in installations and how there is a need to restrict the actions either to a set of people, roles, and so on, or based on some criteria (for example, the field is not empty!), writing workflow conditions is inevitable. Workflow conditions are created with the help of the workflow-condition module. The following are the key attributes and elements supported. See http://confluence.atlassian.com/display/JIRADEV/Workflow+Plugin+Modules#WorkflowPluginModules-Conditions for more details. Attributes: Name Description key This should be unique within the plugin. class Class to provide contexts for rendered velocity templates. Must implement the com.atlassian.jira.plugin.workflow.WorkflowPluginConditionFactory interface. i18n-name-key The localization key for the human-readable name of the plugin module. name Human-readable name of the workflow condition. Elements: Name Description description Description of the workflow condition. condition-class Class to determine whether the user can see the workflow transition. Must implement com.opensymphony.workflow.Condition. Recommended to extend the com.atlassian.jira.workflow.condition.AbstractJiraCondition class. resource type="velocity" Velocity templates for the workflow condition views. Getting ready As usual, create a skeleton plugin. Create an eclipse project using the skeleton plugin and we are good to go! How to do it... In this recipe, let's assume we are going to develop a workflow condition that limits a transition only to the users belonging to a specific project role. The following are the steps to write our condition: Define the inputs needed to configure the workflow condition. We need to implement the WorkflowPluginFactory interface, which mainly exists to provide velocity parameters to the templates. It will be used to extract the input parameters that are used in defining the condition. To make it clear, the inputs here are not the inputs while performing the workflow action, but the inputs in defining the condition. The condition factory class, RoleConditionFactory in this case, extends the AbstractWorkflowPluginFactory, which implements the WorkflowPluginFactory interface. There are three abstract methods that we should implement, that is, getVelocityParamsForInput, getVelocityParamsForEdit, and getVelocityParamsForView. All of them, as the name suggests, are used for populating the velocity parameters for the different scenarios. In our example, we need to limit the workflow action to a certain project role, and so we need to select the project role while defining the condition. The three methods will be implemented as follows: private static final String ROLE_NAME = "role";private static final String ROLES = "roles";.......@Override  protected void getVelocityParamsForEdit(Map<String, Object>velocityParams, AbstractDescriptor descriptor) {    velocityParams.put(ROLE, getRole(descriptor));    velocityParams.put(ROLES, getProjectRoles());  }   @Override  protected void getVelocityParamsForInput(Map<String, Object> velocityParams) {    velocityParams.put(ROLES, getProjectRoles());  }   @Override  protected void getVelocityParamsForView(Map<String, Object> velocityParams, AbstractDescriptor descriptor) {    velocityParams.put(ROLE, getRole(descriptor));  } Let's look at the methods in detail: getVelocityParamsForInput: This method defines the velocity parameters for input scenario, that is, when the user initially configures the workflow. 
In our example, we need to display all the project roles so that the user can select one to define the condition. The method getProjectRoles merely returns all the project roles and the collection of roles is then put into the velocity parameters with the key ROLES. getVelocityParamsForView: This method defines the velocity parameters for the view scenario, that is, how the user sees the condition after it is configured. In our example, we have defined a role and so we should display it to the user after retrieving it back from the workflow descriptor. If you have noticed, the descriptor, which is an instance of AbstractDescriptor, is available as an argument in the method. All we need is to extract the role from the descriptor, which can be done as follows: private ProjectRole getRole(AbstractDescriptor descriptor){    if (!(descriptor instanceof ConditionDescriptor)) {      throw new IllegalArgumentException("Descriptor must be aConditionDescriptor.");    }      ConditionDescriptor functionDescriptor = (ConditionDescriptor)descriptor;     String role = (String) functionDescriptor.getArgs().get(ROLE);    if (role!=null && role.trim().length()>0)      return getProjectRole(role);    else       return null;  } Just check if the descriptor is a condition descriptor or not, and then extract the role as shown in the preceding snippet. getVelocityParamsForEdit: This method defines the velocity parameters for the edit scenario, that is, when the user modifies the existing condition. Here we need both the options and the selected value. Hence, we put both the project roles collection and the selected role on to the velocity parameters. The second step is to define the velocity templates for each of the three aforementioned scenarios: input, view, and edit. We can use the same template here for input and edit with a simple check to keep the old role selected for the edit scenario. Let us look at the templates: edit-roleCondition.vm: Displays all project roles and highlights the already-selected one in the edit mode. In the input mode, the same template is reused, but the selected role will be null and hence a null check is done: <tr bgcolor="#ffffff">    <td align="right" valign="top" bgcolor="#fffff0">        <span class="label">Project Role:</span>    </td>    <td bgcolor="#ffffff" nowrap>        <select name="role" id="role">        #foreach ($field in $roles)          <option value="${field.id}"            #if ($role && (${field.id}==${role.id}))                SELECTED            #end            >$field.name</option>        #end        </select>        <br><font size="1">Select the role in which the user should be present!</font>    </td></tr> view-roleCondition.vm: Displays the selected role: #if ($role)  User should have ${role.name} Role!#else  Role Not Defined#end The third step is to write the actual condition. The condition class should extend the AbstractJiraCondition class. Here we need to implement the passesCondition method. 
In our case, we retrieve the project from the issue, check if the user has the appropriate project role, and return true if the user does: public boolean passesCondition(Map transientVars, Map args,PropertySet ps) throws WorkflowException {    Issue issue = getIssue(transientVars);    User user = getCaller(transientVars, args);     project project = issue.getProjectObject();    String role = (String)args.get(ROLE);    Long roleId = new Long(role);     return projectRoleManager.isUserInProjectRole(user,projectRoleManager.getProjectRole(roleId), project);} The issue on which the condition is checked can be retrieved using the getIssue method implemented in the AbstractJiraCondition class. Similarly, the user can be retrieved using the getCaller method. In the preceding method, projectRoleManager is injected in the constructor, as we have seen before. We can see that the ROLE key is used to retrieve the project role ID from the args parameter in the passesCondition method. In order for the ROLE key to be available in the args map, we need to override the getDescriptorParams method in the condition factory class, RoleConditionFactory in this case. The getDescriptorParams method returns a map of sanitized parameters, which will be passed into workflow plugin instances from the values in an array form submitted by velocity, given a set of name:value parameters from the plugin configuration page (that is, the 'input-parameters' velocity template). In our case, the method is overridden as follows: public Map<String, String> getDescriptorParams(Map<String, Object>conditionParams) {    if (conditionParams != null &&conditionParams.containsKey(ROLE))        {            return EasyMap.build(ROLE,extractSingleParam(conditionParams, ROLE));        }        // Create a 'hard coded' parameter        return EasyMap.build();  } The method here builds a map of the key:value pair, where key is ROLE and the value is the role value entered in the input configuration page. The extractSingleParam method is implemented in the AbstractWorkflowPluginFactory class. The extractMultipleParams method can be used if there is more than one parameter to be extracted! All that is left now is to populate the atlassian-plugin.xml file with the aforementioned components. We use the workflow-condition module and it looks like the following block of code: <workflow-condition key="role-condition" name="Role BasedCondition"  class="com.jtricks.RoleConditionFactory">    <description>Role Based Workflow Condition</description>    <condition-class>com.jtricks.RoleCondition</condition-class>    <resource type="velocity" name="view"location="templates/com/jtricks/view-roleCondition.vm"/>    <resource type="velocity" name="input-parameters"location="templates/com/jtricks/edit-roleCondition.vm"/>    <resource type="velocity" name="edit-parameters" location="templates/com/jtricks/edit-roleCondition.vm"/></workflow-condition> Package the plugin and deploy it! How it works... After the plugin is deployed, we need to modify the workflow to include the condition. The following screenshot is how the condition looks when it is added initially. This, as you now know, is rendered using the input template: After the condition is added (that is, after selecting the Developers role), the view is rendered using the view template and looks as shown in the following screenshot: (Move the mouse over the image to enlarge.) 
If you try to edit it, the screen will be rendered using the edit template, as shown in the following screenshot: Note that the Developers role is already selected. After the workflow is configured, when the user goes to an issue, he/she will be presented with the transition only if he/she is a member of the project role where the issue belongs. It is while viewing the issue that the passesCondition method in the condition class is executed. Writing a workflow validator Workflow validators are specific validators that check whether some pre-defined constraints are satisfied or not while progressing on a workflow. The constraints are configured in the workflow and the user will get an error if some of them are not satisfied. A typical example would be to check if a particular field is present or not before the issue is moved to a different status. Workflow validators are created with the help of the workflow- validator module. The following are the key attributes and elements supported. Attributes: Name Description key This should be unique within the plugin. class Class to provide contexts for rendered velocity templates. Must implement the com.atlassian.jira.plugin.workflow.WorkflowPluginValidatorFactory interface. i18n-name-key The localization key for the human-readable name of the plugin module. name Human-readable name of the workflow validator. Elements: Name Description description Description of the workflow validator. validator-class Class which does the validation. Must implement com.opensymphony.workflow.Validator. resource type="velocity" Velocity templates for the workflow validator views. See http://confluence.atlassian.com/display/JIRADEV/Workflow+Plugin+Modules#WorkflowPluginModules-Validators for more details. Getting ready As usual, create a skeleton plugin. Create an eclipse project using the skeleton plugin and we are good to go! How to do it... Let us consider writing a validator that checks whether a particular field has a value entered on the issue or not! We can do this using the following steps: Define the inputs needed to configure the workflow validator: We need to implement the WorkflowPluginValidatorFactory interface, which mainly exists to provide velocity parameters to the templates. It will be used to extract the input parameters that are used in defining the validator. To make it clear, the inputs here are not the input while performing the workflow action, but the inputs in defining the validator. The validator factory class, FieldValidatorFactory in this case, extends the AbstractWorkflowPluginFactory interface and implements the WorkflowPluginValidatorFactory interface. Just like conditions, there are three abstract methods that we should implement. They are getVelocityParamsForInput, getVelocityParamsForEdit, and getVelocityParamsForView. All of them, as the names suggest, are used for populating the velocity parameters in different scenarios. In our example, we have a single input field, which is the name of a custom field. 
The three methods will be implemented as follows:

@Override
protected void getVelocityParamsForEdit(Map velocityParams, AbstractDescriptor descriptor) {
    velocityParams.put(FIELD_NAME, getFieldName(descriptor));
    velocityParams.put(FIELDS, getCFFields());
}

@Override
protected void getVelocityParamsForInput(Map velocityParams) {
    velocityParams.put(FIELDS, getCFFields());
}

@Override
protected void getVelocityParamsForView(Map velocityParams, AbstractDescriptor descriptor) {
    velocityParams.put(FIELD_NAME, getFieldName(descriptor));
}

You may have noticed that the methods look quite similar to the ones in a workflow condition, except for the business logic. Let us look at the methods in detail:

getVelocityParamsForInput: This method defines the velocity parameters for the input scenario, that is, when the user initially configures the workflow. In our example, we need to display all the custom fields so that the user can select one to use in the validator. The getCFFields method returns all the custom fields, and the collection of fields is then put into the velocity parameters under the key fields.

getVelocityParamsForView: This method defines the velocity parameters for the view scenario, that is, how the user sees the validator after it is configured. In our example, we have defined a field, so we should display it to the user after retrieving it back from the workflow descriptor. You may have noticed that the descriptor, which is an instance of AbstractDescriptor, is available as an argument in the method. All we need is to extract the field name from the descriptor, which can be done as follows:

private String getFieldName(AbstractDescriptor descriptor) {
    if (!(descriptor instanceof ValidatorDescriptor)) {
        throw new IllegalArgumentException("Descriptor must be a ValidatorDescriptor.");
    }
    ValidatorDescriptor validatorDescriptor = (ValidatorDescriptor) descriptor;
    String field = (String) validatorDescriptor.getArgs().get(FIELD_NAME);
    if (field != null && field.trim().length() > 0) {
        return field;
    } else {
        return NOT_DEFINED;
    }
}

We just check whether the descriptor is a validator descriptor and then extract the field, as shown in the preceding snippet.

getVelocityParamsForEdit: This method defines the velocity parameters for the edit scenario, that is, when the user modifies the existing validator. Here we need both the options and the selected value, hence we put both the custom fields collection and the field name onto the velocity parameters.

The second step is to define the velocity templates for each of the three aforementioned scenarios, namely input, view, and edit. We can use the same template for input and edit, with a simple check to keep the old field selected in the edit scenario. Let us look at the templates:

edit-fieldValidator.vm: Displays all custom fields and highlights the already selected one in edit mode. In input mode, the field variable will be null, and so nothing is pre-selected:

<tr bgcolor="#ffffff">
  <td align="right" valign="top" bgcolor="#fffff0">
    <span class="label">Custom Fields :</span>
  </td>
  <td bgcolor="#ffffff" nowrap>
    <select name="field" id="field">
    #foreach ($cf in $fields)
      <option value="$cf.name"
        #if ($cf.name.equals($field)) SELECTED #end
      >$cf.name</option>
    #end
    </select>
    <br><font size="1">Select the Custom Field to be validated for NULL</font>
  </td>
</tr>

view-fieldValidator.vm: Displays the selected field:

#if ($field)
  Field '$field' is Required!
#end

The third step is to write the actual validator.
The validator class should implement the Validator interface. All we need here is to implement the validate method. In our example, we retrieve the custom field value from the issue and throw an InvalidInputException if the value is null (empty):

public void validate(Map transientVars, Map args, PropertySet ps) throws InvalidInputException, WorkflowException {
    Issue issue = (Issue) transientVars.get("issue");
    String field = (String) args.get(FIELD_NAME);
    CustomField customField = customFieldManager.getCustomFieldObjectByName(field);
    if (customField != null) {
        // Check if the custom field value is NULL
        if (issue.getCustomFieldValue(customField) == null) {
            throw new InvalidInputException("The field: " + field + " is required!");
        }
    }
}

The issue on which the validation is done can be retrieved from the transientVars map. customFieldManager is injected in the constructor as usual.

All that is left now is to populate the atlassian-plugin.xml file with these components. We use the workflow-validator module, and it looks like the following block of code:

<workflow-validator key="field-validator" name="Field Validator" class="com.jtricks.FieldValidatorFactory">
    <description>Field Not Empty Workflow Validator</description>
    <validator-class>com.jtricks.FieldValidator</validator-class>
    <resource type="velocity" name="view" location="templates/com/jtricks/view-fieldValidator.vm"/>
    <resource type="velocity" name="input-parameters" location="templates/com/jtricks/edit-fieldValidator.vm"/>
    <resource type="velocity" name="edit-parameters" location="templates/com/jtricks/edit-fieldValidator.vm"/>
</workflow-validator>

Package the plugin and deploy it!

Note that we have stored the field name, rather than an ID, in the workflow, unlike what we did in the workflow condition, where we stored the role ID. It is safer to use IDs, because administrators can rename fields and roles, and a rename would then require changes in the workflows.

How it works...

After the plugin is deployed, we need to modify the workflow to include the validator. The following screenshot shows how the validator looks when it is added initially. This, as you now know, is rendered using the input template:

After the validator is added (after selecting the Test Number field), it is rendered using the view template and looks as follows:

If you try to edit it, the screen will be rendered using the edit template, as shown in the following screenshot. Note that the Test Number field is already selected.

After the workflow is configured, when the user goes to an issue and tries to progress it, the validator will check whether the Test Number field has a value. It is at this point that the validate method in the FieldValidator class is executed. If the value is missing, you will see an error, as shown in the following screenshot:
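One helper this recipe uses but never shows is getCFFields, which the factory calls to fetch the custom fields offered in the drop-down. A minimal sketch is given below; it assumes the standard CustomFieldManager API and constructor injection, so treat the exact shape as an assumption rather than the book's code:

// Hypothetical sketch of the getCFFields helper used by FieldValidatorFactory.
// Assumes customFieldManager (com.atlassian.jira.issue.CustomFieldManager) is
// injected through the constructor, as done elsewhere in this recipe, and that
// java.util.List and com.atlassian.jira.issue.fields.CustomField are imported.
private List<CustomField> getCFFields() {
    // Return every custom field configured in the JIRA instance; the velocity
    // template iterates over this collection to build the field drop-down.
    return customFieldManager.getCustomFieldObjects();
}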
Adding Authentication

Packt
23 Jan 2015
15 min read
This article, written by Mat Ryer, the author of Go Programming Blueprints, builds on a chat application that is focused on high-performance transmission of messages from the clients to the server and back again; as it stands, our users have no way of knowing who they are talking to. One solution to this problem is to build some kind of signup and login functionality and let our users create accounts and authenticate themselves before they can open the chat page. (For more resources related to this topic, see here.)

Whenever we are about to build something from scratch, we must ask ourselves how others have solved this problem before (it is extremely rare to encounter genuinely original problems), and whether any open solutions or standards already exist that we can make use of. Authorization and authentication are hardly new problems, especially in the world of the Web, with many different protocols out there to choose from. So how do we decide the best option to pursue? As always, we must look at this question from the point of view of the user. A lot of websites these days allow you to sign in using accounts you already have on a variety of social media or community websites. This saves users the tedious job of entering all their account information over and over again as they decide to try out different products and services. It also has a positive effect on the conversion rates for new sites.

In this article, we will enhance our chat codebase to add authentication, which will allow our users to sign in using Google, Facebook, or GitHub, and you'll see how easy it is to add other sign-in portals too. In order to join the chat, users must first sign in. Following this, we will use the authorized data to augment our user experience so everyone knows who is in the room, and who said what. In this article, you will learn to:

Use the decorator pattern to wrap http.Handler types to add additional functionality to handlers
Serve HTTP endpoints with dynamic paths
Use the Gomniauth open source project to access authentication services
Get and set cookies using the http package
Encode objects as Base64 and back to normal again
Send and receive JSON data over a web socket
Give different types of data to templates
Work with channels of your own types

Handlers all the way down

For our chat application, we implemented our own http.Handler type in order to easily compile, execute, and deliver HTML content to browsers. Since this is a very simple but powerful interface, we are going to continue to use it wherever possible when adding functionality to our HTTP processing. In order to determine whether a user is authenticated, we will create an authentication wrapper handler that performs the check, and passes execution on to the inner handler only if the user is authenticated. Our wrapper handler will satisfy the same http.Handler interface as the object inside it, allowing us to wrap any valid handler. In fact, even the authentication handler we are about to write could later be encapsulated inside a similar wrapper if needed.

Diagram of a chaining pattern when applied to HTTP handlers

The preceding figure shows how this pattern could be applied in a more complicated HTTP handler scenario. Each object implements the http.Handler interface, which means that the object could be passed into the http.Handle method to directly handle a request, or it can be given to another object, which adds some kind of extra functionality. The Logging handler might write to a logfile before and after the ServeHTTP method is called on the inner handler.
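To make the wrapping idea concrete, a logging decorator along these lines might look like the following sketch. This is a hypothetical illustration rather than code from the chat application; the logHandler and MustLog names are made up for this example:

package main

import (
    "log"
    "net/http"
)

// logHandler wraps another http.Handler and logs before and after the
// inner handler runs.
type logHandler struct {
    next http.Handler
}

func (h *logHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
    log.Println("request started:", r.URL.Path)
    h.next.ServeHTTP(w, r) // delegate to the wrapped handler
    log.Println("request finished:", r.URL.Path)
}

// MustLog decorates any http.Handler with logging, mirroring the shape of
// the MustAuth helper we are about to write.
func MustLog(handler http.Handler) http.Handler {
    return &logHandler{next: handler}
}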
Because the inner handler is just another http.Handler, any other handler can be wrapped in (or decorated with) the Logging handler. It is also common for an object to contain logic that decides which inner handler should be executed. For example, our authentication handler will either pass the execution to the wrapped handler, or handle the request itself by issuing a redirect to the browser. That's plenty of theory for now; let's write some code. Create a new file called auth.go in the chat folder:

package main

import (
    "net/http"
)

type authHandler struct {
    next http.Handler
}

func (h *authHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
    if _, err := r.Cookie("auth"); err == http.ErrNoCookie {
        // not authenticated
        w.Header().Set("Location", "/login")
        w.WriteHeader(http.StatusTemporaryRedirect)
    } else if err != nil {
        // some other error
        panic(err.Error())
    } else {
        // success - call the next handler
        h.next.ServeHTTP(w, r)
    }
}

func MustAuth(handler http.Handler) http.Handler {
    return &authHandler{next: handler}
}

The authHandler type not only implements the ServeHTTP method (which satisfies the http.Handler interface) but also stores (wraps) an http.Handler in the next field. Our MustAuth helper function simply creates an authHandler that wraps any other http.Handler. This wrapping pattern is what allows us to easily add authentication to our code in main.go. Let us tweak the following root mapping line:

http.Handle("/", &templateHandler{filename: "chat.html"})

Let us change the first argument to make it explicit about the page meant for chatting. Next, let's use the MustAuth function to wrap templateHandler for the second argument:

http.Handle("/chat", MustAuth(&templateHandler{filename: "chat.html"}))

Wrapping templateHandler with the MustAuth function will cause execution to run first through our authHandler, and only reach templateHandler if the request is authenticated. The ServeHTTP method in our authHandler will look for a special cookie called auth, and use the Header and WriteHeader methods on http.ResponseWriter to redirect the user to a login page if the cookie is missing. Build and run the chat application and try to hit http://localhost:8080/chat:

go build -o chat
./chat -host=":8080"

You need to delete your cookies to clear out previous auth tokens, or any other cookies that might be left over from other development projects served through localhost. If you look in the address bar of your browser, you will notice that you are immediately redirected to the /login page. Since we cannot handle that path yet, you'll just get a 404 page not found error.

Making a pretty social sign-in page

There is no excuse for building ugly apps, and so we will build a social sign-in page that is as pretty as it is functional. Bootstrap is a frontend framework used to develop responsive projects on the Web. It provides CSS and JavaScript code that solves many user-interface problems in a consistent and good-looking way. While sites built using Bootstrap all tend to look the same (although there are plenty of ways in which the UI can be customized), it is a great choice for early versions of apps, or for developers who don't have access to designers. If you build your application using the semantic standards set forth by Bootstrap, it becomes easy to make a Bootstrap theme for your site or application, and you know it will slot right into your code.
We will use the version of Bootstrap hosted on a CDN so we don't have to worry about downloading and serving our own version through our chat application. This means that in order to render our pages properly, we will need an active Internet connection, even during development. If you prefer to download and host your own copy of Bootstrap, you can do so. Keep the files in an assets folder and add the following call to your main function (it uses http.Handle to serve the assets via your application):

http.Handle("/assets/", http.StripPrefix("/assets",
    http.FileServer(http.Dir("/path/to/assets/"))))

Notice how the http.StripPrefix and http.FileServer functions return objects that satisfy the http.Handler interface, as per the decorator pattern that we implement with our MustAuth helper function. In main.go, let's add an endpoint for the login page:

http.Handle("/chat", MustAuth(&templateHandler{filename: "chat.html"}))
http.Handle("/login", &templateHandler{filename: "login.html"})
http.Handle("/room", r)

Obviously, we do not want to use the MustAuth method for our login page because it will cause an infinite redirection loop. Create a new file called login.html inside our templates folder, and insert the following HTML code:

<html>
  <head>
    <title>Login</title>
    <link rel="stylesheet" href="//netdna.bootstrapcdn.com/bootstrap/3.1.1/css/bootstrap.min.css">
  </head>
  <body>
    <div class="container">
      <div class="page-header">
        <h1>Sign in</h1>
      </div>
      <div class="panel panel-danger">
        <div class="panel-heading">
          <h3 class="panel-title">In order to chat, you must be signed in</h3>
        </div>
        <div class="panel-body">
          <p>Select the service you would like to sign in with:</p>
          <ul>
            <li><a href="/auth/login/facebook">Facebook</a></li>
            <li><a href="/auth/login/github">GitHub</a></li>
            <li><a href="/auth/login/google">Google</a></li>
          </ul>
        </div>
      </div>
    </div>
  </body>
</html>

Restart the web server and navigate to http://localhost:8080/login. You will notice that it now displays our sign-in page:

Endpoints with dynamic paths

Pattern matching for the http package in the Go standard library isn't the most comprehensive and fully featured implementation out there. For example, Ruby on Rails makes it much easier to have dynamic segments inside the path:

"auth/:action/:provider_name"

This then provides a data map (or dictionary) containing the values that it automatically extracted from the matched path. So if you visit auth/login/google, then params[:provider_name] would equal google, and params[:action] would equal login. The most the http package lets us specify by default is a path prefix, which we can do by leaving a trailing slash at the end of the pattern:

"auth/"

We would then have to manually parse the remaining segments to extract the appropriate data. This is acceptable for relatively simple cases, which suits our needs for the time being since we only need to handle a few different paths such as:

/auth/login/google
/auth/login/facebook
/auth/callback/google
/auth/callback/facebook

If you need to handle more advanced routing situations, you might want to consider using dedicated packages such as Goweb, Pat, Routes, or mux. For extremely simple cases such as ours, the built-in capabilities will do. We are going to create a new handler that powers our login process. In auth.go, add the following loginHandler code:

// loginHandler handles the third-party login process.
// format: /auth/{action}/{provider}
func loginHandler(w http.ResponseWriter, r *http.Request) {
    segs := strings.Split(r.URL.Path, "/")
    action := segs[2]
    provider := segs[3]
    switch action {
    case "login":
        log.Println("TODO handle login for", provider)
    default:
        w.WriteHeader(http.StatusNotFound)
        fmt.Fprintf(w, "Auth action %s not supported", action)
    }
}

In the preceding code, we break the path into segments using strings.Split before pulling out the values for action and provider. If the action value is known, we will run the specific code; otherwise, we will write out an error message and return an http.StatusNotFound status code (which, in the language of HTTP status codes, is a 404). We will not bullet-proof our code right now, but it's worth noticing that if someone hits loginHandler with too few segments, our code will panic because it expects segs[2] and segs[3] to exist. For extra credit, see whether you can protect against this and return a nice error message instead of a panic if someone hits /auth/nonsense (one possible approach is sketched at the end of this section).

Our loginHandler is only a function and not an object that implements the http.Handler interface. This is because, unlike other handlers, we don't need it to store any state. The Go standard library supports this, so we can use the http.HandleFunc function to map it in a way similar to how we used http.Handle earlier. In main.go, update the handlers:

http.Handle("/chat", MustAuth(&templateHandler{filename: "chat.html"}))
http.Handle("/login", &templateHandler{filename: "login.html"})
http.HandleFunc("/auth/", loginHandler)
http.Handle("/room", r)

Rebuild and run the chat application:

go build -o chat
./chat -host=":8080"

Hit the following URLs and notice the output logged in the terminal:

http://localhost:8080/auth/login/google outputs TODO handle login for google
http://localhost:8080/auth/login/facebook outputs TODO handle login for facebook

We have successfully implemented a dynamic path-matching mechanism that so far just prints out TODO messages; we need to integrate with authentication services in order to make our login process work.

OAuth2

OAuth2 is an open authentication and authorization standard designed to allow resource owners to give clients delegated access to private data (such as wall posts or tweets) via an access token exchange handshake. Even if you do not wish to access the private data, OAuth2 is a great option that allows people to sign in using their existing credentials, without exposing those credentials to a third-party site. In this case, we are the third party and we want to allow our users to sign in using services that support OAuth2.

From a user's point of view, the OAuth2 flow is:

1. A user selects the provider with whom they wish to sign in to the client app.
2. The user is redirected to the provider's website (with a URL that includes the client app ID) where they are asked to give permission to the client app.
3. The user signs in from the OAuth2 service provider and accepts the permissions requested by the third-party application.
4. The user is redirected back to the client app with an authorization code.
5. In the background, the client app sends the authorization code to the provider, who sends back an access token.
6. The client app uses the access token to make authorized requests to the provider, such as to get user information or wall posts.

To avoid reinventing the wheel, we will look at a few open source projects that have already solved this problem for us.
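Before we do, here is one possible answer to the extra-credit exercise above. This is a hedged sketch rather than the book's solution: it simply guards against paths with too few segments before indexing into them, and relies on the same strings, log, fmt, and net/http imports already used in auth.go:

// Hypothetical guard for loginHandler: if the path does not look like
// /auth/{action}/{provider}, respond with a 404 instead of panicking.
func loginHandler(w http.ResponseWriter, r *http.Request) {
    segs := strings.Split(r.URL.Path, "/")
    if len(segs) < 4 || segs[2] == "" || segs[3] == "" {
        w.WriteHeader(http.StatusNotFound)
        fmt.Fprintln(w, "Auth path must be in the format /auth/{action}/{provider}")
        return
    }
    action := segs[2]
    provider := segs[3]
    switch action {
    case "login":
        log.Println("TODO handle login for", provider)
    default:
        w.WriteHeader(http.StatusNotFound)
        fmt.Fprintf(w, "Auth action %s not supported", action)
    }
}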
Open source OAuth2 packages

Andrew Gerrand has been working on the core Go team since February 2010, that is, two years before Go 1.0 was officially released. His goauth2 package (see https://code.google.com/p/goauth2/) is an elegant implementation of the OAuth2 protocol written entirely in Go. Andrew's project inspired Gomniauth (see https://github.com/stretchr/gomniauth). An open source Go alternative to Ruby's omniauth project, Gomniauth provides a unified solution to access different OAuth2 services. In the future, when OAuth3 (or whatever next-generation authentication protocol it is) comes out, in theory, Gomniauth could take on the pain of implementing the details, leaving the user code untouched. For our application, we will use Gomniauth to access OAuth services provided by Google, Facebook, and GitHub, so make sure you have it installed by running the following command:

go get github.com/stretchr/gomniauth

Some of the project dependencies of Gomniauth are kept in Bazaar repositories, so you'll need to head over to http://wiki.bazaar.canonical.com to download them.

Tell the authentication providers about your app

Before we ask an authentication provider to help our users sign in, we must tell them about our application. Most providers have some kind of web tool or console where you can create applications to kick off this process. Here's one from Google:

In order to identify the client application, we need to create a client ID and secret. Despite the fact that OAuth2 is an open standard, each provider has their own language and mechanism to set things up, so you will most likely have to play around with the user interface or the documentation to figure it out in each case. At the time of writing this, in the Google Developer Console, you navigate to APIs & auth | Credentials and click on the Create new Client ID button. In most cases, for added security, you have to be explicit about the host URLs from where requests will come. For now, since we're hosting our app locally on localhost:8080, you should use that. You will also be asked for a redirect URI, which is the endpoint in our chat application to which the user will be redirected after successfully signing in. The callback will be another action on our loginHandler, so the redirection URL for the Google client will be http://localhost:8080/auth/callback/google.

Once you finish the authentication process for the providers you want to support, you will be given a client ID and secret for each provider. Make a note of these, because we will need them when we set up the providers in our chat application. If we host our application on a real domain, we have to create new client IDs and secrets, or update the appropriate URL fields on our authentication providers to ensure that they point to the right place. Either way, it's not bad practice to have a separate set of development and production keys for security.

Summary

This article showed how to add OAuth to our chat application so that we can keep track of who is saying what, while letting users log in using Google, Facebook, or GitHub. We also learned how to use wrapped handlers for efficient coding, and how to build a pretty social sign-in page.

Resources for Article:

Further resources on this subject:
WebSockets in Wildfly [article]
Using Socket.IO and Express together [article]
The Importance of Securing Web Services [article]