How-To Tutorials - Programming

Rapid Application Development with Django, the Openduty story

Bálint Csergő
01 Aug 2016
5 min read
Openduty is an open source incident escalation tool, something like PagerDuty but free and much simpler. It was born during a hackathon at Ustream back in 2014. The project received a lot of attention in the DevOps community, was featured in DevOps Weekly and Pycoders Weekly, and is listed on Full Stack Python as an example Django project. This article covers some of the design decisions we made during the hackathon and details some of the main components of the Openduty system.

Design

When we started the project, we already knew what we wanted to end up with:

- We had to work quickly; it was a hackathon, after all
- An API similar to PagerDuty
- The ability to send notifications asynchronously
- A nice calendar to organize on-call schedules (can't hurt anyone, right?)
- Tokens for authorizing notifiers

So we chose the corresponding components to reach our goal.

Get the job done quickly

If you have to develop apps rapidly in Python, Django is the framework you choose. It's a bit heavyweight, but it gives you everything you need, and sometimes even more. Don't get me wrong; I'm a big fan of Flask as well, but it can be a bit fiddly to assemble everything by hand at the start. Flask may pay off later, and you may end up with fewer dependencies, but we only had 24 hours, so we went with Django.

An API

When it comes to Django and REST APIs, one of the go-to solutions is the Django REST Framework. It has all the nuts and bolts you'll need when you're assembling an API, such as serializers, authentication, and permissions. It can even make all your API calls self-describing. Let me show you how serializers work in the REST Framework:

```python
class OnCallSerializer(serializers.Serializer):
    person = serializers.CharField()
    email = serializers.EmailField()
    start = serializers.DateTimeField()
    end = serializers.DateTimeField()
```

The code above represents a person who is on call, as exposed by the API. As you can see, it is pretty simple; you just have to define the fields. It even does the validation for you, since you have to give a type to every field. But believe me, it's capable of more, such as generating a serializer from your Django model:

```python
class SchedulePolicySerializer(serializers.HyperlinkedModelSerializer):
    rules = serializers.RelatedField(many=True, read_only=True)

    class Meta:
        model = SchedulePolicy
        fields = ('name', 'repeat_times', 'rules')
```

This example shows how you can customize a ModelSerializer, make fields read-only, and only accept given fields from an API call.
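As a side note, this is not from the Openduty code base, just a minimal sketch (assuming the OnCallSerializer defined above and a standard Django REST Framework installation, with made-up sample values) of how such a serializer validates incoming data:

```python
# Assumes the OnCallSerializer defined above and Django REST Framework installed.
data = {
    'person': 'Jane Doe',                  # sample values, made up for illustration
    'email': 'jane@example.net',
    'start': '2016-08-01T08:00:00Z',
    'end': '2016-08-01T16:00:00Z',
}
serializer = OnCallSerializer(data=data)
print(serializer.is_valid())        # True: every field parses against its declared type
print(serializer.validated_data)    # parsed values, with start/end as datetime objects

bad = OnCallSerializer(data={'person': 'Jane Doe', 'email': 'not-an-email'})
print(bad.is_valid())               # False
print(bad.errors)                   # per-field messages for email, start, and end
```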
Async Task Execution

When you have long-running tasks, such as generating huge reports, resizing images, or transcoding media, it is common practice to move their execution out of your web app into a separate layer. This decreases the load on the web servers, helps avoid long or even timed-out requests, and makes your app more resilient and scalable. In the Python world, the go-to solution for asynchronous task execution is Celery. In Openduty, we use Celery heavily to send notifications asynchronously, and also to delay the execution of any given notification task by the delay defined in the service settings.

Defining a task is this simple:

```python
@app.task(ignore_result=True)
def send_notifications(notification_id):
    try:
        notification = ScheduledNotification.objects.get(id=notification_id)
        if notification.notifier == UserNotificationMethod.METHOD_XMPP:
            notifier = XmppNotifier(settings.XMPP_SETTINGS)
        # choosing other notifiers removed from the example snippet
        notifier.notify(notification)
        # logging the task result removed from the example snippet
    except Exception:
        # exception handling trimmed in the original snippet
        raise
```

And calling an already defined task is almost as simple as calling any regular function:

```python
send_notifications.apply_async((notification.id,), eta=notification.send_at)
```

This means exactly what you think: send the notification with the ID notification.id at the time notification.send_at. But how do these things get executed? Under the hood, Celery wraps your decorated functions so that when you call them, they get enqueued instead of being executed directly. When the Celery worker detects that there is a task to be executed, it simply takes it from the queue and executes it asynchronously.

Calendar

We use django-scheduler for the awesome-looking calendar in Openduty. It is a pretty good project in general: it supports recurring events and provides a UI for your calendar, so you won't even have to fiddle with that.

Tokens and Auth

Service token implementation is a simple thing. You want tokens to be unique, and what else would you choose if not a UUID? There is a nice plugin for Django models used to handle UUID fields, called django-uuidfield. It just does what it says: it adds UUIDField support to your models. User authentication is a bit more interesting: we currently support plain Django users, and you can also use LDAP as your user provider.

Summary

This was just a short summary of the design decisions made when we coded Openduty. I also demonstrated the power of the components through some relevant snippets. If you are on a short deadline, consider using Django and its extensions. There is a good chance that somebody has already done what you need, or something similar that can be adapted to your needs, thanks to the awesome power of the open source community.

About the author

Bálint Csergő is a software engineer from Budapest, currently working as an infrastructure engineer at Hortonworks. He loves Unix systems, PHP, Python, Ruby, the Oracle database, Arduino, Java, C#, music, and beer.

Inheritance in Python

Packt
30 Dec 2010
8 min read
Python 3 Object Oriented Programming: harness the power of Python 3 objects.

- Learn how to do object-oriented programming in Python using this step-by-step tutorial
- Design public interfaces using abstraction, encapsulation, and information hiding
- Turn your designs into working software by studying the Python syntax
- Raise, handle, define, and manipulate exceptions using special error objects
- Implement object-oriented programming in Python using practical examples

Basic inheritance

Technically, every class we create uses inheritance. All Python classes are subclasses of the special class named object. This class provides very little in terms of data and behaviors (those behaviors it does provide are all double-underscore methods intended for internal use only), but it does allow Python to treat all objects in the same way.

If we don't explicitly inherit from a different class, our classes will automatically inherit from object. However, we can openly state that our class derives from object using the following syntax:

```python
class MySubClass(object):
    pass
```

This is inheritance! Since Python 3 automatically inherits from object if we don't explicitly provide a different superclass, this is technically no different from an ordinary class definition. A superclass, or parent class, is a class that is being inherited from. A subclass is a class that is inheriting from a superclass. In this case, the superclass is object, and MySubClass is the subclass. A subclass is also said to be derived from its parent class, or the subclass is said to extend the parent.

As you've probably figured out from the example, inheritance requires a minimal amount of extra syntax over a basic class definition. Simply include the name of the parent class inside a pair of parentheses after the class name, but before the colon terminating the class definition. This is all we have to do to tell Python that the new class should be derived from the given superclass.

How do we apply inheritance in practice? The simplest and most obvious use of inheritance is to add functionality to an existing class. Let's start with a simple contact manager that tracks the name and e-mail address of several people. The Contact class is responsible for maintaining a list of all contacts in a class variable, and for initializing the name and address:

```python
class Contact:
    all_contacts = []

    def __init__(self, name, email):
        self.name = name
        self.email = email
        Contact.all_contacts.append(self)
```

This example introduces us to class variables. The all_contacts list, because it is part of the class definition, is shared by all instances of this class. This means that there is only one Contact.all_contacts list, and if we call self.all_contacts on any one object, it will refer to that single list. The code in the initializer ensures that whenever we create a new contact, the list will automatically have the new object added. Be careful with this syntax: if you ever set the variable using self.all_contacts, you will actually be creating a new instance variable on the object; the class variable will still be unchanged and accessible as Contact.all_contacts.
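To make that caveat concrete, here is a minimal sketch (not from the book's text; the sample contact is made up) showing how assigning to self.all_contacts creates a new instance attribute instead of touching the shared class list:

```python
c = Contact("Some Body", "somebody@example.net")

c.all_contacts = []            # creates an instance attribute that shadows the class variable
print(c.all_contacts)          # [] -- only this object sees the new, empty list
print(Contact.all_contacts)    # the shared class list still contains the contact
print(len(Contact.all_contacts) == 1)  # True
```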
This is a very simple class that allows us to track a couple of pieces of data about our contacts. But what if some of our contacts are also suppliers that we need to order supplies from? We could add an order method to the Contact class, but that would allow people to accidentally order things from contacts who are customers or family friends. Instead, let's create a new Supplier class that acts like a Contact, but has an additional order method:

```python
class Supplier(Contact):
    def order(self, order):
        print("If this were a real system we would send "
              "{} order to {}".format(order, self.name))
```

Now, if we test this class in our trusty interpreter, we see that all contacts, including suppliers, accept a name and e-mail address in their __init__, but only suppliers have a functional order method:

```python
>>> c = Contact("Some Body", "somebody@example.net")
>>> s = Supplier("Sup Plier", "supplier@example.net")
>>> print(c.name, c.email, s.name, s.email)
Some Body somebody@example.net Sup Plier supplier@example.net
>>> c.all_contacts
[<__main__.Contact object at 0xb7375ecc>, <__main__.Supplier object at 0xb7375f8c>]
>>> c.order("I need pliers")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: 'Contact' object has no attribute 'order'
>>> s.order("I need pliers")
If this were a real system we would send I need pliers order to Sup Plier
>>>
```

So now our Supplier class can do everything a Contact can do (including adding itself to the list of all_contacts) and all the special things it needs to handle as a supplier. This is the beauty of inheritance.

Extending built-ins

One of the most interesting uses of this kind of inheritance is adding functionality to built-in classes. In the Contact class seen earlier, we are adding contacts to a list of all contacts. What if we also wanted to search that list by name? Well, we could add a method on the Contact class to search it, but it feels like this method actually belongs on the list itself. We can do this using inheritance:

```python
class ContactList(list):
    def search(self, name):
        '''Return all contacts that contain the search value
        in their name.'''
        matching_contacts = []
        for contact in self:
            if name in contact.name:
                matching_contacts.append(contact)
        return matching_contacts


class Contact:
    all_contacts = ContactList()

    def __init__(self, name, email):
        self.name = name
        self.email = email
        self.all_contacts.append(self)
```

Instead of instantiating a normal list as our class variable, we create a new ContactList class that extends the built-in list. Then we instantiate this subclass as our all_contacts list. We can test the new search functionality as follows:

```python
>>> c1 = Contact("John A", "johna@example.net")
>>> c2 = Contact("John B", "johnb@example.net")
>>> c3 = Contact("Jenna C", "jennac@example.net")
>>> [c.name for c in Contact.all_contacts.search('John')]
['John A', 'John B']
>>>
```

Are you wondering how we changed the built-in syntax [] into something we can inherit from? Creating an empty list with [] is actually shorthand for creating an empty list using list(); the two syntaxes behave identically:

```python
>>> [] == list()
True
```

So, the list data type is like a class that we can extend, not unlike object. As a second example, we can extend the dict class, which is the long way of creating a dictionary (the {} syntax):

```python
class LongNameDict(dict):
    def longest_key(self):
        longest = None
        for key in self:
            if not longest or len(key) > len(longest):
                longest = key
        return longest
```

This is easy to test in the interactive interpreter:

```python
>>> longkeys = LongNameDict()
>>> longkeys['hello'] = 1
>>> longkeys['longest yet'] = 5
>>> longkeys['hello2'] = 'world'
>>> longkeys.longest_key()
'longest yet'
```

Most built-in types can be similarly extended. Commonly extended built-ins are object, list, set, dict, file, and str. Numerical types such as int and float are also occasionally inherited from.
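To round off the point that str can be extended as well, here is a small hedged sketch; EmailStr is a made-up class, not part of the book's example code:

```python
class EmailStr(str):
    """A str subclass that can report the domain part of an address."""
    def domain(self):
        # partition splits on the first "@" and returns (head, sep, tail)
        return self.partition("@")[2]

addr = EmailStr("somebody@example.net")
print(addr.domain())   # example.net
print(addr.upper())    # SOMEBODY@EXAMPLE.NET -- normal str behaviour still works
```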
Overriding and super

So inheritance is great for adding new behavior to existing classes, but what about changing behavior? Our Contact class allows only a name and an e-mail address. This may be sufficient for most contacts, but what if we want to add a phone number for our close friends? We can do this easily by just setting a phone attribute on the contact after it is constructed. But if we want to make this third variable available on initialization, we have to override __init__. Overriding is altering or replacing a method of the superclass with a new method (with the same name) in the subclass. No special syntax is needed to do this; the subclass's newly created method is automatically called instead of the superclass's method. For example:

```python
class Friend(Contact):
    def __init__(self, name, email, phone):
        self.name = name
        self.email = email
        self.phone = phone
```

Any method can be overridden, not just __init__. Before we go on, however, we need to correct some problems in this example. Our Contact and Friend classes have duplicate code to set up the name and email properties; this can make maintenance complicated, as we have to update the code in two or more places. More alarmingly, our Friend class is neglecting to add itself to the all_contacts list we have created on the Contact class.

What we really need is a way to call code on the parent class. This is what the super function does; it returns the object as an instance of the parent class, allowing us to call the parent method directly:

```python
class Friend(Contact):
    def __init__(self, name, email, phone):
        super().__init__(name, email)
        self.phone = phone
```

This example first gets the instance of the parent object using super, and calls __init__ on that object, passing in the expected arguments. It then does its own initialization, namely setting the phone attribute.

A super() call can be made inside any method, not just __init__. This means all methods can be modified via overriding and calls to super. The call to super can also be made at any point in the method; we don't have to make it the first line. For example, we may need to manipulate the incoming parameters before forwarding them to the superclass, as the sketch below shows.
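Here is a minimal sketch of that last point (not from the book's text; the name-cleaning rule is made up for illustration), where the subclass adjusts an argument before handing it to the parent initializer:

```python
class Friend(Contact):
    def __init__(self, name, email, phone):
        # normalize the argument first, then let the parent class do its usual setup
        name = name.strip().title()
        super().__init__(name, email)
        self.phone = phone

f = Friend("  jane doe ", "jane@example.net", "555-1234")
print(f.name)                       # Jane Doe
print(f in Contact.all_contacts)    # True -- the parent initializer registered it
```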

Dependency Management in SBT

Packt
07 Oct 2013
17 min read
In the early days of Java, when projects were small and didn't have many external dependencies, developers tended to manage dependencies manually by copying the required JAR files into the lib folder and checking it into their SCM/VCS along with their code. A lot of developers still follow this approach today, but due to the aforementioned issues, it is not an option for larger projects. In many enterprises, there are central servers, FTP sites, shared drives, and so on, which store the approved libraries as well as internally released libraries. But managing and tracking them manually is never easy, and teams end up relying on scripts and build files.

Maven came along and standardized this process. Maven defines standards for the project format used to declare dependencies, formats for the repositories that store libraries, an automated process to fetch transitive dependencies, and much more. Most systems today build either on Maven's dependency management or on Ivy's, which can function in the same way and also provides its own standards, heavily inspired by Maven. SBT uses Ivy under the hood for dependency management, but uses a custom DSL to specify dependencies.

Quick introduction to Maven or Ivy dependency management

Apache Maven is not a dependency management tool per se; it is a project management and comprehension tool. Maven is configured using a Project Object Model (POM), which is represented in an XML file. A POM has all the details related to the project, from basics such as groupId, artifactId, and version, to environment settings such as prerequisites and repositories.

Apache Ivy is a dependency management tool and a subproject of Apache Ant. Ivy integrates publicly available artifact repositories automatically. The project dependencies are declared using XML in a file called ivy.xml, commonly known as the Ivy file. Ivy is configured using a settings file (ivysettings.xml), which defines a set of dependency resolvers. Each resolver points to an Ivy file and/or artifacts. So, the configuration essentially indicates which resource should be used to resolve a module.

How Ivy works

The following diagram depicts the usual cycle of Ivy modules between different locations. The tags along the arrows are the Ivy commands that need to be run for each task, which are explained in detail in the following sections.

Resolve

Resolve is the phase where Ivy resolves the dependencies of a module by accessing the Ivy file defined for that module. For each dependency in the Ivy file, Ivy finds the module using the configuration. A module could be an Ivy file or an artifact. Once a module is found, its Ivy file is downloaded to the Ivy cache. Then, Ivy checks for the dependencies of that module. If the module has dependencies on other modules, Ivy recursively traverses the graph of dependencies, handling conflicts along the way. After traversing the whole graph, Ivy downloads all the dependencies that are not already in the cache and have not been evicted by conflict management. Ivy uses a filesystem-based cache to avoid re-downloading dependencies that are already available locally. In the end, an XML report of the module's dependencies is generated in the cache.

Retrieve

Retrieve is the act of copying artifacts from the cache to another directory structure. The destination for the files to be copied is specified using a pattern.
Before copying, Ivy checks whether the files have already been copied, to maximize performance. After the dependencies have been copied, the build becomes independent of Ivy.

Publish

Ivy can then be used to publish the module to a repository. This can be done by running a task manually or from a continuous integration server.

Dependency management in SBT

In SBT, library dependencies can be managed in the following two ways:

- By specifying the libraries in the build definition
- By manually adding the JAR files of the library

Manual addition of JAR files may seem simple at the beginning of a project. But as the project grows, it may depend on a lot of other projects, or the projects it depends on may have newer versions. These situations make handling dependencies manually a cumbersome task. Hence, most developers prefer to automate dependency management.

Automatic dependency management

SBT uses Apache Ivy to handle automatic dependency management. When dependencies are configured in this manner, SBT handles their retrieval and update. An update does not happen every time there is a change, since that would slow down all the processes; to update the dependencies, you need to execute the update task. Other tasks depend on the output generated by update, so whenever dependencies are modified, an update should be run for the changes to be reflected.

Project dependencies can be specified in the following ways:

- Declarations within the build definition
- Maven dependency files, that is, POM files
- Configuration and settings files used for Ivy
- Adding JAR files manually

Declaring dependencies in the build definition

The setting key libraryDependencies is used to configure the dependencies of a project. The following are some of the possible syntaxes for libraryDependencies:

```scala
libraryDependencies += groupID % artifactID % revision
libraryDependencies += groupID %% artifactID % revision
libraryDependencies += groupID % artifactID % revision % configuration
libraryDependencies ++= Seq(
  groupID %% artifactID % revision,
  groupID %% otherID % otherRevision
)
```

Let's explain some of these terms in more detail:

- groupID: This is the ID of the organization/group that published the artifact
- artifactID: This is the name of the project on which there is a dependency
- revision: This is the Ivy revision of the project on which there is a dependency
- configuration: This is the Ivy configuration for which we want to specify the dependency

Notice that the first and second syntaxes are not the same. The second one has a %% symbol after groupID. This tells SBT to append the project's Scala version to artifactID. So, in a project with Scala version 2.9.1,

```scala
libraryDependencies ++= Seq("mysql" %% "mysql-connector-java" % "5.1.18")
```

is equivalent to:

```scala
libraryDependencies ++= Seq("mysql" % "mysql-connector-java_2.9.1" % "5.1.18")
```

The %% symbol is very helpful for cross-building a project. Cross-building is the process of building a project for multiple Scala versions. SBT uses the crossScalaVersions key's value to configure dependencies for multiple versions of Scala. Cross-building is possible only for Scala version 2.8.0 or higher. The %% symbol simply appends the current Scala version, so it should not be used when you know that there is no artifact published for that Scala version, even if an artifact built against an older version would be compatible. In such cases, you have to hardcode the version using the first syntax. Using the third syntax, we can add a dependency only for a specific configuration.
This is very useful, as some dependencies are not required by all configurations. For example, a dependency on a testing library is needed only for the test configuration. We could declare this as follows:

```scala
libraryDependencies ++= Seq("org.specs2" % "specs2_2.9.1" % "1.12.3" % "test")
```

We can also specify a dependency for the provided scope (where the JDK or container provides the dependency at runtime). This scope is only available on the compilation and test classpaths, and is not transitive. Generally, servlet-api dependencies are declared in this scope:

```scala
libraryDependencies += "javax.servlet" % "javax.servlet-api" % "3.0.1" % "provided"
```

The revision does not have to be a single fixed version; it can be set with some constraints, and Ivy will select the one that matches best. For example, it could be the latest integration build, 12.0 or higher, or even a range of versions.

A URL for the dependency JAR

If the dependency is not published to a repository, you can also specify a direct URL to the JAR file:

```scala
libraryDependencies += groupID %% artifactID % revision from directURL
```

directURL is used only if the dependency cannot be found in the specified repositories and is not included in published metadata. For example:

```scala
libraryDependencies += "slinky" % "slinky" % "2.1" from "http://slinky2.googlecode.com/svn/artifacts/2.1/slinky.jar"
```

Extra attributes

SBT also supports Ivy's extra attributes. To specify extra attributes, one can use the extra method. Consider that the project has a dependency on the following Ivy module:

```xml
<ivy-module version="2.0">
  <info organization="packt" module="introduction"
        e:media="screen" status="integration" e:codeWord="PP1872"/>
</ivy-module>
```

A dependency on this can be declared by using the following:

```scala
libraryDependencies += "packt" % "introduction" % "latest.integration" extra("media" -> "screen", "codeWord" -> "PP1872")
```

The extra method can also be used to specify extra attributes for the current project, so that when it is published to the repository, its Ivy file will also have extra attributes. An example of this is as follows:

```scala
projectID <<= projectID { id => id extra("codeWord" -> "PP1952") }
```

Classifiers

Classifiers ensure that the dependency being loaded is compatible with the platform for which the project is written. For example, to fetch the dependency relevant to JDK 1.5, use the following:

```scala
libraryDependencies += "org.testng" % "testng" % "5.7" classifier "jdk15"
```

We can also have multiple classifiers, as follows:

```scala
libraryDependencies += "org.lwjgl.lwjgl" % "lwjgl-platform" % lwjglVersion classifier "natives-windows" classifier "natives-linux" classifier "natives-osx"
```

Transitivity

In logic and mathematics, a relationship between three elements is said to be transitive if, whenever it holds between the first and second elements and between the second and third elements, it also holds between the first and third elements. Relating this to the dependencies of a project, imagine that you have a project that depends on the project Foo for some of its functionality, and Foo in turn depends on another project, Bar. If a change in the project Bar affects your project's functionality, then your project indirectly depends on Bar; that is, your project has a transitive dependency on the project Bar. But if a change in the project Bar does not affect your project's functionality, then your project does not depend on the project Bar.
This means that your project does not have a transitive dependency on the project Bar. SBT cannot know whether your project has a transitive dependency or not, so to avoid dependency issues, it loads library dependencies transitively by default. In situations where this is not required for your project, you can disable it using intransitive() or notTransitive(). A common case where artifact dependencies are not required is in projects using the Felix OSGi framework (only its main JAR is required). The dependency can be declared as follows:

```scala
libraryDependencies += "org.apache.felix" % "org.apache.felix.framework" % "1.8.0" intransitive()
```

Or, it can be declared as follows:

```scala
libraryDependencies += "org.apache.felix" % "org.apache.felix.framework" % "1.8.0" notTransitive()
```

If we need to exclude certain transitive dependencies of a dependency, we can use the excludeAll or exclude method:

```scala
libraryDependencies += "log4j" % "log4j" % "1.2.15" exclude("javax.jms", "jms")

libraryDependencies += "log4j" % "log4j" % "1.2.15" excludeAll(
  ExclusionRule(organization = "com.sun.jdmk"),
  ExclusionRule(organization = "com.sun.jmx"),
  ExclusionRule(organization = "javax.jms")
)
```

Although excludeAll provides more flexibility, it should not be used in projects that will be published in the Maven style, as it cannot be represented in a pom.xml file. The exclude method is more useful in projects that require a pom.xml when being published, since it requires both the organization ID and the name to exclude a module.

Download documentation

Generally, an IDE plugin is used to download the source and API documentation JAR files. However, one can configure SBT to download the documentation without using an IDE plugin. To download a dependency's sources, add withSources() to the dependency definition. For example:

```scala
libraryDependencies += "org.apache.felix" % "org.apache.felix.framework" % "1.8.0" withSources()
```

To download API documentation JAR files, add withJavadoc() to the dependency definition. For example:

```scala
libraryDependencies += "org.apache.felix" % "org.apache.felix.framework" % "1.8.0" withSources() withJavadoc()
```

The documentation downloaded in this way is not transitive; to fetch documentation for transitive dependencies as well, use the update-classifiers task.

Dependencies using Maven files

SBT can be configured to use a Maven POM file to handle the project dependencies by using the externalPom method. The following statements can be used in the build definition:

- externalPom(): This will set pom.xml in the project's base directory as the source for project dependencies
- externalPom(baseDirectory{base => base/"myProjectPom"}): This will set the custom-named POM file myProjectPom.xml in the project's base directory as the source for project dependencies

There are a few restrictions on using a POM file, as follows:

- It can be used only for configuring dependencies.
- The repositories mentioned in the POM will not be considered. They need to be specified explicitly in the build definition or in an Ivy settings file.
- There is no support for relativePath in the parent element of the POM, and its presence will result in an error.

Dependencies using Ivy files or Ivy XML

Both Ivy settings and dependencies can be used to configure project dependencies in SBT through the build definition. They can either be loaded from a file or be given inline in the build definition.
The Ivy XML can be declared as follows:

```scala
ivyXML :=
  <dependencies>
    <dependency org="org.specs2" name="specs2" rev="1.12.3"></dependency>
  </dependencies>
```

The commands to load from a file are as follows:

- externalIvySettings(): This will set ivysettings.xml in the project's base directory as the source for dependency settings.
- externalIvySettings(baseDirectory{base => base/"myIvySettings"}): This will set the custom-named settings file myIvySettings.xml in the project's base directory as the source for dependency settings.
- externalIvySettingsURL(url("settingsURL")): This will set the settings file at settingsURL as the source for dependency settings.
- externalIvyFile(): This will set ivy.xml in the project's base directory as the source for dependencies.
- externalIvyFile(baseDirectory(_/"myIvy")): This will set the custom-named Ivy file myIvy.xml in the project's base directory as the source for project dependencies.

When using Ivy settings and configuration files, the configurations need to be mapped, because Ivy files specify their own configurations. So, classpathConfiguration must be set for the three main configurations. For example:

```scala
classpathConfiguration in Compile := Compile
classpathConfiguration in Test := Test
classpathConfiguration in Runtime := Runtime
```

Adding JAR files manually

To handle dependencies manually in SBT, you need to create a lib folder in the project and add the JAR files to it. That is the default location where SBT looks for unmanaged dependencies. If you have the JAR files located in some other folder, you can specify that in the build definition. The key used to specify the source for manually added JAR files is unmanagedBase. For example, if the JAR files your project depends on are in project/extras/dependencies instead of project/lib, modify the value of unmanagedBase as follows:

```scala
unmanagedBase <<= baseDirectory { base => base/"extras/dependencies" }
```

Here, baseDirectory is the project's root directory. unmanagedJars is a task that lists the JAR files from the unmanagedBase directory. To see the list of JAR files in the interactive shell, type the following:

```
> show unmanaged-jars
[info] ArrayBuffer()
```

Or, in the project folder, type:

```
$ sbt show unmanaged-jars
```

If you add a Spring JAR (org.springframework.aop-3.0.1.jar) to the dependencies folder, then the result of the previous command would be:

```
> show unmanaged-jars
[info] ArrayBuffer(Attributed(/home/introduction/extras/dependencies/org.springframework.aop-3.0.1.jar))
```

It is also possible to specify the path(s) of JAR files for different configurations using unmanagedJars. In the build definition, the unmanagedJars task may need to be replaced when the JARs are in multiple directories and in other complex cases:

```scala
unmanagedJars in Compile += file("/home/downloads/org.springframework.aop-3.0.1.jar")
```

Resolvers

Resolvers are alternate resources provided for the projects on which there is a dependency. If the specified project's JAR is not found in the default repositories, these are tried. The default repositories used by SBT are Maven Central and the local Ivy repository. The simplest ways of adding a repository are as follows:

- resolvers += name at location. For example:

```scala
resolvers += "releases" at "http://oss.sonatype.org/content/repositories/releases"
```

- resolvers ++= Seq(name1 at location1, name2 at location2). For example:
```scala
resolvers ++= Seq(
  "snapshots" at "http://oss.sonatype.org/content/repositories/snapshots",
  "releases" at "http://oss.sonatype.org/content/repositories/releases"
)
```

- resolvers := Seq(name1 at location1, name2 at location2). For example:

```scala
resolvers := Seq(
  "sgodbillon" at "https://bitbucket.org/sgodbillon/repository/raw/master/snapshots/",
  "Typesafe backup repo" at "http://repo.typesafe.com/typesafe/repo/",
  "Maven repo1" at "http://repo1.maven.org/"
)
```

You can also add your own local Maven repository as a resource using the following syntax:

```scala
resolvers += "Local Maven Repository" at "file://" + Path.userHome.absolutePath + "/.m2/repository"
```

An Ivy repository of the file, URL, SSH, or SFTP type can also be added as a resource using sbt.Resolver. Note that sbt.Resolver is a class with factories for interfaces to Ivy repositories that require a hostname, port, and patterns. Let's see how to use the Resolver class.

For filesystem repositories, the following line defines an atomic filesystem repository in the test directory of the current working directory:

```scala
resolvers += Resolver.file("my-test-repo", file("test")) transactional()
```

For URL repositories, the following line defines a URL repository at http://example.org/repo-releases/:

```scala
resolvers += Resolver.url("my-test-repo", url("http://example.org/repo-releases/"))
```

The following line defines an Ivy repository at http://joscha.github.com/play-easymail/repo/releases/:

```scala
resolvers += Resolver.url("my-test-repo", url("http://joscha.github.com/play-easymail/repo/releases/"))(Resolver.ivyStylePatterns)
```

For SFTP repositories, the following line defines a repository that is served by SFTP from the host example.org:

```scala
resolvers += Resolver.sftp("my-sftp-repo", "example.org")
```

The following line defines a repository that is served by SFTP from the host example.org at port 22:

```scala
resolvers += Resolver.sftp("my-sftp-repo", "example.org", 22)
```

The following line defines a repository that is served by SFTP from the host example.org with maven2/repo-releases/ as the base path:

```scala
resolvers += Resolver.sftp("my-sftp-repo", "example.org", "maven2/repo-releases/")
```

For SSH repositories, the following line defines an SSH repository with user-password authentication:

```scala
resolvers += Resolver.ssh("my-ssh-repo", "example.org") as("user", "password")
```

The following line defines an SSH repository with an access request for the given user. The user will be prompted to enter the password to complete the download.

```scala
resolvers += Resolver.ssh("my-ssh-repo", "example.org") as("user")
```

The following line defines an SSH repository using key authentication:

```scala
resolvers += {
  val keyFile: File = ...
  Resolver.ssh("my-ssh-repo", "example.org") as("user", keyFile, "keyFilePassword")
}
```

The next line defines an SSH repository using key authentication where no keyFile password needs to be entered before the download:

```scala
resolvers += Resolver.ssh("my-ssh-repo", "example.org") as("user", keyFile)
```

The following line defines an SSH repository with permissions; the argument is a mode specification such as the one used by chmod:

```scala
resolvers += Resolver.ssh("my-ssh-repo", "example.org") withPermissions("0644")
```

SFTP authentication can be handled in the same way as shown for SSH in the previous examples.

Ivy patterns can also be given to the factory methods. Each factory method uses a Patterns instance which defines the patterns to be used. The default pattern passed to the factory methods gives the Maven-style layout. To use a different layout, provide a Patterns object describing it.
The following are some examples that specify custom repository layouts using patterns:

```scala
resolvers += Resolver.url("my-test-repo", url)(Patterns("[organisation]/[module]/[revision]/[artifact].[ext]"))
```

You can specify multiple patterns, or separate patterns for the metadata and the artifacts. For filesystem and URL repositories, you can specify absolute patterns by omitting the base URL, passing an empty Patterns instance, and using Ivy instances and artifacts:

```scala
resolvers += Resolver.url("my-test-repo") artifacts "http://example.org/[organisation]/[module]/[revision]/[artifact].[ext]"
```

When you do not need the default repositories, you must override externalResolvers, which is the combination of resolvers and the default repositories. To use the local Ivy repository without the Maven repository, define externalResolvers as follows:

```scala
externalResolvers <<= resolvers map { rs =>
  Resolver.withDefaultResolvers(rs, mavenCentral = false)
}
```

Summary

In this article, we have seen how dependency management tools such as Maven and Ivy work and how SBT handles project dependencies. This article also covered the different options that SBT provides for handling your project dependencies and for configuring resolvers for the modules on which your project depends.

Setting up Qt Creator for Android

Packt
27 Nov 2014
8 min read
This article by Ray Rischpater, the author of the book Application Development with Qt Creator, Second Edition, focuses on setting up Qt Creator for Android. Android's functionality is delimited in API levels; Qt for Android supports Android API level 10 and above, that is, Android 2.3.3, a variant of Gingerbread. Fortunately, most devices in the market today run at least Gingerbread, making Qt for Android a viable development platform for millions of devices.

Downloading all the pieces

To get started with Qt Creator for Android, you're going to need to download a lot of stuff. Let's get started:

- Begin with a release of Qt for Android, which you can download from http://qt-project.org/downloads.
- The Android developer tools require the current version of the Java Development Kit (JDK) (not just the runtime, the Java Runtime Environment, but the whole kit and caboodle); you can download it from http://www.oracle.com/technetwork/java/javase/downloads/jdk7-downloads-1880260.html.
- You need the latest Android Software Development Kit (SDK), which you can download for Mac OS X, Linux, or Windows at http://developer.android.com/sdk/index.html.
- You need the latest Android Native Development Kit (NDK), which you can download at http://developer.android.com/tools/sdk/ndk/index.html.
- You need the current version of Ant, the Java build tool, which you can download at http://ant.apache.org/bindownload.cgi.

Download, unzip, and install each of these, in the given order. On Windows, I installed the Android SDK and NDK by unzipping them to the root of my hard drive and installed the JDK at the default location I was offered.

Setting environment variables

Once you install the JDK, you need to be sure that you've set your JAVA_HOME environment variable to point to the directory where it was installed. How you do this differs from platform to platform; on a Mac OS X or Linux box, you'd edit .bashrc, .tcshrc, or the like; on Windows, go to System Properties, click on Environment Variables, and add the JAVA_HOME variable. The path should point to the base of the JDK directory; for me, it was C:\Program Files\Java\jdk1.7.0_25, although the path for you will depend on where you installed the JDK and which version you installed. (Make sure you set the path with the trailing directory separator; the Android SDK is pretty fussy about that sort of thing.)

Next, you need to update your PATH to point to all the stuff you just installed. Again, this is an environment variable, and you'll need to add the following:

- The bin directory of your JDK
- The tools directory of the Android SDK
- The platform-tools directory of the Android SDK

For me, on my Windows 8 computer, my PATH now includes this:

```
…C:\Program Files\Java\jdk1.7.0_25\bin;C:\adt-bundle-windows-x86_64-20130729\sdk\tools;C:\adt-bundle-windows-x86_64-20130729\sdk\platform-tools;…
```

Don't forget the separators: on Windows, it's a semicolon (;), while on Mac OS X and Linux, it's a colon (:). An environment variable is a variable maintained by your operating system which affects its configuration; see http://en.wikipedia.org/wiki/Environment_variable for more details.
At this point, it's a good idea to restart your computer (if you're running Windows) or log out and log in again (on Linux or Mac OS X) to make sure that all these settings take effect. If you're on a Mac OS X or Linux box, you might be able to start a new terminal (or reload your shell configuration file) and have the same effect, but I like the idea of restarting at this point to ensure that the next time I start everything up, it'll work correctly.

Finishing the Android SDK installation

Now, we need to use the Android SDK tools to ensure that you have a full version of the SDK for at least one Android API level installed. We'll need to start Eclipse, the Android SDK's development environment, and run the Android SDK Manager. To do this, follow these steps:

1. Find Eclipse. It's probably in the Eclipse directory of the directory where you installed the Android SDK. If Eclipse doesn't start, check your JAVA_HOME and PATH variables; the odds are that Eclipse cannot find the Java environment it needs to run.
2. Click on OK when Eclipse prompts you for a workspace. This doesn't matter; you won't use Eclipse except to download Android SDK components.
3. Click on the Android SDK Manager button in the Eclipse toolbar.
4. Make sure that you have at least one Android API level above API level 10 installed, along with the Google USB Driver (you'll need this to debug on hardware).
5. Quit Eclipse.

Next, let's see whether the Android Debug Bridge (the software component that transfers your executables to your Android device and supports on-device debugging) is working as it should. Fire up a shell prompt and type adb. If you see a lot of output and no errors, the bridge is correctly installed. If not, go back and check your PATH variable to be sure it's correct. While you're at it, you should developer-enable your Android device too, so that it will work with ADB. Follow the steps provided at http://bit.ly/1a29sal.

Configuring Qt Creator

Now, it's time to tell Qt Creator about all the stuff you just installed. Perform the following steps:

1. Start Qt Creator, but don't create a new project.
2. Under the Tools menu, select Options and then click on Android.
3. Fill in the blanks. They should be:
   - The path to the SDK directory, in the directory where you installed the Android SDK.
   - The path to where you installed the Android NDK.
   - Check Automatically create kits for Android tool chains.
   - The path to Ant; here, enter either the path to the Ant executable itself on Mac OS X and Linux platforms, or the path to ant.bat in the bin directory of the directory where you unpacked Ant.
   - The directory where you installed the JDK (this might be picked up automatically from your JAVA_HOME variable).
4. Click on OK to close the Options window.

You should now be able to create a new Qt GUI or Qt Quick application for Android. Do so, and ensure that Android is a target option in the wizard; be sure to choose at least one ARM target, one x86 target, and one target for your desktop environment.

If you want to add Android build configurations to an existing project, the process is slightly different. Perform the following steps:

1. Load the project as you normally would.
2. Click on Projects in the left-hand side pane. The Projects pane will open.
3. Click on Add Kit and choose the desired Android (or other) device build kit.
Building and running your application

Write and build your application normally. A good idea is to build the Qt Quick Hello World application for Android first, before you go to town and make a lot of changes, and to test the environment by compiling for the device. When you're ready to run on the device, perform the following steps:

1. Navigate to Projects (on the left-hand side) and then select the Android for arm kit's Run Settings.
2. Under Package Configurations, ensure that the Android SDK level is set to the SDK level of the SDK you installed.
3. Ensure that the Package name reads something similar to org.qtproject.example, followed by your project name.
4. Connect your Android device to your computer using the USB cable.
5. Select the Android for arm run target and then click on either Debug or Run to debug or run your application on the device.

Summary

Qt for Android gives you an excellent leg up on mobile development, but it's not a panacea. If you're planning to target mobile devices, you should be sure to have a good understanding of the usage patterns of your application's users, as well as the constraints in CPU, GPU, memory, and network under which a mobile application must run. Once we understand these, however, all of our skills with Qt Creator and Qt carry over to the mobile arena. To develop for Android, begin by installing the JDK, Android SDK, Android NDK, and Ant, and then develop applications as usual: compiling for the device and running on the device frequently to iron out any unexpected problems along the way.

Working with Colors in Scribus

Packt
10 Dec 2010
10 min read
Scribus 1.3.5 Beginner's Guide: create optimum page layouts for your documents using the productive tools of Scribus.

- Master desktop publishing with Scribus
- Create professional-looking documents with ease
- Enhance the readability of your documents using powerful layout tools of Scribus
- Packed with interesting examples and screenshots that show you the most important Scribus tools to create and publish your documents

Applying colors in Scribus

Applying color is as basic as creating a frame or writing text. In this article we will often give color values. Each time we need to, we will use the first letter of the color followed by its value; for example, C75 will mean 75 percent of cyan. K will be used for black and B for blue. There are five main things you can apply colors to:

- Frame or shape fill
- Frame or shape border
- Line
- Text
- Text border

You might want to colorize pictures too; that is a very different method, using duotone or an equivalent image effect. Applying a color to a frame means that you will use the Colors tab of the PP (the Properties Palette), whereas applying color to text will require you to go to the Color & Effects expander in the Text tab. In both cases you'll find what's needed to apply color to the fill and the border, but the user interfaces are a bit different.

Time for action – applying colors to a Text Frame's text

Colors on frames will use the same color list. Let's follow some steps to see how this is done.

1. Draw a Text Frame where you want it on a page.
2. Type some text inside, like "colors of the world", or use Insert | Sample Text.
3. Go to the Colors tab of the PP (F2).
4. Click on the second button placed above the color list to specify that you want to apply the changes to the fill. Then click on the color you want in the list below, for example, Magenta.
5. Click on the paintbrush button, and apply a black color to the border (we could call it the stroke too). Don't forget that applying a stroke color will require some border refinements in the Line tab to set the width and style of the border.
6. Now, select the text or some part of it and go to the Colors & Effects expander of the Text tab. Here you will again see the same icons we used previously, each with its own color list. Let's choose Yellow for the text color.
7. The stroke color cannot be changed yet. To change this, click on the Shadow button placed below, and now choose black as the stroke color. The text shadow should be black.

What just happened?

Applying color to text is quicker than applying frame colors in some ways, because fill and stroke each have their own list. So there is no need to click on any button, and you can see both at a glance. Just remember that text has no stroke color activated when you first set it. You need to add the stroke or shadow to a selection to activate the border color for that selection.

Quick apply in the Story Editor

If, like me, you like the Story Editor (Edit | Edit Text), notice that colors can be applied from there. They are not displayed in the editor, but will be displayed when the changes are applied to the layout. This is much faster, but you need to know exactly what you're doing and need to be precise in your selection. If you need to apply the same color setting to a word throughout the document, you can alternatively use the Edit | Search/Replace window.
There you can set the word you're looking for in the first field and, on the right-hand side, replace it with the same word while choosing the Fill and Stroke colors that you want to apply. Of course, it would be nice if this window could let us apply character styles to make future changes easier. A new Scribus document receives a default color list, which is the same all over your document. In this article, we will deal with many ways of adapting existing colors or creating new ones.

Applying shade or transparency

Shade and transparency are two ways of setting more precisely how a specific color will be applied to your items. Shades and transparencies are fake effects that will be interpreted by later elements of the printing workflow, such as Raster Image Processors, to work out how the chosen color can be rendered with pure colors. This is the key point of reproducing colors: if you want a gray, you'll generally use a black color for that. In offset printing, which is the reference, the size of the printed dot varies relative to the darkness of the black you chose, and this is optically interpreted by the reader.

Using shades

Each color property has a Shade value. The default is set to 100 percent, meaning that the color will be printed fully saturated. Reducing the shade value will produce a lighter color; at 0 percent, the color, whatever it may be, will be interpreted as white. On a pure color item, like any primary or spot color, changing the shade won't affect the color composition. However, on process colors that are made by mixing several primary colors, modifying the shade will proportionally change the amount of each ink used in the mix. Our C75 M49 Y7 K12 at a 50 percent shade value will become a C37 M25 Y4 K6 color in the final PDF. Less color means less ink on the paper and more white (or paper color), which results in a lighter color.

You should remember that Shade is a frame property and not a color property. So, if you apply a new color to the frame, the shade value will be kept and applied immediately. Changing the shade of the color applied to some characters is a bit different: we don't have a field to fill, but a drop-down list with predefined values in 10 percent increments. If you need another value, just choose Other to display a window in which you can enter the exact amount you need. You can do the same in the Story Editor.

Using transparency

While shade is used to lighten a color, the Opacity value determines how much less solid the color will be. Once again, the range goes from 0%, meaning the object is completely transparent and invisible, to 100%, which makes it opaque. The latter value is the default. When two objects overlap, the top object hides the bottom object. But when Opacity is decreased, the object at the bottom becomes more and more visible. One difference to notice is that Opacity affects not only the color rendering but the content too (if there is any). As with Shade, Opacity is applied separately to the fill and to the stroke, so you'll need to set both if needed. One important aspect is that Shade and Opacity can both be applied to a frame, and a value of 50% for each will give a lighter color than if only one were used. Several opacity values applied to overlapping objects show how they act and add to each other. The background for the text in the title, in the example that follows, is done in the same color as the background at the top of the page.
Using transparency or shade can help create this background and decrease the number of colors used.

Time for action – transparency and layers

Let's now use transparency and layers to create some custom effects over a picture, as can often be done for covers.

1. Create a new document and display the Layers window from the Windows menu. This window will already contain a line called Background.
2. You can add a layer by clicking on the + button at the bottom left-hand side of the window: it will be called New Layer 1. You can rename it by double-clicking on its name.
3. On the first page, add an Image Frame that covers the entire page. Then draw a rectangular shape that covers almost half of the page height.
4. Duplicate this layer by clicking on the middle button placed at the bottom of the Layers window. Select the visibility checkbox (the first column, headed with an eye icon) of this layer to hide it.
5. We'll now modify the transparency of each object. Click on New Layer 1 to specify that you want to work on this layer; otherwise you won't be able to select its frames. The frames or shapes you create from now on will be added to this layer called New Layer 1.
6. Select the black shape and decrease the Opacity value in the Colors tab of the PP to 50%. Do the same for the Image Frame.
7. Now, hide this layer by clicking on its visibility icon and show the top layer. In the Layers window, verify that this layer is selected and decrease its opacity.

What just happened?

If there is a need to make several objects transparent at once, one approach is to put them on a layer and set the layer Opacity. This way, the same amount of transparency will be applied to the whole. You can open the Layers window from the Window menu. When working with layers, it's important to have the right layer selected in order to work on it. Basically, any new layer will be added at the top of the stack and will be activated once created. When a layer is selected, you can change its Opacity by using the field at the top right-hand side of the Layers window. Since it is applied to the layer itself, all the objects placed on it will be affected, but their own opacity values won't be changed.

If you look at the differences between the two layers we have made, you'll see that the area in the first black rectangle explicitly becomes transparent by itself, because you can see the photo through it; this is not the case in the second. So using layers, as we have seen, can help us work faster when we need to apply the same opacity setting to several objects, but we have to take care, because the result is slightly different.

Using layers to blend colors

More rarely, layers can be used to mix colors. Blend Mode is originally set to Normal, which does no blending. But if you use any other mode on a layer, its colors will be mixed with the colors of items placed on a lower layer, according to the chosen mode. This can be very creative. If you need a more precise action, Blend Mode can also be set for a single object from the Colors tab of the PP. Just give it a try. Layers are still most commonly used to organize a document: a layer for text, a layer for images, a layer for each language in a multilingual document, and so on. They are a smart way to work, but they are not necessary in your documents, and we really can work without them in a layout program.
How to Create a New JSF Project

Packt
20 Jun 2011
17 min read
Introduction to JavaServer Faces

Before JSF existed, most Java web applications were typically developed using non-standard web application frameworks such as Apache Struts, Tapestry, Spring Web MVC, or many others. These frameworks are built on top of the Servlet and JSP standards, and automate a lot of functionality that needs to be manually coded when using these APIs directly. Having a wide variety of web application frameworks available often resulted in "analysis paralysis"; that is, developers often spent an inordinate amount of time evaluating frameworks for their applications. The introduction of JSF to the Java EE specification resulted in having a standard web application framework available in any Java EE compliant application server.

We don't mean to imply that other web application frameworks are obsolete or that they shouldn't be used at all. However, a lot of organizations consider JSF the "safe" choice since it is part of the standard and should be well supported for the foreseeable future. Additionally, NetBeans offers excellent JSF support, making JSF a very attractive choice.

Strictly speaking, JSF is not a web application framework per se, but a component framework. In theory, JSF can be used to write applications that are not web-based; however, in practice JSF is almost always used for this purpose. In addition to being the standard Java EE component framework, one benefit of JSF is that it provides good support for tools vendors, allowing tools such as NetBeans to take advantage of the JSF component model with drag and drop support for components.

Developing our first JSF application

From an application developer's point of view, a JSF application consists of a series of XHTML pages containing custom JSF tags, one or more JSF managed beans, and an optional configuration file named faces-config.xml. faces-config.xml used to be required in JSF 1.x; however, in JSF 2.0, some conventions were introduced that reduce the need for configuration. Additionally, a lot of JSF configuration can be specified using annotations, reducing, and in some cases eliminating, the need for this XML configuration file.

Creating a new JSF project

To create a new JSF project, we need to go to File | New Project, select the Java Web project category, and Web Application as the project type. After clicking Next>, we need to enter a project name, and optionally change other information for our project, although NetBeans provides sensible defaults. On the next page in the wizard, we can select the server, Java EE version, and context path of our application. In our example we will simply pick the default values. On the next page of the new project wizard, we can select what frameworks our web application will use. Unsurprisingly, for JSF applications we need to select the JavaServer Faces framework. When clicking on Finish, the wizard generates a skeleton JSF project for us, consisting of a single facelet file called index.xhtml and a web.xml configuration file. web.xml is the standard, optional configuration file needed for Java web applications; this file became optional in version 3.0 of the Servlet API, which was introduced with Java EE 6. In many cases, web.xml is not needed anymore, since most of the configuration options can now be specified via annotations. 
For JSF applications, however, it is a good idea to add one, since it allows us to specify the JSF project stage.

<?xml version="1.0" encoding="UTF-8"?>
<web-app version="3.0" xmlns="http://java.sun.com/xml/ns/javaee"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://java.sun.com/xml/ns/javaee
         http://java.sun.com/xml/ns/javaee/web-app_3_0.xsd">
    <context-param>
        <param-name>javax.faces.PROJECT_STAGE</param-name>
        <param-value>Development</param-value>
    </context-param>
    <servlet>
        <servlet-name>Faces Servlet</servlet-name>
        <servlet-class>javax.faces.webapp.FacesServlet</servlet-class>
        <load-on-startup>1</load-on-startup>
    </servlet>
    <servlet-mapping>
        <servlet-name>Faces Servlet</servlet-name>
        <url-pattern>/faces/*</url-pattern>
    </servlet-mapping>
    <session-config>
        <session-timeout>30</session-timeout>
    </session-config>
    <welcome-file-list>
        <welcome-file>faces/index.xhtml</welcome-file>
    </welcome-file-list>
</web-app>

As we can see, NetBeans automatically sets the JSF project stage to Development. Setting the project stage to Development configures JSF to provide additional debugging help not present in other stages. For example, one common problem when developing a page is that while a page is being developed, validation for one or more of the fields on the page fails, but the developer has not added an <h:message> or <h:messages> tag to the page. When this happens and the form is submitted, the page seems to do nothing, or page navigation doesn't seem to be working. When setting the project stage to Development, these validation errors will automatically be added to the page, without the developer having to explicitly add one of these tags to the page (we should, of course, add the tags before releasing our code to production, since our users will not see the automatically generated validation errors).

The following are the valid values for the javax.faces.PROJECT_STAGE context parameter for the faces servlet:

Development
Production
SystemTest
UnitTest

The Development project stage adds additional debugging information to ease development. The Production project stage focuses on performance. The other two valid values for the project stage (SystemTest and UnitTest) allow us to implement our own custom behavior for these two phases.

The javax.faces.application.Application class has a getProjectStage() method that allows us to obtain the current project stage. Based on the value of this method, we can implement code that will only be executed in the appropriate stage. The following code snippet illustrates this:

public void someMethod() {
    FacesContext facesContext = FacesContext.getCurrentInstance();
    Application application = facesContext.getApplication();
    ProjectStage projectStage = application.getProjectStage();
    if (projectStage.equals(ProjectStage.Development)) {
        //do development stuff
    } else if (projectStage.equals(ProjectStage.Production)) {
        //do production stuff
    } else if (projectStage.equals(ProjectStage.SystemTest)) {
        // do system test stuff
    } else if (projectStage.equals(ProjectStage.UnitTest)) {
        //do unit test stuff
    }
}

As illustrated in the snippet above, we can implement the code to be executed in any valid project stage, based on the return value of the getProjectStage() method of the Application class. When creating a Java Web project using JSF, a facelet is automatically generated. 
The generated facelet file looks like this: <?xml version='1.0' encoding='UTF-8' ?> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html > <h:head> <title>Facelet Title</title> </h:head> <h:body> Hello from Facelets </h:body> </html> As we can see, a facelet is nothing but an XHTML file using some facelets-specific XML name spaces. In the automatically generated page above, the following namespace definition allows us to use the "h" (for HTML) JSF component library: The above namespace declaration allows us to use JSF specific tags such as <h:head> and <h:body> which are a drop in replacement for the standard HTML/XHTML <head> and <body> tags, respectively. The application generated by the new project wizard is a simple, but complete JSF web application. We can see it in action by right-clicking on our project in the project window and selecting Run. At this point the application server is started (if it wasn't already running), the application is deployed and the default system browser opens, displaying our application's default page. Modifying our page to capture user data The generated application, of course, is nothing but a starting point for us to create a new application. We will now modify the generated index.xhtml file to collect some data from the user. The first thing we need to do is add an <h:form> tag to our page. The <h:form> tag is equivalent to the <form> tag in standard HTML pages. After typing the first few characters of the <h:form> tag into the page, and hitting Ctrl+Space, we can take advantage of NetBeans' excellent code completion. After adding the <h:form> tag and a number of additional JSF tags, our page now looks like this: <?xml version='1.0' encoding='UTF-8' ?> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html > <h:head> <title>Registration</title> <h:outputStylesheet library="css" name="styles.css"/> </h:head> <h:body> <h3>Registration Page</h3> <h:form> <h:panelGrid columns="3" columnClasses="rightalign,leftalign,leftalign"> <h:outputLabel value="Salutation: " for="salutation"/> <h:selectOneMenu id="salutation" label="Salutation" value="#{registrationBean.salutation}" > <f:selectItem itemLabel="" itemValue=""/> <f:selectItem itemLabel="Mr." itemValue="MR"/> <f:selectItem itemLabel="Mrs." itemValue="MRS"/> <f:selectItem itemLabel="Miss" itemValue="MISS"/> <f:selectItem itemLabel="Ms" itemValue="MS"/> <f:selectItem itemLabel="Dr." 
itemValue="DR"/> </h:selectOneMenu> <h:message for="salutation"/> <h:outputLabel value="First Name:" for="firstName"/> <h:inputText id="firstName" label="First Name" required="true" value="#{registrationBean.firstName}" /> <h:message for="firstName" /> <h:outputLabel value="Last Name:" for="lastName"/> <h:inputText id="lastName" label="Last Name" required="true" value="#{registrationBean.lastName}" /> <h:message for="lastName" /> <h:outputLabel for="age" value="Age:"/> <h:inputText id="age" label="Age" size="2" value="#{registrationBean.age}"/> <h:message for="age"/> <h:outputLabel value="Email Address:" for="email"/> <h:inputText id="email" label="Email Address" required="true" value="#{registrationBean.email}"> </h:inputText> <h:message for="email" /> <h:panelGroup/> <h:commandButton id="register" value="Register" action="confirmation" /> </h:panelGrid> </h:form> </h:body> </html> The following screenshot illustrates how our page will be rendered at runtime: All JSF input fields must be inside an <h:form> tag. The <h:panelGrid> helps us to easily lay out JSF tags on our page. It can be thought of as a grid where other JSF tags will be placed. The columns attribute of the <h:panelGrid> tag indicates how many columns the grid will have, each JSF component inside the <h:panelGrid> component will be placed in an individual cell of the grid. When the number of components matching the value of the columns attribute (three in our example) has been placed inside <h:panelGrid>, a new row is automatically started. The following table illustrates how tags will be laid out inside an <h:panelGrid> tag: Each row in our <h:panelGrid> consists of an <h:outputLabel> tag, an input field, and an <h:message> tag. The columnClasses attribute of <h:panelGrid> allows us to assign CSS styles to each column inside the panel grid, its value attribute must consist of a comma separated list of CSS styles (defined in a CSS stylesheet). The first style will be applied to the first column, the second style will be applied to the second column, the third style will be applied to the third column, so on and so forth. Had our panel grid had more than three columns, then the fourth column would have been styled using the first style in the columnClasses attribute, the fifth column would have been styled using the second style in the columnClasses attribute, so on and so forth. If we wish to style rows in an <h:panelGrid>, we can do so with its rowClasses attribute, which works the same way that the columnClasses works for columns. Notice the <h:outputStylesheet> tag inside <h:head> near the top of the page, this is a new tag that was introduced in JSF 2.0. One new feature that JSF 2.0 brings to the table is standard resource directories. Resources such as CSS stylesheets, JavaScript files, images, and so on, can be placed under a top level directory named resources, and JSF tags will have access to those resources automatically. In our NetBeans project, we need to place the resources directory under the Web Pages folder. We then need to create a subdirectory to hold our CSS stylesheet (by convention, this directory should be named css), then we place our CSS stylesheet(s) on this subdirectory. The value of the library attribute in <h:outputStylesheet> must match the directory where our CSS file is located, and the value of its name attribute must match the CSS file name. In addition to CSS files, we should place any JavaScript files in a subdirectory called javascript under the resources directory. 
The file can then be accessed by the <h:outputScript> tag using "javascript" as the value of its library attribute and the file name as the value of its name attribute. Similarly, images should be placed in a directory called images under the resources directory. These images can then be accessed by the JSF <h:graphicImage> tag, where the value of its library attribute would be "images" and the value of its name attribute would be the corresponding file name. Now that we have discussed how to lay out elements on the page and how to access resources, let's focus our attention on the input and output elements on the page. The <h:outputLabel> tag generates a label for an input field in the form, the value of its for attribute must match the value of the id attribute of the corresponding input field. <h:message> generates an error message for an input field, the value of its for field must match the value of the id attribute for the corresponding input field. The first row in our grid contains an <h:selectOneMenu>. This tag generates an HTML <select> tag on the rendered page. Every JSF tag has an id attribute, the value for this attribute must be a string containing a unique identifier for the tag. If we don't specify a value for this attribute, one will be generated automatically. It is a good idea to explicitly state the ID of every component, since this ID is used in runtime error messages. Affected components are a lot easier to identify if we explicitly set their IDs. When using <h:label> tags to generate labels for input fields, or when using <h:message> tags to generate validation errors, we need to explicitly set the value of the id tag, since we need to specify it as the value of the for attribute of the corresponding <h:label> and <h:message> tags. Every JSF input tag has a label attribute. This attribute is used to generate validation error messages on the rendered page. If we don't specify a value for the label attribute, then the field will be identified in the error message by its ID. Each JSF input field has a value attribute, in the case of <h:selectOneMenu>, this attribute indicates which of the options in the rendered <select> tag will be selected. The value of this attribute must match the value of the itemValue attribute of one of the nested <f:selectItem> tags. The value of this attribute is usually a value binding expression, that means that the value is read at runtime from a JSF managed bean. In our example, the value binding expression #{registrationBean.salutation} is used. What will happen is at runtime JSF will look for a managed bean named registrationBean, and look for an attribute named salutation on this bean, the getter method for this attribute will be invoked, and its return value will be used to determine the selected value of the rendered HTML <select> tag. Nested inside the <h:selectOneMenu> there are a number of <f:selectItem> tags. These tags generate HTML <option> tags inside the HTML <select> tag generated by <h:selectOneMenu>. The value of the itemLabel attribute is the value that the user will see while the value of the itemValue attribute will be the value that will be sent to the server when the form is submitted. All other rows in our grid contain <h:inputText> tags, this tag generates an HTML input field of type text, which accept a single line of typed text as input. We explicitly set the id attribute of all of our <h:inputText> fields, this allows us to refer to them from the corresponding <h:outputLabel> and <h:message> fields. 
We also set the label attribute for all of our <h:inputText> tags, this results in more user-friendly error messages. Some of our <h:inputText> fields require a value, these fields have their required attribute set to true, each JSF input field has a required attribute, if we need to require the user to enter a value for this attribute, then we need to set this attribute to true. This attribute is optional, if we don't explicitly set a value for it, then it defaults to false. In the last row of our grid, we added an empty <h:panelGroup> tag. The purpose of this tag is to allow adding several tags into a single cell of an <h:panelGrid>. Any tags placed inside this tag are placed inside the same cell of the grid where <h:panelGrid> is placed. In this particular case, all we want to do is to have an "empty" cell in the grid so that the next tag, <h:commandButton>, is aligned with the input fields in the rendered page. <h:commandButton> is used to submit a form to the server. The value of its value attribute is used to generate the text of the rendered button. The value of its action attribute is used to determine what page to display after the button is pressed. In our example, we are using static navigation. When using JSF static navigation, the value of the action attribute of a command button is hard-coded in the markup. When using static navigation, the value of the action attribute of <h:commandButton> corresponds to the name of the page we want to navigate to, minus its .xhtml extension. In our example, when the user clicks on the button, we want to navigate to a file named confirmation.xhtml, therefore we used a value of "confirmation" for its action attribute. An alternative to static navigation is dynamic navigation. When using dynamic navigation, the value of the action attribute of the command button is a value binding expression resolving to a method returning a String in a managed bean. The method may then return different values based on certain conditions. Navigation would then proceed to a different page depending on the value of the method. As long as it returns a String, the managed bean method executed when using dynamic navigation can contain any logic inside it, and is frequently used to save data in a managed bean into a database. When using dynamic navigation, the return value of the method executed when clicking the button must match the name of the page we want to navigate to (again, minus the file extension). In earlier versions of JSF, it was necessary to specify navigation rules in facesconfig.xml, with the introduction of the conventions introduced in the previous paragraphs, this is no longer necessary.  

Understanding the Basics of RxJava

Packt
20 Jun 2017
15 min read
In this article by Tadas Subonis, author of the book Reactive Android Programming, we will go through the core basics of RxJava so that we can fully understand what it is, what its core elements are, and how they work.

Before that, let's take a step back and briefly discuss how RxJava is different from other approaches. RxJava is about reacting to results. It might be an item that originated from some source. It can also be an error. RxJava provides a framework to handle these items in a reactive way and to create complicated manipulation and handling schemes in a very easy-to-use interface. Things like waiting for the arrival of an item before transforming it become very easy with RxJava. To achieve all this, RxJava provides some basic primitives:

Observables: A source of data
Subscriptions: An activated handle to the Observable that receives data
Schedulers: A means to define where (on which Thread) the data is processed

First of all, we will cover Observables--the source of all the data and the core structure/class that we will be working with. We will explore how they are related to Disposables (Subscriptions). Furthermore, the life cycle and hook points of an Observable will be described, so we will actually know what's happening when an item travels through an Observable and what the different stages are that we can tap into. Finally, we will briefly introduce Flowable--a big brother of Observable that lets you handle big amounts of data with high rates of publishing. To summarize, we will cover these aspects:

What is an Observable?
What are Disposables (formerly Subscriptions)?
How do items travel through an Observable?
What is backpressure and how can we use it with Flowable?

Let's dive into it!

Observables

Everything starts with an Observable. It's a source of data that you can observe for emitted data (hence the name). In almost all cases, you will be working with the Observable class. It is possible to (and we will!) combine different Observables into one Observable. Basically, it is a universal interface to tap into data streams in a reactive way.

There are lots of different ways in which one can create Observables. The simplest way is to use the .just() method like we did before:

Observable.just("First item", "Second item");

It is usually a perfect way to glue non-Rx-like parts of the code to an Rx-compatible flow.

When an Observable is created, it is not usually defined when it will start emitting data. If it was created using simple tools such as .just(), it won't start emitting data until there is a subscription to the Observable. How do you create a subscription? It's done by calling .subscribe():

Observable.just("First item", "Second item")
    .subscribe();

Usually (but not always), the Observable is activated the moment somebody subscribes to it. So, if a new Observable was just created, it won't magically start sending data "somewhere".

Hot and Cold Observables

Quite often, the terms Hot and Cold Observables can be found in the literature and documentation. A Cold Observable is the most common Observable type. For example, it can be created with the following code:

Observable.just("First item", "Second item")
    .subscribe();

Cold Observable means that the items won't be emitted by the Observable until there is a Subscriber. This means that before .subscribe() is called, no items will be produced, and thus none of the items that are intended to be emitted will be missed; everything will be processed. 
Hot Observable is an Observable that will begin producing (emitting) items internally as soon as it is created. The status updates are produced constantly and it doesn't matter whether there is something ready to receive them (like a Subscription). If there are no subscriptions to the Observable, these updates will be lost.

Disposables

A disposable (previously called Subscription in RxJava 1.0) is a tool that can be used to control the life cycle of an Observable. If the stream of data that the Observable is producing is boundless, it means that it will stay active forever. It might not be a problem for a server-side application, but it can cause some serious trouble on Android. Usually, this is a common source of memory leaks.

Obtaining a reference to a disposable is pretty simple:

Disposable disposable = Observable.just("First item", "Second item")
    .subscribe();

Disposable is a very simple interface. It has only two methods: dispose() and isDisposed(). dispose() can be used to cancel the existing Disposable (Subscription). This will stop the call of .subscribe() from receiving any further items from the Observable, and the Observable itself will be cleaned up. isDisposed() has a pretty straightforward function--it checks whether the subscription is still active. However, it is not used very often in regular code as the subscriptions are usually unsubscribed and forgotten.

The disposed subscriptions (Disposables) cannot be re-enabled. They can only be created anew. Finally, Disposables can be grouped using CompositeDisposable like this:

Disposable disposable = new CompositeDisposable(
    Observable.just("First item", "Second item").subscribe(),
    Observable.just("1", "2").subscribe(),
    Observable.just("One", "Two").subscribe()
);

It's useful in the cases when there are many Observables that should be canceled at the same time, for example, an Activity being destroyed.

Schedulers

As described in the documentation, a scheduler is something that can schedule a unit of work to be executed now or later. In practice, it means that Schedulers control where the code will actually be executed, and usually that means selecting some kind of specific thread. Most often, Schedulers are used to execute long-running tasks on some background thread so that they don't block the main computation or UI thread. This is especially relevant on Android, where long-running tasks must not be executed on the MainThread.

Schedulers can be set with a simple .subscribeOn() call:

Observable.just("First item", "Second item")
    .subscribeOn(Schedulers.io())
    .subscribe();

There are only a few main Schedulers that are commonly used:

Schedulers.io()
Schedulers.computation()
Schedulers.newThread()
AndroidSchedulers.mainThread()

The AndroidSchedulers.mainThread() is only used on Android systems.

Scheduling examples

Let's explore how schedulers work by checking out a few examples. 
Let's run the following code: Observable.just("First item", "Second item") .doOnNext(e -> Log.d("APP", "on-next:" + Thread.currentThread().getName() + ":" + e)) .subscribe(e -> Log.d("APP", "subscribe:" + Thread.currentThread().getName() + ":" + e)); The output will be as follows: on-next:main:First item subscribe:main:First item on-next:main:Second item subscribe:main:Second item Now let's try changing the code to as shown: Observable.just("First item", "Second item") .subscribeOn(Schedulers.io()) .doOnNext(e -> Log.d("APP", "on-next:" + Thread.currentThread().getName() + ":" + e)) .subscribe(e -> Log.d("APP", "subscribe:" + Thread.currentThread().getName() + ":" + e)); Now, the output should look like this: on-next:RxCachedThreadScheduler-1:First item subscribe:RxCachedThreadScheduler-1:First item on-next:RxCachedThreadScheduler-1:Second item subscribe:RxCachedThreadScheduler-1:Second item We can see how the code was executed on the main thread in the first case and on a new thread in the next. Android requires that all UI modifications should be done on the main thread. So, how can we execute a long-running process in the background but process the result on the main thread? That can be done with .observeOn() method: Observable.just("First item", "Second item") .subscribeOn(Schedulers.io()) .doOnNext(e -> Log.d("APP", "on-next:" + Thread.currentThread().getName() + ":" + e)) .observeOn(AndroidSchedulers.mainThread()) .subscribe(e -> Log.d("APP", "subscribe:" + Thread.currentThread().getName() + ":" + e)); The output will be as illustrated: on-next:RxCachedThreadScheduler-1:First item on-next:RxCachedThreadScheduler-1:Second item subscribe:main:First item subscribe:main:Second item You will note that the items in the doOnNext block were executed on the "RxThread", and the subscribe block items were executed on the main thread. Investigating the Flow of Observable The logging inside the steps of an Observable is a very powerful tool when one wants to understand how they work. If you are in doubt at any point as to what's happening, add logging and experiment. A few quick iterations with logs will definitely help you understand what's going on under the hood. Let's use this technique to analyze a full flow of an Observable. We will start off with this script: private void log(String stage, String item) { Log.d("APP", stage + ":" + Thread.currentThread().getName() + ":" + item); } private void log(String stage) { Log.d("APP", stage + ":" + Thread.currentThread().getName()); } Observable.just("One", "Two") .subscribeOn(Schedulers.io()) .doOnDispose(() -> log("doOnDispose")) .doOnComplete(() -> log("doOnComplete")) .doOnNext(e -> log("doOnNext", e)) .doOnEach(e -> log("doOnEach")) .doOnSubscribe((e) -> log("doOnSubscribe")) .doOnTerminate(() -> log("doOnTerminate")) .doFinally(() -> log("doFinally")) .observeOn(AndroidSchedulers.mainThread()) .subscribe(e -> log("subscribe", e)); It can be seen that it has lots of additional and unfamiliar steps (more about this later). They represent different stages during the processing of an Observable. So, what's the output of the preceding script?: doOnSubscribe:main doOnNext:RxCachedThreadScheduler-1:One doOnEach:RxCachedThreadScheduler-1 doOnNext:RxCachedThreadScheduler-1:Two doOnEach:RxCachedThreadScheduler-1 doOnComplete:RxCachedThreadScheduler-1 doOnEach:RxCachedThreadScheduler-1 doOnTerminate:RxCachedThreadScheduler-1 doFinally:RxCachedThreadScheduler-1 subscribe:main:One subscribe:main:Two doOnDispose:main Let's go through some of the steps. 
First of all, by calling .subscribe(), the doOnSubscribe block was executed. This started the emission of items from the Observable, as we can see on the doOnNext and doOnEach lines. Finally, the stream finished and the termination life cycle was activated--the doOnComplete, doOnTerminate, and doFinally blocks. Also, the reader will note that the doOnDispose block was called on the main thread along with the subscribe block.

The flow will be a little different if the .subscribeOn() and .observeOn() calls are not there:

doOnSubscribe:main
doOnNext:main:One
doOnEach:main
subscribe:main:One
doOnNext:main:Two
doOnEach:main
subscribe:main:Two
doOnComplete:main
doOnEach:main
doOnTerminate:main
doOnDispose:main
doFinally:main

You will readily note that now, the doFinally block was executed after doOnDispose, while in the former setup, doOnDispose was the last. This happens due to the way the Android Looper schedules code blocks for execution and the fact that we used two different threads in the first case. The takeaway here is that whenever you are unsure of what is going on, start logging actions (and the thread they are running on) to see what's actually happening.

Flowable

Flowable can be regarded as a special type of Observable (but internally it isn't). It has almost the same method signatures as Observable. The difference is that Flowable allows you to process items that are emitted faster from the source than some of the following steps can handle.

It might sound confusing, so let's analyze an example. Assume that you have a source that can emit a million items per second. However, the next step uses those items to do a network request. We know, for sure, that we cannot do more than 50 requests per second. That poses a problem. What will we do after 60 seconds? There will be almost 60 million items in the queue waiting to be processed. The items are accumulating at a rate of 1 million items per second between the first and the second steps because the second step processes them at a much slower rate. Clearly, the problem here is that the available memory will be exhausted and the program will fail with an OutOfMemory (OOM) exception.

For example, this script will cause excessive memory usage because the processing step just won't be able to keep up with the pace the items are emitted at:

PublishSubject<Integer> observable = PublishSubject.create();
observable
    .observeOn(Schedulers.computation())
    .subscribe(v -> log("s", v.toString()), this::log);

for (int i = 0; i < 1000000; i++) {
    observable.onNext(i);
}

private void log(Throwable throwable) {
    Log.e("APP", "Error", throwable);
}

By converting this to a Flowable, we can start controlling this behavior:

observable.toFlowable(BackpressureStrategy.MISSING)
    .observeOn(Schedulers.computation())
    .subscribe(v -> log("s", v.toString()), this::log);

Since we have chosen not to specify how we want to handle items that cannot be processed (this is called backpressuring), it will throw a MissingBackpressureException. However, if the number of items was 100 instead of a million, it would have been just fine as it wouldn't hit the internal buffer of Flowable. By default, the size of the Flowable queue (buffer) is 128.

There are a few Backpressure strategies that will define how the excessive amount of items should be handled.

Drop Items

Dropping means that if the downstream processing steps cannot keep up with the pace of the source Observable, just drop the data that cannot be handled. 
This can only be used in the cases when losing data is okay, and you care more about the values that were emitted in the beginning. There are a few ways in which items can be dropped. The first one is just to specify Backpressure strategy, like this: observable.toFlowable(BackpressureStrategy.DROP) Alternatively, it will be like this: observable.toFlowable(BackpressureStrategy.MISSING) .onBackpressureDrop() A similar way to do that would be to call .sample(). It will emit items only periodically, and it will take only the last value that's available (while BackpressureStrategy.DROP drops it instantly unless it is free to push it down the stream). All the other values between "ticks" will be dropped: observable.toFlowable(BackpressureStrategy.MISSING) .sample(10, TimeUnit.MILLISECONDS) .observeOn(Schedulers.computation()) .subscribe(v -> log("s", v.toString()), this::log); Preserve Latest Item Preserving the last items means that if the downstream cannot cope with the items that are being sent to them, stop emitting values and wait until they become available. While waiting, keep dropping all the values except the last one that arrived and when the downstream becomes available to send the last message that's currently stored. Like with Dropping, the "Latest" strategy can be specified while creating an Observable: observable.toFlowable(BackpressureStrategy.LATEST) Alternatively, by calling .onBackpressure(): observable.toFlowable(BackpressureStrategy.MISSING) .onBackpressureLatest() Finally, a method, .debounce(), can periodically take the last value at specific intervals: observable.toFlowable(BackpressureStrategy.MISSING) .debounce(10, TimeUnit.MILLISECONDS) Buffering It's usually a poor way to handle different paces of items being emitted and consumed as it often just delays the problem. However, this can work just fine if there is just a temporal slowdown in one of the consumers. In this case, the items emitted will be stored until later processing and when the slowdown is over, the consumers will catch up. If the consumers cannot catch up, at some point the buffer will run out and we can see a very similar behavior to the original Observable with memory running out. Enabling buffers is, again, pretty straightforward by calling the following: observable.toFlowable(BackpressureStrategy.BUFFER) or observable.toFlowable(BackpressureStrategy.MISSING) .onBackpressureBuffer() If there is a need to specify a particular value for the buffer, one can use .buffer(): observable.toFlowable(BackpressureStrategy.MISSING) .buffer(10) Completable, Single, and Maybe Types Besides the types of Observable and Flowable, there are three more types that RxJava provides: Completable: It represents an action without a result that will be completed in the future Single: It's just like Observable (or Flowable) that returns a single item instead of a stream Maybe: It stands for an action that can complete (or fail) without returning any value (like Completable) but can also return an item like Single However, all these are used quite rarely. Let's take a quick look at the examples. Completable Since Completable can basically process just two types of actions--onComplete and onError--we will cover it very briefly. Completable has many static factory methods available to create it but, most often, it will just be found as a return value in some other libraries. 
For example, the Completable can be created by calling the following: Completable completable = Completable.fromAction(() -> { log("Let's do something"); }); Then, it is to be subscribed with the following: completable.subscribe(() -> { log("Finished"); }, throwable -> { log(throwable); }); Single Single provides a way to represent an Observable that will return just a single item (thus the name). You might ask, why it is worth having it at all? These types are useful to tell the developers about the specific behavior that they should expect. To create a Single, one can use this example: Single.just("One item") The Single and the Subscription to it can be created with the following: Single.just("One item") .subscribe((item) -> { log(item); }, (throwable) -> { log(throwable); }); Make a note that this differs from Completable in that the first argument to the .subscribe() action now expects to receive an item as a result. Maybe Finally, the Maybe type is very similar to the Single type, but the item might not be returned to the subscriber in the end. The Maybe type can be created in a very similar fashion as before: Maybe.empty(); or like Maybe.just("Item"); However, the .subscribe() can be called with arguments dedicated to handling onSuccess (for received items), onError (to handle errors), and onComplete (to do a final action after the item is handled): Maybe.just("Item") .subscribe( s -> log("success: " + s), throwable -> log("error"), () -> log("onComplete") ); Summary In this article, we covered the most essentials parts of RxJava. Resources for Article: Further resources on this subject: The Art of Android Development Using Android Studio [article] Drawing and Drawables in Android Canvas [article] Optimizing Games for Android [article]

Events, Notifications, and Reporting

Packt
04 Jun 2015
55 min read
In this article by Martin Wood, the author of the book Mastering ServiceNow, we will look at communication, which is a key part of any business application. Not only does the boss need to have an updated report by Monday, but your customers and users also want to be kept informed. ServiceNow helps users who want to know what's going on. In this article, we'll explore the functionality available. The platform can notify and provide information to people in a variety of ways:

Registering events and creating Scheduled Jobs to automate functionality
Sending out informational e-mails when something happens
Live dashboards and homepages showing the latest reports and statistics
Scheduled reports that help with handover between shifts
Capturing information with metrics
Presenting a single set of consolidated data with database views

Dealing with events

Firing an event is a way to tell the platform that something happened. Since ServiceNow is a data-driven system, in many cases this means that a record has been updated in some way. For instance, maybe a guest has been made a VIP, or has stayed for 20 nights. Several parts of the system may be listening for an event to happen. When it does, they perform an action. One of these actions may be sending an e-mail to thank our guest for their continued business. These days, e-mail notifications don't need to be triggered by events. However, it is an excellent example.

When you fire an event, you pass through a GlideRecord object and up to two string parameters. The item receiving this data can then use it as necessary, so if we wanted to send an e-mail confirming a hotel booking, we have those details to hand during processing.

Registering events

Before an event can be fired, it must be known to the system. We do this by adding it to Event Registry [sysevent_register], which can be accessed by navigating to System Policy > Events > Registry. It's a good idea to check whether there is already one you can use before you add a new one. An event registration record consists of several fields, but most importantly a string name. An event can be called anything, but by convention it is in a dotted namespace style format. Often, it is prefixed by the application or table name and then by the activity that occurred. Since a GlideRecord object accompanies an event, the table that the record will come from should also be selected. It is also a good idea to describe your event and what will cause it in the Description and Fired by fields. Finally, there is a field that is often left empty, called Queue. This gives us the functionality to categorize events and process them in a specific order or frequency.

Firing an event

Most often, a script in a Business Rule will notice that something has happened and will add an event to the Event [sysevent] queue. This table stores every event that has been fired, whether it has been processed, and what page the user was on when it happened. As the events come in, the platform deals with them in a first in, first out order by default. It finds everything that is listening for this specific event and executes each of them. That may be an e-mail notification or a script. By navigating to System Policy > Events > Event Log, you can view the state of an event, when it was added to the queue, and when it was processed.

To add an event to the queue, use the eventQueue function of GlideSystem. 
It accepts four parameters: the name of the event, a GlideRecord object, and two run time parameters. These can be any text strings, but most often are related to the user that caused the event. Sending an e-mail for new reservations Let's create an event that will fire when a Maintenance task has been assigned to one of our teams. Navigate to System Policy > Events > Registry. Click on New and set the following fields:     Event name: maintenance.assigned     Table: Maintenance [u_maintenance] Next, we need to add the event to the Event Queue. This is easily done with a simple Business Rule:     Name: Maintenance assignment events     Table: Maintenance [u_maintenance]     Advanced: <ticked>     When: after Make sure to always fire events after the record has been written to the database. This stops the possibility of firing an event even though another script has aborted the action. Insert: <ticked> Update: <ticked> Filter Conditions: Assignment group – changes Assignment group – is not empty Assigned to – is empty This filter represents when a task is sent to a new group but someone hasn't yet been identified to own the work. Script: gs.eventQueue('maintenance.assigned', current, gs.getUserID(), gs.getUserName()); This script follows the standard convention when firing events—passing the event name, current, which contains the GlideRecord object the Business Rule is working with, and some details about the user who is logged in. We'll pick this event up later and send an e-mail whenever it is fired. There are several events, such as <table_name>.view, that are fired automatically. A very useful one is the login event. Take a look at the Event Log to see what is happening. Scheduling jobs You may be wondering how the platform processes the event queue. What picks them up? How often are they processed? In order to make things happen automatically, ServiceNow has a System Scheduler. Processing the event queue is one job that is done on a repeated basis. ServiceNow can provide extra worker nodes that only process events. These shift the processing of things such as e-mails onto another system, enabling the other application nodes to better service user interactions. To see what is going on, navigate to System Scheduler > Scheduled Jobs > Today's Scheduled Jobs. This is a link to the Schedule Item [sys_trigger] table, a list of everything the system is doing in the background. You will see a job that collects database statistics, another that upgrades the instance (if appropriate), and others that send and receive e-mails or SMS messages. You should also spot one called events process, which deals with the event queue. A Schedule Item has a Next action date and time field. This is when the platform will next run the job. Exactly what will happen is specified through the Job ID field. This is a reference to the Java class in the platform that will actually do the work. The majority of the time, this is RunScriptJob, which will execute some JavaScript code. The Trigger type field specifies how often the job will repeat. Most jobs are run repetitively, with events process set to run every 30 seconds. Others run when the instance is started—perhaps to preload the cache. Another job that is run on a periodic basis is SMTP Sender. Once an e-mail has been generated and placed in the sys_email table, the SMTP Sender job performs the same function as many desktop e-mail clients: it connects to an e-mail server and asks it to deliver the message. It runs every minute by default. 
This schedule has a direct impact on how quickly our e-mail will be sent out. There may be a delay of up to 30 seconds in generating the e-mail from an event, and a further delay of up to a minute before the e-mail is actually sent. Other jobs may process a particular event queue differently. Events placed into the metric queue will be worked with after 5 seconds. Adding your own jobs The sys_trigger table is a backend data store. It is possible to add your own jobs and edit what is already there, but I don't recommend it. Instead, there is a more appropriate frontend: the Scheduled Job [sysauto] table. The sysauto table is designed to be extended. There are many things that can be automated in ServiceNow, including data imports, sending reports, and creating records, and they each have a table extended from sysauto. Once you create an entry in the sysauto table, the platform creates the appropriate record in the sys_trigger table. This is done through a call in the automation synchronizer Business Rule. Each table extended from sysauto contains fields that are relevant to its automation. For example, a Scheduled Email of Report [sysauto_report] requires e-mail addresses and reports to be specified. Creating events every day Navigate to System Definition > Scheduled Jobs. Unfortunately, the sys_trigger and sysauto tables have very similar module names. Be sure to pick the right one. When you click on New, an interceptor will fire, asking you to choose what you want to automate. Let's write a simple script that will create a maintenance task at the end of a hotel stay, so choose Automatically run a script of your choosing. Our aim is to fire an event for each room that needs cleaning. We'll keep this for midday to give our guests plenty of time to check out. Set the following fields: Name: Clean on end of reservation Time: 12:00:00 Run this script: var res = new GlideRecord('u_reservation'); res.addQuery('u_departure', gs.now()); res.addNotNullQuery('u_room'); res.query(); while (res.next()) { gs.eventQueue('room.reservation_end', res.u_room.getRefRecord()); } Remember to enclose scripts in a function if they could cause other scripts to run. Most often, this is when records are updated, but it is not the case here. Our reliable friend, GlideRecord, is employed to get reservation records. The first filter ensures that only reservations that are ending today will be returned, while the second filter ignores reservations that don't have a room. Once the database has been queried, the records are looped round. For each one, the eventQueue function of GlideSystem is used to add in an event into the event queue. The record that is being passed into the event queue is actually the Room record. The getRefRecord function of GlideElement dot-walks through a reference field and returns a newly initialized GlideRecord object rather than more GlideElement objects. Once the Scheduled Job has been saved, it'll generate the events at midday. But for testing, there is a handy Execute Now UI action. Ensure there is test data that fits the code and click on the button. Navigate to System Policy > Events > Event Log to see the entries. There is a Conditional checkbox with a separate Condition script field. However, I don't often use this; instead, I provide any conditions inline in the script that I'm writing, just like we did here. For anything more than a few lines, a Script Include should be used for modularity and efficiency. 
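As a rough sketch of that tip, the reservation logic above could be moved into a Script Include and called from the Scheduled Job with a single line. The ReservationEvents class name and the fireEndOfStayEvents function are names we have invented for this illustration; the tables, fields, and event name are the ones already used above:

var ReservationEvents = Class.create();
ReservationEvents.prototype = {
    initialize: function() {
    },
    // Fire room.reservation_end for every reservation ending today that has a room
    fireEndOfStayEvents: function() {
        var res = new GlideRecord('u_reservation');
        res.addQuery('u_departure', gs.now());
        res.addNotNullQuery('u_room');
        res.query();
        while (res.next()) {
            gs.eventQueue('room.reservation_end', res.u_room.getRefRecord());
        }
    },
    type: 'ReservationEvents'
};

The Run this script field of the Scheduled Job then shrinks to:

new ReservationEvents().fireEndOfStayEvents();

This keeps the query in one reusable place that can be tested on its own, while the Scheduled Job only decides when it runs.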
Running scripts on events The ServiceNow platform has several items that listen for events. Email Notifications are one, which we'll explore soon. Another is Script Actions. Script Actions is server-side code that is associated with a table and runs against a record, just like a Business Rule. But instead of being triggered by a database action, a Script Action is started with an event. There are many similarities between a Script Action and an asynchronous Business Rule. They both run server-side, asynchronous code. Unless there is a particular reason, stick to Business Rules for ease and familiarity. Just like a Business Rule, the GlideRecord variable called current is available. This is the same record that was passed into the second parameter when gs.eventQueue was called. Additionally, another GlideRecord variable called event is provided. It is initialized against the appropriate Event record on the sysevent table. This gives you access to the other parameters (event.param1 and event.param2) as well as who created the event, when, and more. Creating tasks automatically When creating a Script Action, the first step is to register or identify the event it will be associated with. Create another entry in Event Registry. Event name: room.reservation_end Table: Room [u_room] In order to make the functionality more data driven, let's create another template. Either navigate to System Definition > Templates or create a new Maintenance task and use the Save as Template option in the context menu. Regardless, set the following fields: Name: End of reservation room cleaning Table: Maintenance [u_maintenance] Template: Assignment group: Housekeeping Short description: End of reservation room cleaning Description: Please perform the standard cleaning for the room listed above. To create the Script Action, go to System Policy > Events > Script Actions and use the following details: Name: Produce maintenance tasks Event name: room.reservation_end Active: <ticked> Script: var tsk = new GlideRecord('u_maintenance'); tsk.newRecord(); tsk.u_room = current.sys_id; tsk.applyTemplate('End of reservation room cleaning'); tsk.insert(); This script is quite straightforward. It creates a new GlideRecord object that represents a record in the Maintenance table. The fields are initialized through newRecord, and the Room field is populated with the sys_id of current—which is the Room record that the event is associated with. The applyTemplate function is given the name of the template. It would be better to use a property here instead of hardcoding a template name. Now, the following items should occur every day: At midday, a Scheduled Job looks for any reservations that are ending today For each one, the room.reservation_end event is fired A Script Action will be called, which creates a new Maintenance task The Maintenance task is assigned, through a template, to the Housekeeping group. But how does Housekeeping know that this task has been created? Let's send them an e-mail! Sending e-mail notifications E-mail is ubiquitous. It is often the primary form of communication in business, so it is important that ServiceNow has good support. It is easy to configure ServiceNow to send out communications to whoever needs to know. 
There are a few general use cases for e-mail notifications: Action: Asking the receiver to do some work Informational: Giving the receiver an update or some data Approval: Asking for a decision While this is similar enough to an action e-mail, it is a common enough scenario to make it independent. We'll work through these scenarios in order to understand how ServiceNow can help. There are obviously a lot more ways you can use e-mails. One of them is for a machine-to-machine integration, such as e-bonding. It is possible to do this in ServiceNow, but it is not the best solution. Setting e-mail properties A ServiceNow instance uses standard protocols to send and receive e-mail. E-mails are sent by connecting to an SMTP server with a username and password, just like Outlook or any other e-mail client. When an instance is provisioned, it also gets an e-mail account. If your instance is available at instance.service-now.com through the Web, it has an e-mail address of instance@service-now.com. This e-mail account is not unusual. It is accessible via POP to receive mail, and uses SMTP to send it. Indeed, any standard e-mail account can be used with an instance. Navigate to System Properties > Email to investigate the settings. The properties are unusually laid out in two columns, for sending and receiving for the SMTP and POP connections. When you reach the page, the settings will be tested, so you can immediately see if the platform is capable of sending or receiving e-mails. Before you spend time configuring Email Notifications, make sure the basics work! ServiceNow will only use one e-mail account to send out e-mails, and by default, will only check for new e-mails in one account too. Tracking sent e-mails in the Activity Log One important feature of Email Notifications is that they can show up in the Activity Log if configured. This means that all e-mails associated with a ticket are associated and kept together. This is useful when tracking correspondence with a Requester. To configure the Activity Log, navigate to a Maintenance record. Right-click on the field and choose Personalize Activities. At the bottom of the Available list is Sent/Received Emails. Add it to the Selected list and click on Save. Once an e-mail has been sent out, check back to the Activity Formatter to see the results. Assigning work Our Housekeeping team is equipped with the most modern technology. Not only are they users of ServiceNow, but they have mobile phones that will send and receive e-mails. They have better things to do than constantly refresh the web interface, so let's ensure that ServiceNow will come to them. One of the most common e-mail notifications is for ServiceNow to inform people when they have been assigned a task. It usually gives an overview and a link to view more details. This e-mail tells them that something needs to happen and that ServiceNow should be updated with the result. Sending an e-mail notification on assignment When our Maintenance tasks have the Assignment group field populated, we need the appropriate team members to be aware. We are going to achieve this by sending an e-mail to everyone in that group. At Gardiner Hotels, we empower our staff: they know that one member of the team should pick the task up and own it by setting the Assigned to field to themselves and then get it done. Navigate to System Policy > Email > Notifications. You will see several examples that are useful to understand the basic configuration, but we'll create our own. Click on New. 
The Email Notifications form is split into three main sections: When to send, Who will receive, and What it will contain. Some options are hidden in a different view, so click on Advanced view to see them all. Start off by giving the basic details: Name: Group assignment Table: Maintenance [u_maintenance] Now, let's see each of the sections of Email Notifications form in detail, in the following sections. When to send This section gives you a choice of either using an event to determine which record should be worked with or for the e-mail notification system to monitor the table directly. Either way, Conditions and Advanced conditions lets you provide a filter or a script to ensure you only send e-mails at the right time. If you are using an event, the event must be fired and the condition fields satisfied for the e-mail to be sent. The Weight field is often overlooked. A single event or record update may satisfy the condition of multiple Email Notifications. For example, a common scenario is to send an e-mail to the Assignment group when it is populated and to send an e-mail to the Assigned to person when that is populated. But what if they both happen at the same time? You probably don't want the Assignment group being told to pick up a task if it has already been assigned. One way is to give the Assignment group e-mail a higher weight: if two e-mails are being generated, only the lower weight will be sent. The other will be marked as skipped. Another way to achieve this scenario is through conditions. Only send the Assignment group e-mail if the Assigned to field is empty. Since we've already created an event, let's use it. And because of the careful use of conditions in the Business Rule, it only sends out the event in the appropriate circumstances. That means no condition is necessary in this Email Notification. Send when: Event is fired Event name: maintenance.assigned Who will receive Once we've determined when an e-mail should be sent, we need to know who it will go to. The majority of the time, it'll be driven by data on the record. This scenario is exactly that: the people who will receive the e-mail are those in the Assignment group field on the Maintenance task. Of course, it is possible to hardcode recipients and the system can also deliver e-mails to Users and Groups that have been sent as a parameter when creating the event. Users/Groups in fields: Assignment group You can also use scripts to specify the From, To, CC, and BCC of an e-mail. The wiki here contains more information: http://wiki.servicenow.com/?title=Scripting_for_Email_Notifications Send to event creator When someone comes to me and says: "Martin, I've set up the e-mail notification, but it isn't working. Do you know why?", I like to put money on the reason. I very often win, and you can too. Just answer: "Ensure Send to event creator is ticked and try again". The Send to event creator field is only visible on the Advanced view, but is the cause of this problem. So tick Send to event creator. Make sure this field is ticked, at least for now. If you do not, when you test your e-mail notifications, you will not receive your e-mail. Why? By default, the system will not send confirmation e-mails. If you were the person to update a record and it causes e-mails to be sent, and it turns out that you are one of the recipients, it'll go to everyone other than you. The reasoning is straightforward: you carried out the action so why do you need to be informed that it happened? 
This cuts down on unnecessary e-mails and so is a good thing. But it confuses everyone who first comes across it. If there is one tip I can give to you in this article, it is this – tick the Send to event creator field when testing e-mails. Better still, test realistically! What it will contain The last section is probably the simplest to understand, but the one that takes most time: deciding what to send. The standard view contains just a few fields: a space to enter your message, a subject line, and an SMS alternate field that is used for text messages. Additionally, there is an Email template field that isn't often used but is useful if you want to deliver the same content in multiple e-mail messages. View them by navigating to System Policy > Email > Templates. These fields all support variable substitution. This is a special syntax that instructs the instance to insert data from the record that the e-mail is triggered for. This Maintenance e-mail can easily contain data from the Maintenance record. This lets you create data-driven e-mails. I like to compare it to a mail-merge system; you have some fixed text, some placeholders, and some data, and the platform puts them all together to produce a personalized e-mail. By default, the message will be delivered as HTML. This means you can make your messages look more styled by using image tags and font controls, among other options. Using variable substitution The format for substitution is ${variable}. All of the fields on the record are available as variables, so to include the Short description field in an e-mail, use ${short_description}. Additionally, you can dot-walk. So by having ${assigned_to.email} in the message, you insert the e-mail address of the user that the task is assigned to. Populate the fields with the following information and save: Subject: Maintenance task assigned to your group Message HTML: Hello ${assignment_group}. Maintenance task ${number} has been assigned to your group, for room: ${u_room}. Description: ${description} Please assign to a team member here: ${URI} Thanks! To make this easier, there is a Select variables section on the Message HTML and SMS alternate fields that will create the syntax in a single click. But don't forget that variable substitution is available for the Subject field too. In addition to adding the value of fields, variable substitution like the following ones also makes it easy to add HTML links. ${<reference field>.URI} will create an HTML link to the reference field, with the text LINK ${<reference field>.URI_REF} will create an HTML link, but with the display value of the record as the text Linking to CMS sites is possible through ${CMS_URI+<site>/<page>} Running scripts in e-mail messages If the variables aren't giving you enough control, like everywhere else in ServiceNow, you can add a script. To do so, create a new entry in the Email Scripts [sys_script_email] table, which is available under System Policy > Email > Notification Email Scripts. Typical server-side capability is present, including the current GlideRecord variable. To output text, use the print function of the template object. For example: template.print('Hello, world!'); Like a Script Include, the Name field is important. Call the script by placing ${mail_script:<name>} in the Message HTML field in the e-mail. An object called email is also available. This gives much more control with the resulting e-mail, giving functions such as setImportance, addAddress, and setReplyTo. 
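To make this concrete, here is a minimal sketch of a Notification Email Script, assuming a hypothetical script named maintenance_footer and example addresses you would replace with your own. It only uses the pieces described above: the current record, template.print, and the email object's setImportance, addAddress, and setReplyTo functions.

// Notification Email Script: maintenance_footer (hypothetical name)
// Referenced from the notification body with ${mail_script:maintenance_footer}

// print() writes text into the message at the point where the token appears
template.print('Room: ' + current.u_room.getDisplayValue() + '<br />');
template.print('Priority: ' + current.priority.getDisplayValue() + '<br />');

// the email object adjusts the outgoing message itself
email.setImportance('high'); // flag the message as high priority
email.addAddress('cc', 'duty.manager@example.com', 'Duty Manager'); // example address
email.setReplyTo('no-reply@example.com'); // example address

Placing ${mail_script:maintenance_footer} at the bottom of the Message HTML field would then append the room and priority details and adjust the message each time the notification is sent.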
This wiki has more details: http://wiki.servicenow.com/?title=Scripting_for_Email_Notifications. Controlling the watermark Every outbound mail contains a reference number embedded into the body of the message, in the format Ref:MSG0000100. This is very important for the inbound processing of e-mails, as discussed in a later section. Some options are available to hide or remove the watermark, but this may affect how the platform treats a reply. Navigating to System Mailboxes > Administration > Watermarks shows a full list of every watermark and the associated record and e-mail. Including attachments and other options There are several other options to control how an e-mail is processed: Include Attachments: It will copy any attachments from the record into the e-mail. There is no selection available: it simply duplicates each one every time. You probably wouldn't want this option ticked on many e-mails, since otherwise you will fill up the recipient's inboxes quickly! The attach_links Email Script is a good alternative—it gives HTML links that will let an interested recipient download the file from the instance. Importance: This allows a Low or High priority flag to be set on an e-mail From and Reply-To fields: They'll let you configure who the e-mail purports to be from, on a per–e-mail basis. It is important to realize that this is e-mail spoofing: while the e-mail protocols accept this, it is often used by spam to forge a false address. Sending informational updates Many people rely on e-mails to know what is going on. In addition to telling users when they need to do work, ServiceNow can keep everyone informed as to the current situation. This often takes the form of one of these scenarios: Automatic e-mails, often based on a change of the State field Completely freeform text, with or without a template A combination of the preceding two: a textual update given by a person, but in a structured template Sending a custom e-mail Sometimes, you need to send an e-mail that doesn't fit into a template. Perhaps you need to attach a file, copy in additional people, or want more control over formatting. In many cases, you would turn to the e-mail client on your desktop, such as Outlook or perhaps even Lotus Notes. But the big disadvantage is that the association between the e-mail and the record is lost. Of course, you could save the e-mail and upload it as an attachment, but that isn't as good as it being part of the audit history. ServiceNow comes with a basic e-mail client built in. In fact, it is just shortcutting the process. When you use the e-mail client, you are doing exactly the same as the Email Notifications engine would, by generating an entry in the sys_email table. Enabling the e-mail client The Email Client is accessed by a little icon in the form header of a record. In order to show it, a property must be set in the Dictionary Entry of the table. Navigate to System Definition > Dictionary and find the entry for the u_maintenance table that does not have an entry in the Column name field. The value for the filter is Table - is - u_maintenance and Column name - is – empty. Click on Advanced view. Ensure the Attributes field contains email_client. Navigate to an existing Maintenance record, and next to the attachments icon is the envelope icon. Click on it to open the e-mail client window. The Email Client is a simple window, and the fields should be obvious. Simply fill them out and click on Send to deliver the mail. You may have noticed that some of the fields were prepopulated. 
You can control what each field initially contains by creating an Email Client Template. Navigate to System Policy > Email > Client Templates, click on New, and save a template for the appropriate table. You can use the variable substitution syntax to place the contents of fields in the e-mail. There is a Conditions field you can add to the form to have the right template used. Quick Messages are a way to let the e-mail user populate Message Text, similar to a record template. Navigate to System Policy > Email > Quick Messages and define some text. These are then available in a dropdown selection field at the top of the e-mail client. The e-mail client is often seized upon by customers who send a lot of e-mail. However, it is a simple solution and does not have a whole host of functionality that is often expected. I've found that this gap can be frustrating. For example, there isn't an easy way to include attachments from the parent record. Instead, often a more automated way to send custom text is useful. Sending e-mails with Additional comments and Work notes The journal fields on the task table are useful enough, allowing you to record results that are then displayed on the Activity log in a who, what, when fashion. But sending out the contents via e-mail makes them especially helpful. This lets you combine two actions in one: documenting information against the ticket and also giving an update to interested parties. The Task table has two fields that let you specify who those people are: the Watch list and the Work notes list. An e-mail notification can then use this information in a structured manner to send out the work note. It can include the contents of the work notes as well as images, styled text, and background information. Sending out Work notes The Work notes field should already be on the Maintenance form. Use Form Design to include the Work notes list field too, placing it somewhere appropriate, such as underneath the Assignment group field. Both the Watch list and the Work notes list are List fields (often referred to as Glide Lists). These are reference fields that contain more than one sys_id from the sys_user table. This makes it is easy to add a requester or fulfiller who is interested in updates to the ticket. What is special about List fields is that although they point towards the sys_user table and store sys_id references, they also store e-mail addresses in the same database field. The e-mail notification system knows all about this. It will run through the following logic: If it is a sys_id, the user record is looked up. The e-mail address in the user record is used. If it is an e-mail address, the user record is searched for. If one is found, any notification settings they have are respected. A user may turn off e-mails, for example, by setting the Notification field to Disabled in their user record. If a user record is not found, the e-mail is sent directly to the e-mail address. Now create a new Email Notification and fill out the following fields: Name: Work notes update Table: Maintenance [u_maintenance] Inserted: <ticked> Updated: <ticked> Conditions: Work notes - changes Users/Groups in fields: Work notes list Subject: New work notes update on ${number} Send to event creator: <ticked> Message: ${number} - ${short_description} has a new work note added.   ${work_notes} This simple message would normally be expanded and made to fit into the corporate style guidelines—use appropriate colors and styles. 
By default, the last three entries in the Work notes field would be included. If this wasn't appropriate, the global property could be updated or a mail script could use getJournalEntry(1) to grab the last one. Refer to this wiki article for more information: http://wiki.servicenow.com/?title=Using_Journal_Fields#Restrict_the_Number_of_Entries_Sent_in_a_Notification. To test, add an e-mail address or a user into the Work notes list, enter something into the Work notes field, and save. Don't forget about Send to event creator! This is a typical example of how, normally, the person doing the action wouldn't need to receive the e-mail update, since they were the one doing it. But set it so it'll work with your own updates. Approving via e-mail Graphical Workflow generates records that someone will need to evaluate and make a decision on. Most often, approvers will want to receive an e-mail notification to alert them to the situation. There are two approaches to sending out an e-mail when an approval is needed. An e-mail is associated with a particular record; and with approvals, there are two records to choose from: The Approval record, asking for your decision. The response will be processed by the Graphical Workflow. The system will send out one e-mail to each person that is requested to approve it. The Task record that generated the Approval request. The system will send out one e-mail in total. Attaching notifications to the task is sometimes helpful, since it gives you access to all the fields on the record without dot-walking. This section deals with how the Approval record itself uses e-mail notifications. Using the Approval table An e-mail that is sent out from the Approval table often contains the same elements: Some text describing what needs approving: perhaps the Short description or Priority. This is often achieved by dot-walking to the data through the Approval for reference field. A link to view the task that needs approval. A link to the approval record. Two mailto links that allow the user to approve or reject through e-mail. This style is captured in the Email Template named change.itil.approve.role and is used in an Email Notification called Approval Request that is against the Approval [sys_approver] table. The mailto links are generated through a special syntax: ${mailto:mailto.approval} and ${mailto:mailto.rejection}. These actually refer to Email Templates themselves (navigate to System Policy > Email > Templates and find the template called mailto.approval). Altogether, these generate HTML code in the e-mail message that looks something like this: <a href="mailto:<instance>@service-now.com.com?subject=Re:MAI0001001 - approve&body=Ref:MSG0000001">Click here to approve MAI0001001</a> Normally, this URL would be encoded, but I've removed the characters for clarity. When this link is clicked on in the receiver's e-mail client, it creates a new e-mail message addressed to the instance, with Re:MAI0001001 - approve in the subject line and Ref:MSG0000001 in the body. If this e-mail was sent, the instance would process it and approve the approval record. A later section, on processing inbound e-mails, shows in detail how this happens. Testing the default approval e-mail In the baseline system, there is an Email Notification called Approval Request. It is sent when an approval event is fired, which happens in a Business Rule on the Approval table. 
It uses the e-mail template mentioned earlier, giving the recipient information and an opportunity to approve it either in their web browser, or using their e-mail client. If Howard Johnson was set as the manager of the Maintenance group, he will be receiving any approval requests generated when the Send to External button is clicked on. Try changing the e-mail address in Howard's user account to your own, but ensure the Notification field is set to Enable. Then try creating some approval requests. Specifying Notification Preferences Every user that has access to the standard web interface can configure their own e-mail preferences through the Subscription Notification functionality. Navigate to Self-Service > My profile and click on Notification Preferences to explore what is available. It represents the Notification Messages [cmn_notif_message] table in a straightforward user interface. The Notification Preferences screen shows all the notifications that the user has received, such as the Approval Request and Work notes update configured earlier. They are organized by device. By default, every user has a primary e-mail device. To never receive a notification again, just choose the Off selection and save. This is useful if you are bombarded by e-mails and would rather use the web interface to see updates! If you want to ensure a user cannot unsubscribe, check the Mandatory field in the Email Notification definition record. You may need to add it to the form. This disables the choice, as per the Work notes update notification in the screenshot. Subscribing to Email Notifications The Email Notifications table has a field labeled Subscribable. If this is checked, then users can choose to receive a message every time the Email Notification record's conditions are met. This offers a different way of working: someone can decide if they want more information, rather than the administrator deciding. Edit the Work notes update Email Notification. Switch to the Advanced view, and using Form Design, add the Subscribable field to the Who will receive section on the form. Now make the following changes. Once done, use Insert and Stay to make a copy.     Name: Work notes update (Subscribable)     Users/Groups in fields: <blank>     Subscribable: <ticked> Go to Notification Preferences and click on To subscribe to a new notification click here. The new notification can be selected from the list. Now, every time a Work note is added to any Maintenance record, a notification will be sent to the subscriber. It is important to clear Users/Groups in field if Subscribable is ticked. Otherwise, everyone in the Work notes list will then become subscribed and receive every single subsequent notification for every record! The user can also choose to only receive a subset of the messages. The Schedule field lets them choose when to receive notifications: perhaps only during working hours. The filter lets you define conditions, such as only receiving notifications for important issues. In this instance, a Notification Filter could be created for the Maintenance table, based upon the Priority field. Then, only Work notes for high-priority Maintenance tasks would be sent out. Creating a new device The Notification Devices [cmn_notif_device] table stores e-mail addresses for users. It allows every user to have multiple e-mail addresses, or even register mobile phones for text messages. When a User record is created, a Business Rule named Create primary email device inserts a record in the Notification Devices table. 
The value in the Email field on the User table is just copied to this table by another Business Rule named Update Email Devices. A new device can be added from the Notification Preferences page, or a Related List can be added to the User form. Navigate to User Administration > Users and create a new user. Once saved, you should receive a message saying Primary email device created for user (the username is displayed in place of user). Then add the Notification Device > User Related List to the form where the e-mail address record should be visible. Click on New. The Notification Device form allows you to enter the details of your e-mail- or SMS-capable device. Service provider is a reference field to the Notification Service Provider table, which specifies how an SMS message should be sent. If you have an account with one of the providers listed, enter your details. There are many hundreds of inactive providers in the Notification Service Provider [cmn_notif_service_provider] table. You may want to try enabling some, though many do not work for the reasons discussed soon. Once a device has been added, they can be set up to receive messages through Notification Preferences. For example, a user can choose to receive approval requests via a text message by adding the Approval Request Notification Message and associating their SMS device. Alternatively, they could have two e-mail addresses, with one for an assistant. If a Notification is sent to a SMS device, the contents of the SMS alternate field are used. Remember that a text message can only be 160 characters at maximum. The Notification Device table has a field called Primary Email. This determines which device is used for a notification that has not been sent to this user before. Despite the name, Primary Email can be ticked for an SMS device. Sending text messages Many mobile phone networks in the US supply e-mail-to-SMS gateways. AT&T gives every subscriber an e-mail address in the form of 5551234567@txt.att.net. This allows the ServiceNow instance to actually send an e-mail and have the gateway convert it into an SMS. The Notification Service Provider form gives several options to construct the appropriate e-mail address. In this scheme, the recipient pays for the text message, so the sending of text messages is free. Many European providers do not provide such functionality, since the sender is responsible for paying. Therefore, it is more common to use the Web to deliver the message to the gateway: perhaps using REST or SOAP. This gives an authenticated method of communication, which allows charging. The Notifications Service Provider table also provides an Advanced notification checkbox that enables a script field. The code is run whenever the instance needs to send out an e-mail. This is a great place to call a Script Include that does the actual work, providing it with the appropriate parameters. Some global variables are present: email.SMSText contains the SMS alternate text and device is the GlideRecord of the Notification Device. This means device.phone_number and device.user are very useful values to access. Delivering an e-mail There are a great many steps that the instance goes through to send an e-mail. Some may be skipped or delivered as a shortcut, depending on the situation, but there are usually a great many steps that are processed. An e-mail may not be sent if any one of these steps goes wrong! A record is updated: Most notifications are triggered when a task changes state or a comment is added. 
Use debugging techniques to determine what is changing. These next two steps may not be used if the Notification does not use events. An event is fired: A Business Rule may fire an event. Look under System Policy > Events > Event Log to see if it was fired. The event is processed: A Scheduled Job will process each event in turn. Look in the Event Log and ensure that all events have their state changed to Processed. An Email Notification is processed: The event is associated with an Email Notification or the Email Notification uses the Inserted and Updated checkboxes to monitor a table directly. Conditions are evaluated: The platform checks the associated record and ensures the conditions are met. If not, no further processing occurs. The receivers are evaluated: The recipients are determined from the logic in the Email Notification. The use of Send to event creator makes a big impact on this step. The Notification Device is determined: The Notification Messages table is queried. The appropriate Notification Device is then found. If the Notification Device is set to inactive, the recipient is dropped. The Notification field on the User record will control the Active flag of the Notification Devices. Any Notification Device filters are applied: Any further conditions set in the Notification Preferences interface are evaluated, such as Schedule and Filter. An e-mail record is generated: Variable substitution takes place on the Message Text and a record is saved into the sys_email table, with details of the messages in the Outbox. The Email Client starts at this point. The weight is evaluated: If an Email Notification with a lower weight has already been generated for the same event, the e-mail has the Mailbox field set to Skipped. The email is sent: The SMTP Sender Scheduled Job runs every minute. It picks up all messages in the Outbox, generates the message ID, and connects to the SMTP server specified in Email properties. This only occurs if Mail sending is enabled in the properties. Errors will be visible under System Mailboxes > Outbound > Failed. The generated e-mails can be monitored in the System Mailboxes Application Menu, or through System Logs > Emails. They are categorized into Mailboxes, just like an e-mail client. This should be considered a backend table, though some customers who want more control over e-mail notifications make this more accessible. Knowing who the e-mail is from ServiceNow uses one account when sending e-mails. This account is usually the one provided by ServiceNow, but it can be anything that supports SMTP: Exchange, Sendmail, NetMail, or even Gmail. The SMTP protocol lets the sender specify who the mail is from. By default, no checks are done to ensure that the sender is allowed to send from that address. Every e-mail client lets you specify who the e-mail address is from, so I could change the settings in Outlook to say my e-mail address is president@whitehouse.gov or primeminister@number10.gov.uk. Spammers and virus writers have taken advantage of this situation to fill our mailboxes with unwanted e-mails. Therefore, e-mail systems are doing more authentication and checking of addresses when the message is received. You may have seen some e-mails from your client saying an e-mail has been delivered on behalf of another when this validation fails, or it even falling into the spam directly. ServiceNow uses SPF to specify which IP addresses can deliver service-now.com e-mails. Spam filters often use this to check if a sender is authorized. 
If you spoof the e-mail address, you may need to make an exception for ServiceNow. Read up more about it at: http://en.wikipedia.org/wiki/Sender_Policy_Framework. You may want to change the e-mail addresses on the instance to be your corporate domain. That means that your ServiceNow instance will send the message but will pretend that it is coming from another source. This runs the real risk of the e-mails being marked as spam. Instead, think about only changing the From display (not the e-mail address) or use your own e-mail account. Receiving e-mails Many systems can send e-mails. But isn't it annoying when they are broadcast only? When I get sent a message, I want to be able to reply to it. E-mail should be a conversation, not a fire-and-forget distribution mechanism. So what happens when you reply to a ServiceNow e-mail? It gets categorized, and then processed according to the settings in Inbound Email Actions. Lots of information is available on the wiki: http://wiki.servicenow.com/?title=Inbound_Email_Actions. Determining what an inbound e-mail is Every two minutes, the platform runs the POP Reader scheduled job. It connects to the e-mail account specified in the properties and pulls them all into the Email table, setting the Mailbox to be Inbox. Despite the name, the POP Reader job also supports IMAP accounts. This fires an event called email.read, which in turn starts the classification of the e-mail. It uses a series of logic decisions to determine how it should respond. The concept is that an inbound e-mail can be a reply to something that the platform has already sent out, is an e-mail that someone forwarded, or is part of an e-mail chain that the platform has not seen before; that is, it is a new e-mail. Each of these are handled differently, with different assumptions. As the first step in processing the e-mail, the platform attempts to find the sender in the User table. It takes the address that the e-mail was sent from as the key to search for. If it cannot find a User, it either creates a new User record (if the property is set), or uses the Guest account. Should this e-mail be processed at all? If either of the following conditions match, then the e-mail has the Mailbox set to skipped and no further processing takes place:     Does the subject line start with recognized text such as "out of office autoreply"?     Is the User account locked out? Is this a forward? Both of the following conditions must match, else the e-mail will be checked as a reply:     Does the subject line start with a recognized prefix (such as FW)?     Does the string "From" appear anywhere in the body? Is this a reply? One of the following conditions must match, else the e-mail will be processed as new:     Is there a valid, appropriate watermark that matches an existing record?     Is there an In-Reply-To header in the e-mail that references an e-mail sent by the instance?     Does the subject line start with a recognized prefix (such as RE) and contain a number prefix (such as MAI000100)? If none of these are affirmative, the e-mail is treated as a new e-mail. The prefixes and recognized text are controlled with properties available under System Properties > Email. This order of processing and logic cannot be changed. It is hardcoded into the platform. However, clever manipulation of the properties and prefixes allows great control over what will happen. One common request is to treat forwarded e-mails just like replies. 
To accomplish this, a nonsensical string should be added into the forward_subject_prefix, and the standard values added to the reply_subject prefix. property. For example, the following values could be used: Forward prefix: xxxxxxxxxxx Reply prefix: re:, aw:, r:, fw:, fwd:… This will ensure that a match with the forwarding prefixes is very unlikely, while the reply logic checks will be met. Creating Inbound Email Actions Once an e-mail has been categorized, it will run through the appropriate Inbound Email Action. The main purpose of an Inbound Email Action is to run JavaScript code that manipulates a target record in some way. The target record depends upon what the e-mail has been classified as: A forwarded or new e-mail will create a new record A reply will update an existing record Every Inbound Email Action is associated with a table and a condition, just like Business Rules. Since a reply must be associated with an existing record (usually found using the watermark), the platform will only look for Inbound Email Actions that are against the same table. The platform initializes the GlideRecord object current as the existing record. An e-mail classified as Reply must have an associated record, found via the watermark, the In-Reply-To header, or by running a search for a prefix stored in the sys_number table, or else it will not proceed. Forwarded and new e-mails will create new records. They will use the first Inbound Email Action that meets the condition, regardless of the table. It will then initialize a new GlideRecord object called current, expecting it to be inserted into the table. Accessing the e-mail information In order to make the scripting easier, the platform parses the e-mail and populates the properties of an object called email. Some of the more helpful properties are listed here: email.to is a comma-separated list of e-mail addresses that the e-mail was sent to and was CC'ed to. email.body_text contains the full text of the e-mail, but does not include the previous entries in the e-mail message chain. This behavior is controlled by a property. For example, anything that appears underneath two empty lines plus -----Original Message----- is ignored. email.subject is the subject line of the e-mail. email.from contains the e-mail address of the User record that the platform thinks sent the e-mail. email.origemail uses the e-mail headers to get the e-mail address of the original sender. email.body contains the body of the e-mail, separated into name:value pairs. For instance, if a line of the body was hello:world, it would be equivalent to email.body.hello = 'world'. Approving e-mails using Inbound Email Actions The previous section looked at how the platform can generate mailto links, ready for a user to select. They generate an e-mail that has the word approve or reject in the subject line and watermark in the body. This is a great example of how e-mail can be used to automate steps in ServiceNow. Approving via e-mail is often much quicker than logging in to the instance, especially if you are working remotely and are on the road. It means approvals happen faster, which in turn provides better service to the requesters and reduces the effort for our approvers. Win win! The Update Approval Request Inbound Email Action uses the information in the inbound e-mail to update the Approval record appropriately. Navigate to System Policy > Email > Inbound Actions to see what it does. 
We'll inspect a few lines of the code to get a feel for what is possible when automating actions with incoming e-mails.

Understanding the code in Update Approval Request

One of the first steps within the function, validUser, performs a check to ensure the sender is allowed to update this Approval. They must either be a delegate or the user themselves. Some companies prefer to use an e-Signature method to perform approval, where a password must be entered. This check is not up to that level, but it does go some way towards helping. E-mail addresses (and From strings) can be spoofed in an e-mail client.

Assuming the validation is passed, the Comments field of the Approval record is updated with the body of the e-mail.

current.comments = "reply from: " + email.from + "\n\n" + email.body_text;

In order to set the State field, and thus make the decision on the Approval request, the script simply searches for the word approve or reject within the subject line of the e-mail using the standard indexOf string function. If it is found, the state is set.

if (email.subject.indexOf("approve") >= 0)
    current.state = "approved";
if (email.subject.indexOf("reject") >= 0)
    current.state = "rejected";

Once the fields have been updated, it saves the record. This triggers the standard Business Rules and will run the Workflow as though this was done in the web interface.

Updating the Work notes of a Maintenance task

Most often, a reply to an e-mail is intended to add Additional comments or Work notes to a task. Using scripting, you could differentiate between the two scenarios by checking who sent the e-mail: a requester would provide Additional comments, while a fulfiller may give either, though it is safer to assume Work notes. Let's make a simple Inbound Email Action to process e-mails and populate the Work notes field. Navigate to System Policy > Email > Inbound Actions and click on New. Use these details:

Name: Work notes for Maintenance task
Target table: Maintenance [u_maintenance]
Active: <ticked>
Type: Reply
Script:
current.work_notes = "Reply from: " + email.origemail + "\n\n" + email.body_text;
current.update();

This script is very simple: it just updates our task record after setting the Work notes field with the e-mail address of the sender and the text they sent, separated by a couple of new lines. The platform impersonates the sender, so the Activity Log will show the update as though it was done in the web interface. Once the record has been saved, the Business Rules run as normal. This includes ServiceNow sending out e-mails. Anyone who is in the Work notes list will receive the e-mail. If Send to event creator is ticked, it means the person who sent the e-mail may receive another in return, telling them they updated the task!

Having multiple incoming e-mail addresses

Many customers want to have logic based upon inbound e-mail addresses. For example, sending a new e-mail to invoices@gardiner-hotels.com would create a task for the Finance team, while wifi@gardiner-hotels.com creates a ticket for the Networking group. These addresses are easy to remember and work with, and implementing ServiceNow should not mean that this simplicity is lost. However, ServiceNow provides a single e-mail account, in the format instance@service-now.com, and is not able to provide multiple or custom e-mail addresses.
There are two broad options for meeting this requirement: Checking multiple accounts Redirecting e-mails Using the Email Accounts plugin While ServiceNow only provides a single e-mail address, it has the ability to pull in e-mails from multiple e-mail accounts through the Email Accounts plugin. The wiki has more information here: http://wiki.servicenow.com/?title=Email_Accounts. Once the plugin has been activated, it converts the standard account information into a new Email Account [sys_email_account] record. There can be multiple Email Accounts for a particular instance, and the POP Reader job is repurposed to check each one. Once the e-mails have been brought into ServiceNow, they are treated as normal. Since ServiceNow does not provide multiple e-mail accounts, it is the customer's responsibility to create, maintain, and configure the instance with the details, including the username and passwords. The instance will need to connect to the e-mail account, which is often hosted within the customer's datacenter. This means that firewall rules or other security methods may need to be considered. Redirecting e-mails Instead of having the instance check multiple e-mail accounts, it is often preferable to continue to work with a single e-mail address. The additional e-mail addresses can be redirected to the one that ServiceNow provides. The majority of e-mail platforms, such as Microsoft Exchange, make it possible to redirect e-mail accounts. When an e-mail is received by the e-mail system, it is resent to the ServiceNow account. This process differs from e-mail forwarding: Forwarding involves adding the FW: prefix to the subject line, altering the message body, and changing the From address. Redirection sends the message unaltered, with the original To address, to the new address. There is little indication that the message has not come directly from the original sender. Redirection is often an easier method to work with than having multiple e-mail accounts. It gives more flexibility to the customer's IT team, since they do not need to provide account details to the instance, and enables them to change the redirection details easily. If a new e-mail address has to be added or an existing one decommissioned, only the e-mail platform needs to be involved. It also reduces the configuration on the ServiceNow instance; nothing needs to change. Processing multiple e-mail address Once the e-mails have been brought into ServiceNow, the platform will need to examine who the e-mail was sent to and make some decisions. This will allow the e-mails sent to wifi@gardiner-hotels.com to be routed as tasks to the networking team. There are several methods available for achieving this: A look-up table can be created, containing a list of e-mail addresses and a matching Group reference. The Inbound Email Script would use a GlideRecord query to find the right entry and populate the Assignment group on the new task. The e-mail address could be copied over into a new field on the task. Standard routing techniques, such as Assignment Rules and Data Lookup, could be used to examine the new field and populate the Assignment group. The Inbound Email Action could contain the addresses hardcoded in the script. While this is not a scalable or maintainable solution, it may be appropriate for a simple deployment. Recording Metrics ServiceNow provides several ways to monitor the progress of a task. These are often reported and e-mailed to the stakeholders, thus providing insight into the effectiveness of processes. 
Metrics are a way to record information. It allows the analysis and improvement of a process by measuring statistics, based upon particular defined criteria. Most often, these are time based. One of the most common metrics is how long it takes to complete a task: from when the record was created to the moment the Active flag became false. The duration can then be averaged out and compared over time, helping to answer questions such as Are we getting quicker at completing tasks? Metrics provide a great alternative to creating lots of extra fields and Business Rules on a table. Other metrics are more complex and may involve getting more than one result per task. How long does each Assignment group take to deal with the ticket? How long does an SLA get paused for? How many times does the incident get reassigned? The difference between Metrics and SLAs At first glance, a Metric appears to be very similar to an SLA, since they both record time. However, there are some key differences between Metrics and SLAs: There is no target or aim defined in a Metric. It cannot be breached; the duration is simply recorded. A Metric cannot be paused or made to work to a schedule. There is no Workflow associated with a Metric. In general, a Metric is a more straightforward measurement, designed for collecting statistics rather than being in the forefront when processing a task. Running Metrics Every time the Task table gets updated, the metrics events Business Rule fires an event called metric.update. A Script Action named Metric Update is associated with the event and calls the appropriate Metric Definitions. If you define a metric on a non-task-based table, make sure you fire the metric.update event through a Business Rule. The Metric Definition [metric_definition] table specifies how a metric should be recorded, while the Metric Instance [metric_instance] table records the results. As ever, each Metric Definition is applied to a specific table. The Type field of a Metric Definition refers to two situations: Field value duration is associated with a field on the table. Each time the field changes value, the platform creates a new Metric Instance. The duration for which that value was present is recorded. No code is required, but if some is given, it is used as a condition. Script calculation uses JavaScript to determine what the Metric Instance contains. Scripting a Metric Definition There are several predefined variables available to a Metric Definition: current refers to the GlideRecord under examination and definition is a GlideRecord of the Metric Definition. The MetricInstance Script Include provides some helpful functions, including startDuration and endDuration, but it is really only relevant for time-based metrics. Metrics can be used to calculate many statistics (like the number of times a task is reopened), but code must be written to accomplish this. Monitoring the duration of Maintenance tasks Navigate to Metrics > Definitions and click on New. Set the following fields: Name: Maintenance states Table: Maintenance [u_maintenance] Field: State Timeline: <ticked> Once saved, test it out by changing the State field on a Maintenance record to several different values. Make sure to wait 30 seconds or so between each State change, so that the Scheduled Job has time to fire. Right-click on the Form header and choose Metrics Timeline to visualize the changes in the State field. Adding the Metrics Related List to the Maintenance form will display all the captured data. 
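The Field value duration type needs no code, but the Script calculation type does. As a rough sketch (not a baseline example, and assuming the MetricInstance Script Include exposes a getNewRecord() helper that returns a pre-populated metric_instance record; verify this on your instance), a Metric Definition of this type could count how many times a Maintenance task is reassigned:

// Metric Definition, Type: Script calculation, Table: Maintenance [u_maintenance]
// current and definition are provided to the script automatically
if (current.assigned_to.changes()) {
    var mi = new MetricInstance(definition, current);
    // assumption: getNewRecord() returns a metric_instance GlideRecord
    // already linked to this definition and this task
    var gr = mi.getNewRecord();
    gr.field_value = current.assigned_to.getDisplayValue();
    gr.calculation_complete = true;
    gr.insert();
}

Each reassignment then produces its own Metric Instance, so a simple count per task answers the "how many times does it get reassigned?" question posed earlier.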
Another Related List is available on the Metric Definition form.

Summary

This article showed how to deal with all the data collected in ServiceNow. The key to this is the automated processing of information.

We started by exploring events. When things happen in ServiceNow, the platform can notice and set a flag for processing later. This keeps the system responsive for the user, while ensuring that all the work that needs to get done does get done.

Scheduled Jobs are the backbone of a variety of functions: scheduled reports, scripts, or even task generation. They run on a periodic basis, such as every day or every hour. They are often used for the automatic closure of tasks if the requester hasn't responded recently.

Email Notifications are a critical part of any business application. We explored how e-mails are used to let fulfillers know when they've got work to do, to give requesters a useful update, and to ask approvers for a decision. We even saw how approvers can make that decision using only e-mail.

Every user has a great deal of control over how they receive these notifications. The Notification Preferences interface lets them add multiple devices, including mobile phones to receive text messages.

The Email Client in ServiceNow gives a simple, straightforward interface to send out e-mails, but the Additional comments and Work notes fields are often better and quicker to use. Every e-mail can include the contents of fields and even the output of scripts.

Every two minutes, ServiceNow checks for e-mails sent to its account. If it finds any, each e-mail is categorized as a reply, a forward, or a new message, and Inbound Email Actions run to update or create records.


Using Cloud Applications and Containers

Xavier Bruhiere
10 Nov 2015
7 min read
We can find a certain comfort while developing an application on our local computer. We debug logs in real time. We know the exact location of everything, because we probably started it ourselves.

Make it work, make it right, make it fast - Kent Beck
Premature optimization is the root of all evil - Donald Knuth

So hey, we hack around until interesting results pop up (OK, that's a bit exaggerated). The point is, when it hits the production server, our code will sail a much different sea. And a much more hostile one. So, how do we connect to third-party resources? How do we get a clear picture of what is really happening under the hood? In this post we will try to answer those questions with existing tools. We won't discuss continuous integration or complex orchestration. Instead, we will focus on what it takes to wrap a typical program to make it run as a public service.

A sample application

Before diving into the real problem, we need some code to throw on remote servers. Our sample application below exposes a random key/value store over HTTP.

// app.js

// use redis for data storage
var Redis = require('ioredis');
// and express to expose a RESTful API
var express = require('express');
var app = express();

// connecting to the redis server
var redis = new Redis({
  host: process.env.REDIS_HOST || '127.0.0.1',
  port: process.env.REDIS_PORT || 6379
});

// store a random float at the given path
app.post('/:key', function (req, res) {
  var key = req.params.key;
  var value = Math.random();
  console.log('storing', value, 'at', key);
  // note: redis.set() returns a promise, which is what ends up in the JSON response
  res.json({set: redis.set(key, value)});
});

// retrieve the value at the given path
app.get('/:key', function (req, res) {
  console.log('fetching value at', req.params.key);
  // ioredis returns a promise that resolves with the stored value
  redis.get(req.params.key).then(function (result) {
    res.json({ result: result });
  }).catch(function (err) {
    res.json({ result: err.message });
  });
});

var server = app.listen(3000, function () {
  var host = server.address().address;
  var port = server.address().port;
  console.log('Example app listening at http://%s:%s', host, port);
});

And we define the following package.json and Dockerfile.

{
  "name": "sample-app",
  "version": "0.1.0",
  "scripts": {
    "start": "node app.js"
  },
  "dependencies": {
    "express": "^4.12.4",
    "ioredis": "^1.3.6"
  },
  "devDependencies": {}
}

# Given a correct package.json, those two lines alone will properly install and run our code
FROM node:0.12-onbuild
# application's default port
EXPOSE 3000

A Dockerfile? Yeah, here is a first step toward cloud computation under control. Packing our code and its dependencies into a container will allow us to ship and launch the application with a few reproducible commands.

# download official redis image
docker pull redis

# cd to the root directory of the app and build the container
docker build -t article/sample .

# assuming we are logged in to hub.docker.com, upload the resulting image for future deployment
docker push article/sample

Enough for the preparation, time to actually run the code.

Service Discovery

The server code needs a connection to redis. We can't hardcode it because the host and port are likely to change under different deployments. Fortunately, The Twelve-Factor App provides us with an elegant solution.

The twelve-factor app stores config in environment variables (often shortened to env vars or env). Env vars are easy to change between deploys without changing any code.

Indeed, this strategy integrates smoothly with an infrastructure composed of containers.
docker run --detach --name redis redis # 7c5b7ff0b3f95e412fc7bee4677e1c5a22e9077d68ad19c48444d55d5f683f79 # fetch redis container virtual ip export REDIS_HOST=$(docker inspect -f '{{ .NetworkSettings.IPAddress }}' redis) # note : we don't specify REDIS_PORT as the redis container listens on the default port (6379) docker run -it --rm --name sample --env REDIS_HOST=$REDIS_HOST article/sample # > sample-app@0.1.0 start /usr/src/app # > node app.js # Example app listening at http://:::3000 In another terminal, we can check everything is working as expected. export SAMPLE_HOST=$(docker inspect -f '{{ .NetworkSettings.IPAddress }}' sample)) curl -X POST $SAMPLE_HOST:3000/test # {"set":{"isFulfilled":false,"isRejected":false}} curl -X GET $SAMPLE_HOST:3000/test # {"result":"0.5807915225159377"} We didn't precise any network informations but even so, containers can communicate. This method is widely used and projects like etcd or consul let us automate the whole process. Monitoring Performances can be a critical consideration for end-user experience or infrastructure costs. We should be able to identify bottlenecks or abnormal activities and once again, we will take advantage of containers and open source projects. Without modifying the running server, let's launch three new components to build a generic monitoring infrastructure. Influxdb is a fast time series database where we will store containers metrics. Since we properly defined the application into two single-purpose containers, it will give us an interesting overview of what's going on. # default parameters export INFLUXDB_PORT=8086 export INFLUXDB_USER=root export INFLUXDB_PASS=root export INFLUXDB_NAME=cadvisor # Start database backend docker run --detach --name influxdb --publish 8083:8083 --publish $INFLUXDB_PORT:8086 --expose 8090 --expose 8099 --env PRE_CREATE_DB=$INFLUXDB_NAME tutum/influxdb export INFLUXDB_HOST=$(docker inspect -f '{{ .NetworkSettings.IPAddress }}' influxdb) cadvisor Analyzes resource usage and performance characteristics of running containers. The command flags will instruct it how to use the database above to store metrics. docker run --detach --name cadvisor --volume=/var/run:/var/run:rw --volume=/sys:/sys:ro --volume=/var/lib/docker/:/var/lib/docker:ro --publish=8080:8080 google/cadvisor:latest --storage_driver=influxdb --storage_driver_user=$INFLUXDB_USER --storage_driver_password=$INFLUXDB_PASS --storage_driver_host=$INFLUXDB_HOST:$INFLUXDB_PORT --log_dir=/ # A live dashboard is available at $CADVISOR_HOST:8080/containers # We can also point the brower to $INFLUXDB_HOST:8083, with credentials above, to inspect containers data. # Query example: # > list series # > select time,memory_usage from stats where container_name='cadvisor' limit 1000 # More infos: https://github.com/google/cadvisor/blob/master/storage/influxdb/influxdb.go Grafana is a feature rich metrics dashboard and graph editor for Graphite, InfluxDB and OpenTSB. From its web interface, we will query the database and graph the metrics cadvisor collected and stored. docker run --detach --name grafana -p 8000:80 -e INFLUXDB_HOST=$INFLUXDB_HOST -e INFLUXDB_PORT=$INFLUXDB_PORT -e INFLUXDB_NAME=$INFLUXDB_NAME -e INFLUXDB_USER=$INFLUXDB_USER -e INFLUXDB_PASS=$INFLUXDB_PASS -e INFLUXDB_IS_GRAFANADB=true tutum/grafana # Get login infos generated docker logs grafana  Now we can head to localhost:8000 and build a custom dashboard to monitor the server. 
I won't repeat the comprehensive documentation, but here is a query example:

# note: cadvisor stores metrics in series named 'stats'
select difference(cpu_cumulative_usage) where container_name='cadvisor' group by time 60s

Grafana's autocompletion feature shows us what we can track: CPU, memory, and network usage, among other metrics. We all love screenshots and dashboards, so here is a final reward for our hard work.

Conclusion

Development best practices and a good understanding of powerful tools gave us a rigorous workflow to launch applications with confidence. To sum up:

Containers bundle code and requirements for flexible deployment and execution isolation.
The environment stores third-party service information, giving developers a predictable and robust way to read it.
InfluxDB + cAdvisor + Grafana form a complete monitoring solution, independent of the project implementation.

We fulfilled our expectations, but there is room for improvement. As mentioned, service discovery could be automated, and we also omitted how to manage logs. There are many discussions around this complex subject, and we can expect new improvements in our toolbox shortly.

About the author

Xavier Bruhiere is the CEO of Hive Tech. He contributes to many community projects, including Oculus Rift, Myo, Docker and Leap Motion. In his spare time he enjoys playing tennis, the violin and the guitar. You can reach him at @XavierBruhiere.


Asking if good developers can be great entrepreneurs is like asking if moms can excel at work and motherhood

Aarthi Kumaraswamy
05 Jun 2018
13 min read
You heard me right. Asking, ‘can developers succeed at entrepreneurship?’ is like asking ‘if women should choose between work and having children’. ‘Can you have a successful career and be a good mom?’ is a question that many well-meaning acquaintances still ask me. You see I am a new mom, (not a very new one, my son is 2.5 years old now). I’m also the managing editor of this site since its inception last year when I rejoined work after a maternity break. In some ways, the Packt Hub feels like running a startup too. :) Now how did we even get to this question? It all started with the results of this year's skill up survey. Every year we conduct a survey amongst developers to know the pulse of the industry and to understand their needs, aspirations, and apprehensions better so that we can help them better to skill up. One of the questions we asked them this year was: ‘Where do you see yourself in the next 5 years?’ To this, an overwhelming one third responded stating they want to start a business. Surveys conducted by leading consultancies, time and again, show that only 2 or 3 in 10 startups survive. Naturally, a question that kept cropping up in our editorial discussions after going through the above response was: Can developers also succeed as entrepreneurs? To answer this, first let’s go back to the question: Can you have a successful career and be a good mom? The short answer is, Yes, it is possible to be both, but it will be hard work. The long answer is, This path is not for everyone. As a working mom, you need to work twice as hard, befriend uncertainty and nurture a steady support system that you trust both at work and home to truly flourish. At times, when you see your peers move ahead in their careers or watch stay-at-home moms with their kids, you might feel envy, inadequacy or other unsavory emotions in between. You need superhuman levels of mental, emotional and physical stamina to be the best version of yourself to move past such times with grace, and to truly appreciate the big picture: you have a job you love and a kid who loves you. But what has my experience as a working mom got to do anything with developers who want to start their own business? You’d be surprised to see the similarities. Starting a business is, in many ways, like having a baby. There is a long incubation period, then the painful launch into the world followed by sleepless nights of watching over the business so it doesn’t choke on itself while you were catching a nap. And you are doing this for the first time too, even if you have people giving you advice. Then there are those sustenance issues to look at and getting the right people in place to help the startup learn to turn over, sit up, stand up, crawl and then finally run and jump around till it makes you go mad with joy and apprehension at the same time thinking about what’s next in store now. How do I scale my business? Does my business even need me? To me, being a successful developer, writer, editor or any other profession, for that matter, is about being good at what you do (write code, write stories, spot raw talent, and bring out the best in others etc) and enjoying doing it immensely. It requires discipline, skill, expertise, and will. To see if the similarities continue, let’s try rewriting my statement for a developer turned entrepreneur. Can you be a good developer and a great entrepreneur? This path is not for everyone. 
As a developer-turned-entrepreneur, you need to work twice as hard as your friends who have a full-time job, to make it through the day wearing many hats and putting out workplace fires that have got nothing to do with your product development. You need a steady support system both at work and home to truly flourish. At times, when you see others move ahead in their careers or listen to entrepreneurs who have sold their business to larger companies or just got VC funded, you might feel envy, selfishness, inadequacy or any other emotion in between. You need superhuman levels of maturity to move past such times, and to truly appreciate the big picture: you built something incredible and now you are changing the world, even if it is just one customer at a time with your business. Now that we sail on the same boat, here are the 5 things I learned over the last year as a working mom that I hope you as a developer-entrepreneur will find useful. I’d love to hear your take on them in the comments below. [divider style="normal" top="20" bottom="20"] #1. Become a productivity master. Compartmentalize your roles and responsibilities into chunks spread across the day. Ruthlessly edit out clutter from your life. [divider style="normal" top="20" bottom="20"] Your life changed forever when your child (business) was born. What worked for you as a developer may not work as an entrepreneur. The sooner you accept this, the faster you will see the differences and adapt accordingly. For example, I used to work quite late into the night and wake up late in the morning before my son was born. I also used to binge watch TV shows during weekends. Now I wake up as early as 4.30 AM so that I can finish off the household chores for the day and get my son ready for play school by 9 AM. I don’t think about work at all at this time. I must accommodate my son during lunch break and have a super short lunch myself. But apart from that I am solely focused on the work at hand during office hours and don’t think about anything else. My day ends at 11 PM. Now I am more choosy about what I watch as I have very less time for leisure. I instead prefer to ‘do’ things like learning something new, spending time bonding with my son, or even catching up on sleep, on weekends. This spring-cleaning and compartmentalization took a while to get into habit, but it is worth the effort. Truth be told, I still occasionally fallback to binging, like I am doing right now with this article, writing it at 12 AM on a Saturday morning because I’m in a flow. As a developer-entrepreneur, you might be tempted to spend most of your day doing what you love, i.e., developing/creating something because it is a familiar territory and you excel at that. But doing so at the cost of your business will cost you your business, sooner than you think. Resist the urge and instead, organize your day and week such that you spend adequate time on key aspects of your business including product development. Make a note of everything you do for a few days and then decide what’s not worth doing and what else can you do instead in its place. Have room only for things that matter to you which enrich your business goals and quality of life. [divider style="normal" top="20" bottom="20"] #2. Don’t aim to create the best, aim for good enough. Ship the minimum viable product (MVP). [divider style="normal" top="20" bottom="20"] All new parents want the best for their kids from the diaper they use to the food they eat and the toys they play with. 
They can get carried away buying stuff that they think their child needs only to have a storeroom full of unused baby items. What I’ve realized is that babies actually need very less stuff. They just need basic food, regular baths, clean diapers (you could even get away without those if you are up for the challenge!), sleep, play and lots of love (read mommy and daddy time, not gifts). This is also true for developers who’ve just started up. They want to build a unique product that the world has never seen before and they want to ship the perfect version with great features and excellent customer reviews. But the truth is, your first product is, in fact, a prototype built on your assumption of what your customer wants. For all you know, you may be wrong about not just your customer’s needs but even who your customer is. This is why a proper market study is key to product development. Even then, the goal should be to identify the key features that will make your product viable. Ship your MVP, seek quick feedback, iterate and build a better product in the next version. This way you haven’t unnecessarily sunk upfront costs, you’re ahead of your competitors and are better at estimating customer needs as well. Simplicity is the highest form of sophistication. You need to find just the right elements to keep in your product and your business model. As Michelangelo would put it, “chip away the rest”. [divider style="normal" top="20" bottom="20"] #3. There is no shortcut to success. Don’t compromise on quality or your values. [divider style="normal" top="20" bottom="20"] The advice to ship a good enough product is not a permission to cut corners. Since their very first day, babies watch and learn from us. They are keen observers and great mimics. They will feel, talk and do what we feel, say and do, even if they don’t understand any of the words or gestures. I think they do understand emotions. One more reason for us to be better role models for our children. The same is true in a startup. The culture mimics the leader and it quickly sets in across the organization. As a developer, you may have worked long hours, even into the night to get that piece of code working and felt a huge rush from the accomplishment. But as an entrepreneur remember that you are being watched and emulated by those who work for you. You are what they want to be when they ‘grow up’. Do you want a crash and burn culture at your startup? This is why it is crucial that you clarify your business’s goals, purpose, and values to everyone in your company, even if it just has one other person working there. It is even more important that you practice what you preach. Success is an outcome of right actions taken via the right habits cultivated. [divider style="normal" top="20" bottom="20"] #4. You can’t do everything yourself and you can’t be everywhere. You need to trust others to do their job and appreciate them for a job well done. This also means you hire the right people first. [divider style="normal" top="20" bottom="20"] It takes a village to raise a child, they say. And it is very true, especially in families where both parents work. It would’ve been impossible for me to give my 100% at work, if I had to keep checking in on how my son is doing with his grandparents or if I refused the support my husband offers sharing household chores. Just because you are an exceptional developer, you can’t keep micromanaging product development at your startup. 
As a founder, you have a larger duty towards your entire business and customers. While it is good to check how your product is developing, your pure developer days are behind you. Find people better than you at this job, and for the key functions of your startup surround yourself with people who are good at what they do and who share your values. Products succeed because of developers; businesses succeed because their leader knew when to get out of the way and when to step in. [divider style="normal" top="20" bottom="20"] #5. Customer first, product next, ego last. [divider style="normal" top="20" bottom="20"] This has been the toughest lesson so far. It looks deceptively simple in theory but is hard to practice in everyday life. As developers and engineers, we are primed to find solutions. We also see things in binaries: success and failure, right and wrong, good and bad. This is a great quality to possess when building original products. However, it is also the Achilles heel of the developer turned entrepreneur. In business, things are seldom black and white. Putting the product first can be detrimental to knowing your customers. For example, you build a great product, but the customer is not using it as you intended it to be used. Do you train your customers to correct their ways, or do you unearth the underlying triggers that contribute to that customer behavior and alter your product's vision? The answer is, 'it depends'. And your job as an entrepreneur is to enable your team to find the factors that influence your decision on the subject; it is not to find the answer yourself or make a decision based on your experience alone. You need to listen more than you talk to truly put your customer first. To do that, you need to acknowledge that you don't know everything. Put your ego last. Make it easy for customers to share their views with you directly. Acknowledge that your product/service exists to serve your customers' needs; it needs to evolve with your customers. Find yourself a good mentor or two who you respect and want to emulate. They will be your sounding board and the light at the end of the tunnel during your darkest hours (you will have many of those, I guarantee). Be grateful for your support network of family, friends, and colleagues, and make sure to let them know how much you appreciate them for playing their part in your success. If you have the right partner to start the business with, jump in with both feet. Most tech startups have a higher likelihood of succeeding when they have more than one founder, probably because the demands of tech and business are better balanced on more than one pair of shoulders. How we frame our questions is a big part of the problem. Reframing them makes us see different perspectives, thereby changing our mindsets. Instead of asking 'can working moms have it all?', what if we asked 'what can organizations do to help working moms achieve work-life balance?' or 'how do women cope with competing demands at work and home?' Instead of asking 'can developers be great entrepreneurs?', better questions to ask are 'what can developers do to start a successful business?' and 'what can we learn from those who have built successful companies?' Keep an open mind; the best ideas may come from the unlikeliest sources!

Building Your Application

Packt
10 Feb 2016
12 min read
"Measuring programming progress by lines of code is like measuring aircraft building progress by weight."                                                                --Bill Gates In this article, by Tarun Arora, the author of the book Microsoft Team Foundation Server 2015 Cookbook, provides you information about: Configuring TFBuild Agent, Pool, and Queues Setting up a TFBuild Agent using an unattended installation (For more resources related to this topic, see here.) As a developer, compiling code and running unit tests gives you an assurance that your code changes haven't had an impact on the existing codebase. Integrating your code changes into the source control repository enables other users to validate their changes with yours. As a best practice, Teams integrate changes into the shared repository several times a day to reduce the risk of introducing breaking changes or worse, overwriting each other's. Continuous integration (CI) is a development practice that requires developers to integrate code into a shared repository several times a day. Each check-in is verified by an automated build, allowing Teams to detect problems early. The automated build that runs as part of the CI process is often referred to as the CI build. There isn't a clear definition of what the CI build should do, but at the very minimum, it is expected to compile code and run unit tests. Running the CI build on a non-developer remote workspace helps identify the dependencies that may otherwise go unnoticed into the release process. We can talk endlessly about the benefits of CI; the key here is that it enables you to have potentially deployable software at all times. Deployable software is the most tangible asset to customers. Moving from concept to application, in this article, you'll learn how to leverage the build tooling in TFS to set up a quality-focused CI process. But first, let's have a little introduction to the build system in TFS. The following image illustrates the three generations of build systems in TFS: TFS has gone through three generations of build systems. The very first was MSBuild using XML for configuration; the next one was XAML using Windows Workflow Foundation for configuration, and now, there's TFBuild using JSON for configuration. The XAML-based build system will continue to be supported in TFS 2015. No automated migration path is available from XAML build to TFBuild. This is generally because of the difference in the architecture between the two build systems. The new build system in TFS is called Team Foundation Build (TFBuild). It is an extensible task-based execution system with a rich web interface that allows authoring, queuing, and monitoring builds. TFBuild is fully cross platform with the underlying build agents that are capable of running natively on both Windows and non-Windows platforms. TFBuild provides out-of-the-box integration with Centralized Version Control such as TFVC and Distributed Version Controls such as Git and GitHub. TFBuild supports building .NET, Java, Android, and iOS applications. All the recipes in this article are based on TFBuild. TFBuild is a task orchestrator that allows you to run any build engine, such as Ant, CMake, Gradle, Gulp, Grunt, Maven, MSBuild, Visual Studio, Xamarin, XCode, and so on. TFBuild supports work item integration, publishing drops, and publishing test execution results into the TFS that is independent of the build engine that you choose. The build agents are xCopyable and do not require any installation. 
The agents are auto-updating in nature; there's no need to update every agent in your infrastructure: TFBuild offers a rich web-based interface. It does not require Visual Studio to author or modify a build definition. From simple to complex, all build definitions can easily be created in the web portal. The web interface is accessible from any device and any platform: The build definition can be authored from the web portal directly A build definition is a collection of tasks. A task is simply a build step. Build definition can be composed by dragging and dropping tasks. Each task supports Enabled, Continue on error, and Always run flags making it easier to manage build definitions as the task list grows: The build system supports invoking PowerShell, batch, command line, and shell scripts. All out-of-the-box tasks are open source. If a task does not satisfy your requirements, you can download the task from GitHub at https://github.com/Microsoft/vso-agent-tasks and customize it. If you can't find a task, you can easily create one. You'll learn more about custom tasks in this article. Changes to build definitions can be saved as drafts. Build definitions maintain a history of all changes in the History tab. A side-by-side comparison of the changes is also possible. Comments entered when changing the build definition show up in the change history: Build definitions can be saved as templates. This helps standardize the use of certain tasks across new build definitions: An existing build definition can be saved as a template Multiple triggers can be set for the same build, including CI triggers and multiple scheduled triggers: Rule-based retention policies support the setting up of multiple rules. Retention can be specified by "days" or "number" of the builds: The build output logs are displayed in web portal in real time. The build log can be accessed from the console even after the build gets completed: The build reports have been revamped to offer more visibility into the build execution, and among other things, the test results can now directly be accessed from the web interface. The .trx file does not need to be downloaded into Visual Studio to view the test results: The old build system had restrictions on one Team Project Collection per build controller and one controller per build machine. TFBuild removes this restriction and supports the reuse of queues across multiple Team Project Collections. The following image illustrates the architecture of the new build system: In the preceding diagram, we observe the following: Multiple agents can be configured on one machine Agents from across different machines can be grouped into a pool Each pool can have only one queue One queue can be used across multiple Team Project Collections To demonstrate the capabilities of TFBuild, we'll use the FabrikamTFVC and FabrikamGit Team Projects. Configuring TFBuild Agent, Pool, and Queues In this recipe, you'll learn how to configure agents and create pools and queues. You'll also learn how a queue can be used across multiple Team Project Collections. Getting ready Scenario: At Fabrikam, the FabrikamTFVC and FabrikamGit Team Projects need their own build queues. The FabrikamTFVC Teams build process can be executed on a Windows Server. The FabrikamGit Team build process needs both Windows and OS X. The Teams want to set up three build agents on a Windows Server; one build agent on an OS X machine. 
The Teams want to group two Windows Agents into a Windows Pool for FabrikamTFVC Team and group one Windows and one Mac Agent into another pool for the FabrikamGit Team: Permission: To configure a build agent, you should be in the Build Administrators Group. The prerequisites for setting up the build agent on a Windows-based machine are as follows: The build agent should have a supporting version of Windows. The list of supported versions is listed at https://msdn.microsoft.com/en-us/Library/vs/alm/TFS/administer/requirements#Operatingsystems. The build agent should have Visual Studio 2013 or 2015. The build agent should have PowerShell 3 or a newer version. A build agent is configured for your TFS as part of the server installation process if you leave the Configure the build service to start automatically option selected: For the purposes of this recipe, we'll configure the agents from scratch. Delete the default pool or any other pool you have by navigating to the Agent pools option in the TFS Administration Console http://tfs2015:8080/tfs/_admin/_AgentPool: How to do it Log into the Windows machine that you desire to set the agents upon. Navigate to the Agent pools in the TFS Administration Console by browsing to http://tfs2015:8080/tfs/_admin/_AgentPool. Click on New Pool, enter the pool name as Pool 1, and uncheck Auto-Provision Queue in Project Collections: Click on the Download agent icon. Copy the downloaded folder into E: and unzip it into E:Win-A1. You can use any drive; however, it is recommended to use the non-operating system drive: Run the PowerShell console as an administrator and change the current path in PowerShell to the location of the agent in this case E:Win-A1. Call the ConfigureAgent.ps1 script in the PowerShell console and click on Enter. This will launch the Build Agent Configuration utility: Enter the configuration details as illustrated in the following screenshot: It is recommended to install the build agent as a service; however, you have an option to run the agent as an interactive process. This is great when you want to debug a build or want to temporarily use a machine as a build agent. The configuration process creates a JSON settings file; it creates the working and diagnostics folders: Refresh the Agent pools page in the TFS Administration Console. The newly configured agent shows up under Pool 1: Repeat steps 2 to 5 to configure Win-A2 in Pool 1. Repeat steps 1 to 5 to configure Win-A3 in Pool 2. It is worth highlighting that each agent runs from its individual folder: Now, log into the Mac machine and launch terminal: Install the agent installer globally by running the commands illustrated here. You will be required to enter the machine password to authorize the install: This will download the agent in the user profile, shown as follows: The summary of actions performed when the agent is downloaded Run the following command to install the agent installer globally for the user profile: Running the following command will create a new directory called osx-A1 for the agent; create the agent in the directory: The agent installer has been copied from the user profile into the agent directory, shown as follows: Pass the following illustrated parameters to configure the agent: This completes the configuration of the xPlatform agent on the Mac. Refresh the Agent pools page in the TFS Administration Console to see the agent appear in Pool 2: The build agent has been configured at the Team Foundation Server level. 
In order to use the build agent for a Team Project Collection, a mapping between the build agent and the Team Project Collection needs to be established. This is done by creating queues. To configure queues, navigate to the Collection Administration Console by browsing to http://tfs2015:8080/tfs/DefaultCollection/_admin/_BuildQueue. From the Build tab, click on New queue; this dialog allows you to reference the pool as a queue: Map Pool 1 as Queue 1 and Pool 2 as Queue 2 as shown here: The TFBuild Agent, Pools, and Queues are now ready to use. The green bar before the agent name and queue in the administration console indicates that the agent and queues are online. How it works... To test the setup, create a new build definition by navigating to the FabrikamTFVC Team Project Build hub by browsing to http://tfs2015:8080/tfs/DefaultCollection/FabrikamTFVC/_build. Click on the Add a new build definition icon. In the General tab, you'll see that the queues show up under the Queue dropdown menu. This confirms that the queues have been correctly configured and are available for selection in the build definition: Pools can be used across multiple Team Project Collections. As illustrated in the following screenshot, in Team Project Collection 2, clicking on New queue... shows that the existing pools are already mapped in the default collection: Setting up a TFBuild Agent using an unattended installation The new build framework allows the unattended setup of build agents by injecting a set of parameter values via script. This technique can be used to spin up new agents to be attached to an existing agent pool. In this recipe, you'll learn how to configure and unconfigure a build agent via script. Getting ready Scenario: The FabrikamTFVC Team wants the ability to install, configure, and unconfigure a build agent directly via script without having to perform this operation using the Team Portal. Permission: To configure a build agent, you should be in the Build Administrators Group. Download the build agent as discussed in the earlier recipe Configuring TFBuild Agent, Pool, and Queues. Copy the folder to E:\Agent. The script refers to this Agent folder. How to do it... Launch PowerShell in elevated mode and execute the following command: .\Agent\VsoAgent.exe /Configure /RunningAsService /ServerUrl:"http://tfs2015:8080/tfs" /WindowsServiceLogonAccount:svc_build /WindowsServiceLogonPassword:xxxxx /Name:WinA-10 /PoolName:"Pool 1" /WorkFolder:"E:\Agent\_work" /StartMode:Automatic Replace the values of the username and password accordingly. Executing the script will result in the following output: The script installs an agent by the name WinA-10 as a Windows service running as svc_build. The agent is added to Pool 1: To unconfigure WinA-10, run the following command in an elevated PowerShell prompt: .\Agent\VsoAgent.exe /Unconfigure "vsoagent.tfs2015.WinA-10" To unconfigure, the script needs to be executed from outside the Agent folder; running it from within the Agent folder will result in an error message. How it works... The new build agent natively allows configuration via script. A new capability called Personal Access Token (PAT) is due for release in future updates of TFS 2015. PAT allows you to generate a personal OAuth token for a specific scope; it replaces the need to key passwords into configuration files.
Resources for Article: Further resources on this subject: Overview of Process Management in Microsoft Visio 2013 [article] Introduction to the Raspberry Pi's Architecture and Setup [article] Implementing Microsoft Dynamics AX [article]

SDLC puts process at the center of software engineering

Richard Gall
22 May 2018
7 min read
What is SDLC? SDLC stands for software development lifecycle. It refers to all of the different steps that software engineers need to take when building software. This includes planning, creating, building and then deploying software, but maintenance is also crucial too. In some instances you may need to change or replace software - that is part of the software development lifecycle as well. SDLC is about software quality and development efficiency SDLC is about more than just the steps in the software development process. It's also about managing that process in a way that improves quality while also improving efficiency. Ultimately, there are numerous ways of approaching the software development lifecycle - Waterfall and Agile are the too most well known methodologies for managing the development lifecycle. There are plenty of reasons you might choose one over another. What is most important is that you pay close attention to what the software development lifecycle looks like. It sounds obvious, but it is very difficult to build software without a plan in place. Things can get chaotic very quickly. If it does, that's bad news for you, as the developer, and bad news for users as well. When you don't follow the software development lifecycle properly, you're likely to miss user requirements, and faults will also find their way into your code. The stages of the software development lifecycle (SDLC) There are a number of ways you might see an SDLC presented but the core should always be the same. And yes, different software methodologies like Agile and Waterfall outline very different ways of working, but broadly the steps should be the same. What differs between different software methodologies is how each step fits together. Step 1: Requirement analysis This is the first step in any SDLC. This is about understanding everything that needs to be understood in as much practical detail as possible. It might mean you need to find out about specific problems that need to be solved. Or, there might be certain things that users need that you need to make sure are in the software. To do this, you need to do good quality research, from discussing user needs with a product manager to revisiting documentation on your current systems. It is often this step that is the most challenging in the software development lifecycle. This is because you need to involve a wide range of stakeholders. Some of these might not be technical, and sometimes you might simply use a different vocabulary. It's essential that you have a shared language to describe everything from the user needs to the problems you might be trying to solve. Step 2: Design the software Once you have done a requirement analysis you can begin designing the software. You do this by turning all the requirements and software specifications into a design document. This might feel like it slows down the development process, but if you don't do this, not only are you wasting the time taken to do your requirement analysis, you're also likely to build poor quality or even faulty software. While it's important not to design by committee or get slowed down by intensive nave-gazing, keeping stakeholders updated and requesting feedback and input where necessary can be incredibly important. Sometimes its worth taking that extra bit of time, as it could solve a lot of problems later in the SDLC. 
Step 3: Plan the project Once you have captured requirements and feel you have properly understood exactly what needs to be delivered - as well as any potential constraints - you need to plan out how you're going to build that software. To do this you'll need to have an overview of the resources at your disposal. These are the sorts of questions you'll need to consider at this stage: Who is available? Are there any risks? How can we mitigate them? What budget do we have for this project? Are there any other competing projects? In truth, you'll probably do this during the design stage. The design document you create should, of course, be developed with context in mind. It's pointless creating a stunning design document, outlining a detailed and extensive software development project if it's simply not realistic for your team to deliver it. Step 4: Start building the software Now you can finally get down to the business of actually writing code. With all the work you have done in the previous steps this should be a little easier. However, it's important to remember that imperfection is part and parcel of software engineering. There will always be flaws in your software. That doesn't necessarily mean bugs or errors, however; it could be small compromises that need to be made in order to ensure something works. The best approach here is to deliver rapidly. The sooner you can get software 'out there' the faster you can make changes and improvements if (or more likely when) they're needed. It's worth involving stakeholders at this stage - transparency in the development process is a good way to build collaboration and ensure the end result delivers on what was initially in the requirements. Step 5: Testing the software Testing is, of course, an essential step in the software development lifecycle. This is where you identify any problems. That might be errors or performance issues, but you may find you haven't quite been able to deliver what you said you would in the design document. The continuous integration server is important here, as the continuous integration server can help to detect any problems with the software. The rise of automated software testing has been incredibly valuable; it means that instead of spending time manually running tests, engineers can dedicate more time to fixing problems and optimizing code. Step 6: Deploy the software The next step is to deploy the software to production. All the elements of the software should now be in place, and you want it to simply be used. It's important to remember that there will be problems here. Testing can never capture every issue, and feedback and insight from users are going to be much more valuable than automated tests run on a server. Continuous delivery pipelines allow you to deploy software very efficiently. This makes the build-test-deploy steps of the software development lifecycle to be relatively frictionless. Okay, maybe not frictionless - there's going to be plenty of friction when you're developing software. But it does allow you to push software into production very quickly. Step 7: Maintaining software Software maintenance is a core part of the day-to-day life of a software engineer. Its a crucial step in the SDLC. There are two forms of software maintenance; both are of equal importance. Evolutive maintenance and corrective maintenance. Evolutive maintenance As the name suggests, evolutive maintenance is where you evolve software by adding in new functionality or making larger changes to the logic of the software. 
These changes should be a response to feedback from stakeholders or, more importantly, users. There may be times when business needs dictate this type of maintenance - this is never ideal, but it is nevertheless an important part of a software engineer's work. Corrective maintenance Corrective maintenance isn't quite as interesting or creative as evolutive maintenance - it's about fixing bugs and errors in the code. This sort of maintenance can feel like a chore, and ideally you want to minimize the amount of time you spend doing this. However, if you're following SDLC closely, you shouldn't find too many bugs in your software. The benefits of SDLC are obvious The benefits of SDLC are clear. It puts process at the center of software engineering. Without those processes it becomes incredibly difficult to build the software that stakeholders and users want. And if you don't care about users then, really, why build software at all. It's true that DevOps has done a lot to change SDLC. Arguably, it is an area that is more important and more hotly debated than ever before. It's not difficult to find someone with an opinion on the best way to build something. Equally, as software becomes more fragmented and mutable, thanks to the emergence of cloud and architectural trends like microservices and serverless, the way we design, build and deploy software has never felt more urgent. Read next DevOps Engineering and Full-Stack Development – 2 Sides of the Same Agile Coin

Building Applications with Spring Data Redis

Packt
03 Dec 2012
9 min read
(For more resources related to Spring, see here.)

Designing a Redis data model

The most important rule of designing a Redis data model is this: Redis does not support ad hoc queries, and it does not support relations in the same way as relational databases. Thus, designing a Redis data model is a totally different ballgame than designing the data model of a relational database. The basic guidelines of Redis data model design are given as follows: Instead of simply modeling the information stored in our data model, we also have to think about how we want to search for information in it. This often leads to a situation where we have to duplicate data in order to fulfill the requirements given to us. Don't be afraid to do this. We should not concentrate on normalizing our data model. Instead, we should combine the data that we need to handle as a unit into an aggregate. Since Redis does not support relations, we have to design and implement these relations by using the supported data structures. This means that we have to maintain these relations manually when they are changed. Because this might require a lot of effort and code, it could be wise to simply duplicate the information instead of using relations. It is always wise to spend a moment to verify that we are using the correct tool for the job. NoSQL Distilled, by Martin Fowler, contains explanations of different NoSQL databases and their use cases, and can be found at http://martinfowler.com/books/nosql.html.

Redis supports multiple data structures. However, one question remains unanswered: which data structure should we use for our data? This question is addressed in the following list:

String: A string is a good choice for storing information that is already converted to a textual form. For instance, if we want to store HTML, JSON, or XML, a string should be our weapon of choice.
List: A list is a good choice if we will access it only near the start or end. This means that we should use it for representing queues or stacks.
Set: We should use a set if we need to get the size of a collection or check whether a certain item belongs to it. Also, if we want to represent relations, a set is a good choice (for example, "who are John's friends?").
Sorted set: Sorted sets should be used in the same situations as sets when the ordering of items is important to us.
Hash: A hash is a perfect data structure for representing complex objects.

Key components

Spring Data Redis provides certain components that are the cornerstones of each application that uses it. This section provides a brief introduction to the components that we will later use to implement our example applications.

Atomic counters

Atomic counters are for Redis what sequences are for relational databases. Atomic counters guarantee that the value received by a client is unique. This makes these counters a perfect tool for creating unique IDs for the data that is stored in Redis. At the moment, Spring Data Redis offers two atomic counters: RedisAtomicInteger and RedisAtomicLong. These classes provide atomic counter operations for integers and longs.

RedisTemplate

The RedisTemplate<K,V> class is the central component of Spring Data Redis. It provides methods that we can use to communicate with a Redis instance. This class requires that two type parameters are given during its instantiation: the type of the Redis key and the type of the Redis value.
Operations

The RedisTemplate class provides two kinds of operations that we can use to store, fetch, and remove data from our Redis instance:

Operations that require that the key and the value are given every time an operation is performed. These operations are handy when we have to execute a single operation by using a key and a value.
Operations that are bound to a specific key that is given only once. We should use this approach when we have to perform multiple operations by using the same key.

The methods that require that a key and a value are given every time an operation is performed are described in the following list:

HashOperations<K,HK,HV> opsForHash(): This method returns the operations that are performed on hashes
ListOperations<K,V> opsForList(): This method returns the operations performed on lists
SetOperations<K,V> opsForSet(): This method returns the operations performed on sets
ValueOperations<K,V> opsForValue(): This method returns the operations performed on simple values
ZSetOperations<K,V> opsForZSet(): This method returns the operations performed on sorted sets

The methods of the RedisTemplate class that allow us to execute multiple operations by using the same key are described in the following list:

BoundHashOperations<K,HK,HV> boundHashOps(K key): This method returns hash operations that are bound to the key given as a parameter
BoundListOperations<K,V> boundListOps(K key): This method returns list operations bound to the key given as a parameter
BoundSetOperations<K,V> boundSetOps(K key): This method returns set operations that are bound to the given key
BoundValueOperations<K,V> boundValueOps(K key): This method returns operations performed on simple values that are bound to the given key
BoundZSetOperations<K,V> boundZSetOps(K key): This method returns operations performed on sorted sets that are bound to the key given as a parameter

The differences between these operations become clear to us when we start building our example applications.

Serializers

Because the data is stored in Redis as bytes, we need a method for converting our data to bytes and vice versa. Spring Data Redis provides an interface called RedisSerializer<T>, which is used in the serialization process. This interface has one type parameter that describes the type of the serialized object. Spring Data Redis provides several implementations of this interface, described in the following list:

GenericToStringSerializer<T>: Serializes strings to bytes and vice versa, using the Spring ConversionService to transform objects to strings and back.
JacksonJsonRedisSerializer<T>: Converts objects to JSON and vice versa.
JdkSerializationRedisSerializer: Provides Java-based serialization of objects.
OxmSerializer: Uses the Object/XML mapping support of Spring Framework 3.
StringRedisSerializer: Converts strings to bytes and vice versa.

We can customize the serialization process of the RedisTemplate class by using the described serializers. The RedisTemplate class provides flexible configuration options that can be used to set the serializers used for keys, values, hash keys, hash values, and string values. The default serializer of the RedisTemplate class is JdkSerializationRedisSerializer. However, the string serializer is an exception to this rule: StringRedisSerializer is the serializer that is used by default to serialize string values.
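To make the serializer discussion concrete, the following sketch shows how a RedisTemplate could be configured to store Contact objects as JSON instead of JDK-serialized bytes. This is a minimal illustration rather than code from the original recipe: the JsonRedisConfig class name is invented for the example, the no-argument JedisConnectionFactory simply points at a local Redis instance, and Contact is the domain class introduced in the next section.

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.connection.jedis.JedisConnectionFactory;
import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.data.redis.serializer.JacksonJsonRedisSerializer;
import org.springframework.data.redis.serializer.StringRedisSerializer;

@Configuration
public class JsonRedisConfig {

    @Bean
    public JedisConnectionFactory redisConnectionFactory() {
        // Connects to a Redis instance running on localhost:6379 by default.
        return new JedisConnectionFactory();
    }

    @Bean
    public RedisTemplate<String, Contact> contactTemplate() {
        RedisTemplate<String, Contact> template = new RedisTemplate<String, Contact>();
        template.setConnectionFactory(redisConnectionFactory());
        // Keys stay human readable in redis-cli; values are written as JSON
        // instead of the default JDK serialization format.
        template.setKeySerializer(new StringRedisSerializer());
        template.setValueSerializer(new JacksonJsonRedisSerializer<Contact>(Contact.class));
        return template;
    }
}

Wiring the value serializer this way keeps the calling code unchanged; a call such as template.opsForValue().set(key, contact) would then transparently write JSON to Redis.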
Implementing a CRUD application

This section describes two different ways of implementing a CRUD application that is used to manage contact information. First, we will learn how we can implement a CRUD application by using the default serializers of the RedisTemplate class. Second, we will learn how we can use value serializers and implement a CRUD application that stores our data in JSON format. Both of these applications will also share the same domain model. This domain model consists of two classes: Contact and Address. We removed the JPA-specific annotations from them. We use these classes in our web layer as form objects, and they no longer have any methods other than getters and setters. The domain model is not the only thing that is shared by these examples. They also share the interface that declares the service methods for the Contact class. The source code of the ContactService interface is given as follows:

public interface ContactService {
  public Contact add(Contact added);
  public Contact deleteById(Long id) throws NotFoundException;
  public List<Contact> findAll();
  public Contact findById(Long id) throws NotFoundException;
  public Contact update(Contact updated) throws NotFoundException;
}

Both of these applications will communicate with the Redis instance by using the Jedis connector. Regardless of the approach used, we can implement a CRUD application with Spring Data Redis by following these steps: Configure the application context. Implement the CRUD functions. Let's get started and find out how we can implement the CRUD functions for contact information.

Using default serializers

This subsection describes how we can implement a CRUD application by using the default serializers of the RedisTemplate class. This means that StringRedisSerializer is used to serialize string values, and JdkSerializationRedisSerializer serializes other objects.

Configuring the application context

We can configure the application context of our application by making the following changes to the ApplicationContext class: Configuring the Redis template bean. Configuring the Redis atomic long bean.

Configuring the Redis template bean

We can configure the Redis template bean by adding a redisTemplate() method to the ApplicationContext class and annotating this method with the @Bean annotation. We can implement this method by following these steps: Create a new RedisTemplate object. Set the used connection factory to the created RedisTemplate object. Return the created object. The source code of the redisTemplate() method is given as follows:

@Bean
public RedisTemplate redisTemplate() {
  RedisTemplate<String, String> redis = new RedisTemplate<String, String>();
  redis.setConnectionFactory(redisConnectionFactory());
  return redis;
}

Configuring the Redis atomic long bean

We start the configuration of the Redis atomic long bean by adding a method called redisAtomicLong() to the ApplicationContext class and annotating the method with the @Bean annotation. Our next task is to implement this method by following these steps: Create a new RedisAtomicLong object. Pass the name of the used Redis counter and the Redis connection factory as constructor parameters. Return the created object. The source code of the redisAtomicLong() method is given as follows:

@Bean
public RedisAtomicLong redisAtomicLong() {
  return new RedisAtomicLong("contact", redisConnectionFactory());
}

If we need to create IDs for instances of different classes, we can use the same Redis counter.
Thus, we have to configure only one Redis atomic long bean.
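Before moving on, here is a rough sketch of how the two beans configured above might be used together inside a service method. It is only an illustration of the pattern, not code from the original recipe: the RedisContactService class, the "contact:" key prefix, the "contacts" index set, and the getFirstName()/getLastName()/setId() accessors on Contact are assumptions made for the example.

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.data.redis.core.BoundHashOperations;
import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.data.redis.support.atomic.RedisAtomicLong;
import org.springframework.stereotype.Service;

@Service
public class RedisContactService {

    // Wired by type to the redisTemplate() and redisAtomicLong() beans
    // configured in the ApplicationContext class.
    @Autowired
    private RedisTemplate<String, String> redisTemplate;

    @Autowired
    private RedisAtomicLong contactIdCounter;

    public Contact add(Contact added) {
        // The atomic counter guarantees a unique id, even when several
        // clients are creating contacts at the same time.
        long id = contactIdCounter.incrementAndGet();
        added.setId(id);

        // Store the contact as a Redis hash under a key such as "contact:1".
        String key = "contact:" + id;
        BoundHashOperations<String, String, String> hash = redisTemplate.boundHashOps(key);
        hash.put("firstName", added.getFirstName());
        hash.put("lastName", added.getLastName());

        // Maintain an index set so that findAll() can later locate every contact.
        redisTemplate.boundSetOps("contacts").add(key);

        return added;
    }
}

Note how the bound hash operations let us give the key once and then perform several operations against it, which is exactly the use case described earlier for the boundHashOps() method.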

Behavior-driven Development with Selenium WebDriver

Packt
31 Jan 2013
15 min read
Behavior-driven Development (BDD) is an agile software development practice that enhances the paradigm of Test Driven Development (TDD) and acceptance tests, and encourages the collaboration between developers, quality assurance, domain experts, and stakeholders. Behavior-driven Development was introduced by Dan North in the year 2003 in his seminal article available at http://dannorth.net/introducing-bdd/. In this article by Unmesh Gundecha, author of Selenium Testing Tools Cookbook, we will cover: Using Cucumber-JVM and Selenium WebDriver in Java for BDD Using SpecFlow.NET and Selenium WebDriver in .NET for BDD Using JBehave and Selenium WebDriver in Java Using Capybara, Cucumber, and Selenium WebDriver in Ruby (For more resources related to this topic, see here.) Using Cucumber-JVM and Selenium WebDriver in Java for BDD BDD/ATDD is becoming widely accepted practice in agile software development, and Cucumber-JVM is a mainstream tool used to implement this practice in Java. Cucumber-JVM is based on Cucumber framework, widely used in Ruby on Rails world. Cucumber-JVM allows developers, QA, and non-technical or business participants to write features and scenarios in a plain text file using Gherkin language with minimal restrictions about grammar in a typical Given, When, and Then structure. This feature file is then supported by a step definition file, which implements automated steps to execute the scenarios written in a feature file. Apart from testing APIs with Cucumber-JVM, we can also test UI level tests by combining Selenium WebDriver. In this recipe, we will use Cucumber-JVM, Maven, and Selenium WebDriver for implementing tests for the fund transfer feature from an online banking application. Getting ready Create a new Maven project named FundTransfer in Eclipse. Add the following dependencies to POM.XML: <project xsi_schemaLocation="http://maven.apache.org/POM/4.0.0 http:// maven.apache.org/xsd/maven-4.0.0.xsd"> <modelVersion>4.0.0</modelVersion> <groupId>FundTransfer</groupId> <artifactId>FundTransfer</artifactId> <version>0.0.1-SNAPSHOT</version> <dependencies> <dependency> <groupId>info.cukes</groupId> <artifactId>cucumber-java</artifactId> <version>1.0.14</version> <scope>test</scope> </dependency> <dependency> <groupId>info.cukes</groupId> <artifactId>cucumber-junit</artifactId> <version>1.0.14</version> <scope>test</scope> </dependency> <dependency> <groupId>junit</groupId> <artifactId>junit</artifactId> <version>4.10</version> <scope>test</scope> </dependency> <dependency> <groupId>org.seleniumhq.selenium</groupId> <artifactId>selenium-java</artifactId> <version>2.25.0</version> </dependency> </dependencies> </project> How to do it... Perform the following steps for creating BDD/ATDD tests with Cucumber-JVM: Select the FundTransfer project in Package Explorer in Eclipse. Select and right-click on src/test/resources in Package Explorer. Select New | Package from the menu to add a new package as shown in the following screenshot: Enter fundtransfer.test in the Name: textbox and click on the Finish button. Add a new file to this package. 
Name this file fundtransfer.feature as shown in the following screenshot: Add the Fund Transfer feature and scenarios to this file:

Feature: Customer Transfer's Fund
As a customer, I want to transfer funds so that I can send money to my friends and family

Scenario: Valid Payee
Given the user is on Fund Transfer Page
When he enters "Jim" as payee name
And he enters "100" as amount
And he Submits request for Fund Transfer
Then ensure the fund transfer is complete with "$100 transferred successfully to Jim!!" message

Scenario: Invalid Payee
Given the user is on Fund Transfer Page
When he enters "Jack" as payee name
And he enters "100" as amount
And he Submits request for Fund Transfer
Then ensure a transaction failure message "Transfer failed!! 'Jack' is not registered in your List of Payees" is displayed

Scenario: Account is overdrawn past the overdraft limit
Given the user is on Fund Transfer Page
When he enters "Tim" as payee name
And he enters "1000000" as amount
And he Submits request for Fund Transfer
Then ensure a transaction failure message "Transfer failed!! account cannot be overdrawn" is displayed

Select and right-click on src/test/java in Package Explorer. Select New | Package from the menu to add a new package as shown in the following screenshot: Create a class named FundTransferStepDefs in the newly-created package. Add the following code to this class:

package fundtransfer.test;

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.By;
import cucumber.annotation.*;
import cucumber.annotation.en.*;
import static org.junit.Assert.assertEquals;

public class FundTransferStepDefs {
    protected WebDriver driver;

    @Before
    public void setUp() {
        driver = new ChromeDriver();
    }

    @Given("the user is on Fund Transfer Page")
    public void The_user_is_on_fund_transfer_page() {
        driver.get("http://dl.dropbox.com/u/55228056/fundTransfer.html");
    }

    @When("he enters \"([^\"]*)\" as payee name")
    public void He_enters_payee_name(String payeeName) {
        driver.findElement(By.id("payee")).sendKeys(payeeName);
    }

    @And("he enters \"([^\"]*)\" as amount")
    public void He_enters_amount(String amount) {
        driver.findElement(By.id("amount")).sendKeys(amount);
    }

    @And("he Submits request for Fund Transfer")
    public void He_submits_request_for_fund_transfer() {
        driver.findElement(By.id("transfer")).click();
    }

    @Then("ensure the fund transfer is complete with \"([^\"]*)\" message")
    public void Ensure_the_fund_transfer_is_complete(String msg) {
        WebElement message = driver.findElement(By.id("message"));
        assertEquals(message.getText(), msg);
    }

    @Then("ensure a transaction failure message \"([^\"]*)\" is displayed")
    public void Ensure_a_transaction_failure_message(String msg) {
        WebElement message = driver.findElement(By.id("message"));
        assertEquals(message.getText(), msg);
    }

    @After
    public void tearDown() {
        driver.close();
    }
}

Create a support class RunCukesTest which will define the Cucumber-JVM configurations:

package fundtransfer.test;

import cucumber.junit.Cucumber;
import org.junit.runner.RunWith;

@RunWith(Cucumber.class)
@Cucumber.Options(format = {"pretty", "html:target/cucumber-htmlreport", "json-pretty:target/cucumber-report.json"})
public class RunCukesTest {
}

To run the tests in the Maven life cycle, select the FundTransfer project in Package Explorer. Right-click on the project name and select Run As | Maven test. Maven will execute all the tests from the project.
At the end of the test, an HTML report will be generated as shown in the following screenshot. To view this report, open index.html in the target/cucumber-htmlreport folder:

How it works...

Creating tests in Cucumber-JVM involves three major steps: writing a feature file, implementing automated steps using the step definition file, and creating support code as needed. For writing features, Cucumber-JVM uses 100 percent Gherkin syntax. The feature file describes the feature and then the scenarios to test the feature:

Feature: Customer Transfer's Fund
As a customer, I want to transfer funds so that I can send money to my friends and family

You can write as many scenarios as needed to test the feature in the feature file. The scenario section contains the name and steps to execute the defined scenario, along with the test data required to execute that scenario with the application:

Scenario: Valid Payee
Given the user is on Fund Transfer Page
When he enters "Jim" as payee name
And he enters "100" as amount
And he Submits request for Fund Transfer
Then ensure the fund transfer is complete with "$100 transferred successfully to Jim!!" message

Team members use these feature files and scenarios to build and validate the system. Frameworks like Cucumber or JBehave provide the ability to automatically validate the features by allowing us to implement automated steps. For this, we need to create the step definition file that maps the steps from the feature file to automation code. Step definition files implement a method for each step using special annotations. For example, in the following code, the @When annotation is used to map the step "When he enters "Jim" as payee name" from the feature file to the step definition file. When this step is to be executed by the framework, the He_enters_payee_name() method will be called, passing the data extracted from the step using regular expressions:

@When("he enters \"([^\"]*)\" as payee name")
public void He_enters_payee_name(String payeeName) {
    driver.findElement(By.id("payee")).sendKeys(payeeName);
}

In this method, the WebDriver code is written to locate the payee name textbox and enter the name value using the sendKeys() method. The step definition file acts like a template for all the steps from the feature file, while scenarios can mix and match the steps based on the test conditions. A helper class RunCukesTest is defined to provide Cucumber-JVM configurations such as how to run the features and steps with JUnit, the report format, and its location, shown as follows:

@RunWith(Cucumber.class)
@Cucumber.Options(format = {"pretty", "html:target/cucumber-htmlreport", "json-pretty:target/cucumber-report.json"})
public class RunCukesTest {
}

There's more… In this example, step definition methods are calling Selenium WebDriver methods directly.
However, a layer of abstraction can be created using the Page object where a separate class is defined with the definition of all the elements from FundTransferPage: import org.openqa.selenium.WebDriver; import org.openqa.selenium.WebElement; import org.openqa.selenium.support.CacheLookup; import org.openqa.selenium.support.FindBy; import org.openqa.selenium.support.PageFactory; public class FundTransferPage { @FindBy(id = "payee") @CacheLookup Chapter 11 279 public WebElement payeeField; @FindBy(id = "amount") public WebElement amountField; @FindBy(id = "transfer") public WebElement transferButton; @FindBy(id = "message") public WebElement messageLabel; public FundTransferPage(WebDriver driver) { if(!"Online Fund Transfers".equals(driver.getTitle())) throw new IllegalStateException("This is not Fund Transfer Page"); PageFactory.initElements(driver, this); } } Using SpecFlow.NET and Selenium WebDriver in .NET for BDD We saw how to use Selenium WebDriver with Cucumber-JVM for BDD/ATDD. Now let's try using a similar combination in .NET using SpecFlow.NET. We can implement BDD in .NET using the SpecFlow.NET and Selenium WebDriver .NET bindings. SpecFlow.NET is inspired by Cucumber and uses the same Gherkin language for writing specs. In this recipe, we will implement tests for the Fund Transfer feature using SpecFlow.NET. We will also use the Page objects for FundTransferPage in this recipe. Getting ready This recipe is created with SpecFlow.NET Version 1.9.0 and Microsoft Visual Studio Professional 2012. Download and install SpecFlow from Visual Studio Gallery http://visualstudiogallery.msdn.microsoft.com/9915524d-7fb0-43c3-bb3c-a8a14fbd40ee. Download and install NUnit Test Adapter from http://visualstudiogallery.msdn.microsoft.com/9915524d-7fb0-43c3-bb3c-a8a14fbd40ee. This will install the project template and other support files for SpecFlow.NET in Visual Studio 2012. How to do it... You will find the Fund Transfer feature in any online banking application where users can transfer funds to a registered payee who could be a family member or a friend. Let's test this feature using SpecFlow.NET by performing the following steps: Launch Microsoft Visual Studio. In Visual Studio create a new project by going to File | New | Project. Select Visual C# Class Library Project. Name the project FundTransfer.specs as shown in the following screenshot: Next, add SpecFlow.NET, WebDriver, and NUnit using NuGet. Right-click on the FundTransfer.specs solution in Solution Explorer and select Manage NuGet Packages... as shown in the following screenshot: On the FundTransfer.specs - Manage NuGet Packages dialog box, select Online, and search for SpecFlow packages. The search will result with the following suggestions: Select SpecFlow.NUnit from the list and click on Install button. NuGet will download and install SpecFlow.NUnit and any other package dependencies to the solution. This will take a while. Next, search for the WebDriver package on the FundTransfer.specs - Manage NuGet Packages dialog box. Select Selenium WebDriver and Selenium WebDriver Support Classes from the list and click on the Install button. Close the FundTransfer.specs - Manage NuGet Packages dialog box. Creating a spec file The steps for creating a spec file are as follows: Right-click on the FundTransfer.specs solution in Solution Explorer. Select Add | New Item. On the Add New Item – FundTransfer.specs dialog box, select SpecFlow Feature File and enter FundTransfer.feature in the Name: textbox. 
Click Add button as shown in the following screenshot: In the Editor window, your will see the FundTransfer.feature tab. By default, SpecFlow will add a dummy feature in the feature file. Replace the content of this file with the following feature and scenarios: Feature: Customer Transfer's Fund As a customer, I want to transfer funds so that I can send money to my friends and family Scenario: Valid Payee Given the user is on Fund Transfer Page When he enters "Jim" as payee name And he enters "100" as amount And he Submits request for Fund Transfer Then ensure the fund transfer is complete with "$100 transferred successfully to Jim!!" message Scenario: Invalid Payee Given the user is on Fund Transfer Page When he enters "Jack" as payee name And he enters "100" as amount And he Submits request for Fund Transfer Then ensure a transaction failure message "Transfer failed!! 'Jack' is not registered in your List of Payees" is displayed Scenario: Account is overdrawn past the overdraft limit Given the user is on Fund Transfer Page When he enters "Tim" as payee name And he enters "1000000" as amount And he Submits request for Fund Transfer Then ensure a transaction failure message "Transfer failed!! account cannot be overdrawn" is displayed Creating a step definition file The steps for creating a step definition file are as follows: To add a step definition file, right-click on the FundTransfer.sepcs solution in Solution Explorer. Select Add | New Item. On the Add New Item - FundTransfer.specs dialog box, select SpecFlow Step Definition File and enter FundTransferStepDefs.cs in the Name: textbox. Click on Add button. A new C# class will be added with dummy steps. Replace the content of this file with the following code: using System; using System.Collections.Generic; using System.Linq; using System.Text; using TechTalk.SpecFlow; using NUnit.Framework; using OpenQA.Selenium; namespace FundTransfer.specs { [Binding] public class FundTransferStepDefs { FundsTransferPage _ftPage = new FundsTransferPage(Environment.Driver); [Given(@"the user is on Fund Transfer Page")] public void GivenUserIsOnFundTransferPage() { Environment.Driver.Navigate().GoToUrl("http:// localhost:64895/Default.aspx"); } [When(@"he enters ""(.*)"" as payee name")] public void WhenUserEneteredIntoThePayeeNameField(string payeeName) { _ftPage.payeeNameField.SendKeys(payeeName); } [When(@"he enters ""(.*)"" as amount")] public void WhenUserEneteredIntoTheAmountField(string amount) { _ftPage.amountField.SendKeys(amount); } [When(@"he enters ""(.*)"" as amount above his limit")] public void WhenUserEneteredIntoTheAmountFieldAboveLimit (string amount) { _ftPage.amountField.SendKeys(amount); } [When(@"he Submits request for Fund Transfer")] public void WhenUserPressTransferButton() { _ftPage.transferButton.Click(); } [Then(@"ensure the fund transfer is complete with ""(.*)"" message")] public void ThenFundTransferIsComplete(string message) { Assert.AreEqual(message, _ftPage.messageLabel.Text); } [Then(@"ensure a transaction failure message ""(.*)"" is displayed")] public void ThenFundTransferIsFailed(string message) { Assert.AreEqual(message, _ftPage.messageLabel.Text); } } } Defining a Page object and a helper class The steps for defining a Page object and a helper class are as follows: Define a Page object for the Fund Transfer Page by adding a new C# class file. Name this class FundTransferPage. 
Copy the following code to this class:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using OpenQA.Selenium;
using OpenQA.Selenium.Support.PageObjects;

namespace FundTransfer.specs
{
    class FundTransferPage
    {
        public FundTransferPage(IWebDriver driver)
        {
            PageFactory.InitElements(driver, this);
        }

        [FindsBy(How = How.Id, Using = "payee")]
        public IWebElement payeeNameField { get; set; }

        [FindsBy(How = How.Id, Using = "amount")]
        public IWebElement amountField { get; set; }

        [FindsBy(How = How.Id, Using = "transfer")]
        public IWebElement transferButton { get; set; }

        [FindsBy(How = How.Id, Using = "message")]
        public IWebElement messageLabel { get; set; }
    }
}

We need a helper class that will provide an instance of WebDriver and perform clean up activity at the end. Name this class Environment and copy the following code to this class:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;
using TechTalk.SpecFlow;

namespace FundTransfer.specs
{
    [Binding]
    public class Environment
    {
        private static ChromeDriver driver;

        public static IWebDriver Driver
        {
            get { return driver ?? (driver = new ChromeDriver(@"C:\ChromeDriver")); }
        }

        [AfterTestRun]
        public static void AfterTestRun()
        {
            Driver.Close();
            Driver.Quit();
            driver = null;
        }
    }
}

Build the solution.

Running tests

The steps for running tests are as follows:

Open the Test Explorer window by clicking the Test Explorer option under Test | Windows on the Main Menu. It will display the three scenarios listed in the feature file as shown in the following screenshot:
Click on Run All to test the feature as shown in the following screenshot:

How it works...

SpecFlow.NET first needs the feature files for the features we will be testing. SpecFlow.NET supports the Gherkin language for writing features.

In the step definition file, we create a method for each step written in a feature file using the Given, When, and Then attributes. These methods can also take the parameter values specified in the steps using the arguments. Following is an example where we are entering the name of the payee:

[When(@"he enters ""(.*)"" as payee name")]
public void WhenUserEnteredIntoThePayeeNameField(string payeeName)
{
    _ftPage.payeeNameField.SendKeys(payeeName);
}

In this example, we are automating the "When he enters "Jim" as payee name" step. We used the When attribute and created a method: WhenUserEnteredIntoThePayeeNameField. This method needs the value of the payee name embedded in the step, which SpecFlow.NET extracts using the regular expression. Inside the method, we are using an instance of the FundTransferPage class and calling its payeeNameField member's SendKeys() method, passing the name of the payee extracted from the step. Using the Page object helps in abstracting locator and page details from the step definition files, making them more manageable and easy to maintain.

SpecFlow.NET automatically generates the NUnit test code when the project is built. Using the Visual Studio Test Explorer and the NUnit Test Adapter for Visual Studio, these tests are executed and the features are validated.

Learning the QGIS Python API

Packt
23 Dec 2014
44 min read
In this article, we will take a closer look at the Python libraries available for the QGIS Python developer, and also look at the various ways in which we can use these libraries to perform useful tasks within QGIS. In particular, you will learn:

How the QGIS Python libraries are based on the underlying C++ APIs
How to use the C++ API documentation as a reference to work with the Python APIs
How the PyQGIS libraries are organized
The most important concepts and classes within the PyQGIS libraries and how to use them
Some practical examples of performing useful tasks using PyQGIS

About the QGIS Python APIs

The QGIS system itself is written in C++ and has its own set of APIs that are also written in C++. The Python APIs are implemented as wrappers around these C++ APIs. For example, there is a Python class named QgisInterface that acts as a wrapper around a C++ class of the same name. All the methods, class variables, and the like, which are implemented by the C++ version of QgisInterface, are made available through the Python wrapper.

What this means is that when you access the Python QGIS APIs, you aren't accessing the API directly. Instead, the wrapper connects your code to the underlying C++ objects and methods, as follows:

Fortunately, in most cases, the QGIS Python wrappers simply hide away the complexity of the underlying C++ code, so the PyQGIS libraries work as you would expect them to. There are some gotchas, however, and we will cover these as they come up.

Deciphering the C++ documentation

As QGIS is implemented in C++, the documentation for the QGIS APIs is all based on C++. This can make it difficult for Python developers to understand and work with the QGIS APIs. For example, here is the API documentation for the QgisInterface.zoomToActiveLayer() method:

If you're not familiar with C++, this can be quite confusing. Fortunately, as a Python programmer, you can skip over much of this complexity because it doesn't apply to you. In particular:

The virtual keyword is an implementation detail you don't need to worry about
The word void indicates that the method doesn't return a value
The double colons in QgisInterface::zoomToActiveLayer are simply a C++ convention for separating the class name from the method name

Just like in Python, the parentheses show that the method doesn't take any parameters. So if you have an instance of QgisInterface (for example, as the standard iface variable available in the Python Console), you can call this method simply by typing the following:

iface.zoomToActiveLayer()

Now, let's take a look at a slightly more complex example: the C++ documentation for the QgisInterface.addVectorLayer() method looks like the following:

Notice how the virtual keyword is followed by QgsVectorLayer* instead of void. This is the return value for this method; it returns a QgsVectorLayer object. Technically speaking, * means that the method returns a pointer to an object of type QgsVectorLayer. Fortunately, Python wrappers automatically handle pointers, so you don't need to worry about this.

Notice the brief description at the bottom of the documentation for this method; while many of the C++ methods have very little, if any, additional information, other methods have quite extensive descriptions. Obviously, you should read these descriptions carefully as they tell you more about what the method does. Even without any description, the C++ documentation is still useful as it tells you what the method is called, what parameters it accepts, and what type of data is being returned.
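To make this concrete, here is how that method might be called from the QGIS Python Console. This is only a quick sketch; the shapefile path and layer name are placeholders, not values from the original text:

layer = iface.addVectorLayer("/path/to/my_data.shp", "My Layer", "ogr")
if layer is None or not layer.isValid():
  # addVectorLayer() returns None (or an invalid layer) if the data
  # source could not be opened.
  print "Layer failed to load!"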
In the preceding method, you can see that there are three parameters listed in between the parentheses. As C++ is a strongly typed language, you have to define the type of each parameter when you define a function. This is helpful for Python programmers as it tells you what type of value to supply. Apart from QGIS objects, you might also encounter the following data types in the C++ documentation:

int: A standard Python integer value
long: A standard Python long integer value
float: A standard Python floating point (real) number
bool: A Boolean value (true or false)
QString: A string value. Note that the QGIS Python wrappers automatically convert Python strings to C++ strings, so you don't need to deal with QString objects directly
QList: This object is used to encapsulate a list of other objects. For example, QList<QString*> represents a list of strings

Just as in Python, a method can take default values for each parameter. For example, the QgisInterface.newProject() method looks like the following:

In this case, the thePromptToSaveFlag parameter has a default value, and this default value will be used if no value is supplied.

In Python, classes are initialized using the __init__ method. In C++, this is called a constructor. For example, the constructor for the QgsLabel class looks like the following:

Just as in Python, C++ classes inherit the methods defined in their superclass. Fortunately, QGIS doesn't have an extensive class hierarchy, so most of the classes don't have a superclass. However, don't forget to check for a superclass if you can't find the method you're looking for in the documentation for the class itself.

Finally, be aware that C++ supports the concept of method overloading. A single method can be defined more than once, where each version accepts a different set of parameters. For example, take a look at the constructor for the QgsRectangle class: you will see that there are four different versions of this method.

The first version accepts the four coordinates as floating point numbers:

The second version constructs a rectangle using two QgsPoint objects:

The third version copies the coordinates from QRectF (which is a Qt data type) into a QgsRectangle object:

The final version copies the coordinates from another QgsRectangle object:

The C++ compiler chooses the correct method to use based on the parameters that have been supplied. Python has no concept of method overloading; just choose the version of the method that accepts the parameters you want to supply, and the QGIS Python wrappers will automatically choose the correct method for you.

If you keep these guidelines in mind, deciphering the C++ documentation for QGIS isn't all that hard. It just looks more complicated than it really is, thanks to all the complexity specific to C++. However, it won't take long for your brain to start filtering out C++ and use the QGIS reference documentation almost as easily as if it was written for Python rather than C++.

Organization of the QGIS Python libraries

Now that we can understand the C++-oriented documentation, let's see how the PyQGIS libraries are structured. All of the PyQGIS libraries are organized under a package named qgis. You wouldn't normally import qgis directly, however, as all the interesting libraries are subpackages within this main package; here are the five packages that make up the PyQGIS library:

qgis.core: This provides access to the core GIS functionality used throughout QGIS.
qgis.gui: This defines a range of GUI widgets that you can include in your own programs.
qgis.analysis: This provides spatial analysis tools to analyze vector and raster format data.
qgis.networkanalysis: This provides tools to build and analyze topologies.
qgis.utils: This implements miscellaneous functions that allow you to work with the QGIS application using Python.

The first two packages (qgis.core and qgis.gui) implement the most important parts of the PyQGIS library, and it's worth spending some time to become more familiar with the concepts and classes they define. Now let's take a closer look at these two packages.

The qgis.core package

The qgis.core package defines fundamental classes used throughout the QGIS system. A large part of this package is dedicated to working with vector and raster format geospatial data, and displaying these types of data within a map. Let's take a look at how this is done.

Maps and map layers

A map consists of multiple layers drawn one on top of the other:

There are three types of map layers supported by QGIS:

Vector layer: This layer draws geospatial features such as points, lines, and polygons
Raster layer: This layer draws raster (bitmapped) data onto a map
Plugin layer: This layer allows a plugin to draw directly onto a map

Each of these types of map layers has a corresponding class within the qgis.core library. For example, a vector map layer will be represented by an object of type qgis.core.QgsVectorLayer.

We will take a closer look at vector and raster map layers shortly. Before we do this, though, we need to learn how geospatial data (both vector and raster data) is positioned on a map.

Coordinate reference systems

Since the Earth is a three-dimensional object, while a map can only represent the Earth's surface as a two-dimensional plane, there has to be a way of translating points on the Earth's surface into (x,y) coordinates within a map. This is done using a Coordinate Reference System (CRS):

Globe image courtesy Wikimedia (http://commons.wikimedia.org/wiki/File:Rotating_globe.gif)

A CRS has two parts: an ellipsoid, which is a mathematical model of the Earth's surface, and a projection, which is a formula that converts points on the surface of the ellipsoid into (x,y) coordinates on a map.

Generally you won't need to worry about all these details. You can simply select the appropriate CRS that matches the CRS of the data you are using. However, as there are many different coordinate reference systems that have been devised over the years, it is vital that you use the correct CRS when plotting your geospatial data. If you don't do this, your features will be displayed in the wrong place or have the wrong shape.

The majority of geospatial data available today uses the EPSG 4326 coordinate reference system (sometimes also referred to as WGS84). This CRS defines coordinates as latitude and longitude values. This is the default CRS used for new data imported into QGIS. However, if your data uses a different coordinate reference system, you will need to create and use a different CRS for your map layer.

The qgis.core.QgsCoordinateReferenceSystem class represents a CRS. Once you create your coordinate reference system, you can tell your map layer to use that CRS when accessing the underlying data. For example:

crs = QgsCoordinateReferenceSystem(4326,
          QgsCoordinateReferenceSystem.EpsgCrsId)
layer.setCrs(crs)

Note that different map layers can use different coordinate reference systems.
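If you ever need to convert individual coordinates between two reference systems yourself, the QgsCoordinateTransform class does the work. Here is a small sketch, assuming it is run from the QGIS Python Console (where the qgis.core classes are already imported); the EPSG codes and coordinates are just example values:

# Reproject a single point from WGS84 (EPSG 4326) into
# Web Mercator (EPSG 3857).
crs_src = QgsCoordinateReferenceSystem(4326,
              QgsCoordinateReferenceSystem.EpsgCrsId)
crs_dst = QgsCoordinateReferenceSystem(3857,
              QgsCoordinateReferenceSystem.EpsgCrsId)
transform = QgsCoordinateTransform(crs_src, crs_dst)
point = transform.transform(QgsPoint(151.2, -33.9))
print point.x(), point.y()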
Each layer will use its CRS when drawing the contents of the layer onto the map.

Vector layers

A vector layer draws geospatial data onto a map in the form of points, lines, polygons, and so on. Vector-format geospatial data is typically loaded from a vector data source such as a shapefile or database. Other vector data sources can hold vector data in memory, or load data from a web service across the internet.

A vector-format data source has a number of features, where each feature represents a single record within the data source. The qgis.core.QgsFeature class represents a feature within a data source. Each feature has the following components:

ID: This is the feature's unique identifier within the data source.
Geometry: This is the underlying point, line, polygon, and so on, which represents the feature on the map. For example, a data source representing cities would have one feature for each city, and the geometry would typically be either a point that represents the center of the city, or a polygon (or a multipolygon) that represents the city's outline.
Attributes: These are key/value pairs that provide additional information about the feature. For example, a city data source might have attributes such as total_area, population, elevation, and so on. Attribute values can be strings, integers, or floating point numbers.

In QGIS, a data provider allows the vector layer to access the features within the data source. The data provider, an instance of qgis.core.QgsVectorDataProvider, includes:

A geometry type that is stored in the data source
A list of fields that provide information about the attributes stored for each feature
The ability to search through the features within the data source, using the getFeatures() method and the QgsFeatureRequest class

You can access the various vector (and also raster) data providers by using the qgis.core.QgsProviderRegistry class.

The vector layer itself is represented by a qgis.core.QgsVectorLayer object. Each vector layer includes:

Data provider: This is the connection to the underlying file or database that holds the geospatial information to be displayed
Coordinate reference system: This indicates which CRS the geospatial data uses
Renderer: This chooses how the features are to be displayed

Let's take a closer look at the concept of a renderer and how features are displayed within a vector map layer.

Displaying vector data

The features within a vector map layer are displayed using a combination of renderer and symbol objects. The renderer chooses the symbol to use for a given feature, and the symbol does the actual drawing.

There are three basic types of symbols defined by QGIS:

Marker symbol: This displays a point as a filled circle
Line symbol: This draws a line using a given line width and color
Fill symbol: This draws the interior of a polygon with a given color

These three types of symbols are implemented as subclasses of the qgis.core.QgsSymbolV2 class:

qgis.core.QgsMarkerSymbolV2
qgis.core.QgsLineSymbolV2
qgis.core.QgsFillSymbolV2

Internally, symbols are rather complex, as "symbol layers" allow multiple elements to be drawn on top of each other. In most cases, however, you can make use of the "simple" version of the symbol. This makes it easier to create a new symbol without having to deal with the internal complexity of symbol layers.
For example:

symbol = QgsMarkerSymbolV2.createSimple({'width' : 1.0, 'color' : "255,0,0"})

While symbols draw the features onto the map, a renderer is used to choose which symbol to use to draw a particular feature. In the simplest case, the same symbol is used for every feature within a layer. This is called a single symbol renderer, and is represented by the qgis.core.QgsSingleSymbolRendererV2 class. Other possibilities include:

Categorized symbol renderer (qgis.core.QgsCategorizedSymbolRendererV2): This renderer chooses a symbol based on the value of an attribute. The categorized symbol renderer has a mapping from attribute values to symbols.
Graduated symbol renderer (qgis.core.QgsGraduatedSymbolRendererV2): This type of renderer has a range of attribute values, and maps each range to an appropriate symbol.

Using a single symbol renderer is very straightforward:

symbol = ...
renderer = QgsSingleSymbolRendererV2(symbol)
layer.setRendererV2(renderer)

To use a categorized symbol renderer, you first define a list of qgis.core.QgsRendererCategoryV2 objects, and then use that to create the renderer. For example:

symbol_male = ...
symbol_female = ...

categories = []
categories.append(QgsRendererCategoryV2("M", symbol_male, "Male"))
categories.append(QgsRendererCategoryV2("F", symbol_female, "Female"))
renderer = QgsCategorizedSymbolRendererV2("", categories)
renderer.setClassAttribute("GENDER")
layer.setRendererV2(renderer)

Notice that the QgsRendererCategoryV2 constructor takes three parameters: the desired value, the symbol used, and a label used to describe that category.

Finally, to use a graduated symbol renderer, you define a list of qgis.core.QgsRendererRangeV2 objects and then use that to create your renderer. For example:

symbol1 = ...
symbol2 = ...

ranges = []
ranges.append(QgsRendererRangeV2(0, 10, symbol1, "Range 1"))
ranges.append(QgsRendererRangeV2(11, 20, symbol2, "Range 2"))

renderer = QgsGraduatedSymbolRendererV2("", ranges)
renderer.setClassAttribute("FIELD")
layer.setRendererV2(renderer)

Accessing vector data

In addition to displaying the contents of a vector layer within a map, you can use Python to directly access the underlying data. This can be done using the data provider's getFeatures() method. For example, to iterate over all the features within the layer, you can do the following:

provider = layer.dataProvider()
for feature in provider.getFeatures(QgsFeatureRequest()):
  ...

If you want to search for features based on some criteria, you can use the QgsFeatureRequest object's setFilterExpression() method, as follows:

provider = layer.dataProvider()
request = QgsFeatureRequest()
request.setFilterExpression('"GENDER" = "M"')
for feature in provider.getFeatures(request):
  ...

Once you have the features, it's easy to get access to the feature's geometry, ID, and attributes. For example:

geometry = feature.geometry()
id = feature.id()
name = feature.attribute("NAME")

The object returned by the feature.geometry() call, which will be an instance of qgis.core.QgsGeometry, represents the feature's geometry. This object has a number of methods you can use to extract the underlying data and perform various geospatial calculations.

Spatial indexes

In the previous section, we searched for features based on their attribute values. There are times, though, when you might want to find features based on their position in space. For example, you might want to find all features that lie within a certain distance of a given point.
To do this, you can use a spatial index, which indexes features according to their location and extent. Spatial indexes are represented in QGIS by the QgsSpatialIndex class. For performance reasons, a spatial index is not created automatically for each vector layer. However, it's easy to create one when you need it:

provider = layer.dataProvider()
index = QgsSpatialIndex()
for feature in provider.getFeatures(QgsFeatureRequest()):
  index.insertFeature(feature)

Don't forget that you can use the QgsFeatureRequest.setFilterExpression() method to limit the set of features that get added to the index.

Once you have the spatial index, you can use it to perform queries based on the position of the features. In particular:

You can find one or more features that are closest to a given point using the nearestNeighbor() method. For example:

features = index.nearestNeighbor(QgsPoint(long, lat), 5)

Note that this method takes two parameters: the desired point as a QgsPoint object and the number of features to return.

You can find all features that intersect with a given rectangular area by using the intersects() method, as follows:

features = index.intersects(QgsRectangle(left, bottom, right, top))

Raster layers

Raster-format geospatial data is essentially a bitmapped image, where each pixel or "cell" in the image corresponds to a particular part of the Earth's surface. Raster data is often organized into bands, where each band represents a different piece of information. A common use for bands is to store the red, green, and blue components of the pixel's color in separate bands. Bands might also represent other types of information, such as moisture level, elevation, or soil type.

There are many ways in which raster information can be displayed. For example:

If the raster data only has one band, the pixel value can be used as an index into a palette. The palette maps each pixel value to a particular color.
If the raster data has only one band but no palette is provided, the pixel values can be used directly as grayscale values; that is, larger numbers are lighter and smaller numbers are darker. Alternatively, the pixel values can be passed through a pseudocolor algorithm to calculate the color to be displayed.
If the raster data has multiple bands, then typically, the bands would be combined to generate the desired color. For example, one band might represent the red component of the color, another band might represent the green component, and yet another band might represent the blue component.
Alternatively, a multiband raster data source might be drawn using a palette, or as a grayscale or a pseudocolor image, by selecting a particular band to use for the color calculation.

Let's take a closer look at how raster data can be drawn onto the map.

Displaying raster data

The drawing style associated with the raster band controls how the raster data will be displayed. The following drawing styles are currently supported:

PalettedColor: For a single band raster data source, a palette maps each raster value to a color.
SingleBandGray: For a single band raster data source, the raster value is used directly as a grayscale value.
SingleBandPseudoColor: For a single band raster data source, the raster value is used to calculate a pseudocolor.
PalettedSingleBandGray: For a single band raster data source that has a palette, this drawing style tells QGIS to ignore the palette and use the raster value directly as a grayscale value.
PalettedSingleBandPseudoColor: For a single band raster data source that has a palette, this drawing style tells QGIS to ignore the palette and use the raster value to calculate a pseudocolor.
MultiBandColor: For multiband raster data sources, use a separate band for each of the red, green, and blue color components. For this drawing style, the setRedBand(), setGreenBand(), and setBlueBand() methods can be used to choose which band to use for each color component.
MultiBandSingleBandGray: For multiband raster data sources, choose a single band to use as the grayscale color value. For this drawing style, use the setGrayBand() method to specify the band to use.
MultiBandSingleBandPseudoColor: For multiband raster data sources, choose a single band to use to calculate a pseudocolor. For this drawing style, use the setGrayBand() method to specify the band to use.

To set the drawing style, use the layer.setDrawingStyle() method, passing in a string that contains the name of the desired drawing style. You will also need to call the various setXXXBand() methods, as described in the preceding table, to tell the raster layer which bands contain the value(s) to use to draw each pixel.

Note that QGIS doesn't automatically update the map when you call the preceding functions to change the way the raster data is displayed. To have your changes displayed right away, you'll need to do the following:

Turn off raster image caching. This can be done by calling layer.setImageCache(None).
Tell the raster layer to redraw itself, by calling layer.triggerRepaint().

Accessing raster data

As with vector-format data, you can access the underlying raster data via the data provider's identify() method. The easiest way to do this is to pass in a single coordinate and retrieve the value or values of that coordinate. For example:

provider = layer.dataProvider()
values = provider.identify(QgsPoint(x, y),
                           QgsRaster.IdentifyFormatValue)
if values.isValid():
  for band, value in values.results().items():
    ...

As you can see, you need to check whether the given coordinate exists within the raster data (using the isValid() call). The values.results() method returns a dictionary that maps band numbers to values. Using this technique, you can extract all the underlying data associated with a given coordinate within the raster layer.

You can also use the provider.block() method to retrieve the band data for a large number of coordinates all at once. We will look at how to do this later in this article.

Other useful qgis.core classes

Apart from all the classes and functionality involved in working with data sources and map layers, the qgis.core library also defines a number of other classes that you might find useful:

QgsProject: This represents the current QGIS project. Note that this is a singleton object, as only one project can be open at a time. The QgsProject class is responsible for loading and storing properties, which can be useful for plugins.
QGis: This class defines various constants, data types, and functions used throughout the QGIS system.
QgsPoint: This is a generic class that stores the coordinates for a point within a two-dimensional plane.
QgsRectangle: This is a generic class that stores the coordinates for a rectangular area within a two-dimensional plane.
QgsRasterInterface: This is the base class to use for processing raster data.
This can be used to reproject a set of raster data into a new coordinate system, to apply filters to change the brightness or color of your raster data, to resample the raster data, and to generate new raster data by rendering the existing data in various ways.
QgsDistanceArea: This class can be used to calculate distances and areas for a given geometry, automatically converting from the source coordinate reference system into meters.
QgsMapLayerRegistry: This class provides access to all the registered map layers in the current project.
QgsMessageLog: This class provides general logging features within a QGIS program. This lets you send debugging messages, warnings, and errors to the QGIS "Log Messages" panel.

The qgis.gui package

The qgis.gui package defines a number of user-interface widgets that you can include in your programs. Let's start by looking at the most important qgis.gui classes, and follow this up with a brief look at some of the other classes that you might find useful.

The QgisInterface class

QgisInterface represents the QGIS system's user interface. It allows programmatic access to the map canvas, the menu bar, and other parts of the QGIS application. When running Python code within a script or a plugin, or directly from the QGIS Python console, a reference to QgisInterface is typically available through the iface global variable.

The QgisInterface object is only available when running the QGIS application itself. If you are running an external application and import the PyQGIS library into your application, QgisInterface won't be available.

Some of the more important things you can do with the QgisInterface object are:

Get a reference to the list of layers within the current QGIS project via the legendInterface() method.
Get a reference to the map canvas displayed within the main application window, using the mapCanvas() method.
Retrieve the currently active layer within the project, using the activeLayer() method, and set the currently active layer by using the setActiveLayer() method.
Get a reference to the application's main window by calling the mainWindow() method. This can be useful if you want to create additional Qt windows or dialogs that use the main window as their parent.
Get a reference to the QGIS system's message bar by calling the messageBar() method. This allows you to display messages to the user directly within the QGIS main window.

The QgsMapCanvas class

The map canvas is responsible for drawing the various map layers into a window. The QgsMapCanvas class represents a map canvas. This class includes:

A list of the currently shown map layers. This can be accessed using the layers() method. Note that there is a subtle difference between the list of map layers available within the map canvas and the list of map layers included in the QgisInterface.legendInterface() method. The map canvas's list of layers only includes the list of layers currently visible, while QgisInterface.legendInterface() returns all the map layers, including those that are currently hidden.
The map units used by this map (meters, feet, degrees, and so on). The map's units can be retrieved by calling the mapUnits() method.
An extent, which is the area of the map that is currently shown within the canvas. The map's extent will change as the user zooms in and out, and pans across the map. The current map extent can be obtained by calling the extent() method.
A current map tool that controls the user's interaction with the contents of the map canvas.
The current map tool can be set using the setMapTool() method, and you can retrieve the current map tool (if any) by calling the mapTool() method.
A background color used to draw the background behind all the map layers. You can change the map's background color by calling the setCanvasColor() method.
A coordinate transform that converts from map coordinates (that is, coordinates in the data source's coordinate reference system) to pixels within the window. You can retrieve the current coordinate transform by calling the getCoordinateTransform() method.

The QgsMapCanvasItem class

A map canvas item is an item drawn on top of the map canvas. The map canvas item will appear in front of the map layers. While you can create your own subclass of QgsMapCanvasItem if you want to draw custom items on top of the map canvas, it would be more useful for you to make use of an existing subclass that will do the work for you. There are currently three subclasses of QgsMapCanvasItem that you might find useful:

QgsVertexMarker: This draws an icon (an "X", a "+", or a small box) centered around a given point on the map.
QgsRubberBand: This draws an arbitrary polygon or polyline onto the map. It is intended to provide visual feedback as the user draws a polygon onto the map.
QgsAnnotationItem: This is used to display additional information about a feature, in the form of a balloon that is connected to the feature. The QgsAnnotationItem class has various subclasses that allow you to customize the way the information is displayed.

The QgsMapTool class

A map tool allows the user to interact with and manipulate the map canvas, capturing mouse events and responding appropriately. A number of QgsMapTool subclasses provide standard map interaction behavior such as clicking to zoom in, dragging to pan the map, and clicking on a feature to identify it. You can also create your own custom map tools by subclassing QgsMapTool and implementing the various methods that respond to user-interface events such as pressing down the mouse button, dragging the canvas, and so on.

Once you have created a map tool, you can allow the user to activate it by associating the map tool with a toolbar button. Alternatively, you can activate it from within your Python code by calling the mapCanvas.setMapTool(...) method. We will look at the process of creating your own custom map tool in the Using the PyQGIS library section, later in this article.
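In the meantime, here is a bare-bones sketch of the subclass-and-activate pattern described above. This is only an illustrative example, assuming it is run from the QGIS Python Console (where the iface variable is available); it simply prints the map coordinates of each mouse click:

from qgis.gui import QgsMapTool

class ClickReporter(QgsMapTool):
  def __init__(self, canvas):
    QgsMapTool.__init__(self, canvas)
    self.canvas = canvas

  def canvasReleaseEvent(self, event):
    # Convert the clicked pixel position into map coordinates.
    transform = self.canvas.getCoordinateTransform()
    point = transform.toMapCoordinates(event.pos().x(), event.pos().y())
    print point.x(), point.y()

canvas = iface.mapCanvas()
canvas.setMapTool(ClickReporter(canvas))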
QgsBlendModeComboBox, QgsBrushStyleComboBox, QgsColorRampComboBox, QgsPenCapStyleComboBox, QgsPenJoinStyleComboBox, QgsScaleComboBox: These QComboBox user-interface widgets allow you to prompt the user for various drawing options. With the exception of the QgsScaleComboBox, which lets the user choose a map scale, all the other QComboBox subclasses let the user choose various Qt drawing options.

Using the PyQGIS library

In the previous section, we looked at a number of classes provided by the PyQGIS library. Let's make use of these classes to perform some real-world geospatial development tasks.

Analyzing raster data

We're going to start by writing a program to load in some raster-format data and analyze its contents. To make this more interesting, we'll use a Digital Elevation Model (DEM) file, which is a raster format data file that contains elevation data.

The Global Land One-Kilometer Base Elevation Project (GLOBE) provides free DEM data for the world, where each pixel represents one square kilometer of the Earth's surface. GLOBE data can be downloaded from http://www.ngdc.noaa.gov/mgg/topo/gltiles.html. Download the E tile, which includes the western half of the USA. The resulting file, which is named e10g, contains the height information you need. You'll also need to download the e10g.hdr header file so that QGIS can read the file; you can download this from http://www.ngdc.noaa.gov/mgg/topo/elev/esri/hdr. Once you've downloaded these two files, put them together into a convenient directory.

You can now load the DEM data into QGIS using the following code:

registry = QgsProviderRegistry.instance()
provider = registry.provider("gdal", "/path/to/e10g")

Unfortunately, there is a slight complexity here. Since QGIS doesn't know which coordinate reference system is used for the data, it displays a dialog box that asks you to choose the CRS. Since the GLOBE DEM data is in the WGS84 CRS, which QGIS uses by default, this dialog box is redundant. To disable it, you need to add the following to the top of your program:

from PyQt4.QtCore import QSettings
QSettings().setValue("/Projections/defaultBehaviour", "useGlobal")

Now that we've loaded our raster DEM data into QGIS, we can analyze it. There are lots of things we can do with DEM data, so let's calculate how often each unique elevation value occurs within the data. Notice that we're loading the DEM data directly using QgsRasterDataProvider. We don't want to display this information on a map, so we don't want (or need) to load it into a QgsRasterLayer.

Since the DEM data is in a raster format, you need to iterate over the individual pixels or cells to get each height value. The provider.xSize() and provider.ySize() methods tell us how many cells are in the DEM, while the provider.extent() method gives us the area of the Earth's surface covered by the DEM. Using this information, we can extract the individual elevation values from the contents of the DEM in the following way:

raster_extent = provider.extent()
raster_width = provider.xSize()
raster_height = provider.ySize()
block = provider.block(1, raster_extent, raster_width, raster_height)

The returned block variable is an object of type QgsRasterBlock, which is essentially a two-dimensional array of values. Let's iterate over the raster and extract the individual elevation values:

for x in range(raster_width):
  for y in range(raster_height):
    elevation = block.value(x, y)
    ...

Now that we've loaded the individual elevation values, it's easy to build a histogram out of those values.
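As a side note, the counting step can also be written a little more compactly with the standard library's collections.Counter class. This is only a sketch, and it assumes the block, raster_width, raster_height, and no_data_value variables from the full program shown next:

from collections import Counter

histogram = Counter()
for x in range(raster_width):
  for y in range(raster_height):
    elevation = block.value(x, y)
    if elevation != no_data_value:
      # Counter returns 0 for missing keys, so no try/except is needed.
      histogram[elevation] += 1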
Here is the entire program to load the DEM data into memory, and calculate and display the histogram:

from PyQt4.QtCore import QSettings
QSettings().setValue("/Projections/defaultBehaviour", "useGlobal")

registry = QgsProviderRegistry.instance()
provider = registry.provider("gdal", "/path/to/e10g")

raster_extent = provider.extent()
raster_width = provider.xSize()
raster_height = provider.ySize()
no_data_value = provider.srcNoDataValue(1)

histogram = {} # Maps elevation to number of occurrences.

block = provider.block(1, raster_extent, raster_width,
                       raster_height)
if block.isValid():
  for x in range(raster_width):
    for y in range(raster_height):
      elevation = block.value(x, y)
      if elevation != no_data_value:
        try:
          histogram[elevation] += 1
        except KeyError:
          histogram[elevation] = 1

for height in sorted(histogram.keys()):
  print height, histogram[height]

Note that we've added a no data value check to the code. Raster data often includes pixels that have no value associated with them. In the case of a DEM, elevation data is only provided for areas of land; pixels over the sea have no elevation, and we have to exclude them, or our histogram will be inaccurate.

Manipulating vector data and saving it to a shapefile

Let's create a program that takes two vector data sources, subtracts one set of vectors from the other, and saves the resulting geometries into a new shapefile. Along the way, we'll learn a few important things about the PyQGIS library.

We'll be making use of the QgsGeometry.difference() function. This function performs a geometrical subtraction of one geometry from another, similar to this:

Let's start by asking the user to select the first shapefile and open up a vector data provider for that file:

filename_1 = QFileDialog.getOpenFileName(iface.mainWindow(),
                                         "First Shapefile",
                                         "~", "*.shp")
if not filename_1:
  return

registry = QgsProviderRegistry.instance()
provider_1 = registry.provider("ogr", filename_1)

We can then read the geometries from that file into memory:

geometries_1 = []
for feature in provider_1.getFeatures(QgsFeatureRequest()):
  geometries_1.append(QgsGeometry(feature.geometry()))

This last line of code does something very important that may not be obvious at first. Notice that we use the following:

QgsGeometry(feature.geometry())

We use the preceding line instead of the following:

feature.geometry()

This creates a new instance of the QgsGeometry object, copying the geometry into a new object, rather than just adding the existing geometry object to the list. We have to do this because of a limitation of the way the QGIS Python wrappers work: the feature.geometry() method returns a reference to the geometry, but the C++ code doesn't know that you are storing this reference away in your Python code. So, when the feature is no longer needed, the memory used by the feature's geometry is also released. If you then try to access that geometry later on, the entire QGIS system will crash. To get around this, we make a copy of the geometry so that we can refer to it even after the feature's memory has been released.
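Since this copy-the-geometry idiom is easy to forget, one option (a small sketch, not part of the original program) is to wrap the loading step in a helper function and reuse it for both shapefiles:

def load_geometries(provider):
  # Copy each geometry so it stays valid after its feature is released.
  geometries = []
  for feature in provider.getFeatures(QgsFeatureRequest()):
    geometries.append(QgsGeometry(feature.geometry()))
  return geometries

geometries_1 = load_geometries(provider_1)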
Now that we've loaded our first set of geometries into memory, let's do the same for the second shapefile:

filename_2 = QFileDialog.getOpenFileName(iface.mainWindow(),
                                         "Second Shapefile",
                                         "~", "*.shp")
if not filename_2:
  return

provider_2 = registry.provider("ogr", filename_2)

geometries_2 = []
for feature in provider_2.getFeatures(QgsFeatureRequest()):
  geometries_2.append(QgsGeometry(feature.geometry()))

With the two sets of geometries loaded into memory, we're ready to start subtracting one from the other. However, to make this process more efficient, we will combine the geometries from the second shapefile into one large geometry, which we can then subtract all at once, rather than subtracting one at a time. This will make the subtraction process much faster:

combined_geometry = None
for geometry in geometries_2:
  if combined_geometry == None:
    combined_geometry = geometry
  else:
    combined_geometry = combined_geometry.combine(geometry)

We can now calculate the new set of geometries by subtracting one from the other:

dst_geometries = []
for geometry in geometries_1:
  dst_geometry = geometry.difference(combined_geometry)
  if not dst_geometry.isGeosValid(): continue
  if dst_geometry.isGeosEmpty(): continue
  dst_geometries.append(dst_geometry)

Notice that we check to ensure that the destination geometry is mathematically valid and isn't empty. Invalid geometries are a common problem when manipulating complex shapes. There are options for fixing them, such as splitting apart multi-geometries and performing a buffer operation.

Our last task is to save the resulting geometries into a new shapefile. We'll first ask the user for the name of the destination shapefile:

dst_filename = QFileDialog.getSaveFileName(iface.mainWindow(),
                                           "Save results to:",
                                           "~", "*.shp")
if not dst_filename:
  return

We'll make use of a vector file writer to save the geometries into a shapefile. Let's start by initializing the file writer object:

fields = QgsFields()
writer = QgsVectorFileWriter(dst_filename, "ASCII", fields,
                             dst_geometries[0].wkbType(),
                             None, "ESRI Shapefile")
if writer.hasError() != QgsVectorFileWriter.NoError:
  print "Error!"
  return

We don't have any attributes in our shapefile, so the fields list is empty. Now that the writer has been set up, we can save the geometries into the file:

for geometry in dst_geometries:
  feature = QgsFeature()
  feature.setGeometry(geometry)
  writer.addFeature(feature)

Now that all the data has been written to the disk, let's display a message box that informs the user that we've finished:

QMessageBox.information(iface.mainWindow(), "",
                        "Subtracted features saved to disk.")

As you can see, creating a new shapefile is very straightforward in PyQGIS, and it's easy to manipulate geometries using Python, just so long as you copy any QgsGeometry objects you want to keep around. If your Python code starts to crash while manipulating geometries, this is probably the first thing you should look for.

Using different symbols for different features within a map

Let's use the World Borders Dataset that you downloaded in the previous article to draw a world map, using different symbols for different continents. This is a good example of using a categorized symbol renderer, though we'll combine it into a script that loads the shapefile into a map layer, and sets up the symbols and map renderer to display the map exactly as you want it. We'll then save this map as an image.
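As for the save-this-map-as-an-image step, the map canvas can handle it directly once the layer has been styled and redrawn at the end of the script. A one-line sketch, assuming the QgsMapCanvas.saveAsImage() helper available in QGIS 2.x; the output path is just a placeholder:

# Render the current map canvas to a PNG file on disk.
iface.mapCanvas().saveAsImage("/path/to/continents.png")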
Let's start by creating a map layer to display the contents of the World Borders Dataset shapefile:

layer = iface.addVectorLayer("/path/to/TM_WORLD_BORDERS-0.3.shp",
                             "continents", "ogr")

Each unique region code in the World Borders Dataset shapefile corresponds to a continent. We want to define the name and color to use for each of these regions, and use this information to set up the various categories to use when displaying the map:

from PyQt4.QtGui import QColor

categories = []
for value,color,label in [(0,   "#660000", "Antarctica"),
                          (2,   "#006600", "Africa"),
                          (9,   "#000066", "Oceania"),
                          (19,  "#660066", "The Americas"),
                          (142, "#666600", "Asia"),
                          (150, "#006666", "Europe")]:
  symbol = QgsSymbolV2.defaultSymbol(layer.geometryType())
  symbol.setColor(QColor(color))
  categories.append(QgsRendererCategoryV2(value, symbol, label))

With these categories set up, we simply update the map layer to use a categorized renderer based on the value of the region attribute, and then redraw the map:

layer.setRendererV2(QgsCategorizedSymbolRendererV2("region",
                                                   categories))
layer.triggerRepaint()

There's only one more thing to do; since this is a script that can be run multiple times, let's have our script automatically remove the existing continents layer, if it exists, before adding a new one. To do this, we can add the following to the start of our script:

layer_registry = QgsMapLayerRegistry.instance()
for layer in layer_registry.mapLayersByName("continents"):
  layer_registry.removeMapLayer(layer.id())

When our script is running, it will create one (and only one) layer that shows the various continents in different colors. These will appear as different shades of gray in the printed article, but the colors will be visible on the computer screen:

Now, let's use the same data set to color each country based on its relative population.
We'll start by removing the existing population layer, if it exists:

layer_registry = QgsMapLayerRegistry.instance()
for layer in layer_registry.mapLayersByName("population"):
  layer_registry.removeMapLayer(layer.id())

Next, we open the World Borders Dataset into a new layer called "population":

layer = iface.addVectorLayer("/path/to/TM_WORLD_BORDERS-0.3.shp",
                             "population", "ogr")

We then need to set up our various population ranges:

from PyQt4.QtGui import QColor

ranges = []
for min_pop,max_pop,color in [(0,        99999,     "#332828"),
                              (100000,   999999,    "#4c3535"),
                              (1000000,  4999999,   "#663d3d"),
                              (5000000,  9999999,   "#804040"),
                              (10000000, 19999999,  "#993d3d"),
                              (20000000, 49999999,  "#b33535"),
                              (50000000, 999999999, "#cc2828")]:
  symbol = QgsSymbolV2.defaultSymbol(layer.geometryType())
  symbol.setColor(QColor(color))
  ranges.append(QgsRendererRangeV2(min_pop, max_pop,
                                   symbol, ""))

Now that we have our population ranges and their associated colors, we simply set up a graduated symbol renderer to choose a symbol based on the value of the pop2005 attribute, and tell the map to redraw itself:

layer.setRendererV2(QgsGraduatedSymbolRendererV2("pop2005",
                                                 ranges))
layer.triggerRepaint()

The result will be a map layer that shades each country according to its population:

Calculating the distance between two user-defined points

In our final example of using the PyQGIS library, we'll write some code that, when run, starts listening for mouse events from the user. If the user clicks on a point, drags the mouse, and then releases the mouse button again, we will display the distance between those two points. This is a good example of how to add your own map interaction logic to QGIS, using the QgsMapTool class. This is the basic structure for our QgsMapTool subclass:

class DistanceCalculator(QgsMapTool):
  def __init__(self, iface):
    QgsMapTool.__init__(self, iface.mapCanvas())
    self.iface = iface

  def canvasPressEvent(self, event):
    ...

  def canvasReleaseEvent(self, event):
    ...

To make this map tool active, we'll create a new instance of it and pass it to the mapCanvas.setMapTool() method. Once this is done, our canvasPressEvent() and canvasReleaseEvent() methods will be called whenever the user clicks or releases the mouse button over the map canvas.

Let's start with the code that handles the user clicking on the canvas. In this method, we're going to convert from the pixel coordinates that the user clicked on to the map coordinates (that is, a latitude and longitude value). We'll then remember these coordinates so that we can refer to them later. Here is the necessary code:

def canvasPressEvent(self, event):
  transform = self.iface.mapCanvas().getCoordinateTransform()
  self._startPt = transform.toMapCoordinates(event.pos().x(),
                                             event.pos().y())

When the canvasReleaseEvent() method is called, we'll want to do the same with the point at which the user released the mouse button:

def canvasReleaseEvent(self, event):
  transform = self.iface.mapCanvas().getCoordinateTransform()
  endPt = transform.toMapCoordinates(event.pos().x(),
                                     event.pos().y())

Now that we have the two desired coordinates, we'll want to calculate the distance between them.
We can do this using a QgsDistanceArea object:

  crs = self.iface.mapCanvas().mapRenderer().destinationCrs()
  distance_calc = QgsDistanceArea()
  distance_calc.setSourceCrs(crs)
  distance_calc.setEllipsoid(crs.ellipsoidAcronym())
  distance_calc.setEllipsoidalMode(crs.geographicFlag())
  distance = distance_calc.measureLine([self._startPt,
                                        endPt]) / 1000

Notice that we divide the resulting value by 1000. This is because the QgsDistanceArea object returns the distance in meters, and we want to display the distance in kilometers. Finally, we'll display the calculated distance in the QGIS message bar:

  messageBar = self.iface.messageBar()
  messageBar.pushMessage("Distance = %d km" % distance,
                         level=QgsMessageBar.INFO,
                         duration=2)

Now that we've created our map tool, we need to activate it. We can do this by adding the following to the end of our script:

calculator = DistanceCalculator(iface)
iface.mapCanvas().setMapTool(calculator)

With the map tool activated, the user can click and drag on the map. When the mouse button is released, the distance (in kilometers) between the two points will be displayed in the message bar:

Summary

In this article, we took an in-depth look at the PyQGIS libraries and how you can use them in your own programs. We learned that the QGIS Python libraries are implemented as wrappers around the QGIS APIs implemented in C++. We saw how Python programmers can understand and work with the QGIS reference documentation, even though it is written for C++ developers. We also looked at the way the PyQGIS libraries are organized into different packages, and learned about the most important classes defined in the qgis.core and qgis.gui packages.

We then saw how a coordinate reference system (CRS) is used to translate from points on the three-dimensional surface of the Earth to coordinates within a two-dimensional map plane. We learned that vector format data is made up of features, where each feature has an ID, a geometry, and a set of attributes, and that symbols are used to draw vector geometries onto a map layer, while renderers are used to choose which symbol to use for a given feature. We learned how a spatial index can be used to speed up access to vector features.

Next, we saw how raster format data is organized into bands that represent information such as color, elevation, and so on, and looked at the various ways in which a raster data source can be displayed within a map layer. Along the way, we learned how to access the contents of a raster data source. Finally, we looked at various techniques for performing useful tasks using the PyQGIS library.

In the next article, we will learn more about QGIS Python plugins, and then go on to use the plugin architecture as a way of implementing a useful feature within a mapping application.

Resources for Article:

Further resources on this subject:
QGIS Feature Selection Tools [article]
Server Logs [article]