
How-To Tutorials - Programming

1083 Articles

Important features of Mockito

Packt
04 Sep 2013
4 min read
Reducing boilerplate code with annotations

Mockito allows the use of annotations to reduce the lines of test code and increase the readability of tests. Let's take into consideration some of the tests that we have seen in previous examples.

Removing boilerplate code by using the MockitoJUnitRunner

The shouldCalculateTotalWaitingTimeAndAssertTheArgumentsOnMockUsingArgumentCaptor test from the Verifying behavior (including argument capturing, verifying call order and working with asynchronous code) section can be rewritten as follows, using Mockito annotations and the @RunWith(MockitoJUnitRunner.class) JUnit runner:

@RunWith(MockitoJUnitRunner.class)
public class _07ReduceBoilerplateCodeWithAnnotationsWithRunner {

    @Mock
    KitchenService kitchenServiceMock;

    @Captor
    ArgumentCaptor<Meal> mealArgumentCaptor;

    @InjectMocks
    WaiterImpl objectUnderTest;

    @Test
    public void shouldCalculateTotalWaitingTimeAndAssertTheArgumentsOnMockUsingArgumentCaptor() throws Exception {
        //given
        final int mealPreparationTime = 10;
        when(kitchenServiceMock.calculatePreparationTime(any(Meal.class)))
            .thenReturn(mealPreparationTime);

        //when
        int waitingTime = objectUnderTest.calculateTotalWaitingTime(
            createSampleMealsContainingVegetarianFirstCourse());

        //then
        assertThat(waitingTime, is(mealPreparationTime));
        verify(kitchenServiceMock).calculatePreparationTime(mealArgumentCaptor.capture());
        assertThat(mealArgumentCaptor.getValue(), is(VegetarianFirstCourse.class));
        assertThat(mealArgumentCaptor.getAllValues().size(), is(1));
    }

    private List<Meal> createSampleMealsContainingVegetarianFirstCourse() {
        List<Meal> meals = new ArrayList<Meal>();
        meals.add(new VegetarianFirstCourse());
        return meals;
    }
}

What happened here is that:

- All of the boilerplate code can be removed because you are using the @RunWith(MockitoJUnitRunner.class) JUnit runner.
- Mockito.mock(…) has been replaced with the @Mock annotation. You can provide additional parameters to the annotation, such as name, answer, or extraInterfaces. The field name of the annotated mock is referred to in any verification, so it's easier to identify the mock.
- ArgumentCaptor.forClass(…) is replaced with the @Captor annotation. When using the @Captor annotation you avoid warnings related to complex generic types.
- Thanks to the @InjectMocks annotation, your object under test is initialized, the proper constructor/setters are found, and Mockito injects the appropriate mocks for you. There is no explicit creation of the object under test, and you don't need to provide the mocks as arguments of the constructor. @InjectMocks uses either constructor injection, property injection, or setter injection.

Taking advantage of advanced mocks configuration

Mockito gives you the possibility of providing different answers for your mocks. Let's focus more on that.

Getting more information on NullPointerException

Remember the Waiter's askTheCleaningServiceToCleanTheRestaurant() method:

@Override
public boolean askTheCleaningServiceToCleanTheRestaurant(TypeOfCleaningService typeOfCleaningService) {
    CleaningService cleaningService =
        cleaningServiceFactory.getMeACleaningService(typeOfCleaningService);
    try {
        cleaningService.cleanTheTables();
        cleaningService.sendInformationAfterCleaning();
        return SUCCESSFULLY_CLEANED_THE_RESTAURANT;
    } catch (CommunicationException communicationException) {
        System.err.println("An exception took place while trying to send info about cleaning the restaurant");
        return FAILED_TO_CLEAN_THE_RESTAURANT;
    }
}

Let's assume that we want to test this function.
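For context, here is a minimal sketch of what the WaiterImpl class under test might look like. The original example does not show this class in full, so the Waiter interface, the constructor, and the calculateTotalWaitingTime() body below are assumptions made purely to illustrate how @InjectMocks wires the declared mocks into the object under test:

public class WaiterImpl implements Waiter {

    private static final boolean SUCCESSFULLY_CLEANED_THE_RESTAURANT = true;
    private static final boolean FAILED_TO_CLEAN_THE_RESTAURANT = false;

    private final KitchenService kitchenService;
    private final CleaningServiceFactory cleaningServiceFactory;

    // @InjectMocks locates this constructor and passes in the mocks created for the @Mock fields
    public WaiterImpl(KitchenService kitchenService, CleaningServiceFactory cleaningServiceFactory) {
        this.kitchenService = kitchenService;
        this.cleaningServiceFactory = cleaningServiceFactory;
    }

    // Hypothetical implementation: the total waiting time is the sum of the meal preparation times
    public int calculateTotalWaitingTime(List<Meal> meals) {
        int totalWaitingTime = 0;
        for (Meal meal : meals) {
            totalWaitingTime += kitchenService.calculatePreparationTime(meal);
        }
        return totalWaitingTime;
    }

    // askTheCleaningServiceToCleanTheRestaurant(...) is the method shown above
}

With constructor injection like this, Mockito simply instantiates WaiterImpl with the mocks declared in the test; no setUp() method or manual new WaiterImpl(...) call is needed.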
We inject the CleaningServiceFactory as a mock, but we forgot to stub the getMeACleaningService(…) method. Normally we would get a NullPointerException, since an unstubbed method returns null. But what will happen if we provide RETURNS_SMART_NULLS as the answer? Let's take a look at the body of the following test:

@Mock(answer = Answers.RETURNS_SMART_NULLS)
CleaningServiceFactory cleaningServiceFactory;

@InjectMocks
WaiterImpl objectUnderTest;

@Test
public void shouldThrowSmartNullPointerExceptionWhenUsingUnstubbedMethod() {
    //given
    // Oops, forgot to stub the CleaningServiceFactory.getMeACleaningService(TypeOfCleaningService) method
    try {
        //when
        objectUnderTest.askTheCleaningServiceToCleanTheRestaurant(TypeOfCleaningService.VERY_FAST);
        fail();
    } catch (SmartNullPointerException smartNullPointerException) {
        System.err.println("A SmartNullPointerException will be thrown here with very precise information about the error [" + smartNullPointerException + "]");
    }
}

What happened in the test is that:

- We create a mock of the CleaningServiceFactory with the RETURNS_SMART_NULLS answer.
- The mock is injected into the WaiterImpl.
- We do not stub the getMeACleaningService(…) method of the CleaningServiceFactory.
- The SmartNullPointerException is thrown at the line containing cleaningService.cleanTheTables().
- The exception contains very detailed information about the reason it happened and where it happened.

In order to have RETURNS_SMART_NULLS as the default answer (so that you don't have to explicitly define the answer for your mock), you have to create a class named MockitoConfiguration in the package org.mockito.configuration that either extends DefaultMockitoConfiguration or implements the IMockitoConfiguration interface:

package org.mockito.configuration;

import org.mockito.internal.stubbing.defaultanswers.ReturnsSmartNulls;
import org.mockito.stubbing.Answer;

public class MockitoConfiguration extends DefaultMockitoConfiguration {
    public Answer<Object> getDefaultAnswer() {
        return new ReturnsSmartNulls();
    }
}

Summary

In this article we learned in detail about reducing boilerplate code with annotations and taking advantage of advanced mocks configuration, along with their code implementation.

Resources for Article:

Further resources on this subject:
- Testing your App [Article]
- Drools JBoss Rules 5.0 Flow (Part 2) [Article]
- Easily Writing SQL Queries with Spring Python [Article]


Using Gerrit with GitHub

Packt
04 Sep 2013
14 min read
In this article by Luca Milanesio, author of the book Learning Gerrit Code Review, we will learn about Gerrit Code Review. GitHub is the world's largest platform for the free hosting of Git projects, with over 4.5 million registered developers. We will now provide a step-by-step example of how to connect Gerrit to an external GitHub server so as to share the same set of repositories. Additionally, we will provide guidance on how to use the Gerrit Code Review workflow and GitHub concurrently. By the end of this article we will have our Gerrit installation fully integrated and ready to be used for both open source public projects and private projects on GitHub.

GitHub workflow

GitHub has become the most popular website for open source projects, thanks to the migration of some major projects to Git (for example, Eclipse) and new projects adopting it, along with the introduction of the social aspect of software projects that piggybacks on the Facebook hype. The following diagram shows the GitHub collaboration model. The key aspects of the GitHub workflow are as follows:

- Each developer pushes to their own repository and pulls from others.
- Developers who want to make a change to another repository create a fork on GitHub and work on their own clone.
- When forked repositories are ready to be merged, pull requests are sent to the original repository maintainer.
- The pull requests include all of the proposed changes and their associated discussion threads.
- Whenever a pull request is accepted, the change is merged by the maintainer and pushed to their repository on GitHub.

GitHub controversy

The preceding workflow works very effectively for most open source projects; however, when projects get bigger and more complex, the tools provided by GitHub are too unstructured, and a more defined review process with proper tools, additional security, and governance is needed. In May 2012 Linus Torvalds, the inventor of Git version control, openly criticized GitHub as a commit editing tool directly on a pull request discussion thread: "I consider GitHub useless for these kinds of things. It's fine for hosting, but the pull requests and the online commit editing, are just pure garbage" and additionally, "the way you can clone a (code repository), make changes on the web, and write total crap commit messages, without GitHub in any way making sure that the end result looks good." See https://github.com/torvalds/linux/pull/17#issuecomment-5654674.

Gerrit provides the additional value that Linus Torvalds claimed was missing in the GitHub workflow: using Gerrit and GitHub together allows the open source development community to reuse the extended hosting reach and social integration of GitHub with the governance power of the Gerrit review engine.

GitHub authentication

The list of authentication backends supported by Gerrit does not include GitHub, and it cannot be used out of the box, as it does not support OpenID authentication. However, a GitHub plugin for Gerrit has recently been released in order to fill the gap and allow a seamless integration. GitHub implements OAuth 2.0 for allowing external applications, such as Gerrit, to integrate using a three-step browser-based authentication. Using this scheme, a user can leverage their existing GitHub account without the need to provision and manage a separate one in Gerrit. Additionally, the Gerrit instance will be able to self-provision the SSH public keys needed for pushing changes for review.
In order for us to use GitHub OAuth authentication with Gerrit, we need to do the following:

- Build the Gerrit GitHub plugin
- Install the GitHub OAuth filter into the Gerrit libraries (/lib under the Gerrit site directory)
- Reconfigure Gerrit to use the HTTP authentication type

Building the GitHub plugin

The Gerrit GitHub plugin can be found under the Gerrit plugins/github repository on https://gerrit-review.googlesource.com/#/admin/projects/plugins/github. It is open source under the Apache 2.0 license and can be cloned and built using the Java 6 JDK and Maven. Refer to the following example:

$ git clone https://gerrit.googlesource.com/plugins/github
$ cd github
$ mvn install
[…]
[INFO] BUILD SUCCESS
[INFO] -------------------------------------------------------
[INFO] Total time: 9.591s
[INFO] Finished at: Wed Jun 19 18:38:44 BST 2013
[INFO] Final Memory: 12M/145M
[INFO] -------------------------------------------------------

The Maven build should generate the following artifacts:

- github-oauth/target/github-oauth*.jar, the GitHub OAuth library for authenticating Gerrit users
- github-plugin/target/github-plugin*.jar, the Gerrit plugin for integrating with GitHub repositories and pull requests

Installing GitHub OAuth library

The GitHub OAuth JAR file needs to be copied to the Gerrit /lib directory; this is required to allow Gerrit to use it for filtering all HTTP requests and enforcing the GitHub three-step authentication process:

$ cp github-oauth/target/github-oauth-*.jar /opt/gerrit/lib/

Installing GitHub plugin

The GitHub plugin includes the additional support for the overall configuration, the advanced GitHub repositories replication, and the integration of pull requests into the Code Review process. We now need to install the plugin before running the Gerrit init again so that we can benefit from the simplified automatic configuration steps:

$ cp github-plugin/target/github-plugin-*.jar /opt/gerrit/plugins/github.jar

Register Gerrit as a GitHub OAuth application

Before going through the Gerrit init, we need to tell GitHub to trust Gerrit as a partner application. This is done through the generation of a ClientId/ClientSecret pair associated with the exact Gerrit URLs that will be used for initiating the three-step OAuth authentication. We can register a new application in GitHub through the URL https://github.com/settings/applications/new, where the following three fields are requested:

- Application name: It is the logical name of the application authorized to access GitHub, for example, Gerrit.
- Main URL: The Gerrit canonical web URL used for redirecting to GitHub OAuth authentication, for example, https://myhost.mydomain:8443.
- Callback URL: The URL that GitHub should redirect to when the OAuth authentication is successfully completed, for example, https://myhost.mydomain:8443/oauth.

GitHub will automatically generate a unique ClientId/ClientSecret pair that has to be provided to Gerrit, identifying it as a trusted authentication partner. ClientId/ClientSecret are not GitHub credentials and cannot be used by an interactive user to access any GitHub data or information. They are only used for authorizing the integration between a Gerrit instance and GitHub.

Running Gerrit init to configure GitHub OAuth

We now need to stop Gerrit and go through the init steps again in order to reconfigure the Gerrit authentication.
We need to enable HTTP authentication by choosing an HTTP header to be used to verify the user's credentials, and go through the GitHub settings wizard to configure the OAuth authentication.

$ /opt/gerrit/bin/gerrit.sh stop
Stopping Gerrit Code Review: OK
$ cd /opt/gerrit
$ java -jar gerrit.war init
[...]
*** User Authentication ***
Authentication method []: HTTP RETURN
Get username from custom HTTP header [Y/n]? Y RETURN
Username HTTP header []: GITHUB_USER RETURN
SSO logout URL : /oauth/reset RETURN
*** GitHub Integration ***
GitHub URL [https://github.com]: RETURN
Use GitHub for Gerrit login ? [Y/n]? Y RETURN
ClientId []: 384cbe2e8d98192f9799 RETURN
ClientSecret []: f82c3f9b3802666f2adcc4 RETURN
Initialized /opt/gerrit
$ /opt/gerrit/bin/gerrit.sh start
Starting Gerrit Code Review: OK

Using GitHub login for Gerrit

Gerrit is now fully configured to register and authenticate users through GitHub OAuth. When opening the browser to access any Gerrit web page, we are automatically redirected to GitHub for login. If we have already visited and authenticated with GitHub previously, the browser cookie will be automatically recognized and used for the authentication, instead of presenting the GitHub login page. Alternatively, if we do not yet have a GitHub account, we can create a new GitHub profile by clicking on the SignUp button.

Once the authentication process is successfully completed, GitHub requests the user's authorization to grant access to their public profile information. The following screenshot shows the GitHub OAuth authorization for Gerrit. The authorization status is then stored under the user's GitHub application preferences on https://github.com/settings/applications. Finally, GitHub redirects back to Gerrit, propagating the user's profile securely using a one-time code which is used to retrieve the full profile data, including username, full name, e-mail, and associated SSH public keys.

Replication to GitHub

The next step in the Gerrit to GitHub integration is to share the same Git repositories and then keep them up to date; this can easily be achieved by using the Gerrit replication plugin. The standard Gerrit replication is master-slave, where Gerrit always plays the role of the master node and pushes to remote slaves. We will refer to this scheme as push replication, because the actual control of the action is given to Gerrit through a git push operation of new commits and branches.

Configure Gerrit replication plugin

In order to configure push replication we need to enable the Gerrit replication plugin through Gerrit init:

$ /opt/gerrit/bin/gerrit.sh stop
Stopping Gerrit Code Review: OK
$ cd /opt/gerrit
$ java -jar gerrit.war init
[...]
*** Plugins ***
Prompt to install core plugins [y/N]? y RETURN
Install plugin reviewnotes version 2.7-rc4 [y/N]? RETURN
Install plugin commit-message-length-validator version 2.7-rc4 [y/N]? RETURN
Install plugin replication version 2.6-rc3 [y/N]? y RETURN
Initialized /opt/gerrit
$ /opt/gerrit/bin/gerrit.sh start
Starting Gerrit Code Review: OK

The Gerrit replication plugin relies on the replication.config file under the /opt/gerrit/etc directory to identify the list of target Git repositories to push to. The configuration syntax is a standard .ini format where each group section represents a target replica slave.
See the following simplest replication.config script for replicating to GitHub:

[remote "github"]
url = git@github.com:myorganisation/${name}.git

The preceding configuration enables all of the repositories in Gerrit to be replicated to GitHub under the myorganisation GitHub team account.

Authorizing Gerrit to push to GitHub

Now that Gerrit knows where to push, we need GitHub to authorize the write operations to its repositories. To do so, we need to upload the SSH public key of the underlying OS user under which Gerrit is running to one of the accounts in the GitHub myorganisation team with the permissions to push to any of the GitHub repositories. Assuming that Gerrit runs under the OS user gerrit, we can copy and paste the SSH public key value from ~gerrit/.ssh/id_rsa.pub (or ~gerrit/.ssh/id_dsa.pub) into the Add an SSH Key section of the GitHub account at https://github.com/settings/ssh.

Start working with Gerrit replication

Everything is now ready to start playing with Gerrit to GitHub replication. Whenever a change to a repository is made on Gerrit, it will be automatically replicated to the corresponding GitHub repository. In reality there is one additional operation that is needed on the GitHub side: the actual creation of the empty repositories, using https://github.com/new, associated with the ones created in Gerrit. We need to make sure that we select the organization name and repository name consistent with the ones defined in Gerrit and in the replication.config file. Never initialize the repository from GitHub with an empty commit or readme file; otherwise the first replication attempt from Gerrit will result in a conflict and will then fail.

Now GitHub and Gerrit are fully connected, and whenever a repository in GitHub matches one of the repositories in Gerrit, it will be linked and synchronized with the latest set of commits pushed in Gerrit. Thanks to the Gerrit-GitHub authentication previously configured, Gerrit and GitHub share the same set of users, and the commit authors will be automatically recognized and formatted by GitHub. The following screenshot shows Gerrit commits replicated to GitHub.

Reviewing and merging to GitHub branches

The final goal of the Code Review process is to agree on and merge changes to their branches. The merging strategies need to be aligned with real-life scenarios that may arise when using Gerrit and GitHub concurrently. During the Code Review process the alignment between Gerrit and GitHub was at the change level, not influenced by the evolution of their target branches. Gerrit changes and GitHub pull requests are isolated branches managed by their review lifecycle. When a change is merged, it needs to align with the latest status of its target branch using a fast-forward, merge, rebase, or cherry-pick strategy.

Using the standard Gerrit merge functionality, we can apply the configured project merge strategy to the current status of the target branch on Gerrit. The situation on GitHub may have changed as well, so even if the Gerrit merge has succeeded there is no guarantee that the subsequent synchronization to GitHub will do the same! The GitHub plugin mitigates this risk by implementing a two-phase submit + merge operation for merging open changes, as follows:

Phase-1: The change's target branch is checked against its remote peer on GitHub and fast-forwarded if needed. If the two branches diverge, the submit + merge is aborted and manual merge intervention is requested.
Phase-2: The change is merged on its target branch in Gerrit and an additional ad hoc replication is triggered. If the merge succeeds, the GitHub pull request is marked as completed.

At the end of Phase-2 the Gerrit and GitHub statuses will be completely aligned. The pull request author will then receive the notification that his/her commit has been merged.

Using Gerrit and GitHub on http://gerrithub.io

When using Gerrit and GitHub on the web with public or private repositories, all of the commits are replicated from Gerrit to GitHub, and each one of them has a complete copy of the data. If we are using a Git and collaboration server on GitHub over the Internet, why can't we do the same for its Gerrit counterpart? Can we avoid installing a standalone instance of Gerrit just for the purpose of going through a formal Code Review? One hassle-free solution is to use the GerritHub service (http://gerrithub.io), which offers a free Gerrit instance in the cloud, already configured and connected with GitHub through the github-plugin and the github-oauth authentication library.

All of the flows that we have covered in this article are completely automated, including the replication and the automatic conversion of pull requests into changes. As accounts are shared with GitHub, we do not need to register or create another account to use GerritHub; we can just visit http://gerrithub.io and start using Gerrit Code Review with our existing GitHub projects without having to teach our existing community about a new tool. GerritHub also includes an initial setup wizard for the configuration and automation of the Gerrit projects, and the option to configure the Gerrit groups using the existing GitHub teams. Once Gerrit is configured, Code Review and GitHub can be used seamlessly for achieving maximum control and social reach within your developer community.

Summary

We have now integrated our Gerrit installation with GitHub authentication for a seamless single-sign-on experience. Using an existing GitHub account, we started using Gerrit replication to automatically mirror all the commits to GitHub repositories, allowing our projects to have an extended reach to external users, who are free to fork our repositories and to contribute changes as pull requests. Finally, we have completed our Code Review in Gerrit and managed the merge to GitHub with a two-phase submit + merge process to ensure that the target branches on both Gerrit and GitHub have been merged and aligned accordingly. Similarly to GitHub, this Gerrit setup can be leveraged for free on the web without having to manage a separate private instance, thanks to the free http://gerrithub.io service available in the cloud.

Resources for Article:

Further resources on this subject:
- Getting Dynamics NAV 2013 on Your Computer – For (Almost) Free [Article]
- Building Your First Zend Framework Application [Article]
- Quick start - your first Sinatra application [Article]


Introduction to Drools

Packt
03 Sep 2013
8 min read
So, what is Drools?

The techie answer guaranteed to get that glazed over look from anyone hounding you for details on project design is that Drools, part of the JBoss Enterprise BRMS product since federating in 2005, is a Business Rule Management System (BRMS) and rules engine written in Java which implements and extends the Rete pattern-matching algorithm within a rules engine capable of both forward and backward chaining inference.

Now, how about an answer fit for someone new to rules engines? After all, you're here to learn the basics, right? Drools is a collection of tools which allow us to separate and reason over logic and data found within business processes. Ok, but what does that mean? Digging deeper, the keywords in that statement we need to consider are "logic" and "data". Logic, or rules in our case, are pieces of knowledge often expressed as, "When some conditions occur, then do some tasks". Simple enough, no? These pieces of knowledge could be about any process in your organization, such as how you go about approving TPS reports, calculate interest on a loan, or how you divide workload among employees. While these processes sound complex, in reality, they're made up of a collection of simple business rules.

Let's consider a daily ritual process for many workers: the morning coffee. The whole process is second nature to coffee drinkers. As they prepare for their work day, they probably don't consider the steps involved—they simply react to situations at hand. However, we can capture the process as a series of simple rules:

- When your mug is dirty, then go clean it
- When your mug is clean, then go check for coffee
- When the pot is full, then pour yourself a cup and return to your desk
- When the pot is empty, then mumble about co-workers and make some coffee

Alright, so that's logic, but what's data? Facts (our word for data) are the objects that drive the decision process for us. Given the rules from our coffee example, some facts used to drive our decisions would be the mug and the coffee pot. While we know from reading our rules what to do when the mug or pot are in a particular state, we need facts that reflect an actual state on a particular day to reason over.

In seeing how a BRMS allows us to define the business rules of a business process, we can now state some of the features of a rules engine. As stated before, we've separated logic from data—always a good thing! In our example, notice how we didn't see any detail about how to clean our mug or how to make a new batch of coffee, meaning we've also separated what to do from how to do it, thus allowing us to change procedure without altering logic. Lastly, by gathering all of our rules in one place, we've centralized our business process knowledge. This gives us an excellent facility when we need to explain a business process or transfer knowledge. It also helps to prevent tribal knowledge, or the ownership and understanding of an undocumented procedure by just one or a few users.
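To make the coffee example concrete, the following sketch shows roughly how such facts and rules could be wired together in Java using the Drools 5 knowledge API. The Mug and CoffeePot fact classes, the coffee-rules.drl file name, and the overall wiring are assumptions made for illustration; they are not part of the original example:

import org.drools.KnowledgeBase;
import org.drools.KnowledgeBaseFactory;
import org.drools.builder.KnowledgeBuilder;
import org.drools.builder.KnowledgeBuilderFactory;
import org.drools.builder.ResourceType;
import org.drools.io.ResourceFactory;
import org.drools.runtime.StatefulKnowledgeSession;

public class MorningCoffeeExample {

    // Facts: plain Java objects that describe the current state of the world
    public static class Mug {
        private boolean dirty = true;
        public boolean isDirty() { return dirty; }
        public void setDirty(boolean dirty) { this.dirty = dirty; }
    }

    public static class CoffeePot {
        private boolean empty = true;
        public boolean isEmpty() { return empty; }
        public void setEmpty(boolean empty) { this.empty = empty; }
    }

    public static void main(String[] args) {
        // Compile the "when ... then ..." rules kept in a separate DRL resource
        KnowledgeBuilder builder = KnowledgeBuilderFactory.newKnowledgeBuilder();
        builder.add(ResourceFactory.newClassPathResource("coffee-rules.drl"), ResourceType.DRL);
        if (builder.hasErrors()) {
            throw new IllegalStateException(builder.getErrors().toString());
        }

        KnowledgeBase knowledgeBase = KnowledgeBaseFactory.newKnowledgeBase();
        knowledgeBase.addKnowledgePackages(builder.getKnowledgePackages());

        // Insert today's facts and let the engine decide which rules fire
        StatefulKnowledgeSession session = knowledgeBase.newStatefulKnowledgeSession();
        try {
            session.insert(new Mug());
            session.insert(new CoffeePot());
            session.fireAllRules();
        } finally {
            session.dispose();
        }
    }
}

The rules themselves would live in coffee-rules.drl, expressed in the same "when some conditions occur, then do some tasks" form described above; the Java code never needs to know how a mug gets cleaned or how the coffee gets made.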
So when is a BRMS the right choice? Consider a rules engine when a problem is too complex for traditional coding approaches. Rules can abstract away the complexity and prevent usage of fragile implementations. Rules engines are also beneficial when a problem isn't fully known. More often than not, you'll find yourself iterating business methodology in order to fully understand the small details involved that are second nature to users. Rules are flexible and allow us to easily change what we know about a procedure to accommodate this iterative design. This same flexibility comes in handy if you find that your logic changes often over time. Lastly, in providing a straightforward approach to documenting business rules, rules engines are an excellent choice if you find domain knowledge readily available, but via non-technical people who may be incapable of contributing to code.

Sounds great, so let's get started, right? Well, I promised I'd also help you decide when a rules engine is not the right choice for you. In using a rules engine, someone must translate processes into actual rules, which can be a blessing in taking business logic away from developers, but also a curse in required training. Secondly, if your logic doesn't change very often, then rules might be overkill. Likewise, if your project is small in nature and likely to be used once and forgotten, then rules probably aren't for you. However, beware of the small system that will grow in complexity going forward!

So if rules are right for you, why should you choose Drools? First and foremost, Drools has the flexibility of an open source license with the support of JBoss available. Drools also boasts five modules (to be discussed in more detail later), making the system quite extensible with domain-specific languages, graphical editing tools, web-based tools, and more. If you're partial to Eclipse, you'll also likely come to appreciate their plugin. Still not convinced? Read on and give it a shot—after all, that's why you're here, right?

Installation

In just five easy steps, you can integrate Drools into a new or existing project.

Step 1 – what do I need?

For starters, you will need to check that you have all of the required elements, listed as follows (all versions are as of the time of writing):

- Java 1.5 (or higher) SE JDK
- Apache Maven 3.0.4
- Eclipse 4.2 (Juno) and the Drools plugin
- Memory—512 MB (minimum), 1 GB or higher recommended. This will depend largely on the scale of your JVM and rule sessions, but the more the better!

Step 2 – installing Java

Java is the core language on which Drools is built, and is the language in which we'll be writing, so we'll definitely be needing that. The easiest way to get Java going is to download it and follow the installation instructions found at: www.oracle.com/technetwork/java/javase/downloads/index.html

Step 3 – installing Maven

Maven is a build automation tool from Apache that lets us describe a configuration of the project we're building and leave dependency management (amongst other things) up to it to work out. Again, the easiest way to get Maven up and running is to download it and follow the documentation provided with the tool, found at: maven.apache.org/download.cgi

Step 4 – installing Eclipse

If you happen to have some other IDE of choice, or maybe you're just the old school type, then it's perfectly acceptable to author and execute your Drools-integrated code in your usual fashion. However, if you're an Eclipse fan, or you'd like to take advantage of auto-complete, syntax highlighting, and debugging features, then I recommend you go ahead and install Eclipse and the Drools plugin.
The version of Eclipse that we're after is Eclipse IDE for Java Developers, which you can download and find installation instructions for on their site: http://www.eclipse.org/downloads/

Step 5 – installing the Drools Eclipse plugin

In order to add the IDE plugin to Eclipse, the easiest method is to use Eclipse's built-in update manager. First, you'll need to add something the plugin depends on: the Graphical Editing Framework (GEF). In the Eclipse menu, click on Help, then on Install New Software..., enter the following URL in the Work with: field, and hit Add:

download.eclipse.org/tools/gef/updates/releases/

Give your repository a nifty name in the pop-up window, such as GEF, and continue on with the install as prompted. You'll be asked to verify what you're installing and accept the license. Now we can add the Drools plugin itself; you can find the URL you'll need by visiting:

http://www.jboss.org/drools/downloads.html

Then, search for the text Eclipse update site and you'll see the link you need. Copy the address of the link to your clipboard, head back into Eclipse, and follow the same process you did for installing GEF. Note that you'll be asked to confirm the install of unsigned content, and that this is expected.

Summary

By this point, you know what Drools is, and you should also be ready to integrate Drools into your applications. If you find yourself stuck, one of the good parts about an open source community is that there's nearly always someone who has faced your problem before and likely has a solution to recommend.

Resources for Article:

Further resources on this subject:
- Drools Integration Modules: Spring Framework and Apache Camel [Article]
- Human-readable Rules with Drools JBoss Rules 5.0 (Part 2) [Article]
- Drools JBoss Rules 5.0 Flow (Part 2) [Article]


Top two features of GSON

Packt
03 Sep 2013
5 min read
Java objects support

Objects in GSON are referred to as types of JsonElement. The GSON library can convert any user-defined class objects to and from the JSON representation. The Student class is a user-defined class, and GSON can serialize any Student object to JSON.

The Student.java class is as follows:

public class Student {
    private String name;
    private String subject;
    private int mark;

    public String getName() {
        return name;
    }
    public void setName(String name) {
        this.name = name;
    }
    public String getSubject() {
        return subject;
    }
    public void setSubject(String subject) {
        this.subject = subject;
    }
    public int getMark() {
        return mark;
    }
    public void setMark(int mark) {
        this.mark = mark;
    }
}

The code for JavaObjectFeaturesUse.java is as follows:

import com.google.gson.Gson;
import com.packt.chapter.vo.Student;

public class JavaObjectFeaturesUse {
    public static void main(String[] args) {
        Gson gson = new Gson();
        Student aStudent = new Student();
        aStudent.setName("Sandeep");
        aStudent.setMark(128);
        aStudent.setSubject("Computer Science");
        String studentJson = gson.toJson(aStudent);
        System.out.println(studentJson);
        Student anotherStudent = gson.fromJson(studentJson, Student.class);
        System.out.println(anotherStudent instanceof Student);
    }
}

The output of the preceding code is as follows:

{"name":"Sandeep","subject":"Computer Science","mark":128}
true

The preceding code creates a Student object with the name Sandeep, the subject Computer Science, and a mark of 128. A Gson object is then instantiated and the Student object is passed as a parameter to the toJson() method, which returns a string with the JSON representation of the Java object. This string is printed as the first line in the console. The output JSON representation of the Student object is a collection of key/value pairs; each Java property of the Student class becomes a key in the JSON string. In the last part of the code, the fromJson() method takes the generated JSON string as the first input parameter and Student.class as the second parameter, to convert the JSON string back into a Student Java object. The last line of the code uses the instanceof operator to verify whether the object produced by the fromJson() method is of type Student. The console prints true as the output, and if we print the values, we will get the same values as in the JSON.

Serialization and deserialization

GSON has implicit serialization support for some classes, such as the Java wrapper types (Integer, Long, Double, and so on), java.net.URL, java.net.URI, java.util.Date, and so on. Let's see an example:

import java.util.Date;
import com.google.gson.Gson;

public class InbuiltSerializerFeature {
    public static void main(String[] args) {
        Date aDateJson = new Date();
        Gson gson = new Gson();
        String jsonDate = gson.toJson(aDateJson);
        System.out.println(jsonDate);
    }
}

The output of the preceding code is as follows:

May 29, 2013 8:55:07 PM

The preceding code serializes the Java Date class object to its JSON representation. So far, you have seen how GSON serializes and deserializes objects; GSON also lets developers provide custom serializers and deserializers for user-defined Java class objects. Let's see how that works.
The following code is an example of a custom serializer:

class StudentTypeSerializer implements JsonSerializer<Student> {
    @Override
    public JsonElement serialize(Student student, Type type, JsonSerializationContext context) {
        JsonObject obj = new JsonObject();
        obj.addProperty("studentname", student.getName());
        obj.addProperty("subjecttaken", student.getSubject());
        obj.addProperty("marksecured", student.getMark());
        return obj;
    }
}

The following code is an example of a custom deserializer:

class StudentTypeDeserializer implements JsonDeserializer<Student> {
    @Override
    public Student deserialize(JsonElement jsonElement, Type type, JsonDeserializationContext context) throws JsonParseException {
        JsonObject jsonObject = jsonElement.getAsJsonObject();
        Student aStudent = new Student();
        aStudent.setName(jsonObject.get("studentname").getAsString());
        aStudent.setSubject(jsonObject.get("subjecttaken").getAsString());
        aStudent.setMark(jsonObject.get("marksecured").getAsInt());
        return aStudent;
    }
}

The following code tests the custom serializer and deserializer. Note that both the serializer and the deserializer are registered on the GsonBuilder; without the deserializer registration, fromJson() could not map the custom keys back to the Student fields:

import java.lang.reflect.Type;
import com.google.gson.Gson;
import com.google.gson.GsonBuilder;
import com.google.gson.JsonDeserializationContext;
import com.google.gson.JsonDeserializer;
import com.google.gson.JsonElement;
import com.google.gson.JsonObject;
import com.google.gson.JsonParseException;
import com.google.gson.JsonSerializationContext;
import com.google.gson.JsonSerializer;

public class CustomSerializerFeature {
    public static void main(String[] args) {
        GsonBuilder gsonBuilder = new GsonBuilder();
        gsonBuilder.registerTypeAdapter(Student.class, new StudentTypeSerializer());
        gsonBuilder.registerTypeAdapter(Student.class, new StudentTypeDeserializer());
        Gson gson = gsonBuilder.create();
        Student aStudent = new Student();
        aStudent.setName("Sandeep");
        aStudent.setMark(150);
        aStudent.setSubject("Arithmetic");
        String studentJson = gson.toJson(aStudent);
        System.out.println("Custom Serializer : Json String Representation ");
        System.out.println(studentJson);
        Student anotherStudent = gson.fromJson(studentJson, Student.class);
        System.out.println("Custom DeSerializer : Java Object Creation");
        System.out.println("Student Name " + anotherStudent.getName());
        System.out.println("Student Mark " + anotherStudent.getMark());
        System.out.println("Student Subject " + anotherStudent.getSubject());
        System.out.println("is anotherStudent is type of Student " + (anotherStudent instanceof Student));
    }
}

The output of the preceding code is as follows:

Custom Serializer : Json String Representation
{"studentname":"Sandeep","subjecttaken":"Arithmetic","marksecured":150}
Custom DeSerializer : Java Object Creation
Student Name Sandeep
Student Mark 150
Student Subject Arithmetic
is anotherStudent is type of Student true

Summary

This section explained GSON's support for Java objects and how to implement serialization and deserialization in GSON.

Resources for Article:

Further resources on this subject:
- Play Framework: Binding and Validating Objects and Rendering JSON Output [Article]
- Trapping Errors by Using Built-In Objects in JavaScript Testing [Article]
- Class-less Objects in JavaScript [Article]


Scratching the Tip of the Iceberg

Packt
03 Sep 2013
15 min read
Boost is a huge collection of libraries. Some of those libraries are small and meant for everyday use, and others require a separate article to describe all of their features. This article is devoted to some of those big libraries and gives you some basics to start with.

The first two recipes will explain the usage of Boost.Graph. It is a big library with an insane number of algorithms. We'll see some basics and probably the most important part of it: visualization of graphs. We'll also see a very useful recipe for generating true random numbers. This is a very important requirement for writing secure cryptography systems. Some C++ standard libraries lack math functions. We'll see how that can be fixed using Boost, though the format of this article leaves no space to describe all of the functions. Writing test cases is described in the Writing test cases and Combining multiple test cases in one test module recipes. This is important for any production-quality system. The last recipe is about a library that helped me in many courses during my university days. Images can be created and modified using it. I personally used it to visualize different algorithms, hide data in images, sign images, and generate textures. Unfortunately, even this article cannot tell you about all of the Boost libraries. Maybe someday I'll write another book... and then a few more.

Working with graphs

Some tasks require a graphical representation of data. Boost.Graph is a library that was designed to provide a flexible way of constructing and representing graphs in memory. It also contains a lot of algorithms to work with graphs, such as topological sort, breadth first search, depth first search, and Dijkstra shortest paths. Well, let's perform some basic tasks with Boost.Graph!

Getting ready

Only basic knowledge of C++ and templates is required for this recipe.

How to do it...

In this recipe, we'll describe a graph type, create a graph of that type, add some vertexes and edges to the graph, and search for a specific vertex. That should be enough to start using Boost.Graph.
1. We start with describing the graph type:

#include <boost/graph/adjacency_list.hpp>
#include <string>

typedef std::string vertex_t;
typedef boost::adjacency_list<
    boost::vecS
    , boost::vecS
    , boost::bidirectionalS
    , vertex_t
> graph_type;

2. Now we construct it:

graph_type graph;

3. Let's use a non-portable trick that speeds up graph construction:

static const std::size_t vertex_count = 5;
graph.m_vertices.reserve(vertex_count);

4. Now we are ready to add vertexes to the graph:

typedef boost::graph_traits<graph_type>::vertex_descriptor descriptor_t;
descriptor_t cpp = boost::add_vertex(vertex_t("C++"), graph);
descriptor_t stl = boost::add_vertex(vertex_t("STL"), graph);
descriptor_t boost = boost::add_vertex(vertex_t("Boost"), graph);
descriptor_t guru = boost::add_vertex(vertex_t("C++ guru"), graph);
descriptor_t ansic = boost::add_vertex(vertex_t("C"), graph);

5. It is time to connect vertexes with edges:

boost::add_edge(cpp, stl, graph);
boost::add_edge(stl, boost, graph);
boost::add_edge(boost, guru, graph);
boost::add_edge(ansic, guru, graph);

6. We make a function that searches for a vertex:

template <class GraphT>
void find_and_print(const GraphT& g, boost::string_ref name) {

7. Now we will write code that gets iterators to all vertexes:

    typedef typename boost::graph_traits<graph_type>::vertex_iterator vert_it_t;
    vert_it_t it, end;
    boost::tie(it, end) = boost::vertices(g);

8. It's time to run a search for the required vertex:

    typedef boost::graph_traits<graph_type>::vertex_descriptor desc_t;
    for (; it != end; ++it) {
        desc_t desc = *it;
        if (boost::get(boost::vertex_bundle, g)[desc] == name.data()) {
            break;
        }
    }
    assert(it != end);
    std::cout << name << '\n';
} /* find_and_print */

How it works...

In step 1, we are describing what our graph must look like and upon what types it must be based. boost::adjacency_list is a class that represents graphs as a two-dimensional structure, where the first dimension contains vertexes and the second dimension contains edges for that vertex. boost::adjacency_list must be the default choice for representing a graph; it suits most cases. The first template parameter of boost::adjacency_list describes the structure used to represent the edge list for each of the vertexes; the second one describes the structure used to store the vertexes. We can choose different STL containers for those structures using specific selectors, as listed in the following table:

Selector            STL container
boost::vecS         std::vector
boost::listS        std::list
boost::slistS       std::slist
boost::setS         std::set
boost::multisetS    std::multiset
boost::hash_setS    std::hash_set

The third template parameter is used to make an undirected, directed, or bidirectional graph. Use the boost::undirectedS, boost::directedS, and boost::bidirectionalS selectors respectively. The fifth template parameter describes the datatype that will be used as the vertex. In our example, we chose std::string. We can also support a datatype for edges and provide it as a template parameter.

Steps 2 and 3 are trivial, but at step 3 you see a non-portable way to speed up graph construction. In our example, we use std::vector as the container for storing vertexes, so we can force it to reserve memory for the required number of vertexes. This leads to fewer memory allocations/deallocations and copy operations during insertion of vertexes into the graph. This step is non-portable because it is highly dependent on the current implementation of boost::adjacency_list and on the chosen container type for storing vertexes.
At step 4, we see how vertexes can be added to the graph. Note how boost::graph_traits<graph_type> has been used. The boost::graph_traits class is used to get types that are specific to a graph type. We'll see its usage and the description of some graph-specific types later in this article. Step 5 shows what we need to do to connect vertexes with edges. If we had provided a datatype for the edges, adding an edge would look as follows:

boost::add_edge(ansic, guru, edge_t(initialization_parameters), graph)

Note that at step 6 the graph type is a template parameter. This is recommended to achieve better code reusability and make this function work with other graph types.

At step 7, we see how to iterate over all of the vertexes of the graph. The type of the vertex iterator is received from boost::graph_traits. The function boost::tie is a part of Boost.Tuple and is used for getting values from tuples into variables. So calling boost::tie(it, end) = boost::vertices(g) will put the begin iterator into the it variable and the end iterator into the end variable.

It may come as a surprise to you, but dereferencing a vertex iterator does not return vertex data. Instead, it returns the vertex descriptor desc, which can be used in boost::get(boost::vertex_bundle, g)[desc] to get the vertex data, just as we have done in step 8. The vertex descriptor type is used in many of the Boost.Graph functions; we saw its use in the edge construction function in step 5.

As already mentioned, the Boost.Graph library contains the implementation of many algorithms. You will find many search policies implemented, but we won't discuss them in this article. We will limit this recipe to only the basics of the graph library.

There's more...

The Boost.Graph library is not a part of C++11 and it won't be a part of C++1y. The current implementation does not support C++11 features. If we are using vertexes that are heavy to copy, we may gain speed using the following trick:

vertex_descriptor desc = boost::add_vertex(graph);
boost::get(boost::vertex_bundle, g_)[desc] = std::move(vertex_data);

It avoids the copy construction of boost::add_vertex(vertex_data, graph) and uses default construction with move assignment instead. The efficiency of Boost.Graph depends on multiple factors, such as the underlying container types, the graph representation, and the edge and vertex datatypes.

Visualizing graphs

Making programs that manipulate graphs was never easy because of issues with visualization. When we work with STL containers such as std::map and std::vector, we can always print the container's contents and see what is going on inside. But when we work with complex graphs, it is hard to visualize the content in a clear way: too many vertexes and too many edges.

In this recipe, we'll take a look at the visualization of Boost.Graph using the Graphviz tool.

Getting ready

To visualize graphs, you will need the Graphviz visualization tool. Knowledge of the preceding recipe is also required.

How to do it...

Visualization is done in two phases. In the first phase, we make our program output the graph's description in a text format; in the second phase, we import the output from the first step into some visualization tool. The numbered steps in this recipe are all about the first phase.
1. Let's write the std::ostream operator for graph_type as done in the preceding recipe:

#include <boost/graph/graphviz.hpp>

std::ostream& operator<<(std::ostream& out, const graph_type& g) {
    detail::vertex_writer<graph_type> vw(g);
    boost::write_graphviz(out, g, vw);
    return out;
}

2. The detail::vertex_writer structure, used in the preceding step, must be defined as follows:

namespace detail {

template <class GraphT>
class vertex_writer {
    const GraphT& g_;

public:
    explicit vertex_writer(const GraphT& g)
        : g_(g)
    {}

    template <class VertexDescriptorT>
    void operator()(std::ostream& out, const VertexDescriptorT& d) const {
        out << " [label=\"" << boost::get(boost::vertex_bundle, g_)[d] << "\"]";
    }
}; // vertex_writer

} // namespace detail

That's all. Now, if we visualize the graph from the previous recipe using the std::cout << graph; command, the output can be used to create graphical pictures using the dot command-line utility:

$ dot -Tpng -o dot.png
digraph G {
0 [label="C++"];
1 [label="STL"];
2 [label="Boost"];
3 [label="C++ guru"];
4 [label="C"];
0->1 ;
1->2 ;
2->3 ;
4->3 ;
}

The output of the preceding command is depicted in the following figure. We can also use the Gvedit or XDot programs for visualization if the command line frightens you.

How it works...

The Boost.Graph library contains a function to output graphs in Graphviz (DOT) format. If we write boost::write_graphviz(out, g) with two parameters in step 1, the function will output a graph picture with vertexes numbered from 0. That's not very useful, so we provide an instance of the vertex_writer class that outputs vertex names. As we can see in step 2, the format of the output must be DOT, which is understood by the Graphviz tool. You may need to read the Graphviz documentation for more info about the DOT format. If you wish to add some data to the edges during visualization, we need to provide an instance of an edge visualizer as the fourth parameter to boost::write_graphviz.

There's more...

C++11 does not contain Boost.Graph or tools for graph visualization. But you do not need to worry—there are a lot of other graph formats and visualization tools, and Boost.Graph can work with plenty of them.

Using a true random number generator

I know of many examples of commercial products that use incorrect methods for getting random numbers. It's a shame that some companies still use rand() in cryptography and banking software. Let's see how to get a fully random uniform distribution using Boost.Random that is suitable for banking software.

Getting ready

Basic knowledge of C++ is required for this recipe. Knowledge of different types of distributions will also be helpful. The code in this recipe requires linking against the boost_random library.

How to do it...

To create a true random number, we need some help from the operating system or processor. This is how it can be done using Boost:

1. We'll need to include the following headers:

#include <boost/config.hpp>
#include <boost/random/random_device.hpp>
#include <boost/random/uniform_int_distribution.hpp>

2. Advanced random number providers have different names under different platforms:

static const std::string provider =
#ifdef BOOST_WINDOWS
    "Microsoft Strong Cryptographic Provider"
#else
    "/dev/urandom"
#endif
;

3. Now we are ready to initialize the generator with Boost.Random:

boost::random_device device(provider);

4. Let's get a uniform distribution that returns a value between 1000 and 65535:

boost::random::uniform_int_distribution<unsigned short> random(1000);

That's it.
Now we can get true random numbers using the random(device) call.

How it works...

Why does the rand() function not suit banking? Because it generates pseudo-random numbers, which means that a hacker could predict the next generated number. This is an issue with all pseudo-random number algorithms. Some algorithms are easier to predict and some harder, but it's still possible. That's why we are using boost::random_device in this example (see step 3). That device gathers information about random events from all around the operating system to construct an unpredictable hardware-generated number. Examples of such events are delays between pressed keys, delays between some of the hardware interruptions, and the internal CPU random number generator.

Operating systems may have more than one such type of random number generator. In our example for POSIX systems, we used /dev/urandom instead of the more secure /dev/random, because the latter remains in a blocked state until enough random events have been captured by the OS. Waiting for entropy could take seconds, which is usually unsuitable for applications. Use /dev/random to create long-lifetime GPG/SSL/SSH keys.

Now that we are done with generators, it's time to move to step 4 and talk about distribution classes. While the generator just generates numbers (usually uniformly distributed), the distribution class maps one distribution to another. In step 4, we made a uniform distribution that returns a random number of unsigned short type. The parameter 1000 means that the distribution must return numbers greater than or equal to 1000. We can also provide the maximum number as a second parameter, which by default is equal to the maximum value storable in the return type.

There's more...

Boost.Random has a huge number of true/pseudo-random generators and distributions for different needs. Avoid copying distributions and generators; this could turn out to be an expensive operation.

C++11 has support for different distribution classes and generators. You will find all of the classes from this example in the <random> header in the std:: namespace. The Boost.Random libraries do not use C++11 features, and they are not really required for that library either. Should you use the Boost implementation or the STL? Boost provides better portability across systems; however, some STL implementations may have assembly-optimized implementations and might provide some useful extensions.

Using portable math functions

Some projects require specific trigonometric functions, a library for numerically solving ordinary differential equations, and working with distributions and constants. All of those parts of Boost.Math would be hard to fit into even a separate book; a single recipe definitely won't be enough. So let's focus on very basic everyday-use functions for working with float types. We'll write a portable function that checks an input value for infinity and not-a-number (NaN) values and changes the sign if the value is negative.

Getting ready

Basic knowledge of C++ is required for this recipe. Those who know the C99 standard will find a lot in common in this recipe.

How to do it...
Perform the following steps to check the input value for infinity and NaN values and change the sign if the value is negative:

1. We'll need the following headers:

#include <boost/math/special_functions.hpp>
#include <cassert>

2. Asserting for infinity and NaN can be done like this:

template <class T>
void check_float_inputs(T value) {
    assert(!boost::math::isinf(value));
    assert(!boost::math::isnan(value));

3. Use the following code to change the sign:

    if (boost::math::signbit(value)) {
        value = boost::math::changesign(value);
    }
    // ...
} // check_float_inputs

That's it! Now we can check that check_float_inputs(std::sqrt(-1.0)) and check_float_inputs(std::numeric_limits<double>::max() * 2.0) will cause asserts.

How it works...

Real types have specific values that cannot be checked using equality operators. For example, if the variable v contains NaN, assert(v != v) may or may not pass depending on the compiler. For such cases, Boost.Math provides functions that can reliably check for infinity and NaN values.

Step 3 contains the boost::math::signbit function, which requires clarification. This function returns the sign bit, which is 1 when the number is negative and 0 when the number is positive. In other words, it returns true if the value is negative.

Looking at step 3, some readers might ask, "Why can't we just multiply by -1 instead of calling boost::math::changesign?" We can. But multiplication may work slower than boost::math::changesign and won't work for special values. For example, if your code can work with nan, the code in step 3 will be able to change the sign of -nan and write nan to the variable.

The Boost.Math library maintainers recommend wrapping the math functions from this example in round parentheses to avoid collisions with C macros. It is better to write (boost::math::isinf)(value) instead of boost::math::isinf(value).

There's more...

C99 contains all of the functions described in this recipe. Why do we need them in Boost? Well, some compiler vendors think that programmers do not need them, so you won't find them in one very popular compiler. Another reason is that the Boost.Math functions can be used for classes that behave like numbers. Boost.Math is a very fast, portable, reliable library.


Cross-browser-distributed testing

Packt
02 Sep 2013
3 min read
Getting ready

In contrast to server-side software, JavaScript applications are executed on the client side and therefore depend on the user's browser. Normally, a project specification includes the list of browsers and platforms that the application must support. The longer the list, the harder cross-browser compatibility testing becomes. For example, jQuery supports 13 browsers on different platforms. The project is fully tested in every declared environment with every single commit. That is possible thanks to the distributed testing tool TestSwarm (swarm.jquery.org). You may also hear of other tools such as JsTestDriver (code.google.com/p/js-test-driver) or Karma (karma-runner.github.io). We will take Bunyip (https://github.com/ryanseddon/bunyip), as it has swiftly been gaining popularity recently.

How does it work? You launch the tool for a test runner HTML page and it provides the connect endpoint (IP:port) and launches a locally installed browser, if configured. As soon as you fire up the address in a browser, the client is captured by Bunyip and the connection is established. With your confirmation, Bunyip runs the tests in every connected browser to collect and report results. See the following figure.

Bunyip is built on top of the Yeti tool (www.yeti.cx), which works with YUI Test, QUnit, Mocha, Jasmine, or DOH. Bunyip can be used in conjunction with BrowserStack. So, with a paid account at BrowserStack (www.browserstack.com), you can make Bunyip run your tests on hundreds of remotely hosted browsers.

To install the tool, type the following in the console:

npm install -g bunyip

Here, we use the Node.js package manager, which is part of Node.js. If you don't have Node.js installed, you will find the installation instructions on the following page: https://github.com/joyent/node/wiki/Installing-Node.js-via-package-manager

Now we are ready to start using Bunyip.

How to do it

1. Add the following configuration option to the QUnit test suite (test-suite.html) to prevent it from auto-starting before the plugin callback is set up:

if (QUnit && QUnit.config) {
    QUnit.config.autostart = false;
}

2. Launch a Yeti hub on port 9000 (the default configuration) using test-suite.html:

bunyip -f test-suite.html

3. Copy the connector address (for example, http://127.0.0.1:9000) from the output and fire it up in diverse browsers. You can use Oracle VirtualBox (www.virtualbox.org) to launch browsers in virtual machines set up for every platform you need.

4. Examine the results shown in the following screenshot.

Summary

In this article, we learned about cross-browser distributed testing and the automation of client-side cross-platform/browser testing.

Resources for Article:

Further resources on this subject:
- Building a Custom Version of jQuery [Article]
- Testing your App [Article]
- Logging and Reports [Article]

Conceptualizing IT Service Management

Packt
30 Aug 2013
6 min read
(For more resources related to this topic, see here.) Understanding IT Service Management (ITSM) The success of ITSM lies in putting the customer first. ITSM suggests designing all processes to provide value to customers by facilitating the outcomes they want, without the ownership of specific costs and risks. This quality service is provided through a set of the organization's own resources and capabilities. The capabilities of an IT service organization generally lie with its people, process, or technology. While people and technology could be found in the market, the organizational processes need to be defined, developed, and often customized within the organization. The processes mature with the organization, and hence need to be given extra focus. Release Management, Incident Management, and so on, are some of the commonly heard ITSM processes. It's easy to confuse these with functions, which as per ITIL, has a different meaning associated with it. Many of us do not associate different meanings for many similar terms. Here are some examples: Incident Management versus Problem Management Change Management versus Release Management Service Level Agreement (SLA) versus Operational Level Agreement (OLA) Service Portfolio versus Service Catalog This book will strive to bring out the fine differences between such terms, as and when we formally introduce them. This should make the concepts clear while avoiding any confusion. So, let us first see the difference between a process and a function. Differentiating between process and function A process is simply a structured set of activity designed to accomplish a specific objective. It takes one or more defined inputs and turns them into defined outputs. Characteristics A process is measurable, aimed at specific results, delivers primary results to a customer, and responds to specific triggers. Whereas, a function is a team or a group of people and the tools they use to carry out the processes. Hence, while Release Management, Incident Management, and so on are processes. The IT Service Desk is a function, which might be responsible for carrying out these processes. Luckily, ServiceDesk Plus provides features for managing both processes and functions. Differentiating between Service Level Agreement (SLA) and Operational Level Agreement (OLA) Service Level Agreement, or SLA, is a widely used term and often has some misconceptions attached to it. Contrary to popular belief, SLA is not necessarily a legal contract, but should be written in simple language, which can be understood by all parties without any ambiguity. An SLA is simply an agreement between a service provider and the customer(s) and documents the service targets and responsibilities of all parties. There are three types of SLAs defined in ITIL: Service Based SLA: All customers get the same deal for a specific service Customer Based SLA: A customer gets the same deal for all services Multilevel SLA: This involves a combination of corporate level, service level, and customer level SLAs. An Operational Level Agreement, or OLA, on the other hand, is the agreement between the service provider and another part of the same organization. An OLA is generally a prerequisite to help meet the SLA. There might be legal contracts between the service provider and some external suppliers as well, to help meet the SLA(s). These third-party legal contracts are called Underpinning Contracts. 
As must be evident, management and monitoring of these agreements is of utmost importance for the service organization. Here is how to create SLA records easily and track them in ServiceDesk Plus:

1. Agree the SLA with the customers.
2. Go to the Admin tab.
3. Click on Service Level Agreements in the Helpdesk block.
4. All SLA-based mail escalations are enabled by default. These can be disabled by clicking on the Disable Escalation button.
5. Four SLAs are set by default—High SLA, Medium SLA, Normal SLA, and Low SLA. More can be added, if needed.
6. Click on any SLA Name to view/edit its details.
7. SLAs for sites, if any, can be configured by the site admin from the Service Level Agreement for combo box.
8. The SLA Rules block, below the SLA details, is used for setting the rules and criteria for the SLA.

Once agreed with the customers, configuring SLAs in the tool is pretty easy and straightforward. Escalations are taken care of automatically, as per the defined rules. To monitor the SLAs for a continuous focus on customer satisfaction, several Flash Reports are available under the Reports tab, for use on the fly.

Differentiating between Service Portfolio and Service Catalog

This is another example of terms often used interchangeably. However, ITIL clarifies that the Service Catalog lists only live IT services, whereas the Service Portfolio is a bigger set that also includes services in the pipeline and retired services.

The Service Catalog contains information about two types of IT services: customer-facing services (referred to as the Business Service Category) and supporting services, with the complexities hidden from the business (referred to as the IT Service Category).

ServiceDesk Plus plays a vital role in managing the ways in which these services are exposed to users. The software provides a simple and effective interface to browse through the services and monitor their status. Users can also request these services from within the module. The Service Catalog can also be accessed from the Admin tab, by clicking on Service Catalog under the Helpdesk block. The page lists the configured service categories and can be used to Add Service Category, Manage the service items, and Add Service under each category.

Deleting a Service Category

Deletion of an existing Service Category should be done with care. Here are the steps:

1. Select Service Categories from the Manage dropdown. A window with the Service Categories List will open.
2. Select the check box next to the Service Category to be deleted and then press the Delete button on the interface.
3. A confirmation box will appear and, on confirmation, the Service Category will be processed for deletion.

If the concerned Service Category is in use by a module, then it will be grayed out and the category will be unavailable for further usage. To bring it back into usage, click on the edit icon next to the category name and uncheck the box for Service not for further usage in the new window.

The following two options under the Manage dropdown provide additional features for the customization of service request forms:

- Additional Fields: This can be used to capture additional details about the service apart from the predefined fields
- Service Level Agreements: This can be used to configure Service Based SLAs

Summary

We now understand the ITSM concepts, the fine differences between some of the terms, and also why software like ServiceDesk Plus is modeled after the ITIL framework. We've also seen how SLAs and the Service Catalog can be configured and tracked using ServiceDesk Plus.
Resources for Article: Further resources on this subject: Introduction to vtiger CRM [Article] Overview of Microsoft Dynamics CRM 2011 [Article] Customizing PrestaShop Theme Part 1 [Article]

Defining alerts

Packt
30 Aug 2013
3 min read
(For more resources related to this topic, see here.) Citrix EdgeSight alert is a powerful rules- and actions-based system that instructs the EdgeSight agents to send an alert in real time when a predefined situation has occurred on a monitored object. Alerts are defined by rules. The action can be configured to send either an e-mail alert or SNMP trap. The generated alerts are also listed and organized within the EdgeSight web console. After an alert rule has been created, it should be mapped to a department. How to do it... To create an alert, navigate to Configure | Company Configuration | Alerts | Rules | New Alert Rule. We will create an alert rule based on an application, so select the Application Alerts radio button and click on Next. Select Application Performance as the alert type and click on Next. Give the alert rule a name, the name of the process we want to monitor, and the CPU time in percent. Click on Next. Select departments to assign this alert rule to and click on Next. Select the department you wish to edit alert actions in and click on Next. We now need to assign an action to this alert rule; we will create a new action. So select the Create New Alert Action radio button and click on Next. We will send an e-mail notification as the alert action so select Send an email notification radio button and click on Next. Enter a name, subject, and one or more recipient e-mail addresses for this e-mail action. You can click the Test Action button to test whether EdgeSight was able to successfully queue the message or not. Click on Finish. How it works... Creating too many real-time alerts can affect the XenApp server performance as each rule that is created requires more work to be performed by the agent. We should only create alerts for those critical situations that require immediate action. If the situation is not critical, the delivery of alerts based on the normal upload cycle will probably be sufficient. By default, the alert data and other statistics are uploaded to the server daily. There's more... When a new alert rule is created or any existing rule is modified, this change is applied to all the devices in the department when those devices next upload data to the EdgeSight Server; alternatively, you can manually upload the alert rule data by clicking on Run Remotely. Administrators can also force certain agent devices to perform a configuration check within the EdgeSight web console by navigating to Configure | Company Configuration | Agents and then selecting the device from the device picker. To suppress an alert, navigate to Monitor | Alert List, click on the down arrow , and then select Suppress Alert. To clear an alert navigate to Configure | Company Configuration | Alerts | Suppressions. This is per user basis and other EdgeSight administrators will still see those alerts suppressed by you. Summary In this article will learned EdgeSight alerts and saw how to create alerts and define action when the defined alert condition is met. Resources for Article : Further resources on this subject: Publishing applications [Article] Designing a XenApp 6 Farm [Article] The XenDesktop architecture [Article]

Exploring the Top New Features of the CLR

Packt
30 Aug 2013
10 min read
(For more resources related to this topic, see here.) One of its most important characteristics is that it is an in-place substitution of the .NET 4.0 and only runs on Windows Vista SP2 or later systems. .NET 4.5 breathes asynchronous features and makes writing async code even easier. It also provides us with the Task Parallel Library (TPL) Dataflow Library to help us create parallel and concurrent applications. Another very important addition is the portable libraries, which allow us to create managed assemblies that we can refer through different target applications and platforms, such as Windows 8, Windows Phone, Silverlight, and Xbox. We couldn't avoid mentioning Managed Extensibility Framework (MEF), which now has support for generic types, a convention-based programming model, and multiple scopes. Of course, this all comes together with a brand-new tooling, Visual Studio 2012, which you can find at http://msdn.microsoft.com/en-us/vstudio. Just be careful if you have projects in .NET 4.0 since it is an in-place install. For this article I'd like to give a special thanks to Layla Driscoll from the Microsoft .NET team who helped me summarize the topics, focus on what's essential, and showcase it to you, dear reader, in the most efficient way possible. Thanks, Layla. There are some features that we will not be able to explore through this article as they are just there and are part of the CLR but are worth explaining for better understanding: Support for arrays larger than 2 GB on 64-bit platforms, which can be enabled by an option in the app config file. Improved performance on the server's background garbage collection, which must be enabled in the <gcServer> element in the runtime configuration schema. Multicore JIT: Background JIT (Just In Time) compilation on multicore CPUs to improve app performance. This basically creates profiles and compiles methods that are likely to be executed on a separate thread. Improved performance for retrieving resources. The culture-sensitive string comparison (sorting, casing, normalization, and so on) is delegated to the operating system when running on Windows 8, which implements Unicode 6.0. On other platforms, the .NET framework will behave as in the previous versions, including its own string comparison data implementing Unicode 5.0. Next we will explore, in practice, some of these features to get a solid grasp on what .NET 4.5 has to offer and, believe me, we will have our hands full! Creating a portable library Most of us have often struggled and hacked our code to implement an assembly that we could use in different .NET target platforms. Portable libraries are here to help us to do exactly this. Now there is an easy way to develop a portable assembly that works without modification in .NET Framework, Windows Store apps style, Silverlight, Windows Phone, and XBOX 360 applications. The trick is that the Portable Class Library project supports a subset of assemblies from these platforms, providing us a Visual Studio template. This article will show you how to implement a basic application and help you get familiar with Visual Studio 2012. Getting ready In order to use this section you should have Visual Studio 2012 installed. Note that you will need a Visual Studio 2012 SKU higher than Visual Studio Express for it to fully support portable library projects. How to do it... Here we will create a portable library and see how it works: First, open Visual Studio 2012 and create a new project. 
We will select the Portable Class Library template from the Visual C# category. Now open the Properties dialog box of our newly created portable application and, in the library, we will see a new section named Target frameworks. Note that, for this type of project, the dialog box will open as soon as the project is created, so opening it will only be necessary when modifying it afterwards.

If we click on the Change button, we will see all the multitargeting possibilities for our class. We will see that we can target different versions of a framework. There is also a link to install additional frameworks. The one that we could install right now is XNA, but we will click on Cancel and let the dialog box be as it is.

Next, we will click on the show all files icon at the top of the Solution Explorer window (the icon with two papers and some dots behind them), right-click on the References folder, and click on Add Reference. We will observe on doing so that we are left with a .NET subset of assemblies that are compatible with the chosen target frameworks.

We will add the following lines to test the portable assembly:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace pcl_myFirstPcl
{
    public static class MyPortableClass
    {
        public static string GetSomething()
        {
            return "I am a portable class library";
        }
    }
}

Build the project. Next, to try this portable assembly we could add, for example, a Silverlight project to the solution, together with an ASP.NET Web application project to wrap the Silverlight. We just need to add a reference to the portable library project and add a button to the MainPage.xaml page that calls the portable library static method we created. The code behind it should look as follows. Remember to add a using reference to our portable library namespace.

using System.Windows;
using System.Windows.Controls;
using System.Windows.Documents;
using System.Windows.Input;
using System.Windows.Media;
using System.Windows.Media.Animation;
using System.Windows.Shapes;
using pcl_myFirstPcl;

namespace SilverlightApplication_testPCL
{
    public partial class MainPage : UserControl
    {
        public MainPage()
        {
            InitializeComponent();
        }

        private void Button_Click_1(object sender, RoutedEventArgs e)
        {
            String something = MyPortableClass.GetSomething();
            MessageBox.Show("Look! - I got this string from my portable class library: " + something);
        }
    }
}

We can execute the code and check that it works. In addition, we could add other types of projects, reference the Portable Class Library, and ensure that it works properly.

How it works...

We created a portable library from the Portable Class Library project template and selected the target frameworks. We saw the references; note that the project reinforces the visibility of the assemblies that would break compatibility with the targeted platforms, helping us to avoid mistakes. Next we added some code, plus a target application that referenced the portable class, and used it.

There's more...

We should be aware that when deploying a .NET app that references a Portable Class Library assembly, we must specify its dependency on the correct version of the .NET Framework, ensuring that the required version is installed.

A very common and interesting usage of the Portable Class Library would be to implement MVVM. For example, we could put the View Model and Model classes inside a portable library and share it with Windows Store apps, Silverlight, and Windows Phone applications.
The architecture is described in the following diagram, which has been taken from MSDN (http://msdn.microsoft.com/en-us/library/hh563947%28v=vs.110%29.aspx):

It is really interesting that the list of target frameworks is not limited and we even have a link to install additional frameworks, so I guess that the number of target frameworks will eventually grow.

Controlling the timeout in regular expressions

.NET 4.5 gives us improved control over the resolution of regular expressions so we can react when they don't resolve on time. This is extremely useful if we don't control the regular expressions/patterns, such as the ones provided by the users. A badly formed pattern can have bad performance due to excessive backtracking, and this new feature is really a lifesaver.

How to do it...

Next we are going to control the timeout of a regular expression, where we will react if the operation takes more than 1 millisecond:

1. Create a new Visual Studio project of type Console Application, named caRegexTimeout.
2. Open the Program.cs file and add a using clause for regular expressions:

using System.Text.RegularExpressions;

3. Add the following method and call it from the Main function:

private static void ExecuteRegexExpression()
{
    bool RegExIsMatch = false;
    string testString = "One Tile to rule them all, One Tile to find them… ";
    string RegExPattern = @"([a-z ]+)*!";
    TimeSpan tsRegexTimeout = TimeSpan.FromMilliseconds(1);

    try
    {
        RegExIsMatch = Regex.IsMatch(testString, RegExPattern, RegexOptions.None, tsRegexTimeout);
    }
    catch (RegexMatchTimeoutException ex)
    {
        Console.WriteLine("Timeout!!");
        Console.WriteLine("- Timeout specified: " + ex.MatchTimeout);
    }
    catch (ArgumentOutOfRangeException ex)
    {
        Console.WriteLine("ArgumentOutOfRangeException!!");
        Console.WriteLine(ex.Message);
    }

    Console.WriteLine("Finished successfully: " + RegExIsMatch.ToString());
    Console.ReadLine();
}

4. If we execute it, we will see that it doesn't finish successfully, showing us some details in the console window.
5. Next, we will change testString and RegExPattern to:

String testString = "jose@brainsiders.com";
String RegExPattern = @"^([\w-.]+)@([\w-.]+)\.[a-zA-Z]{2,4}$";

6. If we run it, we will now see that it runs and finishes successfully.

How it works...

The Regex.IsMatch() method now accepts a matchTimeout parameter of type TimeSpan, indicating the maximum time that we allow for the matching operation. If the execution time exceeds this amount, a RegexMatchTimeoutException is thrown. In our code, we have captured it with a try-catch statement to provide a custom message and, of course, to react upon a badly formed regex pattern taking too much time. We have tested it with an expression that takes some more time to validate and we got the timeout. When we changed the expression to a good one with a better execution time, the timeout was not reached. Additionally, we also watched out for the ArgumentOutOfRangeException, which is thrown when the TimeSpan is zero, or negative, or greater than 24 days.

There's more...

We could also set a global matchTimeout for the application through the "REGEX_DEFAULT_MATCH_TIMEOUT" property with the AppDomain.SetData method:

AppDomain.CurrentDomain.SetData("REGEX_DEFAULT_MATCH_TIMEOUT", TimeSpan.FromMilliseconds(200));

Anyway, if we specify the matchTimeout parameter, we will override the global value.

Defining the culture for an application domain

With .NET 4.5, we have in our hands a way of specifying the default culture for all of our application threads in a quick and efficient way.

How to do it...
We will now define the default culture for our application domain as follows:

1. Create a new Visual Studio project of type Console Application named caCultureAppDomain.
2. Open the Program.cs file and add the using clause for globalization:

using System.Globalization;

3. Next, add the following methods:

static void DefineAppDomainCulture()
{
    String CultureString = "en-US";
    DisplayCulture();

    CultureInfo.DefaultThreadCurrentCulture = CultureInfo.CreateSpecificCulture(CultureString);
    CultureInfo.DefaultThreadCurrentUICulture = CultureInfo.CreateSpecificCulture(CultureString);

    DisplayCulture();
    Console.ReadLine();
}

static void DisplayCulture()
{
    Console.WriteLine("App Domain........: {0}", AppDomain.CurrentDomain.Id);
    Console.WriteLine("Default Culture...: {0}", CultureInfo.DefaultThreadCurrentCulture);
    Console.WriteLine("Default UI Culture: {0}", CultureInfo.DefaultThreadCurrentUICulture);
}

4. Then add a call to the DefineAppDomainCulture() method.
5. If we execute it, we will observe that the initial default cultures are null and we specify them to become the default for the App Domain.

How it works...

We used the CultureInfo class to specify the culture and the UI culture of the application domain and all its threads. This is easily done through the DefaultThreadCurrentCulture and DefaultThreadCurrentUICulture properties.

There's more...

We must be aware that these properties affect only the current application domain, and if it changes we should control them.

Working with Zend Framework 2.0

Packt
29 Aug 2013
5 min read
(For more resources related to this topic, see here.)

So, what is Zend Framework?

Throughout the years, PHP has become one of the most popular server-side scripting languages on the Internet. This is largely due to its gentle learning curve and ease of use. However, these two qualities have also contributed to many of its shortcomings. With minimal restrictions on how you write code with this language, you can employ any style or structure that you prefer, and thus it becomes much easier to write bad code. But there is a solution: use a framework!

A framework simplifies coding by providing a highly modular file organization with code libraries for the most common scripting tasks in everyday programming. It helps you develop faster by eliminating the monotonous details of coding and makes your code more reusable and easier to maintain.

There are many popular PHP frameworks out there. A number of them have large, open source communities that provide a wide range of support and offer many solutions. This is probably the main reason why most beginner PHP developers get confused while choosing a framework. I will not discuss the pros and cons of other frameworks, but I will demonstrate briefly why Zend Framework is a great choice.

Zend Framework (ZF) is a modern, free, and open source framework that is maintained and developed by a large community of developers and backed by Zend Technologies Ltd, the company founded by the developers of PHP. Currently, Zend Framework is used by a large number of global companies, such as BBC, Discovery, Offers.com, and Cisco. Additionally, many widely used open source projects and recognized frameworks are powered by Zend Framework, such as Magento, Centurion, TomatoCMS, and PHProjekt. And lastly, its continued development is sponsored by highly recognizable firms such as Google and Microsoft. With all this in mind, we know one thing is certain—Zend Framework is here to stay.

Zend Framework has a rich set of components or libraries, and that is why it is also known as a component framework. You will find a library in it for almost anything that you need for your everyday project, from simple form validation to file upload. It gives you the flexibility to select a single component to develop your project or opt for all components, as you may need them. Moreover, with the release of Zend Framework 2, each component is available via Pyrus and Composer. Pyrus is a package management and distribution system, and Composer is a tool for dependency management in PHP that allows you to declare the dependent libraries your project needs and installs them in your project for you. Zend Framework 2 follows a 100 percent object-oriented design principle and makes use of all the new PHP 5.3+ features such as namespaces, late static binding, lambda functions, and closures.

Now, let's get started on a quick-start project to learn the basics of Zend Framework 2, and be well on our way to building our first Zend Framework MVC application.

Installation

ZF2 requires PHP 5.3.3 or higher, so make sure you have the latest version of PHP. We need a Windows-based PC, and we will be using XAMPP (http://www.apachefriends.org/en/xampp.html) for our development setup. I have installed XAMPP on my D: drive, so my web root path for my setup is D:\xampp\htdocs.

Step 1 – downloading Zend Framework

To create a ZF2 project, we will need two things: the framework itself and a skeleton application.

Download both Zend Framework and the skeleton application from http://framework.zend.com/downloads/latest and https://github.com/zendframework/ZendSkeletonApplication, respectively.

Step 2 – unzipping the skeleton application

Now put the skeleton application that you have just downloaded into the web root directory (D:\xampp\htdocs) and unzip it. Name the directory address-book, as we are going to create a very small address book application, or you can name it anything you want your project name to be. When you unzip the skeleton application, it looks similar to the following screenshot:

Step 3 – knowing the directories

Inside the module directory, there is a default module called Application. Inside the vendor directory, there is an empty directory called ZF2. This directory is for the Zend Framework library. Unzip the Zend Framework archive that you have downloaded, and copy the library folder from the unzipped folder to the vendor\ZF2 directory.

Step 4 – welcome to Zend Framework 2

Now, in your browser, type: http://localhost/address-book/public. It should show a screen as shown in the following screenshot. If you see the same screen, it means you have created the project successfully.

And that's it

By this point, you should have a working Zend Framework, and you are free to play around and discover more about it.

Summary

In this article, we learned what Zend Framework is and how to install it on a Windows PC.

Resources for Article: Further resources on this subject: Authentication with Zend_Auth in Zend Framework 1.8 [Article] Building Your First Zend Framework Application [Article] Authorization with Zend_Acl in Zend Framework 1.8 [Article]

Writing Your First Lines of CoffeeScript

Packt
29 Aug 2013
9 min read
(For more resources related to this topic, see here.) Following along with the examples I implore you to open up a console as you read this article and try out the examples for yourself. You don't strictly have to; I'll show you any important output from the example code. However, following along will make you more comfortable with the command-line tools, give you a chance to write some CoffeeScript yourself, and most importantly, will give you an opportunity to experiment. Try changing the examples in small ways to see what happens. If you're confused about a piece of code, playing around and looking at the outcome will often help you understand what's really going on. The easiest way to follow along is to simply open up a CoffeeScript console. Just run this from the command line to get an interactive console: coffee If you'd like to save all your code to return to later, or if you wish to work on something more complicated, you can create files instead and run those. Give your files the .coffee extension , and run them like this: coffee my_sample_code.coffee Seeing the compiled JavaScript The golden rule of CoffeeScript, according to the CoffeeScript documentation, is: It's just JavaScript. This means that it is a language that compiles down to JavaScript in a simple fashion, without any complicated extra moving parts. This also means that it's easy, with a little practice, to understand how the CoffeeScript you are writing will compile into JavaScript. Your JavaScript expertise is still applicable, but you are freed from the tedious parts of the language. You should understand how the generated JavaScript will work, but you do not need to actually write the JavaScript. To this end, we'll spend a fair amount of time, especially in this article, comparing CoffeeScript code to the compiled JavaScript results. It's like peeking behind the wizard's curtain! The new language features won't seem so intimidating once you know how they work, and you'll find you have more trust in CoffeeScript when you can check in on the code it's generating. After a while, you won't even need to check in at all. I'll show you the corresponding JavaScript for most of the examples in this article, but if you write your own code, you may want to examine the output. This is a great way to experiment and learn more about the language! Unfortunately, if you're using the CoffeeScript console to follow along, there isn't a great way to see the compiled output (most of the time, it's nice to have all that out of sight—just not right now!). You can see the compiled JavaScript in several other easy ways, though. The first is to put your code in a file and compile it. The other is to use the Try CoffeeScript tool on http://coffeescript.org/. It brings up an editor right in the browser that updates the output as you type. CoffeeScript basics Let's get started! We'll begin with something simple: x = 1 + 1 You can probably guess what JavaScript this will compile to: var x;x = 1 + 1; Statements One of the very first things you will notice about CoffeeScript is that there are no semicolons. Statements are ended by a new line. The parser usually knows if a statement should be continued on the next line. 
You can explicitly tell it to continue to the next line by using a backslash at the end of the first line:

x = 1 \
+ 1

It's also possible to stretch function calls across multiple lines, as is common in "fluent" JavaScript interfaces:

"foo"
  .concat("barbaz")
  .replace("foobar", "fubar")

You may occasionally wish to place more than one statement on a single line (for purely stylistic purposes). This is the one time when you will use a semicolon in CoffeeScript:

x = 1; y = 2

Both of these situations are fairly rare. The vast majority of the time, you'll find that one statement per line works great. You might feel a pang of loss for your semicolons at first, but give it time. The calluses on your pinky finger will fall off, your eyes will adjust to the lack of clutter, and soon enough you won't remember what good you ever saw in semicolons.

Variables

CoffeeScript variables look a lot like JavaScript variables, with one big difference: no var! CoffeeScript puts all variables in the local scope by default.

x = 1
y = 2
z = x + y

compiles to:

var x, y, z;
x = 1;
y = 2;
z = x + y;

Believe it or not, this is one of my absolute top favorite things about CoffeeScript. It's so easy to accidentally introduce variables to the global scope in JavaScript and create subtle problems for yourself. You never need to worry about that again; from now on, it's handled automatically. Nothing is getting into the global scope unless you want it there.

If you really want to put a variable in the global scope and you're really sure it's a good idea, you can easily do this by attaching it to the top-level object. In the CoffeeScript console, or in Node.js programs, this is the global object:

global.myGlobalVariable = "I'm so worldly!"

In a browser, we use the window object instead:

window.myGlobalVariable = "I'm so worldly!"

Comments

Any line that begins with a # is a comment. Anything after a # in the middle of a line will also be a comment.

# This is a comment.
"Hello" # This is also a comment

Most of the time, CoffeeScripters use only this style, even for multiline comments.

# Most multiline comments simply wrap to the
# next line, each begun with a # and a space.

It is also possible (but rare in the CoffeeScript world) to use a block comment, which begins and ends with ###. The lines in between these characters do not need to begin with a #.

###
This is a block comment. You can get artistic in here.
<(^^)>
###

Regular comments are not included in the compiled JavaScript, but block comments are, delineated by /* */.

Calling functions

Function invocation can look very familiar in CoffeeScript:

console.log("Hello, planet!")

Other than the missing semicolon, that's exactly like JavaScript, right? But function invocation can also look different:

console.log "Hello, planet!"

Whoa! Now we're in unfamiliar ground. This will work exactly the same as the previous example, though. Any time you call a function with arguments, the parentheses are optional. This also works with more than one argument:

Math.pow 2, 3

While you might be a little nervous writing this way at first, I encourage you to try it and give yourself time to become comfortable with it. Idiomatic CoffeeScript style eliminates parentheses whenever it's sensible to do so. What do I mean by "sensible"? Well, imagine you're reading your code for the first time, and ask yourself which style makes it easiest to comprehend. Usually it's most readable without parentheses, but there are some occasions when your code is complex enough that judicious use of parentheses will help.
Use your best judgment, and everything will turn out fine. There is one exception to the optional parentheses rule. If you are invoking a function with no arguments, you must use parentheses:

Date.now()

Why? The reason is simple. CoffeeScript preserves JavaScript's treatment of functions as first-class citizens.

myFunc = Date.now   #=> myFunc holds a function object that hasn't been executed
myDate = Date.now() #=> myDate holds the result of the function's execution

CoffeeScript's syntax is looser, but it must still be unambiguous. When no arguments are present, it's not clear whether you want to access the function object or execute the function. Requiring parentheses makes it clear which one you want, and still allows both kinds of functionality. This is part of CoffeeScript's philosophy of not deviating from the fundamentals of the JavaScript language. If functions were always executed instead of returned, CoffeeScript would no longer act like JavaScript, and it would be hard for you, the seasoned JavaScripter, to know what to expect. This way, once you understand a few simple concepts, you will know exactly what your code is doing.

From this discussion, we can extract a more general principle: parentheses are optional, except when necessary to avoid ambiguity. Here's another situation in which you might encounter ambiguity: nested function calls.

Math.max 2, 3, Math.min 4, 5, 6

Yikes! What's happening there? Well, you can easily clear this up by adding parentheses. You may add parentheses to all the function calls, or you may add just enough to resolve the ambiguity:

# These two calls are equivalent
Math.max(2, 3, Math.min(4, 5, 6))
Math.max 2, 3, Math.min(4, 5, 6)

This makes it clear that you wish min to take 4, 5, and 6 as arguments. If you wished 6 to be an argument to max instead, you would place the parentheses differently.

# These two calls are equivalent
Math.max(2, 3, Math.min(4, 5), 6)
Math.max 2, 3, Math.min(4, 5), 6

Precedence

Actually, the original version I showed you is valid CoffeeScript too! You just need to understand the precedence rules that CoffeeScript uses for functions. Arguments are assigned to functions from the inside out. Another way to think of this is that an argument belongs to the function that it's nearest to. So our original example is equivalent to the first variation we used, in which 4, 5, and 6 are arguments to min:

# These two calls are equivalent
Math.max 2, 3, Math.min 4, 5, 6
Math.max 2, 3, Math.min(4, 5, 6)

The parentheses are only absolutely necessary if our desired behavior doesn't match CoffeeScript's precedence—in this case, if we wanted 6 to be an argument to max. This applies to an unlimited level of nesting:

threeSquared = Math.pow 3, Math.floor Math.min 4, Math.sqrt 5

Of course, at some point the elimination of parentheses turns from the question of if you can to if you should. You are now a master of the intricacies of CoffeeScript function-call parsing, but the other programmers reading your code might not be (and even if they are, they might prefer not to puzzle out what your code is doing). Avoid parentheses in simple cases, and use them judiciously in the more complicated situations.

Using OpenCL

Packt
28 Aug 2013
15 min read
(For more resources related to this topic, see here.) Let's start the journey by looking back into the history of computing and why OpenCL is important from the respect that it aims to unify the software programming model for heterogeneous devices. The goal of OpenCL is to develop a royalty-free standard for cross-platform, parallel programming of modern processors found in personal computers, servers, and handheld/embedded devices. This effort is taken by "The Khronos Group" along with the participation of companies such as Intel, ARM, AMD, NVIDIA, QUALCOMM, Apple, and many others. OpenCL allows the software to be written once and then executed on the devices that support it. In this way it is akin to Java, this has benefits because software development on these devices now has a uniform approach, and OpenCL does this by exposing the hardware via various data structures, and these structures interact with the hardware via Application Programmable Interfaces ( APIs ). Today, OpenCL supports CPUs that includes x86s, ARM and PowerPC and GPUs by AMD, Intel, and NVIDIA. Developers can definitely appreciate the fact that we need to develop software that is cross-platform compatible, since it allows the developers to develop an application on whatever platform they are comfortable with, without mentioning that it provides a coherent model in which we can express our thoughts into a program that can be executed on any device that supports this standard. However, what cross-platform compatibility also means is the fact that heterogeneous environments exists, and for quite some time, developers have to learn and grapple with the issues that arise when writing software for those devices ranging from execution model to memory systems. Another task that commonly arose from developing software on those heterogeneous devices is that developers were expected to express and extract parallelism from them as well. Before OpenCL, we know that various programming languages and their philosophies were invented to handle the aspect of expressing parallelism (for example, Fortran, OpenMP, MPI, VHDL, Verilog, Cilk, Intel TBB, Unified parallel C, Java among others) on the device they executed on. But these tools were designed for the homogeneous environments, even though a developer may think that it's to his/her advantage, since it adds considerable expertise to their resume. Taking a step back and looking at it again reveals that is there is no unified approach to express parallelism in heterogeneous environments. We need not mention the amount of time developers need to be productive in these technologies, since parallel decomposition is normally an involved process as it's largely hardware dependent. To add salt to the wound, many developers only have to deal with homogeneous computing environments, but in the past few years the demand for heterogeneous computing environments grew. The demand for heterogeneous devices grew partially due to the need for high performance and highly reactive systems, and with the "power wall" at play, one possible way to improve more performance was to add specialized processing units in the hope of extracting every ounce of parallelism from them, since that's the only way to reach power efficiency. The primary motivation for this shift to hybrid computing could be traced to the research headed entitled Optimizing power using Transformations by Anantha P. Chandrakasan. 
It brought out a conclusion that basically says that many-core chips (which run at a slightly lower frequency than a contemporary CPU) are actually more power-efficient. The problem with heterogeneous computing without a unified development methodology, for example, OpenCL, is that developers need to grasp several types of ISA and with that the various levels of parallelism and their memory systems are possible. CUDA, the GPGPU computing toolkit, developed by NVIDIA deserves a mention not only because of the remarkable similarity it has with OpenCL, but also because the toolkit has a wide adoption in academia as well as industry. Unfortunately CUDA can only drive NVIDIA's GPUs. The ability to extract parallelism from an environment that's heterogeneous is an important one simply because the computation should be parallel, otherwise it would defeat the entire purpose of OpenCL. Fortunately, major processor companies are part of the consortium led by The Khronos Group and actively realizing the standard through those organizations. Unfortunately the story doesn't end there, but the good thing is that we, developers, realized that a need to understand parallelism and how it works in both homogeneous and heterogeneous environments. OpenCL was designed with the intention to express parallelism in a heterogeneous environment. For a long time, developers have largely ignored the fact that their software needs to take advantage of the multi-core machines available to them and continued to develop their software in a single-threaded environment, but that is changing (as discussed previously). In the many-core world, developers need to grapple with the concept of concurrency, and the advantage of concurrency is that when used effectively, it maximizes the utilization of resources by providing progress to others while some are stalled. When software is executed concurrently with multiple processing elements so that threads can run simultaneously, we have parallel computation. The challenge that the developer has is to discover that concurrency and realize it. And in OpenCL, we focus on two parallel programming models: task parallelism and data parallelism. Task parallelism means that developers can create and manipulate concurrent tasks. When developers are developing a solution for OpenCL, they would need to decompose a problem into different tasks and some of those tasks can be run concurrently, and it is these tasks that get mapped to processing elements ( PEs ) of a parallel environment for execution. On the other side of the story, there are tasks that cannot be run concurrently and even possibly interdependent. An additional complexity is also the fact that data can be shared between tasks. When attempting to realize data parallelism, the developer needs to readjust the way they think about data and how they can be read and updated concurrently. A common problem found in parallel computation would be to compute the sum of all the elements given in an arbitrary array of values, while storing the intermediary summed value and one possible way to do this is illustrated in the following diagram and the operator being applied there, that is, is any binary associative operator. Conceptually, the developer could use a task to perform the addition of two elements of that input to derive the summed value. Whether the developer chooses to embody task/data parallelism is dependent on the problem, and an example where task parallelism would make sense will be by traversing a graph. 
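To make the array-summation example above more concrete before any OpenCL enters the picture, here is a small sequential C++ sketch (not taken from the book; the function name pairwise_reduce is invented for illustration) of the tree-style pairwise reduction: on every pass, neighbouring values are combined with a binary associative operator, and the number of partial results halves until a single value remains. In a parallel implementation, each combination within a pass is an independent task that could be mapped to its own work item.

#include <functional>
#include <iostream>
#include <vector>

// Tree-style reduction: every pass combines neighbouring values in pairs,
// halving the number of partial results until a single value remains.
// op must be a binary associative operator (here, addition).
double pairwise_reduce(std::vector<double> data,
                       const std::function<double(double, double)>& op) {
    if (data.empty()) return 0.0;  // convention chosen for an empty input
    while (data.size() > 1) {
        std::vector<double> next;
        next.reserve((data.size() + 1) / 2);
        for (std::size_t i = 0; i + 1 < data.size(); i += 2) {
            // In a parallel version, each of these combinations is an
            // independent task that could run on its own work item.
            next.push_back(op(data[i], data[i + 1]));
        }
        if (data.size() % 2 == 1) {
            next.push_back(data.back());  // odd element is carried forward
        }
        data.swap(next);
    }
    return data.front();
}

int main() {
    const std::vector<double> values = {1, 2, 3, 4, 5, 6, 7, 8};
    std::cout << pairwise_reduce(values, std::plus<double>()) << '\n';  // 36
}

Because the operator is associative, the pairings within a pass can be evaluated in any order, which is exactly what makes this pattern a good fit for data-parallel hardware.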
And regardless of which model the developer is more inclined with, they come with their own sets of problems when you start to map the program to the hardware via OpenCL. And before the advent of OpenCL, the developer needs to develop a module that will execute on the desired device and communication, and I/O with the driver program. An example example of this would be a graphics rendering program where the CPU initializes the data and sets everything up, before offloading the rendering to the GPU. OpenCL was designed to take advantage of all devices detected so that resource utilization is maximized, and hence in this respect it differs from the "traditional" way of software development. Now that we have established a good understanding of OpenCL, we should spend some time understanding how a developer can learn it. And not to fret, because every project you embark with, OpenCL will need you to understand the following: Discover the makeup of the heterogeneous system you are developing for Understand the properties of those devices by probing it Start the parallel program decomposition using either or all of task parallelism or data parallelism, by expressing them into instructions also known as kernels that will run on the platform Set up data structures for the computation Manipulate memory objects for the computation Execute the kernels in the order that's desired on the proper device Collate the results and verify for correctness Next, we need to solidify the preceding points by taking a deeper look into the various components of OpenCL. The following components collectively make up the OpenCL architecture: Platform Model: A platform is actually a host that is connected to one or more OpenCL devices. Each device comprises possibly multiple compute units ( CUs ) which can be decomposed into one or possibly multiple processing elements, and it is on the processing elements where computation will run. Execution Model: Execution of an OpenCL program is such that the host program would execute on the host, and it is the host program which sends kernels to execute on one or more OpenCL devices on that platform. When a kernel is submitted for execution, an index space is defined such that a work item is instantiated to execute each point in that space. A work item would be identified by its global ID and it executes the same code as expressed in the kernel. Work items are grouped into work groups and each work group is given an ID commonly known as its work group ID, and it is the work group's work items that get executed concurrently on the PEs of a single CU. That index space we mentioned earlier is known as NDRange describing an N-dimensional space, where N can range from one to three. Each work item has a global ID and a local ID when grouped into work groups, that is distinct from the other and is derived from NDRange. The same can be said about work group IDs. Let's use a simple example to illustrate how they work. Given two arrays, A and B, of 1024 elements each, we would like to perform the computation of vector multiplication also known as dot product, where each element of A would be multiplied by the corresponding element in B. 
The kernel code would look something like the following:

__kernel void vector_multiplication(__global int* a,
                                    __global int* b,
                                    __global int* c) {
    int threadId = get_global_id(0); // OpenCL function
    c[threadId] = a[threadId] * b[threadId];
}

In this scenario, let's assume we have 1024 processing elements, and we would assign one work item to perform exactly one multiplication; in this case our work group ID would be zero (since there's only one group) and the work item IDs would range from {0 … 1023}. Recall what we discussed earlier, that it is the work group's work items that get executed on the PEs. Hence, reflecting back, this would not be a good way of utilizing the device. In this same scenario, let's ditch the former assumption and go with this: we still have 1024 elements, but we group four work items into a group; hence we would have 256 work groups, with each work group having an ID ranging from {0 … 255}, but notice that the work item's global ID would still range from {0 … 1023} simply because we have not increased the number of elements to be processed. This manner of grouping work items into work groups is done to achieve scalability on these devices, since it increases execution efficiency by ensuring all PEs have something to work on.

The NDRange can be conceptually mapped into an N-dimensional grid, and the following diagram illustrates how a 2DRange works, where WG-X denotes the length in rows for a particular work group and WG-Y denotes the length in columns for a work group, and how work items are grouped, including their respective IDs in a work group.

Before the execution of the kernels on the device(s), the host program plays an important role, and that is to establish a context with the underlying devices and lay down the order of execution of the tasks. The host program does the context creation by establishing the existence (creating if necessary) of the following:

- All devices to be used by the host program
- The OpenCL kernels, that is, the functions and their abstractions that will run on those devices
- The memory objects that encapsulate the data to be used / shared by the OpenCL kernels

Once that is achieved, the host needs to create a data structure called a command queue that will be used by the host to coordinate the execution of the kernels on the devices; commands are issued to this queue and scheduled onto the devices. A command queue can accept kernel execution commands, memory transfer commands, and synchronization commands. Additionally, command queues can execute the commands in-order, that is, in the order they've been given, or out-of-order. If the problem is decomposed into independent tasks, it is possible to create multiple command queues targeting different devices and schedule those tasks onto them, and then OpenCL will run them concurrently.

Memory Model: So far, we have understood the execution model and it's time to introduce the memory model that OpenCL has stipulated. Recall that when the kernel executes, it is actually the work item that is executing its instance of the kernel code. Hence the work item needs to read and write data from memory, and each work item has access to four types of memory: global, constant, local, and private. These memories vary in size as well as accessibility, where global memory has the largest size and is the most accessible to work items, whereas private memory is possibly the most restrictive in the sense that it's private to the work item.
The constant memory is a read-only memory where immutable objects are stored and can be shared with all work items. The local memory is only available to the work items executing in the same work group and is held by each compute unit, that is, it is CU-specific. The application running on the host uses the OpenCL API to create memory objects in global memory and will enqueue memory commands to the command queue to operate on them. The host's responsibility is to ensure that data is available to the device when the kernel starts execution, and it does so by copying data or by mapping/unmapping regions of memory objects. During a typical data transfer from the host memory to the device memory, OpenCL commands are issued to queues and may be blocking or non-blocking. The primary difference between a blocking and a non-blocking memory transfer is that in the former, the function call returns only once (after being queued) it is deemed safe, and in the latter the call returns as soon as the command is enqueued. Memory mapping in OpenCL allows a region of memory space to be made available for computation; this mapping can be blocking or non-blocking, and the developer can treat the mapped space as readable, writeable, or both.

Henceforth, we are going to focus on getting the basics of OpenCL by letting our hands get dirty in developing small OpenCL programs to understand a bit more, programmatically, how to use the platform and execution model of OpenCL. The OpenCL specification Version 1.2 is an open, royalty-free standard for general purpose programming across various devices ranging from mobile to conventional CPUs, and lately GPUs, through an API, and the standard at the time of writing supports:

- Data- and task-based parallel programming models
- A subset of ISO C99 with extensions for parallelism (with some restrictions: recursion, variadic functions, and macros are not supported)
- Mathematical operations that comply with the IEEE 754 specification
- Porting to handheld and embedded devices, accomplished by establishing configuration profiles
- Interoperability with OpenGL, OpenGL ES, and other graphics APIs

Throughout this article, we are going to show you how you can become proficient in programming OpenCL. As you go through the article, you'll discover not only how to use the API to perform all kinds of operations on your OpenCL devices, but you'll also learn how to model a problem and transform it from a serial program to a parallel program. More often than not, the techniques you'll learn can be transferred to other programming toolsets. Among the toolsets I have worked with (OpenCL™, CUDA™, OpenMP™, MPI™, Intel Threading Building Blocks™, Cilk™, and Cilk Plus™), which allow the developer to express parallelism in a homogeneous environment, I find that the entire process, from learning the tools to applying that knowledge, can be classified into four parts. These four phases are rather common and I find it extremely helpful to remember them as I go along. I hope you will benefit from them as well:

- Finding concurrency: The programmer works in the problem domain to identify the available concurrency and expose it for use in the algorithm design
- Algorithm structure: The programmer works with high-level structures for organizing a parallel algorithm
- Supporting structures: This refers to how the parallel program will be organized and the techniques used to manage shared data
- Implementation mechanisms: The final step is to look at specific software constructs for implementing a parallel program
Don't worry about these concepts, they'll be explained as we move through the article. The next few recipes we are going to examine have to do with understanding the usage of OpenCL APIs, by focusing our efforts in understanding the platform model of the architecture.
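As a preview of what those host-side APIs look like in practice, here is a minimal sketch in C++ against the OpenCL 1.2 C API (an illustrative outline rather than one of the book's recipes; error checking is kept to a bare minimum). It walks through the sequence described above: discover a platform and a device, create a context and an in-order command queue, create memory objects in global memory, build the corrected vector_multiplication kernel shown earlier, launch it over a one-dimensional NDRange of 1024 work items in work groups of four, and read the result back with a blocking read.

#define CL_TARGET_OPENCL_VERSION 120
#define CL_USE_DEPRECATED_OPENCL_1_2_APIS  // avoid deprecation warnings on newer headers
#include <CL/cl.h>
#include <cstdio>
#include <vector>

int main() {
    const size_t N = 1024;
    std::vector<cl_int> a(N, 2), b(N, 3), c(N, 0);

    // Kernel source: the dot-product kernel from the text.
    const char* src =
        "__kernel void vector_multiplication(__global int* a,\n"
        "                                    __global int* b,\n"
        "                                    __global int* c) {\n"
        "    int threadId = get_global_id(0);\n"
        "    c[threadId] = a[threadId] * b[threadId];\n"
        "}\n";

    // 1. Platform and device discovery (platform model).
    cl_platform_id platform;
    clGetPlatformIDs(1, &platform, NULL);
    cl_device_id device;
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_DEFAULT, 1, &device, NULL);

    // 2. Context and an in-order command queue (execution model).
    cl_int err;
    cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, &err);
    cl_command_queue queue = clCreateCommandQueue(ctx, device, 0, &err);

    // 3. Memory objects in global memory, initialized from host data (memory model).
    cl_mem bufA = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                                 N * sizeof(cl_int), a.data(), &err);
    cl_mem bufB = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                                 N * sizeof(cl_int), b.data(), &err);
    cl_mem bufC = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY,
                                 N * sizeof(cl_int), NULL, &err);

    // 4. Build the program and create the kernel object.
    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, &err);
    clBuildProgram(prog, 1, &device, NULL, NULL, NULL);
    cl_kernel kernel = clCreateKernel(prog, "vector_multiplication", &err);

    // 5. Set arguments and enqueue over a 1-D NDRange: 1024 work items,
    //    grouped four to a work group (256 work groups in total).
    clSetKernelArg(kernel, 0, sizeof(cl_mem), &bufA);
    clSetKernelArg(kernel, 1, sizeof(cl_mem), &bufB);
    clSetKernelArg(kernel, 2, sizeof(cl_mem), &bufC);
    size_t global = N, local = 4;
    clEnqueueNDRangeKernel(queue, kernel, 1, NULL, &global, &local, 0, NULL, NULL);

    // 6. Blocking read: returns only once the results are safely in host memory.
    clEnqueueReadBuffer(queue, bufC, CL_TRUE, 0, N * sizeof(cl_int), c.data(),
                        0, NULL, NULL);
    std::printf("c[0] = %d\n", c[0]);  // 2 * 3 = 6

    // 7. Release everything we created.
    clReleaseMemObject(bufA); clReleaseMemObject(bufB); clReleaseMemObject(bufC);
    clReleaseKernel(kernel); clReleaseProgram(prog);
    clReleaseCommandQueue(queue); clReleaseContext(ctx);
    return 0;
}

A non-blocking read would pass CL_FALSE instead of CL_TRUE and require an explicit synchronization point (for example, clFinish on the queue, or waiting on an event) before the host touches the data.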

Salesforce CRM Functions

Packt
27 Aug 2013
3 min read
(For more resources related to this topic, see here.) Functional overview of Salesforce CRM The Salesforce CRM functions are related to each other and, as mentioned previously, have cross-over areas which can be represented as shown in the following diagram: Marketing administration Marketing administration is available in Salesforce CRM under the application suite known as the Marketing Cloud. The core functionality enables organizations to manage marketing campaigns from initiation to lead development in conjunction with the sales team. The features in the marketing application can help measure the effectiveness of each campaign by analyzing the leads and opportunities generated as a result of specific marketing activities. Salesforce automation Salesforce automation is the core feature set within Salesforce CRM and is used to manage the sales process and activities. It enables salespeople to automate manual and repetitive tasks and provides them with information related to existing and prospective customers. In Salesforce CRM, Salesforce automation is known as the Sales Cloud, and helps the sales people manage sales activities, leads and contact records, opportunities, quotes, and forecasts. Customer service and support automation Customer service and support automation within Salesforce CRM is known as the Service Cloud, and allows support teams to automate and manage the requests for service and support by existing customers. Using the Service Cloud features, organizations can handle customer requests, such as the return of faulty goods or repairs, complaints, or provide advice about products and services. Associated with the functional areas, described previously, are features and mechanisms to help users and customers collaborate and share information known as enterprise social networking. Enterprise social networking Enterprise social network capabilities within Salesforce CRM enable organizations to connect with people and securely share business information in real time. Social networking within an enterprise serves to connect both employees and customers and enables business collaboration. In Salesforce CRM, the enterprise social network suite is known as Salesforce Chatter. Salesforce CRM record life cycle The capabilities of Salesforce CRM provides for the processing of campaigns through to customer acquisition and beyond as shown in the following diagram: At the start of the process, it is the responsibility of the marketing team to develop suitable campaigns in order to generate leads. Campaign management is carried out using the Marketing Administration tools and has links to the lead and also any opportunities that have been influenced by the campaign. When validated, leads are converted to accounts, contacts, and opportunities. This can be the responsibility of either the marketing or sales teams and requires a suitable sales process to have been agreed upon. In Salesforce CRM, an account is the company or organization and a contact is an individual associated with an account. Opportunities can either be generated from lead conversion or may be entered directly by the sales team. As described earlier in this article, the structure of Salesforce requires account ownership to be established which sees inherited ownership of the opportunity. Account ownership is usually the responsibility of the sales team. Opportunities are worked through a sales process using sales stages where the stage is advanced to the point where they are set as won/closed and become sales. 
Opportunity information should be logged in the organization's financial system. Upon financial completion and acceptance of the deal (and perhaps delivery of the goods or services), the post-customer-acquisition process is enabled and the account and contact can be recognized as a customer. From this point, customer incidents and requests are managed by raising and escalating cases within the customer service and support automation suite. A simplified sketch of this record life cycle follows.
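To make the record life cycle more concrete, the following plain Java sketch models the conversion of a lead into an account, contact, and opportunity, followed by post-acquisition case handling. It is a simplified, hypothetical object model built only for illustration; the class and field names are assumptions and this is not Salesforce's Apex API or data model.

import java.util.ArrayList;
import java.util.List;

// Hypothetical, simplified model of the campaign-to-customer life cycle described above.
public class RecordLifeCycleSketch {

    static class Account { final String name; Account(String name) { this.name = name; } }
    static class Contact {
        final String name; final Account account;
        Contact(String name, Account account) { this.name = name; this.account = account; }
    }
    static class Opportunity {
        final Account account; String stage = "Prospecting";
        Opportunity(Account account) { this.account = account; }
    }
    static class Lead {
        final String company; final String contactName; boolean converted;
        Lead(String company, String contactName) { this.company = company; this.contactName = contactName; }
    }
    static class Case {
        final Account account; final String subject;
        Case(Account account, String subject) { this.account = account; this.subject = subject; }
    }

    // Lead conversion creates an account, a contact, and an opportunity in one step.
    static Opportunity convert(Lead lead) {
        Account account = new Account(lead.company);
        Contact contact = new Contact(lead.contactName, account);
        lead.converted = true;
        System.out.println("Converted lead into account, contact (" + contact.name + "), and opportunity");
        return new Opportunity(account);
    }

    public static void main(String[] args) {
        Lead lead = new Lead("Acme Ltd", "Jane Smith");           // generated by a marketing campaign
        Opportunity opp = convert(lead);                          // validated and converted
        opp.stage = "Closed Won";                                 // worked through the sales stages
        List<Case> cases = new ArrayList<>();
        cases.add(new Case(opp.account, "Faulty goods returned")); // post-acquisition support
        System.out.println(opp.account.name + " is now a customer with " + cases.size() + " open case(s)");
    }
}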
Getting Started with Mule

Mule ESB is a lightweight, Java-based integration platform. Through an ESB, you can integrate or communicate with multiple applications, regardless of the different technologies they use, including JMS, web services, JDBC, and HTTP.

Understanding Mule concepts and terminologies
An Enterprise Service Bus (ESB) is an application that gives access to other applications and services. Its main task is to be the messaging and integration backbone of an enterprise. An ESB is a distributed middleware system used to integrate different applications; all of these applications communicate through the ESB. It consists of a set of service containers that integrate various types of applications, and the containers are interconnected with a reliable messaging bus.

Getting ready
An ESB is used for integration using a service-oriented approach. Its main features are as follows:
- Polling JMS
- Message transformation and routing services
- Tomcat hot deployment
- Web service security

We often use the abbreviation VETRO to summarize the ESB functionality:
- V – validate (schema validation)
- E – enrich
- T – transform
- R – route (either itinerary or content based)
- O – operate (perform operations; they run at the backend)

Before the introduction of an ESB, developers and integrators had to connect different applications in a point-to-point fashion.

How to do it...
After the introduction of an ESB, you just need to connect each application to the ESB so that every application can communicate with the others through it. You can easily connect multiple applications through the ESB, as shown in the following diagram:

Need for the ESB
You can integrate different applications using an ESB, and each application can communicate through it:
- To integrate more than two or three services and/or applications
- To integrate more applications, services, or technologies in the future
- To use different communication protocols
- To publish services for composition and consumption
- For message transformation and routing

What is Mule ESB?
Mule ESB is a lightweight, Java-based enterprise service bus and integration platform that allows developers and integrators to connect applications together quickly and easily, enabling them to exchange data. There are two editions of Mule ESB: Community and Enterprise. Mule ESB Enterprise is the enterprise-class version of Mule ESB, with additional features and capabilities that are ideal for clustering and performance tuning, DataMapper, and the SAP connector. The Community and Enterprise editions are built on a common code base, so it is easy to upgrade from Mule ESB Community to Mule ESB Enterprise. Mule ESB enables easy integration of existing systems, regardless of the different technologies that the applications use, including JMS, web services, JDBC, and HTTP. The key advantage of an ESB is that it allows different applications to communicate with each other by acting as a transit system for carrying data between applications within your enterprise or across the Internet.
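To make the routing idea in VETRO more concrete, here is a minimal content-based router sketched in plain Java. It is a conceptual illustration only, with made-up handler names; a real Mule deployment declares routers, transformers, and filters in its configuration rather than in hand-written dispatch code like this.

import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.Consumer;

// Conceptual sketch of content-based routing (the "R" in VETRO) in plain Java.
public class ContentBasedRouterSketch {

    // Each destination system is represented here by a simple message handler.
    private final Map<String, Consumer<String>> routes = new LinkedHashMap<>();

    public void addRoute(String keyword, Consumer<String> destination) {
        routes.put(keyword, destination);
    }

    // Route the message to the first destination whose keyword appears in the payload.
    public void route(String payload) {
        for (Map.Entry<String, Consumer<String>> entry : routes.entrySet()) {
            if (payload.contains(entry.getKey())) {
                entry.getValue().accept(payload);
                return;
            }
        }
        System.out.println("No route matched, sending to dead-letter queue: " + payload);
    }

    public static void main(String[] args) {
        ContentBasedRouterSketch router = new ContentBasedRouterSketch();
        router.addRoute("ORDER", msg -> System.out.println("Order system received: " + msg));
        router.addRoute("INVOICE", msg -> System.out.println("Billing system received: " + msg));
        router.route("ORDER-1042 for 3 units");
        router.route("INVOICE-77 due in 30 days");
    }
}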
Mule ESB includes powerful capabilities that include the following:
- Service creation and hosting: It exposes and hosts reusable services, using Mule ESB as a lightweight service container
- Service mediation: It shields services from message formats and protocols, separates business logic from messaging, and enables location-independent service calls
- Message routing: It routes, filters, aggregates, and re-sequences messages based on content and rules
- Data transformation: It exchanges data across varying formats and transport protocols

Mule ESB is lightweight but highly scalable, allowing you to start small and connect more applications over time. Mule provides a Java-based messaging framework and manages all the interactions between applications and components transparently. Mule provides transformation, routing, filtering, Endpoints, and so on.

How it works...
When you examine how a message flows through Mule ESB, you can see that there are three layers in the architecture, which are listed as follows:
- Application Layer
- Integration Layer
- Transport Layer

Likewise, there are three general types of tasks you can perform to configure and customize your Mule deployment. Refer to the following diagram:

The following list describes Mule and its configuration:
- Service component development: This involves developing or reusing existing POJOs (plain Java classes with attributes and their getter and setter methods), Cloud connectors, or Spring beans that contain the business logic and will consume, process, or enrich messages. A minimal service component sketch is shown after this list.
- Service orchestration: This involves configuring message processors, routers, transformers, and filters that provide the service mediation and orchestration capabilities required to allow composition of loosely coupled services using a Mule flow. New orchestration elements can also be created and dropped into your deployment.
- Integration: A key requirement of service mediation is decoupling services from the underlying protocols. Mule provides transport methods to allow dispatching and receiving messages on different protocol connectors. These connectors are configured in the Mule configuration file and can be referenced from the orchestration layer. Mule supports many existing transport methods and all the popular communication protocols, but you may also develop a custom transport method if you need to extend Mule to support a particular legacy or proprietary system.
- Spring beans: You can construct service components from Spring beans and define these Spring components through a configuration file. If you don't have this file, you will need to define it manually in the Mule configuration file.
- Agents: An agent is a service that is created in Mule Studio. When you start the server, an agent is created; when you stop the server, the agent is destroyed.
- Connectors: A connector is a software component that provides connectivity to an external system or transport.
- Global configuration: Global configuration is used to set global properties and settings.
- Global Endpoints: Global Endpoints are defined in the Global Elements tab. We can use a global element as many times in a flow as we want; for that, we must pass its reference name.
- Global message processor: A global message processor observes a message or modifies either a message or the message flow; examples include transformers and filters.
- Transformers: A transformer converts data from one format to another. You can define transformers globally and use them in multiple flows.
- Filters: Filters decide which Mule messages should be processed. They specify the conditions that must be met for a message to be routed to a service or continue progressing through a flow. Several standard filters come with Mule ESB, which you can use, or you can create your own.
- Models: A model is a logical grouping of services, created in Mule Studio. You can start and stop all the services inside a particular model.
- Services: You can define one or more services that wrap your components (business logic) and configure routers, Endpoints, transformers, and filters specifically for that service. Services are connected using Endpoints.
- Endpoints: Endpoints are the objects on which services receive (inbound) and send (outbound) messages.
- Flow: A flow uses message processors to define the movement of a message between a source and a target.
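As a concrete illustration of the service component item in the list above, the following is a minimal sketch of a POJO that a flow could invoke to enrich a message payload. The class and method names are invented for the example; how Mule locates the method to call depends on your flow configuration, so treat this purely as a sketch of a plain business-logic class.

// A minimal POJO service component: a plain class with attributes, getters/setters,
// and business logic that a flow can call to process or enrich a message payload.
public class GreetingService {

    private String greetingPrefix = "Hello, ";

    public String getGreetingPrefix() {
        return greetingPrefix;
    }

    public void setGreetingPrefix(String greetingPrefix) {
        this.greetingPrefix = greetingPrefix;
    }

    // Business logic: enrich the incoming payload before it continues through the flow.
    public String enrich(String payload) {
        return greetingPrefix + payload.trim();
    }

    public static void main(String[] args) {
        GreetingService service = new GreetingService();
        System.out.println(service.enrich("  Mule  "));   // prints "Hello, Mule"
    }
}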
Setting up the Mule IDE
Developers who were using Mule ESB alongside other technologies, such as Liferay Portal, Alfresco ECM, or Activiti BPM, can use Mule IDE in Eclipse without configuring the standalone Mule Studio in their existing environment. In recent times, MuleSoft (http://www.mulesoft.org/) only provides Mule Studio from Version 3.3 onwards, not Mule IDE. If you are using an older version of Mule ESB, you can get Mule IDE separately from http://dist.muleforge.org/mule-ide/releases/.

Getting ready
To set up Mule IDE, we need Java to be installed on the machine and its execution path set in an environment variable. We will now see how to set up Java on our machine; a minimal class you can compile and run to verify the setup is shown at the end of this recipe. Firstly, download JDK 1.6 or a higher version from the following URL: http://www.oracle.com/technetwork/java/javase/downloads/jdk6downloads-1902814.html. In your Windows system, go to Start | Control Panel | System | Advanced. Click on Environment Variables under System Variables, find Path, and click on it. In the Edit window, modify the path by adding the location of the class to its value. If you do not have the Path item, you may add a new variable with Path as the name and the location of the class as its value. Close the window, reopen the command prompt window, and run your Java code.

How to do it...
If you go with Eclipse, you have to download Mule IDE Standalone 3.3:
1. Download Mule ESB 3.3 Community edition from the following URL: http://www.mulesoft.org/extensions/mule-ide.
2. Unzip the downloaded file and set MULE_HOME as the environment variable.
3. Download the latest version of Eclipse from http://www.eclipse.org/downloads/.

After installing Eclipse, you now have to integrate Mule IDE into Eclipse. If you are using Eclipse Version 3.4 (Galileo), perform the following steps to install Mule IDE; if you are not using Version 3.4 (Galileo), the URL for downloading will be different.
1. Open Eclipse IDE.
2. Go to Help | Install New Software….
3. Write the URL in the Work with: textbox: http://dist.muleforge.org/muleide/updates/3.4/ and press Enter.
4. Select the Mule IDE checkbox.
5. Click on the Next button.
6. Read and accept the license agreement terms.
7. Click on the Finish button. This will take some time. When it prompts for a restart, shut it down and restart Eclipse.

Mule configuration
After installing Mule IDE, you will now have to configure Mule in Eclipse. Perform the following steps:
1. Open Eclipse IDE.
2. Go to Window | Preferences.
3. Select Mule, add the distribution folder for Mule standalone 3.3, click on the Apply button, and then on the OK button.
This way you can configure Mule with Eclipse.
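As mentioned in the Getting ready section above, the following minimal class can be used to verify that the JDK and the Path variable are set up correctly before installing Mule IDE. The class name is arbitrary; compile it with javac VerifyJavaSetup.java and run it with java VerifyJavaSetup from a fresh command prompt. If both commands work, the environment is ready.

// A minimal class to confirm that the JDK installation and PATH setting are working.
public class VerifyJavaSetup {
    public static void main(String[] args) {
        System.out.println("Java version: " + System.getProperty("java.version"));
        System.out.println("Java home:    " + System.getProperty("java.home"));
    }
}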
Installing Mule Studio
Mule Studio is a powerful, user-friendly, Eclipse-based tool. Mule Studio has three main components: a package tree, a palette, and a canvas. With Mule Studio you can easily create flows, and edit and test them, in a few minutes. Mule Studio is currently in public beta. It is based on drag-and-drop elements and supports two-way editing.

Getting ready
To install Mule Studio, download it from http://www.mulesoft.org/download-mule-esb-community-edition.

How to do it...
1. Unzip the Mule Studio folder.
2. Set the environment variable for Mule Studio.
3. When you start Mule Studio for the first time, the config.xml file is created automatically by Mule Studio.

The three main components of Mule Studio are as follows:
- A package tree
- A palette
- A canvas

A package tree
A package tree contains the entire structure of your project. In the following screenshot, you can see the package explorer tree. In this package explorer tree, under src/main/java, you can store your custom Java classes. You can create a graphical flow from src/main/resources. The src/main/app folder stores the mule-deploy.properties file and contains the flow XML files. The src/main/test folder contains flow-related test files. The mule-project.xml file contains the project's metadata; you can edit the name, description, and server runtime version used for a specific project. JRE System Library contains the Java runtime libraries, and Mule Runtime contains the Mule runtime libraries.

A palette
The second component is the palette. The palette is the source for accessing Endpoints, components, transformers, and Cloud connectors. You can drag them from the palette and drop them onto the canvas in order to create flows. The palette typically displays buttons indicating the different types of Mule elements. You can view the content of each button by clicking on it; if you do not want to expand an element, click on the button again to hide its content.

A canvas
The third component is the canvas, a graphical editor in which you create flows. The canvas provides a space that facilitates the arrangement of Studio components into Mule flows. In the canvas area you can configure each and every component, and you can add or remove components on the canvas.
Networking Performance Design

Device and I/O virtualization involves managing the routing of I/O requests between virtual devices and the shared physical hardware. Software-based I/O virtualization and management, in contrast to a direct pass-through to the hardware, enables a rich set of features and simplified management. With networking, virtual NICs and virtual switches create virtual networks between virtual machines running on the same host, without the network traffic consuming bandwidth on the physical network. NIC teaming consists of multiple physical NICs and provides failover and load balancing for virtual machines. Virtual machines can be seamlessly relocated to different systems by using VMware vMotion, while keeping their existing MAC addresses and the running state of the VM. The key to effective I/O virtualization is to preserve these virtualization benefits while keeping the added CPU overhead to a minimum.

The hypervisor virtualizes the physical hardware and presents each virtual machine with a standardized set of virtual devices. These virtual devices effectively emulate well-known hardware and translate the virtual machine requests to the system hardware. This standardization on consistent device drivers also helps with virtual machine standardization and portability across platforms, because all virtual machines are configured to run on the same virtual hardware, regardless of the physical hardware in the system.

In this article we will:
- Describe various network performance problems
- Discuss the causes of network performance problems
- Propose solutions to correct network performance problems

Designing a network for load balancing and failover for vSphere Standard Switch
The load balancing and failover policies that are chosen for the infrastructure can have an impact on the overall design. Using NIC teaming, we can group several physical network adapters attached to a vSwitch. This grouping enables load balancing between the different physical NICs and provides fault tolerance if a card or link failure occurs.

Network adapter teaming offers a number of load balancing and load distribution options. Load balancing is load distribution based on the number of connections, not on network traffic. In most cases, load is managed only for the outgoing traffic, and balancing is based on three different policies:
- Route based on the originating virtual switch port ID (default)
- Route based on the source MAC hash
- Route based on IP hash

There are also two network failure detection options:
- Link status only
- Beacon probing

Getting ready
To step through this recipe, you will need one or more running ESXi hosts, a vCenter Server, and a working installation of vSphere Client. No other prerequisites are required.

How to do it...
To change the load balancing policy, select the right one for your environment, and choose the appropriate failover policy, follow these steps:
1. Open up your VMware vSphere Client.
2. Log in to the vCenter Server.
3. On the left-hand side, choose any ESXi Server and choose Configuration from the right-hand pane.
4. Click on the Networking section and select the vSwitch for which you want to change the load balancing and failover settings. You may wish to override this at the port group level as well.
5. Click on Properties.
6. Select the vSwitch and click on Edit.
7. Go to the NIC Teaming tab.
8. Select one of the available policies from the Load Balancing drop-down menu.
9. Select one of the available policies from the Network Failover Detection drop-down menu.
10. Click on OK to make it effective.

How it works...
Route based on the originating virtual switch port ID (default)
In this configuration, load balancing is based on the number of physical network cards and the number of virtual ports used. With this policy, a virtual network card connected to a vSwitch port will always use the same physical network card. If a physical network card fails, the virtual network card is redirected to another physical network card. You typically do not see the individual ports on a vSwitch; however, each vNIC that gets connected to a vSwitch is implicitly using a particular port on the vSwitch (there is simply no reason to ever configure which port, because that is always done automatically).

This policy does a reasonable job of balancing egress uplinks for traffic leaving an ESXi host, as long as all the virtual machines using these uplinks have similar usage patterns. It is important to note that port allocation occurs only when a VM is started or when a failover occurs. Balancing is done based on a port's occupation rate at the time the VM starts up, which means the pNIC used by a VM is determined at power-on, based on which ports in the vSwitch are occupied at that time. For example, if you started 20 VMs in a row on a vSwitch with two pNICs, the odd-numbered VMs would use the left pNIC and the even-numbered VMs would use the right pNIC, and that would persist even if you shut down all the even-numbered VMs: the left pNIC would carry all the VMs and the right pNIC would carry none. It might also happen that two heavily loaded VMs are connected to the same pNIC, so load is not truly balanced.

This is the simplest policy, and we generally favor the simplest option for operational simplicity. It is important to understand, however, that if teaming is created with two 1 GB cards and one VM consumes more than one card's capacity, a performance problem will arise: traffic greater than 1 Gbps will not go through the other card, and there will be an impact on the VMs sharing the same pNIC as the VM consuming all the resources. Likewise, if two VMs each wish to use 600 Mbps and they happen to land on the first pNIC, that pNIC cannot meet the 1.2 Gbps demand no matter how idle the second pNIC is.

Route based on source MAC hash
The principle is the same as the default policy, but it is based on the number of MAC addresses. This policy may put VM vNICs on the same physical uplink depending on how the MAC hash is resolved. For MAC hash, VMware assigns ports differently: the assignment is not based on the dynamically changing port (after a power off and power on, the VM usually gets a different vSwitch port assigned) but on the fixed MAC address. As a result, one VM is always assigned to the same physical NIC unless the configuration is changed. With the port ID policy, the VM could get a different pNIC after a reboot or vMotion; with MAC hash, if you have two ESXi Servers with the same configuration, the VM will stay on the same pNIC number even after a vMotion. But again, one pNIC may be congested while others are idle, so there is no real load balancing. A conceptual sketch of how these two static policies pin a vNIC to an uplink follows.
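The following plain Java sketch illustrates, conceptually, how the two policies just described pin a vNIC to one uplink: the port ID policy keys on the vSwitch port the vNIC occupies, while the MAC hash policy keys on the adapter's MAC address. The modulo-style mapping and the helper names are assumptions made purely for illustration; this is not VMware's actual implementation.

// Conceptual sketch of static uplink selection for the port ID and MAC hash policies.
public class StaticUplinkSelectionSketch {

    // Route based on the originating virtual switch port ID:
    // the vSwitch port a vNIC occupies decides its uplink until the next power-on or failover.
    static int uplinkForPortId(int vSwitchPortId, int numberOfUplinks) {
        return Math.floorMod(vSwitchPortId, numberOfUplinks);
    }

    // Route based on source MAC hash:
    // the vNIC's MAC address decides the uplink, so the mapping survives reboots and vMotion.
    static int uplinkForMac(String macAddress, int numberOfUplinks) {
        return Math.floorMod(macAddress.hashCode(), numberOfUplinks);
    }

    public static void main(String[] args) {
        int uplinks = 2;
        System.out.println("VM on vSwitch port 7 -> uplink " + uplinkForPortId(7, uplinks));
        System.out.println("VM on vSwitch port 8 -> uplink " + uplinkForPortId(8, uplinks));
        System.out.println("MAC 00:50:56:aa:bb:01 -> uplink "
                + uplinkForMac("00:50:56:aa:bb:01", uplinks));
    }
}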
Route based on IP hash
The limitation of the two previously discussed policies is that a given virtual NIC will always use the same physical network card for all its traffic. IP hash-based load balancing uses the source and destination IP addresses to determine which physical network card to use. With this algorithm, a VM can communicate through several different physical network cards depending on its destination. This option requires the physical switch's ports to be configured for EtherChannel. Because the physical switch is configured accordingly, this is the only option that also provides inbound load distribution, although the distribution is not necessarily balanced.

There are some limitations and reasons why this policy is not commonly used:
- The route based on IP hash option involves added complexity and configuration support from upstream switches. Link Aggregation Control Protocol (LACP) or EtherChannel is required for this algorithm to be used; however, this does not apply to a vSphere Standard Switch.
- For IP hash to be an effective load balancing algorithm, there must be many IP sources and destinations. This is not common for IP storage networks, where a single VMkernel port is used to access a single IP address on a storage device.
- The same vNIC will always send all its traffic to a given destination (for example, google.com) through the same pNIC, though another destination (for example, bing.com) might go through another pNIC.

So, in a nutshell, due to the added complexity, the upstream dependency on advanced switch configuration, and the management overhead, this configuration is rarely used in production environments. The main reason is that if you use IP hash, the pSwitch must be configured with LACP or EtherChannel; and if you use LACP or EtherChannel, the load balancing algorithm must be IP hash. This is because with LACP, inbound traffic to the VM could come through either of the pNICs, and the vSwitch must be ready to deliver it to the VM; only IP hash will do that (the other policies will drop inbound traffic to the VM that arrives on a pNIC the VM doesn't use). The following sketch illustrates how the source and destination address pair picks the uplink.
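The sketch below illustrates the IP hash idea: the source and destination address pair, rather than the vNIC itself, selects the uplink, so the same VM can leave through different uplinks for different destinations. The XOR-and-modulo hash and the sample addresses are assumptions made for illustration and are not the exact algorithm ESXi uses.

import java.net.InetAddress;
import java.net.UnknownHostException;

// Conceptual sketch of route based on IP hash.
public class IpHashSelectionSketch {

    // Combine the source and destination addresses, then map the hash onto an uplink.
    static int uplinkFor(InetAddress source, InetAddress destination, int numberOfUplinks) {
        int hash = source.hashCode() ^ destination.hashCode();
        return Math.floorMod(hash, numberOfUplinks);
    }

    public static void main(String[] args) throws UnknownHostException {
        InetAddress vm = InetAddress.getByName("10.0.0.15");
        InetAddress storage = InetAddress.getByName("10.0.0.200");
        InetAddress backup = InetAddress.getByName("10.0.1.50");
        // The same VM may leave through different uplinks depending on the destination.
        System.out.println("To storage: uplink " + uplinkFor(vm, storage, 2));
        System.out.println("To backup:  uplink " + uplinkFor(vm, backup, 2));
    }
}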
There are only two network failure detection options:

Link status only
The link status option enables the detection of failures related to the physical network's cables and switch. Be aware, however, that configuration issues are not detected. This option also cannot detect link state problems with upstream switches; it works only with the switch that is the first hop from the host.

Beacon probing
The beacon probing option allows the detection of failures unseen by the link status option by sending Ethernet broadcast frames through all the network cards. These frames allow the vSwitch to detect faulty configurations or upstream switch failures and force a failover if ports are blocked. When using an inverted-U physical network topology in conjunction with a dual-NIC server, it is recommended to enable link state tracking or a similar network feature in order to avoid traffic black holes. According to VMware's best practices, it is recommended to have at least three cards before activating this functionality. However, if IP hash is going to be used, beacon probing should not be used for network failure detection, in order to avoid an ambiguous state caused by the limitation that a packet cannot hairpin on the port on which it was received. Beacon probing works by sending out and listening to beacon probes from the NICs in a team: if there are two NICs, each NIC sends out a probe and the other NICs receive that probe. Because EtherChannel is considered one link, this will not function properly, as the NIC uplinks are not logically separate uplinks. If beacon probing is used in that case, it can result in MAC address flapping errors, and network connectivity may be interrupted.

Designing a network for load balancing and failover for vSphere Distributed Switch
The load balancing and failover policies that are chosen for the infrastructure can have an impact on the overall design. Using NIC teaming, we can group several physical network adapters attached to a vSwitch. This grouping enables load balancing between the different physical NICs and provides fault tolerance if a card failure occurs. The vSphere Distributed Switch offers a load balancing option that actually takes the network workload into account when choosing the physical uplink: route based on physical NIC load, also called Load Based Teaming (LBT). We recommend this load balancing option over the others when using a distributed vSwitch. The benefits of this policy are as follows:
- It is the only load balancing option that actually considers NIC load when choosing uplinks.
- It does not require upstream switch configuration dependencies the way the route based on IP hash algorithm does.
- When route based on physical NIC load is combined with network I/O control, a truly dynamic traffic distribution is achieved.

Getting ready
To step through this recipe, you will need one or more running ESXi Servers, a vCenter Server, and a working installation of vSphere Client. No other prerequisites are required.

How to do it...
To change the load balancing policy, select the right one for your environment, and choose the appropriate failover policy, follow these steps:
1. Open up your VMware vSphere Client.
2. Log in to the vCenter Server.
3. Navigate to Networking on the home screen.
4. Navigate to a distributed port group, right-click, and select Edit Settings.
5. Click on the Teaming and Failover section.
6. From the Load Balancing drop-down menu, select Route based on physical NIC load as the load balancing policy.
7. Choose the appropriate network failover detection policy from the drop-down menu.
8. Click on OK and your settings will take effect.

How it works...
Load based teaming, also known as route based on physical NIC load, maps vNICs to pNICs and remaps the vNIC-to-pNIC affiliation if the load exceeds specific thresholds on a pNIC. LBT uses the originating port ID algorithm for the initial port assignment, which results in the first vNIC being affiliated to the first pNIC, the second vNIC to the second pNIC, and so on. Once the initial placement is over, after the VM has been powered on, LBT examines both the inbound and outbound traffic on each of the pNICs and then redistributes the load if there is congestion. LBT raises a congestion alert when the average utilization of a pNIC is 75 percent over a period of 30 seconds; the 30-second interval is used to avoid MAC flapping issues. However, you should enable PortFast on the upstream switches if you plan to use STP. VMware recommends LBT over IP hash when you use vSphere Distributed Switch, as it does not require any special or additional settings in the upstream switch layer; in this way you can reduce unnecessary operational complexity. LBT maps vNICs to pNICs and then distributes the load across all the available uplinks, unlike IP hash, which just maps the vNIC to a pNIC but does not do load distribution. With IP hash it may therefore happen that while a high network I/O VM is sending traffic through pNIC0, another VM is also mapped to the same pNIC and sends its traffic there. A sketch of the LBT congestion trigger follows.
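The following sketch shows the LBT congestion trigger described above: a remap is considered only when an uplink's average utilization over the 30-second window exceeds 75 percent. The class and method names are assumptions for illustration; this is not ESXi code.

// Conceptual sketch of the Load Based Teaming congestion check.
public class LoadBasedTeamingSketch {

    private static final double CONGESTION_THRESHOLD = 0.75;

    // Samples are fractions of the uplink's capacity observed during the 30-second window.
    static boolean shouldRemap(double[] utilizationSamples) {
        double sum = 0.0;
        for (double sample : utilizationSamples) {
            sum += sample;
        }
        double average = sum / utilizationSamples.length;
        return average > CONGESTION_THRESHOLD;
    }

    public static void main(String[] args) {
        double[] busyUplink = {0.82, 0.79, 0.91, 0.77};
        double[] quietUplink = {0.20, 0.35, 0.15, 0.28};
        System.out.println("Busy uplink triggers remap:  " + shouldRemap(busyUplink));
        System.out.println("Quiet uplink triggers remap: " + shouldRemap(quietUplink));
    }
}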
What to know when offloading checksum
VMware takes advantage of many of the performance features of modern network adapters. In this section we are going to talk about two of them:
- TCP checksum offload
- TCP segmentation offload

Getting ready
To step through this recipe, you will need a running ESXi Server and an SSH client (PuTTY). No other prerequisites are required.

How to do it...
The list of network adapter features that are enabled on your NIC can be found in the file /etc/vmware/esx.conf on your ESXi Server. Look for the lines that start with /net/vswitch. However, do not change the default NIC driver settings unless you have a valid reason to do so. A good practice is to follow any configuration recommendations specified by the hardware vendor. Carry out the following steps in order to check the settings:
1. Open up your SSH client and connect to your ESXi host.
2. Open the file /etc/vmware/esx.conf.
3. Look for the lines that start with /net/vswitch. Your output should look like the following screenshot:

How it works...
A TCP message must be broken down into Ethernet frames. The size of each frame is the maximum transmission unit (MTU); the default MTU is 1500 bytes. The process of breaking messages into frames is called segmentation.

Modern NIC adapters have the ability to perform checksum calculations natively. TCP checksums are used to determine the validity of transmitted or received network packets based on error-correcting codes. These calculations are traditionally performed by the host's CPU. By offloading them to the network adapter, the CPU is freed up to perform other tasks, and as a result the system as a whole runs better.

TCP segmentation offload (TSO) allows the TCP/IP stack of the guest OS inside the VM to emit large frames (up to 64 KB) even though the MTU of the interface is smaller. Earlier operating systems used the CPU to perform segmentation; modern NICs optimize TCP segmentation by using a larger segment size as well as offloading work from the CPU to the NIC hardware. ESXi utilizes this concept to provide a virtual NIC with TSO support, without requiring specialized network hardware. With TSO, instead of processing many small MTU frames during transmission, the system can send fewer, larger virtual MTU frames.

TSO improves performance for TCP network traffic coming from a virtual machine and for network traffic sent out of the server. TSO is supported at the virtual machine level and in the VMkernel TCP/IP stack, and it is enabled on the VMkernel interface by default. If TSO becomes disabled for a particular VMkernel interface, the only way to enable it is to delete that VMkernel interface and recreate it with TSO enabled. TSO is used in the guest when the VMXNET 2 (or later) network adapter is installed. To enable TSO at the virtual machine level, you must replace the existing VMXNET or flexible virtual network adapter with a VMXNET 2 (or later) adapter. This replacement might result in a change in the MAC address of the virtual network adapter.
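The following sketch gives a feel for the work that these offloads move off the CPU: an RFC 1071-style ones'-complement checksum over 16-bit words, and the number of MTU-sized frames a large TSO buffer is cut into. It is a teaching illustration under those assumptions, not the code path a NIC or ESXi actually executes, and the 1,460-byte payload-per-frame figure is an approximation for a 1,500-byte MTU.

// Simplified illustration of checksum and segmentation work offloaded to the NIC.
public class OffloadWorkSketch {

    // RFC 1071-style checksum: sum 16-bit words with carry wrap-around, then complement.
    static int internetChecksum(byte[] data) {
        long sum = 0;
        for (int i = 0; i < data.length; i += 2) {
            int word = (data[i] & 0xFF) << 8;
            if (i + 1 < data.length) {
                word |= (data[i + 1] & 0xFF);
            }
            sum += word;
            sum = (sum & 0xFFFF) + (sum >> 16);   // fold the carry back into the low 16 bits
        }
        return (int) (~sum & 0xFFFF);
    }

    // With TSO the guest hands over one large buffer; the hardware emits MTU-sized frames.
    static int framesNeeded(int payloadBytes, int payloadBytesPerFrame) {
        return (payloadBytes + payloadBytesPerFrame - 1) / payloadBytesPerFrame;
    }

    public static void main(String[] args) {
        byte[] segment = "example payload".getBytes();
        System.out.printf("Checksum: 0x%04x%n", internetChecksum(segment));
        // A 64 KB TSO buffer with roughly 1,460 bytes of TCP payload per 1,500-byte MTU frame:
        System.out.println("Frames needed: " + framesNeeded(64 * 1024, 1460));
    }
}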
Selecting the correct virtual network adapter
When you configure a virtual machine, you can add NICs and specify the adapter type. The types of network adapters that are available depend on the following factors:
- The version of the virtual machine, which depends on which host created it or most recently updated it
- Whether or not the virtual machine has been updated to the latest version for the current host
- The guest operating system

The following virtual NIC types are supported:
- Vlance
- VMXNET
- Flexible
- E1000
- Enhanced VMXNET (VMXNET 2)
- VMXNET 3

If you want to know more about these network adapter types, refer to the following KB article: http://kb.vmware.com/kb/1001805

Getting ready
To step through this recipe, you will need one or more running ESXi Servers, a vCenter Server, and a working installation of vSphere Client. No other prerequisites are required.

How to do it...
There are two ways to choose a particular virtual network adapter: while creating a new VM, or while adding a new network adapter to an existing VM.

To choose a network adapter while creating a new VM:
1. Open vSphere Client.
2. Log in to the vCenter Server.
3. Click on the File menu and navigate to New | Virtual Machine.
4. Go through the steps until you reach the step where you create network connections. Here you choose how many network adapters you need, which port group you want them to connect to, and an adapter type.

To choose an adapter type while adding a new network interface to an existing VM:
1. Open vSphere Client.
2. Log in to the vCenter Server.
3. Navigate to VMs and Templates on your home screen.
4. Select the existing VM to which you want to add a new network adapter, right-click, and select Edit Settings.
5. Click on the Add button.
6. Select Ethernet Adapter.
7. Select the adapter type and select the network where you want this adapter to connect.
8. Click on Next and then click on Finish.

How it works...
Among all the supported virtual network adapter types, VMXNET is the paravirtualized device driver for virtual networking. The VMXNET driver implements an idealized network interface that passes network traffic from the virtual machine to the physical cards with minimal overhead. The three versions of VMXNET are VMXNET, VMXNET 2 (Enhanced VMXNET), and VMXNET 3. The VMXNET driver improves performance through a number of optimizations:
- It shares a ring buffer between the virtual machine and the VMkernel and uses zero copy, which saves CPU cycles. Zero copy improves performance by having the virtual machine and the VMkernel share a buffer, reducing the internal copy operations between buffers and freeing up CPU cycles.
- It takes advantage of transmission packet coalescing to reduce address space switching.
- It batches packets and issues a single interrupt, rather than issuing multiple interrupts. This improves efficiency, but in some cases with slow packet-sending rates it can hurt throughput while waiting to collect enough packets to actually send.
- It offloads TCP checksum calculation to the network hardware rather than using the CPU resources of the virtual machine monitor.

Use VMXNET 3 if you can, or the most recent model available, and use VMware Tools where possible. For certain unusual types of network traffic, the generally best model isn't always optimal; if you see poor network performance, experiment with other types of vNICs to see which performs best.