How-To Tutorials - Programming

RSS Web Widget

Packt
24 Oct 2009
8 min read

What is an RSS Feed?

First of all, let us understand what a web feed is. Basically, it is a data format that provides frequently updated content to users. Content distributors syndicate the web feed, allowing users to subscribe to it using a feed aggregator. RSS feeds contain data in an XML format.

RSS stands for Really Simple Syndication, RDF Site Summary, or Rich Site Summary, depending on the version. RDF (Resource Description Framework), a family of W3C specifications, is a data model for describing information such as title, author, modified date, and content through a variety of syntax formats. RDF is designed mainly to be read by computers exchanging information.

Because RSS is an XML format for data representation, different authorities defined different RSS formats across versions 0.90, 0.91, 0.92, 0.93, 0.94, 1.0, and 2.0. The following list shows when, and by whom, the different RSS versions were proposed:

- RSS 0.90 (1999): Netscape introduced RSS 0.90.
- RSS 0.91 (1999): Netscape proposed a simpler format, RSS 0.91; the same year, UserLand Software proposed its own RSS specification.
- RSS 1.0 (2000): O'Reilly released RSS 1.0.
- RSS 2.0 (2002): UserLand Software proposed this further RSS specification, and it is the most popular RSS format in use today.

Harvard Law School is now responsible for the further development of the RSS specification. There was something of a competition between UserLand, Netscape, and O'Reilly to develop the different versions of RSS before the official RSS 2.0 specification was released. For a detailed history of these versions, see http://www.rss-specifications.com/history-rss.htm.

The current version of RSS is 2.0, and it is the common format for publishing RSS feeds these days. Like RSS, there is another format that uses XML for publishing web feeds: the Atom feed, which is most commonly used in wiki and blogging software. Please refer to http://en.wikipedia.org/wiki/ATOM for details.

An RSS icon is used to denote links to RSS feeds. If you're using Mozilla's Firefox web browser, you're likely to see this icon in the address bar whenever a page offers an RSS feed you can subscribe to. Web browsers like Firefox and Safari discover available RSS feeds in web pages by looking at the Internet media type application/rss+xml. The following tag specifies that a web page is linked with the RSS feed URL http://www.example.com/rss.xml:

    <link href="http://www.example.com/rss.xml" rel="alternate"
          type="application/rss+xml" title="Sitewide RSS Feed" />

Example of RSS 2.0 format

First of all, let's look at a simple example of the RSS format:

    <?xml version="1.0" encoding="UTF-8" ?>
    <rss version="2.0">
      <channel>
        <title>Title of the feed</title>
        <link>http://www.examples.com</link>
        <description>Description of feed</description>
        <item>
          <title>News1 heading</title>
          <link>http://www.example.com/news-1</link>
          <description>detail of news1</description>
        </item>
        <item>
          <title>News2 heading</title>
          <link>http://www.example.com/news-2</link>
          <description>detail of news2</description>
        </item>
      </channel>
    </rss>

The first line is the XML declaration, which indicates that the XML version is 1.0 and the character encoding is UTF-8. UTF-8 supports many European and Asian characters, so it is widely used as the character encoding on the web.
The next line is the rss declaration, which declares that this is an RSS document of version 2.0.

The next line contains the <channel> element, which is used to describe the details of the RSS feed. The <channel> element must have three required child elements: <title>, <link>, and <description>. The title tag contains the title of that particular feed. Similarly, the link element contains the hyperlink of the channel, and the description tag describes or carries the main information of the channel; this tag usually contains detailed information. Furthermore, each <channel> element may have one or more <item> elements, which contain the stories of the feed. Each <item> element must have the same three elements—<title>, <link>, and <description>—whose use is similar to those of the channel, but they describe each individual item. Finally, the last two lines are the closing tags for the <channel> and <rss> elements.

Creating the RSS web widget

The RSS widget we're going to build is a simple one that displays the headlines from an RSS feed, along with the title of the feed. This is another widget that uses some JavaScript, PHP, CSS, and HTML. The content of the widget is displayed within an iframe, so when you set up the widget, you have to adjust the height and width. To parse the RSS feed in XML format, I've used the popular PHP RSS parser Magpie RSS. The homepage of Magpie RSS is located at http://magpierss.sourceforge.net/.

Introduction to Magpie RSS

Before writing the code, let's understand the benefits of using the Magpie framework and how it works:

- It is easy to use.
- While other RSS parsers are normally limited to parsing certain RSS versions, Magpie parses most RSS formats, i.e. RSS 0.90 to 2.0, as well as Atom feeds.
- Magpie RSS supports an integrated object cache, which means that a second request to parse the same RSS feed is fast—it will be fetched from the cache.

Now, let's quickly understand how Magpie RSS is used to parse an RSS feed. I'm going to pick the example from their homepage for demonstration:

    require_once 'rss_fetch.inc';
    $url = 'http://www.getacoder.com/rss.xml';
    $rss = fetch_rss($url);
    echo "Site: ", $rss->channel['title'], "<br>";
    foreach ($rss->items as $item) {
        $title = $item['title'];
        $url   = $item['link'];
        echo "<a href='$url'>$title</a><br>";
    }

If you're interested in trying other PHP RSS parsers, you might like to check out SimplePie (http://simplepie.org/) and LastRSS (http://lastrss.oslab.net/).

You can see in the first line how the rss_fetch.inc file is included in the working file. After that, the URL of the RSS feed from getacoder.com is assigned to the $url variable. The fetch_rss() function of Magpie is used to fetch the data and convert it into RSS objects. In the next line, the title of the RSS feed is displayed using $rss->channel['title']. The remaining lines display each of the RSS feed's items. Each feed item is stored within the $rss->items array, and the foreach() loop is used to loop through each element of the array.

Writing the code for our RSS widget

As I've already discussed, this widget is going to use an iframe to display its content, so let's look at the JavaScript code for embedding the iframe within the HTML code.
    var widget_string = '<iframe src="http://www.yourserver.com/rsswidget/rss_parse_handler.php?rss_url=';
    widget_string += encodeURIComponent(rss_widget_url);
    widget_string += '&maxlinks=' + rss_widget_max_links;
    widget_string += '" height="' + rss_widget_height + '" width="' + rss_widget_width + '"';
    widget_string += ' style="border:1px solid #FF0000;"';
    widget_string += ' scrolling="no" frameborder="0"></iframe>';
    document.write(widget_string);

In the above code, the widget_string variable contains the markup for displaying the widget. The source of the iframe is set to rss_parse_handler.php. The URL of the RSS feed and the maximum number of headlines to show are passed to rss_parse_handler.php via the GET method, using the rss_url and maxlinks parameters respectively. The values of these parameters are assigned from the JavaScript variables rss_widget_url and rss_widget_max_links. The width and height of the iframe are also assigned from JavaScript variables, namely rss_widget_width and rss_widget_height.

The red border on the widget is displayed by assigning 1px solid #FF0000 to the border property using inline CSS. Since inline CSS is used for the border, the frameborder attribute is set to 0 (that is, the frame's own border is turned off). Displaying borders with CSS has some benefits over the frameborder attribute: with CSS, 1px dashed #FF0000 (border-width border-style border-color) lets you display a dashed border (you can't with frameborder), and you can use the border-right, border-left, border-top, and border-bottom properties to display borders on specific sides of the element. The scrolling attribute is set to no here, which means that a scroll bar will not be displayed if the widget content overflows; if you want to show a scroll bar, set this attribute to yes.

The values of JavaScript variables such as rss_widget_url and rss_widget_max_links come from the page where we'll be using this widget. You'll see how the values of these variables are assigned in the section at the end, where we look at how to use this RSS widget.

Understanding CRM Extendibility Architecture

Packt
16 Oct 2015
22 min read

In this article by Mahender Pal, the author of the book Microsoft Dynamics CRM 2015 Application Design, we will see how Microsoft Dynamics CRM provides different components that can be highly extended to meet our custom business requirements. Although CRM provides a rich set of features that help us execute different business operations without any modification, we can still extend its behavior and capabilities with supported customizations.

The following is the extendibility architecture of CRM 2015, where we can see how the different components interact with each other and which components can be extended with the help of the CRM APIs:

(Figure: Extendibility Architecture)

Let's discuss these components one by one, along with the possible extendibility options for them.

CRM databases

During installation of CRM, two databases are created: organization and configuration. The organization database is created with the name organization_MSCRM and the configuration database with the name MSCRM_CONFIG. The organization database contains the complete organization-related data, stored in different entities. For every entity in CRM, there is a corresponding table with the name Entityname+Base. Although it is technically possible, any direct data modification in these tables is not supported; any changes to CRM data should be made by using the CRM APIs only. Adding indexes to the CRM database is supported; you can refer to https://msdn.microsoft.com/en-us/library/gg328350.aspx for more details on supported customizations.

Apart from the tables, CRM also creates a special view for every entity, with the name Filtered+Entityname. These filtered views provide data based on the user's security role; so, for example, if you are a salesperson you will only get data based on the salesperson role while querying filtered views. We use filtered views for writing custom reports for CRM. You can find more details on filtered views at https://technet.microsoft.com/en-us/library/dn531182.aspx. The entity relationship diagram for CRM 2015 can be downloaded from https://msdn.microsoft.com/en-us/library/jj602918.aspx.

The Platform Layer

The platform layer works as middleware between the CRM UI and the database. It is responsible for executing inbuilt and custom business logic and for moving data back and forth. When we browse a CRM application, the platform layer presents the data that is available based on the current user's security roles. Custom components that we develop are deployed on top of the platform layer.

Process

A process is a way of implementing automation in CRM. We can set up processes using the process designer, and we can also develop custom assemblies to enhance the capability of the workflow designer and include custom steps.

CRM web services

CRM provides Windows Communication Foundation (WCF) based web services, which help us interact with organization data and metadata; whenever we want to create or modify an entity's data, or customize a CRM component's metadata, we need to utilize these web services. We can also develop our own custom web services with the help of the CRM web services if required. We will discuss CRM web services in more detail in a later topic.

Plugins

Plugins are another way of extending CRM's capabilities. These are .NET assemblies that help us implement our custom business logic in the CRM platform, executing our business logic before or after the main platform operation.
We can also run our plugins inside a transaction that is similar to a SQL transaction, which means that if any operation fails, all the changes made under the transaction are rolled back. We can set up both asynchronous and synchronous plugins.

Reporting

CRM provides rich reporting capabilities. We have many out-of-the-box reports for every module, such as sales, marketing, and service. We can also create new reports and customize existing reports in Visual Studio. While working with reports, we always utilize the entity-specific filtered views so that data is exposed based on the user's security role; we should never query a CRM table directly while writing reports. Custom reports can be developed using the out-of-the-box Report Wizard or using Visual Studio. The Report Wizard helps us create reports by following a couple of screens where we select an entity and the filter criteria for our report, with different rendering and formatting options. In Visual Studio, we can create two types of reports: SSRS and FetchXML. Custom SSRS reports are supported in CRM on-premise deployments, whereas CRM Online supports only FetchXML reports. You can refer to https://technet.microsoft.com/en-us/library/dn531183.aspx for more details on report development.

Client extensions

We can also extend the CRM application from the web and Outlook clients, and we can develop custom utility tools for them; the Sitemap and Command Bar editor add-ons are examples of such applications. We can modify different CRM components such as the entity structure, web resources, business rules, and other components, and CRM web services can be utilized to map custom requirements. We can make navigational changes from the CRM clients by modifying the Sitemap and Command Bar definitions.

Integrated extensions

We can also develop custom extensions in the form of custom utilities and middle layers that interact with CRM using its APIs. This can be a portal application or any .NET or non-.NET utility. The CRM SDK comes with many tools that help us develop these integrated applications. We will discuss custom integration with CRM in a later topic.

Introduction to Microsoft Dynamics CRM SDK

The Microsoft Dynamics CRM SDK contains resources that help us develop code for CRM. It includes the different CRM APIs and helpful resources such as sample code (both server side and client side) and a list of tools to facilitate CRM development. It provides complete documentation of the APIs, methods, and their uses, so if you are a CRM developer, technical consultant, or solution architect, the first thing you need to do is download the latest CRM SDK. You can download the latest version from http://www.microsoft.com/en-us/download/details.aspx?id=44567.

The following list describes the different resources that come with the CRM SDK:

- Bin: This folder contains all the assemblies of CRM.
- Resources: This folder contains different resources such as data import maps, the default entity ribbon XML definitions, and the image icons of the CRM application.
- SampleCode: This folder contains all the server-side and client-side sample code that can help you get started with CRM development. It also contains sample PowerShell commands.
- Schemas: This folder contains XML schemas for CRM entities, command bars, and the sitemap. These schemas can be imported into Visual Studio while editing customization .xml files manually.
- Solutions: This folder contains the CRM 2015 solution compatibility chart and one portal solution.
- Templates: This folder contains the Visual Studio templates that can be used to develop components for Unified Service Desk and CRM package deployment.
- Tools: This folder contains the tools that ship with the CRM SDK, such as the metadata browser (which can be used to get CRM entity metadata), the plugin registration tool, the web resource utility, and others.
- Walkthroughs: This folder contains console and web portal applications.
- CrmSdk2015: This is the .chm help file.
- EntityMetadata: This file contains entity metadata information.
- Message-entity support for plugins: This is a very important file that will help you understand which events are available on which entities for writing custom business logic (plugins).

Learning about CRM assemblies

The CRM SDK ships with different assemblies under the Bin folder that we can use to write CRM application extensions. We can utilize them to interact with CRM metadata and organization data. The following list provides details about the most common CRM assemblies:

- Microsoft.Xrm.Sdk.Deployment: This assembly is used to work with CRM organizations; we can create, update, and delete organizations using its methods.
- Microsoft.Xrm.Sdk: This is a very important assembly, as it contains the core types and methods and is used by every CRM extension. It contains different namespaces for different functionality, for example Query, which contains classes to query the CRM database; Metadata, which helps us interact with the metadata of the CRM application; Discovery, which helps us interact with the discovery service (we will discuss the discovery service in a later topic); and Messages, which provides the request and response classes for CRUD operations along with the metadata classes.
- Microsoft.Xrm.Sdk.Workflow: This assembly helps us extend the capability of CRM workflows. It contains the methods and types required for writing custom workflow activities, including the Activities namespace, which is used by the CRM workflow designer.
- Microsoft.Crm.Sdk.Proxy: This assembly contains all the non-core request and response messages.
- Microsoft.Xrm.Tooling: This is a new assembly added to the SDK that helps us write Windows client applications for CRM.
- Microsoft.Xrm.Portal: This assembly provides methods for portal development, including security management, cache management, and content management.
- Microsoft.Xrm.Client: This assembly is used in CRM client applications to communicate with CRM. It contains the connection classes that we can use to set up a connection using the different CRM authentication methods.

We will be working with these APIs in later topics.

Understanding CRM web services

Microsoft Dynamics CRM provides web service support that can be used to work with CRM data and metadata. The CRM web services are described here.

The deployment service

The deployment service helps us work with organizations. Using this web service, we can create a new organization, or delete or update existing organizations.

The discovery service

The discovery service helps us identify the correct web service endpoints for a user. Let's take an example where we have multiple CRM organizations and we want to get the list of organizations that the current user has access to; we can utilize the discovery service to find out the unique organization IDs, endpoint URLs, and other details. We will work with the discovery service in a later topic.

The organization service

The organization service is used to work with CRM organization data and metadata.
It provides the CRUD methods and other request and response messages. For example, if we want to create or modify an existing entity record, we can use the organization service methods.

The organization data service

The organization data service is a RESTful service that we can use to get data from CRM. We can use this service's CRUD methods to work with data, but we can't use it to work with CRM metadata.

To work with the CRM web services, we can use the following two programming models:

- Late bound
- Early bound

Early bound

In early bound classes, we use proxy classes generated by CrmSvcUtil.exe. This utility is included in the CRM SDK under the SDK\Bin path. It generates classes for every entity available in the CRM system. In this programming model, schema names are used to refer to an entity and its attributes. This provides IntelliSense support, so we don't need to remember entity and attribute names; as soon as we type the first letter of an entity name, it will display all the entities with that name.

We can use the following syntax to generate the proxy class for CRM on-premise:

    CrmSvcUtil.exe /url:http://<ServerName>/<organizationName>/XRMServices/2011/Organization.svc
        /out:proxyfilename.cs /username:<username> /password:<password>
        /domain:<domainName> /namespace:<outputNamespace>
        /serviceContextName:<serviceContextName>

The following is the command to generate the proxy for CRM Online:

    CrmSvcUtil.exe /url:https://orgname.api.crm.dynamics.com/XRMServices/2011/Organization.svc
        /out:proxyfilename.cs /username:"myname@myorg.onmicrosoft.com"
        /password:"myp@ssword!"

Organization service URLs can be obtained by navigating to Settings | Customization | Developer Resources. We are using CRM Online for our demo. In the case of CRM Online, the organization service URL depends on the region where your organization is hosted; you can refer to https://msdn.microsoft.com/en-us/library/gg328127.aspx for details about the different CRM Online regions.

We can follow these steps to generate the proxy class for CRM Online:

1. Open the Developer Command Prompt under Visual Studio Tools on the development machine where Visual Studio is installed.
2. Go to the Bin folder under the CRM SDK and run the preceding command, for example:

    CrmSvcUtil.exe /url:https://ORGName.api.crm5.dynamics.com/XRMServices/2011/Organization.svc
        /out:Xrm.cs /username:"user@ORGName.onmicrosoft.com" /password:"password"

(Screenshot: CrmSvcUtil)

Once this file is generated, we can add it to our Visual Studio solution.

Late bound

In the late bound programming model, we use the generic Entity object to refer to entities, which means that we can also refer to an entity that is not part of CRM yet. In this programming model, we need to use logical names to refer to an entity and its attributes, and no IntelliSense support is available during code development. The following is an example of using the Entity class:

    Entity AccountObj = new Entity("account");

Using the client APIs for a CRM connection

The CRM client API helps us connect to CRM easily from .NET applications. It simplifies the developer's task of setting up a connection to CRM using a simplified connection string, which we can then use to create an organization service object. The following is the setup of the console application for our demo:

1. Open Visual Studio and go to File | New | Project.
2. Select Visual C# | Console Application and enter CRMConnectiondemo in the Name textbox, as shown in the following screenshot:

(Screenshot: Console application project)

Make sure you have installed .NET 4.5.2 and the .NET 4.5.2 developer pack before creating the sample application.

3. Right-click on References and add the following CRM SDK assemblies:

   - Microsoft.Xrm.Sdk
   - Microsoft.Xrm.Client

4. We also need to add the following .NET assemblies:

   - System.Runtime.Serialization
   - System.Configuration

5. Make sure to add the App.config file if it is not available under the project: right-click on Project Name | Add Item and add an Application Configuration File, as shown here:

(Screenshot: App.config file)

6. We need to add a connection string to our app.config file; we are using CRM Online for our demo application, so we will use the following connection string:

    <?xml version="1.0" encoding="UTF-8"?>
    <configuration>
      <connectionStrings>
        <add name="OrganizationService"
             connectionString="Url=https://CRMOnlineServerURL;
             Username=User@ORGNAME.onmicrosoft.com; Password=Password;" />
      </connectionStrings>
    </configuration>

7. Right-click on the project, select Add Existing File, and browse to the proxy file that we generated earlier to add it to our console application.

8. Now we can add two classes to our application—one for early bound and another for late bound—and name them Earlybound.cs and Latebound.cs.

You can refer to https://msdn.microsoft.com/en-us/library/jj602970.aspx for connection strings for other deployment types if you are not using CRM Online. After adding the preceding classes, our solution structure should look like this:

(Screenshot: solution structure)

Working with organization web services

Whenever we need to interact with the CRM SDK, we need to use the CRM web services. Most of the time, we will be working with the organization service to create and modify data. The organization service contains the following methods to interact with metadata and organization data; we will add these methods to the corresponding Earlybound.cs and Latebound.cs files in our console application.

Create

This method is used to create system or custom entity records. We can use this method when we want to create entity records using the CRM SDK—for example, if we need to develop a utility for data import, or if we want to create a lead record in Dynamics CRM from a custom website. This method takes an entity object as a parameter and returns the GUID of the created record. The following is an example of creating an account record with early and late bound code.
In our code, we set some of the basic account entity fields, which have different data types.

Early bound:

    private void CreateAccount()
    {
        using (OrganizationService crmService = new OrganizationService("OrganizationService"))
        {
            Account accountObject = new Account
            {
                Name = "HIMBAP Early Bound Example",
                Address1_City = "Delhi",
                CustomerTypeCode = new OptionSetValue(3),
                DoNotEMail = false,
                Revenue = new Money(5000),
                NumberOfEmployees = 50,
                LastUsedInCampaign = new DateTime(2015, 3, 2)
            };
            crmService.Create(accountObject);
        }
    }

Late bound:

    private void Create()
    {
        using (OrganizationService crmService = new OrganizationService("OrganizationService"))
        {
            Entity accountObj = new Entity("account");
            // setting string values
            accountObj["name"] = "HIMBAP";
            accountObj["address1_city"] = "Delhi";
            accountObj["accountnumber"] = "101";
            // setting an option set value
            accountObj["customertypecode"] = new OptionSetValue(3);
            // setting a boolean
            accountObj["donotemail"] = false;
            // setting money
            accountObj["revenue"] = new Money(5000);
            // setting an entity reference/lookup
            accountObj["primarycontactid"] = new EntityReference("contact",
                new Guid("F6954457-6005-E511-80F4-C4346BADC5F4"));
            // setting an integer
            accountObj["numberofemployees"] = 50;
            // setting a date/time
            accountObj["lastusedincampaign"] = new DateTime(2015, 05, 13);
            Guid AccountID = crmService.Create(accountObj);
        }
    }

We can also use the Create method to create a primary and a related entity in a single call. For example, in the following call we are creating an account and a related contact record together:

    private void CreateRecordwithRelatedEntity()
    {
        using (OrganizationService crmService = new OrganizationService("OrganizationService"))
        {
            Entity accountEntity = new Entity("account");
            accountEntity["name"] = "HIMBAP Technology";

            Entity relatedContact = new Entity("contact");
            relatedContact["firstname"] = "Vikram";
            relatedContact["lastname"] = "Singh";

            EntityCollection Related = new EntityCollection();
            Related.Entities.Add(relatedContact);

            Relationship accountcontactRel = new Relationship("contact_customer_accounts");
            accountEntity.RelatedEntities.Add(accountcontactRel, Related);
            crmService.Create(accountEntity);
        }
    }

In the preceding code, first we create the account entity object, then we create an object for the related contact entity and add it to an entity collection. After that, we add the related entity collection to the primary entity with the entity relationship name—in this case, contact_customer_accounts. Finally, we pass our account entity object to the Create method, which creates the account and the related contact record. When we run this code, it will create the account as shown here:

(Screenshot: account with related contact record)

Update

This method is used to update the properties of an existing record—for example, we might want to change the account's city or other address information. This method takes an entity object as the parameter, and we need to make sure the primary key field is set in order to update a record.
The following are examples of updating the account's city and setting the state property.

Early bound:

    private void Update()
    {
        using (OrganizationService crmService = new OrganizationService("OrganizationService"))
        {
            Account accountUpdate = new Account
            {
                AccountId = new Guid("85A882EE-A500-E511-80F9-C4346BAC0E7C"),
                Address1_City = "Lad Bharol",
                Address1_StateOrProvince = "Himachal Pradesh"
            };
            crmService.Update(accountUpdate);
        }
    }

Late bound:

    private void Update()
    {
        using (OrganizationService crmService = new OrganizationService("OrganizationService"))
        {
            Entity accountUpdate = new Entity("account");
            accountUpdate["accountid"] = new Guid("85A882EE-A500-E511-80F9-C4346BAC0E7C");
            accountUpdate["address1_city"] = "Lad Bharol";
            accountUpdate["address1_stateorprovince"] = "Himachal Pradesh";
            crmService.Update(accountUpdate);
        }
    }

Similarly to the Create method, we can also use the Update method to update the primary entity and a related entity in a single call, as follows:

    private void Updateprimaryentitywithrelatedentity()
    {
        using (OrganizationService crmService = new OrganizationService("OrganizationService"))
        {
            Entity accountToUpdate = new Entity("account");
            accountToUpdate["name"] = "HIMBAP Technology";
            accountToUpdate["websiteurl"] = "www.himbap.com";
            accountToUpdate["accountid"] = new Guid("29FC3E74-B30B-E511-80FC-C4346BAD26CC"); // replace with actual account id

            Entity relatedContact = new Entity("contact");
            relatedContact["firstname"] = "Vikram";
            relatedContact["lastname"] = "Singh";
            relatedContact["jobtitle"] = "Sr Consultant";
            relatedContact["contactid"] = new Guid("2AFC3E74-B30B-E511-80FC-C4346BAD26CC"); // replace with actual contact id

            EntityCollection Related = new EntityCollection();
            Related.Entities.Add(relatedContact);

            Relationship accountcontactRel = new Relationship("contact_customer_accounts");
            accountToUpdate.RelatedEntities.Add(accountcontactRel, Related);
            crmService.Update(accountToUpdate);
        }
    }

Retrieve

This method is used to get data from CRM based on the primary key field, which means that it returns only one record at a time. This method has the following three parameters:

- Entity: the logical name of the entity, passed as the first parameter
- ID: the primary ID of the record that we want to query
- Columnset: the list of fields that we want to fetch

The following are examples of using the Retrieve method.

Early bound:

    private void Retrieve()
    {
        using (OrganizationService crmService = new OrganizationService("OrganizationService"))
        {
            Account retrievedAccount = (Account)crmService.Retrieve(Account.EntityLogicalName,
                new Guid("7D5E187C-9344-4267-9EAC-DD32A0AB1A30"), // replace with actual account id
                new ColumnSet(new string[] { "name" }));
        }
    }

Late bound:

    private void Retrieve()
    {
        using (OrganizationService crmService = new OrganizationService("OrganizationService"))
        {
            Entity retrievedAccount = (Entity)crmService.Retrieve("account",
                new Guid("7D5E187C-9344-4267-9EAC-DD32A0AB1A30"),
                new ColumnSet(new string[] { "name" }));
        }
    }

RetrieveMultiple

The RetrieveMultiple method provides options to define a query object where we can define criteria to fetch records from primary and related entities. This method takes the query object as a parameter and returns an entity collection as the response.
The following are examples of using RetrieveMultiple with the late and early bound models.

Late bound:

    private void RetrieveMultiple()
    {
        using (OrganizationService crmService = new OrganizationService("OrganizationService"))
        {
            QueryExpression query = new QueryExpression
            {
                EntityName = "account",
                ColumnSet = new ColumnSet("name", "accountnumber"),
                Criteria =
                {
                    FilterOperator = LogicalOperator.Or,
                    Conditions =
                    {
                        new ConditionExpression
                        {
                            AttributeName = "address1_city",
                            Operator = ConditionOperator.Equal,
                            Values = { "Delhi" }
                        },
                        new ConditionExpression
                        {
                            AttributeName = "accountnumber",
                            Operator = ConditionOperator.NotNull
                        }
                    }
                }
            };
            EntityCollection entityCollection = crmService.RetrieveMultiple(query);
            foreach (Entity result in entityCollection.Entities)
            {
                if (result.Contains("name"))
                {
                    Console.WriteLine("name ->" + result.GetAttributeValue<string>("name").ToString());
                }
            }
        }
    }

Early bound:

    private void RetrieveMultiple()
    {
        using (OrganizationService crmService = new OrganizationService("OrganizationService"))
        {
            QueryExpression RetrieveAccountsQuery = new QueryExpression
            {
                EntityName = Account.EntityLogicalName,
                ColumnSet = new ColumnSet("name", "accountnumber"),
                Criteria = new FilterExpression
                {
                    Conditions =
                    {
                        new ConditionExpression
                        {
                            AttributeName = "address1_city",
                            Operator = ConditionOperator.Equal,
                            Values = { "Delhi" }
                        }
                    }
                }
            };
            EntityCollection entityCollection = crmService.RetrieveMultiple(RetrieveAccountsQuery);
            foreach (Entity result in entityCollection.Entities)
            {
                if (result.Contains("name"))
                {
                    Console.WriteLine("name ->" + result.GetAttributeValue<string>("name").ToString());
                }
            }
        }
    }

Delete

This method is used to delete entity records from the CRM database. It takes the entity name and the primary ID field as parameters:

    private void Delete()
    {
        using (OrganizationService crmService = new OrganizationService("OrganizationService"))
        {
            crmService.Delete("account", new Guid("85A882EE-A500-E511-80F9-C4346BAC0E7C"));
        }
    }

Associate

This method is used to set up a link between two related entities. It has the following parameters:

- Entity Name: the logical name of the primary entity
- Entity Id: the ID field of the primary entity record
- Relationship: the name of the relationship between the two entities
- Related Entities: the collection of entity references to relate

The following is an example of using this method with early bound types:

    private void Associate()
    {
        using (OrganizationService crmService = new OrganizationService("OrganizationService"))
        {
            EntityReferenceCollection referenceEntities = new EntityReferenceCollection();
            referenceEntities.Add(new EntityReference("account",
                new Guid("38FC3E74-B30B-E511-80FC-C4346BAD26CC")));
            // Create an object that defines the relationship between the contact and the account
            // (we want to set up the primary contact).
            Relationship relationship = new Relationship("account_primary_contact");
            // Associate the contact with the accounts.
            crmService.Associate("contact",
                new Guid("38FC3E74-B30B-E511-80FC-C4346BAD26CC"),
                relationship, referenceEntities);
        }
    }

Disassociate

This method is the reverse of Associate: it is used to remove a link between two entity records. It takes the same set of parameters as the Associate method.
The following is an example of disassociating account and contact records:

    private void Disassociate()
    {
        using (OrganizationService crmService = new OrganizationService("OrganizationService"))
        {
            EntityReferenceCollection referenceEntities = new EntityReferenceCollection();
            referenceEntities.Add(new EntityReference("account",
                new Guid("38FC3E74-B30B-E511-80FC-C4346BAD26CC")));
            // Create an object that defines the relationship between the contact and the account.
            Relationship relationship = new Relationship("account_primary_contact");
            // Disassociate the records.
            crmService.Disassociate("contact",
                new Guid("15FC3E74-B30B-E511-80FC-C4346BAD26CC"),
                relationship, referenceEntities);
        }
    }

Execute

Apart from the common methods that we have discussed, the Execute method helps us run requests that are not available as direct methods. This method takes a request as a parameter and returns a response as the result. All the common methods that we used previously can also be issued as requests through the Execute method.

The following is an example of working with metadata and creating a custom event entity using the Execute method:

    private void Usingmetadata()
    {
        using (OrganizationService crmService = new OrganizationService("OrganizationService"))
        {
            CreateEntityRequest createRequest = new CreateEntityRequest
            {
                Entity = new EntityMetadata
                {
                    SchemaName = "him_event",
                    DisplayName = new Label("Event", 1033),
                    DisplayCollectionName = new Label("Events", 1033),
                    Description = new Label("Custom entity demo", 1033),
                    OwnershipType = OwnershipTypes.UserOwned,
                    IsActivity = false,
                },
                PrimaryAttribute = new StringAttributeMetadata
                {
                    SchemaName = "him_eventname",
                    RequiredLevel = new AttributeRequiredLevelManagedProperty(AttributeRequiredLevel.None),
                    MaxLength = 100,
                    FormatName = StringFormatName.Text,
                    DisplayName = new Label("Event Name", 1033),
                    Description = new Label("Primary attribute demo", 1033)
                }
            };
            crmService.Execute(createRequest);
        }
    }

In the preceding code, we utilized the CreateEntityRequest class, which is used to create a custom entity. After executing the code, we can check the entity under the default solution by navigating to Settings | Customizations | Customize the System. You can refer to https://msdn.microsoft.com/en-us/library/gg309553.aspx to see the other requests that can be used with the Execute method.

Testing the console application

After adding the preceding methods, we can test our console application by writing a simple test method that calls our CRUD methods. For example, in the following code we have added a method to Earlybound.cs:

    public void EarlyboundTesting()
    {
        Console.WriteLine("Creating Account Record.....");
        CreateAccount();
        Console.WriteLine("Updating Account Record.....");
        Update();
        Console.WriteLine("Retrieving Account Record.....");
        Retrieve();
        Console.WriteLine("Deleting Account Record.....");
        Delete();
    }

After that, we can call this method in the Main method of the Program.cs file like this:

    static void Main(string[] args)
    {
        Earlybound obj = new Earlybound();
        Console.WriteLine("Testing Early bound");
        obj.EarlyboundTesting();
    }

Press F5 to run the console application.

Summary

In this article, you learned about the Microsoft Dynamics CRM 2015 SDK. We discussed the various options available in the CRM SDK, the different CRM APIs and their uses, and the different programming models for working with the CRM SDK using the methods of the CRM web services, and we created a sample console application.

Troux Enterprise Architecture: Managing the EA function

Packt
25 Aug 2010
9 min read
(For more resources on Troux, see here.) Targeted charter Organizations need a mission statement and charter. What should the mission and charter be for EA? The answer to this question depends on how the CIO views the function and where the function resides on the maturity model. The CIO could believe that EA should be focused on setting standards and identifying cost reduction opportunities. Conversely, the CIO could believe the function should focus on evaluation of emerging technologies and innovation. These two extremes are polar opposites. Each would require a different staffing model and different success criteria. The leader of EA must understand how the CIO views the function, as well as what the culture of the business will accept. Are IT and the business familiar with top-down direction, or does the company normally follow a consensus style of management? Is there a market leadership mentality or is the company a fast follower regarding technical innovation? To run a successful EA operation, the head of Enterprise Architecture needs to understand these parameters and factor them into the overall direction of the department. The following diagram illustrates finding the correct position between the two extremes of being focused on standards or innovation: Using standards to enforce polices on a culture that normally works through consensus will not work very well. Also, why focus resources on developing a business strategy or evaluating emerging technology if the company is totally focused on the next quarter's financial results? Sometimes, with the appropriate support from the CIO and other upper management, EA can become the change agent to encourage long-term planning. If a company has been too focused on tactics, EA can be the only department in IT that has the time and resources available to evaluate emerging solutions. The leader of the architecture function must understand the overall context in which the department resides. This understanding will help to develop the best structure for the department and hire people with the correct skill set. Let us look at the organization structure of the EA function. How large should the department be, where should the department report, and what does the organization structure look like? In most cases, there are also other areas within IT that perform what might be considered EA department responsibilities. How should the structure account for "domain architects" or "application architects" who do not report to the head of Enterprise Architecture? As usual, the answer to these questions is "it depends". The architecture department can be sized appropriately with an understanding of the overall role Enterprise Architecture plays within the broader scope of IT. If EA also runs the project management office (PMO) for IT, then the department is likely to be as large as fifty or more resources. In the case where the PMO resides outside of architecture, the architecture staffing level is normally between fifteen and thirty people. To be effective in a large enterprise, (five hundred or more applications development personnel) the EA department should be no smaller than about fifteen people. The following diagram provides a sample organization chart that assumes a balance is required between being focused on technical governance and IT strategy: The sample organization chart shows the balance between resources applied to tactical work and strategic work. The left side of the chart shows the teams focused on governance. 
Responsibilities include managing the ARB and maintaining standards and the architecture website. An architecture website is critical to maintaining awareness of the standards and best practices developed by the EA department. The sample organizational model assumes that a team of Solution Architects is centralized. These are experienced resources who help project teams with major initiatives that span the enterprise. These resources act like internal consultants and, therefore, must possess a broad spectrum of skills. Depending on the overall philosophy of the CIO, the Domain Architects may also be centralized. These are people with a high degree of experience within specific major technical domains. The domains match to the overall architectural framework of the enterprise and include platforms, software (including middleware), network, data, and security. These resources could also be decentralized into various applications development or engineering groups within IT. If Domain Architects are decentralized, at least two resources are needed within EA to ensure that each area is coordinated with the others across technical disciplines. If EA is responsible for evaluation of emerging technologies, then a team is needed to focus on execution of proof-of-architecture projects and productivity tool evaluations. A service can be created to manage various contracts and relationships with outside consulting agencies. These are typically companies focused on providing research, tracking IT advancements, and, in some cases, monitoring technology evolution within the company's industry. There are leaders (management) in each functional area within the architecture organization. As the resources under each area are limited, a good practice is to assume the leadership positions are also working positions. Depending on the overall culture of the company, the leadership positions could be Director- or Manager-level positions. In either case, these leaders must work with senior leaders across IT, the business, and outside vendors. For this reason, to be effective, they must be people with senior titles granted the authority to make important recommendations and decisions on a daily basis. In most companies, there is considerable debate about whether standards are set by the respective domain areas or by the EA department. The leader of EA, working with the CIO or CTO, must be flexible and able to adapt to the culture. If there is a need to centralize, then the architecture team must take steps to ensure there is buy-in for standards and ensure that governance processes are followed. This is done by building partnerships with the business and IT areas that control the allocation of funds to important projects. If the culture believes in decentralized standards management, then the head of architecture must ensure that there is one, and only one, official place where standards are documented and managed. The ARB, in this case, becomes the place where various opinions and viewpoints are worked out. However, it must be clear that the ARB is a function of Enterprise Architecture, and those that do not follow the collaborative review processes will not be able to move forward without obtaining a management consensus. Staffing the function Staffing the EA function is a challenge. To be effective, the group must have people who are respected for their technical knowledge and are able to communicate well using consensus and collaboration techniques. 
Finding people with the right combination of skills is difficult. Enterprise Architects may require higher salaries as compared to other staff within IT. Winning the battle with the human resources department about salaries and reporting levels within the corporate hierarchy is possible through the use of industry benchmarks. Requesting that jobs be evaluated against similar roles in the same industry will help make the point about what type of people are needed within the architecture department. People working in the EA department are different and here's why. In baseball, professional scouts rate prospects according to a scale on five different dimensions. Players that score high on all five are called "five tool players." These include hitting, hitting for power, running speed, throwing strength, and fielding. In evaluating resources for EA, there are also five major dimensions to consider. These include program management, software architecture, data architecture, network architecture, and platform architecture. As the following figure shows, an experience scale can be established for each dimension, yielding a complete picture of a candidate. People with the highest level of attainment across all five dimensions would be "five tool players". To be the most flexible in meeting the needs of the business and IT, the head of EA should strive for a good mix of resources covering the five dimensions. Resources who have achieved level 4 or level 5 across all of these would be the best candidates for the Solution Architect positions. These resources can do almost anything technical and are valuable across a wide array of enterprise-wide projects and initiatives. Resources who have mastered a particular dimension, such as data architecture or network architecture, are the best candidates for the Domain Architect positions. Software architecture is a broad dimension that includes software design, industry best practices, and middleware. Included within this area would be resources skilled in application development using various programming languages and design styles like object-oriented programming and SOA. As already seen, the Business Architect role spans all IT domains. The best candidates for Business Architecture need not be proficient in the five disciplines of IT architecture, but they will do a better job if they have a good awareness of what IT Architects do. Business Architects may be centralized and report into the EA function, or they may be decentralized across IT or even reside within business units. They are typically people with deep knowledge of business functions, business processes, and applications. Business Architects must be good communicators and have strong analytical abilities. They should be able to work without a great deal of supervision, be good at planning work, and can be trusted to deliver results per a schedule. Following are some job descriptions for these resources. They are provided as samples because each company will have its own unique set. Vice President/Director of Enterprise Architecture The Vice President/Director of Enterprise Architecture would normally have more than 10 or 15 years of experience depending on the circumstances of the organization. He or she would have experience with, and probably has mastered, all five of the key architecture skill set dimensions. The best resource is one with superior communication skills who is able to effect change across large and diverse organizations. 
The resource will also have experience within the industry in which the company competes. Leadership qualities are the most important aspect of this role, but having a technical background is also important. This person must be able to translate complex ideas, technology, and programs into language upper management can relate to. This person is a key influencer on technical decisions that affect the business on a long-term basis.

An Introduction to Reactive Programming

Packt
24 Jun 2015
23 min read

In this article written by Nickolay Tsvetinov, author of the book Learning Reactive Programming with Java 8, we will present RxJava (https://github.com/ReactiveX/RxJava), an open source Java implementation of the reactive programming paradigm. Writing code using RxJava requires a different kind of thinking, but it will give you the power to create complex logic out of simple pieces of well-structured code. In this article, we will cover:

- What reactive programming is
- Reasons to learn and use this style of programming
- Setting up RxJava and comparing it with familiar patterns and structures
- A simple example with RxJava

What is reactive programming?

Reactive programming is a paradigm that revolves around the propagation of change. In other words, if a program propagates all the changes that modify its data to all the interested parties (users, other programs, components, and subparts), then this program can be called reactive.

A simple example of this is Microsoft Excel. If you set a number in cell A1 and another number in cell B1, and set cell C1 to SUM(A1, B1), then whenever A1 or B1 changes, C1 will be updated to be their sum. Let's call this the reactive sum.

What is the difference between assigning a simple variable c to be equal to the sum of the a and b variables and the reactive sum approach? In a normal Java program, when we change a or b, we have to update c ourselves. In other words, the change in the flow of the data represented by a and b is not propagated to c. Here is this illustrated through source code:

    int a = 4;
    int b = 5;
    int c = a + b;
    System.out.println(c); // 9

    a = 6;
    System.out.println(c); // 9 again, but if 'c' was tracking the changes of 'a' and 'b',
                           // it would've been 6 + 5 = 11

This is a very simple explanation of what "being reactive" means. Of course, there are various implementations of this idea, and there are various problems that these implementations must solve. (A short sketch of how such a reactive sum can look with RxJava appears below, right after we list the requirements modern applications have to meet.)

Why should we be reactive?

The easiest way for us to answer this question is to think about the requirements we have while building applications these days. While 10-15 years ago it was normal for websites to go through maintenance or to have a slow response time, today everything should be online 24/7 and should respond with lightning speed; if it's slow or down, users will prefer an alternative service. Today, slow means unusable or broken. We are working with greater volumes of data that we need to serve and process fast. HTTP failures weren't something rare in the recent past, but now we have to be fault-tolerant and give our users readable and reasonable message updates.

In the past, we wrote simple desktop applications, but today we write web applications that should be fast and responsive. In most cases, these applications communicate with a large number of remote services. These are the new requirements we have to fulfill if we want our software to be competitive. So, in other words, we have to be:

- Modular/dynamic: This way, we will be able to have 24/7 systems, because modules can go offline and come online without breaking or halting the entire system. Additionally, this helps us better structure our applications as they grow larger, and manage their code base.
- Scalable: This way, we are going to be able to handle a huge amount of data or large numbers of user requests.
- Fault-tolerant: This way, the system will appear stable to its users.
- Responsive: This means fast and available.
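As promised above, here is a minimal sketch of the reactive sum written with RxJava. It uses the library's BehaviorSubject and combineLatest types, which this excerpt has not introduced yet, so treat it as a preview under that assumption rather than as part of the article's step-by-step material (it also assumes the RxJava 1.0.8 dependency that is set up in the next section):

    import rx.Observable;
    import rx.subjects.BehaviorSubject;

    public class ReactiveSum {
        public static void main(String[] args) {
            // 'a' and 'b' are sources of values that can change over time.
            BehaviorSubject<Integer> a = BehaviorSubject.create(4);
            BehaviorSubject<Integer> b = BehaviorSubject.create(5);

            // 'c' reacts to every change of 'a' or 'b' by recomputing the sum.
            Observable<Integer> c = Observable.combineLatest(a, b, (x, y) -> x + y);
            c.subscribe(sum -> System.out.println("c = " + sum)); // prints "c = 9"

            a.onNext(6); // prints "c = 11" without us touching 'c'
        }
    }

Every value pushed into a or b is propagated to the subscriber of c, which is exactly the behaviour the Excel example describes.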
Let's think about how to accomplish these requirements:

- We can become modular if our system is event-driven. We can divide the system into multiple micro-services/components/modules that communicate with each other using notifications. That way, we react to the data flow of the system, represented by notifications.
- To be scalable means to react to ever-growing data, to react to load without falling apart.
- Reacting to failures/errors will make the system more fault-tolerant.
- To be responsive means reacting to user activity in a timely manner.

If the application is event-driven, it can be decoupled into multiple self-contained components. This helps us become more scalable, because we can always add new components or remove old ones without stopping or breaking the system. If errors and failures are passed to the right component, which can handle them as notifications, the application can become more fault-tolerant or resilient. So if we build our system to be event-driven, we can more easily achieve scalability and failure tolerance, and a scalable, decoupled, and error-proof application is fast and responsive to users.

The Reactive Manifesto (http://www.reactivemanifesto.org/) is a document defining the four reactive principles that we mentioned previously. Each reactive system should be message-driven (event-driven). That way, it can become loosely coupled and therefore scalable and resilient (fault-tolerant), which means it is reliable and responsive. Note that the Reactive Manifesto describes a reactive system and is not the same as our definition of reactive programming. You can build a message-driven, resilient, scalable, and responsive application without using a reactive library or language: changes in the application data can be modeled with notifications, which can be propagated to the right handlers. Writing applications using reactive programming is simply the easiest way to comply with the Manifesto.

Introducing RxJava

To write reactive programs, we need a library or a specific programming language, because building something like that ourselves is quite a difficult task. Java is not really a reactive programming language (it provides some tools, like the java.util.Observable class, but they are quite limited). It is a statically typed, object-oriented language, and we write a lot of boilerplate code to accomplish simple things (POJOs, for example). But there are reactive libraries in Java that we can use. In this article, we will be using RxJava (developed by people in the Java open source community, guided by Netflix).

Downloading and setting up RxJava

You can download and build RxJava from GitHub (https://github.com/ReactiveX/RxJava). It requires zero dependencies and supports Java 8 lambdas. The documentation provided by its Javadoc and the GitHub wiki pages is well structured and some of the best out there. Here is how to check out the project and run the build:

    $ git clone git@github.com:ReactiveX/RxJava.git
    $ cd RxJava/
    $ ./gradlew build

Of course, you can also download the prebuilt JAR. For this article, we'll be using version 1.0.8.
If you use Maven, you can add RxJava as a dependency to your pom.xml file: <dependency> <groupId>io.reactivex</groupId> <artifactId>rxjava</artifactId> <version>1.0.8</version> </dependency> Alternatively, for Apache Ivy, put this snippet in your Ivy file's dependencies: <dependency org="io.reactivex" name="rxjava" rev="1.0.8" /> If you use Gradle instead, update your build.gradle file's dependencies as follows: dependencies { ... compile 'io.reactivex:rxjava:1.0.8' ... } Now, let's take a peek at what RxJava is all about. We are going to begin with something well known, and gradually get into the library's secrets. Comparing the iterator pattern and the RxJava observable As a Java programmer, it is highly possible that you've heard or used the Iterator pattern. The idea is simple: an Iterator instance is used to traverse through a container (collection/data source/generator), pulling the container's elements one by one when they are required, until it reaches the container's end. Here is a little example of how it is used in Java: List<String> list = Arrays.asList("One", "Two", "Three", "Four", "Five"); // (1)   Iterator<String> iterator = list.iterator(); // (2)   while(iterator.hasNext()) { // 3 // Prints elements (4) System.out.println(iterator.next()); } Every java.util.Collection object is an Iterable instance which means that it has the method iterator(). This method creates an Iterator instance, which has as its source the collection. Let's look at what the preceding code does: We create a new List instance containing five strings. We create an Iterator instance from this List instance, using the iterator() method. The Iterator interface has two important methods: hasNext() and next(). The hasNext() method is used to check whether the Iterator instance has more elements for traversing. Here, we haven't begun going through the elements, so it will return True. When we go through the five strings, it will return False and the program will proceed after the while loop. The first five times, when we call the next() method on the Iterator instance, it will return the elements in the order they were inserted in the collection. So the strings will be printed. In this example, our program consumes the items from the List instance using the Iterator instance. It pulls the data (here, represented by strings) and the current thread blocks until the requested data is ready and received. So, for example, if the Iterator instance was firing a request to a web server on every next() method call, the main thread of our program would be blocked while waiting for each of the responses to arrive. RxJava's building blocks are the observables. The Observable class (note that this is not the java.util.Observable class that comes with the JDK) is the mathematical dual of the Iterator class, which basically means that they are like the two sides of the same coin. It has an underlying collection or computation that produces values that can be consumed by a consumer. But the difference is that the consumer doesn't "pull" these values from the producer like in the Iterator pattern. It is exactly the opposite; the producer 'pushes' the values as notifications to the consumer. 
Here is an example of the same program but written using an Observable instance:

List<String> list = Arrays.asList("One", "Two", "Three", "Four", "Five"); // (1)

Observable<String> observable = Observable.from(list); // (2)

observable.subscribe(new Action1<String>() { // (3)
  @Override
  public void call(String element) {
    System.out.println(element); // Prints the element (4)
  }
});

Here is what is happening in the code:

1. We create the list of strings in the same way as in the previous example.
2. Then, we create an Observable instance from the list, using the from(Iterable<? extends T> iterable) method. This method is used to create instances of Observable that send all the values synchronously from an Iterable instance (the list in our case) one by one to their subscribers (consumers).
3. Here, we subscribe to the Observable instance. By subscribing, we tell RxJava that we are interested in this Observable instance and want to receive notifications from it. We subscribe using an anonymous class implementing the Action1 interface, by defining a single method—call(T). This method will be called by the Observable instance every time it has a value ready to be pushed. Always creating new Action1 instances may seem too verbose, but Java 8 solves this verbosity.
4. So, every string from the source list will be pushed through to the call() method, and it will be printed.

Instances of the RxJava Observable class behave somewhat like asynchronous iterators, which notify their subscribers/consumers by themselves that there is a next value. In fact, the Observable class adds to the classic Observer pattern (implemented in Java by java.util.Observable; see Design Patterns: Elements of Reusable Object-Oriented Software by the Gang of Four) two things available in the Iterator type:

The ability to signal the consumer that there is no more data available. Instead of calling the hasNext() method, we can attach a subscriber to listen for an OnCompleted notification.
The ability to signal the subscriber that an error has occurred. Instead of try-catching an error, we can attach an error listener to the Observable instance.

These listeners can be attached using the subscribe(Action1<? super T>, Action1<Throwable>, Action0) method. Let's expand the Observable instance example by adding error and completed listeners:

List<String> list = Arrays.asList("One", "Two", "Three", "Four", "Five");

Observable<String> observable = Observable.from(list);
observable.subscribe(new Action1<String>() {
  @Override
  public void call(String element) {
    System.out.println(element);
  }
}, new Action1<Throwable>() {
  @Override
  public void call(Throwable t) {
    System.err.println(t); // (1)
  }
}, new Action0() {
  @Override
  public void call() {
    System.out.println("We've finished!"); // (2)
  }
});

The new things here are:

1. If there is an error while processing the elements, the Observable instance will send this error through the call(Throwable) method of this listener. This is analogous to the try-catch block in the Iterator instance example.
2. When everything finishes, this call() method will be invoked by the Observable instance. This is analogous to using the hasNext() method in order to see if the traversal over the Iterable instance has finished and printing "We've finished!".

We saw how we can use Observable instances and that they are not so different from something familiar to us—the Iterator instance.
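Since the article targets Java 8, it is worth making the remark about verbosity concrete. The following is a minimal sketch of the same subscription written with lambdas; it relies only on the subscribe overloads already shown, because Action1 and Action0 are single-method interfaces and can therefore be expressed as lambda expressions:

List<String> list = Arrays.asList("One", "Two", "Three", "Four", "Five");

Observable<String> observable = Observable.from(list);

// The same three listeners as above, expressed as lambdas instead of anonymous classes
observable.subscribe(
    element -> System.out.println(element),       // onNext
    error -> System.err.println(error),           // onError
    () -> System.out.println("We've finished!")   // onCompleted
);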
These Observable instances can be used for building asynchronous streams and pushing data updates to their subscribers (they can have multiple subscribers). This is an implementation of the reactive programming paradigm. The data is being propagated to all the interested parties—the subscribers. Coding using such streams is a more functional-like implementation of reactive programming. Of course, there are formal definitions and complex terms for it, but this is the simplest explanation. Subscribing to events should be familiar; for example, clicking on a button in a GUI application fires an event which is propagated to the subscribers—handlers. But, using RxJava, we can create data streams from anything—file input, sockets, responses, variables, caches, user inputs, and so on. On top of that, consumers can be notified that the stream is closed, or that there has been an error. So, by using these streams, our applications can react to failure. To summarize, a stream is a sequence of ongoing messages/events, ordered as they are processed in real time. It can be looked at as a value that is changing through time, and these changes can be observed by subscribers (consumers) dependent on it. So, going back to the example from Excel, we have effectively replaced the traditional variables with "reactive variables" or RxJava's Observable instances.

Implementing the reactive sum

Now that we are familiar with the Observable class and the idea of how to use it to code in a reactive way, we are ready to implement the reactive sum, mentioned at the beginning of this article. Let's look at the requirements our program must fulfill:

It will be an application that runs in the terminal.
Once started, it will run until the user enters exit.
If the user enters a:<number>, the a collector will be updated to the <number>.
If the user enters b:<number>, the b collector will be updated to the <number>.
If the user enters anything else, it will be skipped.
When both the a and b collectors have initial values, their sum will automatically be computed and printed on the standard output in the format a + b = <sum>. On every change in a or b, the sum will be updated and printed.

The first piece of code represents the main body of the program:

ConnectableObservable<String> input = from(System.in); // (1)

Observable<Double> a = varStream("a", input); // (2)
Observable<Double> b = varStream("b", input);

ReactiveSum sum = new ReactiveSum(a, b); // (3)

input.connect(); // (4)

There are a lot of new things happening here:

1. The first thing we must do is create an Observable instance representing the standard input stream (System.in). So, we use the from(InputStream) method (its implementation will be presented in the next code snippet) to create a ConnectableObservable variable from System.in. The ConnectableObservable variable is an Observable instance that starts emitting events coming from its source only after its connect() method is called.
2. We create two Observable instances representing the a and b values, using the varStream(String, Observable) method, which we are going to examine later. The source stream for these values is the input stream.
3. We create a ReactiveSum instance, dependent on the a and b values.
4. And now, we can start listening to the input stream.

This code is responsible for building the dependencies in the program and starting it off. The a and b values are dependent on the user input and their sum is dependent on them.
Now let's look at the implementation of the from(InputStream) method, which creates an Observable instance with the java.io.InputStream source:

static ConnectableObservable<String> from(final InputStream stream) {
  return from(new BufferedReader(new InputStreamReader(stream))); // (1)
}

static ConnectableObservable<String> from(final BufferedReader reader) {
  return Observable.create(new OnSubscribe<String>() { // (2)
    @Override
    public void call(Subscriber<? super String> subscriber) {
      if (subscriber.isUnsubscribed()) { // (3)
        return;
      }
      try {
        String line;
        while (!subscriber.isUnsubscribed() &&
            (line = reader.readLine()) != null) { // (4)
          if (line == null || line.equals("exit")) { // (5)
            break;
          }
          subscriber.onNext(line); // (6)
        }
      }
      catch (IOException e) { // (7)
        subscriber.onError(e);
      }
      if (!subscriber.isUnsubscribed()) { // (8)
        subscriber.onCompleted();
      }
    }
  }).publish(); // (9)
}

This is one complex piece of code, so let's look at it step by step:

1. This method implementation converts its InputStream parameter to a BufferedReader object and calls the from(BufferedReader) method. We are doing that because we are going to use strings as data, and working with the Reader instance is easier. So the actual implementation is in the second method.
2. It returns an Observable instance, created using the Observable.create(OnSubscribe) method. This method is the one we are going to use the most in this article. It is used to create Observable instances with custom behavior. The rx.Observable.OnSubscribe interface passed to it has one method, call(Subscriber). This method is used to implement the behavior of the Observable instance, because the Subscriber instance passed to it can be used to emit messages to the Observable instance's subscriber. A subscriber is the client of an Observable instance, which consumes its notifications.
3. If the subscriber has already unsubscribed from this Observable instance, nothing should be done.
4. The main logic is to listen for user input while the subscriber is subscribed. Every line the user enters in the terminal is treated as a message. This is the main loop of the program.
5. If the user enters the word exit and hits Enter, the main loop stops.
6. Otherwise, the message the user entered is passed as a notification to the subscriber of the Observable instance, using the onNext(T) method. This way, we pass everything to the interested parties. It's their job to filter out and transform the raw messages.
7. If there is an IO error, the subscribers are notified with an OnError notification through the onError(Throwable) method.
8. If the program reaches here (by breaking out of the main loop) and the subscriber is still subscribed to the Observable instance, an OnCompleted notification is sent to the subscribers using the onCompleted() method.
9. With the publish() method, we turn the new Observable instance into a ConnectableObservable instance. We have to do this because, otherwise, for every subscription to this Observable instance, our logic will be executed from the beginning. In our case, we want to execute it only once and have all the subscribers receive the same notifications; this is achievable with the use of a ConnectableObservable instance.

This illustrates a simplified way to turn Java's IO streams into Observable instances.
Of course, with this main loop, the main thread of the program will block waiting for user input. This can be prevented using the right Scheduler instances to move the logic to another thread. Now, every line the user types into the terminal is propagated as a notification by the ConnectableObservable instance created by this method. The time has come to look at how we connect our value Observable instances, representing the collectors of the sum, to this input Observable instance. Here is the implementation of the varStream(String, Observable) method, which takes the name of a value and a source Observable instance and returns an Observable instance representing this value:

public static Observable<Double> varStream(final String varName,
    Observable<String> input) {
  final Pattern pattern = Pattern.compile(
      "^\\s*" + varName + "\\s*[:|=]\\s*(-?\\d+\\.?\\d*)$"); // (1)
  return input
    .map(new Func1<String, Matcher>() {
      public Matcher call(String str) {
        return pattern.matcher(str); // (2)
      }
    })
    .filter(new Func1<Matcher, Boolean>() {
      public Boolean call(Matcher matcher) {
        return matcher.matches() && matcher.group(1) != null; // (3)
      }
    })
    .map(new Func1<Matcher, Double>() {
      public Double call(Matcher matcher) {
        return Double.parseDouble(matcher.group(1)); // (4)
      }
    });
}

The map() and filter() methods called on the Observable instance here are part of the fluent API provided by RxJava. They can be called on an Observable instance, creating a new Observable instance that depends on the original one and that transforms or filters the incoming data. Using these methods the right way, you can express complex logic as a series of steps leading to your objective:

1. Our variables are interested only in messages in the format <var_name>: <value> or <var_name> = <value>, so we are going to use this regular expression to filter and process only these kinds of messages. Remember that our input Observable instance sends each line the user writes; it is our job to handle it the right way.
2. Using the messages we receive from the input, we create a Matcher instance using the preceding regular expression as a pattern.
3. We pass through only data that matches the regular expression. Everything else is discarded.
4. Here, the value to set is extracted as a Double number value.

This is how the values a and b are represented by streams of double values, changing in time. Now we can implement their sum. We implemented it as a class that implements the Observer interface, because I wanted to show you another way of subscribing to Observable instances—using the Observer interface. Here is the code:

public static final class ReactiveSum implements Observer<Double> { // (1)
  private double sum;

  public ReactiveSum(Observable<Double> a, Observable<Double> b) {
    this.sum = 0;
    Observable.combineLatest(a, b, new Func2<Double, Double, Double>() { // (5)
      public Double call(Double a, Double b) {
        return a + b;
      }
    }).subscribe(this); // (6)
  }

  public void onCompleted() {
    System.out.println("Exiting last sum was : " + this.sum); // (4)
  }

  public void onError(Throwable e) {
    System.err.println("Got an error!"); // (3)
    e.printStackTrace();
  }

  public void onNext(Double sum) {
    this.sum = sum;
    System.out.println("update : a + b = " + sum); // (2)
  }
}

This is the implementation of the actual sum, dependent on the two Observable instances representing its collectors. It implements the Observer interface.
An Observer instance can be passed to the Observable instance's subscribe(Observer) method; the Observer interface defines three methods, named after the three types of notification: onNext(T), onError(Throwable), and onCompleted(). In our onNext(Double) method implementation, we set the sum to the incoming value and print an update to the standard output. If we get an error, we just print it. When everything is done, we greet the user with the final sum. We implement the sum with the combineLatest(Observable, Observable, Func2) method. This method creates a new Observable instance. The new Observable instance is updated when either of the two Observable instances passed to combineLatest receives an update. The value emitted through the new Observable instance is computed by the third parameter—a function that has access to the latest values of the two source sequences. In our case, we sum up the values. There will be no notification until both of the Observable instances passed to the method emit at least one value. So, we will have the sum only when both a and b have notifications. We subscribe our Observer instance to the combined Observable instance. Here is a sample of what the output of this example would look like:

Reactive Sum. Type 'a: <number>' and 'b: <number>' to try it.
a:4
b:5
update : a + b = 9.0
a:6
update : a + b = 11.0

So this is it! We have implemented our reactive sum using streams of data.

Summary

In this article, we went through the reactive principles and the reasons we should learn and use them. It is not so hard to build a reactive application; it just requires structuring the program in little declarative steps. With RxJava, this can be accomplished by building multiple asynchronous streams connected the right way, transforming the data all the way to its consumers. The two examples presented in this article may look a bit complex and confusing at first glance, but in reality, they are pretty simple. If you want to read more about reactive programming, take a look at Reactive Programming in the Netflix API with RxJava, a fine article on the topic, available at http://techblog.netflix.com/2013/02/rxjava-netflix-api.html. Another fine post introducing the concept can be found here: https://gist.github.com/staltz/868e7e9bc2a7b8c1f754. And these are slides about reactive programming and RX by Ben Christensen, one of the creators of RxJava: https://speakerdeck.com/benjchristensen/reactive-programming-with-rx-at-qconsf-2014.

Resources for Article:

Further resources on this subject:
The Observer Pattern [article]
The Five Kinds of Python Functions Python 3.4 Edition [article]
Discovering Python's parallel programming tools [article]


Part 1: Deploying Multiple Applications with Capistrano from a Single Project

Rodrigo Rosenfeld
01 Jul 2014
9 min read
Capistrano is a deployment tool written in Ruby that is able to deploy projects using any language or framework, through a set of recipes, which are also written in Ruby. Capistrano expects an application to have a single repository and it is able to run arbitrary commands on the server through a non-interactive SSH session. Capistrano was designed assuming that an application is completely described by a single repository with all code belonging to it. For example, your web application is written with Ruby on Rails and simply serving that application would be enough. But what if you decide to use a separate application for managing your users, in a separate language and framework? Or maybe some issue tracker application? You could set up a proxy server to properly deliver each request to the right application based upon the request path, for example. But the problem remains: how do you use Capistrano to manage more complex scenarios like this if it supports a single repository? The typical approach is to integrate Capistrano into each of the component applications and then switch between those projects before deploying each component. Not only is this a lot of work to deploy all of these components, but it may also lead to a duplication of settings. For example, if your main application and the user management application both use the same database for a given environment, you'd have to duplicate this setting in each of the components. For the Market Tracker product, used by LexisNexis clients (which we develop at e-Core for Matterhorn Transactions Inc.), we were looking for a better way to manage many component applications, in lots of environments and servers. We wanted to manage all of them from a single repository, instead of adding Capistrano integration to each of our components' repositories and having to worry about keeping the recipes in sync between each of the maintained repository branches.

Motivation

The Market Tracker application we maintain consists of three different applications: the main one, another to export search results to Excel files, and an administrative interface to manage users and other entities. We host the application on three servers: two for the real thing and another backup server. The first two are identical and allow us to have redundancy and zero-downtime deployments, except for a few cases where we change our database schema in ways that are incompatible with previous versions. To add to the complexity of deploying our three component applications to each of those servers, we also need to deploy them multiple times for different environments like production, certification, staging, and experimental. All of them run on the same server, on separate ports, and they run separate database, Solr, and Redis instances. This is already complex enough to manage when you integrate Capistrano into each of your projects, but it gets worse. Sometimes you find bugs in production and have to release quick fixes, but you can't deploy the version in the master branch that has several other changes. At other times you find bugs in your Capistrano recipes themselves and fix them on the master. Or maybe you are changing your deploy settings rather than the application's code. When you have to deploy to production, depending on how your Capistrano recipes work, you may have to switch to the production branch, backport any Capistrano recipe changes from the master, and finally deploy the latest fixes.
This happens, for example, if your recipe uses any project files as templates and those files have moved to another place in the master branch. We decided to try another approach, similar to what we do with our database migrations. Instead of integrating the database migrations into the main application (the default on Rails, Django, Grails, and similar web frameworks), we prefer to handle them as a separate project. In our case we use the active_record_migrations gem, which brings standalone support for ActiveRecord migrations (the same that is bundled with Rails apps by default). Our database is shared between the administrative interface project and the main web application, and we feel it's better to be able to manage our database schema independently from the projects using the database. We add the migrations project to the other applications as a submodule so that we know what database schema is expected to work for a particular commit of the application, but that's all. We wanted to apply the same principles to our Capistrano recipes. We wanted to manage all of our applications on different servers and environments from a single project containing the Capistrano recipes. We also wanted to store the common settings in a single place to avoid code duplication, which makes it hard to add new environments or update existing ones.

Grouping all applications' Capistrano recipes in a single project

It seems we were not the first to want all Capistrano recipes for all of our applications in a single project. We first tried a project called caphub. It worked fine initially and its inheritance model would allow us to avoid our code duplication. Well, not entirely. The problem is that we needed some kind of multiple inheritance or mixins. We have some settings, like the token private key, that are unique to each environment, like Certification and Production. But we also have other settings that are common within a server. For example, the database host name will be the same for all applications and environments inside our collocation facility, but it will be different in our backup server at Amazon EC2. CapHub didn't help us get rid of the duplication in such cases, but it certainly helped us to find a simple solution to get what we wanted. Let's explore how Capistrano 3 allows us to easily manage such complex scenarios, which are more common than you might think.

Capistrano stages

Since Capistrano 3, multistage support is built in (there was a multistage extension for Capistrano 2). That means you can write cap stage_name task_name, for example cap production deploy. By default, cap install will generate two stages: production and staging. You can generate as many as you want, for example:

cap install STAGES=production,cert,staging,experimental,integrator

But how do we deploy each of those stages to our multiple servers, since the settings for each stage may be different across the servers? Also, how can we manage separate applications? Even though those settings are called "stages" by Capistrano, you can use them as you want. For example, suppose our servers are named m1, m2, and ec2 and the applications are named web, exporter, and admin. We can create settings like m1_staging_web, ec2_production_admin, and so on. This will result in lots of files (specifically 45 = 5 x 3 x 3 to support five environments, three applications, and three servers), but it's not a big deal if you consider that the settings files can be really small, as the examples will demonstrate later on in this article by using mixins.
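To make the "really small settings files" claim concrete before mixins are introduced, here is a hypothetical stage file for the web application, staging environment, on the m1 server; the host name, user, and paths are invented for illustration and are not taken from the actual project:

# config/deploy/m1_staging_web.rb -- hypothetical stage file for the
# "web" application, "staging" environment, on the "m1" server
server 'm1.example.com', user: 'deploy', roles: %w{app web}

set :application, 'web'
set :rails_env,   'staging'
set :deploy_to,   '/var/www/web_staging'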
Usually people will start with staging and production only, and then gradually add other environments. Also, they usually start with one or two servers and keep growing as they feel the need. So supporting 45 combinations is not such a pain, since you don't write all of them at once. On the other hand, if you have enough resources to have a separate server for each of your environments, Capistrano will allow you to add multiple "server" declarations and assign roles to them, which can be quite useful if you're running a cluster of servers. In our case, to avoid downtime we don't upgrade all servers in our cluster at once. We also don't have the budget to host 45 virtual machines, or even 15. So the little effort of writing 45 small settings files is more than compensated for by the savings in hosting expenses.

Using mixins

My next post will create an example deployment project from scratch, providing detail for everything that has been discussed in this post. But first, let me introduce the concept of what we call a mixin in our project. Capistrano 3 is simply a wrapper on top of Rake. Rake is a build tool written in Ruby, similar to "make." It has targets and targets have prerequisites. This fits nicely with the way Capistrano works, where some deployment tasks will depend on other tasks. Instead of a Rakefile (Rake's equivalent of a Makefile), Capistrano uses a Capfile, but other than that it works almost the same way. The Domain Specific Language (DSL) in a Capfile is enhanced as you include Capistrano extensions to the Rake DSL. Here's a sample Capfile, generated by cap install, when you install Capistrano:

# Load DSL and Setup Up Stages
require 'capistrano/setup'

# Includes default deployment tasks
require 'capistrano/deploy'

# Includes tasks from other gems included in your Gemfile
#
# For documentation on these, see for example:
#
# https://github.com/capistrano/rvm
# https://github.com/capistrano/rbenv
# https://github.com/capistrano/chruby
# https://github.com/capistrano/bundler
# https://github.com/capistrano/rails
#
# require 'capistrano/rvm'
# require 'capistrano/rbenv'
# require 'capistrano/chruby'
# require 'capistrano/bundler'
# require 'capistrano/rails/assets'
# require 'capistrano/rails/migrations'

# Loads custom tasks from `lib/capistrano/tasks' if you have any defined.
Dir.glob('lib/capistrano/tasks/*.rake').each { |r| import r }

Just like a Rakefile, a Capfile is valid Ruby code, which you can easily extend using regular Ruby code. So, to support a mixin DSL, we simply need to extend the DSL, like this:

def mixin(path)
  load File.join('config', 'mixins', path + '.rb')
end

Pretty simple, right? We prefer to add this to a separate file, like lib/mixin.rb, and add this to the Capfile:

$:.unshift File.dirname(__FILE__)
require 'lib/mixin'

After that, calling mixin 'environments/staging' should load settings that are common to the staging environment from a file called config/mixins/environments/staging.rb in the root of the Capistrano-enabled project. This is the base for setting up the deployment project that we will create in the next post.

About the author

Rodrigo Rosenfeld Rosas lives in Vitória-ES, Brazil, with his lovely wife and daughter. He graduated in Electrical Engineering with a Master's degree in Robotics and Real-time Systems. For the past five years Rodrigo has focused on building and maintaining single page web applications.
He is the author of several gems, including active_record_migrations, rails-web-console, the JS specs runner oojspec, sequel-devise, and the Linux X11 utility ktrayshortcut. Rodrigo was hired by e-Core (Porto Alegre - RS, Brazil) to work from home, building and maintaining software for Matterhorn Transactions Inc. with a team of great developers. Matterhorn's main product, the Market Tracker, is used by LexisNexis clients.


Spatial Analysis

Packt
07 Jul 2016
21 min read
In this article by Ron Vincent, author of the book Learning ArcGIS Runtime SDK for .NET, we're going to learn about spatial analysis with ArcGIS Runtime. As with other parts of ArcGIS Runtime, we really need to understand how spatial analysis is set up and executed with ArcGIS Desktop/Pro and ArcGIS Server. As a result, we will first learn about spatial analysis within the context of geoprocessing. Geoprocessing is the workhorse of doing spatial analysis with Esri's technology. Geoprocessing is very similar to writing code, in that you specify some input data, do some work on that input data, and then produce the desired output. The big difference is that you use tools that come with ArcGIS Desktop or Pro. In this article, we're going to learn how to use these tools, and how to specify their input, output, and other parameters from an ArcGIS Runtime app, going well beyond what's available in GeometryEngine. In summary, we're going to cover the following topics:

Introduction to spatial analysis
Introduction to geoprocessing
Preparing for geoprocessing
Using geoprocessing in runtime
Online geoprocessing

Introducing spatial analysis

Spatial analysis is a broad term that can mean many different things, depending on the kind of study to be undertaken, the tools to be used, and the methods of performing the analysis, and it is even subject to the dynamics of the individuals involved in the analysis. In this section, we will look broadly at the kinds of analysis that are possible, so that you have some context as to what is possible with the ArcGIS platform. Spatial analysis can be divided into these five broad categories:

Point patterns
Surface analysis
Areal data
Interactivity
Networks

Point pattern analysis is the evaluation of the pattern or distribution of points in space. With ArcGIS, you can analyze point data using average nearest neighbor, central feature, mean center, and so on. For surface analysis, you can create surface models, and then analyze them using tools such as LOS, slope surfaces, viewsheds, and contours. With areal data (polygons), you can perform hotspot analysis, spatial autocorrelation, grouping analysis, and so on. When it comes to modeling interactivity, you can use tools in ArcGIS that allow you to do gravity modeling, location-allocation, and so on. Lastly, with Esri's technology you can analyze networks, for example finding the shortest path, generating drive-time polygons, and building origin-destination matrices, among many other examples. For example, here the areas in green are visible from the tallest building.
Areas in red are not visible. What is important to understand is that the ArcGIS platform has the capability to help solve problems such as these:

An epidemiologist collects data on a disease, such as Chronic Obstructive Pulmonary Disease (COPD), and wants to know where it occurs and whether there are any statistically significant clusters so that a mitigation plan can be developed
A mining geologist wants to obtain samples of a precious mineral so that he/she can estimate the overall concentration of the mineral
A military analyst or soldier wants to know where they can be located in the battlefield and not be seen
A crime analyst wants to know where crimes are concentrated so that they can increase police presence as a deterrent
A research scientist wants to develop a model to predict the path of a fire

There are many more examples. With ArcGIS Desktop and Pro, along with the correct extension, questions such as these can be posed and answered using a variety of techniques. However, it's important to understand that ArcGIS Runtime may or may not be a good fit and may or may not support certain tools. In many cases, spatial analysis would be best performed with ArcGIS Desktop or Pro. For example, if you plan to conduct hotspot analysis on patients or crime, Desktop or Pro is best suited to this kind of operation because it's typically something you do once. On the other hand, if you plan to allow users to repeat this process again and again with different data, and you need high performance, building a tool with ArcGIS Runtime will be the perfect solution, especially if users need to run the tool in the field. It should also be noted that, in some cases, the ArcGIS JavaScript API will be better suited.

Introducing geoprocessing

If you open up the Geoprocessing toolbox in ArcGIS Desktop or Pro, you will find dozens of tools categorized in the following manner: With these tools, you can build sophisticated models by using ModelBuilder or Python, and then publish them to ArcGIS Server. For example, to perform a buffer (the same kind of operation that GeometryEngine offers), you would drag the Buffer tool onto the ModelBuilder canvas, as shown here, and specify its inputs and outputs: This model specifies an input (US cities), performs an operation (Buffer the cities), and then produces an output (Buffered cities). Conceptually, this is programming, except that the algorithm is built graphically instead of with code. You may be asking: why would you use this tool in ArcGIS Desktop or Pro? Good question. Well, ArcGIS Runtime only comes with a few selected tools in GeometryEngine. These tools, such as the buffer method in GeometryEngine, are so common that Esri decided to include them with ArcGIS Runtime so that these kinds of operation could be performed on the client without having to call the server. On the other hand, in order to keep the core of ArcGIS Runtime lightweight, Esri wanted to provide these tools and many more, but make them available as tools that you call on when required for special or advanced analysis. As a result, if your app needs basic operations, GeometryEngine may provide what you need. On the other hand, if you need to perform more sophisticated operations, you will need to build the model with Desktop or Pro, publish it to Server, and then consume the resulting service with ArcGIS Runtime. The rest of this article will show you how to consume a geoprocessing model using this pattern.
Preparing for geoprocessing To perform geoprocessing, you will need to create a model with ModelBuilder and/or Python. For more details on how to create models using ModelBuilder, navigate to http://pro.arcgis.com/en/pro-app/help/analysis/geoprocessing/modelbuilder/what-is-modelbuilder-.htm. To build a model with Python, navigate to http://pro.arcgis.com/en/pro-app/help/analysis/geoprocessing/basics/python-and-geoprocessing.htm. Once you've created a model with ModelBuilder or Python, you will then need to run the tool to ensure that it works and to make it so that it can be published as a geoprocessing service for online use, or as a geoprocessing package for offline use. See here for publishing a service: http://server.arcgis.com/en/server/latest/publish-services/windows/a-quick-tour-of-publishing-a-geoprocessing-service.htm If you plan to use geoprocessing offline, you'll need to publish a geoprocessing package (*.gpk) file. You can learn more about these at https://desktop.arcgis.com/en/desktop/latest/analyze/sharing-workflows/a-quick-tour-of-geoprocessing-packages.htm. Once you have a geoprocessing service or package, you can now consume it with ArcGIS Runtime. In the sections that follow, we will use classes from Esri.ArcGISRuntime.Tasks.Geoprocessing that allow us to consume these geoprocessing services or packages. Online geoprocessing with ArcGIS Runtime Once you have created a geoprocessing model, you will want to access it from ArcGIS Runtime. In this section, we're going to do surface analysis from an online service that Esri has published. To accomplish this, you will need to access the REST endpoint by typing in the following URL: http://sampleserver6.arcgisonline.com/arcgis/rest/services/Elevation/ESRI_Elevation_World/GPServer When you open this page, you'll notice the description and that it has a list of Tasks: A task is a REST child resource of a geoprocessing service. A geoprocessing service can have one or more tasks associated with it. A task requires a set of inputs in the form of parameters. Once the task completes, it will produce some output that you will then use in your app. The output could be a map service, a single value, or even a report. This particular service only has one task associated with it and it is called Viewshed. If you click on the task called Viewshed, you'll be taken to this page: http://sampleserver6.arcgisonline.com/arcgis/rest/services/Elevation/ESRI_Elevation_World/GPServer/Viewshed. This service will produce a viewshed of where the user clicks that looks something like this: The user clicks on the map (X) and the geoprocessing task produces a viewshed, which shows all the areas on the surface that are visible to an observer, as if they were standing on the surface. Once you click on the task, you'll note the concepts marked in the following screenshot: As you can see, beside the red arrows, the geoprocessing service lets you know what is required for it to operate, so let's go over each of these: First, the service lets you know that it is a synchronous geoprocessing service. A synchronous geoprocessing task will run synchronously until it has completed, and block the calling thread. An asynchronous geoprocessing task will run asynchronously, but it won't block the calling thread. The next pieces of information you'll need to provide to the task are the parameters. In the preceding example, the task requires Input_Observation_Point. 
You will need to provide this exact name when providing the parameter later on, when we write the code to pass in this parameter. Also, note that the Direction value is esriGPParameterDirectionInput. This tells you that the task expects that Input_Observation_Point is an input to the model. Lastly, note that the Parameter Type value is Required. In other words, you must provide the task with this parameter in order for it to run. It's also worth noting that Default Value is an esriGeometryPoint type, which in ArcGIS Runtime is MapPoint. The Spatial Reference value of the point is 540003. If you investigate the remaining required parameters, you'll note that they require a Viewshed_Distance parameter. Now, refer to the following screenshot. If you don't specify a value, it will use Default Value of 15,000 meters. Lastly, this task will output a Viewshed_Result parameter, which is esriGeometryPolygon. Using this polygon, we can then render to the map or scene. Geoprocessing synchronously Now that you've seen an online service, let's look at how we call this service using ArcGIS Runtime. To execute the preceding viewshed task, we first need to create an instance of the geoprocessor object. The geoprocessor object requires a URL down to the task level in the REST endpoint, like this: private const string viewshedServiceUrl = "http://sampleserver6.arcgisonline.com/arcgis/rest/services/ Elevation/ESRI_Elevation_World/GPServer/Viewshed"; private Geoprocessor gpTask; Note that we've attached /Viewshed on the end of the original URL so that we can pass in the completed path to the task. Next, you will then instantiate the geoprocessor in your app, using the URL to the task: gpTask = new Geoprocessor(new Uri(viewshedServiceUrl)); Once we have created the geoprocessor, we can then prompt the user to click somewhere on the map. Let's look at some code: public async void CreateViewshed() { // // get a point from the user var mapPoint = await this.mapView.Editor.RequestPointAsync(); // clear the graphics layers this.viewshedGraphicsLayer.Graphics.Clear(); this.inputGraphicsLayer.Graphics.Clear(); // add new graphic to layer this.inputGraphicsLayer.Graphics.Add(new Graphic{ Geometry = mapPoint, Symbol = this.sms }); // specify the input parameters var parameter = new GPInputParameter() { OutSpatialReference = SpatialReferences.WebMercator }; parameter.GPParameters.Add(new GPFeatureRecordSetLayer("Input_Observation_Point", mapPoint)); parameter.GPParameters.Add(new GPLinearUnit("Viewshed_Distance", LinearUnits.Miles, this.distance)); // Send to the server this.Status = "Processing on server..."; var result = await gpTask.ExecuteAsync(parameter); if (result == null || result.OutParameters == null || !(result.OutParameters[0] is GPFeatureRecordSetLayer)) throw new ApplicationException("No viewshed graphics returned for this start point."); // process the output this.Status = "Finished processing. Retrieving results..."; var viewshedLayer = result.OutParameters[0] as GPFeatureRecordSetLayer; var features = viewshedLayer.FeatureSet.Features; foreach (Feature feature in features) { this.viewshedGraphicsLayer.Graphics.Add(feature as Graphic); } this.Status = "Finished!!"; } The first thing we do is have the user click on the map and return MapPoint. We then clear a couple of GraphicsLayers that hold the input graphic and viewshed graphics, so that the map is cleared every time they run this code. Next, we create a graphic using the location where the user clicked. Now comes the interesting part of this. 
We need to provide the input parameters for the task and we do that with GPInputParameter. When we instantiate GPInputParameter, we also need to specify the output spatial reference so that the data is rendered in the spatial reference of the map. In this example, we're using the map's spatial reference. Then, we add the input parameters. Note that we've spelled them exactly as the task required them. If we don't, the task won't work. We also learned earlier that this task requires a distance, so we use GPLinearUnit in Miles. The GPLinearUnit class lets the geoprocessor know what kinds of unit to accept. After the input parameters are set up, we then call ExecuteAsync. We are calling this method because this is a synchronous geoprocessing task. Even though this method has Async on the end of it, this applies to .NET, not ArcGIS Server. The alternative to ExecuteAsync is SubmitJob, which we will discuss shortly. After some time, the result comes back and we grab the results using result.OutParameters[0]. This contains the output from the geoprocessing task and we want to use that to then render the output to the map. Thankfully, it returns a read-only set of polygons, which we can then add to GraphicsLayer. If you don't know which parameter to use, you'll need to look it up on the task's page. In the preceding example, the parameter was called Viewshed_Distance and the Data Type value was GPLinearUnit. ArcGIS Runtime comes with a variety of data types to match the corresponding data type on the server. The other supported types are GPBoolean, GPDataFile, GPDate, GPDouble, GPItemID, GPLinearUnit, GPLong, GPMultiValue<T>, GPRasterData, GPRecordSet, and GPString. Instead of manually inspecting a task as we did earlier, you can also use Geoprocessor.GetTaskInfoAsync to discover all of the parameters. This is a useful object if you want to provide your users with the ability to specify any geoprocessing task dynamically while the app is running. For example, if your app requires that users are able to enter any geoprocessing task, you'll need to inspect that task, obtain the parameters, and then respond dynamically to the entered geoprocessing task. Geoprocessing asynchronously So far we've called a geoprocessing task synchronously. In this section, we'll cover how to call a geoprocessing task asynchronously. There are two differences when calling a geoprocessing task asynchronously: You will run the task by executing a method called SubmitJobAsync instead of ExecuteAsync. The SubmitJobAsync method is ideal for long-running tasks, such as performing data processing on the server. The major advantage of SubmitJobAsync is that users can continue working while the task works in the background. When the task is completed, the results will be presented. You will need to check the status of the task with GPJobStatus so that users can get a sense of whether the task is working as expected. To do this, check GPJobStatus periodically and it will return GPJobStatus. The GPJobStatus enumeration has the following values: New, Submitted, Waiting, Executing, Succeeded, Failed, TimedOut, Cancelling, Cancelled, Deleting, or Deleted. With these enumerations, you can poll the server and return the status using CheckJobStatusAsync on the task and present that to the user while they wait for the geoprocessor. 
Let's take a look at this process in the following diagram: As you can see in the preceding diagram, the input parameters are specified as we did earlier with the synchronous task, the Geoprocessor object is set up, and then SubmitJobAsync is called with the parameters (GPInputParameter). Once the task begins, we then have to check its status using the results from SubmitJobAsync. We then use CheckJobStatusAsync on the task to return the status enumeration. If it indicates Succeeded, we do something with the results. If not, we continue to check the status using any time period we specify. Let's try this out using an example service from Esri that allows for areal analysis. Go to the following REST endpoint: http://serverapps10.esri.com/ArcGIS/rest/services/SamplesNET/USA_Data_ClipTools/GPServer/ClipCounties. In the service, you will note that it's called ClipCounties. This is a rather contrived example, but it shows how to do server-side data processing. It requires two parameters called Input_Features and Linear_unit. It outputs output_zip and Clipped_Counties. Basically, this task allows you to drag a line on the map; it will then buffer it and clip out the counties in the U.S. and show them on the map, like so:

We are interested in two methods in this sample app. Let's take a look at them:

public async void Clip()
{
  //get the user's input line
  var inputLine = await this.mapView.Editor.RequestShapeAsync(
    DrawShape.Polyline) as Polyline;

  // clear the graphics layers
  this.resultGraphicsLayer.Graphics.Clear();
  this.inputGraphicsLayer.Graphics.Clear();

  // add new graphic to layer
  this.inputGraphicsLayer.Graphics.Add(
    new Graphic { Geometry = inputLine, Symbol = this.simpleInputLineSymbol });

  // add the parameters
  var parameter = new GPInputParameter();
  parameter.GPParameters.Add(
    new GPFeatureRecordSetLayer("Input_Features", inputLine));
  parameter.GPParameters.Add(new GPLinearUnit(
    "Linear_unit", LinearUnits.Miles, this.Distance));

  // poll the task
  var result = await SubmitAndPollStatusAsync(parameter);

  // add successful results to the map
  if (result.JobStatus == GPJobStatus.Succeeded)
  {
    this.Status = "Finished processing. Retrieving results...";
    var resultData = await gpTask.GetResultDataAsync(result.JobID,
      "Clipped_Counties");
    if (resultData is GPFeatureRecordSetLayer)
    {
      GPFeatureRecordSetLayer gpLayer = resultData as GPFeatureRecordSetLayer;
      if (gpLayer.FeatureSet.Features.Count == 0)
      {
        // get the map service results
        var resultImageLayer = await gpTask.GetResultImageLayerAsync(
          result.JobID, "Clipped_Counties");
        // make the result image layer semi-transparent
        GPResultImageLayer gpImageLayer = resultImageLayer;
        gpImageLayer.Opacity = 0.5;
        this.mapView.Map.Layers.Add(gpImageLayer);
        this.Status = "Greater than 500 features returned. Results drawn using map service.";
        return;
      }
      // get the result features and add them to the GraphicsLayer
      var features = gpLayer.FeatureSet.Features;
      foreach (Feature feature in features)
      {
        this.resultGraphicsLayer.Graphics.Add(feature as Graphic);
      }
    }
    this.Status = "Success!!!";
  }
}

This Clip method first asks the user to add a polyline to the map. It then clears the GraphicsLayer class, adds the input line to the map in red, sets up GPInputParameter with the required parameters (Input_Features and Linear_unit), and calls a method named SubmitAndPollStatusAsync using the input parameters. Let's take a look at that method too:

// Submit GP Job and Poll the server for results every 2 seconds.
private async Task<GPJobInfo> SubmitAndPollStatusAsync(GPInputParameter parameter)
{
  // Submit gp service job
  var result = await gpTask.SubmitJobAsync(parameter);

  // Poll for the results async
  while (result.JobStatus != GPJobStatus.Cancelled &&
    result.JobStatus != GPJobStatus.Deleted &&
    result.JobStatus != GPJobStatus.Succeeded &&
    result.JobStatus != GPJobStatus.TimedOut)
  {
    result = await gpTask.CheckJobStatusAsync(result.JobID);
    foreach (GPMessage msg in result.Messages)
    {
      this.Status = string.Join(Environment.NewLine, msg.Description);
    }
    await Task.Delay(2000);
  }
  return result;
}

The SubmitAndPollStatusAsync method submits the geoprocessing task and then polls it every two seconds to see whether it has been Cancelled, Deleted, Succeeded, or TimedOut. It calls CheckJobStatusAsync, gets the messages of type GPMessage, and adds them to the property called Status, which is a ViewModel property holding the current status of the task. We then effectively check the status of the task every 2 seconds with Task.Delay(2000) and continue doing this until something happens other than the GPJobStatus enumerations we're checking for. Once SubmitAndPollStatusAsync has succeeded, we then return to the main method (Clip) and perform the following steps with the results:

1. We obtain the results with GetResultDataAsync by passing in the result's JobID and Clipped_Counties. The Clipped_Counties instance is an output of the task, so we just need to specify the name Clipped_Counties.
2. Using the resulting data, we first check whether it is a GPFeatureRecordSetLayer type. If it is, we then do some more processing on the results.
3. We then do a cast just to make sure we have the right object (GPFeatureRecordSetLayer).
4. We then check to see if no features were returned from the task. If none were returned, we perform the following steps:
   We obtain the resulting image layer using GetResultImageLayerAsync. This returns a map service image of the results.
   We then cast this to GPResultImageLayer and set its opacity to 0.5 so that we can see through it. If the user enters a large distance, a lot of counties are returned, so we convert the layer to a map image, and then show them the entire country so that they can see what they've done wrong. Having the result as an image is faster than displaying all of the polygons as JSON objects.
   We add GPResultImageLayer to the map.
5. If everything worked according to plan, we get only the features needed and add them to GraphicsLayer.

That was a lot of work, but it's pretty awesome that we sent this off to ArcGIS Server and it did some heavy processing for us so that we could continue working with our map. The geoprocessing task took in a user-specified line, buffered it, and then clipped out the counties in the U.S. that intersected with that buffer. When you run the project, make sure you pan or zoom around while the task is running so that you can see that you can still work. You could also further enhance this code to zoom to the results when it finishes. There are some other pretty interesting capabilities that we need to discuss with this code, so let's delve a little deeper.
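As an aside, the zoom enhancement mentioned above could be sketched roughly as follows. This is only a starting point, not a drop-in addition: it assumes that GeometryEngine.Union and MapView.SetViewAsync are available in the Runtime version you target, and that System.Linq is imported.

// Possible enhancement after a successful run: zoom to the clipped counties.
// Assumes GeometryEngine.Union and MapView.SetViewAsync exist in your SDK version.
var resultGeometries = this.resultGraphicsLayer.Graphics
    .Select(g => g.Geometry)
    .Where(geom => geom != null)
    .ToList();

if (resultGeometries.Count > 0)
{
    // Combine the result geometries and zoom the map view to their extent
    var combined = GeometryEngine.Union(resultGeometries);
    await this.mapView.SetViewAsync(combined.Extent);
}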
GPMessageType returns an enumeration of Informative, Warning, Error, Abort, and Empty. For example, if the task failed, GPMessageType.Error will be returned and you can present a message to the user letting them know what happened and what they can do to resolve this issue. The GPMessage object also returns Description, which we used in the preceding code to display to the user as the task executed. The level of messages returned by Server dictates what messages are returned by the task. See Message Level here: If the Message Level field is set to None, no messages will be returned. When testing a geoprocessing service, it can be helpful to set the service to Info because it produces detailed messages. GPFeatureRecordSetLayer The preceding task expected an output of features, so we cast the result to GPFeatureRecordsSetLayer. The GPFeatureRecordsSetLayer object is a layer type which handles the JSON objects returned by the server, which we can then use to render on the map. GPResultMapServiceLayer When a geoprocessing service is created, you have the option of making it produce an output map service result with its own symbology. Refer to http://server.arcgis.com/en/server/latest/publish-services/windows/defining-output-symbology-for-geoprocessing-tasks.htm. You can take the results of a GPFeatureRecordsSetLayer object and access this map service using the following URL format: http://catalog-url/resultMapServiceName/MapServer/jobs/jobid Using JobID, which was produced by SubmitJobAsync, you can add the result to the map like so: ArcGISDynamicMapServiceLayer dynLayer = this.gpTask.GetResultMapServiceLayer(result.JobID); this.mapView.Map.Layers.Add(dynLayer); Summary In this article, we went over spatial analysis at a high level, and then went into the details of how to do spatial analysis with ArcGIS Runtime. We discussed how to create models with ModelBuilder and/or Python, and then went on to show how to use geoprocessing, both synchronously and asynchronously, with online and offline tasks. With this information, you now have a multitude of options for adding a wide variety of analytical tools to your apps. Resources for Article: Further resources on this subject: Building Custom Widgets [article] Learning to Create and Edit Data in ArcGIS [article] ArcGIS – Advanced ArcObjects [article]

Integrating Spring Framework with Hibernate ORM Framework: Part 2

Packt
29 Dec 2009
5 min read
Configuring Hibernate in a Spring context Spring provides the LocalSessionFactoryBean class as a factory for a SessionFactory object. The LocalSessionFactoryBean object is configured as a bean inside the IoC container, with either a local JDBC DataSource or a shared DataSource from JNDI. The local JDBC DataSource can be configured in turn as an object of org.apache.commons.dbcp.BasicDataSource in the Spring context: <bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource"> <property name="driverClassName"> <value>org.hsqldb.jdbcDriver</value> </property> <property name="url"> <value>jdbc:hsqldb:hsql://localhost/hiberdb</value> </property> <property name="username"> <value>sa</value> </property> <property name="password"> <value></value> </property></bean> In this case, the org.apache.commons.dbcp.BasicDataSource (the Jakarta Commons Database Connection Pool) must be in the application classpath. Similarly, a shared DataSource can be configured as an object of org.springframework.jndi.JndiObjectFactoryBean. This is the recommended way, which is used when the connection pool is managed by the application server. Here is the way to configure it: <bean id="dataSource" class="org.springframework.jndi.JndiObjectFactoryBean"> <property name="jndiName"> <value>java:comp/env/jdbc/HiberDB</value> </property></bean> When the DataSource is configured, you can configure the LocalSessionFactoryBean instance upon the configured DataSource as follows: <bean id="sessionFactory"class="org.springframework.orm.hibernate3.LocalSessionFactoryBean"> <property name="dataSource"> <ref bean="dataSource"/> </property> ...</bean> Alternatively, you may set up the SessionFactory object as a server-side resource object in the Spring context. This object is linked in as a JNDI resource in the JEE environment to be shared with multiple applications. In this case, you need to use JndiObjectFactoryBean instead of LocalSessionFactoryBean: <bean id="sessionFactory" class="org.springframework.jndi.JndiObjectFactoryBean"> <property name="jndiName"> <value>java:comp/env/jdbc/hiberDBSessionFactory</value> </property></bean> JndiObjectFactoryBean is another factory bean for looking up any JNDI resource. When you use JndiObjectFactoryBean to obtain a preconfigured SessionFactory object, the SessionFactory object should already be registered as a JNDI resource. For this purpose, you may run a server-specific class which creates a SessionFactory object and registers it as a JNDI resource. LocalSessionFactoryBean uses three properties: datasource, mappingResources, and hibernateProperties. These properties are as follows: datasource refers to a JDBC DataSource object that is already defined as another bean inside the container. mappingResources specifies the Hibernate mapping files located in the application classpath. hibernateProperties determines the Hibernate configuration settings. 
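For readers who prefer annotation-based configuration over XML, a roughly equivalent definition can be written in Java. The following is a hedged sketch only: it assumes a Spring release with @Configuration support and uses the setter names of LocalSessionFactoryBean; the XML version used throughout this article is shown next.

// Hypothetical programmatic equivalent of the XML sessionFactory bean.
import java.util.Properties;
import javax.sql.DataSource;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.orm.hibernate3.LocalSessionFactoryBean;

@Configuration
public class HibernateConfig {

    @Bean
    public LocalSessionFactoryBean sessionFactory(DataSource dataSource) {
        LocalSessionFactoryBean factory = new LocalSessionFactoryBean();
        factory.setDataSource(dataSource);
        factory.setMappingResources(new String[] {
                "com/packtpub/springhibernate/ch13/Student.hbm.xml" });
        Properties props = new Properties();
        props.setProperty("hibernate.dialect", "org.hibernate.dialect.HSQLDialect");
        props.setProperty("hibernate.show_sql", "true");
        factory.setHibernateProperties(props);
        return factory;
    }
}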
We have the sessionFactory object configured as follows: <bean id="sessionFactory"class="org.springframework.orm.hibernate3.LocalSessionFactoryBean"> <property name="dataSource"> <ref bean="dataSource"/> </property> <property name="mappingResources"> <list> <value>com/packtpub/springhibernate/ch13/Student.hbm.xml</value> <value>com/packtpub/springhibernate/ch13/Teacher.hbm.xml</value> <value>com/packtpub/springhibernate/ch13/Course.hbm.xml</value> </list> </property> <property name="hibernateProperties"> <props> <prop key="hibernate.dialect">org.hibernate.dialect.HSQLDialect </prop> <prop key="hibernate.show_sql">true</prop> <prop key="hibernate.max_fetch_depth">2</prop> </props> </property></bean> The mappingResources property loads mapping definitions in the classpath. You may use mappingJarLocations, or mappingDirectoryLocations to load them from a JAR file, or from any directory of the file system, respectively. It is still possible to configure Hibernate with hibernate.cfg.xml, instead of configuring Hibernate as just shown. To do so, configure sessionFactory with the configLocation property, as follows: <bean id="sessionFactory"class="org.springframework.orm.hibernate3.LocalSessionFactoryBean"> <property name="dataSource"> <ref bean="dataSource"/> </property> <property name="configLocation"> <value>/conf/hibernate.cfg.xml</value> </property></bean> Note that hibernate.cfg.xml specifies the Hibernate mapping definitions in addition to the other Hibernate properties. When the SessionFactory object is configured, you can configure DAO implementations as beans in the Spring context. These DAO beans are the objects which are looked up from the Spring IoC container and consumed by the business layer. Here is an example of DAO configuration: <bean id="studentDao" class="com.packtpub.springhibernate.ch13.HibernateStudentDao"> <property name="sessionFactory"> <ref local="sessionFactory"/> </property></bean> This is the DAO configuration for a DAO class that extends HibernateDaoSupport, or directly uses a SessionFactory property. When the DAO class has a HibernateTemplate property, configure the DAO instance as follows: <bean id="studentDao" class="com.packtpub.springhibernate.ch13.HibernateStudentDao"> <property name="hibernateTemplate"> <bean class="org.springframework.orm.hibernate3.HibernateTemplate"> <constructor-arg> <ref local="sessionFactory"/> </constructor-arg> </bean> </property></bean> According to the preceding declaration, the HibernateStudentDao class has a hibernateTemplate property that is configured via the IoC container, to be initialized through constructor injection and a SessionFactory instance as a constructor argument. Now, any client of the DAO implementation can look up the Spring context to obtain the DAO instance. The following code shows a simple class that creates a Spring application context, and then looks up the DAO object from the Spring IoC container: package com.packtpub.springhibernate.ch13; public class DaoClient { public static void main(String[] args) { ApplicationContext ctx = new ClassPathXmlApplicationContext("com/packtpub/springhibernate/ch13/applicationContext.xml"); StudentDao stdDao = (StudentDao)ctx.getBean("studentDao"); Student std = new Student(); //set std properties //save std stdDao.saveStudent(std); }}
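The HibernateStudentDao implementation itself is not shown in this extract. A minimal sketch of what a HibernateTemplate-based version might look like is given below; the saveStudent() method name follows the client code above, while the rest (including the assumed StudentDao interface and Student entity) is illustrative:

// Illustrative sketch of the DAO configured above; the real class in the book may differ.
import org.springframework.orm.hibernate3.HibernateTemplate;

public class HibernateStudentDao implements StudentDao {

    private HibernateTemplate hibernateTemplate;

    // Matches the hibernateTemplate property set in the Spring context
    public void setHibernateTemplate(HibernateTemplate hibernateTemplate) {
        this.hibernateTemplate = hibernateTemplate;
    }

    public void saveStudent(Student std) {
        // HibernateTemplate opens and closes the Session and translates exceptions
        hibernateTemplate.saveOrUpdate(std);
    }
}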

EJB 3 Security

Packt
23 Oct 2009
15 min read
Authentication and authorization in Java EE Container Security There are two aspects covered by Java EE container security: authentication and authorization. Authentication is the process of verifying that users are who they claim to be. Typically this is performed by the user providing credentials such as a password. Authorization, or access control, is the process of restricting operations to specific users or categories of users. The EJB specification provides two kinds of authorization: declarative and programmatic, as we shall see later in the article. The Java EE security model introduces a few concepts common to both authentication and authorization. A principal is an entity that we wish to authenticate. The format of a principal is application-specific but an example is a username. A role is a logical grouping of principals. For example, we can have administrator, manager, and employee roles. The scope over which a common security policy applies is known as a security domain, or realm. Authentication For authentication, every Java EE compliant application server provides the Java Authentication and Authorization Service (JAAS) API. JAAS supports any underlying security system. So we have a common API regardless of whether authentication is username/password verification against a database, iris or fingerprint recognition for example. The JAAS API is fairly low level and most application servers provide authentication mechanisms at a higher level of abstraction. These authentication mechanisms are application-server specific however. We will not cover JAAS any further here, but look at authentication as provided by the GlassFish application server. GlassFish Authentication There are three actors we need to define on the GlassFish application server for authentication purposes: users, groups, and realms. A user is an entity that we wish to authenticate. A user is synonymous with a principal. A group is a logical grouping of users and is not the same as a role. A group's scope is global to the application server. A role is a logical grouping of users whose scope is limited to a specific application. Of course for some applications we may decide that roles are identical to groups. For other applications we need some mechanism for mapping the roles onto groups. We shall see how this is done later. A realm, as we have seen, is the scope over which a common security policy applies. GlassFish provides three kinds of realms: file, certificate, and admin-realm. The file realm stores user, group, and realm credentials in a file named keyfile. This file is stored within the application server file system. A file realm is used by web clients using http or EJB application clients. The certificate realm stores a digital certificate and is used for authenticating web clients using https. The admin-realm is similar to the file realm and is used for storing administrator credentials. GlassFish comes pre-configured with a default file realm named file. We can add, edit, and delete users, groups, and realms using the GlassFish administrator console. We can also use the create-file-user option of the asadmin command line utility. 
To add a user named scott to a group named bankemployee, in the file realm, we would use the command: <target name="create-file-user"> <exec executable="${glassfish.home}/bin/asadmin" failonerror="true" vmlauncher="false"> <arg line="create-file-user --user admin --passwordfile userpassword --groups bankemployee scott"/> </exec> </target> --user specifies the GlassFish administrator username, admin in our example. --passwordfile specifies the name of the file containing password entries. In our example this file is userpassword. Users, other than GlassFish administrators, are identified by AS_ADMIN_USERPASSWORD. In our example the content of the userpassword file is: AS_ADMIN_USERPASSWORD=xyz This indicates that the user's password is xyz. --groups specifies the groups associated with this user (there may be more than one group). In our example there is just one group, named bankemployee. Multiple groups are colon delineated. For example if the user belongs to both the bankemployee and bankcustomer groups, we would specify: --groups bankemployee:bankcustomer The final entry is the operand which specifies the name of the user to be created. In our example this is scott. There is a corresponding asadmin delete-file-user option to remove a user from the file realm. Mapping Roles to Groups The Java EE specification specifies that there must be a mechanism for mapping local application specific roles to global roles on the application server. Local roles are used by an EJB for authorization purposes. The actual mapping mechanism is application server specific. As we have seen in the case of GlassFish, the global application server roles are called groups. In GlassFish, local roles are referred to simply as roles. Suppose we want to map an employee role to the bankemployee group. We would need to create a GlassFish specific deployment descriptor, sun-ejb-jar.xml, with the following element: <security-role-mapping> <role-name>employee</role-name> <group-name>bankemployee</group-name> </security-role-mapping> We also need to access the configuration-security screen in the administrator console. We then disable the Default Principal To Role Mapping flag. If the flag is enabled then the default is to map a group onto a role with the same name. So the bankemployee group will be mapped to the bankemployee role. We can leave the default values for the other properties on the configuration-security screen. Many of these features are for advanced use where third party security products can be plugged in or security properties customized. Consequently we will give only a brief description of these properties here. Security Manager: This refers to the JVM security manager which performs code-based security checks. If the security manager is disabled GlassFish will have better performance. However, even if the security manager is disabled, GlassFish still enforces standard Java EE authentication/authorization. Audit Logging: If this is enabled, GlassFish will provide an audit trail of all authentication and authorization decisions through audit modules. Audit modules provide information on incoming requests, outgoing responses and whether authorization was granted or denied. Audit logging applies for web-tier and ejb-tier authentication and authorization. A default audit module is provided but custom audit modules can also be created. Default Realm: This is the default realm used for authentication. Applications use this realm unless they specify a different realm in their deployment descriptor. 
The default value is file. Other possible values are admin-realm and certificate. We discussed GlassFish realms in the previous section. Default Principal: This is the user name used by GlassFish at run time if no principal is provided. Normally this is not required so the property can be left blank. Default Principal Password: This is the password of the default principal. JACC: This is the class name of a JACC (Java Authorization Contract for Containers) provider. This enables the GlassFish administrator to set up third-party plug in modules conforming to the JACC standard to perform authorization. Audit Modules: If we have created custom modules to perform audit logging, we would select from this list. Mapped Principal Class: This is only applicable when Default Principal to Role Mapping is enabled. The mapped principal class is used to customize the java.security.Principal implementation class used in the default principal to role mapping. If no value is entered, the com.sun.enterprise.deployment.Group implementation of java.security.Principal is used. Authenticating an EJB Application Client Suppose we want to invoke an EJB, BankServiceBean, from an application client. We also want the application client container to authenticate the client. There are a number of steps we first need to take which are application server specific. We will assume that all roles will have the same name as the corresponding application server groups. In the case of GlassFish we need to use the administrator console and enable Default Principal To Role Mapping. Next we need to define a group named bankemployee with one or more associated users. An EJB application client needs to use IOR (Interoperable Object Reference) authentication. The IOR protocol was originally created for CORBA (Common Object Request Broker Architecture) but all Java EE compliant containers support IOR. An EJB deployed on one Java EE compliant vendor may be invoked by a client deployed on another Java EE compliant vendor. Security interoperability between these vendors is achieved using the IOR protocol. In our case the client and target EJB both happen to be deployed on the same vendor, but we still use IOR for propagating security details from the application client container to the EJB container. IORs are configured in vendor specific XML files rather than the standard ejb-jar.xml file. In the case of GlassFish, this is done within the <ior-security-config> element within the sun-ejb-jar.xml deployment descriptor file. We also need to specify the invoked EJB, BankServiceBean, in the deployment descriptor. An example of the sun-ejb-jar.xml deployment descriptor is shown below: <?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE sun-ejb-jar PUBLIC "-//Sun Microsystems, Inc.//DTD       Application Server 9.0 EJB 3.0//EN"       "http://www.sun.com/software/appserver/dtds/sun-ejb-jar_3_0-0.dtd"> <sun-ejb-jar>   <enterprise-beans>     <ejb>       <ejb-name>BankServiceBean</ejb-name>         <ior-security-config>           <as-context>              <auth-method>USERNAME_PASSWORD</auth-method>              <realm>default</realm>              <required>true</required>           </as-context>         </ior-security-config>     </ejb>   </enterprise-beans> </sun-ejb-jar> The as in <as-context> stands for the IOR authentication service. This specifies authentication mechanism details. The <auth-method> element specifies the authentication method. This is set to USERNAME_PASSWORD which is the only value for an application client. 
The <realm> element specifies the realm in which the client is authenticated. The <required> element specifies whether the above authentication method is required to be used for client authentication. When creating the corresponding EJB JAR file, the sun-ejb-jar.xml file should be included in the META-INF directory, as follows: <target name="package-ejb" depends="compile">     <jar jarfile="${build.dir}/BankService.jar">         <fileset dir="${build.dir}">              <include name="ejb30/session/**" />                           <include name="ejb30/entity/**" />               </fileset>               <metainf dir="${config.dir}">             <include name="persistence.xml" />                          <include name="sun-ejb-jar.xml" />         </metainf>     </jar> </target> As soon as we run the application client, GlassFish will prompt with a username and password form, as follows: If we reply with the username scott and password xyz the program will run. If we run the application with an invalid username or password we will get the following error message: javax.ejb.EJBException: nested exception is: java.rmi.AccessException: CORBA NO_PERMISSION 9998 ..... EJB Authorization Authorization, or access control, is the process of restricting operations to specific roles. In contrast with authentication, EJB authorization is completely application server independent. The EJB specification provides two kinds of authorization: declarative and programmatic. With declarative authorization all security checks are performed by the container. An EJB's security requirements are declared using annotations or deployment descriptors. With programmatic authorization security checks are hard-coded in the EJBs code using API calls. However, even with programmatic authorization the container is still responsible for authentication and for assigning roles to principals. Declarative Authorization As an example, consider the BankServiceBean stateless session bean with methods findCustomer(), addCustomer() and updateCustomer(): package ejb30.session; import javax.ejb.Stateless; import javax.persistence.EntityManager; import ejb30.entity.Customer; import javax.persistence.PersistenceContext; import javax.annotation.security.RolesAllowed; import javax.annotation.security.PermitAll; import java.util.*; @Stateless @RolesAllowed("bankemployee") public class BankServiceBean implements BankService { @PersistenceContext(unitName="BankService") private EntityManager em; private Customer cust; @PermitAll public Customer findCustomer(int custId) { return ((Customer) em.find(Customer.class, custId)); } public void addCustomer(int custId, String firstName, String lastName) { cust = new Customer(); cust.setId(custId); cust.setFirstName(firstName); cust.setLastName(lastName); em.persist(cust); } public void updateCustomer(Customer cust) { Customer mergedCust = em.merge(cust); } } We have prefixed the bean class with the annotation: @RolesAllowed("bankemployee") This specifies the roles allowed to access any of the bean's method. So only users belonging to the bankemployee role may access the addCustomer() and updateCustomer() methods. More than one role can be specified by means of a brace delineated list, as follows: @RolesAllowed({"bankemployee", "bankcustomer"}) We can also prefix a method with @RolesAllowed, in which case the method annotation will override the class annotation. The @PermitAll annotation allows unrestricted access to a method, overriding any class level @RolesAllowed annotation. 
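For instance, a method-level annotation such as the following (an illustrative variation on the bean above, not code from the book) would let bank customers call updateCustomer() even though the class-level annotation restricts the bean to bank employees:

// Illustrative only: a method-level @RolesAllowed overrides the class-level one.
// @PermitAll could be used on the method instead to open it to everyone.
import javax.annotation.security.RolesAllowed;
import javax.ejb.Stateless;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

@Stateless
@RolesAllowed("bankemployee")                       // default for every business method
public class BankServiceBean implements BankService {

    @PersistenceContext(unitName="BankService")
    private EntityManager em;

    @RolesAllowed({"bankemployee", "bankcustomer"}) // overrides the class-level annotation
    public void updateCustomer(Customer cust) {
        em.merge(cust);
    }

    // addCustomer() and findCustomer() keep the class-level restriction
}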
As with EJB 3 in general, we can use deployment descriptors as alternatives to the @RolesAllowed and @PermitAll annotations. Denying Authorization Suppose we want to deny all users access to the BankServiceBean.updateCustomer() method. We can do this using the @DenyAll annotation: @DenyAll public void updateCustomer(Customer cust) { Customer mergedCust = em.merge(cust); } Of course if you have access to source code you could simply delete the method in question rather than using @DenyAll. However suppose you do not have access to the source code and have received the EJB from a third party. If you in turn do not want your clients accessing a given method then you would need to use the <exclude-list> element in the ejb-jar.xml deployment descriptor: <?xml version="1.0" encoding="UTF-8"?> <ejb-jar version="3.0"                         xsi_schemaLocation="http://java.sun.com/xml/ns/javaee             http://java.sun.com/xml/ns/javaee/ejb-jar_3_0.xsd"> <enterprise-beans> <session> <ejb-name>BankServiceBean</ejb-name> </session> </enterprise-beans> <assembly-descriptor> <exclude-list><method> <ejb-name>BankServiceBean</ejb-name> <method-name>updateCustomer</method-name></method></exclude-list> </assembly-descriptor> </ejb-jar> EJB Security Propagation Suppose a client with an associated role invokes, for example, EJB A. If EJB A then invokes, for example, EJB B then by default the client's role is propagated to EJB B. However, you can specify with the @RunAs annotation that all methods of an EJB execute under a specific role. For example, suppose the addCustomer() method in the BankServiceBean EJB invokes the addAuditMessage() method of the AuditServiceBean EJB: @Stateless @RolesAllowed("bankemployee") public class BankServiceBean implements BankService { private @EJB AuditService audit; ....      public void addCustomer(int custId, String firstName,                                                          String lastName) {              cust = new Customer();              cust.setId(custId);              cust.setFirstName(firstName);              cust.setLastName(lastName);              em.persist(cust);              audit.addAuditMessage(1, "customer add attempt");      }      ... } Note that only a client with an associated role of bankemployee can invoke addCustomer(). If we prefix the AuditServiceBean class declaration with @RunAs("bankauditor") then the container will run any method in AuditServiceBean as the bankauditor role, regardless of the role which invokes the method. Note that the @RunAs annotation is applied only at the class level, @RunAs cannot be applied at the method level. @Stateless @RunAs("bankauditor") public class AuditServiceBean implements AuditService { @PersistenceContext(unitName="BankService") private EntityManager em; @TransactionAttribute( TransactionAttributeType.REQUIRES_NEW) public void addAuditMessage (int auditId, String message) { Audit audit = new Audit(); audit.setId(auditId); audit.setMessage(message); em.persist(audit); } } Programmatic Authorization With programmatic authorization the bean rather than the container controls authorization. The javax.ejb.SessionContext object provides two methods which support programmatic authorization: getCallerPrincipal() and isCallerInRole(). The getCallerPrincipal() method returns a java.security.Principal object. This object represents the caller, or principal, invoking the EJB. We can then use the Principal.getName() method to obtain the name of the principal. 
We have done this in the addAccount() method of the BankServiceBean as follows: Principal cp = ctx.getCallerPrincipal(); System.out.println("getname:" + cp.getName()); The isCallerInRole() method checks whether the principal belongs to a given role. For example, the code fragment below checks if the principal belongs to the bankcustomer role. If the principal does not belong to the bankcustomer role, we only persist the account if the balance is less than 99. if (ctx.isCallerInRole("bankcustomer")) {     em.persist(ac); } else if (balance < 99) {            em.persist(ac);   } When using the isCallerInRole() method, we need to declare all the security role names used in the EJB code using the class level @DeclareRoles annotation: @DeclareRoles({"bankemployee", "bankcustomer"}) The code below shows the BankServiceBean EJB with all the programmatic authorization code described in this section: package ejb30.session; import javax.ejb.Stateless; import javax.persistence.EntityManager; import ejb30.entity.Account; import javax.persistence.PersistenceContext; import javax.annotation.security.RolesAllowed; import java.security.Principal; import javax.annotation.Resource; import javax.ejb.SessionContext; import javax.annotation.security.DeclareRoles; import java.util.*; @Stateless @DeclareRoles({"bankemployee", "bankcustomer"}) public class BankServiceBean implements BankService { @PersistenceContext(unitName="BankService") private EntityManager em; private Account ac; @Resource SessionContext ctx; @RolesAllowed({"bankemployee", "bankcustomer"}) public void addAccount(int accountId, double balance, String accountType) { ac = new Account(); ac.setId(accountId); ac.setBalance(balance); ac.setAccountType(accountType); Principal cp = ctx.getCallerPrincipal(); System.out.println("getname:" + cp.getName()); if (ctx.isCallerInRole("bankcustomer")) { em.persist(ac); } else if (balance < 99) { em.persist(ac); } } ..... } Where we have a choice declarative authorization is preferable to programmatic authorization. Declarative authorization avoids having to mix business code with security management code. We can change a bean's security policy by simply changing an annotation or deployment descriptor instead of modifying the logic of a business method. However, some security rules, such as the example above of only persisting an account within a balance limit, can only be handled by programmatic authorization. Declarative security is based only on the principal and the method being invoked, whereas programmatic security can take state into consideration. Because an EJB is typically invoked from the web-tier by a servlet, JSP page or JSF component, we will briefly mention Java EE web container security. The web-tier and EJB tier share the same security model. So the web-tier security model is based on the same concepts of principals, roles and realms.
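To make that shared model concrete, here is a hedged sketch of a web-tier servlet that queries the same principal and role before delegating to the EJB. The servlet class, URL pattern, and argument values are made up for illustration, and @WebServlet assumes a Servlet 3.0 container (on older containers the servlet would be declared in web.xml instead):

// Illustrative only: the web tier sees the same principal and roles used for EJB authorization.
import java.io.IOException;
import javax.ejb.EJB;
import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

@WebServlet("/customers")   // hypothetical URL pattern
public class CustomerServlet extends HttpServlet {

    @EJB
    private BankService bankService;

    @Override
    protected void doPost(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        // Same role name that the EJB's @RolesAllowed annotation refers to
        if (request.isUserInRole("bankemployee")) {
            bankService.addCustomer(1, "Mike", "Smith");
            response.getWriter().println("Customer added by "
                    + request.getUserPrincipal().getName());
        } else {
            response.sendError(HttpServletResponse.SC_FORBIDDEN);
        }
    }
}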

Getting Ready with CoffeeScript

Packt
02 Apr 2015
20 min read
In this article by Mike Hatfield, author of the book, CoffeeScript Application Development Cookbook, we will see that JavaScript, though very successful, can be a difficult language to work with. JavaScript was designed by Brendan Eich in a mere 10 days in 1995 while working at Netscape. As a result, some might claim that JavaScript is not as well rounded as some other languages, a point well illustrated by Douglas Crockford in his book titled JavaScript: The Good Parts, O'Reilly Media. These pitfalls found in the JavaScript language led Jeremy Ashkenas to create CoffeeScript, a language that attempts to expose the good parts of JavaScript in a simple way. CoffeeScript compiles into JavaScript and helps us avoid the bad parts of JavaScript. (For more resources related to this topic, see here.) There are many reasons to use CoffeeScript as your development language of choice. Some of these reasons include: CoffeeScript helps protect us from the bad parts of JavaScript by creating function closures that isolate our code from the global namespace by reducing the curly braces and semicolon clutter and by helping tame JavaScript's notorious this keyword CoffeeScript helps us be more productive by providing features such as list comprehensions, classes with inheritance, and many others Properly written CoffeeScript also helps us write code that is more readable and can be more easily maintained As Jeremy Ashkenas says: "CoffeeScript is just JavaScript." We can use CoffeeScript when working with the large ecosystem of JavaScript libraries and frameworks on all aspects of our applications, including those listed in the following table: Part Some options User interfaces UI frameworks including jQuery, Backbone.js, AngularJS, and Kendo UI Databases Node.js drivers to access SQLite, Redis, MongoDB, and CouchDB Internal/external services Node.js with Node Package Manager (NPM) packages to create internal services and interfacing with external services Testing Unit and end-to-end testing with Jasmine, Qunit, integration testing with Zombie, and mocking with Persona Hosting Easy API and application hosting with Heroku and Windows Azure Tooling Create scripts to automate routine tasks and using Grunt Configuring your environment and tools One significant aspect to being a productive CoffeeScript developer is having a proper development environment. This environment typically consists of the following: Node.js and the NPM CoffeeScript Code editor Debugger In this recipe, we will look at installing and configuring the base components and tools necessary to develop CoffeeScript applications. Getting ready In this section, we will install the software necessary to develop applications with CoffeeScript. One of the appealing aspects of developing applications using CoffeeScript is that it is well supported on Mac, Windows, and Linux machines. To get started, you need only a PC and an Internet connection. How to do it... CoffeeScript runs on top of Node.js—the event-driven, non-blocking I/O platform built on Chrome's JavaScript runtime. If you do not have Node.js installed, you can download an installation package for your Mac OS X, Linux, and Windows machines from the start page of the Node.js website (http://nodejs.org/). To begin, install Node.js using an official prebuilt installer; it will also install the NPM. Next, we will use NPM to install CoffeeScript. 
Open a terminal or command window and enter the following command: npm install -g coffee-script This will install the necessary files needed to work with CoffeeScript, including the coffee command that provides an interactive Read Evaluate Print Loop (REPL)—a command to execute CoffeeScript files and a compiler to generate JavaScript. It is important to use the -g option when installing CoffeeScript, as this installs the CoffeeScript package as a global NPM module. This will add the necessary commands to our path. On some Windows machines, you might need to add the NPM binary directory to your path. You can do this by editing the environment variables and appending ;%APPDATA%npm to the end of the system's PATH variable. Configuring Sublime Text What you use to edit code can be a very personal choice, as you, like countless others, might use the tools dictated by your team or manager. Fortunately, most popular editing tools either support CoffeeScript out of the box or can be easily extended by installing add-ons, packages, or extensions. In this recipe, we will look at adding CoffeeScript support to Sublime Text and Visual Studio. Getting ready This section assumes that you have Sublime Text or Visual Studio installed. Sublime Text is a very popular text editor that is geared to working with code and projects. You can download a fully functional evaluation version from http://www.sublimetext.com. If you find it useful and decide to continue to use it, you will be encouraged to purchase a license, but there is currently no enforced time limit. How to do it... Sublime Text does not support CoffeeScript out of the box. Thankfully, a package manager exists for Sublime Text; this package manager provides access to hundreds of extension packages, including ones that provide helpful and productive tools to work with CoffeeScript. Sublime Text does not come with this package manager, but it can be easily added by following the instructions on the Package Control website at https://sublime.wbond.net/installation. With Package Control installed, you can easily install the CoffeeScript packages that are available using the Package Control option under the Preferences menu. Select the Install Package option. You can also access this command by pressing Ctrl + Shift + P, and in the command list that appears, start typing install. This will help you find the Install Package command quickly. To install the CoffeeScript package, open the Install Package window and enter CoffeeScript. This will display the CoffeeScript-related packages. We will use the Better CoffeeScript package: As you can see, the CoffeeScript package includes syntax highlighting, commands, shortcuts, snippets, and compilation. How it works... In this section, we will explain the different keyboard shortcuts and code snippets available with the Better CoffeeScript package for Sublime. Commands You can run the desired command by entering the command into the Sublime command pallet or by pressing the related keyboard shortcut. Remember to press Ctrl + Shift + P to display the command pallet window. Some useful CoffeeScript commands include the following: Command Keyboard shortcut Description Coffee: Check Syntax Alt + Shift + S This checks the syntax of the file you are editing or the currently selected code. The result will display in the status bar at the bottom. Coffee: Compile File Alt + Shift + C This compiles the file being edited into JavaScript. 
Coffee: Run Script Alt + Shift + R This executes the selected code and displays a buffer of the output. The keyboard shortcuts are associated with the file type. If you are editing a new CoffeeScript file that has not been saved yet, you can specify the file type by choosing CoffeeScript in the list of file types in the bottom-left corner of the screen. Snippets Snippets allow you to use short tokens that are recognized by Sublime Text. When you enter the code and press the Tab key, Sublime Text will automatically expand the snippet into the full form. Some useful CoffeeScript code snippets include the following: Token Expands to log[Tab] console.log cla class Name constructor: (arguments) ->    # ... forin for i in array # ... if if condition # ... ifel if condition # ... else # ... swi switch object when value    # ... try try # ... catch e # ... The snippets are associated with the file type. If you are editing a new CoffeeScript file that has not been saved yet, you can specify the file type by selecting CoffeeScript in the list of file types in the bottom-left corner of the screen. Configuring Visual Studio In this recipe, we will demonstrate how to add CoffeeScript support to Visual Studio. Getting ready If you are on the Windows platform, you can use Microsoft's Visual Studio software. You can download Microsoft's free Express edition (Express 2013 for Web) from http://www.microsoft.com/express. How to do it... If you are a Visual Studio user, Version 2010 and above can work quite effectively with CoffeeScript through the use of Visual Studio extensions. If you are doing any form of web development with Visual Studio, the Web Essentials extension is a must-have. To install Web Essentials, perform the following steps: Launch Visual Studio. Click on the Tools menu and select the Extensions and Updates menu option. This will display the Extensions and Updates window (shown in the next screenshot). Select Online in the tree on the left-hand side to display the most popular downloads. Select Web Essentials 2012 from the list of available packages and then click on the Download button. This will download the package and install it automatically. Once the installation is finished, restart Visual Studio by clicking on the Restart Now button. You will likely find Web Essentials 2012 ranked highly in the list of Most Popular packages. If you do not see it, you can search for Web Essentials using the Search box in the top-right corner of the window. Once installed, the Web Essentials package provides many web development productivity features, including CSS helpers, tools to work with Less CSS, enhancements to work with JavaScript, and, of course, a set of CoffeeScript helpers. To add a new CoffeeScript file to your project, you can navigate to File | New Item or press Ctrl + Shift + A. This will display the Add New Item dialog, as seen in the following screenshot. Under the Web templates, you will see a new CoffeeScript File option. Select this option and give it a filename, as shown here: When we have our CoffeeScript file open, Web Essentials will display the file in a split-screen editor. We can edit our code in the left-hand pane, while Web Essentials displays a live preview of the JavaScript code that will be generated for us. The Web Essentials CoffeeScript compiler will create two JavaScript files each time we save our CoffeeScript file: a basic JavaScript file and a minified version. 
For example, if we save a CoffeeScript file named employee.coffee, the compiler will create employee.js and employee.min.js files. Though I have only described two editors to work with CoffeeScript files, there are CoffeeScript packages and plugins for most popular text editors, including Emacs, Vim, TextMate, and WebMatrix. A quick dive into CoffeeScript In this recipe, we will take a quick look at the CoffeeScript language and command line. How to do it... CoffeeScript is a highly expressive programming language that does away with much of the ceremony required by JavaScript. It uses whitespace to define blocks of code and provides shortcuts for many of the programming constructs found in JavaScript. For example, we can declare variables and functions without the var keyword: firstName = 'Mike' We can define functions using the following syntax: multiply = (a, b) -> a * b Here, we defined a function named multiply. It takes two arguments, a and b. Inside the function, we multiplied the two values. Note that there is no return statement. CoffeeScript will always return the value of the last expression that is evaluated inside a function. The preceding function is equivalent to the following JavaScript snippet: var multiply = function(a, b) { return a * b; }; It's worth noting that the CoffeeScript code is only 28 characters long, whereas the JavaScript code is 50 characters long; that's 44 percent less code. We can call our multiply function in the following way: result = multiply 4, 7 In CoffeeScript, using parenthesis is optional when calling a function with parameters, as you can see in our function call. However, note that parenthesis are required when executing a function without parameters, as shown in the following example: displayGreeting = -> console.log 'Hello, world!' displayGreeting() In this example, we must call the displayGreeting() function with parenthesis. You might also wish to use parenthesis to make your code more readable. Just because they are optional, it doesn't mean you should sacrifice the readability of your code to save a couple of keystrokes. For example, in the following code, we used parenthesis even though they are not required: $('div.menu-item').removeClass 'selected' Like functions, we can define JavaScript literal objects without the need for curly braces, as seen in the following employee object: employee = firstName: 'Mike' lastName: 'Hatfield' salesYtd: 13204.65 Notice that in our object definition, we also did not need to use a comma to separate our properties. CoffeeScript supports the common if conditional as well as an unless conditional inspired by the Ruby language. Like Ruby, CoffeeScript also provides English keywords for logical operations such as is, isnt, or, and and. The following example demonstrates the use of these keywords: isEven = (value) -> if value % 2 is 0    'is' else    'is not'   console.log '3 ' + isEven(3) + ' even' In the preceding code, we have an if statement to determine whether a value is even or not. If the value is even, the remainder of value % 2 will be 0. We used the is keyword to make this determination. JavaScript has a nasty behavior when determining equality between two values. In other languages, the double equal sign is used, such as value == 0. In JavaScript, the double equal operator will use type coercion when making this determination. This means that 0 == '0'; in fact, 0 == '' is also true. CoffeeScript avoids this using JavaScript's triple equals (===) operator. 
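A small, hedged example makes this concrete; the variable name is illustrative, and the JavaScript noted in the comments is the kind of output the compiler produces (exact formatting can vary between CoffeeScript versions):

value = 0
console.log value is 0      # compiles to value === 0, so this prints true
console.log value is '0'    # compiles to value === '0', so this prints false
console.log value isnt ''   # compiles to value !== '', so this prints true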
This evaluation compares value and type such that 0 === '0' will be false. We can use if and unless as expression modifiers as well. They allow us to tack if and unless at the end of a statement to make simple one-liners. For example, we can so something like the following: console.log 'Value is even' if value % 2 is 0 Alternatively, we can have something like this: console.log 'Value is odd' unless value % 2 is 0 We can also use the if...then combination for a one-liner if statement, as shown in the following code: if value % 2 is 0 then console.log 'Value is even' CoffeeScript has a switch control statement that performs certain actions based on a list of possible values. The following lines of code show a simple switch statement with four branching conditions: switch task when 1    console.log 'Case 1' when 2    console.log 'Case 2' when 3, 4, 5    console.log 'Case 3, 4, 5' else    console.log 'Default case' In this sample, if the value of a task is 1, case 1 will be displayed. If the value of a task is 3, 4, or 5, then case 3, 4, or 5 is displayed, respectively. If there are no matching values, we can use an optional else condition to handle any exceptions. If your switch statements have short operations, you can turn them into one-liners, as shown in the following code: switch value when 1 then console.log 'Case 1' when 2 then console.log 'Case 2' when 3, 4, 5 then console.log 'Case 3, 4, 5' else console.log 'Default case' CoffeeScript provides a number of syntactic shortcuts to help us be more productive while writing more expressive code. Some people have claimed that this can sometimes make our applications more difficult to read, which will, in turn, make our code less maintainable. The key to highly readable and maintainable code is to use a consistent style when coding. I recommend that you follow the guidance provided by Polar in their CoffeeScript style guide at http://github.com/polarmobile/coffeescript-style-guide. There's more... With CoffeeScript installed, you can use the coffee command-line utility to execute CoffeeScript files, compile CoffeeScript files into JavaScript, or run an interactive CoffeeScript command shell. In this section, we will look at the various options available when using the CoffeeScript command-line utility. We can see a list of available commands by executing the following command in a command or terminal window: coffee --help This will produce the following output: As you can see, the coffee command-line utility provides a number of options. Of these, the most common ones include the following: Option Argument Example Description None None coffee This launches the REPL-interactive shell. None Filename coffee sample.coffee This command will execute the CoffeeScript file. -c, --compile Filename coffee -c sample.coffee This command will compile the CoffeeScript file into a JavaScript file with the same base name,; sample.js, as in our example. -i, --interactive   coffee -i This command will also launch the REPL-interactive shell. -m, --map Filename coffee--m sample.coffee This command generates a source map with the same base name, sample.js.map, as in our example. -p, --print Filename coffee -p sample.coffee This command will display the compiled output or compile errors to the terminal window. -v, --version None coffee -v This command will display the correct version of CoffeeScript. -w, --watch Filename coffee -w -c sample.coffee This command will watch for file changes, and with each change, the requested action will be performed. 
In our example, our sample.coffee file will be compiled each time we save it. The CoffeeScript REPL As we have been, CoffeeScript has an interactive shell that allows us to execute CoffeeScript commands. In this section, we will learn how to use the REPL shell. The REPL shell can be an excellent way to get familiar with CoffeeScript. To launch the CoffeeScript REPL, open a command window and execute the coffee command. This will start the interactive shell and display the following prompt: For example, if we enter the expression x = 4 and press return, we would see what is shownin the following screenshot In the coffee> prompt, we can assign values to variables, create functions, and evaluate results. When we enter an expression and press the return key, it is immediately evaluated and the value is displayed. For example, if we enter the expression x = 4 and press return, we would see what is shown in the following screenshot: This did two things. First, it created a new variable named x and assigned the value of 4 to it. Second, it displayed the result of the command. Next, enter timesSeven = (value) -> value * 7 and press return: You can see that the result of this line was the creation of a new function named timesSeven(). We can call our new function now: By default, the REPL shell will evaluate each expression when you press the return key. What if we want to create a function or expression that spans multiple lines? We can enter the REPL multiline mode by pressing Ctrl + V. This will change our coffee> prompt to a ------> prompt. This allows us to enter an expression that spans multiple lines, such as the following function: When we are finished with our multiline expression, press Ctrl + V again to have the expression evaluated. We can then call our new function: The CoffeeScript REPL offers some handy helpers such as expression history and tab completion. Pressing the up arrow key on your keyboard will circulate through the expressions we previously entered. Using the Tab key will autocomplete our function or variable name. For example, with the isEvenOrOdd() function, we can enter isEven and press Tab to have the REPL complete the function name for us. Debugging CoffeeScript using source maps If you have spent any time in the JavaScript community, you would have, no doubt, seen some discussions or rants regarding the weak debugging story for CoffeeScript. In fact, this is often a top argument some give for not using CoffeeScript at all. In this recipe, we will examine how to debug our CoffeeScript application using source maps. Getting ready The problem in debugging CoffeeScript stems from the fact that CoffeeScript compiles into JavaScript which is what the browser executes. If an error arises, the line that has caused the error sometimes cannot be traced back to the CoffeeScript source file very easily. Also, the error message is sometimes confusing, making troubleshooting that much more difficult. Recent developments in the web development community have helped improve the debugging experience for CoffeeScript by making use of a concept known as a source map. In this section, we will demonstrate how to generate and use source maps to help make our CoffeeScript debugging easier. To use source maps, you need only a base installation of CoffeeScript. How to do it... You can generate a source map for your CoffeeScript code using the -m option on the CoffeeScript command: coffee -m -c employee.coffee How it works... 
Source maps provide information used by browsers such as Google Chrome that tell the browser how to map a line from the compiled JavaScript code back to its origin in the CoffeeScript file. Source maps allow you to place breakpoints in your CoffeeScript file and analyze variables and execute functions in your CoffeeScript module. This creates a JavaScript file called employee.js and a source map called employee.js.map. If you look at the last line of the generated employee.js file, you will see the reference to the source map: //# sourceMappingURL=employee.js.map Google Chrome uses this JavaScript comment to load the source map. The following screenshot demonstrates an active breakpoint and console in Goggle Chrome: Debugging CoffeeScript using Node Inspector Source maps and Chrome's developer tools can help troubleshoot our CoffeeScript that is destined for the Web. In this recipe, we will demonstrate how to debug CoffeeScript that is designed to run on the server. Getting ready Begin by installing the Node Inspector NPM module with the following command: npm install -g node-inspector How to do it... To use Node Inspector, we will use the coffee command to compile the CoffeeScript code we wish to debug and generate the source map. In our example, we will use the following simple source code in a file named counting.coffee: for i in [1..10] if i % 2 is 0    console.log "#{i} is even!" else    console.log "#{i} is odd!" To use Node Inspector, we will compile our file and use the source map parameter with the following command: coffee -c -m counting.coffee Next, we will launch Node Inspector with the following command: node-debug counting.js How it works... When we run Node Inspector, it does two things. First, it launches the Node debugger. This is a debugging service that allows us to step through code, hit line breaks, and evaluate variables. This is a built-in service that comes with Node. Second, it launches an HTTP handler and opens a browser that allows us to use Chrome's built-in debugging tools to use break points, step over and into code, and evaluate variables. Node Inspector works well using source maps. This allows us to see our native CoffeeScript code and is an effective tool to debug server-side code. The following screenshot displays our Chrome window with an active break point. In the local variables tool window on the right-hand side, you can see that the current value of i is 2: The highlighted line in the preceding screenshot depicts the log message. Summary This article introduced CoffeeScript and lays the foundation to use CoffeeScript to develop all aspects of modern cloud-based applications. Resources for Article: Further resources on this subject: Writing Your First Lines of CoffeeScript [article] Why CoffeeScript? [article] ASP.Net Site Performance: Improving JavaScript Loading [article]

Multiserver Installation

Packt
29 Oct 2013
7 min read
(For more resources related to this topic, see here.) The prerequisites for Zimbra Let us dive into the prerequisites for Zimbra: Zimbra supports only 64-bit LTS versions of Ubuntu, release 10.04 and above. If you would like to use a 32-bit version, you should use Ubuntu 8.04.x LTS with Zimbra 7.2.3. Having a clean and freshly installed system is preferred for Zimbra; it requires a dedicated system and there is no need to install components such as Apache and MySQL since the Zimbra server contains all the components it needs. Note that installing Zimbra with another service (such as a web server) on the same server can cause operational issues. The dependencies (libperl5.14, libgmp3c2, build-essential, sqlite3, sysstat, and ntp) should be installed beforehand. Configure a fixed IP address on the server. Have a domain name and a well-configured DNS (A and MX entries) that points to the server. The system clocks should be synced on all servers. Configure the file /etc/resolv.conf on all servers to point at the server on which we installed the bind (it can be installed on any Zimbra server or on a separate server). We will explain this point in detail later. Preparing the environment Before starting the Zimbra installation process, we should prepare the environment. In the first part of this section, we will see the different possible configurations and then, in the second part, we will present the needed assumptions to apply the chosen configuration. Multiserver configuration examples One of the greatest advantages of Zimbra is its scalability; we can deploy it for a small business with few mail accounts as well as for a huge organization with thousands of mail accounts. There are many possible configuration options; the following are the most used out of those: Small configuration: All Zimbra components are installed on only one server. Medium configuration: Here, LDAP and message store are installed on one server and Zimbra MTA on a separate server. Note here that we can use more Zimbra MTA servers so we can scale easier for large incoming or outgoing e-mail volume. Large configuration: In this case, LDAP will be installed on a dedicated server and we will have multiple mailbox and MTA servers, so we can scale easier for a large number of users. Very large configuration: The difference between this configuration and large one is the existence of an additional LDAP server, so we will have a Master LDAP and its replica. We choose the medium configuration; so, we will install LDAP and mailbox in one server and MTA on the other server. Install different servers in the following order (for medium configuration, 1 and 2 are combined in only one step): 1. First of all, install and configure the LDAP server. 2. Then, install and configure Zimbra mailbox servers. 3. Finally, install Zimbra MTA servers and finish the whole installation configuration. New installations of Zimbra limit spam/ham training to the first installed MTA. If you uninstall or move this MTA, you should enable spam/ham training on another MTA as one host should have this enabled to run zmtrainsa --cleanup. To do this, execute the following command: zmlocalconfig -e zmtrainsa_cleanup_host=TRUE Assumptions In this article, we will use some specific information as input in the Zimbra installation process, which, in most cases, will be different for each user. Therefore, we will note some of the most redundant ones in this section. 
Remember that you should specify your own values rather than using the arbitrary values that I have provided. The following is the list of assumptions used:
OS version: ubuntu-12.04.2-server-amd64
Zimbra version: zcs-8.0.3_GA_5664.UBUNTU12_64.20130305090204
MTA server name: mta
MTA hostname: mta.zimbra-essentials.com
Internet domain: zimbra-essentials.com
MTA server IP address: 172.16.126.141
MTA server IP subnet mask: 255.255.255.0
MTA server IP gateway: 172.16.126.1
Internal DNS server: 172.16.126.11
External DNS server: 8.8.8.8
MTA admin ID: abdelmonam
MTA admin password: Z!mbra@dm1n
Zimbra admin password: zimbrabook
LDAP server name: ldap
LDAP hostname: ldap.zimbra-essentials.com
LDAP server IP address: 172.16.126.140
LDAP server IP subnet mask: 255.255.255.0
LDAP server IP gateway: 172.16.126.1
Internal DNS server: 172.16.126.11
External DNS server: 8.8.8.8
LDAP admin ID: abdelmonam
LDAP admin password: Z!mbra@dm1n
To be able to follow the steps described in the next sections, especially each time we need to perform a configuration, the reader should know how to use the vi editor. If not, you should develop your skills with the vi editor or use another editor instead. You can find good basic training for the vi editor at http://www.cs.colostate.edu/helpdocs/vi.html
System requirements
For the various system requirements, please refer to the following link:
http://www.zimbra.com/docs/os/8.0.0/multi_server_install/wwhelp/wwhimpl/common/html/wwhelp.htm#href=ZCS_Multiserver_Open_8.0.System_Requirements_for_VMware_Zimbra_Collaboration_Server_8.0.html&single=true
If you are using another version of Zimbra, please check the correct requirements on the Zimbra website.
Ubuntu server installation
First of all, choose the appropriate language. Choose Install Ubuntu Server and then press Enter. When the installation prompts you to provide a hostname, configure only a one-word hostname; in the Assumptions section, we've chosen ldap for the LDAP and mailstore server and mta for the MTA server. Don't give the fully qualified domain name (for example, mta.zimbra-essentials.com). On the next screen that asks for the domain name, enter zimbra-essentials.com (without the hostname). The hard disk setup is simple if you are using a single drive; however, in the case of a server, that is not the best way to do things. There are many options for partitioning your drives. In our case, we just make a small partition (2x RAM) for swapping, and what remains is used for the whole system. Others may recommend separate partitions for the mailstore, the system, and so on. Feel free to follow the recommendation that suits your IT architecture; use your own judgment here or ask your IT manager. After finishing the partitioning task, you will be asked to enter a username and password; you can choose what you want except admin and zimbra. When asked if you want to encrypt the home directory, select No and then press Enter. Press Enter to accept an empty entry for the HTTP proxy. Choose Install security updates automatically and then press Enter. On the Software Selection screen, you must select the DNS Server and OpenSSH Server options for installation, and no others. This enables remote administration (SSH) and sets up bind9, which is required for a split DNS. bind9 only needs to be installed on one server, which is what we've done in this article. Select Yes and then press Enter to install the GRUB boot loader to the master boot record.
Preparing Ubuntu for Zimbra installation

In order to prepare Ubuntu for the Zimbra installation, the following steps need to be performed:

Log in to the newly installed system and update and upgrade Ubuntu using the following commands:
sudo apt-get update
sudo apt-get upgrade

Install the dependencies as follows:
sudo apt-get install libperl5.14 libgmp3c2 build-essential sqlite3 sysstat ntp

Zimbra recommends (though it is not mandatory) disabling and removing AppArmor:
sudo /etc/init.d/apparmor stop
sudo /etc/init.d/apparmor teardown
sudo update-rc.d -f apparmor remove
sudo aptitude remove apparmor apparmor-utils

Set the static IP for your server as follows:

Open the network interfaces file using the following command:
sudo vi /etc/network/interfaces

Then replace the following line:
iface eth0 inet dhcp

With:
iface eth0 inet static
    address 172.16.126.14
    netmask 255.255.255.0
    gateway 172.16.126.1
    network 172.16.126.0
    broadcast 172.16.126.255

Restart the network process by typing in the following:
sudo /etc/init.d/networking restart

Sanity test! To verify that your network is configured properly, type ifconfig and ensure that the settings are correct. Then try to ping a working website (such as google.com) to confirm connectivity. On each server, pay attention when you set the static IP address (172.16.126.140 for the LDAP server and 172.16.126.141 for the MTA server).

Summary

In this article, we covered the prerequisites for a Zimbra multiserver installation and prepared the environment for installing the Zimbra server in a multiserver setup.

Resources for Article:

Further resources on this subject:
Routing Rules in AsteriskNOW - The Calling Rules Tables [Article]
Users, Profiles, and Connections in Elgg [Article]
Integrating Zimbra Collaboration Suite with Microsoft Outlook [Article]

An Introduction to JSF: Part 1

Packt
30 Dec 2009
6 min read
While the main focus of this article is learning how to use JSF UI components, and not to cover the JSF framework in complete detail, a basic understanding of fundamental JSF concepts is required before we can proceed. Therefore, by way of introduction, let's look at a few of the building blocks of JSF applications: the Model-View-Controller architecture, the JSF request processing lifecycle, managed beans, EL expressions, UI components, converters, validators, and internationalization (I18N). The Model-View-Controller architecture Like many other web frameworks, JSF is based on the Model-View-Controller (MVC) architecture. The MVC pattern promotes the idea of “separation of concerns”, or the decoupling of the presentation, business, and data access tiers of an application. The Model in MVC represents “state” in the application. This includes the state of user interface components (for example: the selection state of a radio button group, the enabled state of a button, and so on) as well as the application’s data (the customers, products, invoices, orders, and so on). In a JSF application, the Model is typically implemented using Plain Old Java Objects (POJOs) based on the JavaBeans API. These classes are also described as the “domain model” of the application, and act as Data Transfer Objects (DTOs) to transport data between the various tiers of the application. JSF enables direct data binding between user interface components and domain model objects using the Expression Language (EL), greatly simplifying data transfer between the View and the Model in a Java web application. The View in MVC represents the user interface of the application. The View is responsible for rendering data to the user, and for providing user interface components such as labels, text fields, buttons, radios, and checkboxes that support user interaction. As users interact with components in the user interface, events are fired by these components and delivered to Controller objects by the MVC framework. In this respect, JSF has much in common with a desktop GUI toolkit such as Swing or AWT. We can think of JSF as a GUI toolkit for building web applications. JSF components are organized in the user interface declaratively using UI component tags in a JSF view (typically a JSP or Facelets page). The Controller in MVC represents an object that responds to user interface events and to query or modify the Model. When a JSF page is displayed in the browser, the UI components declared in the markup are rendered as HTML controls. The JSF markup supports the JSF Expression Language (EL), a scripting language that enables UI components to bind to managed beans for data transfer and event handling. We use value expressions such as #{backingBean.name} to connect UI components to managed bean properties for data binding, and we use method expressions such as #{backingBean.sayHello} to register an event handler (a managed bean method with a specific signature) on a UI component. In a JSF application, the entity classes in our domain model act as the Model in MVC terms, a JSF page provides the View, and managed beans act as Controller objects. The JSF EL provides the scripting language necessary to tie the Model, View, and Controller concepts together. There is an important variation of the Controller concept that we should discuss before moving forward. 
Like the Struts framework, JSF implements what is known as the “Front Controller” pattern, where a single class behaves like the primary request handler or event dispatcher for the entire system. In the Struts framework, the ActionServlet performs the role of the Front Controller, handling all incoming requests and delegating request processing to application-defined Action classes. In JSF, the FacesServlet implements the Front Controller pattern, receiving all incoming HTTP requests and processing them in a sophisticated chain of events known as the JSF request processing lifecycle. The JSF Request Processing Lifecycle In order to understand the interplay between JSF components, converters, validators, and managed beans, let’s take a moment to discuss the JSF request processing lifecycle. The JSF lifecycle includes six phases: Restore/create view – The UI component tree for the current view is restored from a previous request, or it is constructed for the first time. Apply request values – The incoming form parameter values are stored in server-side UI component objects. Conversion/Validation – The form data is converted from text to the expected Java data types and validated accordingly (for example: required fields, length and range checks, valid dates, and so on). Update model values – If conversion and validation was successful, the data is now stored in our application’s domain model. Invoke application – Any event handler methods in our managed beans that were registered with UI components in the view are executed. Render response – The current view is re-rendered in the browser, or another view is displayed instead (depending on the navigation rules for our application). To summarize the JSF request handling process, the FacesServlet (the Front Controller) first handles an incoming request sent by the browser for a particular JSF page by attempting to restore or create for the first time the server-side UI component tree representing the logical structure of the current View (Phase 1). Incoming form data sent by the browser is stored in the components such as text fields, radio buttons, checkboxes, and so on, in the UI component tree (Phase 2). The data is then converted from Strings to other Java types and is validated using both standard and custom converters and validators (Phase 3). Once the data is converted and validated successfully, it is stored in the application’s Model by calling the setter methods of any managed beans associated with the View (Phase 4). After the data is stored in the Model, the action method (if any) associated with the UI component that submitted the form is called, along with any other event listener methods that were registered with components in the form (Phase 5). At this point, the application’s logic is invoked and the request may be handled in an application-defined way. Once the Invoke Application phase is complete, the JSF application sends a response back to the web browser, possibly displaying the same view or perhaps another view entirely (Phase 6). The renderers associated with the UI components in the view are invoked and the logical structure of the view is transformed into a particular presentation format or markup language. Most commonly, JSF views are rendered as HTML using the framework’s default RenderKit, but JSF does not require pages to be rendered only in HTML. 
In fact, JSF was designed to be a presentation technology neutral framework, meaning that views can be rendered according to the capabilities of different client devices. For example, we can render our pages in HTML for web browsers and in WML for PDAs and wireless devices.
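To make the data-binding and event-handling ideas above concrete, the following is a minimal sketch of a managed bean that could back the value expression #{backingBean.name} and the method expression #{backingBean.sayHello} used as examples earlier. The class body is illustrative rather than taken from a real application, and how you register it under the name backingBean (in faces-config.xml or via annotations, depending on your JSF version) is left out here.

// Acts as the Controller for a simple greeting form.
public class BackingBean {

    // Model state bound to a UI component via the value expression #{backingBean.name};
    // JSF calls setName() during the Update Model Values phase.
    private String name;

    public String getName() { return name; }

    public void setName(String name) { this.name = name; }

    // Action method referenced by the method expression #{backingBean.sayHello};
    // it runs in the Invoke Application phase, and the returned outcome string
    // is matched against the navigation rules before the Render Response phase.
    public String sayHello() {
        return "hello";
    }
}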

Scribus: Manipulate and Place Objects in a Layout

Packt
07 Jan 2011
5 min read
Scribus 1.3.5: Beginner's Guide Create optimum page layouts for your documents using productive tools of Scribus. Master desktop publishing with Scribus Create professional-looking documents with ease Enhance the readability of your documents using powerful layout tools of Scribus Packed with interesting examples and screenshots that show you the most important Scribus tools to create and publish your documents.   Resizing objects Well it's time to work on the logo: it's really big and we would like it to be aligned the top part of the card. There are several ways to resize an object or frame. Resizing with the mouse When an object is selected, for example, click on the logo, and you can see a red rectangle outline. This doesn't affect the object properties but only shows that it is selected. There are little red square handles at each corner and at the middle of each side. If the mouse gets over one of these handles, the cursor will change to a double arrow. If you press the left mouse button when the pointer is on one of them and then move the pointer, you'll see the size changing according to the mouse movements. Just release the button when you're done. While resizing the frame an information box appears near the pointer and displays the new width. You will notice that the proportions of the object are not kept, and that the logo is modified. To avoid this, just press the Ctrl key while dragging the handles and you'll see that the logo will be scaled proportionally. Resizing with the Properties Palette As an alternative, you can use the Width and Height fields of the XYZ tab in the Properties Palette. If you need to keep the ratio, be sure that the chain button at the right-hand side of the field is activated. You can set the size in three ways: By scrolling the mouse wheel within the field. Pressing Ctrl or Shift while scrolling will increase or decrease the effect. If you already know the size, you can directly write it. This is mostly the case when you have a graphical charter that defines it or when you're already recreating an existing document. You can also use the small arrows at the right-hand side of the field (the same modifiers apply as described for the mouse wheel). Resizing with the keyboard Another way to resize objects is by using the keyboard. It's useful when you're typing and you need some extra space to put some more text, and that don't want to put your hands on the mouse. In this case, just: Press Esc to enter the Layout mode and leave the Content mode Press Alt and one of the arrows at the same time Press E to go back to Content Edit mode If you do some tests, you'll find that each arrow controls a side: the left arrow affects the size by moving the left-hand side, the right arrow affects the right-hand side, and so on. You can see that with this method the shape can only grow. Have a go hero – vector circle style Since the past two or three years, you might have noticed that shapes are being used in their pure form. For example, check this easy sample and try to reproduce it in the best way you can: copy-paste, moving, and resizing are all you'll need to know. Scaling objects Scaling objects—what can be different here from resizing? Once more, it's on Text Frames that the difference is more evident. Compare the results you can get: The difference is simple: in the top example the content has been scaled with the frame, and in the second only the frame is scaled. So it's scaling the content. 
You can scale a Text Frame (with its content) by pressing the Alt key while resizing with the mouse. The Alt key applies, as always, while the mouse button is pressed during the resizing movement. So, did you notice something missing from our card?

Time for action – scaling the name of our company

Let's say that our company name is "GraphCo" as in the previous image and that we want to add it to the card.

Take the Insert Text Frame tool and draw a small frame on the page. An alternative is to click on the page instead of dragging. Once you've clicked, the Object Size window is displayed and you can set about 12mm as the width and 6mm as the height. Then click on OK to create the frame.
Double-click in the frame and type the name of the company.
Select the text and change the font family to one that you like (here the font is OpenDINSchriftenEngShrift), and decrease the size if the name is not completely visible.
Scale the frame until it is about 50mm wide. We can fix the width later.

What just happened?

Most of the time, you will use simple resizing instead of scaling. When you want the text to fit a given area and you don't want to fiddle endlessly with the font size setting, you may prefer the scaling functionality. The scale options make it very easy to resize the frame and the text visually, without hunting for the best font size in points, which can otherwise take quite a while.

Parallel Dimensions – Branching with Git

Packt
12 Mar 2013
12 min read
(For more resources related to this topic, see here.) What is branching Branching in Git is a function that is used to launch a separate, similar copy of the present workspace for different usage requirements. In other words branching means diverging from whatever you have been doing to a new lane where you can continue working on something else without disturbing your main line of work. Let's understand it better with the help of the following example Suppose you are maintaining a checklist of some process for a department in your company, and having been impressed with how well it's structured, your superior requests you to share the checklist with another department after making some small changes specific to the department. How will you handle this situation? An obvious way without a version control system is to save another copy of your file and make changes to the new one to fit the other department's needs. With a version control system and your current level of knowledge, perhaps you'd clone the repository and make changes to the cloned one, right? Looking forward, there might be requirements/situations where you want to incorporate the changes that you have made to one of the copies with another one. For example, if you have discovered a typo in one copy, it's likely to be there in the other copy because both share the same source. Another thought – as your department evolves, you might realize that the customized version of the checklist that you created for the other department fits your department better than what you used to have earlier, so you want to integrate all changes made for the other department into your checklist and have a unified one. This is the basic concept of a branch – a line of development which exists independent of another line both sharing a common history/source, which when needed can be integrated. Yes, a branch always begins life as a copy of something and from there begins a life of its own. Almost all VCS have some form of support for such diverged workflows. But it's Git's speed and ease of execution that beats them all. This is the main reason why people refer to branching in Git as its killer feature. Why do you need a branch To understand the why part, let's think about another situation where you are working in a team where different people contribute to different pieces existing in your project. Your entire team recently launched phase one of your project and is working towards phase two. Unfortunately, a bug that was not identified by the quality control department in the earlier phases of testing the product pops up after the release of phase one (yeah, been there, faced that!). All of a sudden your priority shifts to fixing the bug first, thereby dropping whatever you've been doing for phase two and quickly doing a hot fix for the identified bug in phase one. But switching context derails your line of work; a thought like that might prove very costly sometimes. To handle these kind of situations you have the branching concept (refer to the next section for visuals), which allows you to work on multiple things without stepping on each other's toes. There might be multiple branches inside a repository but there's only one active branch, which is also called current branch. By default, since the inception of the repository, the branch named master is the active one and is the only branch unless and until changed explicitly. 
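To picture that bug-fix situation on the command line, here is a minimal sketch (the branch name and commit message are ours, not the book's): you branch off for the hot fix, commit it, and then return to your phase-two work without losing your place.

git checkout -b hotfix-phase1     # diverge from the current branch to fix the phase-one bug
# ...edit the affected files...
git add .
git commit -m "Hot fix for the bug found in phase one"
git checkout master               # switch back and carry on with phase-two work

We will walk through creating and switching branches like this, both graphically and on the command line, in the Time for action sections that follow.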
Naming conventions There are a bunch of naming conventions that Git enforces on its branch names; here's a list of frequently made mistakes: A branch name cannot contain the following: A space or a white space character Special characters such as colon (:), question mark (?), tilde (~), caret (^), asterisk (*), and open bracket ([) Forward slash (/) can be used to denote a hierarchical name, but the branch name cannot end with a slash For example, my/name is allowed but myname/ is not allowed, and myname\ will wait for inputs to be concatenated Strings followed by a forward slash cannot begin with a dot (.) For example, my/.name is not valid Names cannot contain two continuous dots (..) anywhere When do you need a branch With Git, There are no hard and fast rules on when you can/need to create a branch. You can have your own technical, managerial, or even organizational reasons to do so. Following are a few to give you an idea: A branch in development of software applications is often used for self learning/ experimental purposes where the developer needs to try a piece of logic on the code without disturbing the actual released version of the application Situations like having a separate branch of source code for each customer who requires a separate set of improvements to your present package And the classic one – few people in the team might be working on the bug fixes of the released version, whereas the others might be working on the next phase/release For few workflows, you can even have separate branches for people providing their inputs, which are finally integrated to produce a release candidate Following are flow diagrams for few workflows to help us understand the utilization of branching: Branching for a bug fix can have a structure as shown the following diagram:     This explains that when you are working on P2 and find a bug in P1, you need not drop your work, but switch to P1, fix it, and return back to P2. Branching for each promotion is as shown in the following diagram:     This explains how the same set of files can be managed across different phases/ promotions. Here, P1 from development has been sent to the testing team (a branch called testing will be given to the testing team) and the bugs found are reported and fixed in the development branch (v1.1 and v1.2) and merged with the testing branch. This is then branched as production or release, which end users can access. Branching for each component development is as shown in the following diagram:     Here every development task/component build is a new independent branch, which when completed is merged into the main development branch. Practice makes perfect: branching with Git I'm sure you have got a good idea about what, why, and when you can use branches when dealing with a Git repository. Let's fortify the understanding by creating a few use cases. Scenario Suppose you are the training organizer in your organization and are responsible for conducting trainings as and when needed. You are preparing a list of people who you think might need business communication skills training based on their previous records. As a first step, you need to send an e-mail to the nominations and check their availability on the specified date, and then get approval from their respective managers to allot the resource. Having experience in doing this, you are aware that the names picked by you from the records for training can have changes even at the last minute based on situations within the team. 
So you want to send out the initial list for each team and then proceed with your work while the list gets finalized.

Time for action – creating branches in GUI mode

Whenever you want to create a new branch using Git Gui, execute the following steps:

Open Git Gui for the specified repository.
Select the Create option from the Branch menu (or use the shortcut keys Ctrl + N), which will give you a dialog box as follows:
In the Name field, enter a branch name, leave the remaining fields at their defaults for now, and then click on the Create button.

What just happened?

We have learned to create a branch using Git Gui. Now let's go through the same process in CLI mode and relate it to the actions we performed in Git Gui.

Time for action – creating branches in CLI mode

Create a directory called BCT on your desktop. BCT is the acronym for Business Communication Training.
Let's create a text file inside the BCT directory and name it participants. Now open the participants.txt file and paste the following lines in it:
Finance team
Charles
Lisa
John
Stacy
Alexander
Save and close the file. Initialize it as a Git repository, add all the files, and make a commit as follows:
git init
git add .
git commit -m 'Initial list for finance team'
Now, e-mail those people, followed by an e-mail to their managers, and wait for the finalized list. While they take their time to respond, you should go ahead and work on the next list, say for the marketing department. Create a new branch called marketing using the following syntax:
git checkout -b marketing
Now open the participants.txt file and start entering the names for the marketing department below the finance team list, as follows:
Marketing team
Collins
Linda
Patricia
Morgan
Before you finish finding the fifth member of the marketing team, you receive a finalized list from the finance department manager stating that he can afford only three people for the training, as the remaining two (Alexander and Stacy) need to take care of other critical tasks. Now you need to alter the finance list and fill in the last member of the marketing department. Before going back to the finance list and altering it, let's add the changes made for the marketing department and commit them:
git add .
git commit -m 'Unfinished list of marketing team'
git checkout master
Open the file and delete the names Alexander and Stacy; save, close, add the changes, and commit with the commit message Final list from Finance team:
git add .
git commit -m "Final list from Finance team"
git checkout marketing
Open the file and add the fifth name, Amanda, for the marketing team; save, add, and commit:
git add .
git commit -m "Initial list of marketing team"
Let's say the names entered for marketing have been confirmed; now we need to merge the two lists, which can be done with the following command:
git merge master
You will get a merge conflict as shown in the following screenshot:
Open the participants.txt file and resolve the merge, then add the changes, and finally commit them.

What just happened?

Without any loss of thought or data, we have successfully adopted the changes to the first list, which came in while working on the second list, using the concept of branching – without one interfering with the other. As discussed, a branch begins its life as a copy of something else and then has a life of its own. Here, by performing git checkout -b branch_name we have created a new branch from the existing position.
Technically, the so-called existing position is termed the position of HEAD, and this type of lightweight branch, which we create locally, is called a topic branch. Another type of branch is the remote branch or remote-tracking branch, which tracks somebody else's work from some other repository. We were already exposed to this while learning the concept of cloning.

The command git checkout -b branch_name is equivalent to executing the following two commands:

git branch branch_name: Creates a new branch of the given name at the given position, but stays in the current branch
git checkout branch_name: Switches you to the specified branch from the current/active branch

When a branch is created using Git Gui, the checkout process is automatically taken care of, which leaves you on the newly created branch. The command git merge branch_name merges the specified branch into the current/active branch to incorporate its content. Note that even after the merge the branch will exist until it's deleted with the command git branch -d branch_name. In cases where you have created and played with a branch whose content you don't want to merge with any other branch and simply want to delete the entire branch, use -D instead of -d in the command mentioned earlier.

To view a list of the branches available in the repository, use the command git branch as shown in the following screenshot:

As shown in the screenshot, the branches available in our BCT repository right now are marketing and master, with master being the default branch when you create a repository. The branch with a star in front of it is the active branch. To ease the process of identifying the active branch, Git displays the active branch in brackets (branch_name), as indicated with an arrow.

By performing this exercise we have learned to create branches, add content to them, and merge them when needed. Now, to see visually how the history has shaped up, open gitk (by typing gitk in the command-line interface or by selecting Visualize All Branch History from the Repository menu of Git Gui) and view the top left corner. It will show a history like the one in the following screenshot:

Homework

Try to build a repository along the lines of the last flow diagram in the When do you need a branch section. Have one main line branch called development and five component development branches, which should be merged in after the customizations are made to their source.
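If you want a starting point for the homework, the following is one possible shape of the solution as a command-line sketch (the repository, file, and branch names are ours; repeat the component steps for component2 through component5):

git init bct-homework
cd bct-homework
git commit --allow-empty -m "Start the main development line"   # gives the development branch a base commit
git branch development

# Build one component on its own branch, starting from development
git checkout -b component1 development
echo "work for component 1" > component1.txt
git add .
git commit -m "Build component 1"

# Merge the finished component back into the main development branch
git checkout development
git merge component1
git branch -d component1      # the branch can be removed once its work is merged

Running git branch after each merge, or opening gitk, lets you confirm that the history matches the flow diagram from the When do you need a branch section.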

Editing attributes

Packt
19 Sep 2013
4 min read
(For more resources related to this topic, see here.) There are three main use cases for attribute editing. First, we might want to edit the attributes of one specific feature, for example, to fix a wrong name. Second, we might want to edit attributes of a group of features. Or third, we might want to change the attributes of all the features within a layer. All these use cases are covered by functionality available through the attribute table. We can access it by going to Layer | Open Attribute Table, the Open Attribute Table button present in the Attributes toolbar, or in the layer name context menu. To change attribute values, we always have to first enable editing. Then we can double-click on any cell in the attribute table to activate the input mode. Clicking on Enter confirms the change, but to save the new value permanently, we have to also click on the Save Edit(s) button or press Ctrl + S. In the bottom-right corner of the attribute table dialog, we can switch from the table to the form view, as shown in the following screenshot, and start editing there. Another option for editing the attributes of one feature is to open the attribute form directly by clicking on the feature on the map using the Identify tool. By default, the Identify tool displays the attribute values in the read mode, but we can enable Open feature form if a single feature is identified by going to Settings | Options | Map Tools. In the attribute table, we also find tools to handle selections (from left to right starting at the third button): Delete selected features, Select by expression, Cancel the selection, Move selected features to the top of the table, Invert the selection, Pan to the selected features, Zoom to the selected features, and Copy the selected features. Another way to select features in the attribute table is to click on the row number. The next two buttons allow us to add and remove columns. When we click on the delete column button, we get a list of columns to choose from. Similarly, the add columns button brings up a dialog to specify the name and data type of the new column. If we want to change attributes of multiple or all features in a layer, editing them manually usually isn't an option. That is what Field Calculator is good for. We can access it using the Open field calculator button in the attribute table or using the Ctrl + I keys. In Field Calculator, we can choose to only update selected features or to update all the features in the layer. Besides updating an existing field, we can also create a new field. The function list is the same one we already explored when we selected features by expression. We can use any of these functions to populate a new field or update an existing one. Here are some example expressions that are used often: We can create an id column using the $rownum function, which populates a column with the row numbers as shown in the following screenshot Another common use case is to calculate line length or polygon area using the geometry functions $length and $area respectively Similarly, we can get point coordinates using $x and $y If we want to get the start or end points of a line, we can use xat(0) and yat(0) or xat(-1) and yat(-1) Summary Thus, in this article we have learned how to edit the attributes in QGIS. Resources for Article : Further resources on this subject: Geo-Spatial Data in Python: Working with Geometry [Article] Web Frameworks for Python Geo-Spatial Development [Article] Plotting Geographical Data using Basemap [Article]

An Overview of Microsoft Sure Step

Packt
03 Feb 2011
9 min read
  Microsoft Dynamics Sure Step 2010 The smart guide to the successful delivery of Microsoft Dynamics Business Solutions Learn how to effectively use Microsoft Dynamics Sure Step to implement the right Dynamics business solution with quality, on-time and on-budget results. Leverage the Decision Accelerator offerings in Microsoft Dynamics Sure Step to create consistent selling motions while helping your customer ascertain the best solution to fit their requirements. Understand the review and optimization offerings available from Microsoft Dynamics Sure Step to further enhance your business solution delivery during and after go-live. Gain knowledge of the project and change management content provided in Microsoft Dynamics Sure Step. Familiarize yourself with the approach to adopting the Microsoft Dynamics Sure Step methodology as your own. Includes a Foreword by Microsoft Dynamics Sure Step Practitioners.         The success of a business solution, and specifically an Enterprise Resource Planning (ERP) and Customer Relationship Management (CRM) solution, isn't solely about technology. Experience tells that it is as much about the people and processes as it is about the software. Software is often viewed as the enabler, with the key to success lying in how the solution is implemented and how the implementations are managed. The transformation from the technological solution being the point of emphasis in the early days of the business software era to the solution becoming an enabler for business transformation has only been furthered by the ERP/CRM reports by independent organizations that decry deployment failures in great detail. What stands out very clearly in these reports is the fact that ERP and CRM solution delivery is characterized by uncertainties and risks. Service providers have to balance time and budget constraints, while delivering the business value of the solution to their customers. Customer organizations need to understand that their involvement and collaboration is critical for the success of the delivery. They will need to invest time, provide relevant and accurate information, and manage the organizational changes to ensure that the solution is delivered as originally envisioned. The need for seamless implementation and deployment of business software is even more accentuated in the current state of the economy with enterprise software sales going through a prolonged period of negative to stagnant growth over the last several quarters. Sales cycles are taking longer to execute, especially as the customers take advantage of the buyer's market and force software providers to prove their solution in the sales cycle before signing off on the purchase. In this market, a good solution delivery approach is critical. We have consistently heard words such as in-scope, within-budget, and on-time being tossed around in the industry. Service providers are still facing these demands; however, in the current context, budgets are tighter, timeframes are shorter, and the demand for a quick return on investment is becoming increasingly critical. Microsoft has always understood that the value of the software is only as good as its implementation and adoption. Accordingly, Microsoft Dynamics Sure Step was developed as the methodology for positioning and deploying the Microsoft Dynamics ERP/CRM suite of products—AX, CRM, GP, NAV, and SL. In the vision of Sure Step, project management is not the prerogative of the project manager only. 
Sure Step is a partnership of consulting and customer resources, representing a very important triangulation of the collaboration between the software vendor, implementer, and customer, with the implementation methodology becoming a key element of the implemented application. The business solutions market The 2010 calendar year began with the global economy trying to crawl out of a recession. Still, businesses continued to invest in solutions, to leverage the power of information technology to drive down redundancy and waste in their internal processes. This was captured in a study by Gartner of the top industry CIOs, published in their annual report titled Gartner Perspective: IT Spending 2010. In spite of the recessionary pressures, organizations continued to list improving business processes, reducing costs, better use of information, and improving workforce effectiveness as their priorities for IT spending. The Gartner study listed the following top 10 business priorities based on 2009 findings: Business process improvement Reducing enterprise costs Improving enterprise workforce effectiveness Attracting and retaining new customers Increasing the use of information/analytics Creating new products or services (innovation) Targeting customers and markets more effectively Managing change initiatives Expanding current customer relationships Expanding into new markets and geographies The Gartner study listed the following top 10 technology priorities based on 2009 findings: Business intelligence Enterprise applications (ERP, CRM, and others) Servers and storage technologies (virtualization) Legacy application modernization Collaboration technologies Networking, voice, and data communications Technical infrastructure Security technologies Service-oriented applications and architecture Document management The source document for the previous two lists is: Gartner Executive Programs – CIO Agenda 2010. These are also some of the many reasons that companies, regardless of scale, implement ERP and CRM software, which again is evident from the top 10 technology priorities of the CIOs listed above. These demands, however, happen to be articulated even more strongly by small and medium businesses. For these businesses, an ERP/CRM solution can be a sizable percentage of their overall expense outlay, so they have to be especially vigilant about their spending—they just can't afford time and cost overruns as are sometimes visible in the Enterprise market. At the same time, the deployment of rich functionality software must realize a significant and clear advantage for their business. These trends are picked up and addressed by the IT vendors, who are constantly seeking and exploring new technological ingredients to address the Small-to-Medium Enterprise market demands. The importance of a methodology Having a predictable and reliable methodology is important for both the service provider (the implementer) and the users of the solution (the customer). This is especially true for ERP/CRM solution deployment, which can happen at intervals of anywhere from a couple of months to a couple of years, and the implementation team often comprises multiple individuals from the service provider and the customer. Therefore, it is very important that all the individuals are working off the same sheet of music, so to speak. 
Methodology can be defined as: The methods, rules, and hypothesis employed by, and the theory behind a given discipline or The systematic study of the methods and processes applied within the discipline over time Methodology can also be described as a collection of theories, concepts, and processes pertaining to a specific discipline or field. Rather than just a compilation of methods, methodology refers to the scientific method and the rationale behind it, as well as the assumptions underlying the definitions and components of the method. The definitions we just saw are particularly relevant to the design/architecture of a methodology for ERP/CRM and business solutions. For these solutions, the methodology should not just provide the processes, but it should also provide a connection to the various disciplines and roles that are involved in the execution of the methodology. It should provide detailed guidance and assumptions for each of the components, so that the consumers of the methodology can discern to what extent they will need to employ all or certain aspects of it on a given engagement. As such, a solid approach provides more than just a set of processes for solution deployment. For the service provider, a viable methodology can provide: End-to-end process flows for solution development and deployment, creating a repeatable process leading to excellence in execution Ability to link shell and sample templates, reference architecture, and other similar documentation to key activities A structure for creating an effective Knowledge Management (KM) system, facilitating easier harvesting, storing, retrieval, and reuse of content created by the field on customer engagements Ability to develop a rational structure for training of the consulting team members, including ramp-up of new employees Ability to align the quality assurance approach to the deployment process— important in organizations that use an independent QA process as oversight for consulting efforts Ability to develop a structured estimation process for solution development and deployment Creation of a structure for project scope control and management, and a process for early risk identification and mediation For the customer, a viable methodology can provide: Clear end-to-end process flows for solution development that can be followed by the customer's key users and Subject Matter Experts (SMEs) assigned to the project Consistent terminology and taxonomy, especially where the SMEs may not have had prior experience with implementing systems of such magnitude, thus making it easier for everybody to be on the same page Ability to develop a good Knowledge Management system to capture lessons learned for future projects/upgrades Ability to develop a rational structure and documentation for end-user training and new employee ramp-up Creation of a structure for ensuring that the project stays within scope, including a process for early risk identification and mediation In addition to the points listed here, having a "full lifecycle methodology" provides additional benefits in the sales-to-implementation continuum. 
The benefits for the service providers include: Better alignment of the consulting teams with the sales teams A more scientific deal management and approval process that takes into account the potential risks Better processes to facilitate the transfer of customer knowledge, ascertained during the sales cycle, to the solution delivery team Ability to show the customer how the service provider has "done it before" and effectively establish trust that they can deliver the envisioned solution Clearly illustrating the business value of the solution to the customer Ability to integrate multiple software packages into an overall solution for the customer Ability to deliver the solution as originally envisioned within scope, on time, and within established budget The benefits for the customers include: Ability to understand and articulate the business value of the solution to all stakeholders in the organization Ensuring that there is a clear solution blueprint established Ensuring that the solution is delivered as originally envisioned within scope, on time, and within established budget Ensuring an overall solution that can integrate multiple software packages In summary, a good methodology creates a better overall ecosystem for the organizations. The points noted in the earlier lists are an indication of some of the ways that the benefits are manifested; as you leverage methodologies in your own organization, you may realize other benefits as well.  