
How-To Tutorials - Cloud Computing


Introduction to Web Experience Factory

Packt
24 Sep 2012
20 min read
What is Web Experience Factory?

Web Experience Factory (WEF) is a rapid application development tool that applies software automation technology to construct applications. Using WEF, developers can quickly create a single application that can be deployed to a variety of platforms, such as IBM WebSphere Application Server and IBM WebSphere Portal Server, which in turn can serve your application to standard browsers, mobile phones, tablets, and so on.

Web Experience Factory is the new product derived from the former WebSphere Portlet Factory (WPF) product. In addition to creating portal applications, WEF has always had the capability of creating exceptional web applications. In fact, the initial product developed by Bowstreet, the company which originally created WPF, was meant to create web applications, well before the dawn of portal technologies. As the software automation technology developed by Bowstreet could easily be adapted to produce portal applications, it was tailored for the portal market. This same adaptability has now been expanded to enable WEF to target different platforms and multiple devices.

Key benefits of using Web Experience Factory for portlet development

While WEF can target several platforms, we will be focusing on IBM WebSphere Portal applications. The following are a few benefits of WEF for the portal space:

- Significantly improves productivity
- Makes portal application development easier
- Contains numerous components (builders) to facilitate portal application development
- Insulates the developer from the complexity of low-level development tasks
- Automatically handles the deployment and redeployment of the portal project (WAR file) to the portal
- Reduces portal development costs

The development environment

Before we discuss the key components of WEF, let's take a look at the development environment. From a development environment perspective, WEF is a plugin that is installed into either Eclipse or IBM Rational Application Developer for WebSphere. As a plugin, it uses all the standard features of these development environments while providing its own perspective and views to enable the development of portlets with WEF.

Let's explore the WEF development perspective in Eclipse. The WEF development environment is commonly referred to as the designer. While we explore this perspective, you will read about new WEF-specific terms. In this section, we will neither define nor discuss them, but don't worry; later on in this article, you will learn all about these new WEF terms. The following screenshot shows the WEF perspective with its various views and panes.

The top-left pane, identified by number 1, shows the Project Explorer tab. In this pane, you can navigate the WEF project, which has a structure similar to a JEE project; WEF adds a few extra folders to host the WEF-specific files. Box 1 also contains a tab to access the Package Explorer view, which enables you to navigate the several directories containing the .jar files. These views can be arranged in different ways within this Eclipse perspective.

The area identified by number 2 shows the Outline view. This view holds the builder call list, along with two important icons. The first one is the "Regeneration" button: the first icon from left to right, immediately above the builder call table header. Honestly, we do not know what the graphical image of this icon is supposed to convey. Some people say it looks like a candle, others say it looks like a chess pawn. We have even heard people refer to it as the "Fisher-Price" icon, because it looks like the Fisher-Price children's toy. The button right next to the Regeneration button gives access to the Builder palette, from which you can select all the builders available in WEF.

Box number 3 presents the panes available to work on several areas of the designer. The screenshot inside this box shows the Builder Call Editor. This is the area where you will be working with the builders you add to your model. Lastly, box number 4 displays the Applied Profiles view. This view displays content only when the open model contains profile-enabled inputs, which is not the case in this screenshot.

The right-hand side pane contains four tabs—Source, Design, Model XML, and Builder Call Editor. The Source tab exposes two panes: the left-hand side pane contains the WebApp tree, and the right-hand side pane contains the source code for elements selected from the WebApp tree. Although it is not our intention to define the WEF elements in this section, it is important to make an exception and explain what the WebApp tree is. The WebApp tree is a graphical representation of your application. This tree represents an abstract object identified as the WebApp object. As you add builders to your models or modify them, these builders add or modify elements in this WebApp object. You cannot modify this object directly except through builders. The Source tab shows the source code for the selected element in the WebApp tree, that is, the code WEF has written and the code to be compiled.

The Design pane displays the user interface elements placed on a page, either directly or as they are created by builders. It enables you to get a good sense of what you are building from a UI perspective. The Model XML tab shows the content of a model represented as an XML structure; the highlighted area in the right-hand side pane shows the XML representation of the sample_PG builder, which has been selected in the WebApp tree. We will discuss the next tab, Builder Call Editor, when we address builders in the next section.

Key components of WEF—builders, models, and profiles

Builders, models, and profiles comprise the key components of WEF. These three components work together to enable software automation through WEF. Here, we will explain and discuss in detail what they are and what they do.

Builders

Builders are at the core of WEF technology. There have been many definitions for builders; our favorite is the one that defines builders as "software components, which encapsulate design patterns". Let's look at the paradigm of software development as it maps to software patterns. Ultimately, everything a developer does in terms of software development can be defined as patterns. There are well-known patterns, simple and complex patterns, well-documented patterns, and patterns that have never been documented. Even simple, tiny code snippets can be mapped to patterns. Builders are the components that capture these countless patterns in a standard way and present them to developers in an easy, common, and user-friendly interface. This way, developers can use and reuse these patterns to accomplish their tasks.

Builders enable developers to put together these encapsulated patterns in a meaningful fashion, in such a way that they become full-fledged applications which address business needs. In this sense, developers can focus on quickly and efficiently building business solutions instead of on low-level, complex, and time-consuming development activities. Through the builder technology, senior and experienced developers at the IBM labs can identify, capture, and code these countless patterns into reusable components. When you use builders, you are using code that has not only been developed by a group which has already put a lot of thought and effort into the development task, but has also been extensively tested by IBM. We refer to IBM here because they are the makers of WEF; overall, though, any developer can create builders.

Simple and complex builders

Just as development activities can range from very simple to very complex tasks, builders can also range from very simple to very complex. Simple builders can perform tasks such as placing an attribute on a tag, highlighting a row of a table, or creating a simple link. Equally, there are complex builders which perform complex and extensive tasks. These builders can save WEF developers days' worth of work, troubleshooting, and aggravation. For instance, there are builders for accessing, retrieving, and transforming data from backend systems, builders to create tables and forms, and hundreds of others.

The face of builders

The following screenshot shows a Button builder in the Builder Editor pane. All builders have a common interface, which enables developers to provide builder input values. The builder input values define several aspects of how the application code will be generated by this builder. Through the Builder Editor pane, developers define how a builder will contribute to the process of creating your application, be it a portlet, a web application, or a widget.

Any builder contains required and optional builder inputs. The required inputs are identified with an asterisk symbol (*) in front of their names. For instance, the screenshot representing the Button builder shows two required inputs—Page and Tag. As the screenshot also shows, builder input values can be provided in several ways. The following table describes the items identified by the numbered labels:

| Label number | Description | Function |
| --- | --- | --- |
| 1 | Free-form inputs | Enable the developer to type in any appropriate value. |
| 2 | Drop-down controls | Enable the developer to select values from a predefined list, which is populated based on the context of the input. This type of input is dynamically populated, with possible influence from other builder inputs, other builders in the same model, or even other aspects of the current WEF project. |
| 3 | Picker controls | Enable users to make a selection from multiple source types such as variables, action list builders, methods defined in the current model, public methods defined in Java classes and exposed through the Linked Java Class builder, and so on. The values selected through the picker controls can be evaluated at runtime. |
| 4 | Profiling assignment button | Enables developers to profile-enable the value for this input. In other words, through this button, developers indicate that the value for this input will come from a profile to be evaluated at regeneration time. |

Through these controls, builders make the modeling process faster while reducing errors, because only valid options, within the proper context, are presented. Builders are also adaptive: inputs, controls, and builder sections are presented, hidden, or modified depending upon the resulting context that is being automatically built by the builder. This capability not only guides developers to make the right choices, but also helps them become more productive.

Builder artifacts

We have already mentioned that builders either add artifacts to or modify existing artifacts in the WebApp abstract object. In this section, we will show you an instance of these actions. Rather than walking you through a sample, we will show this process through a few screenshots from a model. Here, we will simulate the action of adding a button to a portlet page.

In WEF, it is common to start portlet development with a plain HTML page, which contains mostly placeholder tags. These placeholders, usually represented by the names of span or div tags, indicate locations where code will be added by the appropriately selected builders. The expression "code will be added" can be quite encompassing: builders can create simple HTML code, JavaScript code, stylesheet values, XML schemas, Java code, and so on. In this case, we mean that builders can create any code required to carry out the task or tasks for which they have been designed.

In our example, we will start with a plain and simple HTML page, which is added to a model either through a Page builder or an Imported Page builder. Our sample page, sample_PG, contains nothing more than simple HTML with a named placeholder span tag called sampleButton (a sketch of such a page is shown below). Now, let's use a Button builder to add a button artifact to this sample_PG page, more specifically to the sampleButton span tag. Assume that this button performs some action through a Method builder (Java Method), which in turn returns the same page. The following screenshot shows what the builder looks like after we provide all the inputs described ahead.

Let's discuss the builder inputs we have provided. The first input is the builder name. Although this input is not required, you should always name your builders, following some naming convention; if you do not name your builders, WEF will name them for you. The following table shows some sample names, using a convention that adds an underscore followed by two or three letters to identify the builder type:

| Builder type | Builder name |
| --- | --- |
| Button | search_BTN |
| Link | details_LNK |
| Page | search_PG |
| Data Page | searchCriteria_DP |
| Variable | searchInputs_VAR |
| Imported Model | results_IM |
| Model Container | customer_MC |

There are several schools of thought regarding naming conventions, and some practitioners like to debate in favor of one or another. Regardless of the naming convention you adopt, you need to make sure that the same convention is followed by the entire development team.

The next inputs relate to the location where the content created by this builder will be placed. For User Interface builders, you need to specify which page will be targeted. You also need to specify, within that page, the tag with which this builder will be associated.
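For reference, a minimal placeholder page of the kind just described might look like the following sketch (illustrative only; the actual Page Contents (HTML) shown in the article's screenshots is not reproduced here):

```html
<!-- Hypothetical sample_PG page contents: a bare HTML page whose named span
     placeholders mark the locations where builders will later generate markup -->
<html>
  <body>
    <span name="sampleButton"></span>
  </body>
</html>
```

Given such a page, the Page and Tag inputs discussed next simply point the Button builder at sample_PG and at the sampleButton placeholder.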
Besides specifying a tag by name, you can also use other location techniques to define this location. In our simple example, we select the sample_PG page. If you were working on a sample and clicked on the drop-down control, you would see that only the available pages are displayed as options to choose from. When a page is not selected, the Tag input does not display any value; that is because builders present only valid options based on the inputs you have previously provided.

For this example, we select sample_PG for the Page input. After doing so, the Tag input is populated with all the HTML tags available on this page, and we select the sampleButton tag. This means that the content created by this builder will be placed at the same location where this tag currently exists. It replaces the span tag type, but preserves the other attributes that make sense for the builder being added.

Another input is the label value to be displayed. Once again, you can type in a value, select a value from the picker, or specify a value to be provided by a profile. In this sample, we have typed in Sample Button. For the Button builder, you also need to define the action to be performed when the button is clicked. Here too, the builder presents only the valid actions from which we can select one. We have selected Link to an action. For the Action input, we select sample_MTD, the previously mentioned method, which performs some action and returns the same page. Now that the input values to this Button builder have been provided, we will inspect the content created by this builder.

Inspecting content created by builders

The builder call list has a small gray arrow icon in front of each builder type. Clicking on this icon causes the designer to show the content and artifacts created by the selected builder. By clicking on the highlighted link, the designer displays the WebApp tree in its right-hand side pane. By expanding the Pages node, you can see that one of the nodes is sample_BTN, which is our button. By clicking on this element, the Source pane displays the sample page with which we started (if necessary, click on the Source tab at the bottom of the page to expose the Source pane). Once the WebApp tree is shown, clicking on the sample_BTN element makes the right-hand side pane highlight the content created by the Button builder we have added.

Let's compare this code against the original code shown by the screenshot depicting the Sample Page builder named sample_PG, which contains simple HTML tags defined in the Page Contents (HTML) input. The first difference we notice is that after adding the Button builder, our initial simple HTML page became a JSP page, as denoted by the numerous JSP notations on the page. We can also see that the initial sampleButton span tag has been replaced by an input tag of the button type. This tag includes an onClick JavaScript event, and the code for this JavaScript event is provided by a JSP scriptlet created by the Button builder.

As we learned in this section, builders add diverse content to the WebApp abstract object. They can add artifacts such as JSP pages, JavaScript code, Java classes, and so on, or they can modify content already created by other builders.
In summary, builders add or modify any content or artifacts needed to carry out their purpose, according to the design pattern they represent.

Models

Another important element of WEF is the model. A model is a container for builder calls; the builder call list, that is, the list of builders added to a model, is maintained in an XML file with a .model extension. The Outline view of the WEF perspective displays the list of builders that have been added to a model, and the designer can show the list of builder calls contained in a sample model.

To see what a builder call looks like inside the model, you can click on the gray arrow icon in front of the builder type and inspect it in the Model XML tab. For instance, consider the Button builder call inside the sample model described in the previous section. A builder call is stored in the model file as one of the XML elements found in the BuilderCallList node, which in turn is a child of the Model node; extra information is also added at the end of the file. This XML model file contains the input names and the values for each builder you have added to the model. WEF operates on this information, and on the set of instructions contained in these XML elements, to build your application by invoking a process known as generation or regeneration, which produces the executable version of your application, be it a portlet, a web application, or a widget. We will discuss regeneration further at the end of this article.

It is important to notice that models contain only the builder call list, not the builders themselves. Although the terms builder call and builder are used interchangeably most of the time, technically they are different. A builder call is an entry in your model, which identifies the builder by the builder call ID and then provides inputs to that builder. Builders are the elements or components that actually perform the tasks of interacting with the WebApp object; they consist of a builder definition file (an XML file) and a Java class. A builder can optionally have a coordinator class, which coordinates the behavior of the builder interface you interact with through the Builder Editor.

Modeling

Unlike a traditional development process built on pure Java, JSP, and JavaScript coding, WEF enables developers to model their application. By modeling, WEF users define the instructions for how the tool will build the final intended application. The time-consuming, complex, and tedious coding and testing tasks have already been done by the creators of the builders; it is left to the WEF developer to select the right builders and provide the right inputs to those builders in order to build the application.

In this sense, WEF developers are actually modelers. A modeler works at a certain level of abstraction, without writing or interacting directly with the executable code. This is not to say that WEF developers never have to understand or write some Java or JavaScript code. It means that, when some code writing is necessary, the amount and complexity of this code is reduced, as WEF does the bulk of the coding for you.

There are many advantages to the modeling approach. Besides significantly speeding up the development process, it also manages changes to the underlying code without requiring you to deal with low-level coding: you only change the instructions that generate your application. WEF handles all the intricacies and dependencies for you. In the software development lifecycle, requests to change requirements and functionality after implementation are very common; it is a given that your application will change after you have coded it. So, be proactive by utilizing a tool which efficiently and expeditiously handles these changes. WEF has been built with the right mechanisms to handle change request scenarios gracefully, because changing the instructions that build the code is much faster and easier than changing the code itself.

Code generation versus software automation

While software has been widely used to automate countless processes in countless domains, very little has been done to facilitate and improve software automation itself. More than a tool for building portlets, WEF exploits the largely dormant paradigm of software automation. It is beyond the scope of this book to discuss software automation in detail, but suffice it to say that builders, profiles, and the regeneration engine enable the automation of the process of creating software. In the particular case of WEF, the automation process targets web applications and portlets, but it keeps expanding to other domains, such as widgets and mobile phones. WEF is not a code generation tool. While code generation tools utilize a static process mostly based on templates, WEF implements software automation to achieve not only high productivity but also variability.

Profiles

In the development world, the word profile can signify many things. From the WEF perspective, a profile represents a means of providing variability to an application. WEF also enables profiles or profile entries to be exposed to external users. In this way, external users can modify predefined aspects of the application without assistance from development or redeployment of the application. The externalized elements are the builder input values. By externalizing builder input values, lines of business, administrators, and even users can change these values, causing WEF to serve a new flavor of the application.

Profile names, profile entry names (which map to the builder inputs), and their respective values are initially stored in an XML file with a .pset extension. This file is part of your project and is deployed with it. Once deployed, it can be stored in other persistence mechanisms, for example a database. WEF provides an interface that enables developers to create profiles, define entries and their initial values, and define the mechanism that selects which profile to use at runtime. By selecting a profile, all the entry values associated with that profile are applied to your application, providing an unlimited level of variability. Variability can be driven by personalization, configuration, LDAP attributes, or roles, or it can even be set explicitly through Java methods.

The following screenshot shows the Manage Profile tab of the Profile Manager, which enables you to manage every aspect related to profile sets. The top portion of the screenshot lists the three profiles available in this profile set, and the bottom part shows the profile entries and their respective values for the selected profile.
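To make the model file format described in the Models section a little more tangible, a builder call entry in a .model file might look roughly like the following sketch (the element and input names are approximations based on the structure described in this article, not an exact copy of the WEF schema):

```xml
<!-- Approximate shape of a .model file: a Model node containing a BuilderCallList,
     where each builder call records the builder it refers to and its input values -->
<Model>
  <BuilderCallList>
    <BuilderCall id="sample_BTN" type="Button">
      <Input name="Page">sample_PG</Input>
      <Input name="Tag">sampleButton</Input>
      <Input name="Label">Sample Button</Input>
      <Input name="Action">sample_MTD</Input>
    </BuilderCall>
    <!-- ... additional builder calls ... -->
  </BuilderCallList>
</Model>
```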


Creating your first VM using vCloud technology

Packt
22 Apr 2013
14 min read
Step 1 – Understanding vCloud resources

This step introduces how resources work in vCloud Director. The following diagram shows how resources are managed in vCloud and how they work together. The diagram is simplified and doesn't show all the vCloud properties; however, it is sufficient to explain the resource design.

PvDC

A Provider Virtual Data Center (PvDC) represents a portion of all the virtual resources of a vSphere environment. It takes all the CPU and memory resources from a given resource pool or cluster and presents them to the vCloud as consumable resources. A typical cluster or resource pool contains multiple datastores, storage profiles, and networks, as well as multiple hosts; a PvDC automatically gains access to these resources. It is basically the link between vSphere and the vCloud world.

Org

An organization (Org) is a container that holds users and groups and regulates their access to the vCloud resources. Users can be either locally created or imported from Lightweight Directory Access Protocol (LDAP) or Active Directory (AD); however, groups can only be imported. It is possible to assign different LDAP, e-mail, and notification settings to each organization. This is one of the most powerful features of vCloud Director. Its usage becomes clear if you think about a public cloud model: you could link different organizations to different customers' LDAP/AD and e-mail systems (assuming a VPN tunnel between vCloud and the customer network), extending the customer's sphere of influence into the cloud. If a customer doesn't have or doesn't want to use their own LDAP/AD, they can make use of the local user function.

OvDC

An Organizational Virtual Data Center (OvDC) combines an Org with a PvDC: the Org defines who can do what, and the PvDC defines where it happens. Each OvDC is assigned one of three allocation models as well as storage profiles. The three allocation models provide different methods of resource allocation:

- Reservation pool: This allocates a fixed amount of resources (in GHz and GB) from the PvDC to the OvDC. This model is good if the users want to define a per-VM resource allocation; only this model enables the Resource Allocation tab in VMs.
- Allocation pool: This is similar to the reservation pool; however, you can also specify how many resources are guaranteed (reserved) for this OvDC. This model is good for overcommitting resources.
- Pay-as-you-go (PAYG): This is similar to the allocation pool; however, resources are only consumed while vApps/VMs are running. The other models reserve resources even if the OvDC doesn't contain any running VMs. This model is useful if the required amount of resources is unknown or fluctuating.

There are different settings to choose from for each model:

| | Allocation Pool | PAYG | Reservation Pool |
| --- | --- | --- | --- |
| CPU allocation (GHz) | Yes | Yes and unlimited | Yes |
| CPU resources guaranteed (percentage) | Yes | Yes | NA |
| vCPU max speed (GHz) | Yes | Yes | NA |
| Memory allocation (GB) | Yes | Yes and unlimited | Yes |
| Memory resources guaranteed (percentage) | Yes | Yes | NA |
| Maximum number of VMs (number or unlimited) | Yes | Yes | Yes |

vApp

You might have encountered the name before in vCenter; however, the vApp of vCD and the vApp of vCenter are totally different beasts. vApps in vCenter are essentially resource pools with extras, such as a startup sequence. A vApp in vCD is a container that exists only in vCD. However, it can also contain isolated networks and allows the configuration of a start-and-stop sequence for its member VMs. In addition to all this, you can allow a vApp to be shared with other members of your organization.

VM

The most atomic part of vCD is the VM. VMs live in vApps. Here you can configure all the settings you are familiar with from vSphere, and some more: you are able to add, update, and delete virtual hardware as well as define guest customization.

Step 2 – Connecting vCenter to vCD

Let's start with the process of assigning resources to the vCloud. The first step is to assign a vCenter to this vCD installation (for future reference, one vCD installation can use multiple vCenters). As a starting point for steps 2 to 5, we will use the home screen, as shown in the following screenshot.

On the Home screen (or, if you like, the welcome screen), click on the first link, Attach a vCenter. A pop-up will ask you for the following details of your vCenter:

- Host name or IP address: Enter the fully qualified domain name (FQDN) or IP address of your vCenter
- Port number: Port 443 is the correct default port
- User name and Password: Enter the username and password of an account that has administrator rights in vCenter
- vCenter name: Give your vCenter a name by which you would like to identify it in vCloud
- Description: A description isn't required; however, it doesn't hurt either
- vSphere Web Client URL: Enter the URL of the web vCenter client, https://(FQDN or IP)/vsphere-client

After vCD has accepted the information and contacted vCenter, we need to enter the details of the vShield installation (from the Step 2 – downloading vCloud Director subsection in the Installation section). Enter the FQDN or IP address of the vShield VM; if you didn't change the default password, you can log in with admin as the ID and default as the password. vCD contacts vShield, and that's that. You have now connected vCD to vCenter and are able to use the resources presented by this vCenter in your vCloud.

Step 3 – Creating a PvDC

Now we will create our first PvDC and assign resources to our vCloud. To create a new PvDC, click on the second link, Create a Provider VDC (refer to the first image in the Step 2 – connecting vCenter to vCD subsection of the Quick start – creating your first VM section). Enter a name for the new PvDC; it is a good idea to develop a naming standard for any item in vCenter and vCD. My PvDC will be called PvDC_myLab. Choose the highest virtual hardware version that your vCenter/ESXi supports; if you are running VMware 5.1, it is Version 9.

In the next window, we choose the cluster or resource pool that vCloud should use to create the PvDC. Please note that you need to create a resource pool before starting this wizard, or else it won't show up. For this example, I choose the cluster myCluster. In the next window, we are prompted to choose a storage profile; for the time being, just choose any and continue. Now vCD shows us all the ESXi hosts that belong to the cluster or resource pool we selected. vCD needs to install some extra software on them and will need to connect directly to the ESXi hosts; that's why it asks for the credentials of the ESXi hosts. Finish the wizard. At the end of this wizard, vCD will put the ESXi hosts into maintenance mode to install the extra software package. If you only have one ESXi host and it is also running vCD and vCenter, you will have to manually install the vCD software package (not in the scope of this book). You have now successfully carved off a slice of resources to be used inside your vCloud.

Storage profiles

vSphere storage profiles are defined in vCenter. The idea is to group datastores together by their capabilities or by a user-defined label, for example, by type (NFS, Fibre Channel, SATA, or SSD), by RAID level, or by features that are provided, such as backup or replication. Enterprises use storage profiles such as gold, silver, and bronze, depending on the speed of the disks (SATA or SSD) and on whether a datastore is backed up or replicated for DR purposes. vCloud Director can assign different storage profiles to PvDCs and OvDCs. If an OvDC has multiple storage profiles assigned to it, you can choose a specific storage profile to be the default for that OvDC, and when you create a vApp in the OvDC, you can choose the storage profile on which to store the vApp.

Step 4 – Creating an Org

Now we will create an organization (Org). On the Home panel, click on the fifth link, Create a new organization (refer to the first image in the Step 2 – connecting vCenter to vCD subsection of the Quick start – creating your first VM section). Give the Org a name, for example MyOrg, and the organization's full name. In the next window, choose the first option, Do not use LDAP. Next, we could add a local user, but we won't, so let's just click on Next. Our first Org should be able to share, so click on Allow publishing…, and then click on Next. We keep clicking on Next; the first Org will use the e-mail and notification settings of vCD. Now we need to configure the leases. You can just click on Next or, if you like, set all leases to unlimited. The last window shows all the settings we have selected, and clicking on Finish creates our first organization.

System Org

You have actually created a second Org, as the first Org is called system and was created when we installed vCD. If you look at your home screen, you will see a small tab that says System. The system Org is the mother of all Orgs; it is where other Orgs, PvDCs, OvDCs, and basically all settings are defined in vCloud Director. The system organization can only be accessed by vCloud system administrators.

Step 5 – Creating an OvDC

Now that we have our first Org, we can proceed with assigning resources to it for consumption. To do that, we need to create an Organization Virtual Data Center (OvDC). On the Home screen, we click on the sixth link, Allocate resources to an organization. First we have to select the Org to which we want to assign the resources; as we only have one Org, the choice is easy. Next, we are asked which PvDC we want to take the resources from. Again, we only have one PvDC, so we choose that one. Note that the screen shows you what percentage of the various resources of this PvDC is already committed and which networks are associated with this PvDC. Don't be alarmed that no networks are showing; we haven't configured any yet.

Next we choose the allocation model. We discussed the details of all three models earlier: allocation pool, pay-as-you-go, and reservation pool. Choose Pay-as-you-go and click on Next. Have a look at the settings and click on Next. The next window lets you define which storage profile you would like to use for this OvDC. If you don't have a storage profile configured (as is the case in my lab), just select any and click on the Add button. Enable Thin Provisioning to save on storage; this setting is the same as normal thin provisioning in vSphere. Enable Fast Provisioning; this setting will use vCloud linked clones (explained later).

The next window lets us configure the network resources for the organization. As we haven't configured any networking yet, just click on Next; we will discuss the network options in the next section about networks. We don't want to create an edge gateway, so we leave the setting as it is and click on Next; again, more information about this follows in the next section. Finally, we give this OvDC a name and finish the creation. I normally add a little descriptor to the name to indicate which allocation model I used, for example res, payg, or allo. We have now successfully assigned memory, CPU, and storage to be consumed by our organization.

Linked clones

Linked clones save an enormous amount of storage space. When a VM is created from a template, a full clone of the template is created; when linked clones are used, only the changes to the VM are written to disk. As an example, take a VM with 40 GB of storage capacity (ignore thin provisioning for this example). A full clone would need another 40 GB of disk space; if linked clones are used, only a few MB are used initially. As more changes are made to the cloned VM, it will demand more storage (up to the maximum of 40 GB). If this reminds you of the way snapshots work in vSphere, that's because snapshots are what is actually used in the background. vCloud linked clones are not the same technology as VMware View linked clones; they are a more advanced version of the VMware Lab Manager linked clones.

Step 6 – Creating a vApp

Now that we have resources within our organization, we can create a vApp and the VM inside it. vApps are created inside organizations, so we first need to access the organization that was created in the Step 4 – creating an Org subsection of the Quick start – creating your first VM section. Click on the Manage & Monitor tab and double-click on the Organizations menu item. Now double-click on the organization we created earlier. You will see that a new tab opens with the name of the new Org; you are now on the home screen of this Org.

We will take the easy road here. Click on Build New vApp. Give your first vApp a name (for example, MyFirstVapp), a description, and, if you like, explore the settings of the leases. After you click on Next, we are asked to choose a template. As we currently don't have one, we click on New Virtual Machine in the left-hand side menu of the screen (we will learn about templates in the Top features you need to know about section). A pop-up will appear, and we then select all the settings we would expect when creating a new VM, such as name and hostname, CPU, memory, OS type and version, hard disk, and network. Note that if you are using virtual ESXi servers in your lab, you may be limited to 32-bit VMs only. After clicking on OK, we find ourselves back at the previous screen; however, our VM should now show up in the lower table. Click on Next.

We can now choose in which OvDC and on which storage profile we will deploy the vApp. The choices should be very limited at the moment, so just click on Next. Next, we are asked to choose a network; as we don't have one, we just click on Next. Another window will open; click on Next (normally, we could define fencing here). At last, we see a summary of all the settings, and clicking on Finish creates our first vApp.

After the vApp is created, you can power it on and have a closer look. Click on the play button to power the vApp on, wait a few seconds, and then click on the black screen of the VM. A console pop-up should appear and show you the BIOS of the booting VM; if that's not happening, check your browser security settings. That's it! You have installed vCD, configured your resources, and created your first vApp.

Summary

This article explained how to create a VM using vCloud technology.

Further resources on this subject:

- VMware View 5 Desktop Virtualization
- Supporting hypervisors by OpenNebula
- Tips and Tricks on BackTrack 4
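Although everything in this article was done through the vCD web console, the same objects are also exposed through the vCloud REST API. As a purely optional sanity check (a sketch only, with a hypothetical host name and credentials; adjust the Accept version header to match your vCD release), you could confirm that your new organization exists like this:

```
# Log in: vCD returns an x-vcloud-authorization token in the response headers
curl -k -X POST -u 'administrator@System:yourPassword' \
     -H 'Accept: application/*+xml;version=5.1' \
     https://vcd.example.com/api/sessions

# List organizations, reusing the token from the login response
curl -k -H 'Accept: application/*+xml;version=5.1' \
     -H 'x-vcloud-authorization: <token from the login response>' \
     https://vcd.example.com/api/org
```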


Creating Java EE Applications

Packt
24 Oct 2014
16 min read
In this article by Grant Shipley, author of Learning OpenShift, we are going to learn how to use OpenShift to create and deploy Java-EE-based applications using the JBoss Enterprise Application Platform (EAP) application server. To illustrate the concepts of Java EE, we are going to create an application that displays an interactive map containing all of the major league baseball parks in the United States. We will start by covering some background information on the Java EE framework and then introduce each part of the sample application. We will build the sample application, named mlbparks, by first creating the JBoss EAP container, then adding a database, creating the web services, and lastly creating the responsive map UI.

Evolution of Java EE

I can't think of a single programming language other than Java that has so many fans while at the same time having a large community of developers who profess their hatred towards it. The bad reputation that Java has can largely be attributed to early promises made by the community when the language was first released, and then not being able to fulfill those promises. Developers were told that we would be able to write once and run anywhere, but we quickly found out that this meant we could write once and then debug on every platform. Java was also perceived to consume more memory than required and was accused of being overly verbose by relying heavily on XML configuration files.

Another problem the language had was not being able to focus on and excel at one particular task. We used Java to create thick client applications, applets that could be downloaded via a web browser, embedded applications, web applications, and so on. Having Java available as a tool that could complete most projects was a great thing, but the implementation for each project was often confusing. For example, let's examine the history of GUI development using the Java programming language. When the language was first introduced, it included an API called the Abstract Window Toolkit (AWT) that was essentially a Java wrapper around native UI components supplied by the operating system. When Java 1.2 was released, the AWT implementation was deprecated in favor of the Swing API, which contained GUI elements written in 100 percent Java. By this time, a lot of developers were growing frustrated with the available APIs, and a new toolkit called the Standard Widget Toolkit (SWT) was developed as yet another UI option for Java. SWT was developed at IBM, is the windowing toolkit used by the Eclipse IDE, and is considered by many to be the superior toolkit for creating applications. As you can see, rapid changes in the core functionality of the language, coupled with the refusal of some vendors to ship the JRE as part of the operating system, left a bad taste in most developers' mouths.

Another reason why developers began switching from Java to more attractive programming languages was the implementation of Enterprise JavaBeans (EJB). The first Java EE release occurred in December 1999, and the Java community is just now beginning to recover from the complexity introduced by the language in order to create applications. If you were able to escape creating applications using early EJBs, consider yourself one of the lucky ones, as many of your fellow developers were consumed by implementing large-scale systems using this new technology. It wasn't fun; trust me. I was there and experienced it firsthand.

When developers began abandoning Java EE, they seemed to go in one of two directions. Developers who understood that the Java language itself is quite beautiful and useful adopted the Spring Framework methodology of having enterprise-grade features while sticking with a Plain Old Java Object (POJO) implementation. Other developers were wooed away by languages considered more modern, such as Ruby and the popular Rails framework. While the rise in popularity of both Ruby and Spring was happening, the team behind Java EE continued to improve and innovate, which resulted in the creation of a new implementation that is both easy to use and develop with. I am happy to report that if you haven't taken a look at Java EE in the last few years, now is the time to do so. Working with the language after a long hiatus has been a rewarding and pleasurable experience.

Introducing the sample application

For the remainder of this article, we are going to develop an application called mlbparks that displays a map of the United States with a pin representing the location of each major league baseball stadium. The requirements for the application are as follows:

- A single map that a user can zoom in and out of
- As the user moves the map around, the map must be updated with all baseball stadiums located in the shown area
- The location of the stadiums must be searchable based on map coordinates that are passed to the REST-based API
- The data should be transferred in the JSON format
- The web application must be responsive so that it is displayed correctly regardless of the resolution of the browser
- When a stadium is listed on the map, the user should be able to click on the stadium to view details about the associated team

The end-state application will look like the following screenshot. The user will also be able to zoom in on a specific location by double-clicking on the map or by clicking on the + zoom button in the top-left corner of the application. For example, if a user zooms the map in to the Phoenix, Arizona area of the United States, they will be able to see the information for the Arizona Diamondbacks stadium, as shown in the following screenshot. To view this sample application running live, open your browser and go to http://mlbparks-packt.rhcloud.com.

Now that we have our requirements and know what the end result should look like, let's start creating our application.

Creating a JBoss EAP application

For the sample application that we are going to develop as part of this article, we are going to take advantage of the JBoss EAP application server that is available on the OpenShift platform. The JBoss EAP application server is a fully tested, stable, and supported platform for deploying mission-critical applications. Some developers prefer to use the open source community application server from JBoss called WildFly. Keep in mind when choosing WildFly over EAP that it only comes with community-based support and is a bleeding-edge application server.

To get started with building the mlbparks application, the first thing we need to do is create a gear that contains the cartridge for our JBoss EAP runtime. For this, we are going to use the RHC tools. Open up your terminal application and enter the following command:

```
$ rhc app create mlbparks jbosseap-6
```

Once the previous command is executed, you should see the following output:

```
Application Options
-------------------
Domain:     yourDomainName
Cartridges: jbosseap-6 (addtl. costs may apply)
Gear Size:  default
Scaling:    no

Creating application 'mlbparks' ... done
Waiting for your DNS name to be available ... done
Cloning into 'mlbparks'...

Your application 'mlbparks' is now available.

  URL:        http://mlbparks-yourDomainName.rhcloud.com/
  SSH to:     5311180f500446f54a0003bb@mlbparks-yourDomainName.rhcloud.com
  Git remote: ssh://5311180f500446f54a0003bb@mlbparks-yourDomainName.rhcloud.com/~/git/mlbparks.git/
  Cloned to:  /home/gshipley/code/mlbparks

Run 'rhc show-app mlbparks' for more details about your app.
```

If you have a paid subscription to OpenShift Online, you might want to consider using a medium- or large-size gear to host your Java-EE-based applications. To create this application using a medium-size gear, use the following command:

```
$ rhc app create mlbparks jbosseap-6 -g medium
```

Adding database support to the application

Now that our application gear has been created, the next thing we want to do is embed a database cartridge that will hold the information about the baseball stadiums we want to track. Given that we are going to develop an application that doesn't require referential integrity but provides a REST-based API that returns JSON, it makes sense to use MongoDB as our database. MongoDB is arguably the most popular NoSQL database available today. The company behind the database, MongoDB, offers paid subscriptions and support plans for production deployments. For more information on this popular NoSQL database, visit www.mongodb.com.

Run the following command to embed a database into our existing mlbparks OpenShift gear:

```
$ rhc cartridge add mongodb-2.4 -a mlbparks
```

Once the preceding command is executed and the database has been added to your application, you will see the following information on the screen, which contains the username and password for the database:

```
Adding mongodb-2.4 to application 'mlbparks' ... done

mongodb-2.4 (MongoDB 2.4)
-------------------------
Gears:          Located with jbosseap-6
Connection URL: mongodb://$OPENSHIFT_MONGODB_DB_HOST:$OPENSHIFT_MONGODB_DB_PORT/
Database Name:  mlbparks
Password:       q_6eZ22-fraN
Username:       admin

MongoDB 2.4 database added. Please make note of these credentials:

  Root User:      admin
  Root Password:  yourPassword
  Database Name:  mlbparks
  Connection URL: mongodb://$OPENSHIFT_MONGODB_DB_HOST:$OPENSHIFT_MONGODB_DB_PORT/
```

Importing the MLB stadiums into the database

Now that we have our application gear created and our database added, we need to populate the database with the information about the stadiums that we are going to place on the map. The data is provided as a JSON document and contains the following information:

- The name of the baseball team
- The total payroll for the team
- The location of the stadium, represented with the longitude and latitude
- The name of the stadium
- The name of the city where the stadium is located
- The league the baseball club belongs to (National or American)
- The year the data is relevant for
- All of the players on the roster, including their position and salary

A sample document for the Arizona Diamondbacks looks like the following code:

```
{
  "name": "Diamondbacks",
  "payroll": 89000000,
  "coordinates": [
    -112.066662,
    33.444799
  ],
  "ballpark": "Chase Field",
  "city": "Phoenix",
  "league": "National League",
  "year": "2013",
  "players": [
    {
      "name": "Miguel Montero",
      "position": "Catcher",
      "salary": 10000000
    },
    …………
  ]
}
```

In order to import the preceding data, we are going to use SSH. To get started with the import, SSH into your OpenShift gear for the mlbparks application by issuing the following command at your terminal prompt:

```
$ rhc app ssh mlbparks
```

Once we are connected to the remote gear, we need to download the JSON file and store it in the /tmp directory of our gear. To complete these steps, use the following commands on your remote gear:

```
$ cd /tmp
$ wget https://raw.github.com/gshipley/mlbparks/master/mlbparks.json
```

Wget is a software package that is available on most Linux-based operating systems for retrieving files using HTTP, HTTPS, or FTP. Once the file has finished downloading, take a quick look at its contents using your favorite text editor in order to get familiar with the structure of the document. When you are comfortable with the data that we are going to import into the database, execute the following command on the remote gear to populate MongoDB with the JSON documents:

```
$ mongoimport --jsonArray -d $OPENSHIFT_APP_NAME -c teams --type json --file /tmp/mlbparks.json -h $OPENSHIFT_MONGODB_DB_HOST --port $OPENSHIFT_MONGODB_DB_PORT -u $OPENSHIFT_MONGODB_DB_USERNAME -p $OPENSHIFT_MONGODB_DB_PASSWORD
```

If the command was executed successfully, you should see the following output on the screen:

```
connected to: 127.7.150.130:27017
Fri Feb 28 20:57:24.125 check 9 30
Fri Feb 28 20:57:24.126 imported 30 objects
```

What just happened? To understand this, we need to break the command we issued into smaller chunks, as detailed in the following table:

| Command/argument | Description |
| --- | --- |
| mongoimport | This command is provided by MongoDB to allow users to import data into a database. |
| --jsonArray | This specifies that we are going to import an array of JSON documents. |
| -d $OPENSHIFT_APP_NAME | This specifies the database into which we are going to import the data. We are using a system environment variable to use the database that was created by default when we embedded the database cartridge in our application. |
| -c teams | This defines the collection into which we want to import the data. If the collection does not exist, it will be created. |
| --type json | This specifies the type of file we are going to import. |
| --file /tmp/mlbparks.json | This specifies the full path and name of the file that we are going to import into the database. |
| -h $OPENSHIFT_MONGODB_DB_HOST | This specifies the host of the MongoDB server. |
| --port $OPENSHIFT_MONGODB_DB_PORT | This specifies the port of the MongoDB server. |
| -u $OPENSHIFT_MONGODB_DB_USERNAME | This specifies the username used to authenticate to the database. |
| -p $OPENSHIFT_MONGODB_DB_PASSWORD | This specifies the password used to authenticate to the database. |

To verify that the data was loaded properly, you can use the following command, which will print out the number of documents in the teams collection of the mlbparks database:

```
$ mongo --quiet $OPENSHIFT_MONGODB_DB_HOST:$OPENSHIFT_MONGODB_DB_PORT/$OPENSHIFT_APP_NAME -u $OPENSHIFT_MONGODB_DB_USERNAME -p $OPENSHIFT_MONGODB_DB_PASSWORD --eval "db.teams.count()"
```

The result should be 30.

Lastly, we need to create a 2d index on the teams collection to ensure that we can perform spatial queries on the data. Geospatial queries are what allow us to search for documents that fall within a given location, as provided by the latitude and longitude parameters. To add the 2d index to the teams collection, enter the following command on the remote gear:

```
$ mongo $OPENSHIFT_MONGODB_DB_HOST:$OPENSHIFT_MONGODB_DB_PORT/$OPENSHIFT_APP_NAME --eval 'db.teams.ensureIndex( { coordinates : "2d" } );'
```

Adding database support to our Java application

The next step in creating the mlbparks application is adding the MongoDB driver dependency to our application. OpenShift Online supports the popular Apache Maven build system as the default way of compiling the source code and resolving dependencies. Maven was originally created to simplify the build process by allowing developers to specify the JARs that their application depends on. This alleviates the bad practice of checking JAR files into the source code repository and provides a way to share JARs across several projects. This is accomplished via a pom.xml file that contains configuration items and dependency information for the project.

In order to add the dependency for the MongoDB client to our mlbparks application, we need to modify the pom.xml file that is in the root directory of the Git repository. The Git repository was cloned to our local machine during the application creation step that we performed earlier in this article. Open up your favorite text editor and modify the pom.xml file to include the following lines of code in the <dependencies> block:

```xml
<dependency>
  <groupId>org.mongodb</groupId>
  <artifactId>mongo-java-driver</artifactId>
  <version>2.9.1</version>
</dependency>
```

Once you have added the dependency, commit the changes to your local repository by using the following command:

```
$ git commit -am "added MongoDB dependency"
```

Finally, let's push the change to our Java application to include the MongoDB database drivers using the git push command:

```
$ git push
```

The first time the Maven build system builds the application, it downloads all the dependencies for the application and then caches them. Because of this, the first build will always take a bit longer than any subsequent build.

Creating the database access class

At this point, we have our application created, the MongoDB database embedded, all the information for the baseball stadiums imported, and the dependency for our database driver added to our application. The next step is to do some actual coding by creating a Java class that will act as the interface for connecting to and communicating with the MongoDB database. Create a Java file named DBConnection.java in the mlbparks/src/main/java/org/openshift/mlbparks/mongo directory and add the following source code:

```java
package org.openshift.mlbparks.mongo;

import java.net.UnknownHostException;

import javax.annotation.PostConstruct;
import javax.enterprise.context.ApplicationScoped;
import javax.inject.Named;

import com.mongodb.DB;
import com.mongodb.Mongo;

@Named
@ApplicationScoped
public class DBConnection {

    private DB mongoDB;

    public DBConnection() {
        super();
    }

    @PostConstruct
    public void afterCreate() {
        String mongoHost = System.getenv("OPENSHIFT_MONGODB_DB_HOST");
        String mongoPort = System.getenv("OPENSHIFT_MONGODB_DB_PORT");
        String mongoUser = System.getenv("OPENSHIFT_MONGODB_DB_USERNAME");
        String mongoPassword = System.getenv("OPENSHIFT_MONGODB_DB_PASSWORD");
        String mongoDBName = System.getenv("OPENSHIFT_APP_NAME");
        int port = Integer.decode(mongoPort);

        Mongo mongo = null;
        try {
            mongo = new Mongo(mongoHost, port);
        } catch (UnknownHostException e) {
            System.out.println("Couldn't connect to MongoDB: " + e.getMessage() + " :: " + e.getClass());
        }

        mongoDB = mongo.getDB(mongoDBName);
        if (mongoDB.authenticate(mongoUser, mongoPassword.toCharArray()) == false) {
            System.out.println("Failed to authenticate DB ");
        }
    }

    public DB getDB() {
        return mongoDB;
    }
}
```

The preceding source code, as well as all source code for this article, is available on GitHub at https://github.com/gshipley/mlbparks. The code simply creates an application-scoped bean that is available until the application is shut down. The @ApplicationScoped annotation is used when creating application-wide data or constants that should be available to all users of the application. We chose this scope because we want to maintain a single connection class for the database that is shared among all requests. The next bit of interesting code is the afterCreate method, which authenticates against the database using the system environment variables.

Once you have created the DBConnection.java file and added the preceding source code, add the file to your local repository and commit the changes as follows:

```
$ git add .
$ git commit -am "Adding database connection class"
```

Creating the beans.xml file

The DBConnection class we just created makes use of Contexts and Dependency Injection (CDI), which is part of the Java EE specification, for dependency injection. According to the official specification for CDI, an application that uses CDI must have a file called beans.xml, and the file must be present and located under the WEB-INF directory. Given this requirement, create a file named beans.xml under the mlbparks/src/main/webapp/WEB-INF directory and add the following lines of code:

```xml
<?xml version="1.0"?>
<beans xmlns="http://java.sun.com/xml/ns/javaee"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://jboss.org/schema/cdi/beans_1_0.xsd"/>
```

After you have added the beans.xml file, add and commit it to your local Git repository:

```
$ git add .
$ git commit -am "Adding beans.xml for CDI"
```

Summary

In this article, we learned about the evolution of Java EE, created a JBoss EAP application, and created the database access class.

Further resources on this subject:

- Using OpenShift
- Common performance issues
- The Business Layer (Java EE 7 First Look)
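The article's next step, not shown in this excerpt, is building the REST-based web services on top of this connection class. Purely as a forward-looking sketch, and not code from the article (the class name, path, and parameter names below are hypothetical), a minimal JAX-RS resource performing the geospatial lookup against the teams collection and its 2d index might look something like this:

```java
package org.openshift.mlbparks.rest;

import javax.inject.Inject;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.QueryParam;

import com.mongodb.BasicDBList;
import com.mongodb.BasicDBObject;
import com.mongodb.DBCollection;
import com.mongodb.DBCursor;

import org.openshift.mlbparks.mongo.DBConnection;

// Hypothetical resource: returns raw JSON for all stadiums inside a bounding box,
// using the 2d index created on the coordinates field earlier in the article.
@Path("/parks")
public class ParkResource {

    @Inject
    private DBConnection dbConnection;

    @GET
    @Produces("application/json")
    public String findParksWithin(@QueryParam("lat1") float lat1, @QueryParam("lon1") float lon1,
                                  @QueryParam("lat2") float lat2, @QueryParam("lon2") float lon2) {
        DBCollection teams = dbConnection.getDB().getCollection("teams");

        // Build a $within/$box query: [[lowerLeft], [upperRight]] in (longitude, latitude) order
        BasicDBList lowerLeft = new BasicDBList();
        lowerLeft.add(lon1);
        lowerLeft.add(lat1);
        BasicDBList upperRight = new BasicDBList();
        upperRight.add(lon2);
        upperRight.add(lat2);
        BasicDBList box = new BasicDBList();
        box.add(lowerLeft);
        box.add(upperRight);

        BasicDBObject query = new BasicDBObject("coordinates",
                new BasicDBObject("$within", new BasicDBObject("$box", box)));

        // Concatenate each matching document's JSON representation into an array
        StringBuilder json = new StringBuilder("[");
        DBCursor cursor = teams.find(query);
        while (cursor.hasNext()) {
            json.append(cursor.next().toString());
            if (cursor.hasNext()) {
                json.append(",");
            }
        }
        return json.append("]").toString();
    }
}
```

Such a resource would still need the usual JAX-RS activation (for example, an Application subclass) before it could serve requests on JBoss EAP; it is shown here only to connect the DBConnection bean and the 2d index to the REST API described in the requirements.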

Building, Publishing, and Supporting Your Force.com Application

Packt
22 Sep 2014
39 min read
In this article by Andrew Fawcett, the author of Force.com Enterprise Architecture, we will use the declarative aspects of the platform to quickly build an initial version of an application, which will give you an opportunity to get some hands-on experience with the packaging and installation features that are needed to release applications to subscribers. We will also take a look at the facilities available to publish your application through Salesforce AppExchange (equivalent to the Apple App Store) and finally provide end user support. (For more resources related to this topic, see here.) We will then use this application as a basis for incrementally releasing new versions of the application to build our understanding of Enterprise Application Development. The following topics outline what we will achieve in this article:

- Required organizations
- Introducing the sample application
- Package types and benefits
- Creating your first managed package
- Package dependencies and uploading
- Introduction to AppExchange and creating listings
- Installing and testing your package
- Becoming a Salesforce partner and its benefits
- Licensing
- Supporting your application
- Customer metrics
- Trialforce and Test Drive

Required organizations

Several Salesforce organizations are required to develop, package, and test your application. You can sign up for these organizations at https://developer.salesforce.com/, though in due course, as your relationship with Salesforce becomes more formal, you will have the option of accessing their Partner Portal website to create organizations of different types and capabilities. We will discuss more on this later. It's a good idea to have some kind of naming convention to keep track of the different organizations and logins. Use the following list as a guide and create the following organizations via https://developer.salesforce.com/. As stated earlier, these organizations will be used only for the purposes of learning and exploring:

- myapp@packaging.my.com (Packaging): Though we will perform initial work in this org, it will eventually be reserved solely for assembling and uploading a release.
- myapp@testing.my.com (Testing): In this org, we will install the application and test upgrades. You may want to create several of these in practice, via the Partner Portal website described later in this article.
- myapp@dev.my.com (Developing): Later, we will shift development of the application into this org, leaving the packaging org to focus only on packaging.

You will have to substitute myapp and my.com (perhaps by reusing your company domain name to avoid naming conflicts) with your own values. For example, the author's packaging org login follows the same pattern: andyapp@packaging.andyinthecloud.com. The following are other organization types that you will eventually need in order to manage the publication and licensing of your application:

- Production / CRM Org: Your organization may already be using this org for managing contacts, leads, opportunities, cases, and other CRM objects. Make sure that you have the complete authority to make changes, if any, to this org since this is where you run your business. If you do not have such an org, you can request one via the Partner Program website described later in this article, by requesting (via a case) a CRM ISV org. Even if you choose to not fully adopt Salesforce for this part of your business, such an org is still required when it comes to utilizing the licensing aspects of the platform.
- AppExchange Publishing Org (APO): This org is used to manage your use of AppExchange. We will discuss this a little later in this article. This org is actually the same Salesforce org you designate as your production org, where you conduct your sales and support activities from.
- License Management Org (LMO): Within this organization, you can track who installs your application (as leads), the licenses you grant to them, and for how long. It is recommended that this is the same org as the APO described earlier.
- Trialforce Management Org (TMO): Trialforce is a way to provide orgs with your preconfigured application data for prospective customers to try out your application before buying. It will be discussed later in this article.
- Trialforce Source Org (TSO)

Typically, the LMO and APO can be the same as your primary Salesforce production org, which allows you to track all your leads and future opportunities in the same place. This leads to the rule of APO = LMO = production org, though neither of them should be your actual developer or test orgs. You can work with Salesforce support and your Salesforce account manager to plan and assign these orgs.

Introducing the sample application

For this article, we will use the world of Formula1 motor car racing as the basis for a packaged application that we will build together. Formula1 is, for me, the motor sport equivalent of enterprise application software, due to its scale and complexity. It is also a sport that I follow, both of which helped me when building the examples that we will use. We will refer to this application as FormulaForce, though please keep in mind Salesforce's branding policies when naming your own application, as they prevent the use of the word "Force" in company or product titles. This application will focus on the data collection aspects of the races, drivers, and their many statistics, utilizing platform features to structure, visualize, and process this data in both historic and current contexts. For this article, we will create some initial Custom Objects as detailed in the following list. Do not worry about creating any custom tabs just yet. You can use your preferred approach for creating these initial objects. Ensure that you are logged in to your packaging org.

- Season__c: Name (text)
- Race__c: Name (text); Season__c (Master-Detail to Season__c)
- Driver__c: Name
- Contestant__c: Name (Auto Number, CONTESTANT-{00000000}); Race__c (Master-Detail to Race__c); Driver__c (Lookup to Driver__c)

The following screenshot shows the preceding objects within the Schema Builder tool, available under the Setup menu:

Package types and benefits

A package is a container that holds your application components such as Custom Objects, Apex code, Apex triggers, Visualforce pages, and so on. This makes up your application. While there are other ways to move components between Salesforce orgs, a package provides a container that you can use for your entire application or to deliver optional features by leveraging so-called extension packages. There are two types of packages, managed and unmanaged. Unmanaged packages result in the transfer of components from one org to another; however, the result is as if those components had been originally created in the destination org, meaning that they can be readily modified or even deleted by the administrator of that org. They are also not upgradable and are not particularly ideal from a support perspective.
Moreover, the Apex code that you write is also visible for all to see, so your Intellectual Property is at risk. Unmanaged packages can be used for sharing template components that are intended to be changed by the subscriber. If you are not using GitHub and the GitHub Salesforce Deployment Tool (https://github.com/afawcett/githubsfdeploy), they can also provide a means to share open source libraries to developers. Features and benefits of managed packages Managed packages have the following features that are ideal for distributing your application. The org where your application package is installed is referred to as a subscriber org, since users of this org are subscribing to the services your application provides: Intellectual Property (IP) protection: Users in the subscriber org cannot see your Apex source code, although they can see your Visualforce pages code and static resources. While the Apex code is hidden, JavaScript code is not, so you may want to consider using a minify process to partially obscure such code. The naming scope: Your component names are unique to your package throughout the utilization of a namespace. This means that even if you have object X in your application, and the subscriber has an object of the same name, they remain distinct. You will define a namespace later in this article. The governor scope: Code in your application executes within its own governor limit scope (such as DML and SOQL governors that are subject to passing Salesforce Security Review) and is not affected by other applications or code within the subscriber org. Note that some governors such as the CPU time governor are shared by the whole execution context (discussed in a later article) regardless of the namespace. Upgrades and versioning: Once the subscribers have started using your application, creating data, making configurations, and so on, you will want to provide upgrades and patches with new versions of your application. There are other benefits to managed packages, but these are only accessible after becoming a Salesforce Partner and completing the security review process; these benefits are described later in this article. Salesforce provides ISVForce Guide (otherwise known as the Packaging Guide) in which these topics are discussed in depth; bookmark it now! The following is the URL for ISVForce Guide: http://login.salesforce.com/help/pdfs/en/salesforce_packaging_guide.pdf. Creating your first managed package Packages are created in your packaging org. There can be only one managed package being developed in your packaging org (though additional unmanaged packages are supported, it is not recommended to mix your packaging org with them). You can also install other dependent managed packages and reference their components from your application. The steps to be performed are discussed in the following sections: Setting your package namespace Creating the package and assigning it to the namespace Adding components to the package Setting your package namespace An important decision when creating a managed package is the namespace; this is a prefix applied to all your components (Custom Objects, Visualforce pages, and so on) and is used by developers in subscriber orgs to uniquely qualify between your packaged components and others, even those from other packages. The namespace prefix is an important part of the branding of the application since it is implicitly attached to any Apex code or other components that you include in your package. 
It can be up to 15 characters, though I personally recommend that you keep it less than this, as it becomes hard to remember and leads to frustrating typos if you make it too complicated. I would also avoid underscore characters as well. It is a good idea to have a naming convention if you are likely to create more managed packages in the future (in different packaging orgs). The following is the format of an example naming convention: [company acronym - 1 to 4 characters][package prefix 1 to 4 characters] For example, the ACME Corporation's Road Runner application might be named acmerr. When the namespace has not been set, the Packages page (accessed under the Setup menu under the Create submenu) indicates that only unmanaged packages can be created. Click on the Edit button to begin a small wizard to enter your desired namespace. This can only be done once and must be globally unique (meaning it cannot be set in any other org), much like a website domain name. The following screenshot shows the Packages page: Once you have set the namespace, the preceding page should look like the following screenshot with the only difference being the namespace prefix that you have used. You are now ready to create a managed package and assign it to the namespace. Creating the package and assigning it to the namespace Click on the New button on the Packages page and give your package a name (it can be changed later). Make sure to tick the Managed checkbox as well. Click on Save and return to the Packages page, which should now look like the following: Adding components to the package In the Packages page, click on the link to your package in order to view its details. From this page, you can manage the contents of your package and upload it. Click on the Add button to add the Custom Objects created earlier in this article. Note that you do not need to add any custom fields; these are added automatically. The following screenshot shows broadly what your Package Details page should look like at this stage: When you review the components added to the package, you will see that some components can be removed while other components cannot be removed. This is because the platform implicitly adds some components for you as they are dependencies. As we progress, adding different component types, you will see this list automatically grow in some cases, and in others, we must explicitly add them. Extension packages As the name suggests, extension packages extend or add to the functionality delivered by the existing packages they are based on, though they cannot change the base package contents. They can extend one or more base packages, and you can even have several layers of extension packages, though you may want to keep an eye on how extensively you use this feature, as managing inter-package dependency can get quite complex to manage, especially during development. Extension packages are created in pretty much the same way as the process you've just completed (including requiring their own packaging org), except that the packaging org must also have the dependent packages installed in it. As code and Visualforce pages contained within extension packages make reference to other Custom Objects, fields, Apex code, and Visualforce pages present in base packages. The platform tracks these dependencies and the version of the base package present at the time the reference was made. 
When an extension package is installed, this dependency information ensures that the subscriber org must have the correct version (minimum) of the base packages installed before permitting the installation to complete. You can also manage the dependencies between extension packages and base packages yourself through the Versions tab or XML metadata for applicable components. Package dependencies and uploading Packages can have dependencies on platform features and/or other packages. You can review and manage these dependencies through the usage of the Package detail page and the use of dynamic coding conventions as described here. While some features of Salesforce are common, customers can purchase different editions and features according to their needs. Developer Edition organizations have access to most of these features for free. This means that as you develop your application, it is important to understand when and when not to use those features. By default, when referencing a certain Standard Object, field, or component type, you will generate a prerequisite dependency on your package, which your customers will need to have before they can complete the installation. Some Salesforce features, for example Multi-Currency or Chatter, have either a configuration or, in some cases, a cost impact to your users (different org editions). Carefully consider which features your package is dependent on. Most of the feature dependencies, though not all, are visible via the View Dependencies button on the Package details page (this information is also available on the Upload page, allowing you to make a final check). It is a good practice to add this check into your packaging procedures to ensure that no unwanted dependencies have crept in. Clicking on this button, for the package that we have been building in this article so far, confirms that there are no dependencies. Uploading the release and beta packages Once you have checked your dependencies, click on the Upload button. You will be prompted to give a name and version to your package. The version will be managed for you in subsequent releases. Packages are uploaded in one of two modes (beta or release). We will perform a release upload by selecting the Managed - Released option from the Release Type field, so make sure you are happy with the objects created in the earlier section of this article, as they cannot easily be changed after this point. Once you are happy with the information on the screen, click on the Upload button once again to begin the packaging process. Once the upload process completes, you will see a confirmation page as follows: Packages can be uploaded in one of two states as described here: Release packages can be installed into subscriber production orgs and also provide an upgrade path from previous releases. The downside is that you cannot delete the previously released components and change certain things such as a field's type. Changes to the components that are marked global, such as Apex Code and Visualforce components, are also restricted. While Salesforce is gradually enhancing the platform to provide the ability to modify certain released aspects, you need to be certain that your application release is stable before selecting this option. Beta packages cannot be installed into subscriber production orgs; you can install only into Developer Edition (such as your testing org), sandbox, or Partner Portal created orgs. 
Also, Beta packages cannot be upgraded once installed; hence, this is the reason why Salesforce does not permit their installation into production orgs. The key benefit is in the ability to continue to change new components of the release, to address bugs and features relating to user feedback. The ability to delete previously-published components (uploaded within a release package) is in pilot. It can be enabled through raising a support case with Salesforce Support. Once you have understood the full implications, they will enable it. We have simply added some Custom Objects. So, the upload should complete reasonably quickly. Note that what you're actually uploading to is AppExchange, which will be covered in the following sections. If you want to protect your package, you can provide a password (this can be changed afterwards). The user performing the installation will be prompted for it during the installation process. Optional package dependencies It is possible to make some Salesforce features and/or base package component references (Custom Objects and fields) an optional aspect of your application. There are two approaches to this, depending on the type of the feature. Dynamic Apex and Visualforce For example, the Multi-Currency feature adds a CurrencyIsoCode field to the standard and Custom Objects. If you explicitly reference this field, for example in your Apex or Visualforce pages, you will incur a hard dependency on your package. If you want to avoid this and make it a configuration option (for example) in your application, you can utilize dynamic Apex and Visualforce. Extension packages If you wish to package component types that are only available in subscriber orgs of certain editions, you can choose to include these in extension packages. For example, you may wish to support Professional Edition, which does not support record types. In this case, create an Enterprise Edition extension package for your application's functionality, which leverages the functionality from this edition. Note that you will need multiple testing organizations for each combination of features that you utilize in this way, to effectively test the configuration options or installation options that your application requires. Introduction to AppExchange and listings Salesforce provides a website referred to as AppExchange, which lets prospective customers find, try out, and install applications built using Force.com. Applications listed here can also receive ratings and feedback. You can also list your mobile applications on this site as well. In this section, I will be using an AppExchange package that I already own. The package has already gone through the process to help illustrate the steps that are involved. For this reason, you do not need to perform these steps; they can be revisited at a later phase in your development once you're happy to start promoting your application. Once your package is known to AppExchange, each time you click on the Upload button on your released package (as described previously), you effectively create a private listing. Private listings are not visible to the public until you decide to make them so. It gives you the chance to prepare any relevant marketing details and pricing information while final testing is completed. Note that you can still distribute your package to other Salesforce users or even early beta or pilot customers without having to make your listing public. 
In order to start building a listing, you need to log in to AppExchange using the login details you designated to your AppExchange Publishing Org (APO). Go to www.appexchange.com and click on Login in the banner at the top-right corner. This will present you with the usual Salesforce login screen. Once logged in, you should see something like this: Select the Publishing Console option from the menu, then click on the Create New Listing button and complete the steps shown in the wizard to associate the packaging org with AppExchange; once completed, you should see it listed. It's really important that you consistently log in to AppExchange using your APO user credentials. Salesforce will let you log in with other users. To make it easy to confirm, consider changing the user's display name to something like MyCompany Packaging. Though it is not a requirement to complete the listing steps, unless you want to try out the process yourself a little further to see the type of information required, you can delete any private listings that you created after you complete this app. Installing and testing your package When you uploaded your package earlier in this article, you will receive an e-mail with a link to install the package. If not, review the Versions tab on the Package detail page in your packaging org. Ensure that you're logged out and click on the link. When prompted, log in to your testing org. The installation process will start. A reduced screenshot of the initial installation page is shown in the following screenshot; click on the Continue button and follow the default installation prompts to complete the installation: Package installation covers the following aspects (once the user has entered the package password if one was set): Package overview: The platform provides an overview of the components that will be added or updated (if this is an upgrade) to the user. Note that due to the namespace assigned to your package, these will not overwrite existing components in the subscriber org created by the subscriber. Connected App and Remote Access: If the package contains components that represent connections to the services outside of the Salesforce services, the user is prompted to approve these. Approve Package API Access: If the package contains components that make use of the client API (such as JavaScript code), the user is prompted to confirm and/or configure this. Such components will generally not be called much; features such as JavaScript Remoting are preferred, and they leverage the Apex runtime security configured post install. Security configuration: In this step, you can determine the initial visibility of the components being installed (objects, pages, and so on). Selecting admin only or the ability to select Profiles to be updated. This option predates the introduction of permission sets, which permit post installation configuration. If you package profiles in your application, the user will need to remember to map these to the existing profiles in the subscriber org as per step 2. This is a one-time option, as the profiles in the package are not actually installed, only merged. I recommend that you utilize permission sets to provide security configurations for your application. These are installed and are much more granular in nature. When the installation is complete, navigate to the Installed Packages menu option under the Setup menu. 
Here, you can see confirmation of some of your package details such as namespace and version, as well as any licensing details, which will be discussed later in this article. It is also possible to provide a Configure link for your package, which will be displayed next to the package when installed and listed on the Installed Packages page in the subscriber org. Here, you can provide a Visualforce page to access configuration options and processes, for example. If you have enabled seat-based licensing, there will also be a Manage Licenses link to determine which users in the subscriber org have access to your package components such as tabs, objects, and Visualforce pages. Licensing, in general, is discussed in more detail later in this article.

Automating package installation

It is possible to automate some of the processes using the Salesforce Metadata API and associated tools, such as the Salesforce Migration Toolkit (available from the Tools menu under Setup), which can be run from the popular Apache Ant scripting environment. This can be useful if you want to automate the deployment of your packages to customers or test orgs. Options that require a user response, such as the security configuration, are not covered by automation; however, password-protected managed packages are supported. You can find more details on this by looking up the Installed Package component in the online help for the Salesforce Metadata API at https://www.salesforce.com/us/developer/docs/api_meta/. As an aid to performing this from Ant, a custom Ant task can be found in the sample code related to this article (see /lib/ant-salesforce.xml). The following /build.xml Ant script uninstalls and reinstalls the package. Note that the installation will also upgrade a package if the package is already installed:

<project name="FormulaForce" basedir=".">

    <!-- Downloaded from Salesforce Tools page under Setup -->
    <typedef uri="antlib:com.salesforce"
             resource="com/salesforce/antlib.xml"
             classpath="${basedir}/lib/ant-salesforce.jar"/>

    <!-- Import macros to install/uninstall packages -->
    <import file="${basedir}/lib/ant-salesforce.xml"/>

    <target name="package.installdemo">
        <uninstallPackage namespace="yournamespace"
                          username="${sf.username}" password="${sf.password}"/>
        <installPackage namespace="yournamespace" version="1.0"
                        username="${sf.username}" password="${sf.password}"/>
    </target>

</project>

You can try the preceding example with your testing org by replacing the namespace attribute values with the namespace you entered earlier in this article. Enter the following command, all on one line, from the folder that contains the build.xml file:

ant package.installdemo -Dsf.username=testorgusername -Dsf.password=testorgpasswordtestorgtoken

You can also use the Salesforce Metadata API to list packages installed in an org, for example, if you wanted to determine whether a dependent package needs to be installed or upgraded before sending an installation request. Finally, you can also uninstall packages if you wish.
Once you wish to start listing a package and charging users for it, you will need to arrange billing details for Salesforce to take the various fees involved. Pay careful attention to the Standard Objects used in your package, as this will determine the license type required by your users and the overall cost to them in addition to your charges. Obviously, Salesforce would prefer your application to use as many features of the CRM application as possible, which may also be beneficial to you as a feature of your application, since it's an appealing immediate integration not found on other platforms, such as the ability to instantly integrate with accounts and contacts. If you're planning on using Standard Objects and are in doubt about the costs (as they do vary depending on the type), you can request a conversation with Salesforce to discuss this; this is something to keep in mind in the early stages. Once you have completed the signup process, you will gain access to the Partner Portal (your user will end with @partnerforce.com). You must log in to the specific site as opposed to the standard Salesforce login; currently, the URL is https://www.salesforce.com/partners/login. Starting from July 2014, the http://partners.salesforce.com URL provides access to the Partner Community. Logging in to this service using your production org user credentials is recommended. The following screenshot shows what the current Partner Portal home page looks like. Here you can see some of its key features: This is your primary place to communicate with Salesforce and also to access additional materials and announcements relevant to ISVs, so do keep checking often. You can raise cases and provide additional logins to other users in your organization, such as other developers who may wish to report issues or ask questions. There is also the facility to create test or developer orgs; here, you can choose the appropriate edition (Professional, Group, Enterprise, and others) you want to test against. You can also create Partner Developer Edition orgs from this option as well. These carry additional licenses and limits over the public's so-called Single Developer Editions orgs and are thus recommended for use only once you start using the Partner Portal. Note, however, that these orgs do expire, subject to either continued activity over 6 months or renewing the security review process (described in the following section) each year. Once you click on the create a test org button, there is a link on the page displayed that navigates to a table that describes the benefits, processes, and the expiry rules. Security review and benefits The following features require that a completed package release goes through a Salesforce-driven process known as the Security review, which is initiated via your listing when logged into AppExchange. Unless you plan to give your package away for free, there is a charge involved in putting your package through this process. However, the review is optional. There is nothing stopping you from distributing your package installation URL directly. However, you will not be able to benefit from the ability to list your new application on AppExchange for others to see and review. More importantly, you will also not have access to the following features to help you deploy, license, and support your application. 
The following is a list of the benefits you get once your package has passed the security review: Bypass subscriber org setup limits: Limits such as the number of tabs and Custom Objects are bypassed. This means that if the subscriber org has reached its maximum number of Custom Objects, your package will still install. This feature is sometimes referred to as Aloha. Without this, your package installation may fail. You can determine whether Aloha has been enabled via the Subscriber Overview page that comes with the LMA application, which is discussed in the next section. Licensing: You are able to utilize the Salesforce-provided License Management Application in your LMO (License Management Org as described previously). Subscriber support: With this feature, the users in the subscriber org can enable, for a specific period, a means for you to log in to their org (without exchanging passwords), reproduce issues, and enable much more detailed debug information such as Apex stack traces. In this mode, you can also see custom settings that you have declared as protected in your package, which are useful for enabling additional debug or advanced features. Push upgrade: Using this feature, you can automatically apply upgrades to your subscribers without their manual intervention, either directly by you or on a scheduled basis. You may use this for applying either smaller bug fixes that don't affect the Custom Objects or APIs or deploy full upgrades. The latter requires careful coordination and planning with your subscribers to ensure that changes and new features are adopted properly. Salesforce asks you to perform an automated security scan of your software via a web page (http://security.force.com/security/tools/forcecom/scanner). This service can be quite slow depending on how many scans are in the queue. Another option is to obtain the Eclipse plugin from the actual vendor CheckMarx at http://www.checkmarx.com, which runs the same scan but allows you to control it locally. Finally, for the ultimate confidence as you develop your application, Salesforce can provide a license to integrate it into your Continuous Integration (CI) build system. Keep in mind that if you make any callouts to external services, Salesforce will also most likely ask you and/or the service provider to run a BURP scanner, to check for security flaws. Make sure you plan a reasonable amount of time (at least 2–3 weeks, in my experience) to go through the security review process; it is a must to initially list your package, though if it becomes an issue, you have the option of issuing your package install URL directly to initial customers and early adopters. Licensing Once you have completed the security review, you are able to request through raising support cases via the Partner Portal to have access to the LMA. Once this is provided by Salesforce, use the installation URL to install it like any other package into your LMO. If you have requested a CRM for ISV's org (through a case raised within the Partner Portal), you may find the LMA already installed. The following screenshot shows the main tabs of the License Management Application once installed: In this section, I will use a package that I already own and have already taken through the process to help illustrate the steps that are involved. For this reason, you do not need to perform these steps. After completing the installation, return to AppExchange and log in. Then, locate your listing in Publisher Console under Uploaded Packages. 
Next to your package, there will be a Manage Licenses link. The first time after clicking on this link, you will be asked to connect your package to your LMO org. Once this is done, you will be able to define the license requirements for your package. The following example shows the license for a free package, with an immediately active license for all users in the subscriber org: In most cases, for packages that you intend to charge for, you would select a free trial rather than setting the license default to active immediately. For paid packages, select a license length, unless perhaps it's a one-off charge, and then select the license that does not expire. Finally, if you're providing a trial license, you need to consider carefully the default number of seats (users); users may need to be able to assign themselves different roles in your application to get the full experience. While licensing is expressed at a package level currently, it is very likely that more granular licensing around the modules or features in your package will be provided by Salesforce in the future. This will likely be driven by the Permission Sets feature. As such, keep in mind a functional orientation to your Permission Set design. The Manage Licenses link is shown on the Installed Packages page next to your package if you configure a number of seats against the license. The administrator in the subscriber org can use this page to assign applicable users to your package. The following screenshot shows how your installed package looks to the administrator when the package has licensing enabled: Note that you do not need to keep reapplying the license requirements for each version you upload; the last details you defined will be carried forward to new versions of your package until you change them. Either way, these details can also be completely overridden on the License page of the LMA application as well. You may want to apply a site-wide (org-wide) active license to extensions or add-on packages. This allows you to at least track who has installed such packages even though you don't intend to manage any licenses around them, since you are addressing licensing on the main package. The Licenses tab and managing customer licenses The Licenses tab provides a list of individual license records that are automatically generated when the users install your package into their orgs. Salesforce captures this action and creates the relevant details, including Lead information, and also contains contact details of the organization and person who performed the install, as shown in the following screenshot: From each of these records, you can modify the current license details to extend the expiry period or disable the application completely. If you do this, the package will remain installed with all of its data. However, none of the users will be able to access the objects, Apex code, or pages, not even the administrator. You can also re-enable the license at any time. The following screenshot shows the License Edit section: The Subscribers tab The Subscribers tab lists all your customers or subscribers (it shows their Organization Name from the company profile) that have your packages installed (only those linked via AppExchange). This includes their organization ID, edition (Developer, Enterprise, or others), and also the type of instance (sandbox or production). The Subscriber Overview page When you click on Organization Name from the list in this tab, you are taken to the Subscriber Overview page. 
This page is sometimes known as the Partner Black Tab. This page is packed with useful information such as the contact details (also seen via the Leads tab) and the login access that may have been granted (we will discuss this in more detail in the next section), as well as which of your packages they have installed, its current licensed status, and when it was installed. The following is a screenshot of the Subscriber Overview page: How licensing is enforced in the subscriber org Licensing is enforced in one of two ways, depending on the execution context in which your packaged Custom Objects, fields, and Apex code are being accessed from. The first context is where a user is interacting directly with your objects, fields, tabs, and pages via the user interface or via the Salesforce APIs (Partner and Enterprise). If the user or the organization is not licensed for your package, these will simply be hidden from view, and in the case of the API, return an error. Note that administrators can still see packaged components under the Setup menu. The second context is the type of access made from Apex code, such as an Apex trigger or controller, written by the customers themselves or from within another package. This indirect way of accessing your package components is permitted if the license is site (org) wide or there is at least one user in the organization that is allocated a seat. This condition means that even if the current user has not been assigned a seat (via the Manage Licenses link), they are still accessing your application's objects and code, although indirectly, for example, via a customer-specific utility page or Apex trigger, which automates the creation of some records or defaulting of fields in your package. Your application's Apex triggers (for example, the ones you might add to Standard Objects) will always execute even if the user does not have a seat license, as long as there is just one user seat license assigned in the subscriber org to your package. However, if that license expires, the Apex trigger will no longer be executed by the platform, until the license expiry is extended. Providing support Once your package has completed the security review, additional functionality for supporting your customers is enabled. Specifically, this includes the ability to log in securely (without exchanging passwords) to their environments and debug your application. When logged in this way, you can see everything the user sees in addition to extended Debug Logs that contain the same level of details as they would in a developer org. First, your customer enables access via the Grant Account Login page. This time however, your organization (note that this is the Company Name as defined in the packaging org under Company Profile) will be listed as one of those available in addition to Salesforce Support. The following screenshot shows the Grant Account Login page: Next, you log in to your LMO and navigate to the Subscribers tab as described. Open Subscriber Overview for the customer, and you should now see the link to Login as that user. From this point on, you can follow the steps given to you by your customer and utilize the standard Debug Log and Developer Console tools to capture the debug information you need. The following screenshot shows a user who has been granted login access via your package to their org: This mode of access also permits you to see protected custom settings if you have included any of those in your package. 
If you have not encountered these before, it's well worth researching them as they provide an ideal way to enable and disable debug, diagnostic, or advanced configurations that you don't want your customers to normally see. Customer metrics Salesforce has started to expose information relating to the usage of your package components in the subscriber orgs since the Spring '14 release of the platform. This enables you to report what Custom Objects and Visualforce pages your customers are using and more importantly those they are not. This information is provided by Salesforce and cannot be opted out by the customer. At the time of writing, this facility is in pilot and needs to be enabled by Salesforce Support. Once enabled, the MetricsDataFile object is available in your production org and will receive a data file periodically that contains the metrics records. The Usage Metrics Visualization application can be found by searching on AppExchange and can help with visualizing this information. Trialforce and Test Drive Large enterprise applications often require some consultation with customers to tune and customize to their needs after the initial package installation. If you wish to provide trial versions of your application, Salesforce provides a means to take snapshots of the results of this installation and setup process, including sample data. You can then allow prospects that visit your AppExchange listing or your website to sign up to receive a personalized instance of a Salesforce org based on the snapshot you made. The potential customers can then use this to fully explore the application for a limited duration until they sign up to be a paid customer from the trial version. Such orgs will eventually expire when the Salesforce trial period ends for the org created (typically 14 days). Thus, you should keep this in mind when setting the default expiry on your package licensing. The standard approach is to offer a web form for the prospect to complete in order to obtain the trial. Review the Providing a Free Trial on your Website and Providing a Free Trial on AppExchange sections of the ISVForce Guide for more on this. You can also consider utilizing the Signup Request API, which gives you more control over how the process is started and the ability to monitor it, such that you can create the lead records yourself. You can find out more about this in the Creating Signups using the API section in the ISVForce Guide. Alternatively, if the prospect wishes to try your package in their sandbox environment for example, you can permit them to install the package directly either from AppExchange or from your website. In this case, ensure that you have defined a default expiry on your package license as described earlier. In this scenario, you or the prospect will have to perform the setup steps after installation. Finally, there is a third option called Test Drive, which does not create a new org for the prospect on request, but does require you to set up an org with your application, preconfigure it, and then link it to your listing via AppExchange. Instead of the users completing a signup page, they click on the Test Drive button on your AppExchange listing. This logs them into your test drive org as a read-only user. Because this is a shared org, the user experience and features you can offer to users is limited to those that mainly read information. I recommend that you consider Trialforce over this option unless there is some really compelling reason to use it. 
When defining your listing in AppExchange, the Leads tab can be used to configure the creation of lead records for trials, test drives, and other activities on your listing. Enabling this will result in a form being presented to the user before accessing these features on your listing. If you provide access to trials through signup forms on your website for example, lead information will not be captured. Summary This article has given you a practical overview of the initial package creation process through installing it into another Salesforce organization. While some of the features discussed cannot be fully exercised until you're close to your first release phase, you can now head to development with a good understanding of how early decisions such as references to Standard Objects are critical to your licensing and cost decisions. It is also important to keep in mind that while tools such as Trialforce help automate the setup, this does not apply to installing and configuring your customer environments. Thus, when making choices regarding configurations and defaults in your design, keep in mind the costs to the customer during the implementation cycle. Make sure you plan for the security review process in your release cycle (the free online version has a limited bandwidth) and ideally integrate it into your CI build system (a paid facility) as early as possible, since the tool not only monitors security flaws but also helps report breaches in best practices such as lack of test asserts and SOQL or DML statements in loops. As you revisit the tools covered in this article, be sure to reference the excellent ISVForce Guide at http://www.salesforce.com/us/developer/docs/packagingGuide/index.htm for the latest detailed steps and instructions on how to access, configure, and use these features. Resources for Article: Further resources on this subject: Salesforce CRM Functions [Article] Force.com: Data Management [Article] Configuration in Salesforce CRM [Article]

Getting Started with AWS and Amazon EC2

Packt
20 Jul 2011
4 min read
Amazon Web Services: Migrate your .NET Enterprise Application to the Amazon Cloud

(For more resources on this subject, see here.)

Creating your first AWS account

Well, here you are, ready to log in, create your first AWS account, and get started! AWS lives at http://aws.amazon.com, so browse to this location and you will be greeted with the Amazon Web Services home page. Since November 1st, 2010, Amazon has provided a free usage tier, which is currently displayed prominently on the front page. So, to get started, click on the Sign Up Now button. You will be prompted with the Web Services Sign In screen. Enter the e-mail address that you would like to be associated with your AWS account and select I am a new user. When you have entered your e-mail address, click on the Sign in using our secure server button.

Multi-factor authentication

One of the things worth noting about this sign-in screen is the Learn more comment at the bottom of the page, which mentions multi-factor authentication. Multi-factor authentication can be useful where organizations are required to use a more secure form of remote access. If you would like to secure your AWS account using multi-factor authentication, this is now an option with AWS. To enable this, you will need to continue and create your AWS account. After your account has been created, go to http://aws.amazon.com/mfa/#get_device and follow the instructions for purchasing a device. Once you have the device in hand, you'll need to log in again to enable it. You will then be prompted with the extra dialog when signing in.

Registration and privacy details

Once you have clicked on the Sign in using our secure server button, you will be presented with the registration screen. Enter your full name and the password that you would like to use. Note the link to the Privacy Notice at the bottom of the screen. You should be aware that the privacy notice is the same privacy notice used for the Amazon.com bookstore and website, which essentially means that any information you provide to Amazon through AWS may be correlated to purchases made on the Amazon bookstore and website. Fill out your contact details, agree to the AWS Customer Agreement, and complete the Security Check at the bottom of the form. If you are successful, you will be presented with the following result:

AWS customer agreement

Please note that the AWS Customer Agreement is worth reading, with the full version located at http://aws.amazon.com/agreement. The agreement covers a lot of ground, but a couple of sections that are worth noting are:

Section 10.2 – Your Applications, Data, and Content
This section specifically states that you are the intellectual property and proprietary rights owner of all data and applications running under this account. However, the same section specifically gives Amazon the right to hand over your data to a regulatory body, or to provide your data at the request of a court order or subpoena.

Section 14.2 – Governing Law
This section states that by agreeing to this agreement, you are bound by the laws of the State of Washington, USA, which, read in conjunction with section 10.2, suggests that any actions that fall out of section 10.2 will be initiated from within the State of Washington.
Section 11.2 – Applications and Content This section may concern some users as it warrants that you (as the AWS user) are solely responsible for the content and security of any data and applications running under your account. I advise that you seek advice from your company's legal department prior to creating an account, which will be used for your enterprise.

Autoscaling with the Windows Azure Service Management REST API

Packt
08 Aug 2011
9 min read
  Microsoft Windows Azure Development Cookbook A hosted service may have a predictable pattern such as heavy use during the week and limited use at the weekend. Alternatively, it may have an unpredictable pattern identifiable through various performance characteristics. Windows Azure charges by the hour for each compute instance, so the appropriate number of instances should be deployed at all times. The basic idea is that the number of instances for the various roles in the hosted service is modified to a value appropriate to a schedule or to the performance characteristics of the hosted service. We use the Service Management API to retrieve the service configuration for the hosted service, modify the instance count as appropriate, and then upload the service configuration. In this recipe, we will learn how to use the Windows Azure Service Management REST API to autoscale a hosted service depending on the day of the week. Getting ready We need to create a hosted service. We must create an X.509 certificate and upload it to the Windows Azure Portal twice: once as a management certificate and once as a service certificate to the hosted service. How to do it... We are going to vary the instance count of a web role deployed to the hosted service by using the Windows Azure Service Management REST API to modify the instance count in the service configuration. We are going to use two instances of the web role from Monday through Friday and one instance on Saturday and Sunday, where all days are calculated in UTC. We do this as follows: Create a Windows Azure Project and add an ASP.Net Web Role to it. Add the following using statements to the top of WebRole.cs: using System.Threading; using System.Xml.Linq; using System.Security.Cryptography.X509Certificates; Add the following members to the WebRole class in WebRole.cs: XNamespace wa = "http://schemas.microsoft.com/windowsazure"; XNamespace sc = http://schemas.microsoft.com/ ServiceHosting/2008/10/ServiceConfiguration"; String changeConfigurationFormat = https://management.core. windows.net/{0}/services/hostedservices/{1}/deploymentslots/{2}/ ?comp=config"; String getConfigurationFormat = https://management.core.windows. 
net/{0}/services/hostedservices/{1}/deploymentslots/{2}"; String subscriptionId = RoleEnvironment.GetConfigurationSettingVal ue("SubscriptionId"); String serviceName = RoleEnvironment.GetConfigurationSettingValue ("ServiceName"); String deploymentSlot = RoleEnvironment.GetConfigurationSettingVal ue("DeploymentSlot"); String thumbprint = RoleEnvironment.GetConfigurationSettingValue ("Thumbprint"); String roleName = "WebRole1"; String instanceId = "WebRole1_IN_0"; Add the following method, implementing RoleEntryPoint.Run(), to the WebRole class: WebRole class: public override void Run() { Int32 countMinutes = 0; while (true) { Thread.Sleep(60000); if (++countMinutes == 20) { countMinutes = 0; if ( RoleEnvironment.CurrentRoleInstance.Id == instanceId) { ChangeInstanceCount(); } } } } Add the following method, controlling the instance count change, to the WebRole class: private void ChangeInstanceCount() { XElement configuration = LoadConfiguration(); Int32 requiredInstanceCount = CalculateRequiredInstanceCount(); if (GetInstanceCount(configuration) != requiredInstanceCount) { SetInstanceCount(configuration, requiredInstanceCount); String requestId = SaveConfiguration(configuration); } } Add the following method, calculating the required instance count, to the WebRole class: private Int32 CalculateRequiredInstanceCount() { Int32 instanceCount = 2; DayOfWeek dayOfWeek = DateTime.UtcNow.DayOfWeek; if (dayOfWeek == DayOfWeek.Saturday || dayOfWeek == DayOfWeek.Sunday) { instanceCount = 1; } return instanceCount; } Add the following method, retrieving the instance count from the service configuration, to the WebRole class: private Int32 GetInstanceCount(XElement configuration) { XElement instanceElement = (from s in configuration.Elements(sc + "Role") where s.Attribute("name").Value == roleName select s.Element(sc + "Instances")).First(); Int32 instanceCount = (Int32)Convert.ToInt32( instanceElement.Attribute("count").Value); return instanceCount; } Add the following method, setting the instance count in the service configuration, to the WebRole class: private void SetInstanceCount( XElement configuration, Int32 value) { XElement instanceElement = (from s in configuration.Elements(sc + "Role") where s.Attribute("name").Value == roleName select s.Element(sc + "Instances")).First(); instanceElement.SetAttributeValue("count", value); } Add the following method, creating the payload for the change deployment configuration operation, to the WebRole class: private XDocument CreatePayload(XElement configuration) { String configurationString = configuration.ToString(); String base64Configuration = ConvertToBase64String(configurationString); XElement xConfiguration = new XElement(wa + "Configuration", base64Configuration); XElement xChangeConfiguration = new XElement(wa + "ChangeConfiguration", xConfiguration); XDocument payload = new XDocument(); payload.Add(xChangeConfiguration); payload.Declaration = new XDeclaration("1.0", "UTF-8", "no"); return payload; } Add the following method, loading the service configuration, to the WebRole class: private XElement LoadConfiguration() { String uri = String.Format(getConfigurationFormat, subscriptionId, serviceName, deploymentSlot); ServiceManagementOperation operation = new ServiceManagementOperation(thumbprint); XDocument deployment = operation.Invoke(uri); String base64Configuration = deployment.Element( wa + "Deployment").Element(wa + "Configuration").Value; String stringConfiguration = ConvertFromBase64String(base64Configuration); XElement configuration = 
11. Add the following method, saving the service configuration, to the WebRole class:

private String SaveConfiguration(XElement configuration)
{
    String uri = String.Format(changeConfigurationFormat,
        subscriptionId, serviceName, deploymentSlot);
    XDocument payload = CreatePayload(configuration);
    ServiceManagementOperation operation = new ServiceManagementOperation(thumbprint);
    String requestId = operation.Invoke(uri, payload);
    return requestId;
}

12. Add the following utility methods, converting a String to and from its base-64 encoded version, to the WebRole class:

private String ConvertToBase64String(String value)
{
    Byte[] bytes = System.Text.Encoding.UTF8.GetBytes(value);
    String base64String = Convert.ToBase64String(bytes);
    return base64String;
}

private String ConvertFromBase64String(String base64Value)
{
    Byte[] bytes = Convert.FromBase64String(base64Value);
    String value = System.Text.Encoding.UTF8.GetString(bytes);
    return value;
}

13. Add the ServiceManagementOperation class described in the Getting ready section of the Creating a Windows Azure hosted service recipe to the WebRole1 project.

14. Set the ConfigurationSettings element in the ServiceDefinition.csdef file to the following:

<ConfigurationSettings>
  <Setting name="DeploymentSlot" />
  <Setting name="ServiceName" />
  <Setting name="SubscriptionId" />
  <Setting name="Thumbprint" />
</ConfigurationSettings>

15. Set the ConfigurationSettings element in the ServiceConfiguration.cscfg file to the following:

<ConfigurationSettings>
  <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString"
    value="DefaultEndpointsProtocol=https;AccountName=ACCOUNT_NAME;AccountKey=ACCOUNT_KEY" />
  <Setting name="DeploymentSlot" value="production" />
  <Setting name="ServiceName" value="SERVICE_NAME" />
  <Setting name="SubscriptionId" value="SUBSCRIPTION_ID" />
  <Setting name="Thumbprint" value="THUMBPRINT" />
</ConfigurationSettings>

How it works...

In steps 1 and 2, we set up the WebRole class.

In step 3, we add private members to define the XML namespaces used in processing the response and the String formats used in generating the endpoints for the change deployment configuration and get deployment operations. We then initialize several values from configuration settings in the service configuration file deployed to each instance.

In step 4, we implement the Run() method. Every 20 minutes, the thread this method runs in wakes up and, only in the instance named WebRole1_IN_0, invokes the method controlling the instance count for the web role. This code runs in a single instance to ensure that there is no race condition with multiple instances trying to change the instance count simultaneously.

In step 5, we load the service configuration. If we detect that the instance count should change, we modify the service configuration to have the desired instance count and then save the service configuration. Note that the service configuration used here is downloaded and uploaded using the Service Management API.

Step 6 contains the code where we calculate the needed instance count. In this example, we choose an instance count of 2 from Monday through Friday and 1 on Saturday and Sunday. All days are specified in UTC. This is the step where we should insert the desired scaling algorithm.

In step 7, we retrieve the instance count for the web role from the service configuration.

In step 8, we set the instance count to the desired value in the service configuration.
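As noted in step 6, CalculateRequiredInstanceCount() is the natural place to plug in a different scaling policy. The following variant is not part of the original recipe; it is a minimal sketch that assumes, purely for illustration, that weekday afternoons (14:00 to 18:00 UTC) are a peak period that deserves a third instance:

private Int32 CalculateRequiredInstanceCount()
{
    // Weekend baseline of one instance, weekday baseline of two, as in the recipe.
    DayOfWeek dayOfWeek = DateTime.UtcNow.DayOfWeek;
    Boolean isWeekend =
        dayOfWeek == DayOfWeek.Saturday || dayOfWeek == DayOfWeek.Sunday;
    Int32 instanceCount = isWeekend ? 1 : 2;

    // Assumed peak window: add one instance on weekday afternoons (14:00 to 18:00 UTC).
    Int32 hourUtc = DateTime.UtcNow.Hour;
    if (!isWeekend && hourUtc >= 14 && hourUtc < 18)
    {
        instanceCount++;
    }
    return instanceCount;
}

Whichever policy is used, it should avoid changing the count too frequently, since every change deployment configuration call pushes a configuration update to the running role instances.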
In step 9, we create the payload for the change deployment configuration operation. We create a Configuration element and add a base-64 encoded copy of the service configuration to it. We add the Configuration element to the root ChangeConfiguration element, which we then add to an XML document.

In step 10, we use the ServiceManagementOperation utility class, described in the Creating a Windows Azure hosted service recipe, to invoke the get deployment operation on the Service Management API. The Invoke() method creates an HttpWebRequest, adds the required X.509 certificate, and sends the request to the get deployment endpoint. We load the response into an XML document from which we extract the base-64 encoded service configuration. We then convert this into its XML format and load it into an XElement, which we return.

In step 11, we use the ServiceManagementOperation utility class to invoke the change deployment configuration operation on the Service Management API. The Invoke() method creates an HttpWebRequest, adds the required X.509 certificate and the payload, and then sends the request to the change deployment configuration endpoint. It then parses the response to retrieve the request ID.

In step 12, we add two utility methods to convert to and from a base-64 encoded String.

In step 13, we add the ServiceManagementOperation utility class that we use to invoke operations against the Service Management API.

In steps 14 and 15, we define some configuration settings in the service definition file and specify them in the service configuration file. We provide values for the Windows Azure Storage Service account name and access key. We also provide the subscription ID for the Windows Azure subscription, as well as the service name for the current hosted service. We also need to add the thumbprint for the X.509 certificate we uploaded as a management certificate to the Windows Azure subscription and as a service certificate to the hosted service we are deploying this application into. Note that this thumbprint is the same as that configured in the Certificates section of the ServiceConfiguration.cscfg file. This duplication is necessary because the Certificates section of this file is not accessible to the application code.

Summary

Windows Azure charges by the hour for each compute instance, so the appropriate number of instances should be deployed at all times. Autoscaling with the Windows Azure Service Management REST API, as shown in this article, keeps the number of deployed instances appropriate at all times without manual intervention.

Further resources on this subject:

Managing Azure Hosted Services with the Service Management API [Article]
Using the Windows Azure Platform PowerShell Cmdlets [Article]
Windows Azure Diagnostics: Initializing the Configuration and Using a Configuration File [Article]
Digging into Windows Azure Diagnostics [Article]
Using IntelliTrace to Diagnose Problems with a Hosted Service [Article]
Web API and Client Integration

Packt
10 Oct 2014
9 min read
In this article written by Geoff Webber-Cross, the author of Learning Microsoft Azure, we'll create an on-premise production management client Windows application that allows manufacturing staff to view and update order and batch data, and a web service to access data in the production SQL database and send order updates to the Service Bus topic. (For more resources related to this topic, see here.)

The site's main feature is an ASP.NET Web API 2 HTTP service that allows the clients to read order and batch data. The site will also host a SignalR (http://signalr.net/) hub that allows the client to update order and batch statuses and have the changes broadcast to all the on-premise clients to keep them synchronized in real time. Both the Web API and the SignalR hub will use Azure Active Directory authentication.

We'll cover the following topic in this article:

Building a client application

Building a client application

For the client application, we'll create a WPF client application to display batches and orders and allow us to change their state. We'll use MVVM Light again, like we did for the message simulator we created in the sales solution, to help us implement a neat MVVM pattern. We'll create a number of data services to get data from the API using Azure AD authentication.

Preparing the WPF project

We'll create a WPF application and install NuGet packages for MVVM Light, JSON.NET, and Azure AD authentication in the following procedure (for the Express version of Visual Studio, you'll need Visual Studio Express for Windows Desktop):

Add a WPF project to the solution called ManagementApplication.

In the NuGet Package Manager Console, enter the following command to install MVVM Light:

install-package mvvmlight

Now, enter the following command to install the Microsoft.IdentityModel.Clients.ActiveDirectory package:

install-package Microsoft.IdentityModel.Clients.ActiveDirectory

Now, enter the following command to install JSON.NET:

install-package newtonsoft.json

Enter the following command to install the SignalR client package (note that this is different from the server package):

install-package Microsoft.AspNet.SignalR.Client

Add a project reference to ProductionModel by right-clicking on the References folder and selecting Add Reference, checking ProductionModel under the Solution | Projects tab, and clicking on OK.

Add project references to System.Configuration and System.Net.Http by right-clicking on the References folder and selecting Add Reference, checking System.Configuration and System.Net.Http under the Assemblies | Framework tab, and clicking on OK.

In the project's Settings.settings file, add a string setting called Token to store the user's auth token.
Add the following appSettings block to App.config; I've put comments to help you understand (and remember) what they stand for and added commented-out settings for the Azure API: <appSettings> <!-- AD Tenant --> <add key="ida:Tenant" value="azurebakery.onmicrosoft.com" />    <!-- The target api AD application APP ID (get it from    config tab in portal) --> <!-- Local --> <add key="ida:Audience"    value="https://azurebakery.onmicrosoft.com/ManagementWebApi" /> <!-- Azure --> <!-- <add key="ida:Audience"    value="https://azurebakery.onmicrosoft.com/      WebApp-azurebakeryproduction.azurewebsites.net" /> -->    <!-- The client id of THIS application (get it from    config tab in portal) --> <add key="ida:ClientID" value=    "1a1867d4-9972-45bb-a9b8-486f03ad77e9" />    <!-- Callback URI for OAuth workflow --> <add key="ida:CallbackUri"    value="https://azurebakery.com" />    <!-- The URI of the Web API --> <!-- Local --> <add key="serviceUri" value="https://localhost:44303/" /> <!-- Azure --> <!-- <add key="serviceUri" value="https://azurebakeryproduction.azurewebsites.net/" /> --> </appSettings> Add the MVVM Light ViewModelLocator to Application.Resources in App.xaml: <Application.Resources>    <vm:ViewModelLocator x_Key="Locator"      d_IsDataSource="True"                      DataContext="{Binding Source={StaticResource          Locator}, Path=Main}"        Title="Production Management Application"          Height="350" Width="525"> Creating an authentication base class Since the Web API and SignalR hubs use Azure AD authentication, we'll create services to interact with both and create a common base class to ensure that all requests are authenticated. This class uses the AuthenticationContext.AquireToken method to launch a built-in login dialog that handles the OAuth2 workflow and returns an authentication token on successful login: using Microsoft.IdentityModel.Clients.ActiveDirectory; using System; using System.Configuration; using System.Diagnostics; using System.Net;   namespace AzureBakery.Production.ManagementApplication.Services {    public abstract class AzureAdAuthBase    {        protected AuthenticationResult Token = null;          protected readonly string ServiceUri = null;          protected AzureAdAuthBase()        {            this.ServiceUri =              ConfigurationManager.AppSettings["serviceUri"]; #if DEBUG            // This will accept temp SSL certificates            ServicePointManager.ServerCertificateValidationCallback += (se, cert, chain, sslerror) => true; #endif        }          protected bool Login()        {            // Our AD Tenant domain name            var tenantId =              ConfigurationManager.AppSettings["ida:Tenant"];              // Web API resource ID (The resource we want to use)            var resourceId =              ConfigurationManager.AppSettings["ida:Audience"];              // Client App CLIENT ID (The ID of the AD app for this            client application)            var clientId =              ConfigurationManager.AppSettings["ida:ClientID"];              // Callback URI            var callback = new              Uri(ConfigurationManager.AppSettings["ida:CallbackUri"]);              var authContext = new              AuthenticationContext(string.Format("https://login.windows.net/{0}", tenantId));              if(this.Token == null)            {                // See if we have a cached token               var token = Properties.Settings.Default.Token;                if (!string.IsNullOrWhiteSpace(token))                   
 this.Token = AuthenticationResult.Deserialize(token);            }                       if (this.Token == null)             {                try                {                    // Acquire fresh token - this will get user to                    login                                  this.Token =                      authContext.AcquireToken(resourceId,                         clientId, callback);                }                catch(Exception ex)                {                    Debug.WriteLine(ex.ToString());                      return false;                }            }            else if(this.Token.ExpiresOn < DateTime.UtcNow)            {                // Refresh existing token this will not require                login                this.Token =                  authContext.AcquireTokenByRefreshToken(this.Token.RefreshToken,                   clientId);            }                      if (this.Token != null && this.Token.ExpiresOn >              DateTime.UtcNow)            {                // Store token                Properties.Settings.Default.Token =                  this.Token.Serialize(); // This should be                    encrypted                Properties.Settings.Default.Save();                  return true;            }              // Clear token            this.Token = null;              Properties.Settings.Default.Token = null;            Properties.Settings.Default.Save();              return false;        }    } } The token is stored in user settings and refreshed if necessary, so the users don't have to log in to the application every time they use it. The Login method can be called by derived service classes every time a service is called to check whether the user is logged in and whether there is a valid token to use. Creating a data service We'll create a DataService class that derives from the AzureAdAuthBase class we just created and gets data from the Web API service using AD authentication. 
First, we'll create a generic helper method that calls an API GET action using the HttpClient class with the authentication token added to the Authorization header, and deserializes the returned JSON object into a .NET-typed object T: private async Task<T> GetData<T>(string action) {    if (!base.Login())        return default(T);      // Call Web API    var authHeader = this.Token.CreateAuthorizationHeader();    var client = new HttpClient();    var uri = string.Format("{0}{1}", this.ServiceUri,      string.Format("api/{0}", action));    var request = new HttpRequestMessage(HttpMethod.Get, uri);    request.Headers.TryAddWithoutValidation("Authorization",      authHeader);      // Get response    var response = await client.SendAsync(request);    var responseString = await response.Content.ReadAsStringAsync();      // Deserialize JSON    var data = await Task.Factory.StartNew(() =>      JsonConvert.DeserializeObject<T>(responseString));      return data; } Once we have this, we can quickly create methods for getting order and batch data like this:   public async Task<IEnumerable<Order>> GetOrders() {    return await this.GetData<IEnumerable<Order>>("orders"); }   public async Task<IEnumerable<Batch>> GetBatches() {    return await this.GetData<IEnumerable<Batch>>("batches"); } This service implements an IDataService interface and is registered in the ViewModelLocator class, ready to be injected into our view models like this: SimpleIoc.Default.Register<IDataService, DataService>(); Creating a SignalR service We'll create another service derived from the AzureAdAuthBase class, which is called ManagementService, and which sends updated orders to the SignalR hub and receives updates from the hub originating from other clients to keep the UI updated in real time. First, we'll create a Register method, which creates a hub proxy using our authorization token from the base class, registers for updates from the hub, and starts the connection: private IHubProxy _proxy = null;   public event EventHandler<Order> OrderUpdated; public event EventHandler<Batch> BatchUpdated;   public ManagementService() {   }   public async Task Register() {    // Login using AD OAuth    if (!this.Login())        return;      // Get header from auth token    var authHeader = this.Token.CreateAuthorizationHeader();      // Create hub proxy and add auth token    var cnString = string.Format("{0}signalr", base.ServiceUri);    var hubConnection = new HubConnection(cnString, useDefaultUrl:      false);    this._proxy = hubConnection.CreateHubProxy("managementHub");    hubConnection.Headers.Add("Authorization", authHeader);      // Register for order updates    this._proxy.On<Order>("updateOrder", order =>    {        this.OnOrderUpdated(order);    });        // Register for batch updates    this._proxy.On<Batch>("updateBatch", batch =>    {        this.OnBatchUpdated(batch);    });        // Start hub connection    await hubConnection.Start(); } The OnOrderUpdated and OnBatchUpdated methods call events to notify about updates. 
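The OnOrderUpdated and OnBatchUpdated helpers themselves are not shown here; a minimal sketch, assuming nothing beyond the two events declared earlier in the class, could look like this:

// Raise the events so that view models subscribed to the service are notified.
private void OnOrderUpdated(Order order)
{
    var handler = this.OrderUpdated;
    if (handler != null)
    {
        handler(this, order);
    }
}

private void OnBatchUpdated(Batch batch)
{
    var handler = this.BatchUpdated;
    if (handler != null)
    {
        handler(this, batch);
    }
}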
Now, add two methods that call the hub methods we created in the website using the IHubProxy.Invoke<T> method: public async Task<bool> UpdateOrder(Order order) {    // Invoke updateOrder method on hub    await this._proxy.Invoke<Order>("updateOrder",      order).ContinueWith(task =>    {        return !task.IsFaulted;    });      return false; }   public async Task<bool> UpdateBatch(Batch batch) {    // Invoke updateBatch method on hub    await this._proxy.Invoke<Batch>("updateBatch",      batch).ContinueWith(task =>    {        return !task.IsFaulted;    });      return false; } This service implements an IManagementService interface and is registered in the ViewModelLocator class, ready to be injected into our view models like this: SimpleIoc.Default.Register<IManagementService, ManagementService>(); Testing the application To test the application locally, we need to start the Web API project and the WPF client application at the same time. So, under the Startup Project section in the Solution Properties dialog, check Multiple startup projects, select the two applications, and click on OK: Once running, we can easily debug both applications simultaneously. To test the application with the service running in the cloud, we need to deploy the service to the cloud, and then change the settings in the client app.config file (remember we put the local and Azure settings in the config with the Azure settings commented-out, so swap them around). To debug the client against the Azure service, make sure that only the client application is running (select Single startup project from the Solution Properties dialog). Summary We learned how to use a Web API to enable the production management Windows client application to access data from our production database and a SignalR hub to handle order and batch changes, keeping all clients updated and messaging the Service Bus topic. Resources for Article: Further resources on this subject: Using the Windows Azure Platform PowerShell Cmdlets [Article] Windows Azure Mobile Services - Implementing Push Notifications using [Article] Using Azure BizTalk Features [Article]

Photo Stream with iCloud

Packt
12 Nov 2013
5 min read
(For more resources related to this topic, see here.) Photo Stream The way that Photo Stream works is really simple. First, you take some pictures using your iOS device. Then, these pictures are automatically uploaded to the iCloud server. Other devices that have Photo Stream enabled receive these pictures immediately. Photo Stream only stores pictures that are taken using the Camera app on iOS devices. Of course, your devices need to be connected to the Internet via cellular data or Wi-Fi. For those who use Wi-Fi-only iOS devices, such as iPod touch and iPad with Wi-Fi, pictures are uploaded later when it's connected to Internet. Photo Stream lets you upload unlimited pictures and it won't count on iCloud storage, but these are stored in Photo Stream for 30 days. After that, your pictures are automatically deleted. Make sure that you have stored all pictures on your Mac or PC so that you don't lose any. All pictures are uploaded in full resolution, but when they are downloaded to iOS devices, the resolution is reduced and optimized for the devices. Setting up Photo Stream All Mac computers with OS X Lion or higher, PCs with an iCloud Control Panel, and iOS devices with iOS 5 or higher are able to store and receive pictures from Photo Stream. To use Photo Stream on your device, you need to activate it on each device. So, you can also decide on which devices you want to store and receive pictures using Photo Stream. Photo Stream on iOS It's hard for me (and maybe for you too), if I don't enable Photo Stream on my iPhone because Photo Stream is the easiest way to share pictures and screenshots across iOS devices. You don't need to share them by e-mail or using other apps. Just let iCloud stream them to all devices. To enable Photo Stream on iOS, navigate to Settings | Photos & Camera. Set the My Photo Stream toggle to the ON position, as shown in the preceding screenshot, and that's all! Photo Stream is now ready to serve you. You can also enable Photo Stream by navigating to Settings | iCloud | Photo Stream and setting the My Photo Stream toggle to the ON position. To view all pictures stored on Photo Stream, you need to open the Photos app, tap on the Albums tab, and then tap on the My Photo Stream tab at the bottom of the screen. You can browse and view the pictures just like browsing pictures on Albums or Events, as shown in the following screenshot: You can share pictures from Photo Stream to Mail, Message, Twitter, or Facebook; many other actions are available as well. You can also delete pictures individually from Photo Stream. Tap on Select and tap on the pictures that you want to delete. Then, tap on the Delete icon to execute the process. Saving pictures from Photo Stream to Camera Roll is really easy. Just tap on Select and choose the pictures that you want to save. If you're finished, tap on the Share icon and choose the Save to Camera Roll icon at the bottom of the screen to store all chosen pictures to Camera Roll. You must choose whether to keep the pictures in an existing album or in a new album. Photo Stream on Mac Photo Stream on Mac is really great. It's integrated with iPhoto; one of the applications in iLife suite for managing photos and videos. You will have it installed by default when you purchase a new Mac, or you can purchase it by yourself via the Mac App Store. With iPhoto, you can add and delete pictures in Photo Stream. 
One big advantage of having Photo Stream enabled on your Mac is that you don't need to plug in your iOS device to your Mac just for copying pictures taken with it. To enable Photo Stream on Mac, navigate to System Preferences | iCloud and check the Photos checkbox to enable it. You can manage and view your Photo Stream on iPhoto or Aperture. You can also use pictures from Photo Stream on iMovie as part of Media Browser. Viewing Photo Stream on iPhoto After you've enabled Photo Stream from the iCloud preference pane, launch iPhoto on your Mac. You'll see the iCloud icon on the left sidebar. Click on it and iPhoto shows a welcome screen, as shown in the following screenshot. Click on Turn On iCloud to enable Photo Stream, and iPhoto will download all pictures stored on Photo Stream automatically. It usually takes longer to download Photo Stream pictures for the first time: Photo Stream on iPhoto has different behaviors compared to Photo Stream on iOS. All Photo Stream pictures, which have been downloaded to iPhoto, are automatically stored in your iPhoto Library. It's not only stored but also organized with iPhoto as Event. So, you'll see something like "Jan 2013 Photo Stream", which contains Photo Stream pictures from January 2013. Pictures from Photo Stream behave like other pictures on iPhoto. These are available on Media Browser, which is connected with other apps on your Mac. Everything is organized so there is no more dragging and dropping from your mobile device to your Mac. By default, every new picture added to your iPhoto Library is uploaded to Photo Stream. You can disable it by navigating to iPhoto | Preferences| iCloud and unchecking Automatic Upload, as shown in the following screenshot: Summary This article shows how the Photo Stream app is used with iCloud. The article included the setting up of the Photo Stream app and using it with the different apple platforms like iOS and Mac. It also shows how the Photo Stream app is used with iPhoto. Resources for Article: Further resources on this subject: Using OpenShift [Article] Mobile and Social - the Threats You Should Know About [Article] iPhone: Customizing our Icon, Navigation Bar, and Tab Bar [Article]

Platform as a Service and CloudBees

Packt
19 Dec 2013
10 min read
(For more resources related to this topic, see here.)

Platform as a Service (PaaS) is a crossover between IaaS and SaaS. This is a fuzzy definition, but it captures well both the existing actors in this industry and the possible confusion between them. A general presentation of PaaS uses a pyramid. Depending on what the graphics try to demonstrate, the pyramid can be drawn upside down, as shown in the following diagram:

Cloud pyramids

The pyramid on the left-hand side shows XaaS platforms based on the target users' profiles. It demonstrates that IaaS is the basis for all Cloud services. It provides the required flexibility for PaaS to support applications that are exposed as SaaS to the end users. Some SaaS actually don't use a PaaS and directly rely on IaaS, but that doesn't really matter here. The pyramid on the right-hand side represents the providers, and the three levels suggest the number of providers in each category. IaaS only makes sense for highly concentrated, large-scale providers. PaaS can have more actors, probably focused on some ecosystem, but the need is to have a neutral and standard platform that is actually attractive for developers. SaaS is about all the possible applications running in the Cloud. The top-level shape should then be far larger than what the graphic shows.

So, which platform?

With the previous definition, you have only a faint idea of what PaaS is: more than IaaS and less than SaaS. What's missing is a definition of what the platform is about. A platform is a standardization of the runtime that a developer expects in order to do his/her job. This depends on the software ecosystem you're considering. For a Java EE developer, a platform means having at least a servlet container, managing DataSources to access the database, and having a few comparable resources wrapped as standard Java EE APIs. A Play! framework developer will consider this overweight and only ask for a JVM with WebSocket support. A PHP developer will expect a Linux/Apache/MySQL/PHP (LAMP) stack, similar to the one he/she has been using for years with a traditional server hosting service.

So, depending on the development ecosystem you're considering, platforms don't have exactly the same meaning, but they all share a common principle. A platform is the common denominator for a software language ecosystem, where the application is all that a specific developer will write or choose on their own. Java EE developers will ask for a container, and Ruby developers will ask for an RVM environment. What they run on top is their own choice. With this definition, you understand that a platform is about the standardization of the runtime for a software ecosystem. Maybe some of you have patched OpenJDK to enable some magic features in the JVM (really?), but most of us just use the standard Oracle Java distribution. Such a standardization makes it possible to share resources and engineering skills on a large scale, to reduce cost, and to provide a reliable runtime.

Cloud and clustering

Another consideration for a platform is clustering. Cloud is based on slicing resources into small virtual elements and letting the users select as many as they need. In most cases, this requires the application to support a clustering mode, as using more resources will require you to scale out on multiple hosts. Clustering has never been a trivial thing, and many developers aren't familiar with the related constraints.
The platform can help them by providing specialized services to distribute the load around the cluster's nodes. Some PaaS, such as CloudBees or Google App Engine, provide such features, while some don't. This is the major difference between PaaS offers: some are IaaS-like preinstalled middleware services, while some offer a highly integrated platform.

A typical issue faced is that of state management. Java EE developers rely on HttpSession to store user data and retrieve it on subsequent interactions. Modern frameworks tend to be stateless, but the state needs to be managed anyway. PaaS has to provide options to developers, so that they can choose the best strategy to match their own business requirements. This is a typical clustering issue that is well addressed by PaaS, because the technical solutions (sticky sessions, session replication, distributed storage engines, and so on) have been implemented once, with all the required skills to do it right, and can be used by all platform users. Thanks to a PaaS, you don't need to be a clustering guru. This doesn't mean that it will magically let your legacy application scale out, but it gives you adequate tools to design the application for scalability.

Private versus public Clouds

Many companies are interested in the Cloud, thanks to the press publishing every product announcement as the new revolution, and would like to benefit from it, but as a private resource. If you go back to the comparison in the Preface with electricity production, this may make sense if you're well established. That Amazon or Google could run private power plants to supply their giant datacenters might make sense; anyway, it doesn't seem that they do, except as backup capacity. For most companies, this would be a surprising choice.

The main reason is that the principle of the Cloud relies on the last letter of XaaS (S), which stands for Service. You can install an OpenStack or VMware farm in your data center, but then you won't have an IaaS. You will have some virtualization and flexibility that is probably far better than traditional dedicated hardware, but you will miss the major change. You will still have to hire operators to administer the servers and software stack. You will even have a more complex software stack (search for an OpenStack administrator and you'll understand).

Using the Cloud makes sense because there are thousands of users all around the world sharing the same lower-level resources, and a centralized, highly specialized team to manage them all. Building your own, private PaaS is yet another challenge. This is not a simple middleware stack. This is not about providing virtual machine images with a preinstalled Tomcat server. What about maintenance, application scalability, deployment APIs, clustering, backup, data replication, high availability, monitoring, and support?

Support is a major added value of cloud services (and I'm not just saying this because I'm a support engineer): when something fails, you need someone to help. You can't just wait on the promise of a patch provided by the community. The person running your application needs to have significant knowledge of the platform. That's one reason that CloudBees is focusing on Java first, as this is the ecosystem and environment we know best (even though we have some Erlang and Ruby engineers whose preferred game is to troll Java as a displeasing language). With a private Cloud, you can probably provide level-one support with an internal team, but you can't handle all the issues.
Nor can an internal team match the resource concentration that lets a dedicated provider build an impressive knowledge base. All those topics are ignored in most cases, as people only focus on the app:deploy automation, as opposed to the old-style deployments to dedicated hardware. If this is what you're looking for, you should know that Maven has been able to do this for years on all the Java EE containers using Cargo; see http://cargo.codehaus.org. Cloud isn't just about abstracting the runtime behind an API; it's about changing the way in which developers manage and access the runtime so that it becomes a service they can consume without any need to worry about what's happening behind the scenes.

Security

The reason that companies claim to prefer a private cloud solution is security. Amazon datacenters are far more secure than any private datacenter, due to both strong security policy and anonymous user data. Security breaches are rarely about cracking encryption algorithms, as in Hollywood movies, but about social attacks, against which organizations are far more fragile. Few companies take care of administrative, financial, familial, or personal safety. Thanks to the combination of VPN, HTTPS, fixed IPs, and firewall filters, you can safely deploy an application on the Amazon Cloud as an extension to your own network, to access data from your legacy Oracle or SAP mainframe hosted in your datacenter. As any mobile application demonstrates, your data already goes out of your private network. There's no concrete reason why your backend application can't be hosted outside your walls.

CloudBees – embrace the development stack

CloudBees PaaS has something special in its DNA that you won't find in other PaaS offerings: focusing on the Java ecosystem first, even with polyglot support, CloudBees understands well the Java ecosystem's complexity and its underlying practices.

Heroku was one of the first successful PaaS, focusing on the Ruby runtime. Deployment of a Ruby application is just about sending source code to the platform using the following command:

git push heroku master

Ruby is a pleasant ecosystem because it has none of the long debates on build and provisioning tools that we know in the Java world: Gemfile and Rake, period. In the Java ecosystem, there is a need to generate and compile the source code, and then sometimes post-process the classes, hence a large set of build tools is required. There's also a need to provision the runtime with dozens of dependencies, so dependency management tools, inter-project relations, and so on are required. With Agile development practices, automated testing has introduced a huge set of test frameworks that developers want to integrate into the deployment process. The Java platform is not just about hosting a JVM or a servlet container; it's about managing Ant, Maven, SBT, or Gradle builds, as well as Grails-, Play-, Clojure-, and Scala-specific tooling. It's about hosting dependency repositories. It's about handling complex build processes to include multiple levels of testing and code analysis.

The CloudBees platform has two major components:

RUN@cloud is a PaaS, as described earlier, to host applications and provide high-level runtime services
DEV@cloud is a continuous integration and deployment SaaS based on Jenkins

Jenkins is not the subject of this article, but it is the de facto standard for continuous integration, in the Java ecosystem and beyond. With a large set of plugins, it can be extended to support a large set of tools, processes, and views about your project.
The CloudBees team includes major Jenkins committers (including myself #selfpromotion), and so it has a deep knowledge on Jenkins ecosystem and is best placed to offer it as a Cloud service. We also can help you to diagnose your project workflow by applying the best continuous integration and deployment practices. This also helps you to get more efficient and focused results on your actual business development. The following screenshot displays the continuous Cloud delivery concept in CloudBees: With some CloudBees-specific plugins to help, DEV@cloud Jenkins creates a smooth code-build-deploy pipeline, comparable to Heroku's Git push, but with full control over the intermediary process to convert your source code to a runnable application. This is such a significant component to build a full stack for Java developers that CloudBees is the official provider for the continuous integration service for Google App Engine (http://googleappengine.blogspot.fr/2012/10/jenkins-meet-google-app-engine.html), Cloud Foundry (http://blog.cloudfoundry.com/2013/02/28/continuous-integration-to-cloud-foundry-com-using-jenkins-in-the-cloud/), and Amazon. Summary This article introduced the Cloud principles and benefits, and compared CloudBees to its competitors. Resources for Article: Further resources on this subject: Framework Comparison: Backbase AJAX framework Vs Other Similar Framework (Part 2) [Article] Integrating Spring Framework with Hibernate ORM Framework: Part 2 [Article] Working with Zend Framework 2.0 [Article]

Managing Azure Hosted Services with the Service Management API

Packt
08 Aug 2011
11 min read
  Microsoft Windows Azure Development Cookbook Over 80 advanced recipes for developing scalable services with the Windows Azure platform         Read more about this book       (For more resources on this subject, see here.) Introduction The Windows Azure Portal provides a convenient and easy-to-use way of managing the hosted services and storage account in a Windows Azure subscription, as well as any deployments into these hosted services. The Windows Azure Service Management REST API provides a programmatic way of managing the hosted services and storage accounts in a Windows Azure subscription, as well as any deployments into these hosted services. These techniques are complementary and, indeed, it is possible to use the Service Management API to develop an application that provides nearly all the features of the Windows Azure Portal. The Service Management API provides almost complete control over the hosted services and storage accounts contained in a Windows Azure subscription. All operations using this API must be authenticated using an X.509 management certificate. We see how to do this in the Authenticating against the Windows Azure Service Management REST API recipe in Controlling Access in the Windows Azure Platform. In Windows Azure, a hosted service is an administrative and security boundary for an application. A hosted service specifies a name for the application, as well as specifying a Windows Azure datacenter or affinity group into which the application is deployed. In the Creating a Windows Azure hosted service recipe, we see how to use the Service Management API to create a hosted service. A hosted service has no features or functions until an application is deployed into it. An application is deployed by specifying a deployment slot, either production or staging, and by providing the application package containing the code, as well as the service configuration file used to configure the application. We see how to do this using the Service Management API in the Deploying an application into a hosted service recipe. Once an application has been deployed, it probably has to be upgraded occasionally. This requires the provision of a new application package and service configuration file. We see how to do this using the Service Management API in the Upgrading an application deployed to a hosted service recipe. A hosted service has various properties defining it as do the applications deployed into it. There could, after all, be separate applications deployed into each of the production and staging slots. In the Retrieving the properties of a hosted service recipe, we see how to use the Service Management API to get these properties. An application deployed as a hosted service in Windows Azure can use the Service Management API to modify itself while running. Specifically, an application can autoscale by varying the number of role instances to match anticipated demand. We see how to do this in the Autoscaling with the Windows Azure Service Management REST API recipe. We can use the Service Management API to develop our own management applications. Alternatively, we can use one of the PowerShell cmdlets libraries that have already been developed using the API. Both the Windows Azure team and Cerebrata have developed such libraries. We see how to use them in the Using the Windows Azure Platform PowerShell Cmdlets recipe. Creating a Windows Azure hosted service A hosted service is the administrative and security boundary for an application deployed to Windows Azure. 
The hosted service specifies the service name, a label, and either the Windows Azure datacenter location or the affinity group into which the application is to be deployed. These cannot be changed once the hosted service is created. The service name is the subdomain under cloudapp.net used by the application, and the label is a humanreadable name used to identify the hosted service on the Windows Azure Portal. The Windows Azure Service Management REST API exposes a create hosted service operation. The REST endpoint for the create hosted service operation specifies the subscription ID under which the hosted service is to be created. The request requires a payload comprising an XML document containing the properties needed to define the hosted service, as well as various optional properties. The service name provided must be unique across all hosted services in Windows Azure, so there is a possibility that a valid create hosted service operation will fail with a 409 Conflict error if the provided service name is already in use. As the create hosted service operation is asynchronous, the response contains a request ID that can be passed into a get operation status operation to check the current status of the operation. In this recipe, we will learn how to use the Service Management API to create a Windows Azure hosted service. Getting ready The recipes in this article use the ServiceManagementOperation utility class to invoke operations against the Windows Azure Service Management REST API. We implement this class as follows: Add a class named ServiceManagementOperation to the project. Add the following assembly reference to the project: System.Xml.Linq.dll Add the following using statements to the top of the class file: using System.Security.Cryptography.X509Certificates;using System.Net;using System.Xml.Linq;using System.IO; Add the following private members to the class: String thumbprint;String versionId = "2011-02-25"; Add the following constructor to the class: public ServiceManagementOperation(String thumbprint){ this.thumbprint = thumbprint;} Add the following method, retrieving an X.509 certificate from the certificate store, to the class: private X509Certificate2 GetX509Certificate2( String thumbprint){ X509Certificate2 x509Certificate2 = null; X509Store store = new X509Store("My", StoreLocation.LocalMachine); try { store.Open(OpenFlags.ReadOnly); X509Certificate2Collection x509Certificate2Collection = store.Certificates.Find( X509FindType.FindByThumbprint, thumbprint, false); x509Certificate2 = x509Certificate2Collection[0]; } finally { store.Close(); } return x509Certificate2;} Add the following method, creating an HttpWebRequest, to the class: private HttpWebRequest CreateHttpWebRequest( Uri uri, String httpWebRequestMethod){ X509Certificate2 x509Certificate2 = GetX509Certificate2(thumbprint); HttpWebRequest httpWebRequest = (HttpWebRequest)HttpWebRequest.Create(uri); httpWebRequest.Method = httpWebRequestMethod; httpWebRequest.Headers.Add("x-ms-version", versionId); httpWebRequest.ClientCertificates.Add(x509Certificate2); httpWebRequest.ContentType = "application/xml"; return httpWebRequest;} Add the following method, invoking a GET operation on the Service Management API, to the class: public XDocument Invoke(String uri){ XDocument responsePayload; Uri operationUri = new Uri(uri); HttpWebRequest httpWebRequest = CreateHttpWebRequest(operationUri, "GET"); using (HttpWebResponse response = (HttpWebResponse)httpWebRequest.GetResponse()) { Stream responseStream = 
response.GetResponseStream(); responsePayload = XDocument.Load(responseStream); } return responsePayload;} Add the following method, invoking a POST operation on the Service Management API, to the class: public String Invoke(String uri, XDocument payload){ Uri operationUri = new Uri(uri); HttpWebRequest httpWebRequest = CreateHttpWebRequest(operationUri, "POST"); using (Stream requestStream = httpWebRequest.GetRequestStream()) { using (StreamWriter streamWriter = new StreamWriter(requestStream, System.Text.UTF8Encoding.UTF8)) { payload.Save(streamWriter, SaveOptions.DisableFormatting); } } String requestId; using (HttpWebResponse response = (HttpWebResponse)httpWebRequest.GetResponse()) { requestId = response.Headers["x-ms-request-id"]; } return requestId;} How it works... In steps 1 through 3, we set up the class. In step 4, we add a version ID for service management operations. Note that Microsoft periodically releases new operations for which it provides a new version ID, which is usually applicable for operations added earlier. In step 4, we also add a private member for the X.509 certificate thumbprint that we initialize in the constructor we add in step 5. In step 6, we open the Personal (My) certificate store on the local machine level and retrieve an X.509 certificate identified by thumbprint. If necessary, we can specify the current user level, instead of the local machine level, by using StoreLocation.CurrentUser instead of StoreLocation.LocalMachine. In step 7, we create an HttpWebRequest with the desired HTTP method type, and add the X.509 certificate to it. We also add various headers including the required x-ms-version. In step 8, we invoke a GET request against the Service Management API and load the response into an XML document which we then return. In step 9, we write an XML document, containing the payload, into the request stream for an HttpWebRequest and then invoke a POST request against the Service Management API. We extract the request ID from the response and return it. How to do it... We are now going to construct the payload required for the create hosted service operation, and then use it when we invoke the operation against the Windows Azure Service Management REST API. We do this as follows: Add a new class named CreateHostedServiceExample to the WPF project. 
If necessary, add the following assembly reference to the project: System.Xml.Linq.dll Add the following using statement to the top of the class file: using System.Xml.Linq; Add the following private members to the class: XNamespace wa = "http://schemas.microsoft.com/windowsazure";String createHostedServiceFormat ="https://management.core.windows.net/{0}/services/hostedservices"; Add the following method, creating a base-64 encoded string, to the class: private String ConvertToBase64String(String value){ Byte[] bytes = System.Text.Encoding.UTF8.GetBytes(value); String base64String = Convert.ToBase64String(bytes); return base64String;} Add the following method, creating the payload, to the class: private XDocument CreatePayload( String serviceName, String label, String description, String location, String affinityGroup){ String base64LabelName = ConvertToBase64String(label); XElement xServiceName = new XElement(wa + "ServiceName", serviceName); XElement xLabel = new XElement(wa + "Label", base64LabelName); XElement xDescription = new XElement(wa + "Description", description); XElement xLocation = new XElement(wa + "Location", location); XElement xAffinityGroup = new XElement(wa + "AffinityGroup", affinityGroup); XElement createHostedService = new XElement(wa +"CreateHostedService"); createHostedService.Add(xServiceName); createHostedService.Add(xLabel); createHostedService.Add(xDescription); createHostedService.Add(xLocation); //createHostedService.Add(xAffinityGroup); XDocument payload = new XDocument(); payload.Add(createHostedService); payload.Declaration = new XDeclaration("1.0", "UTF-8", "no"); return payload;} Add the following method, invoking the create hosted service operation, to the class: private String CreateHostedService(String subscriptionId, String thumbprint, String serviceName, String label, String description, String location, String affinityGroup){ String uri = String.Format(createHostedServiceFormat, subscriptionId); XDocument payload = CreatePayload(serviceName, label, description, location, affinityGroup); ServiceManagementOperation operation = new ServiceManagementOperation(thumbprint); String requestId = operation.Invoke(uri, payload); return requestId;} Add the following method, invoking the methods added earlier, to the class: public static void UseCreateHostedServiceExample(){ String subscriptionId = "{SUBSCRIPTION_ID}"; String thumbprint = "{THUMBPRINT}"; String serviceName = "{SERVICE_NAME}"; String label = "{LABEL}"; String description = "Newly created service"; String location = "{LOCATION}"; String affinityGroup = "{AFFINITY_GROUP}"; CreateHostedServiceExample example = new CreateHostedServiceExample(); String requestId = example.CreateHostedService( subscriptionId, thumbprint, serviceName, label, description, location, affinityGroup);} How it works... In steps 1 through 3, we set up the class. In step 4, we add private members to define the XML namespace used in creating the payload and the String format used in generating the endpoint for the create hosted service operation. In step 5, we add a helper method to create a base-64 encoded copy of a String. We create the payload in step 6 by creating an XElement instance for each of the required and optional properties, as well as the root element. We add each of these elements to the root element and then add this to an XML document. Note that we do not add an AffinityGroup element because we provide a Location element and only one of them should be provided. 
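For reference, the payload assembled by the CreatePayload() method in step 6 serializes to something like the following; the service name shown is a placeholder, and the Label value is simply the base-64 encoding of the label text (here, "My hosted service"):

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<CreateHostedService xmlns="http://schemas.microsoft.com/windowsazure">
  <ServiceName>myhostedservice</ServiceName>
  <Label>TXkgaG9zdGVkIHNlcnZpY2U=</Label>
  <Description>Newly created service</Description>
  <Location>North Central US</Location>
</CreateHostedService>

Only one of the Location and AffinityGroup elements should appear, which matches the commenting described above.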
In step 7, we use the ServiceManagementOperation utility class, described in the Getting ready section, to invoke the create hosted service operation on the Service Management API. The Invoke() method creates an HttpWebRequest, adds the required X.509 certificate and the payload, and then sends the request to the create hosted services endpoint. It then parses the response to retrieve the request ID, which can be used to check the status of the asynchronous create hosted service operation.

In step 8, we add a method that invokes the methods added earlier. We need to provide the subscription ID for the Windows Azure subscription, a globally unique service name for the hosted service, and a label used to identify the hosted service in the Windows Azure Portal. The location must be one of the official location names for a Windows Azure datacenter, such as North Central US. Alternatively, we can provide the GUID identifier of an existing affinity group and swap the commenting out in the code, adding the Location and AffinityGroup elements in step 6. We see how to retrieve the list of locations and affinity groups in the Locations and affinity groups section of this recipe.

There's more...

Each Windows Azure subscription can create six hosted services. This is a soft limit that can be raised by requesting a quota increase from Windows Azure Support at the following URL:

http://www.microsoft.com/windowsazure/support/

There are also soft limits on the number of cores per subscription (20) and the number of Windows Azure storage accounts per subscription (5). These limits can also be increased by request to Windows Azure Support.

Locations and affinity groups

The list of locations and affinity groups can be retrieved using the list locations and list affinity groups operations respectively in the Service Management API. We see how to do this in the Using the Windows Azure Platform PowerShell Cmdlets recipe. As of this writing, the locations are:

Anywhere US
South Central US
North Central US
Anywhere Europe
North Europe
West Europe
Anywhere Asia
Southeast Asia
East Asia

The affinity groups are specific to a subscription.
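The list locations operation mentioned in this section can be invoked directly with the same ServiceManagementOperation helper. The following is a hedged sketch rather than code from the recipe: the /locations endpoint and the Locations/Location/Name response layout follow the Service Management API conventions, but should be verified against the current API reference. The usual using directives for System.Linq, System.Collections.Generic, and System.Xml.Linq are assumed:

private IEnumerable<String> ListLocations(String subscriptionId, String thumbprint)
{
    // The list locations operation is a simple authenticated GET.
    String uri = String.Format(
        "https://management.core.windows.net/{0}/locations", subscriptionId);

    ServiceManagementOperation operation = new ServiceManagementOperation(thumbprint);
    XDocument locations = operation.Invoke(uri);

    // Extract the Name element of each Location entry from the response.
    XNamespace wa = "http://schemas.microsoft.com/windowsazure";
    return locations.Element(wa + "Locations")
        .Elements(wa + "Location")
        .Select(location => (String)location.Element(wa + "Name"))
        .ToList();
}

The list affinity groups and get operation status operations follow the same authenticated GET pattern, so the Invoke(String) overload can be reused for them as well.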
Funambol E-mail: Part 1

Packt
06 Jan 2010
6 min read
In this article, Maria will set up Funambol to connect to the company e-mail server, in order to enable her users to receive e-mail on their mobile phones. E-mail Connector The E-mail Connector allows Funambol to connect to any IMAP and POP e-mail server to enable mobile devices to receive corporate or personal e-mail. The part of the Funambol architecture involved with this functionality is illustrated in the following figure: The E-mail Connector is a container of many things, the most important ones being: The e-mail server extension (represented in the figure by the E-mail Connector block): This is the extension of the Funambol Data Synchronization Service that allows e-mail synchronization through the connection to the e-mail provider. The Inbox Listener Service: This is the service that detects new e-mail in the user inbox and notifies the user's devices. When the Funambol DS Service receives sync requests for e-mail, the request calls the E-mail Connector, which downloads new messages from the e-mail server and makes them available to the DS Service, which in turn delivers them to the device. When a user receives a new e-mail, the new message is detected by the Inbox Listener Service that notifies the user's device to start a new sync. When the E-mail Connector is set up and activated, e-mail can be synced with an e-mail provider if it supports one of the two popular e-mail protocols—POP3 or IMAP v4 for incoming e-mail and the SMTP protocol for outgoing e-mail delivery. Please note that the Funambol server does not store user e-mail locally. For privacy and security reasons, e-mail is stored in the e-mail store of the E-mail Provider. The server constructs a snapshot of each user's inbox in the local database to speed up the process of discovering new e-mails without connecting to the e-mail server. Basically, this e-mail cache contains the ID of the messages and their received date and time. The Funambol service responsible for populating and updating the user inbox cache is the Inbox Listener Service. This service checks each user inbox on a regular basis (that is, every 15 minutes) and updates the inbox cache, adding new messages and deleting the messages that are removed from the inbox (for example, when a desktop client downloaded them or the user moved the messages to a different folder). Another important aspect to consider with mobile e-mail is that many devices have limited capabilities and resources. Further, the latency of retrieving a large inbox can be unacceptable for mobile users, who need the device to be always functional when they are away from their computer. For this reason, Funambol limits the maximum number of e-mails that Maria can download on her mobile so that she is never inconvenienced by having too many e-mails in her mobile e-mail inbox. This value can be customized in the server settings (see section E-mail account setup). In the following sections, Maria will learn how to set up Funambol to work with the corporate e-mail provider and how she can provide Funambol mobile e-mail for her users. Setting up Funambol mobile e-mail The Funambol E-mail Connector is part of a default installation of Funambol so Maria does not need to install any additional packages to use it. The following sections describe what Maria needs to do to set up Funambol to connect to her corporate e-mail server. 
E-mail Provider
The only thing that Maria needs to make sure about the corporate E-mail Provider is that it supports POP/IMAP and SMTP access from the network where Funambol is installed. It is not necessary for the firewall to be open to mobile devices: devices keep using SyncML as the transport protocol, while the Funambol server connects to the e-mail server when required. Also, the same e-mail server does not need to provide both POP (or IMAP) and SMTP; Funambol can be configured to use two different servers for incoming and outgoing messages.

Funambol authentication with e-mail
One of Maria's security concerns is the distribution and provisioning of e-mail account information on the mobile phones. She does not like the fact that e-mail account information is sent over a channel that she can only partially control. This is a common concern of IT administrators. Funambol addresses this issue by not storing e-mail credentials on the device. The device (or any other SyncML client) is provisioned with Funambol credentials.

In the previous sections, Maria was able to create new accounts so that users could use the PIM synchronization service, and in doing so, she needed to provide new usernames and passwords. This is still valid for e-mail users. What Maria needs to do now is to configure the E-mail Connector and add the e-mail accounts of the users she wants to enable for mobile e-mail. These topics are covered in detail in the following sections.

E-mail account setup
To add a user e-mail account to the synchronization service, Maria can use the Funambol Administration Tool, expanding the Modules | email | FunambolEmailConnector node and double-clicking the connector. This opens the connector administration panel, as shown in the following screenshot:

There are two sections: Public Mail Servers and Accounts. Maria needs to add new accounts; let's start with her account first. Clicking the Add button in the Accounts section opens a new search window so that she can search for the Funambol user to attach to the e-mail account. Typing maria in the Username field and clicking Search shows the result, as in the following screenshot:

Double-clicking the desired account displays a form for maria's account details, as shown in the following screenshot:

Each field is explained as follows:

- Login, Password, Confirm password, and E-mail address: as the labels describe, these are the e-mail account credentials and e-mail address. They are the credentials used to access the e-mail service, not the ones used to access the Funambol synchronization service.
- Enable Polling: this setting enables or disables the Inbox Listener Service's checking for updates on this account's inbox. When disabled, the account inbox won't be scanned for new/updated/deleted e-mail. This disables e-mail synchronization completely.
- Enable Push: this setting enables or disables the push functionality. When disabled, the user will not be notified of new e-mails. If the Enable Polling checkbox is active, the Inbox Listener Service keeps updating this account's e-mail cache anyway; in this case Maria can still download e-mail by manually starting the synchronization from the client.
- Refresh time (min): this setting specifies how frequently the Inbox Listener Service checks for updates on this account's inbox. The value is expressed in minutes. The shorter this period, the more often new e-mail is detected and therefore the closer the user experience is to real time.
However, the smaller this number, the heavier the load on the Inbox Listener Service and the e-mail provider. When you have only a few users, this is not too relevant, but it is something to consider when planning a major deployment.
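To make concrete what each poll costs, here is a minimal, hypothetical Java/JavaMail sketch of the kind of work an inbox check implies. This is not Funambol's Inbox Listener code; the host, credentials, and cache structure are placeholders. Each pass opens the user's IMAP inbox, lists the message UIDs and received dates, and diffs them against the cached snapshot to find new and removed messages.

```java
import java.util.*;
import javax.mail.*;

/**
 * Simplified illustration of the inbox-cache diff performed by an inbox
 * listener: compare the message UIDs currently in the IMAP INBOX with the
 * UIDs cached from the previous poll, then report additions and deletions.
 * This is NOT Funambol's actual implementation.
 */
public class InboxCacheDiffSketch {

    public static void main(String[] args) throws Exception {
        // Hypothetical cache from the previous poll: message UID -> received time.
        Map<Long, Date> cachedInbox = new HashMap<>();

        Properties props = new Properties();
        props.put("mail.store.protocol", "imap");
        Session session = Session.getInstance(props);
        Store store = session.getStore("imap");
        store.connect("mail.example.com", "maria", "secret");   // placeholders

        Folder inbox = store.getFolder("INBOX");
        inbox.open(Folder.READ_ONLY);
        UIDFolder uidFolder = (UIDFolder) inbox;

        Set<Long> currentUids = new HashSet<>();
        for (Message m : inbox.getMessages()) {
            long uid = uidFolder.getUID(m);
            currentUids.add(uid);
            if (!cachedInbox.containsKey(uid)) {
                // A new message: cache it and notify the user's device.
                cachedInbox.put(uid, m.getReceivedDate());
                System.out.println("New message, notify device: UID " + uid);
            }
        }

        // Drop cache entries for messages removed from the inbox
        // (downloaded by a desktop client, moved to another folder, ...).
        cachedInbox.keySet().removeIf(uid -> !currentUids.contains(uid));

        inbox.close(false);
        store.close();
    }
}
```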

Using IntelliTrace to Diagnose Problems with a Hosted Service

Packt
11 Aug 2011
4 min read
Windows Azure Diagnostics provides non-intrusive support for diagnosing problems with a Windows Azure hosted service. This non-intrusion is vital in a production service. However, when developing a hosted service, it may be worthwhile to get access to additional diagnostic information, even at the cost of intruding on the service.

Visual Studio 2010 Ultimate Edition supports the use of IntelliTrace with an application deployed to the cloud. This can be particularly helpful when dealing with problems such as missing assemblies. It also allows for the easy identification and diagnosis of exceptions. Note that IntelliTrace has a significant impact on the performance of a hosted service. Consequently, it should never be used in a production environment and, in practice, should only be used when needed during development.

IntelliTrace is configured when the application package is published. This configuration includes specifying the events to trace and identifying the modules and processes for which IntelliTrace should not capture data. For example, the Storage Client module is excluded by default from IntelliTrace capture, since otherwise storage exceptions could occur due to timeouts.

Once the application package has been deployed, the Windows Azure Compute node in the Visual Studio Server Explorer indicates the Windows Azure hosted service, roles, and instances which are capturing IntelliTrace data. From the instance level in this node, a request can be made to download the current IntelliTrace log. This lists:

- Threads
- Exceptions
- System info
- Modules

The threads section provides information about when particular threads were running. The exceptions section specifies the exceptions that occurred and provides the call stack at the point they occurred. The system info section provides general information about the instance, such as the number of processors and total memory. The modules section lists the loaded assemblies.

The IntelliTrace logs will probably survive an instance crash, but they will not survive if the virtual machine is moved due to excessive failure. The instance must be running for Visual Studio to be able to download the IntelliTrace logs.

In this recipe, we will learn how to use IntelliTrace to identify problems with an application deployed to a hosted service in the cloud.

Getting ready
Only Visual Studio Ultimate Edition supports the use of IntelliTrace with an application deployed to a hosted service in the cloud.

How to do it...
We are going to use IntelliTrace to investigate an application deployed to a hosted service in the cloud. We do this as follows.

The first few steps occur before the application package is deployed to the cloud:

1. Use Visual Studio 2010 Ultimate Edition to build a Windows Azure project.
2. Right-click on the solution and select Publish....
3. Select Enable IntelliTrace for .NET 4 roles.
4. Click on Settings... and make any changes desired to the IntelliTrace settings, such as the modules excluded.
5. Click on OK to continue the deployment of the application package.

The remaining steps occur after the package has been deployed and the hosted service is in the Ready (that is, running) state:

6. Open the Server Explorer in Visual Studio.
7. On the Windows Azure Compute node, right-click on an instance node and select View IntelliTrace logs.
8. Investigate the downloaded logs, looking at exceptions and their call stacks, and so on.
9. Right-click on individual lines of code in a code file and select Search For This Line In IntelliTrace.
10. Select one of the located uses and step through the code from that line.

How it works...
Steps 1 through 5 are a normal application package deployment, except for the IntelliTrace configuration. In steps 6 and 7, we use Server Explorer to access and download the IntelliTrace logs. Note that we can refresh the logs through additional requests to View IntelliTrace logs. In steps 8 through 10, we look at various aspects of the downloaded IntelliTrace logs.

Further resources on this subject:
- Windows Azure Diagnostics: Initializing the Configuration and Using a Configuration File [Article]
- Digging into Windows Azure Diagnostics [Article]
- Managing Azure Hosted Services with the Service Management API [Article]
- Autoscaling with the Windows Azure Service Management REST API [Article]
- Using the Windows Azure Platform PowerShell Cmdlets [Article]

Installation and Deployment of Citrix Systems®' CPSM

Packt
23 Oct 2013
7 min read
Metrics to verify before CPSM deployment
Until now, you have learned about the most obvious requirements to install CPSM; in the upcoming sections, we will look at how to verify the essentials for a CPSM deployment.

Verification of environment prerequisites
We will look at the core components that should be verified right at the outset, before the installation.

The first item that needs to be verified is that the Active Directory (AD) schema has been extended to accommodate Citrix CloudPortal Services Manager. As you are aware, this operation can be performed using the Microsoft Exchange installation tools. The following steps need to be performed:

1. Open the command prompt on your planned Exchange server.
2. Execute the following command:
setup /p /on:OrganizationName

The second component that needs to be cross-checked is whether the DNS aliases have been configured. Citrix CloudPortal Services Manager uses DNS aliases to discover the servers where the platform modules will be positioned. For this, the following steps need to be performed:

1. On AD, create CNAME records, one for each of your servers, using the following aliases:
   - Database server: CORTEXSQL
   - Provisioning server: CORTEXPROVISIONING
   - Web server: CORTEXWEB
   - Reporting Services: CORTEXREPORTS
2. Use the Citrix CloudPortal Services Manager Setup utility to verify the preceding items. The utility probes your settings and, if the outcome is positive, displays a green check mark next to each confirmed item. If it is negative, the Setup utility shows a Validate button so that you can execute the checks again. To run the check:
   1. From your file cluster or from the installation media, execute Setup.exe.
   2. On the CloudPortal Services Manager splash screen, click on Get Started.
   3. On the Choose Deployment Task screen, choose Install CloudPortal Services Manager.
   4. On the CloudPortal Services Manager screen, choose Check Environment Prerequisites. The Prepare Environment screen displays the status of the verified items.

As the next step, we will now create the system databases. The heart of the deployment is the Config.xml file, which will be used throughout the wizard run-through.

How to deploy SQL Server and Reporting Services
For cloud IT providers, it is recommended to deploy SQL Server and Reporting Services in a dedicated cluster for high availability, especially when providing for multiple consumers. With regards to the installation, configuration, and performance tuning of SQL Server and Reporting Services, please refer to http://technet.microsoft.com/en-us/library/ms143219(v=sql.105).aspx.

The next step is to create the databases. This activity must be performed after SQL Server and SQL Server Reporting Services have been deployed. The system databases are created using the Services Manager Configuration Tool, which is installed as part of this process. Perform the following steps:

1. From the source location where the installation media is located, execute the Setup.exe file.
2. On the CloudPortal Services Manager splash screen, click on Get Started.
3. On the Choose Deployment Task screen, choose Install CloudPortal Services Manager.
4. On the Install CloudPortal Services Manager screen, choose Deploy Server Roles & Primary Location.
5. On the Deploy Server Roles & Primary Location screen, choose Create System Databases.
Now let us install the Citrix CloudPortal Services Manager Configuration Tool:

1. When prompted, click on Install to deploy the Configuration utility.
2. On the License Agreement screen, read and accept the license agreement, and then click on Next.
3. On the Ready to install screen, click on Install. The Setup utility installs the Configuration Tool and the required prerequisites.
4. Click on Finish to continue creating the system databases.

The next step of the installation is the Create a Configuration File screen. Browse to the directory where you want to store the Config.xml file, provide a filename, and then click on Next.

Now, on the Create Primary Databases screen, configure the following information about the SQL Server that will store the system configuration information:

- Server address: specify the database server using its DNS alias, IP address, or FQDN.
- Server port: declare the port number used by SQL Server. The port for a default instance of SQL Server is 1433.
- Authentication mode: choose whether to apply Integrated Windows authentication or SQL authentication. By default, Integrated is chosen (Mixed Mode is recommended).
- Connect as: declare the username and password of the SQL administrator account (super account). These fields are accessible only when SQL authentication mode is chosen for the installation.
- Auto-create SQL logins: select this checkbox if you want the required SQL Server accounts to be created automatically. If you do not select it, you can provide the login details manually later on the Configure Database Logins screen.

Run Test Connection to make sure the Configuration utility can contact the SQL Server, and then click on Next.

On the Configure Database Logins screen, proceed with Generate IDs selected if you want passwords to be created automatically for the CortexProp, OLMReports, and OLM DB accounts. Clear this choice if you want to provide the passwords for these accounts yourself. The CortexProp, OLM DB, and OLMReports accounts are created to make sure cross-domain access to the server databases is available.

On the Summary screen, review the database configuration information. If you want to change anything, click on Back to return to the appropriate configuration screen. Once the configuration is complete, click on Commit. The Applying Configuration screen displays the progress. After the system databases are successfully created, click on Finish.

After the system databases are created, you can install the Provisioning, Directory Web Service, and web platform server roles on the other servers.

Installation of the CPSM role using GUI
By now you should have a crystal clear understanding of the system requirements for a CPSM installation. In order to start the installation using the GUI, we need to perform the following activity on each server you plan to use to host a server role. Note that the Reporting server role must be deployed and configured after the primary location has been configured; if you deploy Reporting Services before the primary location has been configured, the configuration of Reporting Services fails.

1. From the source location where the installation media is located, execute the Setup.exe file.
2. On the Setup Tool splash screen, click on Get Started.
3. On the Choose Deployment Task screen, choose Install CloudPortal Services Manager and click on Next.
4. On the Install CloudPortal Services Manager screen, choose Deploy Server Roles & Primary Location and click on Next.
5. On the Deploy Server Roles & Primary Location screen, choose Install Server Roles and click on Next.
6. On the License Agreement screen, agree to the license agreement and then click on Next.
7. On the Choose Server Roles screen, choose the roles to install and then click on Next.
8. On the Review Prerequisites screen, evaluate the prerequisite objects that will be deployed and then click on Next.
9. On the Ready to install screen, evaluate the chosen roles and prerequisites that will be deployed, and click on Install. The Deploying Server Roles screen shows the installation of the prerequisites and the chosen roles, and the outcome.
10. On the Deployment Complete screen, click on Finish.

Summary
This article serves as a brief reference for readers to understand the system, verify the essentials, and install and configure CPSM using the GUI and CLI.

Resources for Article:
Further resources on this subject:
- Content Switching using Citrix Security [Article]
- Managing Citrix Policies [Article]
- Getting Started with the Citrix Access Gateway Product Family [Article]

Funambol E-mail: Part 2

Packt
06 Jan 2010
3 min read
Mobile e-mail at work
Some of the most widely used phones for mobile e-mail are those running Windows Mobile; therefore, this is a platform Maria will have to support. Funambol fully supports this platform, extending the Windows Mobile native e-mail client to support SyncML and Funambol mobile e-mail. As Windows Mobile does not natively support SyncML, Maria needs to download the Funambol Windows Mobile Sync Client from the following URLs:

- http://www.funambol.com/opensource/download.php?file_id=funambol-smartphone-sync-client-7.2.2.exe&_=d (for Windows Mobile smartphone)
- http://www.funambol.com/opensource/download.php?file_id=funambol-pocketpc-sync-client-7.2.2.exe&_=d (for Windows Mobile Pocket PC)

Like any other Windows Mobile application, these are executable files that need to be run on a desktop PC, and the installation is performed by Microsoft ActiveSync. Once the client is installed on the mobile phone, Maria can run Funambol by clicking the Funambol icon. The first time the application is launched, it asks for the Funambol credentials, as shown in the following image:

Maria fills in her Funambol Server location and credentials (not her e-mail account credentials) and presses Save. After a short while, the device starts downloading the messages, which she can access from the Funambol account created by the Funambol installation program in Pocket Outlook. The inbox will look similar to the following image:

To see mobile e-mail at work, Maria just needs to send an e-mail to the e-mail account she set up earlier. In less than a minute, her mobile device will be notified that there are new messages and the synchronization will start automatically (unless the client is configured differently).

Mobile e-mail client configuration
There are a number of settings that Maria can change on her mobile phone to control how mobile e-mail works. These settings are accessible from the Funambol application by clicking on Menu | Settings. There are two groups of settings that are important for mobile e-mail: E-mail options... and Sync Method.

From the E-mail options panel, Maria can choose which e-mails to download (all e-mails, today's e-mails, or e-mails received in the last X days), the size of the e-mail to download first (the remaining part can then be downloaded on demand), and whether she also wants to download attachments. In the advanced options, she can also choose to use a different "From" display name and e-mail address.

From the Sync Method panel, Maria can choose whether e-mail is downloaded automatically using the push service, on a regular basis with a scheduled sync, or only manually upon request (from the Funambol Windows Mobile Sync Client interface or the Pocket Outlook send and receive command).

Funambol supports many mobile phones for mobile e-mail; the previous description applies only to Windows Mobile phones. The manner in which Funambol supports other devices depends on the phone. In some cases, Funambol uses the phone's native e-mail client, as with Windows Mobile. In other cases, Funambol provides its own mobile e-mail client that is downloaded onto the device.

Funambol development

Packt
06 Jan 2010
8 min read
Data synchronization
All mobile devices (handheld computers, mobile phones, pagers, and laptops) need to synchronize their data with the server where the information is stored. This ability to access and update information on the fly is the key to the pervasive nature of mobile computing. Yet, today, almost every device uses a different technology for data synchronization. Data synchronization is helpful for:

- Propagating updates between a growing number of applications
- Overcoming the limitations of mobile devices and wireless connections
- Maximizing the user experience by minimizing data access latency
- Keeping the infrastructure scalable in an environment where the number of devices (clients) and connections tends to increase considerably
- Understanding the requirements of mobile applications, providing a user experience that is helpful, and not an obstacle, for mobile tasks

Data synchronization is the process of making two sets of data look identical, as shown in the following figure:

This involves many techniques, which will be discussed in the following sections. The most important are:

- ID handling
- Change detection
- Modification exchange
- Conflict detection
- Conflict resolution
- Slow and fast synchronization

ID handling
At first glance, ID handling seems like a pretty straightforward process that requires little or no attention. However, ID handling is an important aspect of the synchronization process and is not trivial. In some cases, a piece of data is identifiable by a subset of its content fields. For example, in the case of a contact entry, the concatenation of first name and last name uniquely selects an entry in the directory. In other cases, the ID is represented by a particular field specifically introduced for that purpose. For example, in a Sales Force Automation mobile application, an order is identified by an order number or ID.

The way in which an item ID is generated is not predetermined, and it may be application or even device specific. In an enterprise system, data is stored in a centralized database, which is shared by many users, and each single item is recognized by the system because of a unique global ID. In some cases, two sets of data (the order on a client and the order on a server) represent the same information (the order made by the customer), but their IDs differ. What can be done to reconcile client and server IDs and make the information consistent? Many approaches can be chosen:

- Client and server agree on an ID scheme (a convention on how to generate IDs must be defined and used).
- Each client generates globally unique IDs (GUIDs) and the server accepts client-generated IDs.
- The server generates GUIDs and each client accepts those IDs.
- Client and server generate their own IDs and a mapping is kept between the two.

Client-side IDs are called Locally Unique Identifiers (LUIDs) and server-side IDs are called Globally Unique Identifiers (GUIDs). The mapping between local and global identifiers is referred to as LUID-GUID mapping. The SyncML specifications prescribe the use of the LUID-GUID mapping technique, which allows maximum freedom to client implementations.

Change detection
Change detection is the process of identifying the data that was modified after a particular point in time, that is, the last synchronization. This is usually achieved by using additional information such as timestamps and state information.
For example, a possible database enabled for efficient change detection is shown in the following table, where the State column flags each record as new (N), deleted (D), or updated (U):

ID | First name | Last name | Telephone       | State | Last_update
12 | John       | Doe       | +1 650 5050403  | N     | 2008-04-02 13:22
13 | Mike       | Smith     | +1 469 4322045  | D     | 2008-04-01 17:32
14 | Vincent    | Brown     | +1 329 2662203  | U     | 2008-03-21 17:29

However, sometimes legacy databases do not provide the information needed to accomplish efficient change detection. As a result, the matter becomes more complicated and alternative methods must be adopted (based on content comparison, for instance). This is one of the most important aspects to consider when writing a Funambol extension, because the synchronization engine needs to know what has changed since a point in time.

Modification exchange
A key component of a data synchronization infrastructure is the way modifications are exchanged between client and server. This involves the definition of a synchronization protocol that client and server use to initiate and execute a synchronization session. In addition to the modification exchange method, a synchronization protocol must also define a set of supported modification commands. The minimal set of modification commands is:

- Add
- Replace
- Delete

Conflict detection
Let's assume that two users synchronize their local contacts databases with a central server in the morning before going to the office. After synchronization, the contacts on their smartphones are exactly the same. Let's now assume that they both update the telephone number for the "John Doe" entry, and one of them makes a mistake and enters a different number. What will happen the next morning when they both synchronize again? Which of the two new versions of the "John Doe" record should be taken and stored on the server? This condition is called a conflict, and the server has the duty of identifying and resolving it. Funambol detects a conflict by means of the synchronization matrix shown in the following table:

                          Database A:
Database B:               New  Deleted  Updated  Synchronized/Unchanged  Not Existing
New                       C    C        C        C                       B
Deleted                   C    X        C        D                       X
Updated                   C    C        C        B                       B
Synchronized/Unchanged    C    D        A        =                       B
Not Existing              A    X        A        A                       X

As both users synchronize with the central database, we can consider what happens between the server database and one of the client databases at a time. Let's call Database A the client database and Database B the server database. The symbols in the synchronization matrix have the following meaning:

- X: Do nothing
- A: Item A replaces item B
- B: Item B replaces item A
- C: Conflict
- D: Delete the item from the source(s) containing it
- =: The items are identical; nothing needs to be done

Conflict resolution
Once a conflict arises and is detected, proper action must be taken. Different policies can be applied. Let's see some of them:

- User decides: the user is notified of the conflict condition and decides what to do.
- Client wins: the server silently replaces conflicting items with the ones sent by the client.
- Server wins: the client has to replace conflicting items with the ones from the server.
- Timestamp based: the last modified (in time) item wins.
- Last/first in wins: the last/first arrived item wins.
- Merge: try to merge the changes, at least when there is no direct conflict. Consider the case of a vCard where two concurrent modifications have been applied to two different fields: there is a conflict at the card level, but the two changes can be merged so that both clients end up with a valid version of the card. This is the best example of a change that is not directly conflicting.
- Do not resolve.
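Before looking at how Funambol actually resolves conflicts, it can help to see the detection step in code. The following is a minimal, hypothetical Java sketch (not Funambol's engine code) that simply encodes the synchronization matrix above as a lookup table; the single-letter aliases mirror the symbols in the matrix.

```java
/**
 * Minimal illustration of conflict detection driven by the synchronization
 * matrix: rows are the state of the item in Database B (the server),
 * columns the state in Database A (the client).
 */
public class SyncMatrixSketch {

    enum State { NEW, DELETED, UPDATED, UNCHANGED, NOT_EXISTING }

    enum Outcome { NOTHING, A_REPLACES_B, B_REPLACES_A, CONFLICT, DELETE, EQUAL }

    // Short aliases matching the symbols used in the matrix above.
    private static final Outcome X = Outcome.NOTHING;
    private static final Outcome A = Outcome.A_REPLACES_B;
    private static final Outcome B = Outcome.B_REPLACES_A;
    private static final Outcome C = Outcome.CONFLICT;
    private static final Outcome D = Outcome.DELETE;
    private static final Outcome E = Outcome.EQUAL;

    // MATRIX[state in Database B][state in Database A], states in enum order:
    // NEW, DELETED, UPDATED, UNCHANGED, NOT_EXISTING.
    private static final Outcome[][] MATRIX = {
        { C, C, C, C, B },   // B: NEW
        { C, X, C, D, X },   // B: DELETED
        { C, C, C, B, B },   // B: UPDATED
        { C, D, A, E, B },   // B: UNCHANGED (SYNCHRONIZED)
        { A, X, A, A, X }    // B: NOT_EXISTING
    };

    static Outcome detect(State stateInA, State stateInB) {
        return MATRIX[stateInB.ordinal()][stateInA.ordinal()];
    }

    public static void main(String[] args) {
        // Both users updated the same contact: the matrix reports a conflict,
        // which a resolution policy (merge, server wins, ...) must then settle.
        System.out.println(detect(State.UPDATED, State.UPDATED));   // CONFLICT
        // The item exists only on the client: the client's copy is taken.
        System.out.println(detect(State.NEW, State.NOT_EXISTING));  // A_REPLACES_B
    }
}
```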
Note that Funambol adopts a special merging policy that guarantees that the user does not lose data. The server always tries to merge if possible. When a conflict cannot be resolved by merging (for example, there are conflicting changes on the same field), the value in the last synchronization wins over the older synchronizations, to meet the expectation of the user who is synchronizing. In this way, when the users who applied the previous changes receive the new updates, all devices will be in sync.

Synchronization modes: full or fast
There are many modes in which the synchronization process can be carried out. The main distinction is between fast and full synchronization.

Fast synchronization involves only the items changed since the last synchronization between two devices. Of course, this is an optimized process that relies on the fact that the devices were fully synchronized at some point in the past; this way, the state at the beginning of the sync operation is well known and sound. When this condition is not met (for instance, the mobile device has been reset and has lost the timestamp of the last synchronization), a full synchronization must be performed. In a full synchronization, the client sends its entire database to the server, which compares it with its local database and returns the modifications that must be applied to be up to date again.

Both fast and full synchronization modes can be performed in one of the following manners:

- Client-to-server: the server updates its database with client modifications, but sends no server-side modifications.
- Server-to-client: the client updates its database with server modifications, but sends no client-side modifications.
- Two-way: the client and server exchange their modifications and both databases are updated accordingly.

Extending Funambol
The Funambol platform can be extended in many areas to integrate Funambol with existing systems and environments. The most common integration use cases and the Funambol modules involved are:

- Officer: integrating with an external authentication and authorization service
- SyncSource: integrating with an external data source and addressing client-specific issues
- Synclet: adding pre- or post-processing to a SyncML message
- Admin WS: integrating with an external management tool

These are illustrated in the following diagram:

Funambol extensions are distributed and deployed as Funambol modules. This section describes the structure of a Funambol module, while the following sections describe each of the listed scenarios.

A Funambol module represents the means by which developers can extend the Funambol server. A module is a packaged set of files containing Java classes, installation scripts, configuration files, initialization SQL scripts, components, and so on, used by the installation procedure to embed extensions into the server core. For more information on how to install Funambol modules, see the Funambol Installation and Administration Guide.
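To give a feel for what a data-source extension has to provide, here is a rough, hypothetical Java sketch. The SimpleSyncSource interface below is invented for illustration only; the real Funambol SyncSource interface has more methods and different signatures, so consult the Funambol developer documentation before writing an actual module. The point is the shape of the contract: change detection by time window (what is new, updated, or deleted since the last sync) plus item-level access to the backing store.

```java
import java.sql.Timestamp;
import java.util.*;

/**
 * Hypothetical, heavily simplified sketch of a SyncSource-style extension:
 * change detection hooks plus item access against the backend data source.
 * This interface is invented for illustration and is NOT the real Funambol API.
 */
interface SimpleSyncSource {
    Set<String> getNewItemKeys(Timestamp since, Timestamp to);
    Set<String> getUpdatedItemKeys(Timestamp since, Timestamp to);
    Set<String> getDeletedItemKeys(Timestamp since, Timestamp to);
    String getItem(String key);
    void addItem(String key, String content);
    void updateItem(String key, String content);
    void removeItem(String key);
}

/** In-memory backend, standing in for an external database or web service. */
class InMemorySyncSource implements SimpleSyncSource {

    private final Map<String, String> items = new HashMap<>();
    private final Map<String, Timestamp> created = new HashMap<>();
    private final Map<String, Timestamp> updated = new HashMap<>();
    private final Map<String, Timestamp> deleted = new HashMap<>();

    public Set<String> getNewItemKeys(Timestamp since, Timestamp to) {
        return keysInWindow(created, since, to);
    }
    public Set<String> getUpdatedItemKeys(Timestamp since, Timestamp to) {
        return keysInWindow(updated, since, to);
    }
    public Set<String> getDeletedItemKeys(Timestamp since, Timestamp to) {
        return keysInWindow(deleted, since, to);
    }
    public String getItem(String key) { return items.get(key); }

    public void addItem(String key, String content) {
        items.put(key, content);
        created.put(key, now());
    }
    public void updateItem(String key, String content) {
        items.put(key, content);
        updated.put(key, now());
    }
    public void removeItem(String key) {
        items.remove(key);
        deleted.put(key, now());
    }

    private static Timestamp now() { return new Timestamp(System.currentTimeMillis()); }

    // Return the keys whose change timestamp falls inside the (since, to] window.
    // A real data source would also have to cope with items that are both added
    // and modified within the same window.
    private static Set<String> keysInWindow(Map<String, Timestamp> log,
                                            Timestamp since, Timestamp to) {
        Set<String> keys = new HashSet<>();
        for (Map.Entry<String, Timestamp> e : log.entrySet()) {
            if (e.getValue().after(since) && !e.getValue().after(to)) {
                keys.add(e.getKey());
            }
        }
        return keys;
    }
}
```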