How-To Tutorials - Programming

1083 Articles

Games and Exercises

Packt
09 Aug 2017
3 min read
In this article by Shishira Bhat and Ravi Wray, authors of the book Learn Java in 7 Days, we will study the following concepts:

Making an object the return type for a method
Making an object the parameter for a method

Let's start this article by revisiting reference variables and custom data types. In the preceding program, p is a variable of the datatype Pen. Yes! Pen is a class, but it is also a datatype, a custom datatype. The p variable stores the address of the Pen object, which is in heap memory. The p variable is a reference that refers to a Pen object. Now, let's get more comfortable by understanding and working with examples.

How to return an object from a method?

In this section, let's understand return types. In the following code, the methods return inbuilt data types (int and String), and the reason is explained after each method, as follows:

  int add () {
      int res = (20+50);
      return res;
  }

The add method returns the res (70) variable, which is of the int type. Hence, the return type must be int:

  String sendsms () {
      String msg = "hello";
      return msg;
  }

The sendsms method returns a variable by the name of msg, which is of the String type. Hence, the return type is String. The data type of the returned value and the return type must be the same.

In the following code snippet, the return type of the givePen method is not an inbuilt data type. However, the return type is a class (Pen). Let's understand the following code: the givePen() method returns a variable (a reference variable) by the name of p, which is of the Pen type. Hence, the return type is Pen.

In the preceding program, tk is a variable of the Ticket type. The method returns tk; hence, the return type of the method is Ticket.

A method accepting an object (parameter)

After seeing how a method can return an object/reference, let's understand how a method can take an object/reference as its input, that is, as a parameter. We already understood that if a method takes parameter(s), then we need to pass argument(s).

Example

In the preceding program, the method takes two parameters, i and k. So, while calling/invoking the method, we need to pass two arguments, which are 20.5 and 15. The parameter type and the argument type must be the same. Remember that when a class is the datatype, then an object is the data. Consider the following example with respect to a non-primitive/class data type and the object as its data: in the preceding code, the Kid class has the eat method, which takes ch as a parameter of the Chocolate type, that is, the data type of ch is Chocolate, which is a class. When a class is the data type, then an object of that class is the actual data or argument. Hence, new Chocolate() is passed as an argument to the eat method.

Let's see one more example: the drink method takes wtr as a parameter of the type Water, which is a class/non-primitive type; hence, the argument must be an object of the Water class.

Summary

In this article, we have learned what to return when a class is the return type for a method and what to pass as an argument for a method when a class is a parameter for the method.
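To tie the two ideas together, here is a rough illustrative sketch. The class and method names Pen, Ticket, Kid, Chocolate, Water, givePen, eat, and drink come from the text; the method bodies, the class that declares drink, and the main method are assumptions added only for illustration and are not the book's listings:

  class Pen { }
  class Ticket { }
  class Chocolate { }
  class Water { }

  class Kid {
      // The parameter type is a class (Chocolate), so the argument must be a Chocolate object.
      void eat(Chocolate ch) {
          System.out.println("Eating: " + ch);
      }

      // Same idea with another custom type as the parameter.
      void drink(Water wtr) {
          System.out.println("Drinking: " + wtr);
      }
  }

  public class ObjectReturnAndParameterDemo {

      // The return type is a class (Pen), so the method returns a Pen reference.
      static Pen givePen() {
          Pen p = new Pen();
          return p;
      }

      // The return type is Ticket, so the method returns a Ticket reference.
      static Ticket bookTicket() {
          Ticket tk = new Ticket();
          return tk;
      }

      public static void main(String[] args) {
          Pen p = givePen();          // p stores the address of a Pen object on the heap
          Ticket tk = bookTicket();   // tk refers to a Ticket object

          Kid kid = new Kid();
          kid.eat(new Chocolate());   // an object is passed where a class is the parameter type
          kid.drink(new Water());
      }
  }

A sketch like this simply demonstrates that a reference returned by one method can be stored in a variable and passed straight into another method that declares that class as its parameter type.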


Data Storage in Force.com

Packt
09 Jan 2017
14 min read
In this article by Andrew Fawcett, author of the book Force.com Enterprise Architecture - Second Edition, we will discuss how important it is to consider your customers' storage needs and use cases around their data creation and consumption patterns early in the application design phase. This ensures that your object schema is the most optimum one with respect to large data volumes, data migration processes (inbound and outbound), and storage cost.

In this article, we will extend the Custom Objects in the FormulaForce application as we explore how the platform stores and manages data. We will also explore the difference between your application's operational data and configuration data and the benefits of using Custom Metadata Types for configuration management and deployment.

You will obtain a good understanding of the types of storage provided and how the costs associated with each are calculated. It is also important to understand the options that are available when it comes to reusing or attempting to mirror Standard Objects such as Account, Opportunity, or Product, which extends the discussion further into license cost considerations. You will also become aware of the options for standard and custom indexes over your application data. Finally, we will have some insight into new platform features for consuming external data storage from within the platform.

In this article, we will cover the following topics:

Mapping out end user storage requirements
Understanding the different storage types
Reusing existing Standard Objects
Importing and exporting application data
Options for replicating and archiving data
External data sources

Mapping out end user storage requirements

During the initial requirements and design phase of your application, the best practice is to create user categorizations known as personas. Personas consider the users' typical skills, needs, and objectives. From this information, you should also start to extrapolate their data requirements, such as the data they are responsible for creating (either directly or indirectly, by running processes) and the data they need to consume (reporting).

Once you have done this, try to provide an estimate of the number of records that they will create and/or consume per month. Share these personas and their data requirements with your executive sponsors, your market researchers, early adopters, and finally the whole development team so that they can keep them in mind and test against them as the application is developed.

For example, in our FormulaForce application, it is likely that managers will create and consume data, whereas race strategists will mostly consume a lot of data. Administrators will also want to manage your application's configuration data. Finally, there will likely be a background process in the application generating a lot of data, such as the process that records Race Data from the cars and drivers during the qualification stages and the race itself, such as sector (a designated portion of the track) times.

You may want to capture your conclusions regarding personas and data requirements in a spreadsheet along with some formulas that help predict data storage requirements. This will help in the future as you discuss your application with Salesforce during the AppExchange Listing process and will be a useful tool during the sales cycle, as prospective customers will want to know how to budget their storage costs with your application installed.
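As a rough illustration of such a spreadsheet estimate, the record counts below are invented for this sketch, and the per-record size used is the one discussed in the next section:

  Race engineer persona: 200 Race records per month x 2 KB per record = roughly 0.4 MB per month
  Background telemetry process: 50,000 sector-time records per month x 2 KB per record = roughly 100 MB per month
  Total for one customer: roughly 100 MB of data storage per month, or about 1.2 GB per year

An estimate like this, multiplied out per persona and per customer size, gives prospective customers a concrete starting point for budgeting storage costs.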
Understanding the different storage types

The storage used by your application records contributes to the most important part of the overall data storage allocation on the platform. There is also another type of storage used by the files uploaded or created on the platform.

From the Storage Usage page under the Setup menu, you can see a summary of the records used, including those that reside in the Salesforce Standard Objects. Later in this article, we will create a Custom Metadata Type object to store configuration data. Storage consumed by this type of object is not reflected on the Storage Usage page and is managed and limited in a different way.

The preceding page also shows which users are using the most storage. In addition to the individual's User details page, you can also locate the Used Data Space and Used File Space fields; next to these are the links to view the user's data and file storage usage. The limit shown for each is based on a calculation between the minimum allocated data storage, depending on the type of organization, or the number of users multiplied by a certain number of MBs, which also depends on the organization type; whichever is greater becomes the limit. For full details of this, click on the Help for this Page link shown on the page.

Data storage

Unlike other database platforms, Salesforce typically uses a fixed 2 KB per record size as part of its storage usage calculations, regardless of the actual number of fields or the size of the data within them on each record. There are some exceptions to this rule, such as Campaigns, which take up 8 KB, and stored Email Messages, which use up the size of the contained e-mail, though all Custom Object records take up 2 KB. Note that this record size applies even if the Custom Object uses large text area fields.

File storage

Salesforce has a growing number of ways to store file-based data, ranging from the historic Document tab, to the more sophisticated Content tab, to using the Files tab, not to mention Attachments, which can be applied to your Custom Object records if enabled. Each has its own pros and cons for end users and file size limits that are well defined in the Salesforce documentation.

From the perspective of application development, as with data storage, be aware of how much your application is generating on behalf of the user and give them a means to control and delete that information. In some cases, consider whether the end user would be happy to have the option to recreate the file on demand (perhaps as a PDF) rather than always having the application store it.

Reusing the existing Standard Objects

When designing your object model, a good knowledge of the existing Standard Objects and their features is key to knowing when and when not to reference them. Keep in mind the following points when considering the use of Standard Objects:

From a data storage perspective: Ignoring Standard Objects creates a potential data duplication and integration effort for your end users if they are already using similar Standard Objects as pre-existing Salesforce customers. Remember that adding additional custom fields to the Standard Objects via your package will not increase the data storage consumption for those objects.

From a license cost perspective: Conversely, referencing some Standard Objects might cause additional license costs for your users, since not all are available to the users without additional licenses from Salesforce.
Make sure that you understand the differences between Salesforce (CRM) and Salesforce Platform licenses with respect to the Standard Objects available. Currently, the Salesforce Platform license provides Accounts and Contacts; however, to use the Opportunity or Product objects, a Salesforce (CRM) license is needed by the user. Refer to the Salesforce documentation for the latest details on these.

Use your user personas to define which Standard Objects your users use and reference them via lookups, Apex code, and Visualforce accordingly. You may wish to use extension packages and/or dynamic Apex and SOQL to make these kinds of references optional. Since Developer Edition orgs have all these licenses and objects available (although in a limited quantity), make sure that you review your package dependencies before clicking on the Upload button each time to check for unintentional references.

Importing and exporting data

Salesforce provides a number of its own tools for importing and exporting data, as well as a number of third-party options based on the Salesforce APIs; these are listed on AppExchange. When importing records with other record relationships, it is not possible to predict and include the IDs of related records, such as the Season record ID when importing Race records; in this section, we will present a solution to this.

Salesforce provides Data Import Wizard, which is available under the Setup menu. This tool only supports Custom Objects and Custom Settings. Custom Metadata Type records are essentially considered metadata by the platform, and as such, you can use packages, developer tools, and Change Sets to migrate these records between orgs. There is an open source CSV data loader for Custom Metadata Types at https://github.com/haripriyamurthy/CustomMetadataLoader.

It is straightforward to import a CSV file with a list of race Seasons since this is a top-level object and has no other object dependencies. However, to import the Race information (which is a child object related to Season), the Season and Fastest Lap By record IDs are required, which will typically not be present in a Race import CSV file by default. Note that IDs are unique across the platform and cannot be shared between orgs. External ID fields help address this problem by allowing Salesforce to use the existing values of such fields as a secondary means to associate records being imported that need to reference parent or related records. All that is required is that the related record Name or, ideally, a unique external ID be included in the import data file.

This CSV file includes three columns: Year, Name, and Fastest Lap By (the driver who performed the fastest lap of that race, indicated by their Twitter handle). You may remember that a Driver record can also be identified by this since the field has been defined as an External ID field. Both the 2014 Season record and the Lewis Hamilton Driver record should already be present in your packaging org.

Now, run Data Import Wizard and complete the settings as shown in the following screenshot. Next, complete the field mappings as shown in the following screenshot. Click on Start Import and then on OK to review the results once the data import has completed. You should find that four new Race records have been created under 2014 Season, with the Fastest Lap By field correctly associated with the Lewis Hamilton Driver record.
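For illustration, a Race import CSV like the one described above might look as follows. The column headers come from the text, while the specific race names and the Twitter handle value are assumptions invented for this sketch:

  Year,Name,Fastest Lap By
  2014,Spain,@lewishamilton
  2014,Monaco,@lewishamilton
  2014,Canada,@lewishamilton
  2014,Austria,@lewishamilton

Because Fastest Lap By is defined as an External ID field on Driver, the import wizard can resolve each row to the Lewis Hamilton Driver record without the CSV containing any Salesforce record IDs.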
Note that these tools will also stress your Apex Trigger code for volumes, as they typically have the bulk mode enabled and insert records in chunks of 200 records. Thus, it is recommended that you test your triggers to at least this level of record volumes.

Options for replicating and archiving data

Enterprise customers often have legacy and/or external systems that are still being used or that they wish to phase out in the future. As such, they may have requirements to replicate aspects of the data stored in the Salesforce platform to another system. Likewise, in order to move unwanted data off the platform and manage their data storage costs, there is a need to archive data. The following lists some platform and API facilities that can help you and/or your customers build solutions to replicate or archive data. There are, of course, a number of AppExchange solutions listed that provide applications that use these APIs already:

Replication API: This API exists in both the web service SOAP and Apex forms. It allows you to develop a scheduled process to query the platform for any new, updated, or deleted records between a given time period for a specific object. The getUpdated and getDeleted API methods return only the IDs of the records, requiring you to use the conventional Salesforce APIs to query the remaining data for the replication. The frequency with which this API is called is important to avoid gaps. Refer to the Salesforce documentation for more details.

Outbound Messaging: This feature offers a more real-time alternative to the Replication API. An outbound message event can be configured using the standard workflow feature of the platform. This event, once configured against a given object, provides a Web Service Definition Language (WSDL) file that describes a web service endpoint to be called when records are created and updated. It is the responsibility of a web service developer to create the endpoint based on this definition. Note that there is no provision for deletion with this option.

Bulk API: This API provides a means to move up to 5,000 chunks of Salesforce data (up to 10 MB or 10,000 records per chunk) per rolling 24-hour period. Salesforce and third-party data loader tools, including the Salesforce Data Loader tool, offer this as an option. It can also be used to delete records without them going into the recycle bin. This API is ideal for building solutions to archive data.

Heroku Connect: This is a seamless data synchronization solution between Salesforce and Heroku Postgres. For further information, refer to https://www.heroku.com/connect.
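As a quick worked example of the Bulk API allowance quoted above (illustrative arithmetic only, not an additional documented limit): 5,000 chunks per rolling 24-hour period at up to 10,000 records per chunk allows on the order of 50 million records per day, and at up to 10 MB per chunk roughly 50 GB of data per day, whichever ceiling an archiving job reaches first.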
External data sources

One of the downsides of moving data off the platform in an archive use case, or of not being able to replicate data onto the platform, is that the end users have to move between applications and logins to view data; this causes an overhead, as the process and data are not connected. Salesforce Connect (previously known as Lightning Connect) is a chargeable add-on feature of the platform that provides the ability to surface external data within the Salesforce user interface via the so-called External Objects and External Data Sources configurations under Setup. They offer functionality similar to Custom Objects, such as List views, Layouts, and Custom Buttons. Currently, Reports and Dashboards are not supported, though it is possible to build custom report solutions via Apex, Visualforce, or Lightning Components.

External Data Sources can be connected to existing OData-based endpoints and secured through OAuth or Basic Authentication. Alternatively, Apex provides a Connector API whereby developers can implement adapters to connect to other HTTP-based APIs. Depending on the capabilities of the associated External Data Source, users accessing External Objects using the data source can read and even update records through the standard Salesforce UIs, such as the Salesforce Mobile and desktop interfaces.

Summary

This article explored the declarative aspects of developing an application on the platform that apply to how an application's data is stored and how relational data integrity is enforced through the use of lookup field deletion constraints and unique fields.

Upload the latest version of the FormulaForce package and install it into your test org. The summary page during the installation of new and upgraded components should look something like the following screenshot. Note that the permission sets are upgraded during the install.

Once you have installed the package in your testing org, visit the Custom Metadata Types page under Setup and click on Manage Records next to the object. You will see that the records are shown as managed and cannot be deleted. Click on one of the records to see that the field values themselves cannot be edited either. This is the effect of the Field Manageability checkbox when defining the fields. The Namespace Prefix shown here will differ from yours.

Try changing or adding the Track Lap Time records in your packaging org, for example, update a track time on an existing record. Upload the package again, then upgrade your test org. You will see that the records are automatically updated. Conversely, any records you created in your test org will be retained between upgrades.

In this article, we have now covered some major aspects of the platform with respect to packaging, platform alignment, and how your application data is stored, as well as the key aspects of your application's architecture.


Remote Sensing and Histogram

Packt
04 Jan 2016
10 min read
In this article by Joel Lawhead, the author of Learning Geospatial Analysis with Python - Second Edition, we will discuss remote sensing. This field grows more exciting every day as more satellites are launched and the distribution of data becomes easier. The high availability of satellite and aerial images, as well as the interesting new types of sensors that are being launched each year, is changing the role that remote sensing plays in understanding our world.

In this field, Python is quite capable. In remote sensing, we step through each pixel in an image and perform a type of query or mathematical process. An image can be thought of as a large numerical array, and in remote sensing, these arrays can be as large as tens of megabytes to several gigabytes. While Python is fast, only C-based libraries can provide the speed that is needed to loop through the arrays at a tolerable speed.

In this article, whenever possible, we'll use the Python Imaging Library (PIL) for image processing and NumPy, which provides multidimensional array mathematics. While written in C for speed, these libraries are designed for Python and provide Python's API. In this article, we'll start with basic image manipulation and build on each exercise all the way to automatic change detection. Here are the topics that we'll cover:

Swapping image bands
Creating image histograms

Swapping image bands

Our eyes can only see colors in the visible spectrum as combinations of red, green, and blue (RGB). Airborne and spaceborne sensors can collect wavelengths of energy that are outside the visible spectrum. In order to view this data, we move images representing different wavelengths of light reflectance in and out of the RGB channels in order to make color images. These images often end up as bizarre and alien color combinations that can make visual analysis difficult. An example of a typical satellite image is seen in the following Landsat 7 satellite scene near NASA's Stennis Space Center in Mississippi along the Gulf of Mexico, which is a leading space center for remote sensing and geospatial analysis in general.

Most of the vegetation appears red and water appears almost black. This image is one type of false color image, meaning that the color of the image is not based on RGB light. However, we can change the order of the bands or swap certain bands in order to create another type of false color image that looks more like the world we are used to seeing. In order to do so, you first need to download this image as a ZIP file from the following: http://git.io/vqs41

We need to install the GDAL library with Python bindings. The Geospatial Data Abstraction Library (GDAL) includes a module called gdalnumeric that loads and saves remotely sensed images to and from NumPy arrays for easy manipulation. GDAL itself is a data access library and does not provide much in the name of processing. Therefore, in this article, we will rely heavily on NumPy to actually change images.

In this example, we'll load the image into a NumPy array using gdalnumeric and then we'll immediately save it in a new .tiff file. However, upon saving, we'll use NumPy's advanced array slicing feature to change the order of the bands. Images in NumPy are multidimensional arrays in the order of band, height, and width. Therefore, an image with three bands will be an array of length three, containing an array for each band, the height, and the width of the image.
It's important to note that NumPy references the array locations as y,x (row, column) instead of the usual column and row format that we work with in spreadsheets and other software:

  from osgeo import gdalnumeric
  src = "FalseColor.tif"
  arr = gdalnumeric.LoadFile(src)
  gdalnumeric.SaveArray(arr[[1, 0, 2], :], "swap.tif", format="GTiff", prototype=src)

Also, in the SaveArray() method, the last argument is called prototype. This argument lets you specify another image for GDAL from which we can copy spatial reference information and some other image parameters. Without this argument, we'd end up with an image without georeferencing information, which cannot be used in a geographical information system (GIS). In this case, we specified our input image file name, as the images are identical except for the band order.

The result of this example produces the swap.tif image, which is a much more visually appealing image with green vegetation and blue water. There's only one problem with this image. It's a little dark and difficult to see. Let's see if we can figure out why.

Creating histograms

A histogram shows the statistical frequency of data distribution in a dataset. In the case of remote sensing, the dataset is an image, and the data distribution is the frequency of the pixels in the range of 0 to 255, which is the range of the 8-bit numbers used to store image information on computers. In an RGB image, color is represented as a three-digit tuple with (0,0,0) being black and (255,255,255) being white. We can graph the histogram of an image with the frequency of each value along the y axis and the range of possible pixel values (0 to 255) along the x axis.

We can use the Turtle graphics engine that is included with Python to create a simple GIS. We can also use it to easily graph histograms. Histograms are usually a one-off product, which makes a quick script like this example great. Also, histograms are typically displayed as a bar graph with the width of the bars representing the size of the grouped data bins. However, in an image, each bin is only one value, so we'll create a line graph. We'll use the histogram function in this example and create a red, green, and blue line for each band. The graphing portion of this example also defaults to scaling the y axis values to the maximum RGB frequency that is found in the image. Technically, the y axis represents the maximum frequency, that is, the number of pixels in the image, which would be the case if the image was of one color. We'll use the turtle module again; however, this example could easily be converted to any graphical output module.
Let's take a look at our swap.tif image:

  from osgeo import gdalnumeric
  import turtle as t

  def histogram(a, bins=list(range(0, 256))):
      fa = a.flat
      n = gdalnumeric.numpy.searchsorted(gdalnumeric.numpy.sort(fa), bins)
      n = gdalnumeric.numpy.concatenate([n, [len(fa)]])
      hist = n[1:]-n[:-1]
      return hist

  def draw_histogram(hist, scale=True):
      t.color("black")
      axes = ((-355, -200), (355, -200), (-355, -200), (-355, 250))
      t.up()
      for p in axes:
          t.goto(p)
          t.down()
      t.up()
      t.goto(0, -250)
      t.write("VALUE", font=("Arial, ", 12, "bold"))
      t.up()
      t.goto(-400, 280)
      t.write("FREQUENCY", font=("Arial, ", 12, "bold"))
      x = -355
      y = -200
      t.up()
      for i in range(1, 11):
          x = x+65
          t.goto(x, y)
          t.down()
          t.goto(x, y-10)
          t.up()
          t.goto(x, y-25)
          t.write("{}".format((i*25)), align="center")
      x = -355
      y = -200
      t.up()
      pixels = sum(hist[0])
      if scale:
          max = 0
          for h in hist:
              hmax = h.max()
              if hmax > max:
                  max = hmax
          pixels = max
      label = pixels/10
      for i in range(1, 11):
          y = y+45
          t.goto(x, y)
          t.down()
          t.goto(x-10, y)
          t.up()
          t.goto(x-15, y-6)
          t.write("{}".format((i*label)), align="right")
      x_ratio = 709.0 / 256
      y_ratio = 450.0 / pixels
      colors = ["red", "green", "blue"]
      for j in range(len(hist)):
          h = hist[j]
          x = -354
          y = -199
          t.up()
          t.goto(x, y)
          t.down()
          t.color(colors[j])
          for i in range(256):
              x = i * x_ratio
              y = h[i] * y_ratio
              x = x - (709/2)
              y = y + -199
              t.goto((x, y))

  im = "swap.tif"
  histograms = []
  arr = gdalnumeric.LoadFile(im)
  for b in arr:
      histograms.append(histogram(b))
  draw_histogram(histograms)
  t.pen(shown=False)
  t.done()

Here's what the histogram for swap.tif looks like after running the example. As you can see, all three bands are grouped closely towards the left-hand side of the graph and all have values that are less than 125. As these values approach zero, the image becomes darker, which is not surprising.

Just for fun, let's run the script again, and when we call the draw_histogram() function, we'll add the scale=False option to get an idea of the size of the image and provide an absolute scale. Therefore, we take the following line:

  draw_histogram(histograms)

Change it to the following:

  draw_histogram(histograms, scale=False)

This change will produce the following histogram graph. As you can see, it's harder to see the details of the value distribution. However, this absolute scale approach is useful if you are comparing multiple histograms from different products that are produced from the same source image.

Now that we understand the basics of looking at an image statistically using histograms, how do we make our image brighter?

Performing a histogram stretch

A histogram stretch operation does exactly what the name suggests. It distributes the pixel values across the whole scale. By doing so, we have more values at the higher-intensity level and the image becomes brighter. Therefore, in this example, we'll use our histogram function; however, we'll add another function called stretch() that takes an image array, creates the histogram, and then spreads out the range of values for each band.
We'll run these functions on swap.tif and save the result in an image called stretched.tif:

  import gdalnumeric
  import operator
  from functools import reduce

  def histogram(a, bins=list(range(0, 256))):
      fa = a.flat
      n = gdalnumeric.numpy.searchsorted(gdalnumeric.numpy.sort(fa), bins)
      n = gdalnumeric.numpy.concatenate([n, [len(fa)]])
      hist = n[1:]-n[:-1]
      return hist

  def stretch(a):
      hist = histogram(a)
      lut = []
      for b in range(0, len(hist), 256):
          step = reduce(operator.add, hist[b:b+256]) / 255
          n = 0
          for i in range(256):
              lut.append(n / step)
              n = n + hist[i+b]
      gdalnumeric.numpy.take(lut, a, out=a)
      return a

  src = "swap.tif"
  arr = gdalnumeric.LoadFile(src)
  stretched = stretch(arr)
  gdalnumeric.SaveArray(arr, "stretched.tif", format="GTiff", prototype=src)

The stretch algorithm will produce the following image. Look how much brighter and more visually appealing it is.

We can run our turtle graphics histogram script on stretched.tif by changing the filename in the im variable to stretched.tif:

  im = "stretched.tif"

This run will give us the following histogram. As you can see, all three bands are distributed evenly now. Their relative distribution to each other is the same; however, in the image, they are now spread across the spectrum.

Summary

In this article, we covered the foundations of remote sensing, including band swapping and histograms. The authors of GDAL have a set of Python examples, covering some advanced topics that may be of interest, available at https://svn.osgeo.org/gdal/trunk/gdal/swig/python/samples/.


Important features of Mockito

Packt
04 Sep 2013
4 min read
Reducing boilerplate code with annotations

Mockito allows the use of annotations to reduce the lines of test code in order to increase the readability of tests. Let's take into consideration some of the tests that we have seen in previous examples.

Removing boilerplate code by using the MockitoJUnitRunner

The shouldCalculateTotalWaitingTimeAndAssertTheArgumentsOnMockUsingArgumentCaptor test from the Verifying behavior (including argument capturing, verifying call order and working with asynchronous code) section can be rewritten as follows, using Mockito annotations and the @RunWith(MockitoJUnitRunner.class) JUnit runner:

  @RunWith(MockitoJUnitRunner.class)
  public class _07ReduceBoilerplateCodeWithAnnotationsWithRunner {

      @Mock
      KitchenService kitchenServiceMock;

      @Captor
      ArgumentCaptor<Meal> mealArgumentCaptor;

      @InjectMocks
      WaiterImpl objectUnderTest;

      @Test
      public void shouldCalculateTotalWaitingTimeAndAssertTheArgumentsOnMockUsingArgumentCaptor() throws Exception {
          //given
          final int mealPreparationTime = 10;
          when(kitchenServiceMock.calculatePreparationTime(any(Meal.class))).thenReturn(mealPreparationTime);

          //when
          int waitingTime = objectUnderTest.calculateTotalWaitingTime(createSampleMealsContainingVegetarianFirstCourse());

          //then
          assertThat(waitingTime, is(mealPreparationTime));
          verify(kitchenServiceMock).calculatePreparationTime(mealArgumentCaptor.capture());
          assertThat(mealArgumentCaptor.getValue(), is(VegetarianFirstCourse.class));
          assertThat(mealArgumentCaptor.getAllValues().size(), is(1));
      }

      private List<Meal> createSampleMealsContainingVegetarianFirstCourse() {
          List<Meal> meals = new ArrayList<Meal>();
          meals.add(new VegetarianFirstCourse());
          return meals;
      }
  }

What happened here is that:

All of the boilerplate code can be removed due to the fact that you are using the @RunWith(MockitoJUnitRunner.class) JUnit runner.
Mockito.mock(…) has been replaced with the @Mock annotation. You can provide additional parameters to the annotation, such as name, answer, or extraInterfaces. The field name related to the annotated mock is referred to in any verification, so it's easier to identify the mock.
ArgumentCaptor.forClass(…) is replaced with the @Captor annotation. When using the @Captor annotation, you avoid warnings related to complex generic types.
Thanks to the @InjectMocks annotation, your object under test is initialized, the proper constructor/setters are found, and Mockito injects the appropriate mocks for you. There is no explicit creation of the object under test, and you don't need to provide the mocks as arguments of the constructor. Mockito's @InjectMocks uses either constructor injection, property injection, or setter injection.
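As a small, hedged illustration of those annotation parameters combined on a single mock: the KitchenService type stands in for the collaborator from the example above, and the extra interface chosen here is arbitrary, so treat this as a sketch rather than a listing from the book:

  import org.mockito.Answers;
  import org.mockito.Mock;

  public class AnnotationParametersExample {

      // Stand-ins so the sketch compiles on its own.
      interface Auditable { }
      interface KitchenService { }

      // name controls how the mock appears in verification failure messages,
      // answer sets the default answer for unstubbed calls, and
      // extraInterfaces makes the mock also implement the listed interfaces.
      // (Remember that @Mock only takes effect with the MockitoJUnitRunner or
      // MockitoAnnotations.initMocks(this).)
      @Mock(name = "kitchenServiceMock",
            answer = Answers.RETURNS_SMART_NULLS,
            extraInterfaces = Auditable.class)
      KitchenService kitchenServiceMock;
  }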
Taking advantage of advanced mocks configuration

Mockito gives you the possibility of providing different answers for your mocks. Let's focus more on that.

Getting more information on NullPointerException

Remember the Waiter's askTheCleaningServiceToCleanTheRestaurant() method:

  @Override
  public boolean askTheCleaningServiceToCleanTheRestaurant(TypeOfCleaningService typeOfCleaningService) {
      CleaningService cleaningService = cleaningServiceFactory.getMeACleaningService(typeOfCleaningService);
      try {
          cleaningService.cleanTheTables();
          cleaningService.sendInformationAfterCleaning();
          return SUCCESSFULLY_CLEANED_THE_RESTAURANT;
      } catch (CommunicationException communicationException) {
          System.err.println("An exception took place while trying to send info about cleaning the restaurant");
          return FAILED_TO_CLEAN_THE_RESTAURANT;
      }
  }

Let's assume that we want to test this function. We inject the CleaningServiceFactory as a mock, but we forgot to stub the getMeACleaningService(…) method. Normally, we would get a NullPointerException since, if the method is not stubbed, it will return null. But what will happen if, as an answer, we provide the RETURNS_SMART_NULLS answer? Let's take a look at the body of the following test:

  @Mock(answer = Answers.RETURNS_SMART_NULLS)
  CleaningServiceFactory cleaningServiceFactory;

  @InjectMocks
  WaiterImpl objectUnderTest;

  @Test
  public void shouldThrowSmartNullPointerExceptionWhenUsingUnstubbedMethod() {
      //given
      // Oops, forgot to stub the CleaningServiceFactory.getMeACleaningService(TypeOfCleaningService) method
      try {
          //when
          objectUnderTest.askTheCleaningServiceToCleanTheRestaurant(TypeOfCleaningService.VERY_FAST);
          fail();
      } catch (SmartNullPointerException smartNullPointerException) {
          System.err.println("A SmartNullPointerException will be thrown here with very precise information about the error [" + smartNullPointerException + "]");
      }
  }

What happened in the test is that:

We create a mock of the CleaningServiceFactory with the RETURNS_SMART_NULLS answer.
The mock is injected into the WaiterImpl.
We do not stub the getMeACleaningService(…) method of the CleaningServiceFactory.
The SmartNullPointerException will be thrown at the line containing cleaningService.cleanTheTables().
It will contain very detailed information about the reason for the exception and where it happened.

In order to have RETURNS_SMART_NULLS as the default answer (so you wouldn't have to explicitly define the answer for your mock), you would have to create a class named MockitoConfiguration in the package org.mockito.configuration that either extends DefaultMockitoConfiguration or implements the IMockitoConfiguration interface:

  package org.mockito.configuration;

  import org.mockito.internal.stubbing.defaultanswers.ReturnsSmartNulls;
  import org.mockito.stubbing.Answer;

  public class MockitoConfiguration extends DefaultMockitoConfiguration {
      public Answer<Object> getDefaultAnswer() {
          return new ReturnsSmartNulls();
      }
  }

Summary

In this article, we learned in detail about reducing boilerplate code with annotations and taking advantage of advanced mocks configuration, along with their code implementation.


Pipeline and Producer-consumer Design Patterns

Packt
20 Dec 2014
48 min read
In this article, created by Rodney Ringler, the author of C# Multithreaded and Parallel Programming, we will explore two popular design patterns for solving concurrent problems: Pipeline and producer-consumer, which are used in developing parallel applications with the TPL.

A Pipeline design is one where an application is designed with multiple tasks or stages of functionality, with queues of work items between them. So, for each stage, the application will read from a queue of work to be performed, execute the work on that item, and then queue the results for the next stage. By designing the application this way, all of the stages can execute in parallel. Each stage just reads from its work queue, performs the work, and puts the results of the work into the queue for the next stage. Each stage is a task and can run independently of the other stages or tasks. They continue executing until their queue is empty and marked completed. They also block and wait for more work items if the queue is empty but not completed.

The producer-consumer design pattern is a similar but distinct concept. In this design, we have a set of functionality that produces data that is then consumed by another set of functionality. Each set of functionality is a TPL task. So, we have a producer task and a consumer task, with a buffer between them. Each of these tasks can run independently of the other. We can also have multiple producer tasks and multiple consumer tasks. The producers run independently and queue results to the buffer. The consumers run independently, dequeue items from the buffer, and perform work on each item. The producer can block if the buffer is full and wait for room to become available before producing more results. Also, the consumer can block if the buffer is empty, waiting for more results to become available to consume.

In this article, you will learn the following:

Designing an application with a Pipeline design
Designing an application with a producer-consumer design
Learning how to use BlockingCollection
Learning how to use BufferBlock
Understanding the classes of the System.Threading.Tasks.Dataflow library

Pipeline design pattern

The Pipeline design is very useful in parallel design when you can divide an application into a series of tasks to be performed in such a way that each task can run concurrently with other tasks. It is important that the output of each task is in the same order as the input. If the order does not matter, then a parallel loop can be performed. When the order matters and we don't want to wait until all items have completed task A before the items start executing task B, then a Pipeline implementation is perfect.

Some applications that lend themselves to pipelining are video streaming, compression, and encryption. In each of these examples, we need to perform a set of tasks on the data and preserve the data's order, but we do not want to wait for each item of data to finish one task before any of the data can start the next task.

The key class that .NET has provided for implementing this design pattern is BlockingCollection in the System.Collections.Concurrent namespace. The BlockingCollection class was introduced with .NET 4.5. It is a thread-safe collection specifically designed for producer-consumer and Pipeline design patterns. It supports concurrently adding and removing items by multiple threads to and from the collection. It also has methods to add and remove items that block when the collection is full or empty. You can specify a maximum collection size to ensure that a producing task that outpaces a consuming task does not make the queue too large. It supports cancellation tokens. Finally, it supports enumerations so that you can use the foreach loop when processing items of the collection.

A producer of items to the collection can call the CompleteAdding method when the last item of data has been added to the collection. Until this method is called, if a consumer is consuming items from the collection with a foreach loop and the collection is empty, it will block until an item is put into the collection instead of ending the loop.
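Before walking through the full encryption example that follows, here is a minimal sketch of that blocking behavior on its own. The capacity, item counts, and stage roles here are invented for illustration and are not part of the encryption program below:

  using System;
  using System.Collections.Concurrent;
  using System.Threading.Tasks;

  class MinimalPipelineSketch
  {
      static void Main()
      {
          // Bounded capacity: the producer blocks on Add once 5 items are waiting.
          var buffer = new BlockingCollection<int>(boundedCapacity: 5);

          Task producer = Task.Run(() =>
          {
              for (int i = 0; i < 20; i++)
              {
                  buffer.Add(i);            // blocks if the buffer is full
              }
              buffer.CompleteAdding();      // signals that no more items will arrive
          });

          Task consumer = Task.Run(() =>
          {
              // Blocks while the buffer is empty; the loop ends once CompleteAdding
              // has been called and the remaining items have been drained.
              foreach (int item in buffer.GetConsumingEnumerable())
              {
                  Console.WriteLine("Consumed {0}", item);
              }
          });

          Task.WaitAll(producer, consumer);
      }
  }

The encryption program below uses exactly this Add, CompleteAdding, and GetConsumingEnumerable pattern, just with character data and three chained stages instead of two.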
Next, we will see a simple example of a Pipeline design implementation using an encryption program. This program will implement three stages in our pipeline. The first stage will read a text file character by character and place each character into a buffer (a BlockingCollection). The next stage will read each character out of the buffer and encrypt it by adding 1 to its ASCII number. It will then place the new character into our second buffer and write it to an encryption file. Our final stage will read each character out of the second buffer, decrypt it to its original character, and write it out to a new file and to the screen.

As you will see, stages 2 and 3 will start processing characters before stage 1 has finished reading all the characters from the input file. And all of this will be done while maintaining the order of the characters so that the final output file is identical to the input file. Let's get started.

How to do it

First, let's open up Visual Studio and create a new Windows Presentation Foundation (WPF) application named PipeLineApplication, and perform the following steps:

1. Create a new class called Stages.cs. Next, make sure it has the following using statements:

  using System;
  using System.Collections.Concurrent;
  using System.Collections.Generic;
  using System.IO;
  using System.Linq;
  using System.Text;
  using System.Threading.Tasks;
  using System.Threading;

2. In the MainWindow.xaml.cs file, make sure the following using statements are present:

  using System;
  using System.Collections.Concurrent;
  using System.Collections.Generic;
  using System.IO;
  using System.Linq;
  using System.Text;
  using System.Threading.Tasks;
  using System.Threading;

3. Next, we will add a method for each of the three stages in our pipeline. First, we will create a method called FirstStage. It will take two parameters: one will be a BlockingCollection object that will be the output buffer of this stage, and the second will be a string pointing to the input data file. This will be a text file containing a couple of paragraphs of text to be encrypted. We will place this text file in the projects folder on C:. The FirstStage method will have the following code:

  public void FirstStage(BlockingCollection<char> output, String PipelineInputFile)
  {
      String DisplayData = "";
      try
      {
          foreach (char C in GetData(PipelineInputFile))
          {
              //Displayed characters read in from the file.
              DisplayData = DisplayData + C.ToString();

              // Add each character to the buffer for the next stage.
              output.Add(C);
          }
      }
      finally
      {
          output.CompleteAdding();
      }
  }

4. Next, we will add a method for the second stage called StageWorker.
This method will not return any values and will take three parameters. One will be a BlockingCollection value that will be its input buffer, the second one will be the output buffer of the stage, and the final one will be a file path to store the encrypted text in a data file. The code for this method will look like this:

  public void StageWorker(BlockingCollection<char> input, BlockingCollection<char> output, String PipelineEncryptFile)
  {
      String DisplayData = "";

      try
      {
          foreach (char C in input.GetConsumingEnumerable())
          {
              //Encrypt each character.
              char encrypted = Encrypt(C);

              DisplayData = DisplayData + encrypted.ToString();

              //Add characters to the buffer for the next stage.
              output.Add(encrypted);
          }

          //write the encrypted string to the output file.
          using (StreamWriter outfile = new StreamWriter(PipelineEncryptFile))
          {
              outfile.Write(DisplayData);
          }
      }
      finally
      {
          output.CompleteAdding();
      }
  }

5. Now, we will add a method for the third and final stage of the Pipeline design. This method will be named FinalStage. It will not return any values and will take two parameters. One will be a BlockingCollection object that is the input buffer and the other will be a string pointing to an output data file. It will have the following code in it:

  public void FinalStage(BlockingCollection<char> input, String PipelineResultsFile)
  {
      String OutputString = "";
      String DisplayData = "";

      //Read the encrypted characters from the buffer, decrypt them, and display them.
      foreach (char C in input.GetConsumingEnumerable())
      {
          //Decrypt the data.
          char decrypted = Decrypt(C);

          //Display the decrypted data.
          DisplayData = DisplayData + decrypted.ToString();

          //Add to the output string.
          OutputString += decrypted.ToString();
      }

      //write the decrypted string to the output file.
      using (StreamWriter outfile = new StreamWriter(PipelineResultsFile))
      {
          outfile.Write(OutputString);
      }
  }

6. Now that we have methods for the three stages of our pipeline, let's add a few utility methods. The first of these methods will be one that reads in the input data file and places each character in the data file in a List object. This method will take a string parameter that has a filename and will return a List object of characters. It will have the following code:

  public List<char> GetData(String PipelineInputFile)
  {
      List<char> Data = new List<char>();

      //Get the Source data.
      using (StreamReader inputfile = new StreamReader(PipelineInputFile))
      {
          while (inputfile.Peek() >= 0)
          {
              Data.Add((char)inputfile.Read());
          }
      }

      return Data;
  }

7. Now we will need a method to encrypt the characters. This will be a simple encryption method. The encryption method is not really important to this exercise. This exercise is designed to demonstrate the Pipeline design, not implement the world's toughest encryption.
This encryption will simply take each character and add one to its ASCII numerical value. The method will take a character type as an input parameter and return a character. The code for it will be as follows:

  public char Encrypt(char C)
  {
      //Take the character, convert to an int, add 1, then convert back to a character.
      int i = (int)C;
      i = i + 1;
      C = Convert.ToChar(i);

      return C;
  }

8. Now we will add one final method to the Stages class to decrypt a character value. It will simply do the reverse of the Encrypt method. It will take the ASCII numerical value and subtract 1. The code for this method will look like this:

  public char Decrypt(char C)
  {
      int i = (int)C;
      i = i - 1;
      C = Convert.ToChar(i);

      return C;
  }

9. Now that we are done with the Stages class, let's switch our focus back to the MainWindow.xaml.cs file. First, you will need to add three using statements. They are for the StreamReader, StreamWriter, threading, and BlockingCollection classes:

  using System.Collections.Concurrent;
  using System.IO;
  using System.Threading;

10. At the top of the MainWindow class, we need four variables available to the whole class. We need three strings that point to our three data files: the input data, encrypted data, and output data. Then we will need a Stages object. These declarations will look like this:

  private static String PipelineResultsFile = @"c:\projects\OutputData.txt";
  private static String PipelineEncryptFile = @"c:\projects\EncryptData.txt";
  private static String PipelineInputFile = @"c:\projects\InputData.txt";
  private Stages Stage;

11. Then, in the MainWindow constructor method, right after the InitializeComponent call, add a line to instantiate our Stages object:

  //Create the Stage object and register the event listeners to update the UI as the stages work.
  Stage = new Stages();

12. Next, add a button to the MainWindow.xaml file that will initiate the pipeline and encryption. Name this button control butEncrypt, and set its Content property to Encrypt File. Next, add a click event handler for this button in the MainWindow.xaml.cs file. Its event handler method will be butEncrypt_Click and will contain the main code for this application. It will instantiate two BlockingCollection objects for two queues: one queue between stages 1 and 2, and one queue between stages 2 and 3. This method will then create a task for each stage that executes the corresponding method from the Stages class. It will then start these three tasks and wait for them to complete. Finally, it will write the output of each stage to the input, encrypted, and results data files and text blocks for viewing. The code for it will look like the following code:

  private void butEncrypt_Click(object sender, RoutedEventArgs e)
  {
      //PipeLine Design Pattern

      //Create queues for input and output to stages.
      int size = 20;
      BlockingCollection<char> Buffer1 = new BlockingCollection<char>(size);
      BlockingCollection<char> Buffer2 = new BlockingCollection<char>(size);

      TaskFactory tasks = new TaskFactory(TaskCreationOptions.LongRunning, TaskContinuationOptions.None);

      Task Stage1 = tasks.StartNew(() => Stage.FirstStage(Buffer1, PipelineInputFile));
      Task Stage2 = tasks.StartNew(() => Stage.StageWorker(Buffer1, Buffer2, PipelineEncryptFile));
      Task Stage3 = tasks.StartNew(() => Stage.FinalStage(Buffer2, PipelineResultsFile));

      Task.WaitAll(Stage1, Stage2, Stage3);

      //Display the 3 files.
      using (StreamReader inputfile = new StreamReader(PipelineInputFile))
      {
          while (inputfile.Peek() >= 0)
          {
              tbStage1.Text = tbStage1.Text + (char)inputfile.Read();
          }
      }
      using (StreamReader inputfile = new StreamReader(PipelineEncryptFile))
      {
          while (inputfile.Peek() >= 0)
          {
              tbStage2.Text = tbStage2.Text + (char)inputfile.Read();
          }
      }
      using (StreamReader inputfile = new StreamReader(PipelineResultsFile))
      {
          while (inputfile.Peek() >= 0)
          {
              tbStage3.Text = tbStage3.Text + (char)inputfile.Read();
          }
      }
  }

13. One last thing: let's add three textblocks to display the outputs. We will call these tbStage1, tbStage2, and tbStage3. We will also add three label controls with the text Input File, Encrypted File, and Output File. These will be placed by the corresponding textblocks. Now, the MainWindow.xaml file should look like the following screenshot.

14. Now we will need an input data file to encrypt. We will call this file InputData.txt and put it in the C:\projects folder on our computer. For our example, we have added the following text to it:

We are all finished and ready to try it out. Compile and run the application and you should have a window that looks like the following screenshot. Now, click on the Encrypt File button and you should see the following output.

As you can see, the input and output files look the same and the encrypted file looks different. Remember that Input File is the text we put in the input data text file; this is the input from the end of stage 1 after we have read the file into a character list. Encrypted File is the output from stage 2 after we have encrypted each character. Output File is the output of stage 3 after we have decrypted the characters again. It should match Input File. Now, let's take a look at how this works.

How it works

Let's look at the butEncrypt click event handler method in the MainWindow.xaml.cs file, as this is where a lot of the action takes place. Let's examine the following lines of code:

  //Create queues for input and output to stages.
  int size = 20;
  BlockingCollection<char> Buffer1 = new BlockingCollection<char>(size);
  BlockingCollection<char> Buffer2 = new BlockingCollection<char>(size);

  TaskFactory tasks = new TaskFactory(TaskCreationOptions.LongRunning, TaskContinuationOptions.None);

  Task Stage1 = tasks.StartNew(() => Stage.FirstStage(Buffer1, PipelineInputFile));
  Task Stage2 = tasks.StartNew(() => Stage.StageWorker(Buffer1, Buffer2, PipelineEncryptFile));
  Task Stage3 = tasks.StartNew(() => Stage.FinalStage(Buffer2, PipelineResultsFile));

First, we create two queues that are implemented using BlockingCollection objects. Each of these is set with a size of 20 items. These two queues take a character datatype. Then we create a TaskFactory object and use it to start three tasks. Each task uses a lambda expression that executes one of the stage methods from the Stages class: FirstStage, StageWorker, and FinalStage.

So, now we have three separate tasks running besides the main UI thread. Stage1 will read the input data file character by character and place each character in the queue Buffer1. Remember that this queue can only hold 20 items before it will block the FirstStage method, waiting on room in the queue. This is how we know that Stage2 starts running before Stage1 completes. Otherwise, Stage1 would only queue the first 20 characters and then block. Once Stage1 has read all of the characters from the input file and placed them into Buffer1, it then makes the following call:

  finally
  {
      output.CompleteAdding();
  }

This lets the BlockingCollection instance, Buffer1, know that there are no more items to be put in the queue. So, when Stage2 has emptied the queue after Stage1 has called this method, it will not block but will instead continue until completion. Prior to the CompleteAdding method call, Stage2 will block if Buffer1 is empty, waiting until more items are placed in the queue. This is why the BlockingCollection class was developed for Pipeline and producer-consumer applications. It provides the perfect mechanism for this functionality.

When we created the TaskFactory, we used the following parameter:

  TaskCreationOptions.LongRunning

This tells the threadpool that these tasks may run for a long time and could occasionally block waiting on their queues. In this way, the threadpool can decide how best to manage the threads allocated for these tasks.

Now, let's look at the code in Stage2, the StageWorker method. We need a way to remove items in an enumerable way so that we can iterate over the queue's items with a foreach loop, because we do not know how many items to expect. Also, since BlockingCollection objects support multiple consumers, we need a way to remove items that no other consumer might remove. We use this method of the BlockingCollection class:

  foreach (char C in input.GetConsumingEnumerable())

This allows multiple consumers to remove items from a BlockingCollection instance while maintaining the order of the items.

To further improve the performance of this application (assuming we have enough available processing cores), we could create a fourth task that also runs the StageWorker method. Then we would have two tasks running stage 2. This might be helpful if there are enough processing cores and stage 1 runs faster than stage 2. If this happens, stage 1 will continually fill the queue and block until space becomes available.
With multiple stage 2 tasks running, we will be able to keep up with stage 1. Then, finally, we have this line of code:

Task.WaitAll(Stage1, Stage2, Stage3);

This tells our button handler to wait until all of the tasks are complete. Once we have called the CompleteAdding method on each BlockingCollection instance and the buffers are then emptied, all of our stages will complete, the Task.WaitAll call will be satisfied, and this method on the UI thread can complete its processing, which in this application is to update the UI and data files:

            //Display the 3 files.
            using (StreamReader inputfile = new StreamReader(PipelineInputFile))
            {
                while (inputfile.Peek() >= 0)
                {
                    tbStage1.Text = tbStage1.Text + (char)inputfile.Read();
                }
            }
            using (StreamReader inputfile = new StreamReader(PipelineEncryptFile))
            {
                while (inputfile.Peek() >= 0)
                {
                    tbStage2.Text = tbStage2.Text + (char)inputfile.Read();
                }
            }
            using (StreamReader inputfile = new StreamReader(PipelineResultsFile))
            {
                while (inputfile.Peek() >= 0)
                {
                    tbStage3.Text = tbStage3.Text + (char)inputfile.Read();
                }
            }

Next, experiment with longer running, more complex stages and multiple consumer stages. Also, try stepping through the application with the Visual Studio debugger. Make sure you understand the interaction between the stages and the buffers.

Explaining message blocks

Let's talk for a minute about message blocks and the TPL. There is a new library that Microsoft has developed as part of the TPL, but it does not ship directly with .NET 4.5. This library is called the TPL Dataflow library. It is located in the System.Threading.Tasks.Dataflow namespace. It comes with various dataflow components that assist in asynchronous concurrent applications where messages need to be passed between multiple tasks, or where data needs to be passed along as it becomes available, as in the case of a web camera streaming video.

The Dataflow library's message blocks are very helpful for design patterns such as Pipeline and producer-consumer, where you have multiple producers producing data that can be consumed by multiple consumers. The two that we will take a look at are BufferBlock and ActionBlock.

The TPL Dataflow library contains classes to assist in message passing and in parallelizing I/O-heavy applications that have a lot of throughput. It provides explicit control over how data is buffered and passed. Consider an application that asynchronously loads large binary files from storage and manipulates that data. Traditional programming requires that you use callbacks and synchronization classes, such as locks, to coordinate tasks and control access to shared data. By using the TPL Dataflow objects, you can create objects that process image files as they are read in from a disk location. You can set how data is handled when it becomes available. Because the CLR runtime engine manages dependencies between data, you do not have to worry about synchronizing access to shared data. Also, since the CLR engine schedules the work depending on the asynchronous arrival of data, the TPL Dataflow objects can improve performance by managing the threads the tasks run on. In this section, we will cover two of these classes, BufferBlock and ActionBlock.
The TPL Dataflow library (System.Threading.Tasks.Dataflow) does not ship with .NET 4.5. To install System.Threading.Tasks.Dataflow, open your project in Visual Studio, select Manage NuGet Packages from under the Project menu and then search online for Microsoft.Tpl.Dataflow.

BufferBlock

The BufferBlock object in the Dataflow library provides a buffer to store data. The syntax is BufferBlock<T>. The T indicates that the datatype is generic and can be of any type. All public static members of this type are guaranteed to be thread-safe. BufferBlock is an asynchronous message structure that stores messages in a first-in-first-out queue. Messages can be "posted" to the queue by multiple producers and "received" from the queue by multiple consumers.

The TPL Dataflow library provides interfaces for three types of objects—source blocks, target blocks, and propagator blocks. BufferBlock is a general-purpose message block that can act as both a source and a target message buffer, which makes it perfect for a producer-consumer application design. To act as both a source and a target, it implements two interfaces defined by the TPL Dataflow library—ISourceBlock<TOutput> and ITargetBlock<TInput>. So, in the application that we will develop in the Producer-consumer design pattern section of this article, you will see that the producer method receives the BufferBlock through the ITargetBlock interface and the consumer receives it through the ISourceBlock interface. It will be the same BufferBlock object that they both act on, but by defining their local parameters with a different interface, there will be different methods available to use. The producer method will use the Post and Complete methods, and the consumer method will use the OutputAvailableAsync and Receive methods.

The BufferBlock object only has two properties, namely Count, which is a count of the number of data messages in the queue, and Completion, which gets a task that represents the asynchronous operation and completion of the message block.

The following is a set of methods for this class:

Referenced from http://msdn.microsoft.com/en-us/library/hh160414(v=vs.110).aspx

Here is a list of the extension methods provided by the interfaces that it implements:

Referenced from http://msdn.microsoft.com/en-us/library/hh160414(v=vs.110).aspx

Finally, here are the interface references for this class:

Referenced from http://msdn.microsoft.com/en-us/library/hh160414(v=vs.110).aspx

So, as you can see, these interfaces make using the BufferBlock object as a general-purpose queue between stages of a pipeline very easy. This technique is also useful between producers and consumers in a producer-consumer design pattern.

ActionBlock

Another very useful object in the Dataflow library is ActionBlock. Its syntax is ActionBlock<TInput>, where TInput is the type of the data items the block receives; the block runs an Action<TInput> delegate for each item. ActionBlock is a target block that executes its delegate when a message of data is received. The following is a very simple example of using an ActionBlock:

            ActionBlock<int> action = new ActionBlock<int>(x => Console.WriteLine(x));

            action.Post(10);

In this sample piece of code, the ActionBlock object is created with an integer parameter and executes a simple lambda expression that does a Console.WriteLine when a message of data is posted to the buffer. So, when the action.Post(10) command is executed, the integer, 10, is posted to the ActionBlock buffer and then the ActionBlock delegate, implemented as a lambda expression in this case, is executed.
In this example, since this is a target block, we would then need to call the Complete method to ensure the message block is completed.

Another handy method of the BufferBlock is the LinkTo method. This method allows you to link an ISourceBlock to an ITargetBlock. So, you can have a BufferBlock that is used as an ISourceBlock and link it to an ActionBlock, since it is an ITargetBlock. In this way, an Action delegate is executed automatically for each piece of data that arrives in the BufferBlock; once the linked ActionBlock accepts a message, that message flows out of the buffer to it, so the link is a convenient way to run some task on each item as it passes through the buffer.

ActionBlock only has two properties, namely InputCount, which is a count of the number of data messages in the queue, and Completion, which gets a task that represents the asynchronous operation and completion of the message block. It has the following methods:

Referenced from http://msdn.microsoft.com/en-us/library/hh194684(v=vs.110).aspx

The following extension methods are implemented from its interfaces:

Referenced from http://msdn.microsoft.com/en-us/library/hh194684(v=vs.110).aspx

Also, it implements the following interfaces:

Referenced from http://msdn.microsoft.com/en-us/library/hh194684(v=vs.110).aspx

Now that we have examined a little of the Dataflow library that Microsoft has developed, let's use it in a producer-consumer application.

Producer-consumer design pattern

Now that we have covered the TPL's Dataflow library and the set of objects it provides to assist in asynchronous message passing between concurrent tasks, let's take a look at the producer-consumer design pattern. In a typical producer-consumer design, we have one or more producers putting data into a queue or message data block. Then we have one or more consumers taking data from the queue and processing it. This allows for asynchronous processing of data.

Using the Dataflow library objects, we can create a consumer task that monitors a BufferBlock and pulls items of data from it when they arrive. If no items are available, the consumer method will block until items are available or the BufferBlock has been marked complete. Because of this, we can start our consumer at any time, even before the producer starts to put items into the queue. Then we create one or more tasks that produce items and place them into the BufferBlock. Once the producers are finished posting all of their items of data to the BufferBlock, they can mark the block as complete. Until then, the BufferBlock object is still available to add items into. This is perfect for long-running tasks and applications where we do not know when the data will arrive.

Because the producer task receives the BufferBlock through an input parameter typed as an ITargetBlock object and the consumer task receives it through an input parameter typed as an ISourceBlock, they can both use the same BufferBlock object but have different methods available to them. One has methods to post items to the block and mark it complete. The other has methods to receive items and wait for more items until the block is marked complete. In this way, the Dataflow library implements the perfect object to act as a queue between our producers and consumers.

Now, let's take a look at the application we developed previously as a Pipeline design and modify it using the Dataflow library. We will also remove a stage so that it just has two stages, one producer and one consumer.
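Before we build that application, here is a minimal sketch of the LinkTo idea described above—a BufferBlock acting as the queue, linked to an ActionBlock that handles each value as it arrives. This is only an illustrative fragment, assuming the same using System.Threading.Tasks.Dataflow; directive and Microsoft.Tpl.Dataflow package mentioned earlier; the block names and the values posted are made up for the example:

            BufferBlock<int> queue = new BufferBlock<int>();
            ActionBlock<int> printer = new ActionBlock<int>(n => Console.WriteLine("Received {0}", n));

            // Link the source block to the target block. With PropagateCompletion set,
            // completing the BufferBlock also completes the linked ActionBlock.
            queue.LinkTo(printer, new DataflowLinkOptions { PropagateCompletion = true });

            for (int i = 0; i < 5; i++)
            {
                queue.Post(i);
            }

            queue.Complete();

            // Wait until the ActionBlock has finished processing everything in the buffer.
            printer.Completion.Wait();

Linking the blocks this way keeps the buffering and the processing concerns separate, which is the same idea we will now apply by hand, with a producer method and a consumer method sharing one BufferBlock.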
How to do it

The first thing we need to do is open Visual Studio and create a new console application called ProducerConsumerConsoleApp. We will use a console application this time just for ease. Our main purpose here is to demonstrate how to implement the producer-consumer design pattern using the TPL Dataflow library. Once you have opened Visual Studio and created the project, we need to perform the following steps:

First, we need to install and add a reference to the TPL Dataflow library. The TPL Dataflow library (System.Threading.Tasks.Dataflow) does not ship with .NET 4.5. Select Manage NuGet Packages from under the Project menu and then search online for Microsoft.Tpl.Dataflow.

Now, we will need to add two using statements to our program. One for StreamReader and StreamWriter and one for the BufferBlock object:

using System.Threading.Tasks.Dataflow;
using System.IO;

Now, let's add two static strings that will point to our input data file and the encrypted data file that we output:

        private static String PipelineEncryptFile = @"c:\projects\EncryptData.txt";
        private static String PipelineInputFile = @"c:\projects\InputData.txt";

Next, let's add a static method that will act as our producer. This method will have the following code:

        // Our Producer method.
        static void Producer(ITargetBlock<char> Target)
        {
            String DisplayData = "";

            try
            {
                foreach (char C in GetData(PipelineInputFile))
                {
                    //Displayed characters read in from the file.
                    DisplayData = DisplayData + C.ToString();

                    // Add each character to the buffer for the next stage.
                    Target.Post(C);
                }
            }
            finally
            {
                Target.Complete();
            }
        }

Then we will add a static method to perform our consumer functionality. It will have the following code:

        // This is our consumer method. It runs asynchronously.
        static async Task<int> Consumer(ISourceBlock<char> Source)
        {
            String DisplayData = "";

            // Read from the source buffer until the source buffer has no
            // available output data.
            while (await Source.OutputAvailableAsync())
            {
                char C = Source.Receive();

                //Encrypt each character.
                char encrypted = Encrypt(C);

                DisplayData = DisplayData + encrypted.ToString();
            }

            //Write the encrypted string to the output file.
            using (StreamWriter outfile =
                        new StreamWriter(PipelineEncryptFile))
            {
                outfile.Write(DisplayData);
            }

            return DisplayData.Length;
        }

Then, let's create a simple static helper method to read our input data file and put it in a List collection character by character. This will give us a character list for our producer to use. The code in this method will look like this:

        public static List<char> GetData(String PipelineInputFile)
        {
            List<char> Data = new List<char>();

            //Get the Source data.
            using (StreamReader inputfile = new StreamReader(PipelineInputFile))
            {
                while (inputfile.Peek() >= 0)
                {
                    Data.Add((char)inputfile.Read());
                }
            }

            return Data;
        }

Next, we will add a static method to encrypt our characters. This method will work like the one we used in our pipelining application. It will add one to the ASCII numerical value of the character:

        public static char Encrypt(char C)
        {
            //Take the character, convert to an int, add 1, then convert back to a character.
            int i = (int)C;
            i = i + 1;
            C = Convert.ToChar(i);

            return C;
        }

Then, we need to add the code for our Main method. This method will start our consumer and producer tasks. Then, when they have completed processing, it will display the results in the console. The code for this method looks like this:

        static void Main(string[] args)
        {
            // Create the buffer block object to use between the producer and consumer.
            BufferBlock<char> buffer = new BufferBlock<char>();

            // The consumer method runs asynchronously. Start it now.
            Task<int> consumer = Consumer(buffer);

            // Post source data to the dataflow block.
            Producer(buffer);

            // Wait for the consumer to process all data.
            consumer.Wait();

            // Print the count of characters from the input file.
            Console.WriteLine("Processed {0} bytes from input file.", consumer.Result);

            //Print out the input file to the console.
            Console.WriteLine("\r\n\r\n");
            Console.WriteLine("This is the input data file. \r\n");
            using (StreamReader inputfile = new StreamReader(PipelineInputFile))
            {
                while (inputfile.Peek() >= 0)
                {
                    Console.Write((char)inputfile.Read());
                }
            }

            //Print out the encrypted file to the console.
            Console.WriteLine("\r\n\r\n");
            Console.WriteLine("This is the encrypted data file. \r\n");
            using (StreamReader encryptfile = new StreamReader(PipelineEncryptFile))
            {
                while (encryptfile.Peek() >= 0)
                {
                    Console.Write((char)encryptfile.Read());
                }
            }

            //Wait before closing the application so we can see the results.
            Console.ReadLine();
        }

That is all the code that is needed. Now, let's build and run the application using the following input data file:

Once it runs and completes, your output should look like the following screenshot:

Now, try this with your own data files and inputs. Let's examine what happened and how this works.

How it works

First we will go through the Main method. The first thing Main does is create a BufferBlock object called buffer. This will be used as the queue of items between our producer and consumer. This BufferBlock is defined to accept the char datatype. Next, we start our consumer task using this command:

Task<int> consumer = Consumer(buffer);

Also, note that when this buffer object goes into the consumer task, it is passed as an ISourceBlock.
Notice the method header of our consumer:

static async Task<int> Consumer(ISourceBlock<char> Source)

Next, our Main method starts our producer task using the following command:

Producer(buffer);

Then we wait until our consumer task finishes, using this command:

consumer.Wait();

So, now our Main method just waits. Its work is done for now. It has started both the producer and consumer tasks. Now our consumer is waiting for items to appear in its BufferBlock so it can process them. The consumer will stay in the following loop until all items are removed from the message block and the block has been completed, which is done by someone calling its Complete method:

            while (await Source.OutputAvailableAsync())
            {
                char C = Source.Receive();

                //Encrypt each character.
                char encrypted = Encrypt(C);

                DisplayData = DisplayData + encrypted.ToString();
            }

So, now our consumer task will loop asynchronously, removing items from the message queue as they appear. It uses the following command in the while loop to do this:

await Source.OutputAvailableAsync()

Likewise, other consumer tasks can run at the same time and do the same thing. If the producer is adding items to the block quicker than the consumer can process them, then adding another consumer will improve performance. Once an item is available, the consumer calls the following command to get the item from the buffer:

char C = Source.Receive();

Since the buffer contains items of type char, we place the item received into a character value. Then the consumer processes it by encrypting the character and appending it to our display string.

Now, let's look at the producer. The producer first gets its data by calling the following command:

GetData(PipelineInputFile)

This method returns a List collection of characters that has an item for each character in the input data file. Now the producer iterates through the collection and uses the following command to place each item into the buffer block:

Target.Post(C);

Also, notice in the method header for our producer that we receive our buffer as an ITargetBlock type:

static void Producer(ITargetBlock<char> Target)

Once the producer is done processing characters and adding them to the buffer, it officially closes the BufferBlock object using this command:

Target.Complete();

That is it for the producer and consumer. Once the Main method is done waiting on the consumer to finish, it then uses the following code to write out the number of characters processed, the input data, and the encrypted data:

            // Print the count of characters from the input file.
            Console.WriteLine("Processed {0} bytes from input file.", consumer.Result);

            //Print out the input file to the console.
            Console.WriteLine("\r\n\r\n");
            Console.WriteLine("This is the input data file. \r\n");
            using (StreamReader inputfile = new StreamReader(PipelineInputFile))
            {
                while (inputfile.Peek() >= 0)
                {
                    Console.Write((char)inputfile.Read());
                }
            }

            //Print out the encrypted file to the console.
            Console.WriteLine("\r\n\r\n");
            Console.WriteLine("This is the encrypted data file. \r\n");
            using (StreamReader encryptfile = new StreamReader(PipelineEncryptFile))
            {
                while (encryptfile.Peek() >= 0)
                {
                    Console.Write((char)encryptfile.Read());
                }
            }

Now that you are comfortable implementing a basic producer-consumer design using objects from the TPL Dataflow library, try experimenting with this basic idea but use multiple producers and multiple consumers all with the same BufferBlock object as the queue between them all. Also, try converting our original Pipeline application from the beginning of the article into a TPL Dataflow producer-consumer application with two sets of producers and consumers. The first will act as stage 1 and stage 2, and the second will act as stage 2 and stage 3. So, in effect, stage 2 will be both a consumer and a producer.

Summary

We have covered a lot in this article. We have learned the benefits of, and how to implement, a Pipeline design pattern and a producer-consumer design pattern. As we saw, these are both very helpful design patterns when building parallel and concurrent applications that require multiple asynchronous processes of data between tasks.

In the Pipeline design, we are able to run multiple tasks or stages concurrently even though the stages rely on data being processed and output by other stages. This is very helpful for performance since all functionality doesn't have to wait on each stage to finish processing every item of data. In our example, we are able to start decrypting characters of data while a previous stage is still encrypting data and placing it into the queue. In the Pipeline example, we examined the benefits of the BlockingCollection class in acting as a queue between stages in our pipeline.

Next, we explored the new TPL Dataflow library and some of its message block classes. These classes implement several interfaces defined in the library—ISourceBlock, ITargetBlock, and IPropagatorBlock. By implementing these interfaces, the library allows us to write generic producer and consumer task functionality that can be reused in a variety of applications.

Both of these design patterns and the Dataflow library allow for easy implementations of common functionality in a concurrent manner. You will use these techniques in many applications, and they will become go-to design patterns when you evaluate a system's requirements and determine how to implement concurrency to help improve performance. Like all programming, parallel programming is made easier when you have a toolbox of easy-to-use techniques that you are comfortable with. Most applications that benefit from parallelism will be conducive to some variation of a producer-consumer or Pipeline pattern. Also, the BlockingCollection and Dataflow message block objects are useful mechanisms for coordinating data between parallel tasks, no matter what design pattern is used in the application. It will be very useful to become comfortable with these messaging and queuing classes.

Resources for Article:

Further resources on this subject:

Parallel Programming Patterns [article]

Watching Multiple Threads in C# [article]

Clusters, Parallel Computing, and Raspberry Pi – A Brief Background [article]

Data Migration Scenarios in SAP Business ONE Application- part 1

Packt
20 Oct 2009
25 min read
Just recently, I found myself in a data migration project that served as an eye-opener. Our team had to migrate a customer system that utilized Act! and Peachtree. Both systems are not very famous for having good accessibility to their data. In fact, Peachtree is a non-SQL database that does not enforce data consistency. Act! also uses a proprietary table system that is based on a non-SQL database. The general migration logic was rather straightforward. However, our team found that the migration and consolidation of data into the new system posed multiple challenges, not only on the technical front, but also for the customer when it came to verifying the data. We used the on-the-edge tool xFusion Studio for data migration. This tool allows migrating and synchronizing data by using simple and advanced SQL data messaging techniques. The xFusion Studio tool has a graphical representation of how the data flows from the source to the target. When I looked at one section of this graphical representation, I started humming the song Welcome to the Jungle. Take a look at the following screenshot and find out why Guns and Roses may have provided the soundtrack for this data migration project: What we learned from the above screenshot is quite obvious and I have dedicated this article to helping you overcome these potential issues. Keep it simple and focus on information rather than data. You know that just by having more data does not always mean you’ve added more information. Sometimes, it just means a data jungle has been created. Making the right decisions at key milestones during the migration can keep the project simple and guarantee the success. Your goal should be to consolidate the islands of data into a more efficient and consistent database that provides real-time information. What you will learn about data migration In order to accomplish the task of migrating data from different sources into SAP Business ONE application, a strategy must be designed that addresses the individual needs of the project at hand. The data migration strategy uses proven processes and templates. The data migration itself can be managed as a mini project depending on the complexity. During the course of this article, the following key topics will be covered. The goal is to help you make crucial decisions, which will keep a project simple and manageable: Position the data migration tasks in the project plan – We will start by positioning the data migration tasks in the project plan. I will further define the required tasks that you need to complete as a part of the data migration. Data types and scenarios – With the general project plan structure in place, it is time to cover the common terms related to data migration. I will introduce you to the main aspects, such as master data and transactional data, as well as the impact they have on the complexity of data migration. SAP tools available for migration – During the course of our case study, I will introduce you to the data migration tools that come with SAP. However, there are also more advanced tools for complex migrations. You will learn about the main player in this area and how to use it. Process of migration – To avoid problems and guarantee success, the data migration project must follow a proven procedure. We will update the project plan to include the procedure and will also use the process during our case study. Making decisions about what data to bring – I mentioned that it is important to focus on information versus data. 
With the knowledge of the right tools and procedures, it is a good time to summarize the primary known issues and explain how to tackle them. The project plan We are still progressing in Phase 2 – Analysis and Design. The data migration is positioned in the Solution Architecture section and is called Review Data Conversion Needs (Amount and Type of Data). A thorough evaluation of the data conversion needs will also cover the next task in the project plan called Review Integration Points with any 3rd Party Solution. As you can see, the data migration task stands as a small task in the project plan. But as mentioned earlier, it can wind up being a large project depending on the number and size of data sources that need to be migrated. To honor this, we will add some more details to this task. As the task name suggests, we must review data conversion needs and identify the amount and type of data. This simple task must be structured in phases, just like the entire project that is structured in phases. Therefore, data migration needs to go through the following phases to be successful: 1. Design - Identify all of the Data Sources 2. Extraction of data into Excel or SQL for review and consistency 3. Review of Data and Verification(Via Customer Feedback) 4. Load into SAP System and Verification Note that the validation process and the consequential load could be iterative processes. For example, if the validated data has many issues, it only makes sense to perform a load into SAP if an additional verification takes place before the load. You only want to load data into an SAP system for testing if you know the quality of the records going to be loaded is good. Therefore, new phases were added in the project plan (seen below). Please do this in your project too based on the actual complexity and the number of data sources you have. A thorough look at the tasks above will be provided when we talk about the process of migration. Before we do that, the basic terms related to data migration will be covered. Data sources—where is my data There is a great variety in the potential types data sources. We will now identify the most common sources and explain their key characteristics. However, if there is a source that is not mentioned here, you can still migrate the data easily by transitioning it into one of the following formats. Microsoft Excel and text data migration The most common format for data migration is Excel, or text-based files. Text-based files are formatted using commas or tabs as field separators. When a comma is used as a field separator, the file format is referred to as Comma Separated Values (CSV). Most of the migration templates and strategies are based on Excel files that have specific columns where you can manually enter data, or copy and paste larger chunks. Therefore, if there is any way for you to extract data from your current system and present it in Excel, you have already done a great deal of data migration work. Microsoft Access An Access database is essentially an Excel sheet on a larger scale with added data consistency capability. It is a good idea to consider extracting Access tables to Excel in order to prepare for data migration. SQL If you have very large sets of data, then instead of using Excel, we usually employ an SQL database. The database then has a set of tables instead of Excel sheets. Using SQL tables, we can create SQL statements that can verify data and analyze results sets. 
Please note that you can use any SQL database, such as Microsoft SQL Server, Oracle, IBM DB, and so on. SaaS (Netsuite, Salesforce) SaaS stands for Software as a Service. Essentially, it means you can use software functionality based on a subscription. However, you don't own the solution. All of the hardware and software is installed at the service center, so you don't need to worry about hardware and software maintenance. However, keep in mind that these services don't allow you to manage the service packs according to your requirements. You need to adjust your business to the schedule of the SaaS company. If you are migrating from a modern SaaS solution, such as Salesforce or Netsuite, you will probably know that the data is not at your site, but rather stored at your solution hosting provider. Getting the data out to migrate to another solution may be done by obtaining reports, which could then be saved in an Excel format. Other legacy data The term legacy data is often mentioned when evaluating larger old systems. Legacy data basically comprises a large set of data that a company is using on mostly obsolete systems. AS/400 or Mainframe The IBM AS/400 is a good example of a legacy data source. Experts who are capable of extracting data from these systems are highly sought after, and so the budget must be on a higher scale. AS/400 data can often be extracted into a text or an Excel format. However, the data may come without headings. The headings are usually documented in a file that describes the data. You need to make sure that you get the file definitions, without which the pure text files may be meaningless. In addition, the media format is worth considering. An older AS/400 system may utilize a backup tape format which is not available on your Intel server. Peachtree, QuickBooks, and Act! Another potential source for data migration may be a smaller PC-based system, such as Peachtree, QuickBooks, or Act!. These systems have a different data format, and are based on non-SQL databases. This means the data cannot be accessed via SQL. In order to extract data from those systems, the proprietary API must be used. For example, if Peachtree displays data in the applications forms, it uses the program logic to put the pieces together from different text files. Getting data out from these types of systems is difficult and sometimes impossible. It is recommended to employ the relevant API to access the data in a structured way. You may want to run reports and export the results to text or Excel. Data classification in SAP Business ONE There are two main groups of data that we will migrate to the SAP Business ONE application: master data and transaction data. Master data Master data is the basic information that SAP Business ONE uses to record transactions (for example, business partner information). In addition, information about your products, such as items, finished goods, and raw materials are considered master data. Master data should always be migrated if possible. It can easily be verified and structured in an Excel or SQL format. For example, the data could be displayed using Excel sheets. You can then quickly verify that the data is showing up in the correct columns. In addition, you can see if the data is broken down into its required components. For example, each Excel column should represent a target field in SAP Business ONE. You should avoid having a single column in Excel that provides data for more than one target in SAP Business ONE. 
Transaction data Transaction data are proposals, orders, invoices, deliveries, and other similar information that comprise a combination of master data to create a unique business document. Customers often will want to migrate historical transactions from older systems. However, the consequences of doing this may have a landslide effect. For example, inventory is valuated based on specific settings in the finance section of a system. If these settings are not identical in the new system, transactions may look different in the old and the new system. This makes the migration very risky as the data verification becomes difficult on the customer side. I recommend making historical transactions available via a reporting database. For example, often, sales history must be available when migrating data. You can create a reporting database that provides sales history information. The user can use this data via reports within the SAP Business ONE application. Therefore, closed transactions should be migrated via a reporting database . Closed transactions are all of the business-related activities that were fully completed in the old system. Open transactions, on the other hand, are all of the business-related activities that are currently not completed. It makes sense that the open transactions be migrated directly to SAP, and not to a history database because they will be completed within the new SAP system. As a result of the data migration, you would be able to access sales history information from within SAP by accessing a reporting database. Open transactions will be completed within SAP, and then consequently lead to new transactions in SAP. Create a history database for sales history and manually enter open transactions. SAP DI-API Now that we know the main data types for an SAP migration, and the most common sources, we can take a brief look at the way the data is inserted into the SAP system. Based on the SAP guidelines, you are not allowed to insert data directly in the underlying SQL tables. The reason for that is that it can cause inconsistencies. When SAP works with the database, multiple tables are often updated. If you manually update a table to insert data, there is a good chance that another table has a link that also requires updating. Therefore, unless you know the exact table structure for the data you are trying to update, don't mess with the SAP SQL tables. If you carefully read this and understand the table structure, you will now know that there may be situations where you decide to access the tables directly. If you decide to insert data directly into the SAP database tables, you run the risk of losing your warranty. Migration scenarios and key decisions Data migration not only takes place as a part of a new SAP implementation, but also if you have a running system and you want to import leads or a list of new items. Therefore, it is a good idea to learn about the scenarios that you may come across and be able to select the right migration and integration tools. As outlined before, data can be divided into two groups: master data and transaction data. It is important that you separate the two, and structure each data migration accordingly. Master data is an essential component for manifesting transactions. Therefore, even if you need to bring over transactional data, the master data must already be in place. Always start with the master data alongside a verification procedure, and then continue with the relevant transaction data. 
Let’s now briefly look at the most common situations where you may require the evaluation of potential data migration options. New company (start-up) In this setup, you may not have extensive amounts of existing data to migrate. However, you may want to bring over lead lists or lists of items. During the course of this article, we will import a list of leads into SAP using the Excel Import functionality. Many new companies require the capability to easily import data into SAP. As you already know by now, the import of leads and item information will be considered as importing master data. Working with this master data by entering sales orders and so forth, would constitute transaction data. Transaction data is considered closed if all of the relevant actions are performed. For example, a sales order is considered closed if the items are delivered, invoiced, and paid for. If the chain of events is not complete, the transaction is open. Islands of data scenario This is the classic situation for an SAP implementation. You will first need to identify the available data sources and their formats. Then, you select the master data you want to bring over. With multiple islands of data, an SAP master record may result from more than one source. A business partner record may come, in part, from an existing accounting system, such as QuickBooks or Peachtree. Whereas other parts may come from a CRM system, such as Act!. For example, the billing information may be retrieved from the finance system and the relevant lead and sales information, such as specific contacts and notes, may come from the CRM system. In such a case, you need to merge this information into a new consistent master record in SAP. For this situation, first manually put the pieces together. Once the manual process works, you can attempt to automate the process. Don't try to directly import all of the data. You should always establish an intermediary level that allows for data verification. Only then import the data into SAP. For example, if you have QuickBooks and Act!, first merge the information into Excel for verification, and then import it into SAP. If the amount of data is large, you can also establish an SQL database. In that case, the Excel sheets would be replaced by SQL tables. IBM legacy data migration The migration of IBM legacy data is potentially the most challenging because the IBM systems are not directly compatible with Windows-based systems. Therefore, almost naturally, you will establish a text-based, or an Excel-formatted, representation of the IBM data. You can then proceed with verifying the information. SQL migration The easiest migration type is obviously the one where all of the data is already structured and consistent. However, you will not always have documentation of the table structure where the data resides. In this case, you need to create queries against the SQL tables to verify the data. The queries can then be saved as views. The views you create should always represent a consistent set of information that you can migrate. For example, if you have one table with address information, and another table with customer ID fields, you can create a view that consolidates this information into a single consistent set. Process of migration for your project I briefly touched upon the most common data migration scenarios so you can get a feel for the process. As you can see, whatever the source of data is, we always attempt to create an intermediary platform that allows the data to be verified. 
This intermediary platform is most commonly Excel or an SQL database. The process of data migration has the following subtasks:

Identify available data sources

Structure data into master data and transaction data

Establish an intermediary platform with Excel or SQL

Verify data

Match data columns with Excel templates

Run migration based on templates and verify data

Based on this procedure, I have added more detail to the project plan. As you can see in this example, based on the required level of detail, we can make adjustments to the project plan to address the requirements:

SAP standard import features

Let's take a look at the available data exchange features in SAP. SAP provides two main tools for data migration. The first option is to use the available menu in the SAP Business ONE client interface to exchange data. The other option is to use the Data Transfer Workbench (DTW).

Standard import/export features—walk-through

You can reach the Import from Excel form via Administration | Data Import/Export. As you can see in the following screenshot on the right top section of the form, the type of import is a drop-down selection. The options are BP and Items. In the screenshot, we have selected BP, which allows business partner information to be imported. There are drop-down fields that you can select based on the data you want to import. However, keep in mind that certain fields are mandatory, such as the BP Code field, whereas others are optional. The fields you select are associated with a column as you can see here:
However, this resulted in an error message upon completion of the import. Therefore, system information was activated to check what information SAP requires for the BP Type representation. As you can see in the screenshot of the Excel sheet you get when you click on the OK button in the Import from Excel form, the BP Type information is indicated by only one letter using L, C, or V. In the example screenshot above, you can clearly see L in the lower left section. The same thing is done for Country in the Addresses section. You can try that by navigating to Administration | Sales | Countries, and then hovering over the country you will be importing. In my example, USA was internally represented by SAP as US. It is a minor issue. However, when importing data, all of these issues need to be addressed.

Please note that the file you are trying to import should not be open in Excel at the same time, as this may trigger an error. The Excel or text file does not have a header with a description of the data.

Standard import/export features for your own project

SAP's standard import functionality for business partners and items is very straightforward. For your own project, you can prepare an Excel sheet for business partners and items. If you need to import BP or item information from another system, you can get this done quickly. If you get an error during the import process, try to manually enter the data in SAP. In addition, you can use the System Information feature to identify how SAP stores information in the database. I recommend you first create an Excel sheet with a maximum of two records to see if the basic information and data format is correct. Once you have this running, you can add all of the data you want to import. Overall, this functionality is a quick way to get your own data into the system.

This feature can also be used in case you regularly receive address information. For example, if you have salespeople visiting trade fairs, you can provide them with the Excel sheet that you may have prepared for BP import. The salespeople can directly add their information there. Once they return from the trade fair with the Excel files, you can easily import the information into SAP and schedule follow-up activities using the Opportunity Management System. The item import is useful if you work with a vendor who updates his or her price lists and item information on a monthly basis. You can prepare an Excel template where the item information will regularly be entered and you can easily import the updates into SAP.

Data Transfer Workbench (DTW)

The SAP standard import/export features are straightforward, but may not address the full complexity of the data that you need to import. For this situation, you may want to evaluate the SAP Data Transfer Workbench (DTW). The functionality of this tool provides a greater level of detail to address the potential data structures that you want to import. To understand the basic concept of the DTW, it is a good idea to look at the different master data sections in SAP as business objects. A business object groups related information together. For example, BP information can have much more detail than what was previously shown in the standard import.

The DTW templates and business objects

To better understand the business object metaphor, you need to navigate to the DTW directory and evaluate the Templates folder. The templates are organized by business objects.
The oBusinessPartners business object is represented by the folder with the same name (seen below). In this folder, you can find Excel template files that can be used to provide information for this type of business object. The following objects are available as Excel templates:

BPAccountReceivables

BPAddresses

BPBankAccounts

BPPaymentDates

BPPaymentMethods

BPWithholdingTax

BusinessPartners

ContactEmployees

Please notice that these templates are Excel .xlt files, which is the Excel template extension. It is a good idea to browse through the list of templates and see the relevant templates. In a nutshell, you essentially add your own data to the templates and use DTW to import the data.

Connecting to DTW

In order to work with DTW, you need to connect to your SAP system using the DTW interface. The following screenshot shows the parameters I used to connect to the Lemonade Stand database:

Once you are connected, a wizard-type interface walks you through the required steps to get started. Look at the next screenshot:

The DTW examples and templates

There is also an example folder in the DTW installation location on your system. This example folder has information about how to add information to your Excel templates. The following screenshot shows an example for business partner migration. You can see that the Excel template does have a header line on top that explains the content in the particular column. The actual template files also have comments in the header file, which provide information about the data format expected, such as String, Date, and so on. See the example in this screenshot:

The actual template is empty and you need to add your information as shown here:

DTW for your own project

If you realize that the basic import features in SAP are not sufficient, and your requirements are more challenging, evaluate DTW. Think of the data you want to import as business objects where information is logically grouped. If you are able to group your data together, you can modify the Excel templates with your own information. The DTW example folder provides working examples that you can use to get started. Please note that you should establish a test database before you start importing data this way. This is because once new data arrives in SAP, you need to verify the results based on the procedure discussed earlier. In addition, be prepared to fine-tune the import. Often, an import and data verification process takes four attempts of data importing and verification.

Summary

In this article, we covered the tasks related to data migration. This also included some practical examples for simple data imports related to business partner information and items. In addition, more advanced topics were covered by introducing the SAP DTW (Data Transfer Workbench) and the related aspects to get you started. During the course of this article, we positioned the data migration task in the project plan. The project plan was then fine-tuned with more detail to give some justice to the potential complexity of a data migration project. The data migration tasks established a process, from design to data mapping and verification of the data. Notably, the establishment of an intermediary data platform was recommended for your projects. This will help you verify data at each step of the migration. The key message of keeping it simple will be the basis for every migration project. The data verification task ensures simplicity and the quality of your data.
If you have read this article you may be interested to view:

Competitive Service and Contract Management in SAP Business ONE Implementation: Part 1

Competitive Service and Contract Management in SAP Business ONE Implementation: Part 2

Data Migration Scenarios in SAP Business ONE Application- part 2

Hello, C#! Welcome, .NET Core!

Packt
11 Jan 2017
10 min read
In this article by Mark J. Price, author of the book C# 7 and .NET Core: Modern Cross-Platform Development - Second Edition, we will discuss setting up your development environment and understanding the similarities and differences between .NET Core, .NET Framework, .NET Standard Library, and .NET Native. Most people learn complex topics by imitation and repetition rather than reading a detailed explanation of theory. So, I will not explain every keyword and step.

This article covers the following topics:

Setting up your development environment

Understanding .NET

(For more resources related to this topic, see here.)

Setting up your development environment

Before you start programming, you will need to set up your Integrated Development Environment (IDE) that includes a code editor for C#. The best IDE to choose is Microsoft Visual Studio 2017, but it only runs on the Windows operating system. To develop on alternative operating systems such as macOS and Linux, a good choice of IDE is Microsoft Visual Studio Code.

Using alternative C# IDEs

There are alternative IDEs for C#, for example, MonoDevelop and JetBrains Project Rider. They each have versions available for Windows, Linux, and macOS, allowing you to write code on one operating system and deploy to the same or a different one.

For MonoDevelop IDE, visit http://www.monodevelop.com/

For JetBrains Project Rider, visit https://www.jetbrains.com/rider/

Cloud9 is a web browser-based IDE, so it's even more cross-platform than the others. Here is the link: https://c9.io/web/sign-up/free

Linux and Docker are popular server host platforms because they are relatively lightweight and scale more cost-effectively when compared to operating system platforms that are aimed more at end users, such as Windows and macOS.

Using Visual Studio 2017 on Windows 10

You can use Windows 7 or later to run code, but you will have a better experience if you use Windows 10. If you don't have Windows 10, then you can create a virtual machine (VM) to use for development. You can choose any cloud provider, but Microsoft Azure has preconfigured VMs that include properly licensed Windows 10 and Visual Studio 2017. You only pay for the minutes your VM is running, so it is a way for users of Linux, macOS, and older Windows versions to have all the benefits of using Visual Studio 2017.

Since October 2014, Microsoft has made a professional-quality edition of Visual Studio available to everyone for free. It is called the Community Edition. Microsoft has combined all its free developer offerings in a program called Visual Studio Dev Essentials. This includes the Community Edition, the free level of Visual Studio Team Services, Azure credits for test and development, and free training from Pluralsight, Wintellect, and Xamarin.

Installing Microsoft Visual Studio 2017

Download and install Microsoft Visual Studio Community 2017 or later: https://www.visualstudio.com/vs/visual-studio-2017/.

Choosing workloads

On the Workloads tab, choose the following:

Universal Windows Platform development

.NET desktop development

Web development

Azure development

.NET Core and Docker development

On the Individual components tab, choose the following:

Git for Windows

GitHub extension for Visual Studio

Click Install. You can choose to install everything if you want support for languages such as C++, Python, and R.

Completing the installation

Wait for the software to download and install. When the installation is complete, click Launch.
While you wait for Visual Studio 2017 to install, you can jump to the Understanding .NET section in this article.

Signing in to Visual Studio

The first time that you run Visual Studio 2017, you will be prompted to sign in. If you have a Microsoft account, for example, a Hotmail, MSN, Live, or Outlook e-mail address, you can use that account. If you don't, then register for a new one at the following link: https://signup.live.com/

You will see the Visual Studio user interface with the Start Page open in the central area. Like most Windows desktop applications, Visual Studio has a menu bar, a toolbar for common commands, and a status bar at the bottom. On the right is the Solution Explorer window that will list your open projects:

To have quick access to Visual Studio in the future, right-click on its entry in the Windows taskbar and select Pin this program to taskbar.

Using older versions of Visual Studio

The free Community Edition has been available since Visual Studio 2013 with Update 4. If you want to use a free version of Visual Studio older than 2013, then you can use one of the more limited Express editions. A lot of the code in this book will work with older versions if you bear in mind when the following features were introduced:

Year | C# | Features
2005 | 2  | Generics with <T>
2008 | 3  | Lambda expressions with => and manipulating sequences with LINQ (from, in, where, orderby, ascending, descending, select, group, into)
2010 | 4  | Dynamic typing with dynamic and multithreading with Task
2012 | 5  | Simplifying multithreading with async and await
2015 | 6  | string interpolation with $"" and importing static types with using static
2017 | 7  | Tuples (with deconstruction), patterns, out variables, literal improvements

Understanding .NET

.NET Framework, .NET Core, .NET Standard Library, and .NET Native are related and overlapping platforms for developers to build applications and services upon.

Understanding the .NET Framework platform

Microsoft's .NET Framework is a development platform that includes a Common Language Runtime (CLR) that manages the execution of code and a rich library of classes for building applications. Microsoft designed the .NET Framework to have the possibility of being cross-platform, but Microsoft put their implementation effort into making it work best with Windows. Practically speaking, the .NET Framework is Windows-only.

Understanding the Mono and Xamarin projects

Third parties developed a cross-platform .NET implementation named the Mono project (http://www.mono-project.com/). Mono is cross-platform, but it fell well behind the official implementation of .NET Framework. It has found a niche as the foundation of the Xamarin mobile platform. Microsoft purchased Xamarin and now includes what used to be an expensive product for free with Visual Studio 2017. Microsoft has renamed the Xamarin Studio development tool to Visual Studio for the Mac. Xamarin is targeted at mobile development and building cloud services to support mobile apps.

Understanding the .NET Core platform

Today, we live in a truly cross-platform world. Modern mobile and cloud development have made Windows a much less important operating system. So, Microsoft has been working on an effort to decouple the .NET Framework from its close ties with Windows. While rewriting .NET to be truly cross-platform, Microsoft has taken the opportunity to refactor .NET to remove major parts that are no longer considered core.
This new product is branded as the .NET Core, which includes a cross-platform implementation of the CLR known as CoreCLR and a streamlined library of classes known as CoreFX.

Streamlining .NET

.NET Core is much smaller than the current version of the .NET Framework because a lot has been removed. For example, Windows Forms and Windows Presentation Foundation (WPF) can be used to build graphical user interface (GUI) applications, but they are tightly bound to Windows, so they have been removed from the .NET Core. The latest technology for building Windows apps is the Universal Windows Platform (UWP). ASP.NET Web Forms and Windows Communication Foundation (WCF) are older web application and service technologies that fewer developers choose to use today, so they have also been removed from the .NET Core. Instead, developers prefer to use ASP.NET MVC and ASP.NET Web API. These two technologies have been refactored and combined into a new product that runs on the .NET Core named ASP.NET Core.

The Entity Framework (EF) 6.x is an object-relational mapping technology for working with data stored in relational databases such as Oracle and Microsoft SQL Server. It has gained baggage over the years, so the cross-platform version has been slimmed down and named Entity Framework Core.

Some data types in .NET that are included with both the .NET Framework and the .NET Core have been simplified by removing some members. For example, in the .NET Framework, the File class has both a Close and a Dispose method, and either can be used to release the file resources. In .NET Core, there is only the Dispose method. This reduces the memory footprint of the assembly and simplifies the API you have to learn.

The .NET Framework 4.6 is about 200 MB. The .NET Core is about 11 MB. Eventually, the .NET Core may grow to a similarly large size. Microsoft's goal is not to make the .NET Core smaller than the .NET Framework. The goal is to componentize .NET Core to support modern technologies and to have fewer dependencies, so that deployment requires only those components that your application really needs.

Understanding the .NET Standard

The situation with .NET today is that there are three forked .NET platforms, all controlled by Microsoft: .NET Framework, Xamarin, and .NET Core. Each has different strengths and weaknesses. This has led to the problem that a developer must learn three platforms, each with annoying quirks and limitations. So, Microsoft is working on defining the .NET Standard 2.0: a set of APIs that all .NET platforms must implement. Today, in 2016, there is the .NET Standard 1.6, but only .NET Core 1.0 supports it; .NET Framework and Xamarin do not! .NET Standard 2.0 will be implemented by .NET Framework, .NET Core, and Xamarin. For .NET Core, this will add many of the missing APIs that developers need to port old code written for .NET Framework to the new cross-platform .NET Core. .NET Standard 2.0 will probably be released towards the end of 2017, so I hope to write a third edition of this book for when it is finally released.

The future of .NET

The .NET Standard 2.0 is the near future of .NET, and it will make it much easier for developers to share code between any flavor of .NET, but we are not there yet. For cross-platform development, .NET Core is a great start, but it will take another version or two to become as mature as the current version of the .NET Framework.
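To make this portability, and the slimmed-down File API mentioned earlier, concrete, here is a small, hedged C# sketch that compiles on both the .NET Framework and .NET Core: it releases the file through the Dispose pattern (via using) rather than Close, and uses the string interpolation feature from the version table above. The file name is only an illustrative placeholder, not something taken from the book's examples.

using System;
using System.IO;

class StreamingDemo
{
    static void Main()
    {
        string path = "demo.txt"; // illustrative file name

        // The using block guarantees Dispose() is called when the writer goes out
        // of scope; on .NET Core, Dispose is the only release method available.
        using (StreamWriter writer = File.CreateText(path))
        {
            writer.WriteLine("Written by code that runs on Windows, Linux, or macOS");
        }

        string text = File.ReadAllText(path);
        Console.WriteLine($"Read back: {text.Trim()}"); // string interpolation
    }
}

The same source builds unchanged against either platform, which is the kind of code sharing that the .NET Standard is meant to guarantee more broadly.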
This book will focus on the .NET Core, but will use the .NET Framework when important or useful features have not (yet) been implemented in the .NET Core. Understanding the .NET Native compiler Another .NET initiative is the .NET Native compiler. This compiles C# code to native CPU instructions ahead-of-time (AoT) rather than using the CLR to compile IL just-in-time (JIT) to native code later. The .NET Native compiler improves execution speed and reduces the memory footprint for applications. It supports the following: UWP apps for Windows 10, Windows 10 Mobile, Xbox One, HoloLens, and Internet of Things (IoT) devices such as Raspberry Pi Server-side web development with ASP.NET Core Console applications for use on the command line Comparing .NET technologies The following table summarizes and compares the .NET technologies: Platform Feature set C# compiles to Host OSes .NET Framework Mature and extensive IL executed by a runtime Windows only Xamarin Mature and limited to mobile features iOS, Android, Windows Mobile .NET Core Brand-new and somewhat limited Windows, Linux, macOS, Docker .NET Native Brand-new and very limited Native code Summary In this article, we have learned how to set up the development environment, and discussed in detail about .NET technologies. Resources for Article: Further resources on this subject: Introduction to C# and .NET [article] Reactive Programming with C# [article] Functional Programming in C# [article]

article-image-function-passing
Packt
19 Nov 2014
6 min read
Save for later

Function passing

Packt
19 Nov 2014
6 min read
In this article by Simon Timms, the author of the book Mastering JavaScript Design Patterns, we will cover function passing. In functional programming languages, functions are first-class citizens. Functions can be assigned to variables and passed around just like any other variable. This is not entirely a foreign concept. Even languages such as C had function pointers that could be treated just like other variables. C# has delegates and, in more recent versions, lambdas. The latest release of Java has also added support for lambdas, as they have proven to be so useful.

(For more resources related to this topic, see here.)

JavaScript allows functions to be treated as variables and even as objects and strings. In this way, JavaScript is functional in nature. Because of JavaScript's single-threaded nature, callbacks are a common convention and you can find them pretty much everywhere. Consider calling a function at a later date on a web page. This is done by setting a timeout on the window object as follows:

setTimeout(function(){alert("Hello from the past")}, 5 * 1000);

The arguments for the setTimeout function are a function to call and a time to delay in milliseconds.

No matter the JavaScript environment in which you're working, it is almost impossible to avoid functions in the shape of callbacks. The asynchronous processing model of Node.js is highly dependent on being able to call a function and pass in something to be completed at a later date. Making calls to external resources in a browser is also dependent on a callback to notify the caller that some asynchronous operation has completed. In basic JavaScript, this looks like the following code:

var xmlhttp = new XMLHttpRequest();
xmlhttp.onreadystatechange = function() {
  if (xmlhttp.readyState == 4 && xmlhttp.status == 200) {
    //process returned data
  }
};
xmlhttp.open("GET", "http://some.external.resource", true);
xmlhttp.send();

You may notice that we assign onreadystatechange before we even send the request. This is because assigning it later may result in a race condition in which the server responds before the function is attached to the ready state change. In this case, we've used an inline function to process the returned data. Because functions are first-class citizens, we can change this to look like the following code:

var xmlhttp;
function requestData() {
  xmlhttp = new XMLHttpRequest();
  xmlhttp.onreadystatechange = processData;
  xmlhttp.open("GET", "http://some.external.resource", true);
  xmlhttp.send();
}
function processData() {
  if (xmlhttp.readyState == 4 && xmlhttp.status == 200) {
    //process returned data
  }
}

This is typically a cleaner approach and avoids performing complex processing in line with another function. However, you might be more familiar with the jQuery version of this, which looks something like this:

$.getJSON('http://some.external.resource', function(json){
  //process returned data
});

In this case, the boilerplate of dealing with ready state changes is handled for you. There is even convenience provided for you should the request for data fail, with the following code:

$.ajax('http://some.external.resource', {
  success: function(json){
    //process returned data
  },
  error: function(){
    //process failure
  },
  dataType: "json"
});

In this case, we've passed an object into the ajax call, which defines a number of properties. Amongst these properties are function callbacks for success and failure. This method of passing numerous functions into another suggests a great way of providing expansion points for classes.
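Before looking at a fuller implementation, here is a stripped-down sketch of that idea: a function that accepts an options object and invokes whichever callbacks the caller chose to supply. The names used here are illustrative only and are not part of any library.

function processOrder(order, options) {
  options = options || {};
  if (typeof options.onStart === "function") {
    options.onStart(order);
  }
  //do the actual work here
  if (typeof options.onComplete === "function") {
    options.onComplete(order);
  }
}

processOrder({ id: 42 }, {
  onComplete: function (order) {
    console.log("Finished order " + order.id);
  }
});

Only the callbacks that were actually provided are run, so callers opt in to exactly the hooks they care about.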
Likely, you've seen this pattern in use before without even realizing it. Passing functions into constructors as part of an options object is a commonly used approach to providing extension hooks in JavaScript libraries.

Implementation

In Westeros, the tourism industry is almost nonexistent. There are great difficulties with bandits killing tourists and tourists becoming entangled in regional conflicts. Nonetheless, some enterprising folks have started to advertise a grand tour of Westeros in which they will take those with the means on a tour of all the major attractions. From King's Landing to Eyrie, to the great mountains of Dorne, the tour will cover it all. In fact, a rather mathematically inclined member of the tourism board has taken to calling it a Hamiltonian tour, as it visits everywhere exactly once.

The HamiltonianTour class takes an options object that contains the various places to which a callback can be attached. In our case, the interface for it would look something like the following code:

export class HamiltonianTourOptions{
  onTourStart: Function;
  onEntryToAttraction: Function;
  onExitFromAttraction: Function;
  onTourCompletion: Function;
}

The full HamiltonianTour class looks like the following code:

var HamiltonianTour = (function () {
  function HamiltonianTour(options) {
    this.options = options;
  }
  HamiltonianTour.prototype.StartTour = function () {
    if (this.options.onTourStart && typeof (this.options.onTourStart) === "function")
      this.options.onTourStart();
    this.VisitAttraction("King's Landing");
    this.VisitAttraction("Winterfell");
    this.VisitAttraction("Mountains of Dorne");
    this.VisitAttraction("Eyrie");
    if (this.options.onTourCompletion && typeof (this.options.onTourCompletion) === "function")
      this.options.onTourCompletion();
  };
  HamiltonianTour.prototype.VisitAttraction = function (AttractionName) {
    if (this.options.onEntryToAttraction && typeof (this.options.onEntryToAttraction) === "function")
      this.options.onEntryToAttraction(AttractionName);
    //do whatever one does in an attraction
    if (this.options.onExitFromAttraction && typeof (this.options.onExitFromAttraction) === "function")
      this.options.onExitFromAttraction(AttractionName);
  };
  return HamiltonianTour;
})();

You can see in the preceding code how we check the options and then execute the callback as needed. This can be done by simply using the following code:

var tour = new HamiltonianTour({
  onEntryToAttraction: function(cityname){
    console.log("I'm delighted to be in " + cityname);
  }
});
tour.StartTour();

The output of the preceding code will be:

I'm delighted to be in King's Landing
I'm delighted to be in Winterfell
I'm delighted to be in Mountains of Dorne
I'm delighted to be in Eyrie

Summary

In this article, we have learned about function passing. Passing functions is a great approach to solving a number of problems in JavaScript and tends to be used extensively by libraries such as jQuery and frameworks such as Express. It is so commonly adopted that using it adds no barriers to your code's readability.

Resources for Article:

Further resources on this subject: Creating Java EE Applications [article] Meteor.js JavaScript Framework: Why Meteor Rocks! [article] Dart with JavaScript [article]

article-image-vrops-introduction-and-architecture
Packt
22 May 2015
10 min read
Save for later

vROps – Introduction and Architecture

Packt
22 May 2015
10 min read
In this article by Scott Norris and Christopher Slater, the authors of Mastering vRealize Operations Manager, we introduce you to vRealize Operations Manager and its component architecture. vRealize Operations Manager (vROps) 6.0 is a tool from VMware that helps IT administrators monitor, troubleshoot, and manage the health and capacity of their virtual environment. vROps has been developed from the stage of being a single tool to being a suite of tools known as vRealize Operations. This suite includes vCenter Infrastructure Navigator (VIN), vRealize Configuration Manager (vCM), vRealize Log Insight, and vRealize Hyperic. Due to its popularity and the powerful analytics engine that vROps uses, many hardware vendors supply adapters (now known as solutions) that allow IT administrators to extend monitoring, troubleshooting, and capacity planning to non-vSphere systems including storage, networking, applications, and even physical devices. In this article, we will learn what's new with vROps 6.0; specifically with respect to its architecture components. One of the most impressive changes with vRealize Operations Manager 6.0 is the major internal architectural change of components, which has helped to produce a solution that supports both a scaled-out and high-availability deployment model. In this article, we will describe the new platform components and the details of the new deployment architecture. (For more resources related to this topic, see here.) A new, common platform design In vRealize Operations Manager 6.0, a new platform design was required to meet some of the required goals that VMware envisaged for the product. These included: The ability to treat all solutions equally and to be able to offer management of performance, capacity, configuration, and compliance of both VMware and third-party solutions The ability to provide a single platform that can scale up to tens of thousands of objects and millions of metrics by scaling out with little reconfiguration or redesign required The ability to support a monitoring solution that can be highly available and to support the loss of a node without impacting the ability to store or query information To meet these goals, vCenter Operations Manager 5.x (vCOps) went through a major architectural overhaul to provide a common platform that uses the same components no matter what deployment architecture is chosen. These changes are shown in the following figure: When comparing the deployment architecture of vROps 6.0 with vCOps 5.x, you will notice that the footprint has changed dramatically. Listed in the following table are some of the major differences in the deployment of vRealize Operations Manager 6.0 compared to vRealize Operations Manager 5.x: Deployment considerations vCenter Operations Manager 5.x vRealize Operations Manager 6.0 vApp deployment vApp consists of two VMs: The User Interface VM The Analytics VM There is no supported way to add additional VMs to vApp and therefore no way to scale out. This deploys a single virtual appliance (VA), that is, the entire solution is provided in each VA. As many as up to 8 VAs can be deployed with this type of deployment. Scaling This deployment could only be scaled up to a certain extent. If it is scaled beyond this, separate instances are needed to be deployed, which do not share the UI or data. This deployment is built on the GemFire federated cluster that supports sharing of data and the UI. Data resiliency is done through GemFire partitioning. 
Remote collector Remote collectors are supported in vCOps 5.x, but with the installable version only. These remote collectors require a Windows or Linux base OS. The same VA is used for the remote collector simply by specifying the role during the configuration. Installable/standalone option It is required that customers own MSSQL or Oracle database. No capacity planning or vSphere UI is provided with this type of deployment. This deployment leverages built-in databases. It uses the same code base as used in the VA. The ability to support new scaled out and highly available architectures will require an administrator to consider which model is right for their environment before a vRealize Operations Manager 6.0 migration or rollout begins. The vRealize Operations Manager component architecture With a new common platform design comes a completely new architecture. As mentioned in the previous table, this architecture is common across all deployed nodes as well as the vApp and other installable versions. The following diagram shows the five major components of the Operations Manager architecture: The five major components of the Operations Manager architecture depicted in the preceding figure are: The user interface Collector and the REST API Controller Analytics Persistence The user interface In vROps 6.0, the UI is broken into two components—the Product UI and the Admin UI. Unlike the vCOps 5.x vApp, the vROps 6.0 Product UI is present on all nodes with the exception of nodes that are deployed as remote collectors. Remote collectors will be discussed in more detail in the next section. The Admin UI is a web application hosted by Pivotal tc Server(A Java application Apache web server) and is responsible for making HTTP REST calls to the Admin API for node administration tasks. The Cluster and Slice Administrator (CaSA) is responsible for cluster administrative actions such as: Enabling/disabling the Operations Manager cluster Enabling/disabling cluster nodes Performing software updates Browsing logfiles The Admin UI is purposely designed to be separate from the Product UI and always be available for administration and troubleshooting tasks. A small database caches data from the Product UI that provides the last known state information to the Admin UI in the event that the Product UI and analytics are unavailable. The Admin UI is available on each node at https://<NodeIP>/admin. The Product UI is the main Operations Manager graphical user interface. Like the Admin UI, the Product UI is based on Pivotal tc Server and can make HTTP REST calls to the CaSA for administrative tasks. However, the primary purpose of the Product UI is to make GemFire calls to the Controller API to access data and create views, such as dashboards and reports. As shown in the following figure, the Product UI is simply accessed via HTTPS on TCP port 443. Apache then provides a reverse proxy to the Product UI running in Pivotal tc Server using the Apache AJP protocol. Collector The collector's role has not differed much from that in vCOps 5.x. The collector is responsible for processing data from solution adapter instances. As shown in the following figure, the collector uses adapters to collect data from various sources and then contacts the GemFire locator for connection information of one or more controller cache servers. The collector service then connects to one or more Controller API GemFire cache servers and sends the collected data. 
It is important to note that although an instance of an adapter can only be run on one node at a time, this does not imply that the collected data is being sent to the controller on that node. Controller The controller manages the storage and retrieval of the inventory of the objects within the system. The queries are performed by leveraging the GemFire MapReduce function that allows you to perform selective querying. This allows efficient data querying as data queries are only performed on selective nodes rather than all nodes. We will go in detail to know how the controller interacts with the analytics and persistence stack a little later as well as its role in creating new resources, feeding data in, and extracting views. Analytics Analytics is at the heart of vROps as it is essentially the runtime layer for data analysis. The role of the analytics process is to track the individual states of every metric and then use various forms of correlation to determine whether there are problems. At a high level, the analytics layer is responsible for the following tasks: Metric calculations Dynamic thresholds Alerts and alarms Metric storage and retrieval from the Persistence layer Root cause analysis Historic Inventory Server (HIS) version metadata calculations and relationship data One important difference between vROps 6.0 and vCOps 5.x is that analytics tasks are now run on every node (with the exception of remote collectors). The vCOps 5.x Installable provides an option of installing separate multiple remote analytics processors for dynamic threshold (DT) processing. However, these remote DT processors only support dynamic threshold processing and do not include other analytics functions. Although its primary tasks have not changed much from vCOps 5.x, the analytics component has undergone a significant upgrade under the hood to work with the new GemFire-based cache and the Controller and Persistence layers. Persistence The Persistence layer, as its name implies, is the layer where the data is persisted to a disk. The layer primarily consists of a series of databases that replace the existing vCOps 5.x filesystem database (FSDB) and PostgreSQL combination. Understanding the persistence layer is an important aspect of vROps 6.0, as this layer has a strong relationship with the data and service availability of the solution. vROps 6.0 has four primary database services built on the EMC Documentum xDB (an XML database) and the original FSDB. These services include: Common name Role DB type Sharded Location Global xDB Global data Documentum xDB No /storage/vcops/xdb Alarms xDB Alerts and Alarms data Documentum xDB Yes /storage/vcops/alarmxdb HIS xDB Historical Inventory Service data Documentum xDB Yes /storage/vcops/hisxdb FSDB Filesystem Database metric data FSDB Yes /storage/db/vcops/data CaSA DB Cluster and Slice Administrator data HSQLDB (HyperSQL database) N/A /storage/db/casa/webapp/hsqldb Sharding is the term that GemFire uses to describe the process of distributing data across multiple systems to ensure that computational, storage, and network loads are evenly distributed across the cluster. Global xDB Global xDB contains all of the data that, for the release of vROps, can not be sharded. 
The majority of this data is user configuration data that includes: User created dashboards and reports Policy settings and alert rules Super metric formulas (not super metric data, as this is sharded in the FSDB) Resource control objects (used during resource discovery) As Global xDB is used for data that cannot be sharded, it is solely located on the master node (and master replica if high availability is enabled). Alarms xDB Alerts and Alarms xDB is a sharded xDB database that contains information on DT breaches. This information then gets converted into vROps alarms based on active policies. HIS xDB Historical Inventory Service (HIS) xDB is a sharded xDB database that holds historical information on all resource properties and parent/child relationships. HIS is used to change data back to the analytics layer based on the incoming metric data that is then used for DT calculations and symptom/alarm generation. FSDB The role of the Filesystem Database is not differed much from vCOps 5.x. The FSDB contains all raw time series metrics for the discovered resources. The FSDB metric data, HIS object, and Alarms data for a particular resource share the same GemFire shard key. This ensures that the multiple components that make up the persistence of a given resource are always located on the same node. Summary In this article, we discussed the new common platform architecture design and how Operations Manager 6.0 differs from Operations Manager 5.x. We also covered the major components that make up the Operations Manager 6.0 platform and the functions that each of the component layers provide. Resources for Article: Further resources on this subject: Solving Some Not-so-common vCenter Issues [article] VMware vRealize Operations Performance and Capacity Management [article] Working with VMware Infrastructure [article]

article-image-progressive-mockito
Packt
14 Jul 2014
16 min read
Save for later

Progressive Mockito

Packt
14 Jul 2014
16 min read
(For more resources related to this topic, see here.) Drinking Mockito Download the latest Mockito binary from the following link and add it to the project dependency: http://code.google.com/p/mockito/downloads/list As of February 2014, the latest Mockito version is 1.9.5. Configuring Mockito To add Mockito JAR files to the project dependency, perform the following steps: Extract the JAR files into a folder. Launch Eclipse. Create an Eclipse project named Chapter04. Go to the Libraries tab in the project build path. Click on the Add External JARs... button and browse to the Mockito JAR folder. Select all JAR files and click on OK. We worked with Gradle and Maven and built a project with the JUnit dependency. In this section, we will add Mockito dependencies to our existing projects. The following code snippet will add a Mockito dependency to a Maven project and download the JAR file from the central Maven repository (http://mvnrepository.com/artifact/org.mockito/mockito-core): <dependency> <groupId>org.mockito</groupId> <artifactId>mockito-core</artifactId> <version>1.9.5</version> <scope>test</scope> </dependency> The following Gradle script snippet will add a Mockito dependency to a Gradle project: testCompile 'org.mockito:mockito-core:1.9.5' Mocking in action This section demonstrates the mock objects with a stock quote example. In the real world, people invest money on the stock market—they buy and sell stocks. A stock symbol is an abbreviation used to uniquely identify shares of a particular stock on a particular market, such as stocks of Facebook are registered on NASDAQ as FB and stocks of Apple as AAPL. We will build a stock broker simulation program. The program will watch the market statistics, and depending on the current market data, you can perform any of the following actions: Buy stocks Sell stocks Hold stocks The domain classes that will be used in the program are Stock, MarketWatcher, Portfolio, and StockBroker. Stock represents a real-world stock. It has a symbol, company name, and price. MarketWatcher looks up the stock market and returns the quote for the stock. A real implementation of a market watcher can be implemented from http://www.wikijava.org/wiki/Downloading_stock_market_quotes_from_Yahoo!_finance. Note that the real implementation will connect to the Internet and download the stock quote from a provider. Portfolio represents a user's stock data such as the number of stocks and price details. Portfolio exposes APIs for getting the average stock price and buying and selling stocks. Suppose on day one someone buys a share at a price of $10.00, and on day two, the customer buys the same share at a price of $8.00. So, on day two the person has two shares and the average price of the share is $9.00. The following screenshot represents the Eclipse project structure. You can download the project from the Packt Publishing website and work with the files: The following code snippet represents the StockBroker class. StockBroker collaborates with the MarketWatcher and Portfolio classes. 
The perform() method of StockBroker accepts a portfolio and a Stock object: public class StockBroker { private final static BigDecimal LIMIT = new BigDecimal("0.10"); private final MarketWatcher market; public StockBroker(MarketWatcher market) { this.market = market; } public void perform(Portfolio portfolio,Stock stock) { Stock liveStock = market.getQuote(stock.getSymbol()); BigDecimal avgPrice = portfolio.getAvgPrice(stock); BigDecimal priceGained = liveStock.getPrice().subtract(avgPrice); BigDecimal percentGain = priceGained.divide(avgPrice); if(percentGain.compareTo(LIMIT) > 0) { portfolio.sell(stock, 10); }else if(percentGain.compareTo(LIMIT) < 0){ portfolio.buy(stock); } } } Look at the perform method. It takes a portfolio object and a stock object, calls the getQuote method of MarketWatcher, and passes a stock symbol. Then, it gets the average stock price from portfolio and compares the current market price with the average stock price. If the current stock price is 10 percent greater than the average price, then the StockBroker program sells 10 stocks from Portfolio; however, if the current stock price goes down by 10 percent, then the program buys shares from the market to average out the loss. Why do we sell 10 stocks? This is just an example and 10 is just a number; this could be anything you want. StockBroker depends on Portfolio and MarketWatcher; a real implementation of Portfolio should interact with a database, and MarketWatcher needs to connect to the Internet. So, if we write a unit test for the broker, we need to execute the test with a database and an Internet connection. A database connection will take time and Internet connectivity depends on the Internet provider. So, the test execution will depend on external entities and will take a while to finish. This will violate the quick test execution principle. Also, the database state might not be the same across all test runs. This is also applicable for the Internet connection service. Each time the database might return different values, and therefore asserting a specific value in your unit test is very difficult. We'll use Mockito to mock the external dependencies and execute the test in isolation. So, the test will no longer be dependent on real external service, and therefore it will be executed quickly. Mocking objects A mock can be created with the help of a static mock() method as follows: import org.mockito.Mockito; public class StockBrokerTest { MarketWatcher marketWatcher = Mockito.mock(MarketWatcher.class); Portfolio portfolio = Mockito.mock(Portfolio.class); } Otherwise, you can use Java's static import feature and static import the mock method of the org.mockito.Mockito class as follows: import static org.mockito.Mockito.mock; public class StockBrokerTest { MarketWatcher marketWatcher = mock(MarketWatcher.class); Portfolio portfolio = mock(Portfolio.class); } There's another alternative; you can use the @Mock annotation as follows: import org.mockito.Mock; public class StockBrokerTest { @Mock MarketWatcher marketWatcher; @Mock Portfolio portfolio; } However, to work with the @Mock annotation, you are required to call MockitoAnnotations.initMocks( this ) before using the mocks, or use MockitoJUnitRunner as a JUnit runner. 
The following code snippet uses MockitoAnnotations to create mocks: import static org.junit.Assert.assertEquals; import org.mockito.MockitoAnnotations; public class StockBrokerTest { @Mock MarketWatcher marketWatcher; @Mock Portfolio portfolio; @Before public void setUp() { MockitoAnnotations.initMocks(this); } @Test public void sanity() throws Exception { assertNotNull(marketWatcher); assertNotNull(portfolio); } } The following code snippet uses the MockitoJUnitRunner JUnit runner: import org.mockito.runners.MockitoJUnitRunner; @RunWith(MockitoJUnitRunner.class) public class StockBrokerTest { @Mock MarketWatcher marketWatcher; @Mock Portfolio portfolio; @Test public void sanity() throws Exception { assertNotNull(marketWatcher); assertNotNull(portfolio); } } Before we deep dive into the Mockito world, there are a few things to remember. Mockito cannot mock or spy the following functions: final classes, final methods, enums, static methods, private methods, the hashCode() and equals() methods, anonymous classes, and primitive types. PowerMock (an extension of EasyMock) and PowerMockito (an extension of the Mockito framework) allows you to mock static and private methods; even PowerMockito allows you to set expectations on new invocations for private member classes, inner classes, and local or anonymous classes. However, as per the design, you should not opt for mocking private/static properties—it violates the encapsulation. Instead, you should refactor the offending code to make it testable. Change the Portfolio class, create the final class, and rerun the test; the test will fail as the Portfolio class is final, and Mockito cannot mock a final class. The following screenshot shows the JUnit output: Stubbing methods We read about stubs in ,Test Doubles. The stubbing process defines the behavior of a mock method such as the value to be returned or the exception to be thrown when the method is invoked. The Mockito framework supports stubbing and allows us to return a given value when a specific method is called. This can be done using Mockito.when() along with thenReturn (). The following is the syntax of importing when: import static org.mockito.Mockito.when; The following code snippet stubs the getQuote(String symbol) method of MarcketWatcher and returns a specific Stock object: import static org.mockito.Matchers.anyString; import static org.mockito.Mockito.when; @RunWith(MockitoJUnitRunner.class) public class StockBrokerTest { @Mock MarketWatcher marketWatcher; @Mock Portfolio portfolio; @Test public void marketWatcher_Returns_current_stock_status() { Stock uvsityCorp = new Stock("UV", "Uvsity Corporation", new BigDecimal("100.00")); when (marketWatcher.getQuote( anyString ())). thenReturn (uvsityCorp); assertNotNull(marketWatcher.getQuote("UV")); } } A uvsityCorp stock object is created with a stock price of $100.00 and the getQuote method is stubbed to return uvsityCorp whenever the getQuote method is called. Note that anyString() is passed to the getQuote method, which means whenever the getQuote method will be called with any String value, the uvsityCorp object will be returned. The when() method represents the trigger, that is, when to stub. The following methods are used to represent what to do when the trigger is triggered: thenReturn(x): This returns the x value. thenThrow(x): This throws an x exception. thenAnswer(Answer answer): Unlike returning a hardcoded value, a dynamic user-defined logic is executed. It's more like for fake test doubles, Answer is an interface. 
thenCallRealMethod(): This method calls the real method on the mock object. The following code snippet stubs the external dependencies and creates a test for the StockBroker class: import com.packt.trading.dto.Stock; import static org.junit.Assert.assertNotNull; import static org.mockito.Matchers.anyString; import static org.mockito.Matchers.isA; import static org.mockito.Mockito.verify; import static org.mockito.Mockito.when; @RunWith(MockitoJUnitRunner.class) public class StockBrokerTest { @Mock MarketWatcher marketWatcher; @Mock Portfolio portfolio; StockBroker broker; @Before public void setUp() { broker = new StockBroker(marketWatcher); } @Test public void when_ten_percent_gain_then_the_stock_is_sold() { //Portfolio's getAvgPrice is stubbed to return $10.00 when(portfolio.getAvgPrice(isA(Stock.class))). thenReturn(new BigDecimal("10.00")); //A stock object is created with current price $11.20 Stock aCorp = new Stock("A", "A Corp", new BigDecimal("11.20")); //getQuote method is stubbed to return the stock when(marketWatcher.getQuote(anyString())).thenReturn(aCorp); //perform method is called, as the stock price increases // by 12% the broker should sell the stocks broker.perform(portfolio, aCorp); //verifying that the broker sold the stocks verify(portfolio).sell(aCorp,10); } } The test method name is when_ten_percent_gain_then_the_stock_is_sold; a test name should explain the intention of the test. We use underscores to make the test name readable. We will use the when_<<something happens>>_then_<<the action is taken>> convention for the tests. In the preceding test example, the getAvgPrice() method of portfolio is stubbed to return $10.00, then the getQuote method is stubbed to return a hardcoded stock object with a current stock price of $11.20. The broker logic should sell the stock as the stock price goes up by 12 percent. The portfolio object is a mock object. So, unless we stub a method, by default, all the methods of portfolio are autostubbed to return a default value, and for the void methods, no action is performed. The sell method is a void method; so, instead of connecting to a database to update the stock count, the autostub will do nothing. However, how will we test whether the sell method was invoked? We use Mockito.verify. The verify() method is a static method, which is used to verify the method invocation. If the method is not invoked, or the argument doesn't match, then the verify method will raise an error to indicate that the code logic has issues. Verifying the method invocation To verify a redundant method invocation, or to verify whether a stubbed method was not called but was important from the test perspective, we should manually verify the invocation; for this, we need to use the static verify method. Why do we use verify? Mock objects are used to stub external dependencies. We set an expectation, and a mock object returns an expected value. In some conditions, a behavior or method of a mock object should not be invoked, or sometimes, we may need to call the method N (a number) times. The verify method verifies the invocation of mock objects. Mockito does not automatically verify all stubbed calls. If a stubbed behavior should not be called but the method is called due to a bug in the code, verify flags the error though we have to verify that manually. The void methods don't return values, so you cannot assert the returned values. Hence, verify is very handy to test the void methods. 
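Before going deeper into verification, here is a short, hedged sketch of the thenAnswer variant listed above, which has not been demonstrated yet. It reuses the mocked marketWatcher field and the Stock class from the earlier listings; the symbol and price values are made up for illustration:

@Test
public void quote_is_built_from_the_requested_symbol() {
  when(marketWatcher.getQuote(anyString())).thenAnswer(new Answer<Stock>() {
    public Stock answer(InvocationOnMock invocation) throws Throwable {
      //compute the return value from the actual argument instead of hardcoding it
      String symbol = (String) invocation.getArguments()[0];
      return new Stock(symbol, symbol + " Corp", new BigDecimal("20.00"));
    }
  });
  assertEquals("FB", marketWatcher.getQuote("FB").getSymbol());
}

The Answer and InvocationOnMock types come from the org.mockito.stubbing and org.mockito.invocation packages respectively.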
Verifying in depth

The verify() method has an overloaded version that takes Times as an argument. Times is a Mockito framework class in the org.mockito.internal.verification package, and it takes wantedNumberOfInvocations as an integer argument. If 0 is passed to Times, it infers that the method should not be invoked in the testing path. We can pass 0 to Times(0) to make sure that the sell or buy methods are not invoked. If a negative number is passed to the Times constructor, Mockito throws MockitoException - org.mockito.exceptions.base.MockitoException, with the error Negative value is not allowed here.

The following methods are used in conjunction with verify:

times(int wantedNumberOfInvocations): The method must be invoked exactly wantedNumberOfInvocations times; if it is not, the test fails.
never(): The stubbed method must never be called; you can also use times(0) to represent the same scenario. If the stubbed method is invoked at least once, the test fails.
atLeastOnce(): The method must be invoked at least once; it is fine if the method is invoked multiple times, but the verification fails if the method is not invoked at all.
atLeast(int minNumberOfInvocations): The method must be called at least minNumberOfInvocations times; it is fine if the method is invoked more often, but the verification fails if the method is not called at least minNumberOfInvocations times.
atMost(int maxNumberOfInvocations): The method must be called at most maxNumberOfInvocations times; the verification fails if the method is called more than maxNumberOfInvocations times.
only(): The verified method must be the only method called on the mock; the verification fails if any other method is called on the mock object. In our example, if we use verify(portfolio, only()).sell(aCorp,10);, the test will fail with the following output:

The test fails in line 15 as portfolio.getAvgPrice(stock) is called.

timeout(int millis): The verification allows the interaction to happen within the specified number of milliseconds, which is useful when the call is made from another thread.

Verifying zero and no more interactions

The verifyZeroInteractions(Object... mocks) method verifies whether no interactions happened on the given mocks. The following test code directly calls verifyZeroInteractions and passes the two mock objects. Since no methods are invoked on the mock objects, the test passes:

@Test
public void verify_zero_interaction() {
  verifyZeroInteractions(marketWatcher, portfolio);
}

The verifyNoMoreInteractions(Object... mocks) method checks whether any of the given mocks has any unverified interaction. We can use this method after verifying a mock method to make sure that nothing else was invoked on the mock. The following test code demonstrates verifyNoMoreInteractions:

@Test
public void verify_no_more_interaction() {
  Stock noStock = null;
  portfolio.getAvgPrice(noStock);
  portfolio.sell(null, 0);
  verify(portfolio).getAvgPrice(eq(noStock));
  //this will fail as the sell method was invoked
  verifyNoMoreInteractions(portfolio);
}

The following is the JUnit output:

The following are the rationales and examples of argument matchers.

Using argument matcher

ArgumentMatcher is a Hamcrest matcher with a predefined describeTo() method; it extends the org.hamcrest.BaseMatcher class. It verifies the indirect inputs into a mocked dependency. The Matchers.argThat(Matcher) method is used in conjunction with the verify method to verify whether a method is invoked with a specific argument value. ArgumentMatcher plays a key role in mocking. The following section describes the context of ArgumentMatcher.
Mock objects return expected values, but when they need to return different values for different arguments, argument matcher comes into play. Suppose we have a method that takes a player name as input and returns the total number of runs (a run is a point scored in a cricket match) scored as output. We want to stub it and return 100 for Sachin and 10 for xyz. We have to use argument matcher to stub this. Mockito returns expected values when a method is stubbed. If the method takes arguments, the argument must match during the execution; for example, the getValue(int someValue) method is stubbed in the following way: when(mockObject.getValue(1)).thenReturn(expected value); Here, the getValue method is called with mockObject.getValue(100). Then, the parameter doesn't match (it is expected that the method will be called with 1, but at runtime, it encounters 100), so the mock object fails to return the expected value. It will return the default value of the return type—if the return type is Boolean, it'll return false; if the return type is object, then null, and so on. Mockito verifies argument values in natural Java style by using an equals() method. Sometimes, we use argument matchers when extra flexibility is required. Mockito provides built-in matchers such as anyInt(), anyDouble(), anyString(), anyList(), and anyCollection(). More built-in matchers and examples of custom argument matchers or Hamcrest matchers can be found at the following link: http://docs.mockito.googlecode.com/hg/latest/org/mockito/Matchers.html Examples of other matchers are isA(java.lang.Class<T> clazz), any(java.lang.Class<T> clazz), and eq(T) or eq(primitive value). The isA argument checks whether the passed object is an instance of the class type passed in the isA argument. The any(T) argument also works in the same way. Why do we need wildcard matchers? Wildcard matchers are used to verify the indirect inputs to the mocked dependencies. The following example describes the context. In the following code snippet, an object is passed to a method and then a request object is created and passed to service. Now, from a test, if we call the someMethod method and service is a mocked object, then from test, we cannot stub callMethod with a specific request as the request object is local to the someMethod: public void someMethod(Object obj){ Request req = new Request(); req.setValue(obj); Response resp = service.callMethod(req); } If we are using argument matchers, all arguments have to be provided by matchers. We're passing three arguments and all of them are passed using matchers: verify(mock).someMethod(anyInt(), anyString(), eq("third argument")); The following example will fail because the first and the third arguments are not passed using matcher: verify(mock).someMethod(1, anyString(), "third argument");
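To make the cricket-score scenario above concrete, here is a hedged sketch; ScoreService is an assumed interface invented purely for this illustration and is not part of the book's stock example. It uses the built-in eq() matcher together with the statically imported mock, when, and assertEquals methods shown earlier:

interface ScoreService {
  int getTotalRuns(String playerName);
}

@Test
public void different_players_get_different_stubbed_scores() {
  ScoreService scoreService = mock(ScoreService.class);
  when(scoreService.getTotalRuns(eq("Sachin"))).thenReturn(100);
  when(scoreService.getTotalRuns(eq("xyz"))).thenReturn(10);

  assertEquals(100, scoreService.getTotalRuns("Sachin"));
  assertEquals(10, scoreService.getTotalRuns("xyz"));
  //an unstubbed argument falls back to the default value for int, which is 0
  assertEquals(0, scoreService.getTotalRuns("anyone else"));
}

Here eq() matches one specific value per stubbing, which is how the same method can be taught to return 100 for Sachin and 10 for xyz.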
article-image-getting-started-windows-installer-xml-wix
Packt
20 Oct 2010
10 min read
Save for later

Getting Started with Windows Installer XML (WiX)

Packt
20 Oct 2010
10 min read
Introducing Windows Installer XML

In this section, we'll dive right in and talk about what WiX is, where to get it, and why you'd want to use it when building an installation package for your software. We'll follow up with a quick description of the WiX tools and the new project types made available in Visual Studio.

What is WiX?

Although it's the standard technology and has been around for years, creating a Windows Installer, or MSI package, has always been a challenging task. The package is actually a relational database that describes how the various components of an application should be unpacked and copied to the end user's computer. In the past you had two options:

You could try to author the database yourself—a path that requires a thorough knowledge of the Windows Installer API.
You could buy a commercial product like InstallShield to do it for you. These software products will take care of the details, but you'll forever be dependent on them. There will always be parts of the process that are hidden from you.

WiX is relatively new to the scene, offering a route that exists somewhere in the middle. Abstracting away the low-level function calls while still allowing you to write much of the code by hand, WiX is an architecture for building an installer in ways that mere mortals can grasp. Best of all, it's free. As an open source product, it has quickly garnered a wide user base and a dedicated community of developers. Much of this has to do not only with its price tag but also with its simplicity. It can be authored in a simple text editor (such as Notepad) and compiled with the tools provided by WiX. As it's a flavor of XML, it can be read by humans, edited without expensive software, and lends itself to being stored in source control where it can be easily merged and compared. The examples in this article will show how to create a simple installer with WiX using Visual Studio.

Is WiX for you?

To answer the question "Is WiX for you?" we have to answer "What can WiX do for you?" It's fairly simple to copy files to an end user's computer. If that's all your product needs, then the Windows Installer technology might be overkill. However, there are many benefits to creating an installable package for your customers, some of which might be overlooked. Following is a list of features that you get when you author a Windows Installer package with WiX:

All of your executable files can be packaged into one convenient bundle, simplifying deployment.
Your software is automatically registered with Add/Remove Programs.
Windows takes care of uninstalling all of the components that make up your product when the user chooses to do so.
If files for your software are accidentally removed, they can be replaced by right-clicking on the MSI file and selecting Repair.
You can create different versions of your installer and detect which version has been installed.
You can create patches to update only specific areas of your application.
If something goes wrong while installing your software, the end user's computer can be rolled back to a previous state.
You can create Wizard-style dialogs to guide the user through the installation.

Many people today simply expect that your installer will have these features. Not having them could be seen as a real deficit. For example, what is a user supposed to do when they want to uninstall your product but can't find it in the Add/Remove Programs list and there isn't an uninstall shortcut?
They're likely to remove files haphazardly and wonder why you didn't make things easy for them. Maybe you've already figured that Windows Installer is the way to go, but why WiX? One of my favorite reasons is that it gives you greater control over how things work. You get a much finer level of control over the development process. Commercial software that does this for you also produces an MSI file, but hides the details about how it was done. It's analogous to crafting a web site. You get much more control when you write the HTML yourself as opposed to using WYSIWYG software. Even though WiX gives you more control, it doesn't make things overly complex. You'll find that making a simple installer is very straightforward. For more complex projects, the parts can be split up into multiple XML source files to make it easier to work with. Going further, if your product is made up of multiple products that will be installed together as a suite, you can compile the different chunks into libraries that can be merged together into a single MSI. This allows each team to isolate and manage their part of the installation package. WiX is a stable technology, having been first released to the public in 2004, so you don't have to worry about it disappearing. It's also had a steady progression of version releases. The most current version is updated for Windows Installer 4.5 and the next release will include changes for Windows Installer 5.0, which is the version that comes preinstalled with Windows 7. These are just some of the reasons why you might choose to use WiX. Where can I get it? You can download WiX from the Codeplex site, http://wix.codeplex.com/, which has both stable releases and source code. The current release is version 3.0. Once you've downloaded the WiX installer package, double-click it to install it to your local hard drive. This installs all of the necessary files needed to build WiX projects. You'll also get the WiX SDK documentation and the settings for Visual Studio IntelliSense, highlighting and project templates. Version 3 supports Visual Studio 2005 and Visual Studio 2008, Standard edition or higher. WiX comes with the following tools:   Tool What it does Candle.exe Compiles WiX source files (.wxs) into intermediate object files (.wixobj). Light.exe Links and binds .wixobj files to create final .msi file. Also creates cabinet files and embeds streams in MSI database. Lit.exe Creates WiX libraries (.wixlib) that can be linked together by Light. Dark.exe Decompiles an MSI file into WiX code. Heat.exe Creates a WiX source file that specifies components from various inputs. Melt.exe Converts a "merge module" (.msm) into a component group in a WiX source file. Torch.exe Generates a transform file used to create a software patch. Smoke.exe Runs validation checks on an MSI or MSM file. Pyro.exe Creates an patch file (.msp) from .wixmsp and .wixmst files. WixCop.exe Converts version 2 WiX files to version 3. In order to use some of the functionality in WiX, you may need to download a more recent version of Windows Installer. You can check your current version by viewing the help file for msiexec.exe, which is the Windows Installer service. Go to your Start Menu and select Run, type cmd and then type msiexec /? at the prompt. This should bring up a window like the following: If you'd like to install a newer version of Windows Installer, you can get one from the Microsoft Download Center website. Go to: http://www.microsoft.com/downloads/en/default.aspx Search for Windows Installer. 
The current version for Windows XP, Vista, Server 2003, and Server 2008 is 4.5. Windows 7 and Windows Server 2008 R2 can support version 5.0. Each new version is backwards compatible and includes the features from earlier editions. Votive The WiX toolset provides files that update Visual Studio to provide new WiX IntelliSense, syntax highlighting, and project templates. Together these features, which are installed for you along with the other WiX tools, are called Votive. You must have Visual Studio 2005 or 2008 (Standard edition or higher). Votive won't work on the Express versions. If you're using Visual Studio 2005, you may need to install an additional component called ProjectAggregator2. Refer to the WiX site for more information: http://wix.sourceforge.net/votive.html After you've installed WiX, you should see a new category of project types in Visual Studio, labeled under the title WiX. To test it out, open Visual Studio and go to File New | Project|. Select the category WiX. There are six new project templates: WiX Project: Creates a Windows Installer package from one or more WiX source files WiX Library Project: Creates a .wixlib library C# Custom Action Project: Creates a .NET custom action in C# WiX Merge Module Project: Creates a merge module C++ Custom Action Project: Creates an unmanaged C++ custom action VB Custom Action Project: Creates a VB.NET custom action Using these templates is certainly easier than creating them on your own with a text editor. To start creating your own MSI installer, select the template WiX Project. This will create a new .wxs (WiX source file) for you to add XML markup to. Once we've added the necessary markup, you'll be able to build the solution by selecting Build Solution from the Build menu or by right-clicking on the project in the Solution Explorer and selecting Build. Visual Studio will take care of calling candle.exe and light.exe to compile and link your project files. If you right-click on your WiX project in the Solution Explorer and select Properties, you'll see several screens where you can tweak the build process. One thing you'll want to do is set the amount of information that you'd like to see when compiling and linking the project and how non-critical messages are treated. Refer to the following screenshot: Here, we're selecting the level of messages that we'd like to see. To see all warnings and messages, set the Warning Level to Pedantic. You can also check the Verbose output checkbox to get even more information. Checking Treat warnings as errors will cause warning messages that normally would not stop the build to be treated as fatal errors. You can also choose to suppress certain warnings. You'll need to know the specific warning message number, though. If you get a build-time warning, you'll see the warning message, but not the number. One way to get it is to open the WiX source code (available at http://wix.codeplex.com/SourceControl/list/changesets) and view the messages.xml file in the Wix solution. Search the file for the warning and from there you'll see its number. Note that you can suppress warnings but not errors. Another feature of WiX is its ability to run validity checks on the MSI package. Windows Installer uses a suite of tests called Internal Consistency Evaluators (ICEs) for this. These checks ensure that the database as a whole makes sense and that the keys on each table join correctly. Through Votive, you can choose to suppress specific ICE tests. 
Use the Tools Setting page of the project's properties as shown in the following screenshot: In this example, ICE test 102 is being suppressed. You can specify more than one test by separating them with semicolons. To find a full list of ICE tests, go to MSDN's ICE Reference web page at: http://msdn.microsoft.com/en-us/library/aa369206%28VS.85%29.aspx The Tool Settings screen also gives you the ability to add compiler or linker command-line flags. Simply add them to the text boxes at the bottom of the screen.
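To give a feel for the markup you will add to that new .wxs file, here is a minimal, hedged sketch of a product definition; the product name, file name, and GUID placeholders are illustrative and must be replaced with your own values before the project will build:

<?xml version="1.0" encoding="UTF-8"?>
<Wix xmlns="http://schemas.microsoft.com/wix/2006/wi">
  <Product Id="*" Name="MySampleApp" Language="1033" Version="1.0.0.0"
           Manufacturer="My Company" UpgradeCode="PUT-A-GUID-HERE">
    <Package InstallerVersion="200" Compressed="yes" />
    <Media Id="1" Cabinet="product.cab" EmbedCab="yes" />
    <Directory Id="TARGETDIR" Name="SourceDir">
      <Directory Id="ProgramFilesFolder">
        <Directory Id="INSTALLDIR" Name="MySampleApp">
          <Component Id="MainExecutable" Guid="PUT-A-GUID-HERE">
            <File Id="MyAppExe" Source="MySampleApp.exe" KeyPath="yes" />
          </Component>
        </Directory>
      </Directory>
    </Directory>
    <Feature Id="Complete" Title="My Sample App" Level="1">
      <ComponentRef Id="MainExecutable" />
    </Feature>
  </Product>
</Wix>

Building the project in Visual Studio drives candle.exe and light.exe for you; by hand, the equivalent is roughly candle.exe Product.wxs followed by light.exe Product.wixobj -out MySampleApp.msi.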

article-image-introduction-enterprise-business-messages
Packt
27 Feb 2012
4 min read
Save for later

Introduction to Enterprise Business Messages

Packt
27 Feb 2012
4 min read
(For more resources on Oracle, see here.)

Before we jump into the AIA Enterprise Business Message (EBM) standards, let us understand a little more about business messages. In general, a business message is information shared between people, organizations, systems, or processes. Any information communicated to any object in a standard, understandable format is called a message. In the application integration world, various information-sharing approaches are followed, so we need not go through them all again; but in a service-oriented environment, message sharing between systems is the fundamental characteristic. There should be a standard approach followed across an enterprise, so that every existing or new business system can understand and follow the uniform method. XML is a message format widely accepted by all technologies and tools. The Oracle AIA framework provides a standard messaging format to share information between AIA components.

Overview of Enterprise Business Message (EBM)

Enterprise Business Messages (EBMs) are business information exchanged between enterprise business systems as messages. EBMs define the elements that are used to form the messages in service-oriented operations. EBM payloads represent the specific content of an EBO that is required to perform a specific service. In an AIA infrastructure, EBMs are the messages exchanged between all components in the Canonical Layer. An Enterprise Business Service (EBS) accepts an EBM as its request message and responds with an EBM as its output payload. In an Application Business Connector Service (ABCS), however, the provider ABCS accepts messages in the EBM format and translates them into the application provider's Application Business Message (ABM) format. Conversely, the requester ABCS receives an ABM as a request message, transforms it into an EBM, and calls the EBS to submit the EBM message. Therefore, the EBM is a widely-accepted message standard within AIA components. The context-oriented EBMs are built using a set of common components and EBO business components. Some EBMs may require more than one EBO to fulfill the business integration needs. The following diagram describes the role of an EBM in the AIA architecture:

EBM characteristics

The fundamentals of an EBM and its characteristics are as follows:

Each business service request and response should be represented in an EBM format using a unique combination of an action and an EBO instance. One EBM can support only one action or verb.
An EBM component should import the common component to make use of metadata and data types across the EBM structure.
EBMs are application independent. Any requester application that invokes Enterprise Business Services (EBS) through an ABCS should follow the EBM format standards to pass as payload in the integration.
The action that is embedded in the EBM is the only action that the sender or requester application can execute to perform the integration. The action in the EBM may also carry additional data describing what has to be done as part of service execution. For example, the update action may carry information about whether the system should send a notification after a successful update.
The information that exists in the EBM header is common to all EBMs. However, information existing in the data area and the corresponding actions are specific to only one EBM. EBM headers may carry tracking information, auditing information, source and target system information, and error-handling information.
EBM components do not rely on the underlying transport protocol; service protocols such as HTTP, HTTPS, SMTP, SOAP, and JMS can all carry EBM payload documents.

Exploring AIA EBMs

We explored the physical structure of the Oracle AIA EBO in the previous chapter; EBMs do not have a separate structure. EBMs are also part of the EBO's physical package structure, and every EBO is bound to an EBM. The following screenshot shows the physical structure of the EBM groups as directories: As EBOs are grouped into packages based on the business model, EBMs are also a part of that structure and can be located along with the EBO schema under the Core EBO package.
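To make the header/data area split described above concrete, here is a simplified, hypothetical sketch of an EBM payload. The element names and the namespace are illustrative only (they are not the actual Oracle AIA schema definitions), but they show the pattern: a header that is common to every EBM, and a data area carrying exactly one verb plus the EBO content the service needs.

<ebm:CreateCustomerPartyEBM xmlns:ebm="http://example.com/EnterpriseObjects/Core/Custom">
  <ebm:EBMHeader>
    <!-- Common to all EBMs: tracking, auditing, sender/target, and error-handling data -->
    <ebm:Sender>
      <ebm:ID>CRM_01</ebm:ID>
    </ebm:Sender>
    <ebm:Target>
      <ebm:ID>ERP_01</ebm:ID>
    </ebm:Target>
  </ebm:EBMHeader>
  <ebm:DataArea>
    <!-- Exactly one verb (action) per EBM -->
    <ebm:Create/>
    <!-- EBO content required by this specific service -->
    <ebm:CustomerParty>
      <ebm:Name>Acme Corporation</ebm:Name>
    </ebm:CustomerParty>
  </ebm:DataArea>
</ebm:CreateCustomerPartyEBM>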

Setting up GlassFish for JMS and Working with Message Queues

Packt
30 Jul 2010
4 min read
(For more resources on Java, see here.)

Setting up GlassFish for JMS

Before we start writing code to take advantage of the JMS API, we need to configure some GlassFish resources. Specifically, we need to set up a JMS connection factory, a message queue, and a message topic.

Setting up a JMS connection factory

The easiest way to set up a JMS connection factory is via GlassFish's web console. The web console can be accessed by starting our domain with the following command at the command line:

asadmin start-domain domain1

Then point the browser to http://localhost:4848 and log in: A connection factory can be added by expanding the Resources node in the tree at the left-hand side of the web console, expanding the JMS Resources node and clicking on the Connection Factories node, then clicking on the New... button in the main area of the web console. For our purposes, we can take most of the defaults. The only thing we need to do is enter a Pool Name and pick a Resource Type for our connection factory. It is always a good idea to use a Pool Name starting with "jms/" when picking a name for JMS resources. This way, JMS resources can be easily identified when browsing a JNDI tree. In the text field labeled Pool Name, enter jms/GlassFishBookConnectionFactory. Our code examples later in this article will use this JNDI name to obtain a reference to this connection factory. The Resource Type drop-down menu has three options:

javax.jms.TopicConnectionFactory - used to create a connection factory that creates JMS topics for JMS clients using the pub/sub messaging domain
javax.jms.QueueConnectionFactory - used to create a connection factory that creates JMS queues for JMS clients using the PTP messaging domain
javax.jms.ConnectionFactory - used to create a connection factory that creates either JMS topics or JMS queues

For our example, we will select javax.jms.ConnectionFactory. This way we can use the same connection factory for all our examples, both those using the PTP messaging domain and those using the pub/sub messaging domain. After entering the Pool Name for our connection factory, selecting a connection factory type, and optionally entering a description for our connection factory, we must click on the OK button for the changes to take effect. We should then see our newly created connection factory listed in the main area of the GlassFish web console.

Setting up a JMS message queue

A JMS message queue can be added by expanding the Resources node in the tree at the left-hand side of the web console, expanding the JMS Resources node and clicking on the Destination Resources node, then clicking on the New... button in the main area of the web console. In our example, the JNDI name of the message queue is jms/GlassFishBookQueue. The resource type for message queues must be javax.jms.Queue. Additionally, a Physical Destination Name must be entered. In this example, we use GlassFishBookQueue as the value for this field. After clicking on the New... button, entering the appropriate information for our message queue, and clicking on the OK button, we should see the newly created queue:

Setting up a JMS message topic

Setting up a JMS message topic in GlassFish is very similar to setting up a message queue. In the GlassFish web console, expand the Resources node in the tree at the left-hand side, then expand the JMS Resources node and click on the Destination Resources node, then click on the New... button in the main area of the web console. Our examples will use a JNDI Name of jms/GlassFishBookTopic.
As this is a message topic, Resource Type must be javax.jms.Topic. The Description field is optional. The Physical Destination Name property is required. For our example, we will use GlassFishBookTopic as the value for this property. After clicking on the OK button, we can see our newly created message topic: Now that we have set up a connection factory, a message queue, and a message topic, we are ready to start writing code using the JMS API.
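Before moving on, here is a minimal sketch of the kind of client code the rest of the article builds toward: a standalone Java class that looks up the connection factory and queue we just created by their JNDI names and sends a single text message. The class name and message text are my own, and the example assumes the JNDI lookups resolve against your GlassFish instance (for example, when run through the appclient container); your environment may need an explicitly configured InitialContext.

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.naming.InitialContext;

public class MessageSender {
    public static void main(String[] args) throws Exception {
        // Look up the administered objects created in the GlassFish web console
        InitialContext ctx = new InitialContext();
        ConnectionFactory connectionFactory =
                (ConnectionFactory) ctx.lookup("jms/GlassFishBookConnectionFactory");
        Queue queue = (Queue) ctx.lookup("jms/GlassFishBookQueue");

        // Standard JMS 1.1 plumbing: connection, then session, then producer
        Connection connection = connectionFactory.createConnection();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageProducer producer = session.createProducer(queue);

        TextMessage message = session.createTextMessage("Hello, GlassFish JMS!");
        producer.send(message);
        System.out.println("Message sent to jms/GlassFishBookQueue");

        // Always release JMS resources when done
        producer.close();
        session.close();
        connection.close();
    }
}

Because we chose the generic javax.jms.ConnectionFactory resource type, the same connection factory and the same pattern also work for the pub/sub examples; looking up jms/GlassFishBookTopic instead of the queue is enough to switch messaging domains.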

Setting Up Python Development Environment on Mac OS X

Packt
19 Mar 2010
12 min read
I decided to use the /usr/local directory as the destination and compile everything from source. Some people favor binaries. However, binary distributions are often pre-packaged and end up in some sort of installer; they could contain certain things that we dislike, and so on. This is also the case with Python on Mac OS X. You can download a binary distribution from the official Python website (and you are in fact encouraged to do so, if using OS X), which suffers from exactly this kind of problem. It comes with an installer and installs some stuff outside of the /usr/local directory, which we don't need. It may be useful to Cocoa developers who also deal with Python code, as it eases the installation of PyObjC (a bridge between Python and Objective-C) later on. But we don't need that either. We will end up with a pure, lean, and mean installation of Python and some supportive applications.

Background

First, a bit of background. You run Mac OS X, one of the finest operating systems available to date. It's not just the looks; there is fine machinery under the hood that makes it so good. OS X is essentially a UNIX system, more specifically BSD UNIX. It contains large chunks of FreeBSD code, which is a good thing, because FreeBSD is a very robust, high-quality product. On top of that, there is a beautiful Aqua interface for you to enjoy. For some people, this combination is the best of both worlds. Because of the UNIX background, you get many benefits as well. For instance, you have quite a collection of useful UNIX tools available for you to use. In fact, Apple even ships Python out of the box! You may well ask yourself: why do I then need to install and set it up? Good question. It depends. OS X is a commercial product, it takes some time from release to release, and because of that the Python version shipped with the current OS X (10.4) is a bit old. Maybe that is not an issue for you, but for some people it is. We will focus on getting and installing the newest Python version, as that brings some other benefits we will mention later. Remember that /usr/local thing from the beginning? And what's up with that "safe haven" talk? Well, there is a thing called the Filesystem Hierarchy Standard, or FHS. The FHS sets up some basic conventions for use by UNIX and UNIX-like systems when mapping out a filesystem. Mac OS X breaks it in some places (as do many other UNIX variants), but most systems respect it. The FHS defines /usr/local as the "tertiary hierarchy for local data installed by the system administrator", which basically means that it is the safe, standard place to put your own custom-compiled programs. Using the /usr/local directory for this purpose is important for many reasons, but there is one that is most critical: system updates. System updates are the automatic mechanism operating systems use to deliver newer versions of software to their users. These new software pieces are then installed in their usual locations, often with brute force, regardless of what was there before. So, for instance, if you had modified or installed a newer version of some important system software, the Software Update process would overwrite it, and your changes would be lost. To overcome this problem, we will install all of our custom software in this safe place, the /usr/local directory.

Getting to Work

At last, the fun part (or so I hope).

Requirements

First, some prerequisites.
You will need the following to get going:

Mac OS X 10.4
XCode 2.4 or newer (this contains the necessary compilers)

XCode is not installed by default on new Macs, but it can be obtained from the Mac OS X install DVD or from the Apple Developer Connection for free.

Strategy

As you might have guessed from the previous discussion, I decided to use the /usr/local directory as the destination and compile everything from source. An additional benefit of compiling from source is that we can look through the actual source code and audit or modify it before we actually install it. I will focus on a Python installation that is web-development oriented. You will end up with a basic set of tools which you can use to build database-driven websites powered by the Python scripting language. Let's begin, shall we?

Using /usr/local

In order for all this to work, we will have to make some slight adjustments. For the system to see our custom Python installation, we will have to set the path so that /usr/local is searched first. Mac OS X, like other UNIX systems, uses a "path" to determine where it should look for UNIX applications. The path is just an environment variable that is set (if you configure it) each time you open a new Terminal window. To set up the path, either create or edit the file .bash_login (notice the dot; it's a hidden file) in your home directory using a text editor. I recommend the following native OS X text editors: TextMate, BBEdit, or TextWrangler, and the following UNIX editors: Emacs or vi(m). To edit the file with TextMate, for example, fire up the Terminal and type:

mate ~/.bash_login

This will open the file with TextMate. Now, add the following line at the end of the file:

export PATH="/usr/local/bin:$PATH"

After you save and close the file, apply the changes (from the terminal) with the following command:

. ~/.bash_login

While we're at it, we could just as well (using the previous method) add the following line to make the Terminal UTF-8 aware:

export LC_CTYPE=en_US.UTF-8

In general, you should be using UTF-8 anyway, so this is just a bonus. It is even required for some things to work; Subversion, for example, has problems if this isn't set.

Setting Up the Working Directory

It's nice to have a working directory where you will download all the source files and can come back to later. We'll create a directory called src in the /usr/local directory:

sudo mkdir -p /usr/local/src
sudo chgrp admin /usr/local/src
sudo chmod -R 775 /usr/local/src
cd /usr/local/src

Notice the sudo command. It means "superuser do" or "substitute user and do". It will ask you for your password; just enter it when asked. We are now in this new working directory and will download and compile everything here.
Python

Finally, we are all set up for the actual work. Just enter all the following commands correctly and you should be good to go. We are starting off with Python, but to compile Python properly we will first install some prerequisites, such as readline and SQLite. Technically, SQLite isn't required, but it is necessary to compile it first so that later on Python picks up its libraries and makes use of them. One of the new things in the newest Python 2.5 is a native SQLite database driver, so we will kill two birds with one stone ;-).

curl -O ftp://ftp.cwru.edu/pub/bash/readline-5.2.tar.gz
tar -xzvf readline-5.2.tar.gz
cd readline-5.2
./configure --prefix=/usr/local
make
sudo make install
cd ..

If you get an error about no acceptable C compiler, then you haven't installed XCode. We can now proceed with the SQLite installation.

curl -O http://www.sqlite.org/sqlite-3.3.13.tar.gz
tar -xzvf sqlite-3.3.13.tar.gz
cd sqlite-3.3.13
./configure --prefix=/usr/local --with-readline-dir=/usr/local
make
sudo make install
cd ..

Finally, we can download and install Python itself.

curl -O http://www.python.org/ftp/python/2.5/Python-2.5.tgz
tar -xzvf Python-2.5.tgz
cd Python-2.5
./configure --prefix=/usr/local --with-readline-dir=/usr/local --with-sqlite3=/usr/local
make
sudo make install
cd ..

This should leave us with the core Python and SQLite installation. We can verify this by issuing the following commands:

python -V
sqlite3 -version

Those commands should report the new version numbers we just compiled (2.5 for Python and 3.3.13 for SQLite). Do the happy dance now! Before we get too excited, we should also verify that they are properly linked together by entering the interactive Python interpreter and issuing a few commands (don't type ">>>"; it is shown here for illustrative purposes, because you also get it in the interpreter):

python
>>> import sqlite3
>>> sqlite3.sqlite_version
'3.3.13'

Press C-D (that's CTRL + D) to exit the interactive Python interpreter. If your session looks like the one above, we're all set. If you get some error about missing modules, that means something is not right. Did you follow all the steps mentioned above? We now have Python and SQLite installed. The rest is up to you. Do you want to program sites in Django, CherryPy, Pylons, TurboGears, web.py, and so on? Just install the web framework you are interested in. Need any additional modules, like Beautiful Soup for parsing HTML? Just go ahead and install them. For development needs, all the frameworks I tried come with a suitable development server, so you don't need to install any web server to get started. CherryPy, in addition, even comes with a great production-ready WSGI web server. Also, for all your database needs, I find SQLite more than adequate while in development mode. I even find it more than enough for some live sites. It's a great little zero-configuration database. If you have bigger needs, it's easy to switch to some other database on the production server (you are planning to use some database abstraction layer, aren't you?). For completeness' sake, let's pretend you're going to develop sites with CherryPy as the web framework, SQLite as the database, SQLAlchemy as the database abstraction layer (toolkit, ORM), and Mako for templates. So, we are missing CherryPy, SQLAlchemy, and Mako.
Let's get them while they're hot:

cd /usr/local/src
curl -O http://download.cherrypy.org/cherrypy/3.0.1/CherryPy-3.0.1.tar.gz
tar -xzvf CherryPy-3.0.1.tar.gz
cd CherryPy-3.0.1
sudo python setup.py install
cd ..
curl -O http://cheeseshop.python.org/packages/source/S/SQLAlchemy/SQLAlchemy-0.3.5.tar.gz
tar -xzvf SQLAlchemy-0.3.5.tar.gz
cd SQLAlchemy-0.3.5
sudo python setup.py install
cd ..
curl -O http://www.makotemplates.org/downloads/Mako-0.1.4.tar.gz
tar -xzvf Mako-0.1.4.tar.gz
cd Mako-0.1.4
sudo python setup.py install
cd ..

Do the happy dance again! This same pattern applies to many other Python web frameworks and modules. What have we just achieved? Well, we now have an "invisible" Python web development environment which is clean, fast, self-contained, and sitting in a safe place. Combine it with TextMate (or any other text editor you like) and you will have some serious good times. Again, for even more completeness, we will cover Subversion. Subversion is a version control system. Sounds exciting, eh? Actually, it's a very powerful and sane thing to learn and use. But I'm not covering it because of actual version control; rather, because many software projects use it, you will sometimes need to check out (download your own local copy of) some project's code. For example, the Django project uses it, and its development version is often better than the actual released "stable" version. So, the only way of having (and keeping up with) the development version is to use Subversion to obtain it and keep it updated. All you usually need to do in order to obtain the latest revision of some software is to issue the following command (example for Django):

svn co http://code.djangoproject.com/svn/django/trunk/ django_src

Here are the steps to download and compile Subversion:

curl -O http://subversion.tigris.org/downloads/subversion-1.4.3.tar.gz
curl -O http://subversion.tigris.org/downloads/subversion-deps-1.4.3.tar.gz
tar -xzvf subversion-1.4.3.tar.gz
tar -xzvf subversion-deps-1.4.3.tar.gz
cd subversion-1.4.3
./configure --prefix=/usr/local --with-openssl --with-ssl --with-zlib
make
sudo make install
cd ..

However, even on moderately recent computer hardware, Subversion can take a long time to compile. If that's the case and you don't want to compile it, or you simply use it from time to time to do some checkouts, you may prefer to download a pre-compiled binary. I know what I said about binaries before, but there is a very fine one over at Martin Ott's site. It's packaged as a standard Mac OS X installer, and it installs just where it should, in the /usr/local directory. When speaking about version control, I'm more of a decentralized version control person. I really like Mercurial: it's fast, small, and lightweight, but it also scales fairly well for more demanding scenarios. And guess what, it's also written in Python. So, go ahead, install it too, and start writing those nice Python-powered websites! That would be all from me today. While I provided the exact steps for you to follow, that doesn't mean that you should pick the same components. These days (coming from a Django background), I'm learning Pylons, Mako, SQLAlchemy, Elixir, and a couple of other components. It makes sense currently, as Pylons is strongly built around WSGI compliance and philosophy, which makes the components more reusable and should make it easier to switch to or from any other Python WSGI-centric framework in the future. Good luck!

Team Project Setup

Packt
29 Oct 2015
5 min read
In this article by Tarun Arora and Ahmed Al-Asaad, authors of the book Microsoft Team Foundation Server Cookbook, we will cover the following:

Using Team Explorer to connect to Team Foundation Server
Creating and setting up a new Team Project for a scrum team

(For more resources related to this topic, see here.)

Microsoft Visual Studio Team Foundation Server 2015 is the backbone of Microsoft's Application Lifecycle Management (ALM) solution, providing core services such as version control, work item tracking, reporting, and automated builds. Team Foundation Server helps organizations communicate and collaborate more effectively throughout the process of designing, building, testing, and deploying software, ultimately leading to increased productivity and team output, improved quality, and greater visibility into the application life cycle. Team Foundation Server is Microsoft's on-premises application lifecycle management offering; Visual Studio Online is a collection of developer services that runs on Microsoft Azure and extends the development experience into the cloud. Team Foundation Server is very flexible and supports a broad spectrum of topologies. While a simple one-machine setup may suffice for small teams, you'll see enterprises using scaled-out, complex topologies. You'll find that TFS topologies are largely driven by the scope and scale of its use in an organization. Ensure that you have the details of your Team Foundation Server handy. Please refer to the Microsoft Visual Studio Licensing guide, available at the following link, to learn about the license requirements for Team Foundation Server: http://www.microsoft.com/en-gb/download/details.aspx?id=13350.

Using Team Explorer to connect to Team Foundation Server 2015 and GitHub

To build, plan, and track your software development project using Team Foundation Server, you'll need to connect the client of your choice to Team Foundation Server. In this recipe, we'll focus on connecting Team Explorer to Team Foundation Server.

Getting ready

Team Explorer is installed with each version of Visual Studio; alternatively, you can install Team Explorer from the Microsoft download center as a standalone client. When you start Visual Studio for the first time, you'll be asked to sign in with a Microsoft account, such as Live or Hotmail, and provide some basic registration information. You should choose a Microsoft account that best represents you. If you already have an MSDN account, it's recommended that you sign in with its associated Microsoft account. If you don't have a Microsoft account, you can create one for free. Logging in is advisable, not mandatory.

How to do it...

Open Visual Studio 2015.
Click on the Team Toolbar and select Connect to Team Foundation Server.
In Team Explorer, click on Select Team Projects....
In the Connect to Team Foundation Server form, the dropdown shows a list of all the TFS servers you have connected to before. If you can't see the server you want to connect to in the dropdown, click on Servers to enter the details of the Team Foundation Server.
Click on Add, enter the details of your TFS server, and then click on OK. You may be required to enter login details to authenticate against the TFS server.
Click Close on the Add/Remove Team Foundation Server form. You should now see the details of your server in the Connect to Team Foundation Server form. At the bottom left, you'll see the user ID being used to establish this connection.
Click on Connect to complete the connection; this will navigate you back to Team Explorer. At this point, you have successfully connected Team Explorer to Team Foundation Server.

Creating and setting up a new Team Project for a Scrum Team

Software projects require a logical container to store project artifacts such as work items, code, builds, releases, and documents. In Team Foundation Server, this logical container is referred to as a Team Project. Different teams follow different processes to organize, manage, and track their work. Team Projects can be customized to specific project delivery frameworks through process templates. This recipe explains how to create a new Team Project for a scrum team in Team Foundation Server.

Getting ready

The new Team Project creation action needs to be triggered from Team Explorer. Before you can create a new Team Project, you need to connect Team Explorer to Team Foundation Server. The recipe Connecting Team Explorer to Team Foundation Server explains how this can be done. In order to create a new Team Project, you will need the following permissions:

You must have the Create new projects permission on the TFS application tier. This permission is granted by adding users to the Project Collection Administrators TFS group. The Team Foundation Administrators global group also includes this permission.
You must have the Create new team sites permission within the SharePoint site collection that corresponds to the TFS team project collection. This permission is granted by adding the user to a SharePoint group with Full Control rights on the SharePoint site collection.
In order to use the SQL Server Reporting Services features, you must be a member of the Team Foundation Content Manager role in Reporting Services.

To verify whether you have the correct permissions, you can download the Team Foundation Server Administration Tool from CodePlex, available at https://tfsadmin.codeplex.com/. TFS Admin is an open source tool available under the Microsoft Public License (Ms-PL).

Summary

In this article, we have looked at setting up a Team Project in Team Foundation Server 2015. We started off by connecting Team Explorer to Team Foundation Server and GitHub. We then looked at creating a team project and setting up a scrum team.

Resources for Article:

Further resources on this subject:

Introducing Liferay for Your Intranet [article]
Preparing our Solution [article]
Work Item Querying [article]